Please see the disclaimer.

Introduction

Paul Graham has an essay called The Hundred-Year Language in which he hypothesizes about what programming languages will look like in 100 years. He also concludes that trying to build the Hundred-Year Language is feasible, and useful enough to attempt now.

I agree. If it weren’t already obvious, that is what I have been trying to do with Yao.

I would like to explain a little bit of my thought process in the hope that the decisions I have made with Yao will start to make sense.

Evolution of Languages

The first thing that Paul says to do is to look at language families and figure out which ones are evolutionary dead ends. For some, that’s easy, but for others, that is really hard.

For example, I do think that Paul is right that Java, just like COBOL, is an evolutionary dead end. But why? It uses a syntax that is still popular among ALGOL descendants, so why is it an evolutionary dead end?

I think it’s because Java was built on the bad idea that everything is an object, not because of the syntax.

On the other hand, despite a vocal minority who praise Lisp and its descendants to no end, the Lisp family also seems to be a dead end; this time, the problem is not the ideas behind it (functional programming, extensibility, etc.) but the syntax.

In my opinion, the world seems to have settled on an ALGOL- and C-like syntax. Programmers have also shown that they like the ideas of functional programming and extensibility, but they prefer the imperative style as the default.

Okay, so it seems the world is converging on a certain syntax and certain ideas. Should be pretty easy from here on out.

Not so fast.

Sure, a lot of the ideas can work well together. It’s even possible to make imperative programming feel like functional programming; we have seen that with Rust and Julia. However, in newer languages like Odin and Jai, the focus seems to be on control, which seems hard to reconcile with some of the ideas of functional programming.

Evolving One Language

So where do we start?

I think it’s safe to say that if we could have the syntax and control of C-like languages with the good ideas of functional programming, we could have the Next Big Language, and maybe even the Hundred-Year Language.

But wait, who says that, if we build the Hundred-Year Language now, it will be in use in 100 years? Won’t programming progress far beyond what we can do now?

The answer is yes.

But then we have a dilemma: if our language isn’t going to be used in 100 years anyway, why build it? Is there some way we can build it to make sure it’s still in use in 100 years?

The answer to that is also yes, but it’s not obvious why unless you have read or listened to one of the greatest computer science talks ever given. It’s called Growing a Language, and it was given by Guy L. Steele, Jr. at OOPSLA 1998.

This is one of those talks that you have to listen to in order to get the gist, so I embedded it below, but it also helps to have the transcript.

“I need to design a language that can grow.”

— Guy L. Steele, Jr.

The first time I heard those words nearly a decade ago, I knew that they were among the truest I had ever heard in computer science.

But how does that quote answer the question from earlier? Can we really build a language that is still in use 100 years from now?

It seems hard because programmers 100 years from now will probably need different things from their languages than we do, and how can we predict what they will need?

The answer: if we build a language that can grow, we don’t have to worry about giving the language everything it needs for 100 years; we can just let it grow to include all of the things it needs.

In essence, we will design a language to evolve with the times.

Lisp

Have you wondered why Lisp is so loved, despite its terrible syntax? It’s because Lisp is the ultimate example of a language that can grow, an evolvable language. And besides functional programming, that is the top thing we have to take from Lisp to make the Hundred-Year Language.

But we have a problem: Lisp is an evolvable language because of its awful syntax, and we can’t use the syntax.

Is there any way to have the evolvability of Lisp with a better syntax? I think so.

Imperative Evolvability

Actually, there are already examples of evolvability in imperative languages.

First, any programming language worth its salt will allow programmers to define new types and new functions/procedures. On top of that, Rust and Julia have some form of macros, just like Lisp.

Macros are what make Lisp the most evolvable language.

So why are imperative languages, especially Rust and Julia, not as evolvable as Lisp?

I think Guy Steele got it right when he said,

In Lisp, new words defined by the user look like primitives and, what is more, all primitives look like words defined by the user! In other words, if a user has good taste in defining new words, what comes out is a larger language that has no seams.

In Rust and Julia, macros have special syntax; they do not look like primitives, which means that they are not seamless.

And not only that, you cannot define new primitives!

What are primitives in imperative languages? They are anything that requires special handling by the compiler or interpreter.

In a typical imperative language, this can be anything from keywords to the syntax for strings and numbers.

And in order for the language to be as evolvable as Lisp, a user must be able to add a new primitive of any kind, including primitives that we haven’t even thought about!

This seems like a tall order, but we can take a clue from two very opposite places: Lisp and Jai.

First, one of Lisp’s nicknames is “the programmable programming language.”

Second, soon after Jonathan Blow announced his new programming language, Jai, he did a demo in which the compiler could be controlled at compile time by build code.

He did this to avoid needing a separate language for builds, but it sparked an idea in my head when I first saw it.

What if the compiler were a library that the programmer calls to build a program, one that exposes the lexer, parser, and code generator for direct manipulation?

I won’t go through the details (I have thought about this for far too long), but if build code can completely manipulate the compiler, then user code can not only define new instances of existing primitive types and create entirely new primitives, it can even switch the syntax on the fly!
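To make the compiler-as-a-library idea concrete, here is a minimal Python sketch. Everything in it (the `Compiler` class, `add_literal`, `eval_literal`) is a hypothetical illustration of mine, not Yao’s real interface: build code gets a compiler object and teaches its lexer a brand-new literal syntax before compilation, so the new primitive is indistinguishable from a built-in one.

```python
import re

class Compiler:
    """A toy compiler-as-a-library: build code registers new token rules,
    effectively adding new primitives to the language."""

    def __init__(self):
        # Each rule: (regex, handler). The handler turns matched text into a value.
        self.token_rules = [
            (re.compile(r"\d+"), int),  # built-in integer literal
        ]

    def add_literal(self, pattern, handler):
        # Build code calls this to teach the lexer a new literal syntax.
        # New rules go first so they take precedence over built-ins.
        self.token_rules.insert(0, (re.compile(pattern), handler))

    def eval_literal(self, source):
        # Stand-in for lexing + evaluating a single literal token.
        for regex, handler in self.token_rules:
            if regex.fullmatch(source):
                return handler(source)
        raise SyntaxError(f"no rule matches {source!r}")

def parse_complex(text):
    # Handler for a new complex-number literal such as 3+4i.
    real, imag = map(int, re.fullmatch(r"(\d+)\+(\d+)i", text).groups())
    return complex(real, imag)

# The "build script": extend the language with a complex-literal primitive.
compiler = Compiler()
compiler.add_literal(r"\d+\+\d+i", parse_complex)

print(compiler.eval_literal("42"))    # existing primitive still works: 42
print(compiler.eval_literal("3+4i"))  # the new primitive looks like a built-in
```

In a real design the same hook would extend the parser and code generator too; the point here is only that user code calls into the compiler, rather than the compiler being a sealed black box.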

This means that the language itself doesn’t even have to be tied to its syntax!

The Language Is a Library

But if new primitives can be defined in user code, then the entire language can be built as a library.

And that is exactly how Yao has been designed: the language is not defined by syntax or by keywords; it is defined in the standard library, in build scripts that are themselves written in Yao code.

Rest assured that the documentation for the language will read like the documentation for ordinary languages, with the exception of the documentation for advanced users. Beginners will not be exposed to these details.

I did this deliberately, for several reasons. First, it will make it easy for me to evolve the language quickly in its early stages to find a syntax that works best. Second, I can easily evolve it later. Third, users can grow the language to serve needs that I have not even anticipated!

Custom Optimizations

But that’s not all.

Because the entire compiler is programmable, that means that the user can add new optimizations that are useful for just their code.

Why is this useful?

The ideas and examples here come from “Death of Optimizing Compilers” by Daniel J. Bernstein.

Say you are researching a new algorithm for machine learning. You want to prove that it can be faster than anything else on the market. However, when you hand-optimize the C code for it, you have to make assumptions about the layout, eviction policies, and other semantics of the CPU caches.

As a result, it only works best when the caches fit those assumptions.

But say that you could write a program that, given the specs of the caches on a particular chip, could optimize your algorithm for that chip. Then you could ensure that your algorithm is as fast as possible on any piece of hardware.
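Here is a minimal Python sketch of that idea, under my own simplified model (the function name and the formula are my illustration, not taken from Bernstein’s talk): given one cache parameter, the L1 data cache size, a build-time helper picks a tile size for a blocked matrix multiply so that three tiles (two inputs and one output) fit in cache at once.

```python
import math

def pick_tile(l1_bytes, elem_bytes=8):
    """Pick the largest square tile size b such that three b*b tiles of
    elem_bytes-sized elements fit in an L1 data cache of l1_bytes."""
    b = int(math.sqrt(l1_bytes / (3 * elem_bytes)))
    return max(b, 1)

# The same build script tunes the same algorithm for different chips:
print(pick_tile(32 * 1024))   # tile size for a typical 32 KiB L1d
print(pick_tile(128 * 1024))  # larger tiles for a 128 KiB L1d
```

In Yao, a computation like this would run inside the build script, so the shipped binary is tuned to the machine it was compiled for rather than to whatever assumptions the author baked in by hand.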

That will be no fantasy in Yao; it will be commonplace. Software will ship with any special optimizations that make sense for that software, and performance will just be better.

This is how Yao will be Faster than C.

Conclusion

My colleagues and others that I interact with on IRC have had trouble understanding some of the things I have said about Yao. I hope that when they read this post, it will start to make more sense.

Regardless, it’s time for a programmable programming language that is better than Lisp.

Once it exists, I think it will be the Hundred-Year Language.