JavaScript was never meant to be the most important programming language
in the world. It was hacked together in ten days, with ideas from Scheme
and Self packed into a C-like syntax. Even its name was an awkward fit,
referring to Java, a language with which it shares little besides a few keywords.1 But
once JavaScript was released, there was no controlling it. As the only language
understood by all major browsers, JavaScript quickly became the
lingua franca of the Web. And with the introduction of Ajax in the early
2000s, what began as a humble scripting language for enhancing web pages
suddenly became a full-fledged rich application development language.
As JavaScript’s star rose, discontent came from all corners. Some pointed
to its numerous little quirks and inconsistencies.2 Others complained about
its lack of classes and inheritance. And a new generation of coders, who
had cut their teeth on Ruby and Python, was stymied by its thickets of
curly braces, parentheses, and semicolons.
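For a flavor of the quirks in question, here are a few classics that any JavaScript console will confirm (the examples are mine, chosen as representative rather than exhaustive):

    typeof null;        // "object" -- a historical accident preserved for compatibility
    0.1 + 0.2 === 0.3;  // false -- binary floating point, though hardly unique to JavaScript
    "2" + 1;            // "21" -- + concatenates when either operand is a string
    "2" - 1;            // 1 -- yet - always coerces its operands to numbers
    NaN === NaN;        // false -- NaN compares unequal even to itself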
A brave few created frameworks for web application development that generated
JavaScript code from other languages, notably Google’s GWT and
280 North’s Objective-J. But most programmers balked at adding a thick layer
of abstraction between themselves and the browser. No, they would press
on, dealing with JavaScript’s flaws by limiting themselves to “the good parts”
(as Douglas Crockford dubbed them in his similarly titled 2008 book).