The Evolution of AngularJS and Angular

When I first came across AngularJS, I found it fairly confusing - not in a "how do I use this thing" or even a "how do I accomplish X in Angular" sort of way, but more of a "what is this thing even for? How is Angular better than... not Angular?" way. I wasn't trying to figure out why Angular is better than, say, React, or Boost, or jQuery, but why Angular is better than nothing at all. What does it actually bring to the table?

Maybe it seemed so obvious to the original Angular developers and documenters that it didn't bear spelling out explicitly, but once I realized that Angular was a framework for the design and development of single-page web applications, the reason it was set up the way it is immediately clicked into place. I'd been around long enough to have completely internalized the workflow and design of multi-page web applications, so it took a bit of a paradigm shift for me to get used to thinking in terms of Angular and "thin client" web pages. So what was wrong with those multi-page web applications that we needed a new framework and a new way of thinking to address?

There was a time when even multi-page "applications" were something of an oddity on the web. Although the Internet — the global computer network that sprang from its predecessor, the ARPANET — had been around since at least 1980 (and arguably even before then), it took Tim Berners-Lee's creation, which he called the World Wide Web, to really kick-start the internet phenomenon that fundamentally changed civilization (for the better?). The original World Wide Web, though, codified by Berners-Lee's Hypertext Transfer Protocol (HTTP), was much more modest. If you look over the first HTTP specification, its bias toward document display and retrieval is evident: it was originally conceived as a way to share nuclear research documentation which contained links to other documents. As such, the protocol and the software that supported it were designed around a workflow where the user would download a document, spend some time reading it, and then click a link in that document to bring up a second, entirely different document. There were no provisions in the first HTTP specification for user interaction of any kind. The earliest web sites didn't keep much track of different users, either — if you're looking at nuclear research documentation, the only reason to keep track of who you are is to make sure that you're authorized to view it; there was no expectation or provision for the possibility that what you saw might need to be different from what another user saw.

A lot of people started to realize the potential benefit of permitting limited two-way communication through the HTTP protocol. The earliest browser/server interaction that was allowed on the web was form posting. User input forms could be rendered with interactive text input boxes, checkboxes, radio buttons and multi-selects and, when the user was happy with the submission, a submit button would cause the contents of the filled-out form to be transmitted to the server — presumably to cause some action to be taken on behalf of the user, such as an online purchase.

Form handling turned out to be onerous, to say the least. Date inputs were inconsistent. There was no concept of validation, so if the user omitted some required detail, it was up to the server (after the user had pressed the submit button) to recognize this and redisplay the same form, with an appropriate error message. There was no standardization in this area — sometimes an error message would appear at the top of the page, sometimes it would be attached to the input control, and sometimes it would be omitted entirely. If the user was lucky, the form would be re-populated with the values that were valid, but getting that right required some relatively tricky programming, and the web development languages of the time such as PHP and ColdFusion didn't do much to make things easy in this department. Even if everything went right, the user experience left a bit to be desired - depending on the connection speed, the user could end up staring at a blank screen for an indeterminate amount of time, wondering if anything had gone wrong.

Marc Andreessen formed the Netscape Corporation in 1994 to develop a commercial-quality browser based on his prior Mosaic browser. Among the many contributions that Netscape made to the web (which included the specification of SSL) was the inclusion of a simple scripting language named LiveScript. In an effort to confuse everybody for all of time, the marketing people at Netscape decided at the last minute to rename LiveScript "JavaScript" because Sun's Java programming language was trending in popularity, in spite of the fact that Java and Javascript had no relationship with one another whatsoever.

Although a full-fledged programming language, LiveScript/Javascript was originally designed around relatively simple user interactions — specifically, doing client-side form validations. Form input elements could have action handlers attached to them and specified in Javascript; if a field was supposed to be formatted as a date, for example, the Javascript could check the input and warn the user that the date was malformed right away. Actually using Javascript to manipulate web pages in any sort of sophisticated way, though, involved heavy use of the Document Object Model (DOM) API, which was verbose and unwieldy. Function calls like getElementById or getElementsByTagName were used to identify elements to be manipulated, and new HTML was introduced dynamically by non-standard invocations like innerHTML — which meant that the HTML would change radically, and unpredictably, while the user was interacting with it, leading to some hard-to-debug scenarios. Factor in lack of standardization and browser incompatibilities, and anything sophisticated became a major undertaking — Javascript's mostly imperative programming paradigm didn't help much either.
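To give a feel for that verbosity, here's a minimal sketch of the pre-framework DOM style (the element ids and the addToCart function are hypothetical). A tiny stub stands in for the browser's document object so the snippet runs standalone; in a real page these would be the native DOM calls.

```javascript
// Stub "DOM": in a browser, document.getElementById would look these
// up from the live page instead.
var elements = {
  "status": { innerHTML: "" },
  "cart-count": { innerHTML: "0" }
};
var document = {
  getElementById: function (id) { return elements[id]; }
};

// Every update means looking an element up by id and splicing raw HTML
// into it -- the page's markup changes out from under the user as the
// script runs.
function addToCart(itemName, count) {
  var status = document.getElementById("status");
  status.innerHTML = "<b>Added " + itemName + " to your cart</b>";
  var counter = document.getElementById("cart-count");
  counter.innerHTML = String(count);
}

addToCart("widget", 3);
console.log(elements["status"].innerHTML);
console.log(elements["cart-count"].innerHTML);
```

Multiply this pattern by every interactive element on a page, add per-browser quirks in the real getElementById and innerHTML behavior, and the maintenance burden becomes clear.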

And of course, nothing about Javascript changed the fact that the server side, which was maintained in a different programming language entirely, was ultimately responsible for form processing and page rendering. Even if the client went through and validated everything, the server had to revalidate and, potentially, redisplay the form with validation messages. In fact, Javascript was notoriously insecure in its first few iterations, and even when it was trustworthy from a security standpoint, a lot of poorly written Javascript could render a page unusable — so a lot of users just disabled it entirely (I remember developing web sites as recently as the early 2000s under the assumption that Javascript might not actually be available). The frameworks for server-side development at the time were more focused on maintaining the illusion of statefulness through cookies or URL rewriting than on simplifying form handling.

A few server-side form-handling/validation frameworks began to gain some mindshare. One of the first such frameworks I had the, er, "pleasure" of working with was Apache Struts, which added some form-handling, um, "simplification" to the Java Servlet standard. In order to develop an interactive web site such as an online shopping portal using a server-side toolchain like Java, PHP, or ColdFusion, you'd write some code that was responsible for responding to HTTP GET requests and dynamically generating HTML to be rendered in the browser. Sun's Java servlet specification addressed the problem of adding state to HTTP so that if a user, say, added an item to their virtual shopping cart, that item would still appear in the next HTTP request, but it didn't do anything to simplify form rendering or validation. One of the goals of the Struts framework was to simplify the tying together of various web pages and keep track of user input so that, if the filled-out form was rejected for any reason (maybe an out-of-stock item), the user's original input was retained and any rejected inputs could easily be flagged and brought to the user's attention. Given the state of the art at the time, Struts was actually a pretty decent solution to the problems it was meant to address, but it also created a maintenance headache in the form of a LOT of metadata in hard-to-distinguish XML files without much in the way of feedback if you put an element in the wrong place.

None of this addressed the fundamental user-experience gap, though: the user still had to wait for a full round trip to find out whether they had submitted a valid form or not. Multiple developers independently converged on the idea of what's come to be known as a single-page app: all of the markup and forms that make up the application — which used to be a collection of independent pages — are sent to the user once, and whenever the user wants to transmit data to the server, Asynchronous Javascript and XML (AJAX) requests are submitted behind the scenes and the page is updated incrementally. If something was wrong with the input, the user would be informed as soon as the server was able to determine the problem, but the user could continue to view and even interact with the page while this was happening in the background.

It should be noted that, no matter how sophisticated the client-side validation may be, it's still incumbent on the server to verify the data that it gets from the client, since any browser user can trivially disable any client-side validation checks.

AJAX relied on the XMLHttpRequest JavaScript object whose usage was something like:

var xmlhttp = new XMLHttpRequest();

xmlhttp.onreadystatechange = function()  {
  if (xmlhttp.readyState == 4 && xmlhttp.status == 200)  {
    var response = xmlhttp.responseText;
    // Parse the response and update the page to reflect
    // the parsed response
  } else if (xmlhttp.readyState == 4 && xmlhttp.status == 400)  {
    // Display an error message to the user
  }
};"GET", "/data/", true);
xmlhttp.send();

One issue with this format was its verbosity. Coupled with the inherent verbosity and complexity of DOM manipulation, this made for code that was hard to write, read, and debug. The fact that a typo in a Javascript handler would usually just cause the page to stop rendering entirely, rather than provide you with a usable error message, meant that Javascript development could get tedious, quickly. The jQuery client-side framework came along with an aim to greatly simplify the syntax of selecting DOM elements for manipulation in a cross-browser way, and even to streamline the syntax of the AJAX calls themselves.

Another issue was the introduction of asynchronous behavior (the 'A' in the acronym AJAX) in web pages. Before AJAX, it was perfectly reasonable to write Javascript under the expectation that, whatever it needed to do, it would be finished by the time the function returned. However, AJAX calls included, by definition, a remote call to a server, which could easily take long enough that a user would start to wonder whether the page had stopped working before the call could complete. (After all, if the timing of that round trip hadn't been a usability issue, we might never have started considering single-page web applications in the first place.) This meant that developers had to get used to the idea of initiating an action but expecting it to complete at some indeterminate time in the future — during which the user might well have initiated even more interactions!
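The "initiate now, complete later" shape can be sketched without a browser at all. Here a hypothetical sendRequest stands in for XMLHttpRequest: it records the request and returns immediately, and the response handler only fires when deliverResponse is called later — the way a real response arrives at some indeterminate future moment, after the user has kept interacting with the page.

```javascript
var events = [];   // a log of what happened, in order
var pending = null;

// Stand-in for, a real call would go over the network.
function sendRequest(url, onComplete) {
  events.push("request sent to " + url);
  pending = onComplete;          // nothing else happens yet
}

// Stand-in for the response arriving from the server "later".
function deliverResponse(body) {
  events.push("response arrived");
  pending(body);
}

sendRequest("/data/", function (body) {
  events.push("handler ran with: " + body);
});
events.push("user kept interacting"); // runs before the response exists

deliverResponse("{\"items\": []}");
console.log(events.join("\n"));
```

The log shows the crux of the mental shift: the code immediately after sendRequest runs before the handler does, so any state the handler needs must still be valid whenever the response finally shows up.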

Finally, a big issue with Javascript development in general became evident when the idea of automated unit testing started to gain widespread acceptance. Unit testing, say, Java code was (conceptually at least) simple: one of the first things a Java developer learns is how to create a main routine to trigger some Java code in a standalone way. Javascript never had that concept: Javascript was always part of a web page, and so much as triggering a Javascript function meant loading a web page in a browser and interacting with it. This complicated any attempt to automate Javascript testing.
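The testability problem largely disappears once logic is written as plain functions with no DOM attached. A small sketch (the isValidDate rule is hypothetical): a validation check written this way can be exercised directly, with no browser or page load involved.

```javascript
// A pure validation function: takes a string, returns a boolean,
// touches nothing outside itself -- so it can be called from any
// test harness, or even a plain script, without a browser.
function isValidDate(text) {
  // Accept only YYYY-MM-DD shaped input.
  return /^\d{4}-\d{2}-\d{2}$/.test(text);
}

console.log(isValidDate("2018-03-04")); // true
console.log(isValidDate("3/4/2018"));   // false
```

Compare that with the same rule buried in an onsubmit handler that reads its input out of the page: the only way to exercise the buried version is to stand up the page itself.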

The first draft of the Angular Javascript framework was aimed at addressing the complexity in keeping track of the responses that came back (asynchronously!) from the remote server, rendering them in HTML, and responding to user input outside of simple form handling. Angular 1 divided web apps into the model — essentially the data that came back from the server; the view — the HTML that displayed the model data; and the controller, which responded to user input. This Model-View-Controller (MVC) development approach wasn't new to Angular — it originated with Smalltalk at Xerox PARC in the late 1970s — but codifying it this way in a Javascript framework was new. One of the biggest anti-patterns in pre-Angular web development was the intermingling of HTML, CSS and Javascript: although fastidious developers could keep those layers separate on their own, Angular prescribed a code organization that made separating those concerns the natural default.

For the most part, Angular 1 added value (to single-page, asynchronous web applications) by creating new script-like HTML markup that was far more limited in functionality than full-blown Javascript, and then allowing the developer to delegate more complex functionality to actual Javascript, which itself was defined in a separate construct called the controller. Because the controller was ordinary Javascript with no direct dependence on the DOM, it lent itself well to unit testing, so the application could more easily be tested and verified outside of a browser.
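The division of labor can be illustrated without any Angular syntax at all — this is a plain-Javascript sketch of the MVC split that Angular 1 codified, with hypothetical names throughout, not actual Angular APIs:

```javascript
// Model: the data, typically whatever came back from the server.
var model = { items: [], total: 0 };

// View: renders the model as HTML. It knows nothing about how the
// model got its values -- in Angular 1 this role is played by the
// template markup rather than a function.
function view(m) {
  return "<p>" + m.items.length + " items, total $" + m.total + "</p>";
}

// Controller: responds to user input by updating the model only;
// it never touches the rendered HTML directly.
var controller = {
  addItem: function (name, price) {
    model.items.push(name);
    model.total += price;
  }
};

controller.addItem("widget", 5);
console.log(view(model)); // <p>1 items, total $5</p>
```

Because the controller only ever touches the model, it can be exercised in a test with no page present at all — which is exactly the testability win described above.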

It also simplified the art of creating the illusion of a multi-page application — complete with individual forms and responses — by defining client-side "routing". It took advantage of the page-linking structure that had been part of HTML from the beginning: HTML documents were designed to include links to one another, but also within themselves, to support things like footnotes and cross-references. A URL ending with a fragment identifier #fragment was a link to a specific location within a document. Since clicking on a fragment link in a single document didn't trigger a server refresh, Angular took advantage of that infrastructure and captured fragment links to emulate page transitions without actually refreshing the page.
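A minimal sketch of fragment-based routing, with hypothetical route names (not Angular's actual routing API): in a browser, the framework would watch the URL fragment for changes; here the fragment is passed in directly so the snippet runs standalone.

```javascript
// Map each fragment to a function that produces the "page" for it.
var routes = {
  "#/home": function () { return "<h1>Home</h1>"; },
  "#/cart": function () { return "<h1>Your Cart</h1>"; }
};

// Resolve a fragment to its view. Since fragment navigation never
// hits the server, swapping this markup into the page emulates a
// page transition with no refresh.
function resolve(fragment) {
  var handler = routes[fragment];
  // Unknown fragments fall back to home rather than erroring out.
  return handler ? handler() : routes["#/home"]();
}

console.log(resolve("#/cart")); // <h1>Your Cart</h1>
console.log(resolve("#/nope")); // <h1>Home</h1>
```

The design point is that the browser's existing fragment machinery (history, back button, bookmarkable URLs) comes along for free, which is why fragments were such a convenient hook for emulating multi-page navigation.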

About the time Angular started to become a popular framework and everybody working with it started to feel comfortable, its designers decided to scrap everything and start over. Angular 2 was a complete rewrite of the simpler Angular framework; the two are so dissimilar that they can hardly be recognized as being related. Angular 1 coding was done entirely in Javascript — and although you could use pure Javascript for Angular 2 if you really wanted to, all of the examples and documentation center around TypeScript, a type-safe superset of Javascript that is compiled down to cross-browser Javascript for execution. While Angular 1 applications were immediately recognizable by their inclusion of the AngularJS bootstrap code in a script tag, Angular 2 tries to hide most of that from the application developer, who is left to develop components and describe to the framework when and where those components ought to be rendered.

A note on terminology: I keep referring to Angular 1 and Angular 2. For a while, those were the official names; in what was supposed to be an effort to simplify things but has instead ended up complicating them, the Angular development team now calls Angular 1 "AngularJS" and Angular 2+ simply "Angular".

Just as Angular 1 created new HTML elements and a client-side parser that would read through a document and modify the Angular-specific markup to be proper HTML, Angular 2 created a different set of new HTML elements to tie views to models and controllers. They're on Angular 5 now, with two more releases scheduled, but fortunately the new releases are much more backwards compatible. I've been working with it for a couple of years now and, although it took some getting used to, I'll take the minimal overhead of Angular over the hair-tearing insanity of the first generation of web applications.

Joshua Davies