JSApps 101: AngularJS In A Nutshell

Introduction

In the previous part of this series, we discussed the interesting new tendency to let the browser do all the user-interface heavy lifting on the client side, since it is such a good guy. Of course, this involves a LOT of JavaScript, since it is the only language the browser understands besides HTML.

We all know how complex and crazy things can get when JavaScript enters the picture. The language is usually seen as over-complicated: it used to be hard to debug, the learning curve is steep as hell and it can get confusing quickly; the historical lack of tools and IDEs with strong support for its development only adds to the bad reputation.

However, fear not! After this article, you will have a solid base for turning the fiendish JavaScript you have known for years from a simple DOM-traversal helper into a whole, reliable application-development framework with the aid of AngularJS; man, now THAT’S a catchy name.

What In The World Is AngularJS?

AngularJS is a JavaScript library that allows you to implement the Model-View-Controller pattern (in some sort of way) at the client-side level (dude, the browser). Why is this? Because you have two options in life:

  • Option A    You implement a 1,000+ line JavaScript source file with a bunch of code that is in charge not only of manipulating the HTML structure and creating UI components, but also of all the data validation and display logic, all of this while your fellow teammates start hating the guts out of you.
  • Option B    You be a good developer who cares about separation of concerns, and split all those tasks into several components, each in charge of a single thing and written in a separate file for your code maintainability’s sake, while making a lot of friends in the process.

In the good old days, option A was “kind” of affordable (heavy quotations), since JavaScript was used for simple things like adding and removing elements from the HTML structure, changing colors and such. But when we talk about implementing a client application using only JavaScript, option A starts struggling. We are no longer just moving HTML elements around, animating stuff and doing other user-interface tasks; we are also performing data validation, error handling and server communications on the client side.

Of course, option B is the way to go. Since there is a ton of things going on in the browser, we need a way to create components that handle each task separately. This is known as the Separation of Concerns principle, or SoC, which states that your code has to be split into several parts, each handling a single task, orchestrated to achieve a common goal. And this is where AngularJS shines. It allows you to:

  • Organize your client-side code in a Model-View-Controller fashion, so you have separate components where each handles a different task in the logic stack: from user input to server data posting (don’t worry, we’ll get to this later on.)
  • Process templates and bind data live; more specifically: munch a template, bind it to specific data coming from the server or anywhere else, and then produce a valid HTML piece that can be displayed in the browser.
  • Create re-usable user interface components.
  • Implement advanced patterns, like dependency injection; which is tied to…
  • Implement unit tests for your JavaScript code.

Client-side Model-View-Whatever

We all know Model-View-Controller (MVC from now on) was conceived for a good reason: roles and responsibilities matter. It was designed as a practical way to separate all of the front-end logic into three interconnected parts, so code having to do with how data is displayed is clearly separated from the code that validates, stores and retrieves that data. Also, it was conceived with unit testing in mind.

MVC Flow

Now, MVC is commonly used server-side; think of ASP.NET MVC or Spring MVC, where you have a class representing the “controller”, which fetches data into “models” and then binds them to a template, producing a “view” in the form of an HTML document, which is returned to the browser for display on the user’s screen. Remember, each time we talk about MVC, we are generally referring to the Presentation Layer of an application.

Now, why go crazy and MVC-fy my JavaScript code? Well, because what’s trendy right now are dynamic web interfaces where pretty much everything is asynchronous, few post-backs are done to the server unless really necessary, and complex UI components like pop-ups, multi-level grids and others live on the screen (think of Facebook, for example). The only way to do this is to delegate to the browser (through JavaScript) the user-interface composition, the interaction between all of the components and, lastly, the fetching and posting of data back and forth from the server. You really, really NEED a way to achieve law and order so the client-side code does not become an uncontrollable mess.

OK, enough talk already. Let’s dive into how to implement a simple application using AngularJS.

Hello, Angular

Consider the classic “To-do List” example. We want a screen that displays a list of items to be completed; we also need to show a mini-form that allows us to enter and add new items to the list. Here is how it should look:

To-do List UI

So, in order to get this done, we need to complete some steps:

  • Create the HTML document which will be representing our view.
  • Include the AngularJS library in the HTML document so we can start creating controllers that will handle the behavior of the user interface.
  • Create and include a JavaScript file where we are going to create our application object (wait for it) and define some controllers.
  • Add scope (almost there…) variables and functions that will be bound to the view through special AngularJS HTML attributes.

Basic View Markup

Let’s start this in a natural way: let’s create the user interface first. We will create a file called todolist.html and we will add the following HTML to it:
ANGJS SNIP1
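(The snippet itself is not reproduced here, so what follows is a rough sketch of what the markup presumably looked like, laid out so that the script includes fall on lines 8 and 9, the UL element on line 13 and the input controls on line 15, matching the references below:)

<!DOCTYPE html>
<html>
<head>
    <title>To-do List</title>
</head>
<body>
    <h1>To-do List</h1>
    <script src="angular.min.js"></script> <!-- line 8 -->
    <script src="app.js"></script>         <!-- line 9 -->

    <h2>Things to do:</h2>

    <ul></ul>                              <!-- line 13 -->

    <input type="text" /> <button>Add</button> <!-- line 15 -->
</body>
</html>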
Now, nothing fancy right there; we just included the AngularJS library at line 8 and another file at line 9, which we will create in a moment. However, notice the structure of the document: we have a UL element which will represent our list of to-do items, and then a couple of input controls so we can add new items to the list. It does not have any information on it for now, but eventually it will… And it will be awesome.

Let’s leave this code as it is for now and move on to the app.js file, which is where all the fun is going on.

Adding Logic: The Controller

Just by looking at the previous HTML document, you will realize that there are two things we need regarding view state data:

  • An array to store the to-do items to be displayed in the UL element at line 13.
  • A function to add new items when I click the Add button at line 15.

Simple, right? Well, let’s create a new file called app.js and add the following code; I’ll explain it later:

ANGJS SNIP2
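(Again, a plausible reconstruction rather than the original snippet, arranged to match the line references below: the module on line 1, the controller on line 3, the items array on lines 7 to 11, the description variable on line 13 and the addItem function on lines 17 to 21; the sample items are my own invention:)

var app = angular.module('ToDoListApp', []);

app.controller('ToDoListController', function($scope) {

    // Pre-initialized to-do items; this is what the empty
    // UL element will eventually display.
    $scope.items = [
        { desc: 'Buy groceries' },
        { desc: 'Pay the bills' },
        { desc: 'Write next article' }
    ];

    $scope.newItemDescription = '';

    // Adds a new item to the array; called each time
    // the Add button is clicked.
    $scope.addItem = function() {
        var item = { desc: $scope.newItemDescription };
        $scope.items.push(item);
        $scope.newItemDescription = '';
    };
});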

First things first: let’s take a look at line 1. Before we start building controllers that handle UI logic, we need to create what is known as an AngularJS application module; this is done by calling the angular.module() function and passing it the module name, which returns the created module. This object will later be used as a container where you will store all of the controllers belonging to this particular JS App.

The variable angular is available after we included the AngularJS library in our HTML document at line 8.

After we create the application object, we can start creating controllers; this is done in line 3 through the controller() function, which is now callable from the application module we created in line 1. This function takes two arguments: the name of the controller and a callback function responsible for initializing it, which will be called each time a controller is instantiated by AngularJS. AngularJS passes a special variable called $scope to that function; everything added to the $scope variable during controller initialization will be accessible to the view for data binding. Better said: it represents what the view can see.

The rest of the lines are quite self-explanatory:

  • Lines 7 to 11 add a pre-initialized array of to-do items to the scope, which will be used as storage for new items. It represents the items that will be displayed in the empty UL element we have right now.
  • Line 13 declares a variable that will hold the description of new items to be added after the Add button is clicked.
  • Lines 17 to 21 define a function that will add new items to the to-do items array; this should be called each time the Add button is clicked.

Binding The View

Now that we have AngularJS, the application module and the to-do list controller, we are ready to start binding HTML elements in the view to the scope. But before that, we need to change our <html> tag a little and add some special AngularJS attributes:

ANGJS SNIP3
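(The snippet is missing here; presumably it was a one-line change along these lines:)

<html ng-app="ToDoListApp">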

Notice the ng-app attribute we added to the <html> tag; this is your first contact with one of the several directives that are part of the AngularJS data binding framework.

Directives are simply HTML element markers that are to be processed by the data binding engine. What does ng-app do? Well, it tells AngularJS which application module it should use for this HTML document; you might notice we specified ToDoListApp, which is the one we created in our app.js file at line 1.

After we associate our HTML document with ToDoListApp, the second step is to specify a controller for our view; this is done through the ng-controller directive. The ng-controller directive assigns a controller to a section of the HTML document; in other words, this is how we tell AngularJS where a view starts and where it ends.

Anyways, modify the <body> tag so it looks like this:

SNIP4
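(Presumably, another one-liner like this:)

<body ng-controller="ToDoListController">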

Just as with the ng-app directive, the previous code tells AngularJS that everything inside the <body> tag will be governed by ToDoListController, which we created in our app.js file at line 3.

Now we are ready to start binding elements to members added to the $scope variable during the ToDoListController controller initialization.

The ng-repeat Directive

Let’s start with the empty UL list element. In this case we want to create a new child LI element per item that is contained in the to-do items array. Let’s use the ng-repeat directive for this matter; add the following code inside the UL tags:

ANGJS SNIP5
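(The snippet is absent; judging from the explanation that follows, it presumably read:)

<li ng-repeat="item in items">{{item.desc}}</li>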

OK, hold on tight. The ng-repeat directive will repeat an element once per item in an array used as a data source; the quoted value represents the iterator expression, which defines the array to be iterated and an alias used to refer to the current item.

In this case, we specified that it should repeat the LI element for all of the items in the items array defined in the $scope variable from line 7 through 11 in the app.js file.

Lastly, inside the LI tags we define an Angular template, which is an expression enclosed in double curly braces that will be compiled dynamically during run-time. In this case, the template is extracting a member named desc from each item and is going to display its value inside the LI tags being repeated.

Go ahead, save your files and open todolist.html in your preferred browser; you will see how the list gets filled. Awesome stuff, huh?

The ng-model Directive

Next on our list is the ng-model directive, which is used to bind the value of an input element (a text box, text area, option list, etc.) to a variable of the $scope. But before I give you more talk, change the input element at line 15:

ANGJS SNIP6
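(Presumably, the modified input element looked like this:)

<input type="text" ng-model="newItemDescription" />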

Binding the value of an element means that each time the value of the input element changes, the variable specified in the ng-model directive will change with that value.

AngularJS enables two-way binding by default, meaning that if the value of the variable the input element is bound to changes, the change will be reflected in the input element on screen. This is the true magic of AngularJS: you can change the value of elements displayed on screen just by modifying values of the $scope variable; no need for selectors, no need to access the DOM from JavaScript, no need for extra code.

In this case, the input element has been bound to the newItemDescription variable of the $scope, defined at line 13 of the app.js file. Each time the value of the input element changes, the variable in the scope will be updated and vice versa.

The ng-click Directive

What if I want to do something when I click that? For that, AngularJS provides a bunch of event handler directives. These directives can be used to invoke functions defined on the $scope each time the user performs an action over an element.

The only one we are going to use for this example is ng-click, which handles user clicks on a button or input element; the expression it takes is basically the code to be executed on each click. In our case, we will modify the button element at line 15 and add the following directive:

ANGJS SNIP7
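(Presumably:)

<button ng-click="addItem()">Add</button>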

If you look closely, we are telling AngularJS to call the addItem function defined in the $scope. If you see the code of that function from line 17 to 21, you will see that it adds a new item to the to-do list array, based on the value of the newItemDescription variable.

If you save your files and open todolist.html, you will see how the list is automatically updated each time you enter a new description in the text box and click on Add.

Your HTML document is dynamic and alive. How cool is that?

What Kind Of Sorcery Is This!?

OK, I must tell you that none of this happens through magic. AngularJS is what’s called an unobtrusive library, which means that everything happens automatically as soon as the library finishes loading.

When the HTML document finishes loading, AngularJS crawls through the HTML document structure looking for ng-app directives, which tell it that something has to be done there. After all of the directives have been found, the marked elements are processed separately: elements are bound to controllers and templates are compiled.

The $scope variable’s lifetime is automatically handled by AngularJS: each time something happens on screen, it is notified and performs whatever has to be done to ensure the controller and the view stay in sync, from updating values in the $scope bound to a particular element to refreshing an element or template. This is done through an implementation of some sort of observer pattern that allows AngularJS to react to changes on elements and the $scope itself.
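By the way, you can tap into that same mechanism yourself through the $scope.$watch function; here is a tiny illustrative snippet of my own (not part of the to-do example):

$scope.$watch('newItemDescription', function(newValue, oldValue) {
    // Called every time the bound value changes on screen.
    console.log('Description changed from ' + oldValue + ' to ' + newValue);
});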

Conclusion

Woah! That was a bunch of stuff to digest. Hopefully, you will make sense of it with some practice. Of course, those three directives are not everything there is to AngularJS; for a complete list of all the directives you can use out of the box, take a look here.

Keep in mind that you can also extend the library with custom directives, but that is an advanced topic we might cover in a future article.

AngularJS is not everything there is to JS Apps; this was a very basic introduction, and I want you to have knowledge you can use in real life, so we will be learning how to build more complex JS Apps in my next article. Topics to be covered will be:

  • Creating custom Angular directives.
  • Complex user interfaces (windows, redirection and data validation).
  • RequireJS for on-demand asynchronous JavaScript files loading.

Hope that sounds exciting enough for you. Stay tuned! :D

Source Code

Further Reading

JSApps 101: Introduction To JavaScript Applications

Introduction

So, JavaScript… Again! After some months away from this blog, I am back with a new series of articles related to the incredible, magical and mysterious world of JavaScript. More specifically, JavaScript applications. Have you ever heard of AngularJS, Backbone, Knockout JS, LESS and such things? Read on, this might interest you.

We have all used, at some point of our Internet life, some awesome websites, such as Facebook, GitHub, Spotify and others, where everything is asynchronous, the user interface is super-responsive and the experience couldn’t be closer to a desktop application in matters of functionality, all of this right in our browser. Little do some people imagine that these sites owe their slickness mainly to our good old friend in battle: JavaScript; oh, so many developers underestimate JavaScript. This article series will dive into the basics of how these kinds of powerful JavaScript applications are built, and on top of what technologies and frameworks, so let’s move forward into some basic concepts.

Server-side vs. Client-side

So, what in the world is a JavaScript application anyway? Well, as you might know, the traditional way a web application works is that you have a set of specialized frameworks and tools (name it ASP.NET, PHP, Spring Framework) running server-side; when someone requests a page from the server, it responds with an HTML document, usually the result of parsing a server-side template (a PHP, ASPX or the like) and binding it to data coming from the database. The templates processed by the server usually contain special syntax and directives that instruct the server’s templating engine how to bind data and produce a valid HTML document; some might recall these as the dreaded “server tags.”

Standard Server Request/Response


Some server-side technologies like ASP.NET use “controls” or helpers that assist in rendering complex user-interface components into HTML, like grids, forms and charts bound to dynamic data coming from the database. Each time these components need to be refreshed, they do it through asynchronous AJAX requests or a full-page refresh (known as a server post-back, which all users love, or not). While these are handy for speed-building of web solutions, they are not as efficient as pure-JavaScript graphical components.

ASP.NET WebControls

Often, JavaScript is used to manipulate the structure of the resulting HTML document, get the value of a field and perform other simple tasks dynamically on the browser (better known as “the client side”) without the need to refresh the page. But as the popularity of JavaScript rose (let’s thank jQuery for that), it has been delegated more and more complex stuff, like rendering templates into HTML client-side rather than server-side, binding server data, validating user input and controlling page flow. This being said, a JavaScript application is basically a “client” that runs on the browser, thanks to the leverage of technologies such as JavaScript, HTML5 and CSS3. All of the UI logic is controlled client-side, right there in the browser.

Structure of a JavaScript App

Before moving on, it is true that this requires a paradigm shift if you have been working on traditional web applications for a while, especially if you have never used a Model-View-Controller approach. If you have never heard of, or used, Model-View-Controller, I’m afraid there is some reading to be done before continuing. But hey! You can start here, or else you can continue reading this incredibly sexy article.

As mentioned before, a JavaScript application, or JS App (patent pending), usually follows an MVC approach. It is composed of several “views”, which are usually HTML documents or templates; “controllers” that handle validation, logic flow and communications with the server; and “models” that contain the data to be displayed and bound on the views. As you might notice, this is a model pretty similar to server-side technologies like ASP.NET MVC and Spring MVC, except that the entire presentation layer is moved to the browser, specifically into JavaScript components. We’ll analyze the advantages of this later on.

With all the presentation logic being handled by the browser, where does the data we see on the UI come from? It comes from the server; that is the real use we have for it. The controller at the browser is responsible for this channel of communication: it retrieves data from the server each time the user pages through a data grid, and sends data to it whenever the user needs to create or edit information. JS Apps work in a similar way to smartphone apps, in which a client runs locally on the phone and uses data coming from a remote server. In fact, there are specialized build tools, like PhoneGap, that create applications to be installed on a smartphone from HTML/JS/CSS3 sources.

JS App Structure

Pros & Cons

While JS Apps go far beyond any conventional use of a browser, they offer several advantages:

  • Rendering of pages and templates is done by the browser in the client computer, taking advantage of the client computer’s processing power and leaving less workload on the server.
  • Better user-interface responsiveness, since all calls to the server are asynchronous and JavaScript UI components are usually lightweight.
  • Completely decoupled from the server logic.
  • Fewer calls to the server, since it is only accessed to get data, not pages in every possible display state they might have.
  • High separation of concerns, since the server ONLY handles business logic and not UI-related validation and such.
  • Easy unit testing of the user interface and UI logic.

Also, it might represent some disadvantages:

  • Lots, lots and LOTS of JavaScript to be written; we all know it can be a pain to maintain if not properly done.
  • The learning curve is quite steep, since most people are used to jQuery and DOM manipulation but not to JavaScript controllers, models and pseudo-classes; let alone advanced concepts like JavaScript dependency injection.
  • Data incoming to the server needs to be double-checked in order to prevent bad information sent by tampered JavaScript components.

Sounds Kind of Interesting, Now What?

OK, now that you get the picture of what a JS App might look like and its advantages, the next step is to analyze the technologies and frameworks you could use, getting your hands dirty along the way, so you can start developing these kinds of applications.

In the following articles we will move into learning JavaScript libraries like AngularJS, for client-side Model-View-Controller; RequireJS, a library that allows asynchronous loading of JavaScript files on-demand; usage of Twitter Bootstrap, to build nice HTML5-compliant user interfaces; and ultimately how to structure your server application as a solid data provider for your JavaScript application.

So, stay tuned for more articles! :D


Why There Is Interface Pollution in Java 8

I was reading this interesting post about The Dark Side of Java 8. In it, Lukas Eder, the author, mentions how bad it is that in the JDK 8 the types are not simply called functions. For instance, in a language like C#, there is a set of predefined function types accepting any number of arguments with an optional return type (Func and Action, each one going up to 16 parameters of different types T1, T2, T3, …, T16), but in the JDK 8 what we have is a set of different functional interfaces, with different names and different method names, whose abstract methods represent a subset of well-known function signatures (i.e. nullary, unary, binary, ternary, etc.).

The Type Erasure Issue

So, in a way, both languages suffer from some form of interface pollution (or delegate pollution in C#). The only difference is that in C# these types all share the same name. In Java, unfortunately, due to type erasure, there is no way to distinguish Function<T1,T2> from Function<T1,T2,T3> or Function<T1,T2,T3,...,Tn>, so evidently we couldn’t simply name them all the same way, and we had to come up with creative names for all the possible function combinations.
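To illustrate the erasure clash with a quick example of my own (using List rather than function types), two overloads whose signatures differ only in their generic type arguments erase to the same raw signature and are rejected by the compiler:

import java.util.List;

class ErasureClash {
    // Both methods erase to process(List), producing a
    // "name clash: same erasure" compile error.
    void process(List<String> strings) { }
    void process(List<Integer> numbers) { } // does not compile
}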

Don’t think the expert group did not struggle with this problem. In the words of Brian Goetz in the lambda mailing list:

[...] As a single example, let’s take function types. The lambda strawman offered at devoxx had function types. I insisted we remove them, and this made me unpopular. But my objection to function types was not that I don’t like function types — I love function types — but that function types fought badly with an existing aspect of the Java type system, erasure. Erased function types are the worst of both worlds. So we removed this from the design.

But I am unwilling to say “Java never will have function types” (though I recognize that Java may never have function types.) I believe that in order to get to function types, we have to first deal with erasure. That may, or may not be possible. But in a world of reified structural types, function types start to make a lot more sense [...]

So, how does this affect us as developers? The following is a categorization of some of the most important new functional interfaces (and some old ones) in the JDK 8 organized by function return type and the number of expected arguments in the interface method.

Functions with Void Return Type

In the realm of functions with a void return type, we have the following:

  • Nullary: () -> doSomething()
    Known functional interfaces: Runnable

  • Unary: foo -> System.out.println(foo)
    Known functional interfaces: Consumer, IntConsumer, LongConsumer, DoubleConsumer

  • Binary: (console,text) -> console.print(text)
    Known functional interfaces: BiConsumer, ObjIntConsumer, ObjLongConsumer, ObjDoubleConsumer

  • n-ary: (sender,host,text) -> sender.send(host, text)
    Known functional interfaces: define your own

Functions with Some Return Type T

In the realm of functions with a return type T, we have the following:

  • Nullary: () -> "Hello World"
    Known functional interfaces: Callable, Supplier, BooleanSupplier, IntSupplier, LongSupplier, DoubleSupplier

  • Unary: n -> n + 1 or n -> n >= 0
    Known functional interfaces: Function, IntFunction, LongFunction, DoubleFunction; IntToLongFunction, IntToDoubleFunction, LongToIntFunction, LongToDoubleFunction, DoubleToIntFunction, DoubleToLongFunction; UnaryOperator, IntUnaryOperator, LongUnaryOperator, DoubleUnaryOperator; Predicate, IntPredicate, LongPredicate, DoublePredicate

  • Binary: (a,b) -> a > b ? 1 : 0, (x,y) -> x + y or (x,y) -> x % y == 0
    Known functional interfaces: Comparator, BiFunction, ToIntBiFunction, ToLongBiFunction, ToDoubleBiFunction; BinaryOperator, IntBinaryOperator, LongBinaryOperator, DoubleBinaryOperator; BiPredicate

  • n-ary: (x,y,z) -> 2 * x + Math.sqrt(y) - z
    Known functional interfaces: define your own

An advantage of this approach is that we can define our own interface types with methods accepting as many arguments as we would like, and we can use them to create lambda expressions and method references as we see fit. In other words, we have the power to pollute the world with yet even more new functional interfaces. Also, we can create lambda expressions even for interfaces in earlier versions of the JDK, or for earlier versions of our own APIs that defined SAM types like these. And so now we have the power to use Runnable and Callable as functional interfaces.
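For instance (a trivial illustration of my own):

// Pre-Java 8 SAM types are perfectly valid lambda targets:
Runnable task = () -> System.out.println("running...");
Callable<String> answer = () -> "forty-two";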

However, these interfaces become more difficult to memorize since they all have different names and methods.

Still, I am one of those wondering why they didn’t solve the problem as in Scala, defining interfaces like Function0, Function1, Function2, …, FunctionN. Perhaps, the only argument I can come up with against that is that they wanted to maximize the possibilities of defining lambda expressions for interfaces in earlier versions of the APIs as mentioned before.

Lack of Value Types

So, evidently, type erasure is one driving force here. But if you are one of those wondering why we also need all these additional functional interfaces with similar names and method signatures whose only difference is the use of a primitive type, then let me remind you that in Java we also lack value types like those in a language like C#. This means that the generic types used in our generic classes can only be reference types, and not primitive types.

In other words, we can’t do this:

List<int> numbers = asList(1,2,3,4,5);

But we can indeed do this:

List<Integer> numbers = asList(1,2,3,4,5);

The second example, though, incurs the cost of boxing and unboxing the wrapped objects back and forth from/to primitive types. This can become really expensive in operations dealing with collections of primitive values. So, the expert group decided to create this explosion of interfaces to deal with the different scenarios. To make things “less worse” they decided to deal with only three basic types: int, long and double.
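As a quick illustration of my own of what these specializations buy us:

// Boxes every result into an Integer object:
Function<String, Integer> boxedLength = String::length;

// Works directly with the primitive int, no boxing involved:
ToIntFunction<String> primitiveLength = String::length;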

Quoting the words of Brian Goetz in the lambda mailing list:

More generally: the philosophy behind having specialized primitive streams (e.g., IntStream) is fraught with nasty tradeoffs. On the one hand, it’s lots of ugly code duplication, interface pollution, etc. On the other hand, any kind of arithmetic on boxed ops sucks, and having no story for reducing over ints would be terrible. So we’re in a tough corner, and we’re trying to not make it worse.

Trick #1 for not making it worse is: we’re not doing all eight primitive types. We’re doing int, long, and double; all the others could be simulated by these. Arguably we could get rid of int too, but we don’t think most Java developers are ready for that. Yes, there will be calls for Character, and the answer is “stick it in an int.” (Each specialization is projected to ~100K to the JRE footprint.)

Trick #2 is: we’re using primitive streams to expose things that are best done in the primitive domain (sorting, reduction) but not trying to duplicate everything you can do in the boxed domain. For example, there’s no IntStream.into(), as Aleksey points out. (If there were, the next question(s) would be “Where is IntCollection? IntArrayList? IntConcurrentSkipListMap?) The intention is many streams may start as reference streams and end up as primitive streams, but not vice versa. That’s OK, and that reduces the number of conversions needed (e.g., no overload of map for int -> T, no specialization of Function for int -> T, etc.)

We can see that this was a difficult decision for the expert group. I think few would agree that this is cool, and most of us would most likely agree it was necessary.

The Checked Exceptions Issue

There was a third driving force that could have made things even worse, and it is the fact that Java supports two types of exceptions: checked and unchecked. The compiler requires that we handle or explicitly declare checked exceptions, but it requires nothing for unchecked ones. So, this creates an interesting problem, because the method signatures of most of the functional interfaces do not declare any exceptions. For instance, this is not possible:

Writer out = new StringWriter();
Consumer<String> printer = s -> out.write(s); //oops! compiler error

It cannot be done because the write operation throws a checked exception (i.e. IOException), but the signature of the Consumer method does not declare that it throws any exception at all. So, the only solution to this problem would have been to create even more interfaces, some declaring exceptions and some not (or to come up with yet another mechanism at the language level for exception transparency). Again, to make things “less worse” the expert group decided to do nothing in this case.

In the words of Brian Goetz in the lambda mailing list:

Yes, you’d have to provide your own exceptional SAMs. But then lambda conversion would work fine with them.

The EG discussed additional language and library support for this problem, and in the end felt that this was a bad cost/benefit tradeoff.

Library-based solutions cause a 2x explosion in SAM types (exceptional vs not), which interact badly with existing combinatorial explosions for primitive specialization.

The available language-based solutions were losers from a complexity/value tradeoff. Though there are some alternative solutions we are going to continue to explore — though clearly not for 8 and probably not for 9 either.

In the meantime, you have the tools to do what you want. I get that you prefer we provide that last mile for you (and, secondarily, your request is really a thinly-veiled request for “why don’t you just give up on checked exceptions already”), but I think the current state lets you get your job done.

So, it’s up to us, the developers, to craft yet even more interface explosions to deal with these on a case-by-case basis:

interface IOConsumer<T> {
   void accept(T t) throws IOException;
}

static <T> Consumer<T> exceptionWrappingBlock(IOConsumer<T> b) {
   return e -> {
      try { b.accept(e); }
      catch (Exception ex) { throw new RuntimeException(ex); }
   };
}

In order to do:

Writer out = new StringWriter();
Consumer<String> printer = exceptionWrappingBlock(s -> out.write(s));

Probably, in the future (maybe JDK 9), when we get Support for Value Types in Java and Reification, we will be able to get rid of (or at least no longer need to use) these multiple interfaces.

In summary, we can see that the expert group struggled with several design issues. The need, requirement or constraint to keep backwards compatibility made things difficult; then we have other important conditions like the lack of value types, type erasure and checked exceptions. If Java had the first and lacked the other two, the design of JDK 8 would probably have been different. So, we all must understand that these were difficult problems with lots of tradeoffs, and the EG had to draw a line somewhere and make a decision.

So, when we find ourselves in the dark side of Java 8, probably we need to remind ourselves that there is a reason why things are dark in that side of the JDK :-)

6 Days with Windows Phone

Disclaimer: What follows is my personal opinion; it does not necessarily reflect Informatech’s position. Though I’ve tried to be as unbiased as possible, it will undoubtedly reflect my views.

A friend upgrades to a Lumia 920 and a Lumia 900 is left orphaned.

With a Galaxy Nexus experiment going on since Oct 2012, and waiting for Apple’s comeback iOS 7 (plus whatever they introduce later this year), I dive in for a week to see how the Windows Phone 7.8 experience stacks up.

Clarifications

I hail from a background of Apple devices, at least for a couple of years now (I had Nokia smartphones before). I’ve been exploring Android, if anything because as a developer it’s unforgivable not to have any experience in it, but customizability is not something that drives my purchases.

Beyond specific OS choices, I believe in finished, polished products. I dislike having to hack or otherwise mod my devices. Yep, that includes spending hours tweaking and configuring.

I like my stuff to just work, with minimal fuss. Things can be technologically interesting, but in a device I want a product. That said, let’s delve in.

The Experience


What do I usually do in a smartphone (that will thus dictate the experience on Windows Phone)? Pretty much WhatsApp, Facebook, push Gmail, push Google Contacts, camera, and Dropbox auto photo uploads. Yes, other things matter, but that’s what realistically I use most of the time, and will be the scope of this review.

So let’s not waste too much time discussing setup (which is generally polished), suffice it to say that of the above:

  • WhatsApp and Facebook were installed from the Marketplace.
  • Camera is good, but there is no official Dropbox support. I routed around this by enabling SkyDrive auto uploads, so no biggie.
  • When setting up the Google account, we hit our first snag. Because Google discontinued ActiveSync, the only straightforward choice is an IMAP setup for email only, with no calendar or contacts. Fortunately, you can still add Gmail as an Exchange server manually through the end of July. This worked OK, but the contact import was kinda crappy (i.e. for contacts with multiple numbers, only the first was imported).

Once everything was working, it took me about a day to get used to the concept of Live Tiles. They are the driver of the Windows Phone UI, where you’ll launch apps, get notifications and see periodic changes of relevant content. The idea is novel and elegantly implemented, and after months on Android, it made me feel that special care had been taken with the consistency and polish of the interface. Both were very much welcome.

In full day-to-day usage, the UI shines, though where I felt the most joy was in the touch keyboard. It is absolutely a pleasure to use (no gestures or swipes, straight taps), and by far it’s the best of any smartphone that I’ve used.

The camera was another pleasant surprise. The capture and photo browser were excellent, behold:

WP_000006

Multitasking also requires a bit of getting used to: a long press of the back key brings up the open-app list, but it will not let you kill any app. You have to jump in and continually press back until you exit.

To finish off the “stock” functionality, the People hub was generally useful (even though the Google contacts were indeed not wholly sync’d). It gave quick access to recent contacts, and integrated well with social networks.

Even though Facebook is integrated in, the way to properly see your News Feed is through the standalone app, which sadly is not developed by FB and is quite honestly sub-par compared to similar offerings on iOS and Android.

WhatsApp was a similar story: the implementation is not up to par with the other platforms and, more annoyingly, it activated the music controls and gobbled battery. As a workaround, you have to download a separate app that kills the music controls, and run it periodically. WhatsApp also seemed to implode under heavily used group chats.

Even with the mediocrity of third-party apps, I can honestly say the OS is pleasant to use, and the tiles are colorful and attractive. So rounding out:

Pros

  • Superb interface, extremely polished.
  • Quite simply the best touch keyboard I’ve ever used, on any smartphone.
  • Integration between social networks and contacts is almost seamless.
  • Excellent camera.
  • Lumia hardware is very capable and attractive.
  • Integration with Microsoft services is predictably good.

Cons

  • Synchronization with Google services is poor (especially now that ActiveSync was retired).
  • App selection and, most importantly, quality are low, low, low. Years behind iOS and Android.
  • WhatsApp drains the battery, and requires “Stop the Music” to kill it every once in a while.
  • Multitasking does not allow you to close an application from the app list (a WP 7.8 limitation?)
  • Lack of a centralized notification area is confusing.

Conclusion

If you’re a Hotmail user, and you live your life in Exchange and Microsoft Office, Windows Phone is a natural fit. The Lumia hardware is capable and attractive, the UI is very polished, and if you can live with the poor app selection and quality, you’ll enjoy it.

However, if you use Google services and have gotten used to the abundance of other app stores, the UI may not compensate for the functionality you would have to give up. In the future this may change, but at present it’s too much to take.

Memoized Fibonacci Numbers with Java 8

Since today is Fibonacci Day, I decided that it would be interesting to publish something related to it.

I believe one of the first algorithms we all see when learning non-linear recursion is that of calculating a Fibonacci number. I found a great explanation on the subject in the book Structure and Interpretation of Computer Programs [SICP], and I dedicated some time to playing with the Fibonacci algorithm just for fun. While doing so, I found an interesting way to improve the classical recursive algorithm by using one of the new methods (added in Java 8) in the Map interface, which I used here to implement a form of memoization.

Classical Recursive Fibonacci

In the classical definition of Fibonacci we learn that:

fib(n) = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ fib(n-1) + fib(n-2) & \text{otherwise} \end{cases}

We program this very easily in Java:

public static long fibonacci(int x) {
   if(x==0 || x==1)
      return x;
   return fibonacci(x-1) + fibonacci(x-2);
}

Now, the problem with this algorithm is that, with the exception of the base case, we recursively invoke our function twice, and interestingly one branch recalculates part of the other branch every time we invoke the function. Consider the following image (taken from SICP) that represents an invocation of fibonacci(5).

Clearly, the branch to the right is redoing all the work already done during the recursive process carried out by the left branch. Can you see how many times fibonacci(2) was calculated? The problem gets worse as the function argument gets bigger. In fact, this problem is so serious that the calculation of a relatively small argument like fibonacci(50) might take quite a long time.

Memoized Recursive Fibonacci

However, there is a way to improve the performance of the original recursive algorithm (I mean without having to resort to an iterative linear-time algorithm or to a closed-form solution like Binet’s formula).

The serious problem we have in the original algorithm is that we do too much rework. So, we could alleviate the problem by using memoization, in other words by providing a mechanism to avoid repeated calculations by caching results in a lookup table that can later be used to retrieve the values of already processed arguments.

In Java we could try to store the Fibonacci numbers in a hash table or map. In the case of the left branch, we’ll have to run the entire recursive process to obtain the corresponding Fibonacci sequence values, but as we find them, we update the hash table with the results obtained. This way, the right branches will only perform a table lookup, and the corresponding value will be retrieved from the hash table instead of being computed through a recursive calculation again.

Some of the new methods in the Map interface, in Java 8, simplify a lot the writing of such an algorithm, particularly the method computeIfAbsent(key, function), where the key would be the number for which we would like to look up the corresponding Fibonacci number, and the function would be a lambda expression capable of triggering the recursive calculation if the corresponding value is not already present in the map.

So, we can start by defining a map and putting the values in it for the base cases, namely fibonacci(0) and fibonacci(1):

private static Map<Integer,Long> memo = new HashMap<>();
static {
   memo.put(0,0L); //fibonacci(0)
   memo.put(1,1L); //fibonacci(1)
}

And for the inductive step all we have to do is redefine our Fibonacci function as follows:

public static long fibonacci(int x) {
   return memo.computeIfAbsent(x, n -> fibonacci(n-1) + fibonacci(n-2));
}

As you can see, the method computeIfAbsent will use the provided lambda expression to calculate the Fibonacci number when the number is not present in the map. The recursive process will be triggered entirely for the left branch, but the right branch will simply use the memoized values. This represents a significant improvement.

Based on my subjective observation, this improved recursive version was outstandingly faster for a moderately large argument like fibonacci(70). With this algorithm we can safely calculate up to fibonacci(92) without running into long overflow. Even better, to be sure that our algorithm never causes an overflow without letting the user know, we could also use Math.addExact, one of the new methods added to the Math class in Java 8, which throws an ArithmeticException when an overflow occurs. So we could change our code as follows:

public static long fibonacci(int x) {
   return memo.computeIfAbsent(x, n -> Math.addExact(fibonacci(n-1),
                                                     fibonacci(n-2)));
}

This method would start failing for fibonacci(93). If we need to go over 92 we would have to use BigInteger in our algorithm, instead of just long.
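In case you are curious, that BigInteger variant could look somewhat like this, following the very same memoization pattern (my sketch, redefining both the map and the function):

private static Map<Integer,BigInteger> memo = new HashMap<>();
static {
   memo.put(0, BigInteger.ZERO); //fibonacci(0)
   memo.put(1, BigInteger.ONE);  //fibonacci(1)
}

public static BigInteger fibonacci(int x) {
   return memo.computeIfAbsent(x, n -> fibonacci(n-1).add(fibonacci(n-2)));
}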

Notice that the memoized example uses mutation; therefore, in order to use this code in a multithreaded environment, we would first need to add some form of synchronization to the proposed code, or use a different map implementation, perhaps a ConcurrentHashMap, which, evidently, may impact performance as well. Arguably, this would still be better than the original recursive algorithm.

Java 8 Optional Objects

In this post I present several examples of the new Optional objects in Java 8, and I make comparisons with similar approaches in other programming languages, particularly the functional programming language SML and the JVM-based programming language Ceylon, the latter currently under development by Red Hat.

I think it is important to highlight that the introduction of optional objects has been a matter of debate. In this article I try to present my perspective on the problem, and I make an effort to show arguments in favor of and against the use of optional objects. It is my contention that in certain scenarios the use of optional objects is valuable, but ultimately everyone is entitled to an opinion; I just hope this article helps the readers make an informed one, just as writing it helped me understand the problem much better.

About the Type of Null

In Java we use a reference type to gain access to an object, and when we don’t have a specific object to make our reference point to, then we set such reference to null to imply the absence of a value.

In Java, null is actually a type, a special one: it has no name, we cannot declare variables of its type or cast any variables to it; in fact, there is a single value that can be associated with it (i.e. the literal null), and unlike any other type in Java, a null reference can be safely assigned to any other reference type (see JLS 3.10.7 and 4.1).

The use of null is so common that we rarely meditate on it: field members of objects are automatically initialized to null and programmers typically initialize reference types to null when they don’t have an initial value to give them and, in general, null is used everywhere to imply that, at certain point, we don’t know or we don’t have a value to give to a reference.

About the Null Pointer Reference Problem

Now, the major problem with the null reference is that if we try to dereference it then we get the ominous and well known NullPointerException.

When we work with a reference obtained from a different context than our own code (i.e. as the result of a method invocation, or when we receive a reference as an argument in a method we are working on), we would all like to avoid this error that has the potential to make our application crash. But often the problem is not noticed early enough, and it finds its way into production code, where it waits for the right moment to fail (which is typically a Friday at the end of the month, around 5 p.m., just when you are about to leave the office to go to the movies with your family or drink some beers with your friends). To make things worse, the place where your code fails is rarely the place where the problem originated, since your reference could have been set to null far away from the place in your code where you intended to dereference it. So, you better cancel those plans for the Friday night…

It’s worth mentioning that the concept of null references was first introduced by Tony Hoare, one of the designers of ALGOL W, back in 1965. The consequences were not so evident in those days, but he later regretted his design and called it “a billion-dollar mistake”, precisely referring to the countless hours that many of us have spent, since then, fixing this kind of null dereferencing problem.

Wouldn’t it be great if the type system could tell the difference between a reference that, in a specific context, could potentially be null and one that couldn’t? This would help a lot in terms of type safety, because the compiler could then enforce that the programmer do some verification for references that could be null, while allowing direct use of the others. We see here an opportunity for improvement in the type system. This could be particularly useful when writing the public interface of APIs, because it would increase the expressive power of the language, giving us a tool, besides documentation, to tell our users that a given method may or may not return a value.

Now, before we delve any further, I must clarify that this is an ideal that modern languages will probably pursue (we’ll talk about Ceylon and Kotlin later), but it is not an easy task to try to fix this hole in a programming language like Java when we intend to do it as an afterthought. So, in the coming paragraphs I present some scenarios in which I believe the use of optional objects could arguably alleviate some of this burden. Even so, the evil is done, and nothing will get rid of null references any time soon, so we better learn to deal with them. Understanding the problem is one step and it is my opinion that these new optional objects are just another way to deal with it, particularly in certain specific scenarios in which we would like to express the absence of a value.

Finding Elements

There is a set of idioms in which the use of null references is potentially problematic. One of those common cases is when we look for something that we cannot ultimately find. Consider now the following simple piece of code used to find the first fruit in a list of fruits that has a certain name:

public static Fruit find(String name, List<Fruit> fruits) {
   for(Fruit fruit : fruits) {
      if(fruit.getName().equals(name)) {
         return fruit;
      }
   }
   return null;
}

As we can see, the creator of this code is using a null reference to indicate the absence of a value that satisfies the search criteria (7). It is unfortunate, though, that it is not evident in the method signature that this method may not return a value, but a null reference.

Now consider the following code snippet, written by a programmer expecting to use the result of the method shown above:

List<Fruit> fruits = asList(new Fruit("apple"),
                            new Fruit("grape"),
                            new Fruit("orange"));

Fruit found = find("lemon", fruits);
//some code in between and much later on (or possibly somewhere else)...
String name = found.getName(); //uh oh!

Such a simple piece of code has an error that cannot be detected by the compiler, nor by simple observation by the programmer (who may not have access to the source code of the find method). The programmer, in this case, has naively failed to recognize the scenario in which the find method above could return a null reference to indicate the absence of a value that satisfies his predicate. This code is waiting to be executed to simply fail, no amount of documentation is going to prevent this mistake from happening, and the compiler will not even notice that there is a potential problem here.

Also notice that the line where the reference is set to null (5) is different from the problematic line (7). In this case they were close enough; in other cases this may not be so evident.

In order to avoid the problem, what we typically do is check if a given reference is null before we try to dereference it. In fact, this verification is quite common, and in certain cases it could be repeated so many times on a given reference that Martin Fowler (renowned for his book on refactoring principles) suggested that, for these particular scenarios, such verification could be avoided with the use of what he called a Null Object. In our example above, instead of returning null, we could have returned a NullFruit object reference, which is an object of type Fruit that is hollow inside and which, unlike a null reference, is capable of properly responding to the same public interface of a Fruit.
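Such a Null Object could look somewhat like this (assuming, as in the surrounding snippets, that Fruit exposes a name-based constructor and a getName method; the sketch is mine, not Fowler’s):

class NullFruit extends Fruit {
   public NullFruit() {
      super("");
   }
   // Unlike a null reference, it responds safely to Fruit's interface.
   @Override
   public String getName() {
      return "";
   }
}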

Minimum and Maximum

Another place where this could be potentially problematic is when reducing a collection to a value, for instance to a maximum or minimum value. Consider the following piece of code that can be used to determine which is the longest string in a collection.

public static String longest(Collection<String> items) {
   if(items.isEmpty()){
      return null;
   }
   Iterator<String> iter = items.iterator();
   String result = iter.next();
   while(iter.hasNext()) {
       String item = iter.next();
       if(item.length() > result.length()){
          result = item;
       }
   }
   return result;
}

In this case, the question is: what should be returned when the list provided is empty? In this particular case, a null value is returned, once again opening the door to a potential null dereferencing problem.
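As a side note of my own: the new Java 8 streams already acknowledge this problem, since reductions like Stream.max return an Optional, the very class we discuss below:

Optional<String> longest = items.stream()
                                .max(Comparator.comparingInt(String::length));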

The Functional World Strategy

It’s interesting that statically-typed programming languages in the functional programming paradigm evolved in a different direction. In languages like SML or Haskell there is no such thing as a null value that causes exceptions when dereferenced. Instead, these languages provide a special data type capable of holding an optional value, which can be conveniently used to express the possible absence of a value. The following piece of code shows the definition of the SML option type:

datatype 'a option = NONE | SOME of 'a

As you can see, option is a data type with two constructors, one of them stores nothing (i.e. NONE) whereas the other is capable of storing a polymorphic value of some value type 'a (where 'a is just a placeholder for the actual type).

Under this model, the piece of code we wrote before in Java, to find a fruit by its name, could be rewritten in SML as follows:

fun find(name, fruits) =
   case fruits of
        [] => NONE
      | (Fruit s)::fs => if s = name
                         then SOME (Fruit s)
                         else find(name,fs)

There are several ways to achieve this in SML; this example just shows one way to do it. The important point here is that there is no such thing as null; instead, a value NONE is returned when nothing is found (3), and a value SOME fruit is returned otherwise (5).

When a programmer uses this find function, he knows that it returns an option type value, and therefore the programmer is forced to check the nature of the value obtained to see if it is either NONE (6) or SOME fruit (7), somewhat like this:

let
   val fruits = [Fruit "apple", Fruit "grape", Fruit "orange"]
   val found = find("grape", fruits)
in
   case found of
       NONE => print("Nothing found")
     | SOME(Fruit f) => print("Found fruit: " ^ f)
end

Having to check for the true nature of the returned option makes it impossible to misinterpret the result.

Java Optional Types

It’s a joy that finally in Java 8 we’ll have a new class called Optional that allows us to implement an idiom similar to that of the functional world. As in the case of SML, the Optional type is polymorphic and may contain a value or be empty. So, we could rewrite our previous code snippet as follows:

public static Optional<Fruit> find(String name, List<Fruit> fruits) {
   for(Fruit fruit : fruits) {
      if(fruit.getName().equals(name)) {
         return Optional.of(fruit);
      }
   }
   return Optional.empty();
}

As you can see, the method now returns an Optional reference (1); if something is found, the Optional object is constructed with a value (4), otherwise it is constructed empty (7).

And the programmer using this code would do something as follows:

List<Fruit> fruits = asList(new Fruit("apple"),
                            new Fruit("grape"),
                            new Fruit("orange"));

Optional<Fruit> found = find("lemon", fruits);
if(found.isPresent()) {
   Fruit fruit = found.get();
   String name = fruit.getName();
}

Now it is made evident in the type of the find method that it returns an optional value (5), and the user of this method has to program his code accordingly (6-7).

So we see that the adoption of this functional idiom is likely to make our code safer, less prone to null dereferencing problems and, as a result, more robust and less error-prone. Of course, it is not a perfect solution because, after all, Optional references can also be erroneously set to null, but I would expect that programmers stick to the convention of not passing null references where an optional object is expected, pretty much as today we consider it good practice not to pass a null reference where a collection or an array is expected; in those cases, the correct thing to do is to pass an empty array or collection. The point here is that now we have a mechanism in the API that we can use to make explicit that, for a given reference, we may not have a value to assign to it, and the user is forced, by the API, to verify that.

Quoting an article I reference later about the use of optional objects in the Guava Collections framework: “Besides the increase in readability that comes from giving null a name, the biggest advantage of Optional is its idiot-proof-ness. It forces you to actively think about the absent case if you want your program to compile at all, since you have to actively unwrap the Optional and address that case”.

Other Convenient Methods

As of today, besides the static factory methods of and empty shown above, the Optional class contains the following convenient instance methods:

  • isPresent() Returns true if a value is present in the optional, false otherwise.
  • get() Returns a reference to the item contained in the optional object, if present; otherwise throws a NoSuchElementException.
  • ifPresent(Consumer<T> consumer) Passes the optional value, if present, to the provided Consumer (which could be implemented through a lambda expression or method reference).
  • orElse(T other) Returns the value, if present; otherwise returns the value in other.
  • orElseGet(Supplier<T> other) Returns the value, if present; otherwise returns the value provided by the Supplier (which could be implemented with a lambda expression or method reference).
  • orElseThrow(Supplier<X> exceptionSupplier) Returns the value, if present; otherwise throws the exception provided by the Supplier (which could be implemented with a lambda expression or method reference).

Avoiding Boilerplate Presence Checks

We can use some of the convenient methods mentioned above to avoid having to check explicitly whether a value is present in the optional object. For instance, we may want to use a default fruit value if nothing is found; let’s say we would like to use a “Kiwi”. So we could rewrite our previous code like this:

Optional<Fruit> found = find("lemon", fruits);
String name = found.orElse(new Fruit("Kiwi")).getName();

In this other example, the code prints the fruit name to the standard output, but only if the fruit is present. In this case, we implement the Consumer with a lambda expression:

Optional<Fruit> found = find("lemon", fruits);
found.ifPresent(f -> { System.out.println(f.getName()); });

This other piece of code uses a lambda expression to provide a Supplier which can ultimately provide a default answer if the optional object is empty:

Optional<Fruit> found = find("lemon", fruits);
Fruit fruit = found.orElseGet(() -> new Fruit("Lemon"));
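Finally, when the absence of a value should be treated as an error, orElseThrow converts the empty case into an exception of our choosing. A small sketch, reusing the same find method:

Optional<Fruit> found = find("lemon", fruits);

// Throws IllegalStateException if the optional is empty.
Fruit fruit = found.orElseThrow(() -> new IllegalStateException("No such fruit"));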

Clearly, these convenient methods greatly simplify working with optional objects.

So What’s Wrong with Optional?

The question we face is: will Optional get rid of null references? And the answer is an emphatic no! So, detractors immediately question its value, asking: what is it good for that we couldn’t already do by other means?

Unlike functional languages like SML or Haskell, which never had the concept of a null reference, in Java we cannot simply get rid of the null references that have historically existed. These will continue to exist, and they arguably have their proper uses (to mention just one example: three-valued logic).

I doubt that the intention behind the Optional class is to replace every single nullable reference; rather, it is to help in the creation of more robust APIs, in which just by reading the signature of a method we can tell whether to expect an optional value, and in which the programmer is forced to use that value accordingly. But ultimately, Optional will be just another reference, subject to the same weaknesses as every other reference in the language. It is quite evident that Optional is not going to save the day.

How these optional objects are supposed to be used, and whether they are valuable or not in Java, has been the subject of a heated debate in the project lambda mailing list. From the detractors we hear interesting arguments like:

  • The fact that other alternatives already exist (i.e. the Eclipse IDE supports a set of proprietary annotations for static analysis of nullability, and JSR-305 defines annotations like @Nullable and @NonNull).
  • Some would like it to be usable as in the functional world, which is not entirely possible in Java since the language lacks many features existing in functional programming languages like SML or Haskell (i.e. pattern matching).
  • Others argue that it is impossible to retrofit preexisting code to use this idiom (e.g. Map.get(Object), which will continue to return null).
  • And some complain about the fact that the lack of language support for optional values creates a potential scenario in which Optional could be used inconsistently in the APIs, thereby creating incompatibilities, pretty much like the ones we will have with the rest of the Java API, which cannot be retrofitted to use the new Optional class.
  • A compelling argument is that if the programmer invokes the get method on an empty optional object, it will raise a NoSuchElementException, which is pretty much the same problem that we have with nulls, just with a different exception (see the sketch right after this list).
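A minimal sketch of that last point, reusing the Fruit type from the earlier examples:

Optional<Fruit> empty = Optional.empty();

// Extracting a value that is not there blows up, pretty much like
// dereferencing a null, only with NoSuchElementException instead of
// NullPointerException.
Fruit fruit = empty.get();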

So, it would appear that the benefits of Optional are questionable, and probably constrained to improving readability and enforcing public interface contracts.

Optional Objects in the Stream API

Irrespective of the debate, the optional objects are here to stay, and they are already being used in the new Stream API in methods like findFirst, findAny, max and min. It may be worth mentioning that a very similar class has been in use in the successful Guava Collections library.

For instance, consider the following example where we extract from a stream the last fruit name in alphabetical order:

Stream<Fruit> fruits = asList(new Fruit("apple"),
                              new Fruit("grape")).stream();
Optional<Fruit> max = fruits.max(comparing(Fruit::getName));
if(max.isPresent()) {
   String fruitName = max.get().getName(); //grape
}

Or this other one, in which we obtain the first fruit in a stream:

Stream<Fruit> fruits = asList(new Fruit("apple"),
                              new Fruit("grape")).stream();
Optional<Fruit> first = fruits.findFirst();
if(first.isPresent()) {
   String fruitName = first.get().getName(); //apple
}
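Since these stream methods return optional objects, they compose nicely with the convenient methods we saw earlier. As a hedged sketch (assuming the same asList static import as above), this chain falls back to a default fruit when nothing in the stream matches:

String name = asList(new Fruit("apple"), new Fruit("grape"))
                 .stream()
                 .filter(f -> f.getName().startsWith("z"))
                 .findFirst()
                 .orElse(new Fruit("Kiwi"))
                 .getName(); // "Kiwi", since no fruit name starts with "z"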

Ceylon Programming Language and Optional Types

Recently I started to play a bit with the Ceylon programming language, since I was doing research for another post that I am planning to publish soon on this blog. I must say I am not a big fan of Ceylon, but still I found it particularly interesting that Ceylon takes this concept of optional values a bit further, and the language itself offers some syntactic sugar for the idiom. In this language we can mark any type with a ? (question mark) in order to indicate that it is an optional type.

For instance, this find function would be very similar to our original Java version, but this time returning an optional Fruit? reference. Also notice that a null value is compatible with the optional Fruit? reference, which is why the function can simply return null when nothing is found.

Fruit? find(String name, List<Fruit> fruits){
   for(Fruit fruit in fruits) {
      if(fruit.name == name) {
         return fruit;
      }
   }
   return null;
}

And we could use it with this Ceylon code, similar to our last Java snippet in which we used an optional value:

List<Fruit> fruits = [Fruit("apple"),Fruit("grape"),Fruit("orange")];
Fruit? fruit = find("lemon", fruits);
print((fruit else Fruit("Kiwi")).name);

Notice that the use of the else keyword here is pretty similar to the orElse method in the Java 8 Optional class. Also notice that the syntax is similar to the declaration of C# nullable types, but it means something totally different in Ceylon. It may be worth mentioning that Kotlin, the programming language under development by JetBrains, has a similar feature related to null safety (so maybe we are witnessing the beginning of a trend in programming languages).

An alternative way of doing this would have been like this:

List<Fruit> fruits = [Fruit("apple"),Fruit("grape"),Fruit("orange")];
Fruit? fruit = find("apple", fruits);
if(exists fruit){
   String fruitName = fruit.name;
   print("The found fruit is: " + fruitName);
} //else...

Notice that the exists keyword here serves the same purpose as the isPresent method invocation in the Java Optional class.

The great advantage Ceylon has over Java is that its APIs can use this optional type from the beginning; within the realm of the language there are no incompatibilities to deal with, and the idiom can be fully supported everywhere (perhaps the problem will be in the integration with the rest of the Java APIs, but I have not studied this yet).

Hopefully, in future releases of Java, this same syntactic sugar from Ceylon and Kotlin will also be made available, perhaps using, under the hood, this new Optional class introduced in Java 8.

Further Reading

Overview Of The Task Parallel Library (TPL)

Introduction

Remember those times when we needed to spawn a separate thread in order to execute long-running operations without freezing the application until the operation completed? Well, time to rejoice; those days are long gone. Starting with version 4.0 (and with language-level support added in 4.5), the Microsoft .NET Framework delivers a library that introduces the concept of “tasks”. This library is known as the Task Parallel Library, or TPL.

Tasks vs. Threads

In the good (annoying) old days, we frequently had the need to spawn a separate thread to query the database without locking the main application thread, so we could show a loading message to the user, wait for the query to finish and then process the results. This is a common scenario in desktop and mobile applications. Even though there are several ways to spawn background threads (async delegates, background workers and such), in the most basic and rudimentary fashion things went a little something like this:

User user = null;

// Create background thread that will get the user from the repository.
Thread findUserThread = new Thread(() =>
{
    user = DataContext.Users.FindByName("luis.aguilar");
});

// Start background thread execution.
findUserThread.Start();

Console.WriteLine("Loading user..");

// Block current thread until background thread finishes assigning a
// value to the "user" variable.
findUserThread.Join();

// At this point the "user" variable contains the user instance loaded
// from the repository.
Console.WriteLine("User loaded. Name is " + user.Name);

Once again, this code is effective; it does what it has to do: load a user from a repository and show the loaded user’s name on the console. However, it completely sacrifices succinctness in order to initialize, run and join the background thread that loads the user asynchronously.

The Task Parallel Library introduces the concept of “tasks”. Tasks are basically operations to be run asynchronously, just like what we just did using “thread notation”. This means that we no longer speak in terms of threads, but in terms of tasks instead, which lets us execute asynchronous operations while writing very little code (which is also a lot easier to understand and read). Now, things have changed for good:

Console.WriteLine("Loading user..");

// Create and start the task that will get the user from the repository.
var findUserTask = Task.Factory.StartNew(() => DataContext.Users.FindByName("luis.aguilar"));

// The task's Result property holds the result of the async operation. If
// the task has not finished, accessing it will block the current thread
// until it does, pretty much like the Thread.Join() method.
var user = findUserTask.Result;

Console.WriteLine("User loaded. Name is " + user.Name);

A lot better, huh? Of course it is. Now we have the result of the async operation strongly typed; pretty much like using async delegates, but without all the boilerplate code required to create them. This is possible thanks to the power of C# lambda expressions and built-in delegates (Func, Action, Predicate, etc.).

Tasks have a property called Result. This property contains the value returned by the lambda expression we passed to the StartNew() method. What happens when we try to access this property while the task is still running? Well, the execution of the calling method is halted until the task finishes. This behavior is similar to the Thread.Join() call in the first code example.
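For tasks that do not return a value there is no Result property to read; in that case, calling Wait() gives the same blocking behavior. A minimal sketch:

// Create and start a task that returns no value.
var logTask = Task.Factory.StartNew(() => Console.WriteLine("Doing some background work.."));

// Block the current thread until the task finishes; analogous to Thread.Join().
logTask.Wait();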

Task Continuations

OK, we now know how this whole tasks thing goes. But let’s assume you don’t want to block the calling thread until the task finishes; instead, you want another task to run afterwards and do something with the result. For such scenarios, we have task continuations.

The Task Parallel Library allows us to chain tasks together, so they are executed one after another. Even better, the code to achieve this is completely fluent and readable.

Console.WriteLine("Loading user..");

// Create tasks to be executed in fluent manner.
Task.Factory
    .StartNew<User>(() => DataContext.Users.FindByName("luis.aguilar")) // First task.
    .ContinueWith(previousTask =>
    {
        // This will execute after the first task finishes. First task's result
        // is passed as the first argument of this lambda expression.
        var user = previousTask.Result;

        Console.WriteLine("User loaded. Name is " + user.Name);
    });

// Tasks will start running asynchronously. You can do more things here...

As expressive as it gets, you can read the previous code like “start a new task to find a user by name, and continue by printing the user’s name on the console”. It is important to notice that the first parameter of the ContinueWith() method is the previously executed task, which allows us to access its return value through its Result property.
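Continuations can also be made conditional. As a hedged sketch (reusing the same DataContext repository from the examples above), the TaskContinuationOptions enumeration lets a continuation run only when the previous task failed:

Task.Factory
    .StartNew<User>(() => DataContext.Users.FindByName("luis.aguilar"))
    .ContinueWith(previousTask =>
    {
        // Runs only if the first task threw an exception.
        Console.WriteLine("Could not load user: " + previousTask.Exception.InnerException.Message);
    }, TaskContinuationOptions.OnlyOnFaulted);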

Async And Await

The Task Parallel Library means so much for the Microsoft .NET Framework that new keywords were added to the specifications of all its languages to deal with asynchronous tasks. These new keywords are async and await.

The async keyword is a method modifier that indicates the method contains asynchronous operations; contrary to what the name might suggest, it does not by itself run the whole method on a separate thread. Then we have the await keyword, which tells the compiler to suspend the method until the awaited task completes: for tasks which return values, the result is then assigned to a local variable; for those with no return value, execution simply resumes once the task finishes.

Here is how it works:

// 1. Awaiting For Tasks With Result:
async void LoadAndPrintUserNameAsync()
{
    // Create, start and wait for the task to finish; then assign the result to a local variable.
    var user = await Task.Factory.StartNew<User>(() => DataContext.Users.FindByName("luis.aguilar"));

    // At this point we can use the loaded user.
    Console.WriteLine("User loaded. Name is " + user.Name);
}

// 2. Awaiting For Task With No Result:
async void PrintRandomMessage()
{
    // Create, start and wait for the task to finish.
    await Task.Factory.StartNew(() => Console.WriteLine("Not doing anything really."));
}

// 3. Usage:
void RunTasks()
{
    // Load user and print its name.
    LoadAndPrintUserNameAsync();

    // Do something else.
    PrintRandomMessage();
}

As you can see, asynchronous methods are now marked with a neat async modifier. It is important to clarify what this means: an asynchronous method can contain multiple child tasks which are going to run in any order, but marking the method as async does not move its whole body to a separate thread. The method runs synchronously until it reaches its first await; at that point control returns to the caller, and the compiler arranges for the rest of the method to resume once the awaited task completes.

For example, if we declare the method with a Task return type instead of void, writing this:

var loadAndPrintUserNameTask = LoadAndPrintUserNameAsync();

.. gives us a Task object representing the ongoing operation. Note that, unlike a task created manually through the Task constructor, a task returned by an async method is “hot”: it is already running, so there is no need to call the Start() method on it.

Now, we can also create awaitable methods. This special kind of method is callable using the await keyword.

async Task LoadUserAsync()
{
    // Create, start and wait for the task to finish; then assign the result to a local variable.
    var user = await Task.Factory.StartNew<User>(() => DataContext.Users.FindByName("luis.aguilar"));

    // Return the loaded user. The compiler wraps this in a Task<User> automagically.
    return user;
}

All awaitable methods specify a task as their return type. Now, there is something we need to discuss in detail here. This method’s signature specifies a return value of type Task<User>, yet the body returns the loaded user instance instead. What is this? Well, as the comment in the snippet says, the compiler wraps the returned value in a task for us, and the method can then be consumed in two different scenarios.

The first scenario is when it is called in a traditional fashion. In this case it returns the actual task instance, already running:

Task<User> loadUserTask = LoadUserAsync();

// The task is already started ("hot"); there is no need to call Start() on it.

The second scenario is when it is called using await. In this case the method asynchronously waits for the task to finish, and the unwrapped result gets assigned to the specified local variable.

User user = await LoadUserAsync();

// Similar in effect to reading LoadUserAsync().Result,
// except that await does not block the calling thread.

See? At first glance it looks like a method that can return two different types of value depending on how it is called, which is quite an interesting thing to see. In reality, though, it always returns a Task<User>; the await keyword is what unwraps the result. By the way, it is important to remember that any method which at any point awaits an asynchronous operation using the await keyword needs to be marked as async itself.
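As a closing sketch tying these rules together (a hypothetical helper reusing the LoadUserAsync method above, plus Task.WhenAll from .NET 4.5), notice how the await in the body is precisely what forces the async modifier on the method:

async Task PrintUserNameAsync()
{
    // Hypothetical helper. Tasks returned by async methods and by
    // StartNew() are already running ("hot") at this point.
    var userTask = LoadUserAsync();
    var logTask = Task.Factory.StartNew(() => Console.WriteLine("Loading user.."));

    // Using await here is what requires the async modifier above.
    await Task.WhenAll(userTask, logTask);

    Console.WriteLine("User loaded. Name is " + userTask.Result.Name);
}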

Conclusion

This surely means something for the whole framework. It looks like Microsoft has taken good care of parallel programming in its latest framework releases. Desktop and mobile application developers will surely love these new features, which significantly reduce boilerplate code and improve readability. We can all feel happy about our beloved framework moving forward the right way once again.

That’s all for now, folks. Stay tuned! ;)

Further Reading