# 6 Days with Windows Phone

Disclaimer: What follows is my personal opinion; it does not necessarily reflect Informatech’s position. Though I’ve tried to be as unbiased as possible, it will undoubtedly reflect my views.

## With a Galaxy Nexus experiment going on since Oct 2012, and waiting for Apple’s comeback iOS 7 (plus whatever they introduce later this year), I dive in for a week to see how the Windows Phone 7.8 experience stacks up.

### Clarifications

I hail from a background of Apple devices, at least for the last couple of years (I had Nokia smartphones before that). I’ve been exploring Android, mostly because as a developer it’s unforgivable not to have any experience with it, but customizability is not something that drives my purchases.

Beyond specific OS choices, I believe in finished, polished products. I dislike having to hack or otherwise mod my devices. Yep, that includes spending hours tweaking and configuring.

I like my stuff to just work, with minimal fuss. Things can be technologically interesting, but in a device I want a product. That said, let’s delve in.

### The Experience

What do I usually do on a smartphone (and what will thus dictate the experience on Windows Phone)? Pretty much WhatsApp, Facebook, push Gmail, push Google Contacts, camera, and Dropbox auto photo uploads. Yes, other things matter, but that’s realistically what I use most of the time, and it will be the scope of this review.

So let’s not waste too much time discussing setup (which is generally polished), suffice it to say that of the above:

• WhatsApp and Facebook were installed from the Marketplace.
• Camera is good, but there is no official Dropbox support. I routed around this by enabling SkyDrive auto uploads, so no biggie.
• When setting up the Google account, we hit our first snag. Because Google discontinued ActiveSync, the only straightforward choice is an IMAP setup for email only, with no calendar or contacts. Fortunately you can still add Gmail as an Exchange server manually through the end of July. This worked OK, but the contact import was spotty (e.g., for contacts with multiple numbers, only the first number was imported).

Once everything was working, it took me about a day to get used to the concept of Live Tiles. They are the driver of the Windows Phone UI: where you launch apps, get notifications, and see relevant content update periodically. The idea is novel and elegantly implemented, and after months on Android, it made me feel that special care had been taken in the consistency and polish of the interface. Both were very much welcome.

In full day-to-day usage the UI shines, though where I felt the most joy was the touch keyboard. It is an absolute pleasure to use (no gestures or swipes, straight taps), and it’s by far the best of any smartphone I’ve used.

The camera was another pleasant surprise. The capture experience and the photo browser were excellent.

Multitasking also requires a bit of getting used to: a long press of the back key brings up a list of open apps, but it will not let you kill any of them. You have to jump into the app and press back repeatedly until you exit.

To finish off the “stock” functionality, the People hub was generally useful (even though the Google contacts were indeed not wholly sync’d). It gave quick access to recent contacts, and integrated well with social networks.

Even though Facebook is integrated in, the way to properly see your News Feed is through the standalone app, which sadly is not developed by Facebook and is honestly subpar compared to its iOS and Android counterparts.

WhatsApp was a similar story: the implementation is not up to par with the other platforms, and more annoyingly it activated the music controls and gobbled battery. As a workaround, you have to download a separate app that kills the music controls, and run it periodically. The app also seemed to implode under heavy group-chat use.

Even with the mediocrity of third-party apps, I can honestly say the OS is pleasant to use, and the tiles are colorful and attractive. So rounding out:

### Pros

• Superb interface, extremely polished.
• Quite simply the best touch keyboard I’ve ever used, on any smartphone.
• Integration between social networks and contacts is almost seamless.
• Excellent camera.
• Lumia hardware is very capable and attractive.
• Integration with Microsoft services is predictably good.

### Cons

• Synchronization with Google services is poor (especially now that ActiveSync was retired).
• App selection and most importantly, quality, is low, low, low. Years behind iOS and Android.
• WhatsApp drains the battery, and requires “Stop the Music” to kill it every once in a while.
• Multitasking does not allow you to close an application from the app list (a WP 7.8 limitation?).
• Lack of a centralized notification area is confusing.

### Conclusion

If you’re a Hotmail user, and you live your life in Exchange and Microsoft Office, Windows Phone is a natural fit. The Lumia hardware is capable and attractive, the UI is very polished, and if you can live with the poor app selection and quality, you’ll enjoy it.

However, if you use Google services and have gotten used to the abundance of other app stores, the UI may not compensate for the functionality you would have to give up. In the future this may change, but at present it’s too much to take.

# Memoized Fibonacci Numbers with Java 8

Since today is Fibonacci Day, I decided that it would be interesting to publish something related to it.

I believe one of the first algorithms we all see when learning non-linear recursion is the calculation of a Fibonacci number. I found a great explanation of the subject in the book Structure and Interpretation of Computer Programs (SICP), and I dedicated some time to playing with the Fibonacci algorithm just for fun. While doing so, I found an interesting way to improve the classical recursive algorithm by using one of the new methods added to the Map interface in Java 8, which I used here to implement a form of memoization.

## Classical Recursive Fibonacci

In the classical definition of Fibonacci we learn that:

$fib(n) = \left\{ \begin{array}{ll} 0 & \mbox{if } n=0\\1 & \mbox{if } n=1\\fib(n-1)+fib(n-2) & \mbox{otherwise} \end{array} \right.$

We program this very easily in Java:

public static long fibonacci(int x) {
    if(x == 0 || x == 1)
        return x;
    return fibonacci(x-1) + fibonacci(x-2);
}


Now, the problem with this algorithm is that, with the exception of the base cases, we recursively invoke our function twice, and one branch recalculates part of the work already done by the other branch every time we invoke the function. Consider the following image (taken from SICP) that represents an invocation of fibonacci(5).

Clearly the branch to the right is redoing all the work already done during the recursive process carried out by the left branch. Can you see how many times fibonacci(2) was calculated? The problem gets worse as the function argument gets bigger. In fact this problem is so serious that the calculation of a small argument like fibonacci(50) might take quite a long time.
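To make the rework concrete, here is a small instrumentation sketch (mine, not from the book) that counts how many times the naive function is invoked for each argument:

```java
import java.util.HashMap;
import java.util.Map;

public class FibCallCounter {
    // Maps each argument to the number of times fib was called with it.
    static Map<Integer, Integer> calls = new HashMap<>();

    static long fib(int x) {
        calls.merge(x, 1, Integer::sum); // record one invocation for this argument
        if (x == 0 || x == 1) {
            return x;
        }
        return fib(x - 1) + fib(x - 2);
    }

    public static void main(String[] args) {
        fib(10);
        // Even for this small input, fib(2) is recomputed many times.
        System.out.println("fib(2) was computed " + calls.get(2) + " times");
    }
}
```

Running it shows that the invocation counts themselves grow like the Fibonacci sequence, which is exactly the exponential blowup described above.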

## Memoized Recursive Fibonacci

However, there is a way to improve the performance of the original recursive algorithm (I mean without having to resort to, say, an iterative linear-time algorithm or a closed-form solution such as Binet’s formula).

The serious problem we have in the original algorithm is that we do too much rework. So, we could alleviate the problem by using memoization, in other words by providing a mechanism to avoid repeated calculations by caching results in a lookup table that can later be used to retrieve the values of already processed arguments.

In Java we could try to store the Fibonacci numbers in a hash table or map. For the left branch we’ll still have to run the entire recursive process to obtain the corresponding Fibonacci values, but as we find them, we update the hash table with the results. This way, the right branches will only perform a table lookup, and the corresponding value will be retrieved from the hash table instead of through a recursive calculation.

Some of the new methods added to the Map interface in Java 8 simplify the writing of such an algorithm a lot, particularly the method computeIfAbsent(key, function). Here the key is the number for which we want to look up the corresponding Fibonacci number, and the function is a lambda expression capable of triggering the recursive calculation if the corresponding value is not already present in the map.

So, we can start by defining a map and putting the values in it for the base cases, namely, fibonacci(0) and fibonacci(1):

private static Map<Integer,Long> memo = new HashMap<>();
static {
    memo.put(0, 0L); //fibonacci(0)
    memo.put(1, 1L); //fibonacci(1)
}


And for the inductive step all we have to do is redefine our Fibonacci function as follows:

public static long fibonacci(int x) {
    return memo.computeIfAbsent(x, n -> fibonacci(n-1) + fibonacci(n-2));
}


As you can see, the method computeIfAbsent will use the provided lambda expression to calculate the Fibonacci number when the number is not already present in the map. The recursive process will be triggered entirely for the left branch, but the right branch will use the memoized values. This represents a significant improvement.

Based on my subjective observation, this improved recursive version was outstandingly faster for an input like fibonacci(70). With this algorithm we can safely calculate up to fibonacci(92) without running into long overflow. Even better, to be sure that our algorithm never overflows without letting the user know, we can use one of the new methods added to the Math class in Java 8, addExact, which throws an ArithmeticException when overflow occurs. So we could change our code as follows:

public static long fibonacci(int x) {
    return memo.computeIfAbsent(x, n -> Math.addExact(fibonacci(n-1),
                                                      fibonacci(n-2)));
}


This method would start failing for fibonacci(93). If we need to go over 92 we would have to use BigInteger in our algorithm, instead of just long.
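A hedged sketch of that BigInteger variant might look like the following (my code, not from the original text). It uses an explicit get/put instead of computeIfAbsent, since recent JDKs reject a recursive computeIfAbsent call on a HashMap with a ConcurrentModificationException:

```java
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

public class BigFib {
    private static final Map<Integer, BigInteger> memo = new HashMap<>();
    static {
        memo.put(0, BigInteger.ZERO); // fibonacci(0)
        memo.put(1, BigInteger.ONE);  // fibonacci(1)
    }

    public static BigInteger fibonacci(int x) {
        BigInteger cached = memo.get(x);
        if (cached != null) {
            return cached; // already computed: plain table lookup
        }
        BigInteger result = fibonacci(x - 1).add(fibonacci(x - 2));
        memo.put(x, result); // cache for later lookups
        return result;
    }

    public static void main(String[] args) {
        // Well past the point where long would overflow.
        System.out.println(fibonacci(100));
    }
}
```

BigInteger arithmetic is slower than primitive long arithmetic, but it never overflows, so the argument range is limited only by memory and stack depth.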

Notice that the memoized example relies on mutation; therefore, in order to use this code in a multithreaded environment we would first need to add some form of synchronization to the proposed code. A different map implementation such as ConcurrentHashMap is not a drop-in fix here, since its computeIfAbsent documentation forbids the mapping function from updating the map, which our recursive call does. Whatever synchronization we choose may impact performance as well, but arguably this would still be better than the original recursive algorithm.
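One possible approach (my own sketch, not from the original text) is to serialize access with a synchronized method and use an explicit get/put, which also sidesteps the restriction on re-entrant computeIfAbsent calls:

```java
import java.util.HashMap;
import java.util.Map;

public class SafeFib {
    private static final Map<Integer, Long> memo = new HashMap<>();
    static {
        memo.put(0, 0L); // fibonacci(0)
        memo.put(1, 1L); // fibonacci(1)
    }

    // Serializing all calls on the class lock is the simplest (if coarse)
    // way to make the shared map safe; the intrinsic lock is reentrant,
    // so the recursive calls below do not deadlock.
    public static synchronized long fibonacci(int x) {
        Long cached = memo.get(x);
        if (cached != null) {
            return cached;
        }
        long result = Math.addExact(fibonacci(x - 1), fibonacci(x - 2));
        memo.put(x, result);
        return result;
    }
}
```

Coarse locking like this limits parallelism, but for a memoized function whose hot path is a single map lookup, the contention cost is usually acceptable.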

# Java 8 Optional Objects

In this post I present several examples of the new Optional objects in Java 8, and I make comparisons with similar approaches in other programming languages, particularly the functional programming language SML and the JVM-based programming language Ceylon, the latter currently under development at Red Hat.

I think it is important to highlight that the introduction of optional objects has been a matter of debate. In this article I try to present my perspective on the problem, and I make an effort to show arguments both for and against the use of optional objects. It is my contention that in certain scenarios the use of optional objects is valuable, but ultimately everyone is entitled to an opinion. I just hope this article helps the readers make an informed one, just as writing it helped me understand the problem much better.

## About the Type of Null

In Java we use a reference type to gain access to an object, and when we don’t have a specific object to make our reference point to, then we set such reference to null to imply the absence of a value.

In Java null is actually a type, a special one: it has no name, we cannot declare variables of its type or cast any variables to it, and in fact there is a single value that can be associated with it (i.e. the literal null). Unlike any other type in Java, a null reference can be safely assigned to any other reference type (see JLS 3.10.7 and 4.1).

The use of null is so common that we rarely meditate on it: field members of objects are automatically initialized to null, and programmers typically initialize reference types to null when they don’t have an initial value to give them. In general, null is used everywhere to imply that, at a certain point, we don’t know or don’t have a value to give to a reference.

## About the Null Pointer Reference Problem

Now, the major problem with null references is that if we try to dereference one, we get the ominous and well-known NullPointerException.

When we work with a reference obtained from a different context than our own code (e.g., returned by a method invocation, or received as an argument in a method we are working on), we would all like to avoid this error, which has the potential to make our application crash. But often the problem is not noticed early enough, and it finds its way into production code, where it waits for the right moment to fail (typically a Friday at the end of the month, around 5 p.m., just when you are about to leave the office to go to the movies with your family or drink some beers with your friends). To make things worse, the place where your code fails is rarely the place where the problem originated, since the reference could have been set to null far away from the place where you intended to dereference it. So, you better cancel those plans for Friday night…

It’s worth mentioning that the concept of null references was first introduced by Tony Hoare back in 1965, while he was designing ALGOL W. The consequences were not so evident in those days, but he later regretted the design and called it “a billion-dollar mistake”, precisely referring to the countless hours that many of us have spent, since then, fixing this kind of null dereferencing problem.

Wouldn’t it be great if the type system could tell the difference between a reference that, in a specific context, could be potentially null from one that couldn’t? This would help a lot in terms of type safety because the compiler could then enforce that the programmer do some verification for references that could be null at the same time that it allows a direct use of the others. We see here an opportunity for improvement in the type system. This could be particularly useful when writing the public interface of APIs because it would increase the expressive power of the language, giving us a tool, besides documentation, to tell our users that a given method may or may not return a value.

Now, before we delve any further, I must clarify that this is an ideal that modern languages will probably pursue (we’ll talk about Ceylon and Kotlin later), but it is not an easy task to try to fix this hole in a programming language like Java when we intend to do it as an afterthought. So, in the coming paragraphs I present some scenarios in which I believe the use of optional objects could arguably alleviate some of this burden. Even so, the evil is done, and nothing will get rid of null references any time soon, so we better learn to deal with them. Understanding the problem is one step and it is my opinion that these new optional objects are just another way to deal with it, particularly in certain specific scenarios in which we would like to express the absence of a value.

## Finding Elements

There is a set of idioms in which the use of null references is potentially problematic. One of those common cases is when we look for something that we cannot ultimately find. Consider now the following simple piece of code used to find the first fruit in a list of fruits that has a certain name:

public static Fruit find(String name, List<Fruit> fruits) {
    for(Fruit fruit : fruits) {
        if(fruit.getName().equals(name)) {
            return fruit;
        }
    }
    return null;
}


As we can see, the creator of this code is using a null reference in the final return statement to indicate the absence of a value that satisfies the search criteria. It is unfortunate, though, that it is not evident in the method signature that this method may return a null reference instead of a value.

Now consider the following code snippet, written by a programmer expecting to use the result of the method shown above:

List<Fruit> fruits = asList(new Fruit("apple"),
                            new Fruit("grape"),
                            new Fruit("orange"));

Fruit found = find("lemon", fruits);
//some code in between and much later on (or possibly somewhere else)...
String name = found.getName(); //uh oh!


Such a simple piece of code has an error that cannot be detected by the compiler, nor even by simple observation by the programmer (who may not have access to the source code of the find method). The programmer, in this case, has naively failed to recognize the scenario in which the find method above can return a null reference to indicate the absence of a value satisfying the predicate. This code is simply waiting to be executed to fail; no amount of documentation is going to prevent the mistake, and the compiler will not even notice that there is a potential problem here.

Also notice that the line where the reference is set to null is different from the line where it is dereferenced. In this case they were close enough; in other cases this may not be so evident.

In order to avoid the problem, what we typically do is check whether a given reference is null before trying to dereference it. In fact, this verification is quite common, and in certain cases the check may be repeated so many times on a given reference that Martin Fowler (renowned for his book on refactoring principles) suggested that for these particular scenarios the verification could be avoided with the use of what he called a Null Object. In our example above, instead of returning null, we could have returned a NullFruit object reference: an object of type Fruit that is hollow inside and which, unlike a null reference, is capable of properly responding to the same public interface of a Fruit.
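A minimal sketch of that Null Object idea (the class and constant names here are illustrative, not from any real API) could look like this:

```java
import java.util.Arrays;
import java.util.List;

public class NullObjectDemo {

    public static class Fruit {
        private final String name;
        public Fruit(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // A hollow Fruit that responds to the same public interface,
    // so callers never need a null check.
    public static final Fruit NULL_FRUIT = new Fruit("<no fruit>");

    public static Fruit find(String name, List<Fruit> fruits) {
        for (Fruit fruit : fruits) {
            if (fruit.getName().equals(name)) {
                return fruit;
            }
        }
        return NULL_FRUIT; // never null: dereferencing is always safe
    }

    public static void main(String[] args) {
        List<Fruit> fruits = Arrays.asList(new Fruit("apple"), new Fruit("grape"));
        // No NullPointerException even though nothing was found.
        System.out.println(find("lemon", fruits).getName());
    }
}
```

The tradeoff is that a Null Object can silently flow through code that should have noticed the absence, so it fits best where a harmless default behavior exists.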

## Minimum and Maximum

Another place where this could be potentially problematic is when reducing a collection to a value, for instance to a maximum or minimum value. Consider the following piece of code that can be used to determine which is the longest string in a collection.

public static String longest(Collection<String> items) {
    if(items.isEmpty()) {
        return null;
    }
    Iterator<String> iter = items.iterator();
    String result = iter.next();
    while(iter.hasNext()) {
        String item = iter.next();
        if(item.length() > result.length()) {
            result = item;
        }
    }
    return result;
}


In this case the question is: what should be returned when the provided collection is empty? Here a null value is returned, once again opening the door for a potential null dereferencing problem.
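To see the hazard concretely, here is a small self-contained sketch (mine, mirroring the method above) of what happens when a caller forgets the empty case:

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;

public class LongestPitfall {
    public static String longest(Collection<String> items) {
        if (items.isEmpty()) {
            return null; // the silent trap: nothing in the signature warns callers
        }
        Iterator<String> iter = items.iterator();
        String result = iter.next();
        while (iter.hasNext()) {
            String item = iter.next();
            if (item.length() > result.length()) {
                result = item;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        try {
            String longest = longest(Collections.emptyList());
            System.out.println(longest.length()); // NullPointerException here
        } catch (NullPointerException e) {
            System.out.println("Crashed: the compiler never warned us");
        }
    }
}
```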

## The Functional World Strategy

It’s interesting that in the functional programming paradigm, the statically-typed programming languages evolved in a different direction. In languages like SML or Haskell there is no such thing as a null value that causes exceptions when dereferenced. These languages provide a special data type capable of holding an optional value and so it can be conveniently used to also express the possible absence of a value.  The following piece of code shows the definition of the SML option type:

datatype 'a option = NONE | SOME of 'a


As you can see, option is a data type with two constructors, one of them stores nothing (i.e. NONE) whereas the other is capable of storing a polymorphic value of some value type 'a (where 'a is just a placeholder for the actual type).

Under this model, the piece of code we wrote before in Java, to find a fruit by its name, could be rewritten in SML as follows:

fun find(name, fruits) =
    case fruits of
        [] => NONE
      | (Fruit s)::fs => if s = name
                         then SOME (Fruit s)
                         else find(name, fs)


There are several ways to achieve this in SML; this example just shows one way to do it. The important point is that there is no such thing as null here: the value NONE is returned when nothing is found, and a value SOME fruit is returned otherwise.

When a programmer uses this find function, he knows that it returns an option type value, and therefore he is forced to check the nature of the value obtained to see whether it is NONE or SOME fruit, somewhat like this:

let
    val fruits = [Fruit "apple", Fruit "grape", Fruit "orange"]
    val found = find("grape", fruits)
in
    case found of
        NONE => print("Nothing found")
      | SOME(Fruit f) => print("Found fruit: " ^ f)
end


Having to check for the true nature of the returned option makes it impossible to misinterpret the result.

## Java Optional Types

It’s a joy that finally in Java 8 we’ll have a new class called Optional that allows us to implement a similar idiom to that of the functional world. As in the case of SML, the Optional type is polymorphic and may contain a value or be empty. So, we could rewrite our previous code snippet as follows:

public static Optional<Fruit> find(String name, List<Fruit> fruits) {
    for(Fruit fruit : fruits) {
        if(fruit.getName().equals(name)) {
            return Optional.of(fruit);
        }
    }
    return Optional.empty();
}
}


As you can see, the method now returns an Optional reference: if something is found, the Optional object is constructed with a value; otherwise, it is constructed empty.

And the programmer using this code would do something as follows:

List<Fruit> fruits = asList(new Fruit("apple"),
                            new Fruit("grape"),
                            new Fruit("orange"));

Optional<Fruit> found = find("lemon", fruits);
if(found.isPresent()) {
    Fruit fruit = found.get();
    String name = fruit.getName();
}


Now it is made evident in the type of the find method that it returns an optional value, and the user of this method has to program his code accordingly.

So we see that the adoption of this functional idiom is likely to make our code safer, less prone to null dereferencing problems, and as a result more robust and less error prone. Of course, it is not a perfect solution because, after all, Optional references can also be erroneously set to null. But I would expect programmers to stick to the convention of not passing null references where an optional object is expected, pretty much as we consider it good practice today not to pass a null reference where a collection or an array is expected (in those cases the correct thing is to pass an empty collection or array). The point here is that now we have a mechanism in the API that we can use to make it explicit that for a given reference we may not have a value to assign to it, and the user is forced, by the API, to verify that.

Quoting an article I reference later about the use of optional objects in the Guava Collections framework: “Besides the increase in readability that comes from giving null a name, the biggest advantage of Optional is its idiot-proof-ness. It forces you to actively think about the absent case if you want your program to compile at all, since you have to actively unwrap the Optional and address that case”.
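As another illustration (my own rework, not from the original text), the longest method from the earlier section can be expressed with Optional so that its signature advertises the possible absence of a result:

```java
import java.util.Collection;
import java.util.Iterator;
import java.util.Optional;

public class LongestOptional {
    public static Optional<String> longest(Collection<String> items) {
        if (items.isEmpty()) {
            return Optional.empty(); // absence is now explicit in the type
        }
        Iterator<String> iter = items.iterator();
        String result = iter.next();
        while (iter.hasNext()) {
            String item = iter.next();
            if (item.length() > result.length()) {
                result = item;
            }
        }
        return Optional.of(result);
    }
}
```

Callers must now unwrap the result, so the empty-collection case can no longer be silently ignored.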

## Other Convenient Methods

As of today, besides the static methods of and empty explained above, the Optional class contains the following convenient instance methods:

• isPresent(): returns true if a value is present in the optional.
• get(): returns a reference to the item contained in the optional object, if present; otherwise throws a NoSuchElementException.
• ifPresent(Consumer consumer): passes the optional value, if present, to the provided Consumer (which could be implemented through a lambda expression or method reference).
• orElse(T other): returns the value, if present; otherwise returns the value in other.
• orElseGet(Supplier other): returns the value, if present; otherwise returns the value provided by the Supplier (which could be implemented with a lambda expression or method reference).
• orElseThrow(Supplier exceptionSupplier): returns the value, if present; otherwise throws the exception provided by the Supplier (which could be implemented with a lambda expression or method reference).
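A quick runnable tour of these methods (the string values are purely illustrative):

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class OptionalTour {
    public static void main(String[] args) {
        Optional<String> present = Optional.of("grape");
        Optional<String> empty = Optional.empty();

        System.out.println(present.isPresent()); // true
        System.out.println(present.get());       // grape
        present.ifPresent(name -> System.out.println("Found " + name));

        System.out.println(empty.orElse("kiwi"));          // kiwi
        System.out.println(empty.orElseGet(() -> "kiwi")); // kiwi

        try {
            // Throws because the optional is empty.
            empty.orElseThrow(NoSuchElementException::new);
        } catch (NoSuchElementException e) {
            System.out.println("orElseThrow raised an exception as expected");
        }
    }
}
```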

## Avoiding Boilerplate Presence Checks

We can use some of the convenient methods mentioned above to avoid the need of having to check if a value is present in the optional object. For instance, we may want to use a default fruit value if nothing is found, let’s say that we would like to use a “Kiwi”. So we could rewrite our previous code like this:

Optional<Fruit> found = find("lemon", fruits);
String name = found.orElse(new Fruit("Kiwi")).getName();


In this other example, the code prints the fruit name to the main output, if the fruit is present. In this case, we implement the Consumer with a lambda expression.

Optional<Fruit> found = find("lemon", fruits);
found.ifPresent(f -> { System.out.println(f.getName()); });


This other piece of code uses a lambda expression to provide a Supplier which can ultimately provide a default answer if the optional object is empty:

Optional<Fruit> found = find("lemon", fruits);
Fruit fruit = found.orElseGet(() -> new Fruit("Lemon"));


Clearly, these convenient methods greatly simplify working with optional objects.

## So What’s Wrong with Optional?

The question we face is: will Optional get rid of null references? And the answer is an emphatic no! So, detractors immediately question its value, asking: what does it offer that we couldn’t already do by other means?

Unlike functional languages like SML or Haskell, which never had the concept of null references, in Java we cannot simply get rid of the null references that have historically existed. Null references will continue to exist, and they arguably have their proper uses (to mention one example: three-valued logic).

I doubt that the intention of the Optional class is to replace every single nullable reference; rather, it is to help in the creation of more robust APIs, in which just by reading the signature of a method we can tell whether to expect an optional value, and which force the programmer to use that value accordingly. But ultimately, Optional is just another reference, subject to the same weaknesses as every other reference in the language. It is quite evident that Optional is not going to save the day.

How these optional objects are supposed to be used or whether they are valuable or not in Java has been the matter of a heated debate in the project lambda mailing list. From the detractors we hear interesting arguments like:

• The fact that other alternatives exist (i.e., the Eclipse IDE supports a set of proprietary annotations for static analysis of nullability, and JSR-305 defines annotations like @Nullable and @NonNull).
• Some would like it to be usable as in the functional world, which is not entirely possible in Java, since the language lacks many features existing in functional programming languages like SML or Haskell (e.g., pattern matching).
• Others argue about how it is impossible to retrofit preexisting code to use this idiom (e.g., Map.get(Object), which will continue to return null).
• And some complain about the fact that the lack of language support for optional values creates a potential scenario in which Optional could be used inconsistently in APIs, thereby creating incompatibilities, pretty much like the ones we will have with the rest of the Java API, which cannot be retrofitted to use the new Optional class.
• A compelling argument is that if the programmer invokes the get method on an empty optional object, it raises a NoSuchElementException, which is pretty much the same problem we have with nulls, just with a different exception.

So, it would appear that the benefits of Optional are really questionable and are probably constrained to improving readability and enforcing public interface contracts.

## Optional Objects in the Stream API

Irrespective of the debate, the optional objects are here to stay, and they are already being used in the new Stream API in methods like findFirst, findAny, max, and min. It may be worth mentioning that a very similar class has been in use in the successful Guava Collections Framework.

For instance, consider the following example where we extract from a stream the last fruit name in alphabetical order:

Stream<Fruit> fruits = asList(new Fruit("apple"),
                              new Fruit("grape")).stream();
Optional<Fruit> max = fruits.max(comparing(Fruit::getName));
if(max.isPresent()) {
    String fruitName = max.get().getName(); //grape
}


Or this other one, in which we obtain the first fruit in a stream:

Stream<Fruit> fruits = asList(new Fruit("apple"),
                              new Fruit("grape")).stream();
Optional<Fruit> first = fruits.findFirst();
if(first.isPresent()) {
    String fruitName = first.get().getName(); //apple
}


## Ceylon Programming Language and Optional Types

Recently I started to play a bit with the Ceylon programming language, since I was doing research for another post that I am planning to publish soon on this blog. I must say I am not a big fan of Ceylon, but I still found it particularly interesting that Ceylon takes this concept of optional values a bit further, with the language itself offering some syntactic sugar for the idiom. In this language we can mark any type with a ? (question mark) to indicate that it is an optional type.

For instance, this find function would be very similar to our original Java version, but this time returning an optional Fruit? reference. Also notice that a null value is compatible with the optional Fruit? reference in the final return statement.

Fruit? find(String name, List<Fruit> fruits) {
    for(Fruit fruit in fruits) {
        if(fruit.name == name) {
            return fruit;
        }
    }
    return null;
}


And we could use it with this Ceylon code, similar to our last Java snippet in which we used an optional value:

List<Fruit> fruits = [Fruit("apple"),Fruit("grape"),Fruit("orange")];
Fruit? fruit = find("lemon", fruits);
print((fruit else Fruit("Kiwi")).name);


Notice that the use of the else keyword here is pretty similar to the orElse method in the Java 8 Optional class. Also notice that the syntax is similar to the declaration of C# nullable types, but it means something totally different in Ceylon. It may be worth mentioning that Kotlin, the programming language under development by JetBrains, has a similar null-safety feature (so perhaps we are looking at a trend in programming languages).

An alternative way of doing this would have been like this:

List<Fruit> fruits = [Fruit("apple"), Fruit("grape"), Fruit("orange")];
Fruit? fruit = find("apple", fruits);
if(exists fruit) {
    String fruitName = fruit.name;
    print("The found fruit is: " + fruitName);
} //else...


Notice that the use of the exists keyword here serves the same purpose as the isPresent method invocation in the Java Optional class.

The great advantage of Ceylon over Java is that the optional type has been part of its APIs from the beginning; within the realm of their own language they won’t have to deal with incompatibilities, and it can be fully supported everywhere (perhaps their problem will be in the integration with the rest of the Java APIs, but I have not studied this yet).

Hopefully, in future releases of Java, this same syntactic sugar from Ceylon and Kotlin will also be made available in the Java programming language, perhaps using, under the hood, this new Optional class introduced in Java 8.

# Overview Of The Task Parallel Library (TPL)

## Introduction

Remember those times when we needed to spawn a separate thread in order to execute long-running operations without blocking the application until the operation completed? Well, time to rejoice; those days are long gone. Starting with version 4.0, the Microsoft .NET Framework delivers a library built around the concept of “tasks”. This library is known as the Task Parallel Library, or TPL.

In the good (annoying) old days we frequently had the need to spawn a separate thread to query the database without locking the main application thread so we could show a loading message to the user and wait for the query to finish execution and then process results. This is a common scenario in desktop and mobile applications. Even though there are several ways to spawn background threads (async delegates, background workers and such), in the most basic and rudimentary fashion, things went a little something like this:

User user = null;

// Create the background thread that will get the user from the repository.
Thread loadUserThread = new Thread(() =>
{
    user = DataContext.Users.FindByName("luis.aguilar");
});

// Start the thread and wait for it to finish, which assigns the loaded
// value to the "user" variable.
loadUserThread.Start();
loadUserThread.Join();

// At this point the "user" variable contains the user instance loaded
// from the repository.
Console.WriteLine("User loaded. Name is " + user.Name);


Once again, this code is effective: it does what it has to do, namely load a user from a repository and show the loaded user's name on the console. However, it completely sacrifices succinctness in order to initialize, run and join the background thread that loads the user asynchronously.

The Task Parallel Library introduces the concept of “tasks”. Tasks are basically operations to be run asynchronously, just like what we just did using “thread notation”. This means we no longer speak in terms of threads, but in terms of tasks, which lets us execute asynchronous operations with very little code (that is also a lot easier to read and understand). Things have now changed for good:

Console.WriteLine("Loading user..");

// Create and start the task that will get the user from the repository.
Task<User> loadUserTask = Task.Factory.StartNew<User>(() =>
    DataContext.Users.FindByName("luis.aguilar"));

// The task's Result property holds the result of the async operation. If
// the task has not finished, it will block the current thread until it does.
// Pretty much like the Thread.Join() method.
User user = loadUserTask.Result;

Console.WriteLine("User loaded. Name is " + user.Name);


A lot better, huh? Of course it is. Now we have the result of the async operation strongly typed. Pretty much like using async delegates, but without all the boilerplate required to create delegates, which is possible thanks to the power of C# lambda expressions and built-in delegate types (Func, Action, Predicate, etc.)

Tasks have a property called Result. This property contains the value returned by the lambda expression we passed to the StartNew() method. What happens if we access this property while the task is still running? Well, execution of the calling method is halted until the task finishes. This behavior is similar to the Thread.Join() call in the first code example.

OK, we now know how tasks work. But let's assume you don't want to block the calling thread until the task finishes; instead, you want another task to run afterwards and do something with the result. For that scenario, we have task continuations.

The Task Parallel Library allows us to chain tasks together so they are executed one after another. Even better, the code to achieve this is completely fluent and readable.

Console.WriteLine("Loading user..");

// Create tasks to be executed in a fluent manner.
Task.Factory
    .StartNew<User>(() => DataContext.Users.FindByName("luis.aguilar")) // First task.
    .ContinueWith(loadUserTask => // Continuation task.
    {
        // This will execute after the first task finishes. The first task
        // is passed as the first argument of this lambda expression.
        User user = loadUserTask.Result;
        Console.WriteLine("User loaded. Name is " + user.Name);
    });

// Tasks will start running asynchronously. You can do more things here...


You can read the previous code almost like prose: “start a new task that finds a user by name, and continue by printing the user's name on the console”. It is important to notice that the first parameter of the ContinueWith() method is the previously executed task, which allows us to access its return value through its Result property.

## Async And Await

The Task Parallel Library means so much for the Microsoft .NET Framework that new keywords were added to its language specifications to deal with asynchronous tasks. These new keywords are async and await.

The async keyword is a method modifier that marks a method as asynchronous: the method can run in parallel with its caller and may contain await expressions. The await keyword, in turn, tells the runtime to wait for a task's result before assigning it to a local variable, in the case of tasks that return values; or to simply wait for the task to finish, in the case of those with no return value.

Here is how it works:

// 1. Awaiting a task that returns a result:
async Task LoadAndPrintUserAsync()
{
    // Create, start and wait for the task to finish; then assign the result to a local variable.
    var user = await Task.Factory.StartNew<User>(() => DataContext.Users.FindByName("luis.aguilar"));

    // At this point we can use the loaded user.
    Console.WriteLine("User loaded. Name is " + user.Name);
}

// 2. Awaiting a task with no result:
async void PrintRandomMessage()
{
    // Create, start and wait for the task to finish.
    await Task.Factory.StartNew(() => Console.WriteLine("Not doing anything really."));
}

// 3. Usage:
void Main()
{
    // Load user and print its name.
    LoadAndPrintUserAsync();

    // Do something else.
    PrintRandomMessage();
}


As you can see, asynchronous methods are now marked with a neat async modifier. As I mentioned before, that means they can run asynchronously with respect to their caller. It is important to clarify that asynchronous methods can contain multiple child tasks which may run in any order; marking the method as async means that when it is called in the traditional fashion, the runtime implicitly wraps the method's contents in a task object.

For example, writing this:

var loadAndPrintUserNameTask = LoadAndPrintUserAsync();


.. is, at least conceptually, equivalent to writing this:

var loadAndPrintUserNameTask = Task.Run(() => LoadAndPrintUserAsync());


Note that, unlike a task created manually with the Task constructor, the task returned by an async method has already been started (it is a “hot” task), so there is no need to call Start() on it.

Now, we can also create awaitable methods. These special methods can be called using the await keyword.

async Task<User> LoadUserAsync()
{
    // Create, start and wait for the task to finish; then assign the result to a local variable.
    var user = await Task.Factory.StartNew<User>(() => DataContext.Users.FindByName("luis.aguilar"));

    // Return the loaded user. The runtime wraps this in a Task<User> automagically.
    return user;
}


All awaitable methods specify a task as their return type. Now, there is something we need to discuss in detail here. This method's signature specifies a return type of Task<User>, yet the return statement returns the loaded user instance instead. What is this? Well, the method can produce two kinds of value depending on the calling scenario.

The first scenario is calling it in the traditional fashion. In this case it returns the task instance representing the operation:

Task<User> loadUserTask = LoadUserAsync();

// The previous line is roughly equivalent to:
Task<User> loadUserTask = Task.Run(() => DataContext.Users.FindByName("luis.aguilar"));

The second scenario is calling it with await. In this case the runtime waits for the task to finish and the result is assigned to the specified local variable:

User user = await LoadUserAsync();

// The previous line is roughly equivalent to:
User user = LoadUserAsync().Result;

See? Personally, this is the first time I have seen a method that can return two kinds of value depending on how it is called. Quite interesting that such a thing exists. By the way, it is important to remember that any method that at any point awaits an asynchronous method using the await keyword must itself be marked as async.

## Conclusion

This surely means something for the whole framework. It looks like Microsoft has taken parallel programming seriously in its latest framework releases. Desktop and mobile application developers will surely love this feature, which significantly reduces boilerplate and makes asynchronous code far more readable. We can all feel happy about our beloved framework moving forward the right way once again.

That’s all for now, folks. Stay tuned!

# Unit Testing 101: Inversion Of Control

## Introduction

Inversion of Control is one of the most common and widely used techniques for handling class dependencies in software development, and it could easily be the most important practice in unit testing: it basically determines whether your code is unit-testable or not. Not just that; it can also significantly improve your overall software structure and design. But what is it all about? Is it really that important? Hopefully we'll clear that up in the following lines.

## Identifying Class Dependencies

As we mentioned before, Inversion of Control is a technique for handling class dependencies effectively. But what exactly is a dependency? In real life, for instance, a car needs an engine in order to function; without it, it won't work at all. Programming is the same: when a class needs another one in order to function properly, it has a dependency on it. This is called a class dependency, or coupling.

Let’s look at the following code example:

public class UserManager
{
    private PasswordHasher passwordHasher;

    public UserManager()
    {
        this.passwordHasher = new PasswordHasher();
    }

    public void ResetPassword(string userName, string newPassword)
    {
        // Get the user from the database.
        User user = DataContext.Users.FindByName(userName);

        // Set the user's new password.
        user.Password = this.passwordHasher.Hash(newPassword);

        // Save the user back to the database.
        DataContext.Users.Update(user);
        DataContext.Commit();
    }

    // More methods...
}

public class PasswordHasher
{
    public string Hash(string password)
    {
        // Hash the password using an encryption algorithm...
    }
}


The previous code describes two classes, UserManager and PasswordHasher. We can see how the UserManager class initializes a new instance of the PasswordHasher class in its constructor and keeps it as a class-level field so all methods in the class can use it. The method we are going to focus on is ResetPassword. The call to the hasher's Hash method inside it marks a strong class dependency between UserManager and PasswordHasher.

## Don’t Call Us, We’ll Call You

When a class creates instances of its dependencies itself, it knows which implementation of each dependency it is using, and probably how it works; the class is the one controlling its own behavior. With inversion of control, anyone using the class can specify the concrete implementation of each dependency; this time the class user is the one partially controlling the class's behavior (or at least the parts that use those provided dependencies).

All of this can sound quite abstract, so let's look at an example:

public class UserManager
{
    private IPasswordHasher passwordHasher;

    public UserManager(IPasswordHasher passwordHasher)
    {
        this.passwordHasher = passwordHasher;
    }

    public void ResetPassword(string userName, string newPassword)
    {
        // Get the user from the database.
        User user = DataContext.Users.FindByName(userName);

        // Set the user's new password.
        user.Password = this.passwordHasher.Hash(newPassword);

        // Save the user back to the database.
        DataContext.Users.Update(user);
        DataContext.Commit();
    }

    // More methods...
}

public interface IPasswordHasher
{
    string Hash(string password);
}

public class Md5PasswordHasher : IPasswordHasher
{
    public string Hash(string password)
    {
        // Hash the password using the MD5 algorithm...
    }
}


Inversion of Control is usually implemented by applying a design pattern called the Strategy Pattern (as defined in the Gang of Four book). This pattern consists of hiding concrete component and algorithm implementations behind an interface, making implementations interchangeable at runtime and encapsulating how they work, since the classes using them should not care about that.

So, in order to achieve this, we need to sort some things out:

• Abstract an interface from the Md5PasswordHasher class, IPasswordHasher, so anyone can write custom password hasher implementations.
• Mark the Md5PasswordHasher class as an implementation of the IPasswordHasher interface.
• Change the type of the password hasher field used by UserManager to IPasswordHasher.
• Add a constructor parameter of type IPasswordHasher, which is the instance the UserManager class will use to hash its passwords. This way we delegate the creation of the dependency to the user of the class, who can provide any implementation it wants and thus control how passwords are hashed.

This is the very essence of inversion of control: minimize class coupling. The user of the UserManager class now controls how passwords are hashed. Control over password hashing has been inverted, from the class to its user. Here is how we can provide the only dependency of the UserManager class:

IPasswordHasher md5PasswordHasher = new Md5PasswordHasher();
UserManager userManager = new UserManager(md5PasswordHasher);

So, why is this useful? Well, we can go crazy and create our own hasher implementation for the UserManager class:

// Plain text password hasher:
public class PlainTextPasswordHasher : IPasswordHasher
{
    public string Hash(string password)
    {
        // Let's disable password hashing by returning
        // the password in plain text.
        return password;
    }
}

// Usage:
UserManager userManager = new UserManager(new PlainTextPasswordHasher());
userManager.ResetPassword("luis.aguilar", "12345");

// Resulting password will be: 12345.


## Conclusion

So, this concludes our article on Inversion of Control. Hopefully, with a little more practice, you will be able to start applying this to your code. Of course, the biggest benefit of this technique relates to unit testing. So, what does it have to do with unit testing? Well, we'll see that when we get into type mocking. Stay tuned!

# Unit Testing 101: Basics

## Introduction

We all know unit testing is an essential part of the development cycle. Actually, unit test code is as important as the actual application code (yep, you read that right); this is something we should never forget. That's why we are going to look at some important introductory concepts for composing proper testing code.

I will be using NUnit as my testing library. The package comes with the framework libraries and a set of test runner clients. You can download it at their site’s download section.

## Unit Test Structure

Unit tests are usually grouped in test fixtures. Basically, a test fixture is a group of unit tests that verify a single application feature. Let's illustrate this in code:

using NUnit.Framework;

namespace AppDemo.Tests
{
    [TestFixture(Category = "User Authentication")]
    public class WhenUserIsBeingAuthenticated
    {
        [Test]
        public void ShouldReturnTrueIfValidationIsSuccessful()
        {
            // TODO: Implement test code.
        }

        [Test]
        public void ShouldReturnFalseIfValidationFails()
        {
            // TODO: Implement test code.
        }
    }
}


We can now picture how a test fixture looks in code. In this case, the test fixture is a regular class filled out with test methods. As you might have noticed, the class name describes the state of the feature being tested: “When the user is being authenticated”. Each particular test method seeks to verify a required result on a specific condition: “Should return true if validation is successful”.

## Running Tests

Once you have your fixture ready to go, it is time to run all of its tests and see the results. I will be using the NUnit GUI runner, which looks for all classes in the assembly marked with the [TestFixture] attribute and then calls each method on them marked with the [Test] attribute. It is important to remember that all tests must live in a separate class library: first because it is good practice not to mix application code with test code, and second because the NUnit test runner can only load DLL files.

So, the first thing to do is build the project so we get a DLL containing all our tests. Once we have a DLL file with our test fixture classes in it, fire up the NUnit test runner (NUnit.exe) and load the file.

At this point everything is quite intuitive. You can hit the “Run” button and see how all tests pass or rebuild the project on Visual Studio and see how the test runner auto-updates with new changes. Cool, huh?

## Arrange, Act and Assert

Test methods are usually composed of three common phases: Arrange, act and assert. Or “triple-A” if you like.

• Arrange: At the very beginning of the method, you need to setup the test scenario. This includes expected test results for comparison with actual results, instances of the components to be tested and type mocking.
• Act: After arrangement is done, we now have to actually perform the actions that will produce the actual test results. For instance, call the Validate method on the UserAuthenticator class which performs the actual user validation.
• Assert: The assertion phase verifies that actual tests results match what we are expecting.

It is good practice to provide comments delimiting each phase:

[Test]
public void ShouldReturnTrueIfValidationIsSuccessful()
{
    // Arrange
    var expectedResult = true;
    var userAuthenticator = new UserAuthenticator();

    // Act
    var actualResult = userAuthenticator.Validate("luis.aguilar", "1234");

    // Assert
    Assert.That(actualResult, Is.EqualTo(expectedResult), "Authentication failed though it should have succeeded.");
}


As you can see, these three phases execute in order. It is good practice to initialize variables holding expected results in the Arrange phase, making the Assert phase more readable. Also, for the sake of readability, I am using NUnit's Assert.That syntax so assertions read more naturally.

## Tests Before Implementation

Even though unit testing is good for all development methodologies, I'm an avid supporter of Test-Driven Development (TDD). As the name implies, TDD is all about writing all tests BEFORE implementing the actual application code. That way, your code meets acceptance criteria right from inception. Basically, application infrastructure design is driven by tests: we think about user requirements rather than UML diagrams and classes.

For instance, you would write all the previous sample tests before implementing the UserAuthenticator class. That way the class is born satisfying user requirements, so we don't have to change its code later on, which saves lots of time (and money; managers love to hear that) and greatly improves code efficiency and design.

## Conclusion

Okay, hopefully this served as a brief introduction to the exciting world of unit testing. Of course, there's a lot more to this topic. In the next articles we are going to look at concepts like inversion of control, type mocking and more things related to TDD. It's going to be lots of fun!

Stay tuned!

# JavaScript For Regular Guys (Part 1)

## Introduction

So, JavaScript. That thing that makes AJAX, dynamic HTML and other fancy stuff possible. We know it exists somewhere deep on the browser. But what is it exactly? How does it work anyways? And even more important: How in the world do you even code for it?

Okay, reader. If you have any of those questions, today is your lucky day. We are going to find out the answers to them.

## Some Background

Like every exciting article, we will start with some history. JavaScript was born with Netscape browsers about 18 years ago. It first shipped with Netscape version 2 as a feature that would allow web pages to run scripted content right on the user's computer. Its original name was Mocha, then it changed to LiveScript, and it finally ended up as JavaScript (JS). Wow, that's what I call name evolution.

## How Does It Work?

JavaScript lives either in individual *.js files referenced by the web page or in one or more inline <script> blocks inside the HTML structure. The browser interprets (that's right, JavaScript is an interpreted language) all the code in referenced files and <script> blocks using its own JavaScript runtime, and executes the code in the web page once loading is complete.

To include JS files on a web page we add a <script src="../file.js"></script> tag for each file. The other way is to include JavaScript code inline inside a script block:

<script type="text/javascript">
function doSomething() {
}
</script>


It is recommended to place these blocks right before the closing </body> tag to improve loading times. Always remember that JavaScript is a scripting language: it executes all the code in a file or script block from top to bottom, sequentially. For example, the script above won't do anything, since we never call the function anywhere. So we would have to do something like:

<script type="text/javascript">
function doSomething() {
}

doSomething();
</script>


## The Document Object Model (DOM)

JavaScript is tightly bound to the HTML structure of the page it executes on. It sees the whole HTML document as a tree of elements called the DOM. This allows us to manipulate the document structure at runtime by referencing DOM elements and changing things as needed. For example, to change the color of a paragraph 2 seconds after the page has loaded, we can use the following code:


<p id="target">Test paragraph...</p>

<script type="text/javascript">
setTimeout(function() {
    var target = document.getElementById('target');
    target.style.color = '#00FF00';
}, 2000);
</script>


By examining the previous code, we can see the use of the setTimeout function. JavaScript comes with a set of pre-defined global functions that don't belong to any object; they can be used anywhere. No, literally ANYWHERE. If you come from a formal language like C# or Java, you know that at some point you need to include namespaces or packages so your code can use things defined somewhere else. Well, with JavaScript this is not the case: global functions are auto-magically provided by the browser itself.

The setTimeout function takes two parameters: the first is a function to execute after a certain number of milliseconds, which is specified by the second parameter. The function we passed as the first parameter is what we know as an “anonymous function”; we call it that because it has no name. Really? Yes, really. But you can give it a name if you want:

<script type="text/javascript">
function colorChanger() {
    var target = document.getElementById('target');
    target.style.color = '#00FF00';
}

setTimeout(colorChanger, 2000);
</script>


It now has a name. But we can still use it as a variable and pass it as the first parameter of the setTimeout function. Dammit, JavaScript!

Anyway, we slipped a little from the whole DOM topic. The important thing to remember is that the global document variable can be used to retrieve HTML elements by ID (you already got the idea of what the DOM is all about). Once we have the element in a JavaScript variable, we can change its attributes, move it somewhere else in the tree, or even just delete the crap out of it à la Godfather.

## JavaScript Has No Class

Even though JavaScript is one of the most popular and widely used languages in the world, it is not anything close to any other language you might have seen before. It is functional, dynamically and weakly typed, and totally random at some points. It's like taking all programming languages known to mankind and mashing them together into one single surprise pack. Seriously, it gets a little crazy sometimes.

For instance, even though it has weakly typed dynamic variables and all sorts of other neat things, it has no formal way of defining classes. However, by the time of its first release everyone was doing something called object-oriented programming, so JavaScript had to support something like it. So the guys at ECMA had a dialogue at some point while writing the first language specification, back in '97, that went a little something like this:

• Dude 1: “Dude, JavaScript is cool so far and everything but everyone is using something called Object-oriented something something.”
• Dude 2: “Damn, kids these days. Let’s just allow functions to contain variables and other functions. Voilà. We got classes.”
• Dude 1: “That sounds like is going to confuse lots of people… D:”
• Dude 2: “Meh. Not really.”
• Dude 1: “But…”
• Dude 2: “NOT REALLY, I SAID!”

Okay, maybe it was not exactly like that, but since then we have had four or five different ways of defining something like classes. The one I like the most is as follows:

<script type="text/javascript">
function CarFactory() {
    var carsProduced = 0; // Private variable

    this.name = 'Car Factory'; // Public variable

    this.getCarsProduced = function() { // Public getter
        return carsProduced;
    };

    this.createCar = function() { // Public function
        showCarProductionMessage();
        carsProduced++;
    };

    function showCarProductionMessage() { // Private function
        alert('A new car has been produced!');
    }
}

var carFactory = new CarFactory();

carFactory.createCar();

alert("Cars produced so far: " + carFactory.getCarsProduced());
</script>


How pretty does that look? So, a function can also be a class definition with private members and all, just like in any object-oriented language. (Strictly speaking, this is a closure-based pattern; the term “prototyping” usually refers to JavaScript's prototype mechanism, another way of attaching members to objects.)

Hopefully the example is clear enough. I have a couple of things to clarify, though. JavaScript, while trying to be object-oriented, is still a scripted language; never forget that. Function declarations are hoisted within their own script block, but if the definition lives in a later script block, or is written as a function expression assigned to a variable, code that uses it before it is defined will fail with a nice error, since the interpreter has not yet seen the definition. Also, look at the use of the keyword this. Things are just starting to get more and more interesting for sure.
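To see that ordering rule in action, here is a minimal sketch contrasting a hoisted function declaration with a function expression. The names declared and expressed are made up for illustration, and the snippet runs outside the browser (no DOM involved):

```javascript
// Function declarations are hoisted: this call works even though
// the declaration appears further down in the same script.
var early = declared();

function declared() { // Declaration: hoisted to the top of the scope.
    return 'declaration works';
}

var result;
try {
    // Function expressions are NOT hoisted: `expressed` exists here
    // (var hoisting) but is still undefined, so calling it throws.
    result = expressed();
} catch (e) {
    result = 'expression not ready yet';
}

var expressed = function() { // Expression: assigned only when this line runs.
    return 'expression works';
};

console.log(early);  // → declaration works
console.log(result); // → expression not ready yet
```

So the order only bites you when the function is defined as an expression (like the public members of CarFactory) or lives in a later script block.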

## The “this” Keyword

If you have worked with classes before, surely you'll recognize the this keyword. This magical keyword usually refers to the class where the method using it is defined. Well, in JavaScript (once again screwing with our brains) the this keyword refers to the function's owner, which can be several different things depending on where it is used:

| Where this is used | What it refers to |
|--------------------|-------------------|
| Global scope | The global window object |
| A regular function call | The global window object |
| An object method | The object that owns the method |
| A constructor invoked with new | The newly created instance |
| A function invoked with call() | The first argument passed to call() |
| A function invoked with apply() | The first argument passed to apply() |

Now you see how tightly coupled JavaScript is with the DOM tree: the default owner for all scripts is the global window object.

Now, the last two usages in the table involve the call() and apply() functions. These functions are useful when we want to change the value this refers to. That would be your homework: check out the use of call() and apply().
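As a head start on that homework, here is a minimal sketch of how call() and apply() swap out the value of this. It runs outside the browser, and the describe function and the two objects are made-up names for illustration:

```javascript
// `this` inside describe() is whatever object call()/apply() hands it.
function describe(greeting, punctuation) {
    return greeting + ', I am ' + this.name + punctuation;
}

var factory = { name: 'Car Factory' };
var garage = { name: 'Garage' };

// call() takes the `this` value followed by the arguments one by one.
var fromCall = describe.call(factory, 'Hello', '!');

// apply() takes the `this` value and the arguments as a single array.
var fromApply = describe.apply(garage, ['Hi', '?']);

console.log(fromCall);  // → Hello, I am Car Factory!
console.log(fromApply); // → Hi, I am Garage?
```

Both invoke the same function; the only difference is how the arguments are supplied.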

## Conclusion

So, we have reviewed how JavaScript was conceived and how it is structured. This is essential for understanding more complex topics like DOM element events, asynchronous server calls and, of course, server-side JavaScript, which we will examine more closely in future articles.

Stay tuned!