JSApps 101: AngularJS In A Nutshell

Introduction

In the previous part of this series, we discussed the interesting new tendency to let the browser do all the user interface heavy lifting on the client side, since it is such a good guy. Of course, this involves a LOT of JavaScript, since it is the only language the browser understands besides HTML.

We all know how complex and crazy things can get when you throw JavaScript into the picture. The language is usually seen as over-complicated: it used to be hard to debug, its learning curve is steep as hell and it can get confusing quickly. Add to that the historical lack of tools and IDEs with strong support for it, and you have a recipe for a bad reputation.

However, fear not! After this article, you will have a solid grasp of how to turn the fiendish JavaScript you have known for years from a simple DOM traversal helper into a reliable application development framework with the aid of AngularJS; man, now THAT’S a catchy name.

What In The World Is AngularJS?

AngularJS is a JavaScript framework that allows you to implement the Model-View-Controller pattern (in some sort of way) at the client-side level (dude, the browser). Why does this matter? Because you have two options in life:

  • Option A    You implement a +1000-line JavaScript source file with a bunch of code that is in charge not only of manipulating the HTML structure and creating UI components, but also of all the data validation and display logic, while your fellow teammates start hating your guts.
  • Option B    You be a good developer who cares about separation of concerns and split all those tasks into several components, each in charge of a single thing and each written in a separate file for your code maintainability’s sake, while making a lot of friends in the process.

In the good old days, option A was “kind” of affordable (heavy air quotes), since JavaScript was used for simple things like adding and removing elements from the HTML structure, changing colors and such. But when we talk about implementing a client application using only JavaScript, option A starts struggling. We are no longer just moving HTML elements around, animating stuff and doing other user interface tasks; we are also performing data validation, error handling and server communication on the client side.

Of course, option B is the way to go. Since there is a ton of things going on in the browser, we need a way to create components that handle each task separately. This is known as the Separation of Concerns principle, or SoC, which states that your code has to be split into several parts, each handling a single task, orchestrated to achieve a common good. And this is where AngularJS shines. It allows you to:

  • Organize your client-side code in a Model-View-Controller fashion, so you have separate components where each handles a different task in the logic stack: from user input to server data posting. (Don’t worry, we’ll get to this later on.)
  • Process templates and bind data live; more specifically: munch a template, bind it to specific data coming from the server or anywhere else, and produce a valid HTML piece that can be displayed in the browser.
  • Create re-usable user interface components.
  • Implement advanced patterns, like dependency injection; which is tied to…
  • Write unit tests for your JavaScript code.

Client-side Model-View-Whatever

We all know Model-View-Controller (MVC from now on) was conceived for a good reason: roles and responsibilities matter. It was designed as a practical way to separate all of the front-end logic into three interconnected parts, so code having to do with how data is displayed is clearly separated from the code that validates, stores and retrieves that data. It was also conceived with unit testing in mind.

MVC Flow

Now, MVC is commonly used server-side; think of ASP.NET MVC or Spring MVC, where you have a class representing the “controller”, which fetches data into “models” and then binds them to a template, producing a “view” in the form of an HTML document that is returned to the browser for display on the user’s screen. Remember, each time we talk about MVC, we are generally referring to the Presentation Layer of an application.

Now, why go crazy and MVC-fy my JavaScript code? Well, because what’s trendy right now are dynamic web interfaces where pretty much everything is asynchronous, few post-backs are done to the server unless really necessary, and complex UI components like pop-ups, multi-level grids and others live on the screen (think of Facebook, for example). The only way to do this is to delegate to the browser (through JavaScript) the user-interface composition, the interaction between all of the components and, finally, fetching and posting data back and forth from the server. You really, really NEED a way to achieve law and order so the client-side code does not become an uncontrollable mess.

OK, enough talk already. Let’s dive into how to implement a simple application using AngularJS.

Hello, Angular

Consider the classic “To-do List” example. We want a screen that displays a list of items to be completed; we also need a mini-form that allows us to enter and add new items to the list. Here is how it should look:

To-do List UI

So, in order to get this done, we need to complete some steps:

  • Create the HTML document which will be representing our view.
  • Include the AngularJS library in the HTML document so we can start creating controllers that will handle behavior of the user interface.
  • Create and include a JavaScript file where we are going to create our application object (wait for it) and define some controllers.
  • Add scope (almost there…) variables and functions that will be bound to the view through special AngularJS HTML attributes.

Basic View Markup

Let’s start this in a natural way: let’s create the user interface first. We will create a file called todolist.html and we will add the following HTML to it:
ANGJS SNIP1
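The original snippet did not survive this transcription; what follows is a hypothetical reconstruction based on the description below. Script file names are assumed, and the markup is laid out so the line numbers referenced in the text (8, 9, 13 and 15) match:

```html
<!DOCTYPE html>
<html>
<head>
    <title>To-do List</title>
</head>
<body>
    <!-- The AngularJS library and our application script. -->
    <script src="angular.min.js"></script>
    <script src="app.js"></script>

    <h1>To-do List</h1>

    <ul></ul>
    <!-- Mini-form for adding new items to the list. -->
    <input type="text" /> <button>Add</button>
</body>
</html>
```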
Now, nothing fancy right there; we just included the AngularJS library at line 8 and another file at line 9, which we will create in a moment. However, notice the structure of the document: we have a UL element which will represent our list of to-do items and then a couple of input controls so we can add new items to the list. It does not have any information on it for now, but eventually it will… And it will be awesome.

Let’s leave this code as it is for now and move on to the app.js file, which is where all the fun happens.

Adding Logic: The Controller

Just by looking at the previous HTML document, you will realize that there are two things we need regarding view state data:

  • An array to store the to-do items to be displayed in the UL element at line 13.
  • A function to add new items when I click the Add button at line 15.

Simple, right? Well, let’s create a new file called app.js and add the following code; I’ll explain it later:

ANGJS SNIP2
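The original snippet is missing here; the following is a hypothetical reconstruction of app.js based on the description below (the item data is invented for the example). The `angular` global is normally provided by the AngularJS script included in the page; the small stand-in at the top exists only so this sketch is self-contained outside a browser:

```javascript
// Stand-in for the AngularJS global, used only when running this
// sketch outside a browser; in the real page, the AngularJS <script>
// tag provides `angular`.
if (typeof angular === "undefined") {
  var angular = {
    module: function () {
      var controllers = {};
      return {
        controllers: controllers,
        controller: function (name, fn) { controllers[name] = fn; return this; }
      };
    }
  };
}

// Create the application module...
var ToDoListApp = angular.module("ToDoListApp", []);

// ...and register a controller on it. Everything added to $scope here
// is what the view will be able to "see" for data binding.
ToDoListApp.controller("ToDoListController", function ($scope) {
  // Pre-initialized array of to-do items, displayed in the UL element.
  $scope.items = [
    { desc: "Feed the cat" },
    { desc: "Do the laundry" },
    { desc: "Write the next article" }
  ];

  // Holds the description of the new item typed in the text box.
  $scope.newItemDescription = "";

  // Adds a new item to the list; called when the Add button is clicked.
  $scope.addItem = function () {
    $scope.items.push({ desc: $scope.newItemDescription });
    $scope.newItemDescription = "";
  };
});
```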

First things first: let’s take a look at line 1. Before we start building controllers that handle UI logic, we need to create what is known as an AngularJS application module; this is done by calling the angular.module() function and passing it the module name, which returns the created module. This object will later be used as a container where you will store all of the controllers belonging to this particular JS App.

The variable angular is available after we included the AngularJS library in our HTML document at line 8.

After we create the application module, we can start creating controllers; this is done at line 3 through the controller() function, which is now callable from the application module we created at line 1. This function takes two arguments: the name of the controller and a callback function that is responsible for initializing the controller; it will be called each time a controller is instantiated by AngularJS. AngularJS also passes a special variable to that function called $scope; everything that is added to the $scope variable during controller initialization will be accessible to the view for data binding. Better said: it represents what the view can see.

The rest of the lines are quite self-explanatory:

  • Lines 7 to 11 add a pre-initialized array of to-do items to the scope, which will be used as storage for new items. It represents the items that will be displayed in the empty UL element we have right now.
  • Line 13 declares a variable that will hold the description of new items to be added after the Add button is clicked.
  • Lines 17 to 21 define a function that will add new items to the to-do items array; this should be called each time the Add button is clicked.

Binding The View

Now that we have AngularJS, the application module and the to-do list controller, we are ready to start binding HTML elements in the view to the scope. But before that, we need to change our <html> tag a little and add some special AngularJS attributes:

ANGJS SNIP3
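The missing snippet presumably reads something like this (the attribute value matches the module name from app.js):

```html
<html ng-app="ToDoListApp">
```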

Notice the ng-app attribute we added to the <html> tag; this is your first contact with one of the several directives that are part of the AngularJS data binding framework.

Directives are simply HTML element markers that are processed by the data binding engine. What does ng-app do? Well, it tells AngularJS which application module it should use for this HTML document; you might notice we specified ToDoListApp, which is the one we created in our app.js file at line 1.

After we associate our HTML document with ToDoListApp, the second step is to specify a controller for our view; this is done through the ng-controller directive, which assigns a controller to a section of the HTML document. In other words, this is how we tell Angular where a view starts and where it ends.

Anyways, modify the <body> tag so it looks like this:

SNIP4
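The missing snippet presumably looks like this (the attribute value matches the controller name registered in app.js):

```html
<body ng-controller="ToDoListController">
```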

As with the ng-app directive, the previous code tells AngularJS that everything inside the <body> tag will be governed by ToDoListController, which we created in our app.js file at line 3.

Now we are ready to start binding elements to members added to the $scope variable during the ToDoListController controller initialization.

The ng-repeat Directive

Let’s start with the empty UL list element. In this case we want to create a new child LI element per item that is contained in the to-do items array. Let’s use the ng-repeat directive for this matter; add the following code inside the UL tags:

ANGJS SNIP5
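The missing snippet presumably looks like this, using `item` as the iterator alias over the controller's `items` array:

```html
<ul>
    <li ng-repeat="item in items">{{item.desc}}</li>
</ul>
```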

OK, hold on tight. The ng-repeat directive will repeat an element per item in an array used as a data source; the quoted value represents the iterator expression, which defines the array to be iterated and an alias used to refer to the current item being iterated.

In this case, we specified that it should repeat the LI element for all of the items in the items array defined in the $scope variable from line 7 through 11 in the app.js file.

Lastly, inside the LI tags we define an Angular template, which is an expression enclosed in double curly braces that is compiled dynamically at run time. In this case, the template extracts a member named desc from each item and displays its value inside the LI tags being repeated.

Go ahead, save your files and open todolist.html in your preferred browser; you will see how the list gets filled. Awesome stuff, huh?

The ng-model Directive

Next on our list is the ng-model directive, which is used to bind the value of an input element (a text box, text area, option list, etc.) to a variable of the $scope. But before I give you more talk, change the input element at line 15:

ANGJS SNIP6
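The missing snippet presumably reads:

```html
<input type="text" ng-model="newItemDescription" />
```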

Binding the value of an element means that each time the value of the input element changes, the variable specified in the ng-model directive will change with that value.

AngularJS enables two-way binding by default, meaning that if the value of the variable the input element is bound to changes, the change will be reflected in the input element on screen. This is the true magic of AngularJS: you can change the value of elements displayed on screen just by modifying values of the $scope variable; no need for selectors, no need to access the DOM from JavaScript, no extra code.

In this case, the input element has been bound to the newItemDescription variable of the $scope, defined at line 13 of the app.js file. Each time the value of the input element changes, the variable at the scope will be updated, and vice versa.

The ng-click Directive

What if I want to do something when I click that? For that, AngularJS provides a bunch of event handler directives. These directives can be used to invoke functions defined on the $scope each time the user performs an action on an element.

The only one we are going to use for this example is ng-click, which handles the user’s click on a button or input element; the expression it takes is basically the code to be executed on each click. In our case, we will modify the button element at line 15 and add the following directive:

ANGJS SNIP7
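The missing snippet presumably reads:

```html
<button ng-click="addItem()">Add</button>
```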

If you look closely, we are telling AngularJS to call the addItem function defined in the $scope. If you check the code of that function, from lines 17 to 21, you will see that it adds a new item to the to-do list array based on the value of the newItemDescription variable.

If you save your files and open todolist.html, you will see how the list is automatically updated each time you enter a new description in the text box and click on Add.

Your HTML document is dynamic and alive. How cool is that?

What Kind Of Sorcery Is This!?

OK, I must tell you that none of this happens through magic. AngularJS is what’s called an unobtrusive library, which means that everything happens automatically as soon as the library finishes loading.

When the HTML document finishes loading, AngularJS crawls through the document structure looking for the ng-app directive, which tells it that something has to be done there. Once all of the directives have been found, the marked elements are processed separately: elements are bound to controllers and templates are compiled.

The $scope variable’s lifetime is automatically handled by AngularJS. Each time something happens on screen, AngularJS is notified and does whatever has to be done to ensure the controller and view stay synced: from updating values in the $scope bound to a particular element to refreshing an element or template. This is done through an implementation of something like the observer pattern, which allows AngularJS to react to changes in elements and the $scope itself.
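To give a rough idea of the observer mechanism just mentioned, here is a deliberately simplified sketch; AngularJS’s actual implementation (the digest cycle) is far more elaborate, and all names here are invented for the illustration:

```javascript
// A toy "scope" that lets callers watch a value and be notified when
// it changes. digest() compares each watched value with the last one
// seen and fires the change callback on a difference.
function Scope() {
  this.watchers = [];
}

Scope.prototype.watch = function (getValue, onChange) {
  this.watchers.push({
    getValue: getValue,
    last: getValue(this),
    onChange: onChange
  });
};

Scope.prototype.digest = function () {
  var scope = this;
  this.watchers.forEach(function (w) {
    var current = w.getValue(scope);
    if (current !== w.last) {
      w.onChange(current, w.last);
      w.last = current;
    }
  });
};
```

In AngularJS, directives like ng-model effectively register watchers of this sort on the $scope, which is how a change in a bound variable ends up refreshed on screen.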

Conclusion

Whoa! That was a bunch of stuff to digest. Hopefully, it will start to make sense with some practice. Of course, those three directives are not everything there is to AngularJS; for a complete list of all the directives you can use out of the box, take a look here.

Keep in mind that you can also extend the library with custom directives, but that is an advanced topic we might cover in a future article.

AngularJS is not everything there is to JS Apps; this was a very basic introduction, and I want you to have knowledge you can use in real life, so we will be learning how to build more complex JS Apps in my next article. Topics to be covered will be:

  • Creating custom Angular directives.
  • Complex user interfaces (windows, redirection and data validation).
  • RequireJS for on-demand asynchronous JavaScript files loading.

Hope that sounds exciting enough for you. Stay tuned! :D

Source Code

Further Reading

JSApps 101: Introduction To JavaScript Applications

Introduction

So, JavaScript… Again! After some months away from this blog, I am back with a new series of articles related to the incredible, magical and mysterious world of JavaScript. More specifically, JavaScript applications. Have you ever heard of AngularJS, Backbone, Knockout JS, LESS and such things? Read on, this might interest you.

We have all used, at some point in our Internet lives, awesome websites such as Facebook, GitHub, Spotify and others, where everything is asynchronous, the user interface is super-responsive and couldn’t be closer to a desktop application in matters of functionality, all of this right in our browser. What few people realize is that these sites owe their slickness mainly to our good old friend in battle: JavaScript. Oh, so many developers underestimate JavaScript. This article series will dive into the basics of how these kinds of powerful JavaScript applications are built and on top of which technologies and frameworks, so let’s move forward into some basic concepts.

Server-side vs. Client-side

So, what in the world is a JavaScript application anyways? Well, as you might know, the traditional way a web application works is that you have a set of specialized frameworks and tools (name it ASP.NET, PHP, Spring Framework) running server-side; when someone requests a page from the server, it responds with an HTML document, usually the result of parsing a server-side template (a PHP, ASPX or the like) and binding it to data coming from the database. Those templates processed by the server usually contain special syntax and directives that instruct the server’s templating engine how to bind data to them and produce a valid HTML document; some might recall these as the dreaded “server tags.”

Standard Server Request/Response


Some server-side technologies like ASP.NET use “controls” or helpers that assist in rendering complex user interface components into HTML, like grids, forms and charts bound to dynamic data coming from the database. Each time these components need to be refreshed, they do it through asynchronous AJAX requests or a full-page refresh (known as a server post-back, which all users love… or not). While these are handy for speed-building web solutions, they are not as efficient as pure-JavaScript graphical components.

ASP.NET WebControls

Often, JavaScript is used to manipulate the structure of the resulting HTML document, get the value of a field and perform other simple tasks dynamically in the browser (better known as “the client side”) without the need to refresh the page. But as the popularity of JavaScript rose (let’s thank jQuery for that), it has been delegated more and more complex stuff, like rendering templates into HTML client-side instead of server-side, binding server data, validating user input and controlling page flow. That being said, a JavaScript application is basically a “client” that runs on the browser, thanks to the leverage of technologies such as JavaScript, HTML5 and CSS3. All of the UI logic is controlled client-side, right there in the browser.

Structure of a JavaScript App

Before moving on: it is true that this requires a paradigm shift if you have been working on traditional web applications for a while, especially if you have never used a Model-View-Controller approach. If you have never heard of, or used, Model-View-Controller, I’m afraid there is some reading to be done before continuing. But hey! You can start here, or else you can continue reading this incredibly sexy article.

As mentioned before, a JavaScript application, or JS App (patent pending), usually follows an MVC approach. It is composed of several “views”, which are usually HTML documents or templates; “controllers” that handle validation, logic flow and communications with the server; and “models” that contain the data to be displayed and bound on the views. As you might notice, this is a pretty similar model to server-side technologies like ASP.NET MVC and Spring MVC, except that the entire presentation layer is moved to the browser, specifically into JavaScript components. We’ll analyze the advantages of this later on.
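As a rough illustration in plain JavaScript (all names and data invented for this example; a real app would use a framework like AngularJS), the three roles could be split like this:

```javascript
// Model: plain data, as it would arrive from the server.
var userModel = { name: "Jane", email: "jane@example.com" };

// View: a template function that turns a model into an HTML fragment.
function userView(model) {
  return "<p>" + model.name + " &lt;" + model.email + "&gt;</p>";
}

// Controller: validates the model and orchestrates model and view.
function userController(model, render) {
  if (!model.email) {
    throw new Error("A user needs an email address.");
  }
  return render(model);
}

// Wiring the three together produces the markup shown on screen.
var html = userController(userModel, userView);
```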

With all the presentation logic handled by the browser, where does the data we see on the UI come from? It comes from the server; that is the real use we have for it. The controller in the browser is responsible for this channel of communication: it retrieves data from the server each time the user pages through a data grid and sends data to it whenever the user needs to create or edit information. JS Apps work in a similar way to smartphone apps, in which a client runs on the phone locally and uses data coming from a remote server. In fact, there are specialized build tools, like PhoneGap, that create applications to be installed on a smartphone from HTML/JS/CSS3 sources.

JS App Structure

Pros & Cons

While JS Apps go far beyond any conventional use of a browser, they offer several advantages:

  • Rendering of pages and templates is done by the browser in the client computer, taking advantage of the client computer’s processing power and leaving less workload on the server.
  • Better user interface responsiveness, since all calls to the server are asynchronous and JavaScript UI components are usually lightweight.
  • Completely decoupled from the server logic.
  • Fewer calls to the server, since it is only accessed to get data, not pages in every possible display state they might have.
  • High separation of concerns, since the server ONLY handles business logic and not UI-related validation and such.
  • Easy unit testing of the user interface and UI logic.

However, it also comes with some disadvantages:

  • Lots, lots and LOTS of JavaScript to be written; we all know it can be a pain to maintain if not properly done.
  • The learning curve is quite steep, since most people are used to jQuery and DOM manipulation but not to JavaScript controllers, models and pseudo-classes; let alone advanced concepts like JavaScript dependency injection.
  • Data coming into the server needs to be double-checked in order to prevent bad information sent by tampered JavaScript components.

Sounds Kind of Interesting, Now What?

OK, now you might get the picture of what a JS App looks like and its advantages, so the next step is to analyze the technologies and frameworks you could use, getting your hands dirty along the way so you can start developing this kind of application.

In the following articles we will move into learning JavaScript libraries like AngularJS, for client-side Model-View-Controller; RequireJS, a library that allows asynchronous loading of JavaScript files on-demand; usage of Twitter Bootstrap, to build nice HTML5-compliant user interfaces; and ultimately how to structure your server application as a solid data provider for your JavaScript application.

So, stay tuned for more articles! :D


Overview Of The Task Parallel Library (TPL)

Introduction

Remember those times when we needed to spawn a separate thread in order to execute long-running operations without locking the application until the operation completed? Well, time to rejoice; those days are long gone. Starting with version 4.0, the .NET Framework delivers a new library that introduces the concept of “tasks”. This library is known as the Task Parallel Library, or TPL.

Tasks vs. Threads

In the good (annoying) old days, we frequently needed to spawn a separate thread to query the database without locking the main application thread, so we could show a loading message to the user, wait for the query to finish and then process the results. This is a common scenario in desktop and mobile applications. Even though there are several ways to spawn background threads (async delegates, background workers and such), in the most basic and rudimentary fashion, things went a little something like this:

User user = null;

// Create background thread that will get the user from the repository.
Thread findUserThread = new Thread(() =>
{
    user = DataContext.Users.FindByName("luis.aguilar");
});

// Start background thread execution.
findUserThread.Start();

Console.WriteLine("Loading user..");

// Block current thread until background thread finishes assigning a
// value to the "user" variable.
findUserThread.Join();

// At this point the "user" variable contains the user instance loaded
// from the repository.
Console.WriteLine("User loaded. Name is " + user.Name);

Once again, this code is effective; it does what it has to do: load a user from a repository and show the loaded user’s name on the console. However, it sacrifices succinctness completely in order to initialize, run and join the background thread that loads the user asynchronously.

The Task Parallel Library introduces the concept of “tasks”. Tasks are basically operations to be run asynchronously, just like what we just did using “thread notation”. This means that we no longer speak in terms of threads, but in terms of tasks, which lets us execute asynchronous operations with a very small amount of code that is also a lot easier to understand and read. Now, things have changed for good, like this:

Console.WriteLine("Loading user..");

// Create and start the task that will get the user from the repository.
var findUserTask = Task.Factory.StartNew(() => DataContext.Users.FindByName("luis.aguilar"));

// The task Result property hold the result of the async operation. If
// the task has not finished, it will block the current thread until it does.
// Pretty much like the Thread.Join() method.
var user = findUserTask.Result;

Console.WriteLine("User loaded. Name is " + user.Name);

A lot better, huh? Of course it is. Now we have the result of the async operation strongly typed. Pretty much like using async delegates, but without all the boilerplate code required to create delegates; this is possible thanks to the power of C# lambda expressions and built-in delegates (Func, Action, Predicate, etc.).

Tasks have a property called Result. This property contains the value returned by the lambda expression we passed to the StartNew() method. What happens when we try to access this property while the task is still running? Well, the execution of the calling method is halted until the task finishes. This behavior is similar to Thread.Join() (line 16 of the first code example).

Task Continuations

OK, we now know how this whole tasks thing goes. But let’s assume you don’t want to block the calling thread until the task finishes; instead, you want another task to run when it finishes and do something with the result. For such a scenario, we have task continuations.

The Task Parallel Library allows us to chain tasks together so they are executed one after another. Even better, the code to achieve this is completely fluent and readable.

Console.WriteLine("Loading user..");

// Create tasks to be executed in fluent manner.
Task.Factory
    .StartNew<User>(() => DataContext.Users.FindByName("luis.aguilar")) // First task.
    .ContinueWith(previousTask =>
    {
        // This will execute after the first task finishes. First task's result
        // is passed as the first argument of this lambda expression.
        var user = previousTask.Result;

        Console.WriteLine("User loaded. Name is " + user.Name);
    });

// Tasks will start running asynchronously. You can do more things here...

As readable as it gets, you can read the previous code as “start a new task to find a user by name and continue by printing the user’s name on the console”. It is important to notice that the first parameter of the ContinueWith() method is the previously executed task, which allows us to access its return value through its Result property.

Async And Await

The Task Parallel Library means so much for the .NET Framework that new keywords were added to its languages’ specifications to deal with asynchronous tasks. These new keywords are async and await.

The async keyword is a method modifier that marks a method as asynchronous: the method can await tasks without blocking its caller. Then we have the await keyword, which tells the runtime to wait, without blocking the thread, for a task to finish before assigning its result to a local variable, in the case of tasks that return values; or to simply wait for the task to finish, in the case of those with no return value.

Here is how it works:

// 1. Awaiting For Tasks With Result:
async void LoadAndPrintUserNameAsync()
{
    // Create, start and wait for the task to finish; then assign the result to a local variable.
    var user = await Task.Factory.StartNew<User>(() => DataContext.Users.FindByName("luis.aguilar"));

    // At this point we can use the loaded user.
    Console.WriteLine("User loaded. Name is " + user.Name);
}

// 2. Awaiting For Task With No Result:
async void PrintRandomMessage()
{
    // Create, start and wait for the task to finish.
    await Task.Factory.StartNew(() => Console.WriteLine("Not doing anything really."));
}

// 3. Usage:
void RunTasks()
{
    // Load user and print its name.
    LoadAndPrintUserNameAsync();

    // Do something else.
    PrintRandomMessage();
}

As you can see, asynchronous methods are now marked with a neat async modifier. Contrary to what you might expect, this does not mean the whole method runs on a separate thread: an async method runs synchronously on the calling thread until it reaches an await, at which point control returns to the caller while the awaited task completes. An asynchronous method can contain multiple child tasks that may run in any order; what the async modifier guarantees is that the compiler rewrites the method so it can be suspended and resumed around each await.

One important detail: calling an asynchronous method starts executing it immediately. Tasks produced by async methods are “hot”; they are already running by the time they are returned to the caller, so, unlike tasks created directly through the Task constructor, there is no need to call Start() on them.

Now, we can also create awaitable methods. This special kind of methods are callable using the await keyword.

async Task<User> LoadUserAsync()
{
    // Create, start and wait for the task to finish; then assign the result to a local variable.
    var user = await Task.Factory.StartNew<User>(() => DataContext.Users.FindByName("luis.aguilar"));

    // Return the loaded user. The runtime converts this to a Task<User> automagically.
    return user;
}

All awaitable methods specify a task as their return type. Now, there is a detail worth discussing here: this method’s signature specifies a return type of Task<User>, yet the method body returns the loaded user instance instead (line 7). What is this? Well, the compiler wraps the returned value in a Task<User> for us; what you get back depends on how the method is called.

The first scenario is when it is called in a traditional fashion. In this case it returns the task instance:

Task<User> loadUserTask = LoadUserAsync();

// The returned task is "hot": it is already running, so there is
// no need to call Start() on it.

The second scenario is when it is called using await. In this case the runtime asynchronously waits for the task to finish and unwraps the result, which then gets assigned to the specified local variable.

User user = await LoadUserAsync();

// Roughly equivalent, except that this version blocks the calling
// thread instead of awaiting asynchronously:
User user = LoadUserAsync().Result;

See? It may look like the method returns two different types of value depending on how it is called, but it is really the compiler unwrapping the task result for us. By the way, it is important to remember that any method which at any point awaits an asynchronous method by using the await keyword needs to be marked as async itself.

Conclusion

This surely means something for the whole framework. It looks like Microsoft has taken good care of parallel programming in its latest framework release. Desktop and mobile application developers will surely love this new feature, which significantly reduces boilerplate code and increases readability. We can all feel happy about our beloved framework moving forward the right way once again.

That’s all for now, folks. Stay tuned! ;)

Further Reading

Unit Testing 101: Inversion Of Control

Introduction

Inversion of Control is one of the most common and widely used techniques for handling class dependencies in software development, and it could easily be the most important practice in unit testing: basically, it determines whether your code is unit-testable or not. Not just that, it can also significantly improve your overall software structure and design. But what is it all about? Is it really that important? Hopefully we'll sort those questions out in the following lines.

Identifying Class Dependencies

As we mentioned before, Inversion of Control is a technique used to handle class dependencies effectively. But what exactly is a dependency? In real life, for instance, a car needs an engine in order to function; without one, it probably won't work at all. In programming it is the same thing: when a class needs another one in order to function properly, it has a dependency on it. This is called a class dependency, or coupling.

Let’s look at the following code example:

public class UserManager
{
    private Md5PasswordHasher passwordHasher;

    public UserManager()
    {
        this.passwordHasher = new Md5PasswordHasher();
    }

    public void ResetPassword(string userName, string password)
    {
        // Get the user from the database
        User user = DataContext.Users.GetByName(userName);

        string hashedPassword = this.passwordHasher.Hash(password);

        // Set the user new password
        user.Password = hashedPassword;

        // Save the user back to the database.
        DataContext.Users.Update(user);
        DataContext.Commit();
    }

    // More methods...
}

public class Md5PasswordHasher
{
    public string Hash(string plainTextPassword)
    {
        // Hash password using an encryption algorithm...
    }
}

The previous code describes two classes, UserManager and Md5PasswordHasher. We can see how the UserManager class initializes a new instance of Md5PasswordHasher in its constructor and keeps it in a class-level field so all methods in the class can use it. The method we are going to focus on is ResetPassword. Notice the call to this.passwordHasher.Hash(password): that call makes use of the Md5PasswordHasher instance, hence marking a strong class dependency between UserManager and Md5PasswordHasher.

Don’t Call Us, We’ll Call You

When a class creates instances of its dependencies itself, it knows which implementation of each dependency it is using and, probably, how that implementation works; the class is the one controlling its own behavior. With inversion of control, whoever uses the class specifies the concrete implementation of each dependency instead; this time the class user partially controls the class behavior (at least the parts that rely on the provided dependencies).

Anyways, all of this is quite confusing. Let’s look at an example:

public class UserManager
{
    private IPasswordHasher passwordHasher;

    public UserManager(IPasswordHasher passwordHasher)
    {
        this.passwordHasher = passwordHasher;
    }

    public void ResetPassword(string userName, string password)
    {
        // Get the user from the database
        User user = DataContext.Users.GetByName(userName);

        string hashedPassword = this.passwordHasher.Hash(password);

        // Set the user new password
        user.Password = hashedPassword;

        // Save the user back to the database.
        DataContext.Users.Update(user);
        DataContext.Commit();
    }

    // More methods...
}

public interface IPasswordHasher
{
    string Hash(string plainTextPassword);
}

public class Md5PasswordHasher : IPasswordHasher
{
    public string Hash(string plainTextPassword)
    {
        // Hash password using an encryption algorithm...
    }
}

Inversion of Control is usually implemented by applying a design pattern called the Strategy pattern (as defined in the Gang of Four book). This pattern consists of hiding concrete component and algorithm implementations from the rest of the classes by exposing only an interface they can use, thus making implementations interchangeable at runtime and encapsulating how they work, since the classes using them should not have to care.

The Strategy Pattern

So, in order to achieve this, we need to sort some things out:

  • Abstract an interface, IPasswordHasher, from the Md5PasswordHasher class, so anyone can write custom password hasher implementations.
  • Mark the Md5PasswordHasher class as an implementation of the IPasswordHasher interface.
  • Change the type of the password hasher field used by UserManager to IPasswordHasher.
  • Add a constructor parameter of type IPasswordHasher, which is the instance the UserManager class will use to hash its passwords. This delegates the creation of the dependency to the user of the class and lets them provide any implementation they want, giving them control over how passwords are hashed.

This is the very essence of inversion of control: minimize class coupling. The user of the UserManager class now controls how passwords are hashed; control over password hashing has been inverted from the class to its user. Here is an example of how we can provide the only dependency of the UserManager class:

IPasswordHasher md5PasswordHasher = new Md5PasswordHasher();
UserManager userManager = new UserManager(md5PasswordHasher);

userManager.ResetPassword("luis.aguilar", "12345");

So, why is this useful? Well, we can go crazy and create our own hasher implementation for the UserManager class to use:

// Plain text password hasher:
public class PlainTextPasswordHasher : IPasswordHasher
{
    public string Hash(string plainTextPassword)
    {
        // Let's disable password hashing by returning
        // the plain text password.
        return plainTextPassword;
    }
}

// Usage:
IPasswordHasher plainTextPasswordHasher = new PlainTextPasswordHasher();
UserManager userManager = new UserManager(plainTextPasswordHasher);

// Resulting password will be: 12345.
userManager.ResetPassword("luis.aguilar", "12345");

Conclusion

So, this concludes our article on Inversion of Control. Hopefully, with a little more practice, you will be able to start applying it to your code. Of course, the biggest benefit of this technique relates to unit testing. So, what does it have to do with unit testing? Well, we're going to see that when we get into type mocking. So, stay tuned! ;)

Further Reading

Unit Testing 101: Basics

Introduction

We all know unit testing is an essential part of the development cycle. Actually, unit test code is as important as the actual application code (yep, you read that right); this is something we should never forget. That's why we are going to look at some important (introductory) concepts related to composing proper testing code.

I will be using NUnit as my testing library. The package comes with the framework libraries and a set of test runner clients. You can download it at their site’s download section.

Unit Test Structure

Unit tests are usually grouped in test fixtures. Basically, a test fixture is a group of unit tests targeted to verify a single application feature. Let’s illustrate this in code:

using NUnit.Framework;

namespace AppDemo.Tests
{
    [TestFixture(Category = "User Authentication")]
    public class WhenUserIsBeingAuthenticated
    {
        [Test]
        public void ShouldReturnTrueIfValidationIsSuccessful()
        {
            // TODO: Implement test code.
        }

        [Test]
        public void ShouldReturnFalseIfUsernameOrPasswordIsNull()
        {
            // TODO: Implement test code.
        }
    }
}

We can now picture how a test fixture looks in code. In this case, the test fixture is a regular class filled out with test methods. As you might have noticed, the class name describes the state of the feature being tested: “When the user is being authenticated”. Each particular test method seeks to verify a required result on a specific condition: “Should return true if validation is successful”.

Running Tests

Once you have your fixture ready to go, it is time to run all the tests on it and see the results. I will be using the NUnit GUI runner, which looks for all classes in the assembly marked with the [TestFixture] attribute and then calls each method on them marked with the [Test] attribute. It is important to keep all tests in a separate class library: first, because it is good practice not to mix application code with test code; and second, because the NUnit test runner can only load DLL files.

So, the first thing to do is build the project so we have a DLL containing all our tests. Once we have a DLL file with our test fixture classes in it, fire up the NUnit test runner (NUnit.exe) and load the file into it.

NUnit Test Runner

At this point everything is quite intuitive. You can hit the “Run” button and see how all tests pass or rebuild the project on Visual Studio and see how the test runner auto-updates with new changes. Cool, huh?

Arrange, Act and Assert

Test methods are usually composed of three common phases: Arrange, act and assert. Or “triple-A” if you like.

  • Arrange: At the very beginning of the method, you need to set up the test scenario. This includes expected test results for comparison with actual results, instances of the components to be tested, and type mocking.
  • Act: After arrangement is done, we now have to actually perform the actions that will produce the actual test results. For instance, call the Validate method on the UserAuthenticator class which performs the actual user validation.
  • Assert: The assertion phase verifies that actual tests results match what we are expecting.

It is good practice to provide comments delimiting each phase:

[Test]
public void ShouldReturnTrueIfValidationIsSuccessful()
{
    // Arrange
    var expectedResult = true;
    var userAuthenticator = new UserAuthenticator();

    // Act
    var actualResult = userAuthenticator.Validate("luis.aguilar", "1234");

    // Assert
    Assert.That(actualResult, Is.EqualTo(expectedResult), "Authentication failed though it should have succeeded.");
}

As you can see, these three phases execute in order. It is good practice to initialize variables holding expected results in the Arrange phase to make the Assert phase more readable. Also, for the sake of readability, I am using NUnit's Assert.That syntax so assertions read more naturally.

Tests Before Implementation

Even though unit testing benefits all development methodologies, I'm an avid supporter of the Test-Driven Development (TDD) methodology. As the name implies, TDD is all about writing the tests BEFORE you implement the actual application code. That way, your code meets its acceptance criteria right from inception. Basically, the application's infrastructure design is driven by tests: we think about user requirements rather than UML diagrams and classes.

For instance, you would write all the previous sample tests before implementing the UserAuthenticator class. That way the class is born satisfying user requirements, so we don't have to change its code later on, which saves lots of time (and money, managers love to hear that) and greatly improves the code's design.

Conclusion

Okay, hopefully this served as a brief introduction to the exciting world of unit testing. Of course, there is a lot more to this topic. In upcoming articles we are going to look at concepts like inversion of control, type mocking and other things related to TDD. It's going to be lots of fun!

Stay tuned! ;)

JavaScript For Regular Guys (Part 1)

Introduction

So, JavaScript. That thing that makes AJAX, dynamic HTML and other fancy stuff possible. We know it exists somewhere deep in the browser. But what is it exactly? How does it work anyway? And even more important: how in the world do you even code for it?

Okay, reader. If you have any of those questions, today is your lucky day. We are going to find out the answers for them.

Some Background

Like every exciting article, we start with some history. JavaScript was born with Netscape browsers about 18 years ago; it first shipped with Netscape Navigator 2 as a feature that allowed web pages to run scripted content right on the user's computer. Its original name was Mocha, then it changed to LiveScript, and it finally ended up as JavaScript (JS). Wow, that's what I call name evolution.

How Does It Work?

JavaScript is contained either in individual *.js files referenced by the web page or in one or more inline HTML <script> blocks placed inside the HTML structure. The browser then interprets (that's right, JavaScript is an interpreted language) all the code in referenced files and <script> blocks using its own JavaScript runtime, and executes it once the page load is complete.

To include JS files in a web page, we add a <script src="../file.js"></script> tag for each file. The other way is to include JavaScript code inline inside a script block:

<script type="text/javascript">
    function doSomething() {
        alert("Something happened!");
    }
</script>

It is recommended to place these blocks right before the closing </body> tag of the page to improve loading times. Always remember that JavaScript is a scripting language: it executes all the code in a file or script block sequentially, from top to bottom. For example, the script above won't do anything, since we never call the function anywhere. So, we would have to do something like:

<script type="text/javascript">
    function doSomething() {
        alert("Something happened!");
    }

    doSomething();
</script>

The Document Object Model (DOM)

JavaScript goes hand in hand with the HTML structure of the page it runs on. JavaScript sees the whole HTML document as a tree of elements called the DOM. This allows us to manipulate the document structure at runtime by referencing DOM elements and changing them as needed. For example, to change the color of a paragraph 2 seconds after the page has loaded, we can use the following code:


<p id="target">Test paragraph...</p>

<script type="text/javascript">
    setTimeout(function() {
        var target = document.getElementById('target');
        target.style.color = '#00FF00';
    }, 2000);
</script>

By examining the previous code, we can see the use of the setTimeout function. JavaScript comes with a set of pre-defined global functions that don't belong to any object; they can just be used anywhere. No, literally ANYWHERE. If you come from a formal language like C# or Java, you know that at some point you need to import namespaces or packages so your code can use things defined somewhere else. Well, with JavaScript this is not the case: global functions are auto-magically provided by the browser itself.

In the case of setTimeout, it takes two parameters: the first is a function to execute after a specified number of milliseconds, given by the second parameter. The function we passed as the first parameter is what we know as an "anonymous function". We call it "anonymous" because it has no name. Really? Yes, really. But you can give it a name if you want:

<script type="text/javascript">
    function colorChanger() {
        var target = document.getElementById('target');
        target.style.color = '#00FF00';
    };

    setTimeout(colorChanger, 2000);
</script>

It now has a name. But we can still use it as a variable and pass it as first parameter of the setTimeout function. Dammit, JavaScript!

Anyways, we slipped a little from the whole DOM topic. The important thing you gotta remember is that the global document variable can be used to retrieve HTML elements by ID (you already got the idea of what the DOM is all about). After we have the element on a JavaScript variable, we can change its attributes, move it somewhere else on the tree or even just delete the crap out of it à la Godfather.

JavaScript Has No Class

Even though JavaScript is one of the most popular and widely used languages in the world, it is nothing close to any other language you might have seen before. It is functional, dynamic, weakly typed and totally random at some points. It is like taking all programming languages known to mankind and mashing them together into one single surprise pack. Seriously, it gets a little crazy sometimes.

For instance, even though it has weakly typed dynamic variables and all sorts of other neat things, it has no formal way of defining classes. Still, a way to do something like object-oriented programming had to make it into JavaScript by its first release. So, the guys at Ecma had a dialogue at some point while writing the first language specification, back in '97, that went a little something like this:

  • Dude 1: “Dude, JavaScript is cool so far and everything but everyone is using something called Object-oriented something something.”
  • Dude 2: “Damn, kids these days. Let’s just allow functions to contain variables and other functions. Voilà. We got classes.”
  • Dude 1: “That sounds like is going to confuse lots of people… D:”
  • Dude 2: “Meh. Not really.”
  • Dude 1: “But…”
  • Dude 2: “NOT REALLY, I SAID!”

Okay maybe it was not like that but, since then, we have like four or five different ways to define something like classes. The one I like the most is as follows:

<script type="text/javascript">
    function CarFactory() {
        var carsProduced = 0; // Private variable

        this.name = 'Car Factory'; // Public variable

        this.getCarsProduced = function() { // Public getter
            return carsProduced;
        };

        this.createCar = function() { // Public function
            showCarProductionMessage();
            carsProduced++;
        };

        function showCarProductionMessage() { // Private function
            alert("A car was produced.");
        };
    }

    var carFactory = new CarFactory();

    alert("Factory name: " + carFactory.name);

    carFactory.createCar();

    alert("Cars produced so far: " + carFactory.getCarsProduced());
</script>

How pretty does that look? So, a function can also act as a class definition, private members and all, just like in any object-oriented language. (This closure-based constructor pattern is often lumped together with prototyping, although true prototyping stores shared members on the function's prototype object.)

Hopefully, the example is clear enough. I have a couple of things to clarify, though. JavaScript, while trying to be object-oriented, is still a scripted language; never forget that. Function declarations like the one above are hoisted, so the interpreter sees them even if they appear after the code that uses them; but if you assign a function to a variable instead, using it before the assignment runs will fail. Also, look at the use of the keyword this. Things are just starting to get more and more interesting, for sure.
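For contrast, here is what actual prototype-based sharing looks like (a minimal sketch; the Counter name is made up for illustration). Per-instance state lives on this, while methods placed on the prototype object are shared by every instance:

```javascript
function Counter() {
    this.count = 0; // per-instance state
}

// Methods on the prototype are shared by every instance.
Counter.prototype.increment = function () {
    this.count++;
    return this.count;
};

var a = new Counter();
var b = new Counter();

a.increment();
a.increment();

console.log(a.count); // 2
console.log(b.count); // 0 (state is not shared)
console.log(a.increment === b.increment); // true (one shared function)
```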

The “this” Keyword

If you have worked with classes before, you'll surely recognize the this keyword. This magical keyword usually refers to the class where the method using it is defined. Well, in JavaScript (once again messing with our brains) the this keyword refers to the function's owner, and it can refer to several different things depending on the scope in which it is used:

This Keyword Scope Conventions

Now you see how tightly coupled JavaScript is with the DOM tree. The default owner for all scripts is the global window object.

Now, you can see the last two usages in the table involve the use of the call() and apply() functions. These functions are useful if we would like to change the value this refers to. That would be your homework: Check the use of call() and apply().
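To get you started on that homework, here is a minimal sketch (the greet function and the dog object are made up for illustration). Both functions let you pick the object this refers to; call() takes arguments one by one, while apply() takes them as an array:

```javascript
var dog = { name: 'Rex' };

function greet(greeting, punctuation) {
    // `this` is whatever object we bind at call time.
    return greeting + ', ' + this.name + punctuation;
}

// call() passes the arguments individually.
console.log(greet.call(dog, 'Hello', '!')); // "Hello, Rex!"

// apply() passes them as a single array.
console.log(greet.apply(dog, ['Hi', '?'])); // "Hi, Rex?"
```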

Conclusion

So, we have reviewed how JavaScript was conceived and how it is structured. This is essential in order to understand more complex topics like DOM element events, asynchronous server calls and, of course, server-side JavaScript, which we will examine more carefully in future articles.

Stay tuned! ;)

Improving Code With Fluent Interfaces

Fluent What…?

Fluent interface. A "fluent interface" is a concept originally coined by a couple of smart guys called Martin Fowler and Eric Evans back in 2005. Rumor has it that once upon a time both guys were eating a huge bag of Cheetos while watching an old episode of Dr. Who late at night, and then aliens came out of the TV screen and granted them this coding style as a gift. Since then, it has become widely used by developers who have gradually started to worry more and more about their source code's readability. OK, the story is all BS, but it is more interesting than saying they just agreed on the concept in like 10 minutes after a workshop. :(

Anyways, as I just mentioned, the main purpose of this concept is to improve code readability by using a wrapper class that exposes existing functionality in the form of chained methods. Chained methods are methods that, after completing their work, return the instance of the object that contains them so more methods can be called subsequently. Of course, this all sounds quite abstract, so let's look at some code.

Fluent Threads

For this particular topic, I am going to build an example using one of the most loved features in Java: threads and runnables. Let's say I want my application to fire a long-running operation in a separate thread; after the operation starts, the application sits in a loop printing a message notifying the user that the operation is still running, and once the operation completes, we print a message saying so. Quite a useless application, but it will do to demonstrate the point.

Traditionally we would have some code like this to achieve what we want:

package com.codenough.demos.fluentthreads;

public class Main {
    private static Boolean operationThreadIsRunning = false;

    public static void main(String[] args) throws Exception {
        setOperationStatus(false);

        System.out.println("Creating thread...");

        Runnable runnableOperation = new Runnable() {
            @Override
            public void run() {
                setOperationStatus(true);

                try {
                    Thread.sleep(5000);
                    System.out.println("Operation execution finished.");
                } catch (Exception e) {
                    System.out.println("An error occurred while executing operation.");
                }

                setOperationStatus(false);
            }
        };

        Thread operationThread = new Thread(runnableOperation);

        operationThread.start();

        System.out.println("Thread created. Now running.");

        while(true) {
            Thread.sleep(1000);

            if (operationThreadIsRunning) {
                System.out.println("Still waiting for thread...");
            }
            else {
                break;
            }
        }

        System.out.println("Thread execution completed.");
    }

    public static void setOperationStatus(Boolean isRunning) {
        operationThreadIsRunning = isRunning;
    }
}

Again, this code does just fine. But someone with limited knowledge of Java (or of programming at all) would have a hard time figuring out what the hell a Thread or a Runnable is, let alone @Override. So, for the sake of readability, we can restructure the code a little so that, instead of creating threads and runnables directly, we can do something like this:

createTaskFor(new Action() {
    @Override
    public void onExecute() {
        // Long-running operation..
    }
})
.onErrorUse(new ErrorHandler() {
    @Override
    public void onErrorCaught(Exception exception) {
        // Handle errors..
    }
})
.thenExecute();

Much better, huh? Now readers should only worry about guessing what the friendly @Override annotations are for.
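The mechanic that makes this possible is simply having each method return the object that hosts it. Stripped to its essence, it fits in a few lines; here is a sketch in JavaScript rather than Java, with a made-up QueryBuilder example:

```javascript
// A tiny fluent wrapper: every configuration method returns `this`,
// which is what allows the calls to be chained.
function QueryBuilder() {
    this.parts = [];
}

QueryBuilder.prototype.select = function (columns) {
    this.parts.push('SELECT ' + columns);
    return this; // returning `this` enables chaining
};

QueryBuilder.prototype.from = function (table) {
    this.parts.push('FROM ' + table);
    return this;
};

QueryBuilder.prototype.build = function () {
    return this.parts.join(' ');
};

var sql = new QueryBuilder().select('name').from('users').build();
console.log(sql); // "SELECT name FROM users"
```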

Behind The Scenes

This lovely syntax is possible thanks to a wrapper class working behind the scenes called Task, which is full o' chained methods that return its own instance so we can keep calling methods like there's no tomorrow, until we are ready to run the actual task by calling thenExecute(). All for the sake of syntactic sugar.

Here is the implementation of the Task class:

package com.codenough.demos.fluentthreads.threading;

public class Task {
    private volatile Boolean isRunning;
    private ErrorHandler errorHandler;
    private final Action action;

    public Task(Action action) {
        this.action = action;
        this.isRunning = false;
    }

    public Boolean isRunning() {
        return this.isRunning;
    }

    public Task onErrorUse(ErrorHandler errorHandler) {
        this.errorHandler = errorHandler;

        return this;
    }

    public Task thenExecute() {
        Runnable runnableAction = new Runnable() {
            @Override
            public void run() {
                try {
                    isRunning = true;
                    action.onExecute();
                }
                catch(Exception exception) {
                    errorHandler.onErrorCaught(exception);
                }
                finally {
                    // Clear the flag even if the action fails, otherwise
                    // callers polling isRunning() would wait forever.
                    isRunning = false;
                }
            }
        };

        new Thread(runnableAction).start();

        return this;
    }

    public static Task createTaskFor(Action action) {
        return new Task(action);
    }
}

As you may notice, most methods return their own class instance. These are chained methods and, in this case, they are used to configure the task. I have taken advantage of Java static imports so I can call createTaskFor in the main method without referencing the Task class directly, making our fluent interface totally unobtrusive. What a good boy. ;)

Now our main method looks a little something like this:

package com.codenough.demos.fluentthreads;

import static com.codenough.demos.fluentthreads.threading.Task.*;
import com.codenough.demos.fluentthreads.threading.*;

public class Main {
    public static void main(String[] args) throws Exception {
        System.out.println("Creating task..");

        Task task =
            createTaskFor(new Action() {
                @Override
                public void onExecute() {
                    try {
                        Thread.sleep(5000);
                        System.out.println("Task internal action execution finished.");
                    }
                    catch(InterruptedException exception) {
                        throw new Error("Thread sleep was interrupted.", exception);
                    }
                }
            })
            .onErrorUse(new ErrorHandler() {
                @Override
                public void onErrorCaught(Exception exception) {
                    System.out.println("An error occurred while executing task internal action.");
                }
            })
            .thenExecute();

        System.out.println("Task created. Now running.");

        while(true) {
            Thread.sleep(1000);

            if (task.isRunning()) {
                System.out.println("Still waiting for task...");
            }
            else {
                break;
            }
        }

        System.out.println("Task execution completed.");
    }
}

Benefits

Fluent interfaces leverage existing language features (which are not always that user-friendly) to improve code readability. This lets us produce code of superior quality, since it is a lot easier to read, understand and maintain (QAs will be grateful, which is good). Of course, implementing a fluent interface takes additional code and effort, but it all pays off when we, for example, have to add features or modify existing code in a codebase we last touched ages ago. You'll save thousands of dollars on aspirin tablets.

Further Reading

UI Design Using Model-View-Presenter (Part 3)

In The Previous Episode

In the previous part of the series we finally wrote some code and implemented the three main components of the Model-View-Presenter pattern. We defined our Model class, extreme-makeover’d our old Windows Form class to a slick View-compliant class and then created a Presenter which will orchestrate all sorts of evil things so the Model and View play together nicely.

In case you missed any of our adrenaline-filled previous posts, now it is a good time to go back to Part 1 or Part 2 and drop a sweat, boy.

Wiring Everything Up

Now it is time to wire up the Model, View and Presenter classes so we have a running application that does the exact same thing as our old application, but with a pretty sexy code base. This wiring takes place in the application's Main method, which instantiates the repository class, the View and the Presenter, initializes everything and then finally runs the application.

So, here is our new Program class doing what it does best:

namespace Codenough.Demos.WinFormsMVP
{
   public static class Program
   {
       [STAThread]
       public static void Main()
       {
           Application.EnableVisualStyles();
           Application.SetCompatibleTextRenderingDefault(false);

           var clientsForm = new ClientsForm();
           var clientsRepository = new ClientRepository();
           var clientsPresenter = new ClientsPresenter(clientsForm, clientsRepository);

           clientsForm.Closed += (sender, args) =>
           {
               Application.Exit();
           };

           Application.Run(clientsForm);
       }
   }
}

If you smash that Run toolbar button at this point, you will see a really nice window poppin' up on-screen with the clients list box populated just like in the old application, only that this time all data moving in and out of the View goes through the Presenter (what a controlling psycho freak). Most important of all: you can now see in the code how each part has a single purpose. The Model stores data, the View displays data and the Presenter controls data flow.

Now, the only thing left to do, and (hopefully) what shows the real benefit behind this pattern, is to try again and implement our pre-defined unit tests so we can verify the application meets all requirements.

Meeting Acceptance Criteria

Our application code has never been this ready for some unit testing. We had some tests already defined, which we could not implement at the time simply because the code was being a total jack. But now, using the MVP pattern in all its glory, we are set up for it.

So, for unit testing I am using NUnit and Moq. Moq is a library that lets me create mocks (in this case, of the repository class and the View interface) without having to write additional code; we don't like doing that. Before we proceed, let's see what a mock is and what it can do for us.

A "mock" is a fake implementation of an interface or class whose members produce data we already know; basically, it is like making a TV display a TiVo recording instead of an actual live show. In this example we can tell what data comes from the repository class just by looking at the code, but in the real world we might not have that source code at hand. Or even worse, the data might come from a production database we don't have access to. With a mock, we can fake a method to do or return anything we want.
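The idea is simple enough to hand-roll. Here is a sketch in JavaScript rather than C# (the createRepositoryMock name and shape are made up for illustration; Moq generates the .NET equivalent for you): a fake object that returns canned data and records which of its members were called, so a test can both control inputs and verify interactions.

```javascript
// A hand-rolled mock: returns canned data and records every call
// so the test can later verify the method was actually invoked.
function createRepositoryMock(cannedClients) {
    var calls = [];
    return {
        calls: calls,
        findAll: function () {
            calls.push('findAll');
            return cannedClients;
        }
    };
}

var repoMock = createRepositoryMock([{ id: 1, name: 'Matt Dylan' }]);

console.log(repoMock.findAll().length); // 1 (the canned data)
console.log(repoMock.calls);            // [ 'findAll' ] (the recorded call)
```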

Our first test is called ItShouldLoadAllClients. Since our test-naming skills are so incredibly sharp, we can tell, just by reading that title, that a clients list has to be loaded during Presenter initialization. We first create a method called SetUp, which runs right before each test method to set everything up; in this case it initializes mock instances of the repository class and the View interface. We then create an instance of the Presenter class, which is the test subject (our lab rat, yes), and finally we use the mock's Verify method, which throws a nasty exception and makes the unit test fail miserably if the specified View method (LoadClients in this particular case) was never called during Presenter initialization.

using Moq;
using NUnit.Framework;
using System.Collections.Generic;
using System.Linq;

namespace Codenough.Demos.WinFormsMVP
{
   [TestFixture]
   public class WhenClientsWindowLoads
   {
       private Mock<IClientsView> clientsViewMock;
       private Mock<ClientRepository> clientsRepositoryMock;

       [SetUp]
       public void SetUp()
       {
           this.clientsViewMock = new Mock<IClientsView>();
           this.clientsRepositoryMock = new Mock<ClientRepository>();
       }

       [Test]
       public void ItShouldLoadAllClients()
       {
           // Act
           var clientsPresenter = new ClientsPresenter(clientsViewMock.Object, clientsRepositoryMock.Object);

           // Assert
           clientsViewMock.Verify(view => view.LoadClients(It.IsAny<IList<ClientModel>>()), "Expected clients to be loaded on initialization.");
       }
   }
}

Of course, this test passes: a call to the LoadClients method is made in the Presenter constructor. The following tests are a little more complicated, since we will be setting up methods of the View and repository mocks to return what we need them to return (this is called method setup).

Our next test is ItShouldShowFirstClientOnListDetails, which requires our presenter to load the first client on the list after initialization.

For this case, we let the SetUp method do its thing and create the respective mocks, and then, in our test's arrange phase, we use the Setup method to make the repository mock return a sample list of clients when FindAll is called. Finally, we verify that the LoadClient method of the view was called; this is the assertion that makes the test case pass (and makes us happy) or fail (and makes us quit our jobs).

[Test]
public void ItShouldShowFirstClientOnListDetails()
{
   // Arrange
   var clients = new List<ClientModel>()
   {
       new ClientModel { Id = 1, Name = "Matt Dylan", Age = 28, Gender = "Male", Email = "mattd@none.com" },
       new ClientModel { Id = 2, Name = "Anna Stone", Age = 22, Gender = "Female", Email = "ann@none.com" }
   };

   clientsRepositoryMock.Setup(repository => repository.FindAll()).Returns(clients);

   // Act
   var clientsPresenter = new ClientsPresenter(clientsViewMock.Object, clientsRepositoryMock.Object);

   // Assert
   clientsViewMock.Verify(view => view.LoadClient(clients.First()), "Expected first client to be loaded on initialization.");
}

The last test is ItShouldShowClientDetailsOnListItemSelected, and finding out how it works will be your homework, dear reader. It is the most complicated of them all, since we now use Moq to fire view events and wake up the Presenter, which should be listening to them.

[Test]
public void ItShouldShowClientDetailsOnListItemSelected()
{
   // Arrange
   var clients = new List<ClientModel>()
   {
       new ClientModel { Id = 1, Name = "Matt Dylan", Age = 28, Gender = "Male", Email = "mattd@none.com" },
       new ClientModel { Id = 2, Name = "Anna Stone", Age = 22, Gender = "Female", Email = "ann@none.com" }
   };

   clientsRepositoryMock.Setup(repository => repository.FindAll()).Returns(clients);
   clientsRepositoryMock.Setup(repository => repository.GetById(1)).Returns(clients.First());

   clientsViewMock.SetupGet(view => view.SelectedClient).Returns(clients.First());

   var clientsPresenter = new ClientsPresenter(clientsViewMock.Object, clientsRepositoryMock.Object);

   // Act
   clientsViewMock.Raise(view => view.ClientSelected += null);

   // Assert
   // LoadClient runs once at initialization and again when the selection event fires;
   // verifying an exact count keeps the test from passing trivially.
   clientsViewMock.Verify(view => view.LoadClient(clients.First()), Times.Exactly(2), "Expected client details to be loaded when a list item is selected.");
}

Conclusion

Making code testable is not easy at all. No, really, AT ALL. At least not at first, since you have to think of all the possible test scenarios your code could go through. But in the long term, this is what makes testable code more robust and clean than regular, hastily thrown together code.

Okay, that wraps things up. Hope you enjoyed it and learned a couple of things. Stay tuned for more talk on patterns and good practices, and maybe some really off-topic things.

Oh! Don’t forget to check out the code examples (including an exciting port to Java).

Code Examples

UI Design Using Model-View-Presenter (Part 2)

What Just Happened?

In the previous part, we demonstrated the traditional way of doing things when it comes to UI design. We also reviewed some key concepts like code coverage and separation of concerns. Lastly, we reviewed some concepts behind the Model-View-Presenter pattern. Yep, that was a lot of stuff. So, if you haven’t read it yet, be a good boy and go read it now.

Everything has been really theoretical up until now, which is boring to depressing levels. But don’t worry, I’ll make it up to you in this part. We are now going to look at how each piece of the pattern is implemented. Alright, let’s move on then, but keep this diagram in mind at all times:

MVP Diagram

UI Design Redemption

Ok, time to make things right. First thing to do to sort this mess up, dear reader, is to create the Model.

The Model is a domain object (a fancy name for a class full of properties only) which contains the information the View is gonna show. In this case our View will contain a list of clients, so the Model is actually the Client class, renamed to ClientModel to obey naming conventions. C’mon, lots of people, including seven devs, two architects and a baby goat, were required to figure out that name refactor.

namespace Codenough.Demos.WinFormsMVP
{
   public class ClientModel
   {
      public int Id { get; set; }

      public string Name { get; set; }

      public int Age { get; set; }

      public string Gender { get; set; }

      public string Email { get; set; }
   }
}

As mentioned before (I tend to repeat myself, bear with me here), the Model is a really simple class composed only of properties; better known as a plain old C# object, or “POCO” to its friends. Wait, “poco” actually means “not much” in Spanish; so, it does not much. Wow, cool, right?

Okay, here is a list of things the Model should NOT do:

  • Do any data validation.
  • Refresh, save or update its data on the database.
  • Throw exceptions like it was something actually funny.
  • Be aware of any other class outside the domain model.
  • Provide a solution to world hunger (actually that would be pretty impressive, but DON’T!).
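Since a Java port of these examples is promised at the end of the series, here is a sketch of what this Model could look like over there: a plain old Java object (POJO, the original flavor of the acronym), with getters and setters standing in for C# properties. The tiny main method is my own addition, just to show it in use:

```java
// Plain old Java object: state and accessors, nothing else.
public class ClientModel {
    private int id;
    private String name;
    private int age;
    private String gender;
    private String email;

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    public String getGender() { return gender; }
    public void setGender(String gender) { this.gender = gender; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    public static void main(String[] args) {
        ClientModel client = new ClientModel();
        client.setName("Matt Dylan");
        client.setAge(28);
        System.out.println(client.getName() + ", " + client.getAge()); // prints "Matt Dylan, 28"
    }
}
```

Same rules apply: no validation, no persistence, no world-hunger solutions.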

The next step is to create an interface that will serve as a contract for whatever the View is capable of doing, so the Presenter is aware of it. Remember, the Presenter will be communicating with the View to serve it data and receive input, so the guy needs to know what is in the view regardless of implementation.

using System;
using System.Collections.Generic;

namespace Codenough.Demos.WinFormsMVP
{
   public interface IClientsView
   {
      event Action ClientSelected;
      event Action Closed;

      IList<ClientModel> Clients { get; }

      ClientModel SelectedClient { get; }

      void LoadClients(IList<ClientModel> clients);

      void LoadClient(ClientModel client);
   }
}

The interface only defines the methods the Presenter needs so it can “puppet” the View as required: getting its data, validating anything before it goes back into the database, and reacting to any event triggered by the user. Also, see those fancy events on the view interface? Dude, I didn’t even know interfaces could have events on them, LOL. Just kidding. They are there so the Presenter can subscribe to them; the View then fires them to notify of data changes. It’s two-way communication.
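A side note for the promised Java port: Java interfaces cannot declare events, so the idiomatic equivalent is a listener interface plus registration methods on the View. A minimal sketch of that mechanism, with all names being my own illustrations rather than the actual example code:

```java
import java.util.ArrayList;
import java.util.List;

public class ListenerDemo {
    // Java's stand-in for a C# event: a callback interface.
    interface ClientsViewListener {
        void clientSelected();
    }

    // The View keeps a list of listeners and "fires the event"
    // by looping over them.
    static class FakeClientsView {
        private final List<ClientsViewListener> listeners = new ArrayList<>();

        void addListener(ClientsViewListener listener) {
            listeners.add(listener);
        }

        // Called from the UI handler when the list box selection changes.
        void fireClientSelected() {
            for (ClientsViewListener listener : listeners) {
                listener.clientSelected();
            }
        }
    }

    public static void main(String[] args) {
        FakeClientsView view = new FakeClientsView();
        // The Presenter would subscribe exactly like this.
        view.addListener(() -> System.out.println("presenter notified"));
        view.fireClientSelected(); // prints "presenter notified"
    }
}
```

Morally identical to `view.ClientSelected += OnClientSelected;` in the C# version, just with more ceremony.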

Okay now, to complete the View portion, we need to create the concrete implementation of the view interface. In this case it is our old Windows Forms class, which has gone on a strict low-carb diet to reduce its functionality to the minimum.

using System;
using System.Collections.Generic;
using System.Windows.Forms;

namespace Codenough.Demos.WinFormsMVP
{
   public partial class ClientsForm : Form, IClientsView
   {
      public event Action ClientSelected;
      public new event Action Closed; // "new" hides the (obsolete) Form.Closed event on purpose

      public ClientsForm()
      {
         this.InitializeComponent();
         this.BindComponent();
      }

      public IList<ClientModel> Clients
      {
         get { return this.clientsListBox.DataSource as IList<ClientModel>; }
      }

      public ClientModel SelectedClient
      {
         get { return this.clientsListBox.SelectedItem as ClientModel; }
      }

      public void LoadClients(IList<ClientModel> clients)
      {
         this.clientsListBox.DataSource = clients;
      }

      public void LoadClient(ClientModel client)
      {
         this.clientNameTextBox.Text = client.Name;
         this.clientEmailTextBox.Text = client.Email;
         this.clientGenderTextBox.Text = client.Gender;
         this.clientAgeTextBox.Text = client.Age.ToString();
      }

      private void BindComponent()
      {
         this.closeButton.Click += OnCloseButtonClick;

         this.clientsListBox.DisplayMember = "Name";
         this.clientsListBox.SelectedIndexChanged += OnClientsListBoxSelectedIndexChanged;
      }

      private void OnClientsListBoxSelectedIndexChanged(object sender, EventArgs e)
      {
         if (this.ClientSelected != null)
         {
            this.ClientSelected();
         }
      }

      private void OnCloseButtonClick(object sender, EventArgs e)
      {
         if (this.Closed != null)
         {
            this.Closed();
         }
      }
   }
}

Now the View implementation only loads data from someone else and fires events as the loaded data changes, so anyone out there in space listening to it will be aware of the changes and take appropriate action (OK, that sounded like a line from a Law & Order episode). As you might have already guessed, the only component aware of the View events will be the Presenter. So, the next step is to implement the infamous Presenter class.

The Presenter is a component that is aware of what is happening in the View through its events (see what I said about repeating myself? But hopefully that helps get things carved into your head, bud). As mentioned before, the Presenter talks to the View through its interface instead of the concrete implementation (the ClientsForm class, that is); this is for the sake of code testability. Later on, we are going to provide the Presenter with a “mock” instead of the actual View implementation. We will learn what a mock is later; no need to hurry.

using System.Linq;

namespace Codenough.Demos.WinFormsMVP
{
   public class ClientsPresenter
   {
      private readonly IClientsView view;
      private readonly ClientRepository clientsRepository;

      public ClientsPresenter(IClientsView view, ClientRepository clientsRepository)
      {
         this.view = view;
         this.clientsRepository = clientsRepository;

         var clients = clientsRepository.FindAll();

         this.view.ClientSelected += OnClientSelected;
         this.view.LoadClients(clients);

         // Guard against a null or empty list before grabbing the first client.
         if (clients != null && clients.Any())
         {
            this.view.LoadClient(clients.First());
         }
      }

      public void OnClientSelected()
      {
         if (this.view.SelectedClient != null)
         {
            var id = this.view.SelectedClient.Id;
            var client = this.clientsRepository.GetById(id);

            this.view.LoadClient(client);
         }
      }
   }
}

This time around, the one handling data access through the repository class is the Presenter. The View is no longer aware of where its data is coming from; that’s a good thing. Also, we can notice how the Presenter reacts each time a client is selected on the list box, since it has subscribed to the View’s ClientSelected event. Fancy, right?

Now, the only thing left to do is to wire everything up so we can run this incredibly clever application, which will save many companies from bankruptcy from now on into the future. Of course, that is for the next part in our series. Gotcha, huh?

Coming Up

In our next part, we will wire up all this mess so we have a nice, working application. We will also implement all the unit tests we defined in the first part. It’s going to be lots of fun, I promise.

Go read Part 3! ;)

UI Design Using Model-View-Presenter (Part 1)

Introduction

Current tendencies and best practices in software development lead to a common expectation: a software system has to be solid, scalable and reliable. In order to comply with those three requirements, two aspects have to be considered during the design phase: code coverage and separation of concerns.

What? Okay, so: code coverage is a metric that indicates how much of the code is being covered by unit tests or, in other words (English, plz), how testable the source code we have written during those lovely 18-hour shifts really is. Yes, unit testing is SO important it has its own goddamn metric.

Then we have separation of concerns, which goes hand in hand with code coverage since, in order to have handsome unit tests, we need to give each component of the system a single concern to deal with. In software design, the milkman cannot deliver newspapers around the neighborhood. Each component has a specific, well-defined responsibility and it has to stick to that scope.

But what does all of this have to do with user interface design? Well, user interface code is the number one culprit behind most software design flaws, since it often leads developers into breaking these two design aspects specifically. It is a common, well-known scenario where a class that represents a window to be rendered on-screen also performs business logic validations and queries the database directly. This not only screws up the whole idea of separation of concerns but also makes unit testing of that particular class almost impossible, punching code coverage right in the face.

That’s right. Developers are well-known for not being too skilled at designing pretty GUIs. However, when they nail it, they screw infrastructure up. Go figure.

Anyway, this three-part series of articles intends to demonstrate clear examples of both the traditional way and the “awesome” way (don’t mind the quotes at all), using the Model-View-Presenter pattern to improve the overall infrastructure design of the application user interface.

Examples will be provided in C#, using Windows Forms. The example application is Mono-compliant, so it can be compiled to run in several platforms using the Mono Framework. An additional version for Java using Swing will be provided at the last article of the series.

The Scenario

For our practical example, we are going to build a small application that displays a list of clients available in a data source to the user on-screen. The user will then be able to select clients from the list and view additional details like age, gender and e-mail address. You get the idea.

The clients will be queried from a repository which, for simplicity’s sake (I’m just THAT lazy), will return data from a pre-defined generic list. For a start, here is the repository source code:

using System.Collections.Generic;

namespace Codenough.Demos.WinFormsMVP
{
   public class ClientRepository
   {
      private IList<Client> clients = new List<Client>()
      {
         new Client { Id = 1, Name = "Matt Dylan", Age = 28, Gender = "Male", Email = "mattd@none.com" },
         new Client { Id = 2, Name = "Anna Stone", Age = 22, Gender = "Female", Email = "ann@none.com" }
      };

      public virtual Client GetById(int id)
      {
         foreach (Client client in this.clients)
         {
            if (client.Id == id)
            {
               return client;
            }
         }

         return null;
      }

      public virtual IList<Client> FindAll()
      {
         return this.clients;
      }
   }
}

Traditional UI Design

Read “Effective”, Not “Efficient”

Traditional ways of developing user interfaces tend to give the UI all responsibility for the data it shows, including: user input, data validation, error handling, database access and all other sorts of magical and mysterious things. The following piece of code creates the window containing a list box to display available clients; a series of text boxes showing additional stuff related to the client selected in the aforementioned list box; and a really, REALLY good looking close button.

using System;
using System.Windows.Forms;

namespace Codenough.Demos.WinFormsMVP
{
   public partial class ClientsForm : Form
   {
      private readonly ClientRepository clientsRepository;

      public ClientsForm()
      {
         this.clientsRepository = new ClientRepository();

         InitializeComponent();
         BindComponent();
      }

      private void InitializeComponent()
      {
         // Create and initialize window controls. Omitted for brevity..
      }

      private void BindComponent()
      {
         this.closeButton.Click += OnCloseButtonClick;
         this.clientsListBox.SelectedIndexChanged += OnClientsListBoxSelectedIndexChanged;

         this.clientsListBox.DisplayMember = "Name";
         this.clientsListBox.ValueMember = "Id";
         this.clientsListBox.DataSource = this.clientsRepository.FindAll();
      }

      private void OnClientsListBoxSelectedIndexChanged(object sender, EventArgs e)
      {
         var selectedClientId = (int)this.clientsListBox.SelectedValue;
         var selectedClient = this.clientsRepository.GetById(selectedClientId);

         this.clientNameTextBox.Text = selectedClient.Name;
         this.clientEmailTextBox.Text = selectedClient.Email;
         this.clientGenderTextBox.Text = selectedClient.Gender;
         this.clientAgeTextBox.Text = selectedClient.Age.ToString();
      }

      private void OnCloseButtonClick(object sender, EventArgs e)
      {
         Application.Exit();
      }
   }
}

First thing I would like to mention here is that this example satisfies all the acceptance criteria (you know, what it is expected to do) we established for the end product. It shows the list of clients; each time one is selected, the details of that client are displayed in the details group box, and all that magic.

So, what is so bad about this code? It does what it should and, best of all, in roughly 49 lines of code. Well, as frustrating as it might seem, something effective is not always efficient. The previous piece of code breaks pretty much every aspect of good design. Martin Fowler might even shed a little tear just by looking at it.

Testing What Cannot Be Tested

As I mentioned at the beginning of the article, unit test code is as important as the actual business source code. Always remember that, lad. Let’s define a couple of requirements this application needs to satisfy.

The clients application should (as in “has to”):

  • Show all clients in the list box when it loads.
  • Show details of the first client in the list when it loads.
  • Show details of a client when the user selects an item in the clients list box.

Now, let’s write test method stubs for these:

using NUnit.Framework;

namespace Codenough.Demos.WinFormsMVP
{
   [TestFixture]
   public class WhenClientsWindowLoads
   {
      [Test]
      public void ItShouldLoadAllClients()
      {
         // ...
      }

      [Test]
      public void ItShouldShowFirstClientOnListDetails()
      {
         // ...
      }

      [Test]
      public void ItShouldShowClientDetailsOnListItemSelected()
      {
         // ...
      }
   }
}

When looking at these tests, we can notice they are specific to the presentation portion of the application. We are not testing data access (the ClientRepository class) or rendering-related stuff. Using the code example provided earlier, our tests could fail or pass depending on:

  • The repository class.
  • The data coming from the repository.
  • The stability of the actual Windows Forms libraries (better known as the “Please Don’t Screw Up, Microsoft” prayer). 

So, we would have to adapt our tests for cases where the ClientRepository class returns unexpected data or an environment-related exception occurs while rendering the window on-screen.

We simply can’t perform proper testing of the source code as it is right now. At this point, code coverage has been compromised; let’s not even talk about separation of concerns. The ClientsForm class does friggin’ EVERYTHING.

But fear not. Fortunately for us, some nerds locked up in a server room at an average temperature of about -5 degrees Celsius already found themselves in this predicament and figured out a way to make the world a better place: the Model-View-Presenter pattern.

The Model-View-Presenter Way

The Model-View-Presenter pattern was hatched in the early 90s at Taligent and popularized later on by a paper written by a really smart fella called Mike Potel, Taligent’s CTO at the time. The pattern was later used in a Smalltalk user interface framework and adapted to Java as that language started gaining popularity.

In a very general scope, the MVP pattern seeks to leverage the principles of separation of concerns, leaving each part composing the user interface code with a well-defined responsibility. Wow, that sounds awesome. Right?

Dropping the geek lingo, it means that, in our clients list application, the window should only care about showing data to the user on-screen and nothing else. No data-access, no business logic, no NOTHING.

The pattern is composed of three parts:

  • Model: The stuff that will be displayed to the user. It is updated as the user inputs new information on-screen.
  • View: As the name implies, the View is either a window, web page, app, smoke signals or anything else used to show the Model’s information to the user and let them modify it as they please.
  • Presenter: The Presenter is actually a dude who stands right between the View and the Model. It has the responsibility of providing the View with data from the Model and validating data coming back from the View.

MVP Diagram
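Before diving into real code (that’s the next part), the whole triad can be reduced to a toy sketch. It is in Java, matching the port promised for the end of the series, and every name in it is illustrative rather than taken from the actual example:

```java
public class MvpSketch {
    // Model: just data, nothing else.
    static class Greeting {
        String text = "hello";
    }

    // View: knows how to display things, knows nothing about
    // where the data comes from.
    interface GreetingView {
        void show(String text);
    }

    // Presenter: fetches the Model and hands it to the View.
    static class GreetingPresenter {
        GreetingPresenter(GreetingView view) {
            Greeting model = new Greeting(); // a real app would ask a repository here
            view.show(model.text);
        }
    }

    public static void main(String[] args) {
        // A console "View" is enough to see the flow end to end.
        new GreetingPresenter(text -> System.out.println("view shows: " + text));
    }
}
```

Swap the console lambda for a window and the hard-coded model for a repository and you have the exact shape of the application we build in part two.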

You might be asking yourself what in the world could be so good about splitting the presentation layer of an already multi-layered software architecture into even more parts (source code “onionization”, if you will). Well, the MVP pattern allows us to fully test the presentation layer of our application independently of the other layers; business and data-access, that is.

In the case of our application, all the unit tests required to comply with the acceptance criteria are intended to validate user interface initialization and logic flow, a task the Presenter is in charge of; window rendering is the View’s problem, and model data access is the responsibility of the ClientRepository, which we don’t really care about whatsoever.

Coming Up

So, this concludes the first part of this article. We have reviewed some of the key aspects behind proper code design, code coverage and separation of concerns, which are important for what’s coming next. In the next part we will explore how to actually implement the MVP pattern to buff up our previous example and hopefully settle things for good.

Go read Part 2! ;)