How many nines? Understanding availability

Most likely you have already experienced system downtime, either in an application you have worked on or in some service you consume. It has happened to Amazon, Netflix, Microsoft, Salesforce, etc. How long did you have to wait? How long did your users?

If you’re building an application and you ask your boss (or your client) what percentage of time the application should work correctly, you’ll most likely get an answer like “always” or “100% (or more)”.

Even though we don’t want bad things to happen, they surely will: bugs, attacks, power outages, natural disasters, etc. are all scenarios that might affect a system. Expecting them not to happen is naive; it’s better to think about and plan for failure, since it is inevitable.

Availability is the capability of an application to remain usable after some problem occurs. Again, we are not saying there will be no problems, but rather asking how effectively we will be able to recover from them. This means we need to a) identify the potential failure points and b) create a strategy to prevent an error from becoming a failure that affects the user (in other words, a tree falling in the forest when no one is around makes no sound).
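This is where the “nines” in the title come from: each extra nine in the availability percentage shrinks the allowed downtime by a factor of ten. Here is a minimal sketch (the class and method names are illustrative, not from any library) that turns an availability figure into a yearly downtime budget:

```csharp
using System;

// Rough sketch: translate an availability percentage ("nines")
// into the downtime budget it allows per year.
public static class AvailabilityMath
{
    // Minutes in a (non-leap) year: 365 * 24 * 60 = 525,600.
    private const double MinutesPerYear = 365 * 24 * 60;

    // Maximum allowed downtime per year, in minutes,
    // for a given availability (e.g. 0.999 for "three nines").
    public static double AllowedDowntimeMinutesPerYear(double availability)
        => (1 - availability) * MinutesPerYear;
}

public class Program
{
    public static void Main()
    {
        foreach (var a in new[] { 0.99, 0.999, 0.9999, 0.99999 })
            Console.WriteLine($"{a:P3} -> {AvailabilityMath.AllowedDowntimeMinutesPerYear(a):F2} min/year");
    }
}
```

For example, “three nines” (99.9%) still allows roughly 525 minutes (almost 9 hours) of downtime per year, while “five nines” (99.999%) leaves barely 5 minutes; that budget is what your recovery strategy has to fit into.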

Determine the size of the monster first

Many times I’ve heard clients or fellow developers say “Let’s use x framework” or “we will need to apply x technology”. The problem, most of the time, is that there is no prior analysis behind the decision being proposed. It is just something trendy (and cool), and that’s it. As one of my former managers frequently said, “Hype and buzzwords are dangerous friends”.

The main question we need to ask is “What is the problem we are trying to solve?”. It is surprising how frequently we come up with solutions for problems we barely know. We are eager to jump into the fight without knowing the size of the opponent beforehand. What if it ends up being a monster instead of the small thingy we expected? Or just the opposite?

Introduction to Service Fabric (I)

This post is part of a Service Fabric series

  1. Introduction to Service Fabric
  2. Creating a WCF Service for Azure Service Fabric

In the beginning (not so far away) there was the local server, and the developer’s life was chaos. The IT team (if there was one; otherwise, the developer) was responsible for ensuring that the server where the applications were installed worked as needed, and it was their fault if it didn’t.

Later, with virtualization, the cloud came along, giving us the chance to transfer the responsibility (and the blame) to someone else.


What does all this have to do with this post? Microsoft Azure Service Fabric is a Platform as a Service offering, built from scratch to support cloud-based, distributed, high-scale, high-availability applications. It started as a proposal for cloud databases (CloudDB) and is currently used in rockstar Microsoft products like Cortana, Skype for Business, Power BI, SQL Azure, etc.

Its main advantage is how easy it makes managing concerns that go beyond functionality, like

  • Rolling updates
  • Logging
  • Monitoring and telemetry from the services
  • Failures
  • Security

This way the developer can focus all their effort and attention on coding.



Even though it is normally associated with microservices, Service Fabric’s benefits can be useful in multi-layer applications, APIs, etc. But what are microservices? Although there is no standard definition, they are normally identified by splitting an application’s functions into small parts. These parts are independently versioned, can be written in any language or technology, and are each oriented to solving one concrete piece of the problem the application intends to tackle. It is important to be clear that monolithic is not bad and microservices are not good per se; it all depends on the scenario and context.

By being independently distributed across different nodes (containers, servers, virtual machines) within a cluster, where replication and partitioning take place, each microservice can be scaled according to its own needs.


Service Fabric runs the same on Microsoft Azure, on other cloud providers like AWS, and even on private clouds, whether Linux or Windows. Even at development time, the required components are exactly the same, which makes it really easy to move from one environment to another when needed. This is because the components were designed to be standard, so no modifications are required for the environment where they will be executed.

The cluster is a set of nodes installed and configured to communicate with each other; it provides a level of abstraction between the application and the infrastructure where it runs. The main cluster abilities are

  • Supporting thousands of nodes
  • Dynamic change
  • Isolation unit

Infrastructure Services

Service Fabric provides a set of services to help with infrastructure management.

Cluster manager

In charge of cluster operations. By default it can be managed via REST over HTTP on port 19080, or via TCP on port 19000 using PowerShell.
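For example, connecting to a local development cluster from PowerShell (a sketch; the endpoint value depends on your cluster configuration) goes through that TCP port:

```
# Connect to the cluster management endpoint (TCP port 19000 by default)
Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000
```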

Failover manager

In charge of detecting when new nodes are added to the cluster, when they are removed, or when a failure occurs, in order to re-balance the cluster for high availability.


Naming service

Maps services to their endpoints, so they can communicate with each other.

Fault Analysis

Helps you to introduce failures to your services so you can test different scenarios in a controlled manner.

Image Store

Contains the actual bits of the services: the master copy used to create the replicas placed on the nodes.


Upgrade service

In charge of updating Service Fabric’s own components; it exists exclusively on Azure.


Programming models

When working with Service Fabric, you have 3 options for creating your services

Reliable services

Provides a simple way to integrate with Service Fabric when creating your services, benefiting from the platform tools.

Reliable actors

Built on top of the Reliable Services capabilities, it’s a framework that works with single-threaded units called actors, based on the design pattern of the same name.

Guest executable

It’s just that: an executable you can deploy to a Service Fabric cluster without fully integrating with the platform; Service Fabric simply ensures the executable stays up and running. The programming language doesn’t matter, so it is a good option for existing applications.


Application and services

An application is basically a set of services, defined in the ApplicationManifest.xml file; in Service Fabric terms, this definition is called an Application Type. Based on this type we create an Application Instance, which is what we hit at runtime; this is very similar to the class and instance concepts in OOP. The same goes for Service Type and Service Instance; additionally, a service is composed of 3 parts: code, data, and configuration.

Each of these elements has its own version, so we can have an application labeled version 2.1.1 composed of one service with version 1.0.0.
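As a rough illustration (the type names, versions, and package name below are made up, not from a real project), these version labels live in the application manifest, which references each service manifest and its version:

```
<!-- ApplicationManifest.xml (illustrative fragment) -->
<ApplicationManifest ApplicationTypeName="FortuneAppType"
                     ApplicationTypeVersion="2.1.1"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceManifestImport>
    <!-- Points to the service's own manifest and its independent version -->
    <ServiceManifestRef ServiceManifestName="FortuneServicePkg"
                        ServiceManifestVersion="1.0.0" />
  </ServiceManifestImport>
</ApplicationManifest>
```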


That’s it for now; we’ve covered Service Fabric concepts that will be used for our next tutorials.

Fortune cookie app powered by Azure Functions using Twilio and Sendgrid outputs

One of the greatest advantages of working with Azure Functions is being able to prototype applications easily. For this post I will create a simple Fortune cookie app, which will send a phrase to your email using SendGrid and to your phone via SMS using Twilio. The app will be composed of three pieces

  • Front-end web page using HttpTrigger
  • POST request processing with HttpTrigger
  • Queue processing using Azure Queue Storage Trigger

Programmers Language 101

Have you ever had a talk with a person who is a specialist in one topic, and at a certain point realized you didn’t understand half of the conversation? I have, lots of times. Doctors will talk about free radicals, lawyers about alibis, construction engineers about material resistance, advanced math teachers about Laplace transforms, and so on.

As programmers, we are not the exception. We tend to be so deep in our own world that we sometimes forget we are speaking our own language to people who don’t fully understand it. As Will Rogers said,

Everybody is ignorant, only on different subjects

I’m really ignorant on a lot of topics: when I speak to my accountant, I’d really like to have a guide that puts what accumulated depreciation or bank reconciliation really mean into plain human words.

So, thinking about that, I thought it might be useful to write some posts describing, in plain English, some of the terms we commonly use in programmers’ argot. If you are already in the programming world, you may want to jump straight to my Programming section, which covers more advanced topics.

If you are a non-technical person who frequently speaks with programmers, I hope you’ll find here some answers to the questions that may have arisen in those talks. I’ll base these posts on my previous experience as both an instructor and a programmer, trying to give you a better understanding of each topic.


    Tech interviews should be like a terrorist interrogation

    During my entire career I’ve been in a lot of tech screenings (on both sides of the desk), and one common pattern I’ve noticed is that, in many cases, the interview is just a standard questionnaire about the technologies required or desired for the position. What is the point of that? The best you can get is to filter out candidates who were too lazy to prepare for the interview.

    This is not 1990 anymore, when sharing information globally was a challenge for most people. Many of the questions in these types of interviews are already posted on some website; go check for yourself: just search in Google (or your preferred search engine) for “interview questions [fill in here with the desired technology or language]” and you will get plenty of results (unless your desired technology is some esoteric programming language like LOLCODE).

    Following scripted interviews helps neither the interviewer nor the candidate. On one occasion, when I was being screened for a developer position, the interviewer said, “Well, let’s proceed with the database questions”. Since it had been a long time since I had worked with databases directly, I said, “Well, it’s been a long time since I worked with databases, so I don’t think I’m in good shape for that”. His answer: “It doesn’t matter, I need to ask all the questions anyway”. What is the point of this? Why follow a predefined standard script?

    This makes me think that

    a) The interviewer is not “tech enough” to be able to ask his own questions
    b) The interviewer is too lazy to go deeper
    c) The interviewer does not care about the real impact of the process (it might be a routine task that he must do, either for ego or obligation)
    d) The company does not really care much about the proficiency level of the people it is hiring, or is ignorant of the impact of this
    e) All of the above

    So I really mean it: tech interviews should be like a terrorist interrogation, because what you really want is the truth about what the candidate is capable of doing. You need to push hard to determine a person’s real background, the challenges he has gone through, how he has solved them, etc. How can you get that with a standard questionnaire? You need to follow the path the candidate’s answers give you, not just say “Very well, next question”. He might have memorized some stuff just to make people think he is an expert, but you shouldn’t be fooled by flamboyant answers; they might hide more ignorance than simple ones do.

    In order to perform this type of interview, the interviewer really needs a strong technical background, so he can move through the candidate’s answers and determine whether this is the right person for the position. Sadly, this is not the case at many companies, where senior positions are earned by “years-after-college” instead of real “years-of-experience”, but I’ll leave that topic for another post.

    If you are a candidate and the interview you are going through is just a bunch of standard questions, follow Scott Hanselman’s wise advice: excuse yourself and run.




    Html.RatingFor: Extending the MVC HtmlHelper

    While working on a web application, I needed to add a rating for a product. The rating will be between 1 and 5 and will always be an int, so my model has a property like public int Rating { get; set; }. I decided to add 5 radio buttons, each holding the corresponding rating value.

    But then (as always happens) the requirement changed: we no longer wanted just 1 rating property, but 5, and manually adding 5 radio buttons for each one was something I didn’t want to do.

    To solve this problem, I created an extension method for the HtmlHelper class we normally use in MVC applications. As you may notice, the method contains all the logic for adding the set of radio buttons needed for the rating process.

    public static MvcHtmlString RatingFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, int from, int to, object htmlAttributes = null)
    {
        var builder = new StringBuilder();
        var metadata = ModelMetadata.FromLambdaExpression(expression, htmlHelper.ViewData);
        var model = metadata.Model;
        var name = ExpressionHelper.GetExpressionText(expression);
        var attributes = HtmlHelper.AnonymousObjectToHtmlAttributes(htmlAttributes);
        var fullName = htmlHelper.ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldName(name);
        int direction = 1;
        if (from > to)
            direction = -1;
        for (var i = from; direction == 1 ? i <= to : i >= to; i += direction)
        {
            var tagBuilder = new TagBuilder("input");
            tagBuilder.MergeAttributes(attributes);
            tagBuilder.MergeAttribute("type", "radio");
            tagBuilder.MergeAttribute("name", fullName, true);
            tagBuilder.MergeAttribute("value", i.ToString(CultureInfo.InvariantCulture));
            // If the model has a value, we need to select it
            if (model != null && model.Equals(i))
                tagBuilder.MergeAttribute("checked", "checked");
            // Flag the input when the model state contains errors for this field
            ModelState modelState;
            if (htmlHelper.ViewData.ModelState.TryGetValue(fullName, out modelState) && modelState.Errors.Count > 0)
                tagBuilder.AddCssClass(HtmlHelper.ValidationInputCssClassName);
            tagBuilder.MergeAttributes(htmlHelper.GetUnobtrusiveValidationAttributes(name, metadata));
            builder.Append(tagBuilder.ToString(TagRenderMode.SelfClosing));
        }
        return MvcHtmlString.Create(builder.ToString());
    }

    One important part of this code is

    if (model != null && model.Equals(i))
     tagBuilder.MergeAttribute("checked", "checked");

    where we assign the value of the property if it is already set. This is useful when you use this method on an Edit process.

    Now, in your view, instead of having to create all those radio buttons manually, you can have something like this

    @Html.RatingFor(model => model.Rating, 1, 5)

    in order to add a rating from 1 to 5.

    Hopefully you will find this useful. If you have created another useful helper, it would be nice if you share it with the community 🙂

    Public field not bound when posting to WebAPI (or a deep dive into WebAPI model binding)

    When trying to create a sandbox project using WebAPI (on MVC4), I was struggling with a weird problem: my data wasn’t being received on the server. I had the following jQuery call

    $.post("api/Values", {value1:1, value2:2}, function(result){ console.log(result); })

    and the WebAPI service action that I was targeting was something like this

    public IEnumerable<string> Post(Dummy value)
    {
        return new string[] { value.Value1, value.Value2 };
    }

    I noticed that even though the instance of Dummy was being created, Value1 and Value2 were always null. The Dummy class was

    public class Dummy
    {
        public string Value1;
        public string Value2;
    }

    Pretty simple, right? Well, after doing a lot of research, I changed, by accident, one of the Dummy fields to become a property

    public class Dummy
    {
        public string Value1;
        public string Value2 { get; set; }
    }

    I tested again and voilà!… well, half a voilà, actually: when posting, I was now receiving data in Value2, but still not in Value1. This was really intriguing… how come the property was being assigned correctly but not the field? Both are public, right? Why the difference?
    Obviously, I knew the fix was to change both fields into properties, but I wanted to know why this was happening. I started digging into how WebAPI works and found a really interesting Web API poster that describes the full lifecycle of an HTTP message. There I got my first clue, so I started researching how model binding happens. As described there, one of the binding mechanisms is the MediaTypeFormatter. Since I was sending a JSON object, I tested the deserialization process using the test methods provided on the WebAPI overview site

    static T Deserialize<T>(MediaTypeFormatter formatter, string str) where T : class
    {
        // Write the serialized string to a memory stream.
        Stream stream = new MemoryStream();
        StreamWriter writer = new StreamWriter(stream);
        writer.Write(str);
        writer.Flush();
        stream.Position = 0;
        // Deserialize to an object of type T.
        return formatter.ReadFromStreamAsync(typeof(T), stream, null, null).Result as T;
    }

    passing the same JSON object I had in my jQuery call. The result: the method successfully assigned the values for both the field and the property. By inspecting the HTTP request headers, I found out that the data wasn’t actually being sent as JSON but with the header Content-Type: application/x-www-form-urlencoded; charset=UTF-8, which tells the server that data is being sent like this: Value1=1&Value2=2. So we need to change the AJAX call to look like this

    $.ajax({
        url: "api/Values",
        data: JSON.stringify({ Value1: 1, Value2: 2 }),
        type: "POST",
        contentType: "application/json; charset=utf-8"
    });

    please notice 2 things: I changed the contentType of the request AND stringified the JSON object. With these changes, Dummy’s public fields were now populated correctly.
    Now, I still wanted to know why my values weren’t bound when I wasn’t specifying the request content type. Doing more research, I found this really interesting article by Mike Stall called How WebAPI does parameter binding, which states

    There are 2 techniques for binding parameters: Model Binding and Formatters. In practice, WebAPI uses model binding to read from the query string and Formatters to read from the body

    If you are not bored yet, you may remember that when we didn’t specify the request content type, the data was sent as Content-Type: application/x-www-form-urlencoded; charset=UTF-8. This means WebAPI was using model binding (and not formatters) to populate the Dummy instance. Moreover, the article contains another interesting statement:

    ModelBinding is the same concept as in MVC, […]. Basically, there are “ValueProviders” which supply pieces of data such as query string parameters, and then a model binder assembles those pieces into an object.

    And how does model binding work in MVC? That was my next question, and I was really happy that Microsoft open-sourced the ASP.NET WebStack, because that is where we can find the answer. If we look into the DefaultModelBinder source code, we’ll find that for complex models it only looks at the object’s properties to populate the data (maybe because having public fields is considered a bad practice).
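To see the distinction the binder relies on, here is a small standalone sketch (plain reflection, not WebAPI code) showing that enumerating an object’s properties skips public fields entirely:

```csharp
using System;
using System.Reflection;

// Mirrors the Dummy class from this post: one public field, one property.
public class Dummy
{
    public string Value1;              // public field
    public string Value2 { get; set; } // public property
}

public class Program
{
    public static void Main()
    {
        // A binder walking GetProperties() only ever sees Value2;
        // the public field Value1 is invisible to it.
        foreach (PropertyInfo p in typeof(Dummy).GetProperties())
            Console.WriteLine(p.Name); // prints only "Value2"
    }
}
```

Any binder that walks GetProperties() will therefore populate Value2 and never even notice Value1.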
    Well, I hope you find this post as interesting as I found learning all of this. Sometimes making silly errors can lead you to learn really interesting things.


    Backing field for automatically implemented property [Field] must be fully assigned before control is returned to the caller

    Working with structs in C# gives you a lot of flexibility in the way you design your applications, but since they are not reference types, they have some special behaviors we need to take into account.
    Recently I was working on a web application and created a struct to hold a pair of values that is used very frequently. It looks something like this

    public struct StringTuple
    {
        public string Value1 { get; set; }
        public string Value2 { get; set; }
    }
    After some code changes, I decided it would be a good option to have a constructor to set the struct values

    public struct StringTuple
    {
        public StringTuple(string value1, string value2)
        {
            Value1 = value1;
            Value2 = value2;
        }
        public string Value1 { get; set; }
        public string Value2 { get; set; }
    }

    but the compiler started complaining with the following error

    Backing field for automatically implemented property Value1 must be fully assigned before control is returned to the caller

    It was the first time I had seen that error, so after some time of thinking and research I remembered one of the basic principles of working with structs: every field must be assigned before the struct can be used. Auto-implemented properties hide their backing fields, so when our new constructor calls the property setters, the compiler complains that the backing fields haven’t been assigned yet.

    The solution

    Since the problem is that we’re not calling the default constructor (which zero-initializes every field, backing fields included), the solution is obviously to call it; we just need to chain that call from the constructor we introduced.

    public struct StringTuple
    {
        public StringTuple(string value1, string value2) : this()
        {
            Value1 = value1;
            Value2 = value2;
        }
        public string Value1 { get; set; }
        public string Value2 { get; set; }
    }

    With that, the error message is gone and we can continue happily working with structs.