Introduction to Service Fabric (I)

This post is part of Service Fabric series

  1. Introduction to Service Fabric
  2. Creating a WCF Service for Azure Service Fabric

In the beginning (not so far away) there was the local server, and the developer’s life was chaos. The IT team (if there was one; otherwise the developer) was responsible for ensuring that the server where the applications were installed worked as needed, and it was their fault when it didn’t.

Later, with the rise of virtualization, the cloud arrived, giving us the chance to transfer that responsibility (and the blame) to someone else.


What does all this have to do with this post? Microsoft Azure Service Fabric is a Platform as a Service offering, built from scratch to support cloud, distributed, high-scale and high-availability applications. It started as a proposal for cloud databases (CloudDB) and is currently used in flagship Microsoft products like Cortana, Skype for Business, Power BI, SQL Azure, etc.

Its main advantage is how easy it makes it for developers to manage concerns that go beyond functionality, like

  • Rolling updates
  • Logging
  • Monitoring and telemetry from the services
  • Failures
  • Security

This way developers can focus all their efforts and attention on coding.



Even though it is normally associated with microservices, Service Fabric’s benefits can be useful in multi-layer applications, APIs, etc. But what are microservices? Although there is no standard definition, they are normally identified by splitting application functions into small parts. These parts are independently versioned, written in any language or technology, and each is focused on solving one concrete piece of the problem the application intends to tackle. It is important to be clear that monolithic is not inherently bad, nor are microservices inherently good; it all depends on the scenario and context.

Because they are distributed independently across different nodes (containers, servers, virtual machines) within a cluster, where replication and partitioning are performed, each microservice can be scaled according to its own needs.


Service Fabric runs the same on Microsoft Azure, on other cloud providers like AWS, and even on private clouds, whether on Linux or Windows. Even at development time the required components are exactly the same, which makes it really easy to move from one environment to another when needed. This is because the components were designed to be standard, and no modifications are needed for the environment where they will be executed.

The cluster is a set of nodes installed and configured to communicate with each other; it provides a level of abstraction between the application and the infrastructure where it is executed. The main cluster abilities are

  • Supporting thousands of nodes
  • Dynamic change
  • Isolation unit

Infrastructure Services

Service Fabric provides a set of services to help with infrastructure management.

Cluster manager

In charge of cluster operations. By default it can be managed via REST over HTTP through port 19080, and via TCP through port 19000 using PowerShell.
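For example, with the Service Fabric SDK installed, you can manage a local development cluster from PowerShell roughly like this (the endpoint shown is the default client endpoint; adjust it for your own cluster):

    # Connect through the TCP client endpoint (port 19000 by default)
    Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

    # List the applications deployed to the cluster
    Get-ServiceFabricApplication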

Failover manager

In charge of detecting when new nodes are added to the cluster, when they are removed, or when a failure occurs, in order to re-balance the services for high availability.


Naming service

Maps services to their endpoints, so they can communicate with each other.

Fault Analysis

Helps you to introduce failures to your services so you can test different scenarios in a controlled manner.

Image Store

Contains the actual bits of the services, the master used for creating the copies that are replicated on the nodes.


Upgrade service

In charge of upgrading Service Fabric components; available exclusively on Azure.


Programming models

When working with Service Fabric, you have 3 options for creating your services

Reliable services

Provides a simple way to integrate with Service Fabric when creating your services, benefiting from the platform’s tools.

Reliable actors

Built on top of Reliable Services capabilities, it’s a framework that works with single-threaded units called Actors, based on the design pattern with the same name.
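As a rough sketch (the names here are hypothetical, and the exact base types come from the Microsoft.ServiceFabric.Actors packages), an actor is defined by an interface deriving from IActor plus a class deriving from Actor:

    // Hypothetical actor contract; all operations are asynchronous
    public interface IFortuneTeller : IActor
    {
        Task<string> GetFortuneAsync();
    }

    // Service Fabric guarantees single-threaded access to each actor instance
    internal class FortuneTeller : Actor, IFortuneTeller
    {
        public FortuneTeller(ActorService actorService, ActorId actorId)
            : base(actorService, actorId) { }

        public Task<string> GetFortuneAsync()
        {
            return Task.FromResult("You will refactor legacy code today.");
        }
    }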

Guest executable

It’s just that: an executable that you can deploy to a Service Fabric cluster without fully integrating with the platform; Service Fabric just ensures the executable stays up and running. The programming language doesn’t matter, so it is a good option for existing applications.


Application and services

An application is basically a set of services, defined in the ApplicationManifest.xml file; in Service Fabric terms we call it an Application Type. Based on this type we create an Application Instance, which is the one we hit at runtime; this is very similar to the class and instance concepts in OOP. The same goes for Service Type and Service Instance; additionally, a service is composed of 3 parts: code, data and configuration.

Each of these elements has its own version, so we can have an application with version label 2.1.1 composed of one service with version 1.0.0.
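As an illustration, a trimmed-down ApplicationManifest.xml might look like this (the type and package names are hypothetical); note how the application and the service it imports each carry their own independent version:

    <ApplicationManifest ApplicationTypeName="FortuneAppType"
                         ApplicationTypeVersion="2.1.1"
                         xmlns="http://schemas.microsoft.com/2011/01/fabric">
      <!-- The referenced service manifest is versioned on its own -->
      <ServiceManifestImport>
        <ServiceManifestRef ServiceManifestName="FortuneServicePkg"
                            ServiceManifestVersion="1.0.0" />
      </ServiceManifestImport>
    </ApplicationManifest>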


That’s it for now; we’ve covered Service Fabric concepts that will be used for our next tutorials.

Fortune cookie app powered by Azure Functions using Twilio and Sendgrid outputs

One of the greatest advantages of working with Azure Functions is being able to prototype applications easily. For this post I will create a simple Fortune cookie app, which will send a phrase to your email using SendGrid and to your phone via SMS using Twilio. The app will be composed of three pieces

  • Front-end web page using HttpTrigger
  • POST request processing with HttpTrigger
  • Queue processing using Azure Queue Storage Trigger

Programmers Language 101

Has it ever happened to you that you’re talking with a person who is a specialist in some topic, and at a certain point you realize you don’t understand half of the conversation? It has to me, a lot of times. Doctors will talk about free radicals, lawyers about alibis, construction engineers about material resistance, advanced math teachers about Laplace transforms, and so on.

As programmers, we are not the exception. We tend to be so deep in our own world that we sometimes forget we are speaking our own language with people who don’t fully understand it. As Will Rogers said

Everybody is ignorant, only on different subjects

I’m really ignorant about a lot of topics: when I speak with my accountant, I’d really like to have a guide that explains what accumulated depreciation or bank reconciliation really mean in plain human words.

So, thinking about that, I thought it might be useful to write some posts describing, in plain English, some of the terms we commonly use as programmers’ argot. If you are already in the programming world, you may prefer to jump to my Programming section, which covers more advanced topics.

If you are a non-technical person who frequently speaks with programmers, I hope you’ll find here some answers to the questions that may have arisen in those talks. I’ll base the content of these posts on my previous experience as both an instructor and a programmer, trying to give you a better understanding of each topic.


    Tech interviews should be like a terrorist interrogation

    During my entire career I’ve been in a lot of tech screenings (on both sides of the desk), and one common pattern I’ve noticed is that, in many cases, the interview is just a standard questionnaire about the technologies required or desired for the position. What is the point of that? The best you can get is filtering out the candidates who were too lazy to prepare for the interview.

    This is not 1990 anymore, when sharing information globally was a challenge for most people. Many of the questions in these types of interviews are already posted on some website; go check for yourself: just search in Google (or your preferred search engine) for “interview questions [fill in the desired technology or language]” and you will get plenty of results (unless your desired technology is some esoteric programming language like LOLCODE).

    Following scripted interviews helps neither the interviewer nor the candidate. On one occasion, when I was being screened for a developer position, the interviewer said, “Well, let’s proceed with the database questions”. Since it had been a long time since I had worked with databases directly, I said, “Well, it’s been a long time since I worked with databases, so I think I’m not in good shape for that”. His answer was, “It doesn’t matter, I need to ask all the questions anyway”. What is the point of this? Why do you want to follow a predefined, standard script?

    This makes me think that

    a) The interviewer is not “tech enough” to be able to make his own questions
    b) The interviewer is too lazy to go deeper
    c) The interviewer does not care about the real impact of the process (it might be a routine task that he must do, either for ego or obligation)
    d) The company does not really care much about the proficiency level of the people it is hiring, or is ignorant about the impact of that
    e) All of the above

    So I really mean that tech interviews should be like a terrorist interrogation, because what you really want is to get the truth about what the candidate is capable of doing. You need to push hard to determine the person’s real background, the challenges they have gone through, how they sorted them out, etc. How can you get this with a standard questionnaire? You need to follow the path the candidate’s answers give you, not just say “Very well, next question”. They might have memorized some stuff just to make people think they’re an expert, but you shouldn’t be fooled by flamboyant answers; they might hide more ignorance than the simple ones.

    In order to be capable of performing this type of interview, the interviewer really needs a strong technical background, so they can move through the candidate’s answers and determine whether this is the right person for the position. Sadly this is not the case at many companies, where Senior positions are earned by “years-after-college” instead of real “years-of-experience”, but I’ll leave that topic for another post.

    If you are a candidate and the interview you are going through is just a bunch of standard questions, follow the wise advice of Scott Hanselman: excuse yourself and run.




    Html.RatingFor: Extending the MVC HtmlHelper

    When working on a web application, I needed to add a rating for a product. That rating would be between 1 and 5 and would always be an int, so my model had a property like public int Rating { get; set; }. I decided to add 5 radio buttons, each holding the corresponding rating value.

    But then (as always happens) the requirement changed: we didn’t want to have only 1 rating property, but 5. Adding 5 radio buttons for each was something I didn’t want to do.

    In order to solve this problem, I created an extension method for the HtmlHelper class that we normally use in our MVC applications. As you may notice, the method contains all the logic for adding the set of radio buttons needed for the rating process.

    public static MvcHtmlString RatingFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, int from, int to, object htmlAttributes = null)
    {
        var builder = new StringBuilder();
        var metadata = ModelMetadata.FromLambdaExpression(expression, htmlHelper.ViewData);
        var model = metadata.Model;
        var name = ExpressionHelper.GetExpressionText(expression);
        var attributes = HtmlHelper.AnonymousObjectToHtmlAttributes(htmlAttributes);
        var fullName = htmlHelper.ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldName(name);
        int direction = from > to ? -1 : 1;
        for (var i = from; direction == 1 ? i <= to : i >= to; i += direction)
        {
            var tagBuilder = new TagBuilder("input");
            tagBuilder.MergeAttributes(attributes);
            tagBuilder.MergeAttribute("type", "radio");
            tagBuilder.MergeAttribute("name", fullName, true);
            tagBuilder.MergeAttribute("value", i.ToString(CultureInfo.InvariantCulture));
            // If the model already has a value, select the matching radio button
            if (model != null && model.Equals(i))
                tagBuilder.MergeAttribute("checked", "checked");
            // Flag the input when the model state contains errors for this field
            ModelState modelState;
            if (htmlHelper.ViewData.ModelState.TryGetValue(fullName, out modelState) && modelState.Errors.Count > 0)
                tagBuilder.AddCssClass(HtmlHelper.ValidationInputCssClassName);
            tagBuilder.MergeAttributes(htmlHelper.GetUnobtrusiveValidationAttributes(name, metadata));
            builder.Append(tagBuilder.ToString(TagRenderMode.SelfClosing));
        }
        return MvcHtmlString.Create(builder.ToString());
    }

    One important part of this code is

    if (model != null && model.Equals(i))
     tagBuilder.MergeAttribute("checked", "checked");

    where we pre-select the radio button that matches the property value if it is already set. This is useful when you use this method in an Edit process.

    Now, in your view, instead of having to create all those radio buttons manually, you can have something like this

    @Html.RatingFor(model => model.Rating, 1, 5)

    in order to add a rating from 1 to 5.
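    For the call above, the helper renders markup along these lines (one radio button per value; the exact attributes and their order may vary, for example when validation attributes are emitted):

        <input name="Rating" type="radio" value="1" />
        <input name="Rating" type="radio" value="2" />
        <input name="Rating" type="radio" value="3" />
        <input name="Rating" type="radio" value="4" />
        <input name="Rating" type="radio" value="5" />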

    Hopefully you will find this useful. If you have created another useful helper, it would be nice if you share it with the community 🙂

    Public field not bound when posting to WebAPI (or a deep dive into WebAPI model binding)

    When trying to create a sandbox project using WebAPI (on MVC4), I was struggling with a weird problem: my data wasn’t being received on the server. I had the following jQuery call

    $.post("api/Values", {value1:1, value2:2}, function(result){ console.log(result); })

    and the WebAPI service action that I was targeting was something like this

    public IEnumerable<string> Post(Dummy value)
    {
        return new string[] { value.Value1, value.Value2 };
    }

    I noticed that even though the instance of Dummy was being created, Value1 and Value2 were always null. The Dummy class was

    public class Dummy
    {
        public string Value1;
        public string Value2;
    }

    Pretty simple, right? Well, after doing a lot of research, I accidentally changed one of the Dummy fields to become a property

    public class Dummy
    {
        public string Value1;
        public string Value2 { get; set; }
    }

    I tested again and voilà!… well, half a voilà actually… When posting, I was now receiving data in Value2, but still not in Value1. This was really intriguing… how come the property was being assigned correctly but not the field? Both are public, right? Why the difference?
    Obviously, I knew the solution was to change both fields to properties, but I wanted to know why this was happening. I started digging into how WebAPI works and found a really interesting Web API poster that describes the full lifecycle of an HTTP message. There I got my first clue, so I started researching how model binding happens. As described there, one of the binding mechanisms is the MediaTypeFormatter. Since I was sending a JSON object, I tested the deserialization process based on the test methods provided on the WebAPI overview site

    T Deserialize<T>(MediaTypeFormatter formatter, string str) where T : class
    {
        // Write the serialized string to a memory stream.
        Stream stream = new MemoryStream();
        StreamWriter writer = new StreamWriter(stream);
        writer.Write(str);
        writer.Flush();
        stream.Position = 0;
        // Deserialize to an object of type T
        return formatter.ReadFromStreamAsync(typeof(T), stream, null, null).Result as T;
    }

    passing the same JSON object that I had in my jQuery call. The result: the method successfully assigned the values for both the field and the property. By inspecting the HTTP request headers, I found out that the data wasn’t actually being sent as JSON, but with Content-Type: application/x-www-form-urlencoded; charset=UTF-8, which tells the server that the data is being sent like this: Value1=1&Value2=2. So we need to change the AJAX call to be like this

    $.ajax({
        url: "api/Values",
        data: JSON.stringify({ Value1: 1, Value2: 2 }),
        type: "POST",
        contentType: "application/json; charset=utf-8"
    });

    Please notice 2 things: I changed the contentType for the request AND stringified the JSON object. With these changes, the Dummy public fields were now populated correctly.
    Now, I still wanted to know why my values weren’t bound when I wasn’t specifying the request content type. Doing more research, I found this really interesting article by Mike Stall called How WebAPI does parameter binding, which states

    There are 2 techniques for binding parameters: Model Binding and Formatters. In practice, WebAPI uses model binding to read from the query string and Formatters to read from the body

    If you are not yet bored, you might remember that when we didn’t specify the request content type, the data was sent as Content-Type: application/x-www-form-urlencoded; charset=UTF-8. This means that WebAPI was using model binding (and not formatters) to populate the Dummy instance. Moreover, the article has another interesting statement:

    ModelBinding is the same concept as in MVC, […]. Basically, there are “ValueProviders” which supply pieces of data such as query string parameters, and then a model binder assembles those pieces into an object.

    And how does model binding work in MVC? That was my next question, and I was really happy that Microsoft open-sourced the ASP.NET WebStack, because that is where we can find the answer. If we look into the DefaultModelBinder source code, we’ll find that for complex models it only looks at the object’s properties to populate the data (maybe because having public fields is considered a bad practice).
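    This distinction is easy to confirm with plain reflection, which is essentially the kind of member enumeration a binder relies on; for the Dummy class from this post, the field and the property show up as different member kinds:

        // typeof(Dummy).GetProperties() returns only Value2 (the property),
        // while typeof(Dummy).GetFields() returns only Value1 (the field);
        // a binder that enumerates properties will therefore never see Value1
        foreach (var p in typeof(Dummy).GetProperties())
            Console.WriteLine("Property: " + p.Name);
        foreach (var f in typeof(Dummy).GetFields())
            Console.WriteLine("Field: " + f.Name);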
    Well, I hope you find this post as interesting as I found learning all this. Sometimes making silly errors can lead you to learn really interesting things.


    Backing field for automatically implemented property [Field] must be fully assigned before control is returned to the caller

    Working with structs in C# gives you a lot of flexibility in the way you design your applications, but since they are not reference types, they have some special behaviors we need to take into account.
    Recently I was working on a web application and created a struct to hold a pair of values that is used very frequently. It is something like this

    public struct StringTuple
    {
        public string Value1 { get; set; }
        public string Value2 { get; set; }
    }

    After some code changes, I decided it would be a good option to have a constructor to set the struct’s values

    public struct StringTuple
    {
        public StringTuple(string value1, string value2)
        {
            Value1 = value1;
            Value2 = value2;
        }
        public string Value1 { get; set; }
        public string Value2 { get; set; }
    }

    but the compiler started complaining, giving me the following error

    Backing field for automatically implemented property Value1 must be fully assigned before control is returned to the caller

    It was the first time I had seen that error, so after some time of thinking and research I remembered one of the basic principles of working with structs: all of a struct’s fields must be fully assigned before a constructor returns, and it is the default constructor that initializes them. That is why creating a new constructor caused the problem: since auto-implemented properties hide their backing fields, our overloaded constructor couldn’t assign them directly, and we were skipping the default member initialization.

    The solution

    Since the problem is that we’re not calling the default constructor, the solution is obviously to call it, so we just need to chain a call to it from the constructor we introduced.

    public struct StringTuple
    {
        public StringTuple(string value1, string value2) : this()
        {
            Value1 = value1;
            Value2 = value2;
        }
        public string Value1 { get; set; }
        public string Value2 { get; set; }
    }

    With that, the error message is gone and we can continue happily working with structs.

    Install XAMPP on a Ubuntu 13.04 virtual machine running on Windows Azure

    One of the many great things about Windows Azure is how easily you can create a virtual machine, no matter the OS of your preference. But for good or bad, your virtual machine will be fresh, so you need to set up whatever you need in order to get to work.

    Recently I needed to set up some web applications that I preferred to run on Linux, and for that I like to use XAMPP because of the simplicity of the installation process. But this time it was not as straightforward as in my previous experiences, so this is how I did it.

    I’ll assume that you already have the virtual machine created; I chose an Ubuntu Server 13.04 instance from the gallery. After getting the virtual machine up and running, the first step was to download the latest XAMPP version from the Apache Friends website.

    sudo wget

    After that you need to extract the files from the tar, following the process described in the XAMPP installation guide

    sudo tar xvfz xampp-linux-1.8.1.tar.gz -C /opt

    So far, so good. But when we try to start our XAMPP server using

    sudo /opt/lampp/lampp start

    We get the following error

    XAMPP is currently only availably as 32 bit application. Please use a 32 bit compatibility library for your system.

    To solve this there are 2 possible solutions, both of which start by doing

    sudo apt-get update

    After this, you can install the ia32-libs package

    sudo apt-get install ia32-libs

    This solution worked for me on previous Ubuntu versions, but not this time. If it doesn’t work for you either, then you need to run the following command

    sudo dpkg --add-architecture i386 && sudo apt-get update && sudo apt-get install ia32-libs

    As stated in this answer,

    (…) installing through WUBI did not correctly detect the available foreign architectures. As tumbleweed suggested printing the foreign architectures probably returns nothing. Add i386 as a foreign architecture, update the apt cache, then install the 32 bit libs.
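    To double-check, you can print the configured foreign architectures before and after adding i386 (on a machine affected by this issue, the first call typically prints nothing):

        # Empty output means no foreign architectures are configured yet
        dpkg --print-foreign-architectures

        sudo dpkg --add-architecture i386

        # i386 should now appear in the list
        dpkg --print-foreign-architectures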

    So now you should be able to start your XAMPP server with

    sudo /opt/lampp/lampp start

    You should now get something like this

    Starting XAMPP 1.8.1...
    LAMPP: Starting Apache...
    LAMPP: Starting MySQL...
    LAMPP started.

    With this you have successfully installed your XAMPP server; the next step is to test your web server. When you create a new virtual machine, by default the only open port is the one assigned to SSH. In order to access the server via a different port, we need to create a new endpoint. On the Virtual Machine administration page, go to the Endpoints tab.

    There you will see the list of the endpoints you already have. If it is a new VM, you might see only the one corresponding to SSH.

    Click the ADD ENDPOINT button at the bottom of the page, and you will see a small window to create a new endpoint.

    Click the Next button and you will see the window to specify the endpoint data.

    You can choose any name you want, but it cannot be the same as an existing one; the protocol will be TCP. The public port is the one you will use to access your web server, so it can be anything you want. The private port is the one your XAMPP server uses to serve the content; it is normally 80, but you can change that in the XAMPP configuration according to your needs.

    After clicking the complete button, you should see your new endpoint listed, and you can now access your web server from anywhere in the world with something like this


    Once the page loads, you will most likely see the following error message

    New XAMPP security concept:
    Access to the requested object is only available from the local network.
    This setting can be configured in the file "httpd-xampp.conf".
    If you think this is a server error, please contact the webmaster.

    So what you need to do is modify the specified file. With our installation, it is located in the /opt/lampp/etc/extra/ directory. We need to find the section titled “New XAMPP security concept” and comment out the full LocationMatch section, or adjust the allowed IP addresses if you don’t want to open your site to the public.

    Another change we need to make in the same file is in the Directory "/opt/lampp/phpmyadmin" section: we need to add Require all granted there to be able to access the phpMyAdmin site. Remember to add some IP filters so it is not open to anybody who has the URL address.

    To finish, just restart your XAMPP server

    sudo /opt/lampp/lampp restart

    And voilà, you are now ready to work with your XAMPP server on the cloud.

    Localize your MVC app based on a subdomain

    Having an application in multiple languages is now a requirement in many projects. In ASP.NET MVC, you can tell your application that the language it should use corresponds to the one the browser specifies. While this is a really nice feature in the ideal scenario (since the user gets the application in the proper language automatically), there are some scenarios where this might not be the expected behavior, like:

    • If the locale of the computer is different from the one the user prefers for your application (for example, when using a computer other than his or her own)
    • When the browser settings have been modified to some value different from what the user prefers, and he or she doesn’t know how to adjust that setting in the browser.
    In these cases, the user would rather have a “fallback” mechanism to select his or her preferred language. One of the options you can use to achieve this is selecting the language/locale based on a subdomain. This way, you give the users one URL address per desired language (for example, mx.yoursite.com for es-MX).


    In order to support this, you will need to create an ActionFilterAttribute, something like this

    public class LocalizationFilterAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var locales = new Dictionary<string, string>();
            locales.Add("mx", "es-MX");
            locales.Add("sp", "es-ES");
            locales.Add("vi", "vi-VN");
            locales.Add("fi", "fi-FI");
            var subdomain = GetSubDomain();
            if (subdomain != string.Empty && locales.ContainsKey(subdomain))
            {
                Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo(locales[subdomain]);
                Thread.CurrentThread.CurrentUICulture = new System.Globalization.CultureInfo(locales[subdomain]);
                HttpContext.Current.Response.Write(String.Format("Culture: {0}", Thread.CurrentThread.CurrentCulture.Name));
            }
            else
            {
                HttpContext.Current.Response.Write("Culture: Default ");
            }
        }

        private string GetSubDomain()
        {
            var url = HttpContext.Current.Request.Headers["HOST"];
            var index = url.IndexOf(".");
            if (index < 0)
                return string.Empty;
            var subdomain = url.Split('.')[0];
            if (subdomain == "www" || subdomain == "localhost")
                return string.Empty;
            return subdomain;
        }
    }

    As you may have already noticed, with this code you define a list of locales that will be selected according to the provided subdomain. The next step is registering this filter so it applies to every request. You can do this in your Global.asax file

    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new LocalizationFilterAttribute());
        filters.Add(new HandleErrorAttribute());
    }

    Once you have a way to set the locale for the current thread, all you need to do is the localization process itself, which can be done as you already have it. In my case, I’m using resource files to hold all the translations, with a fallback resource file for any text that has no translation in the language-specific resource files.
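    For reference, the lookup that makes this work is roughly the following (the resource and type names here are hypothetical): ResourceManager honors the current UI culture and falls back to the neutral resource file when a culture-specific entry is missing.

        // Strings.resx holds the fallback texts; Strings.es-MX.resx etc. hold the translations
        var resources = new ResourceManager("MyApp.Resources.Strings", typeof(HomeController).Assembly);

        // Uses the CurrentUICulture set by the LocalizationFilterAttribute,
        // falling back to Strings.resx when no culture-specific entry exists
        var greeting = resources.GetString("Greeting", Thread.CurrentThread.CurrentUICulture);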

    By this, you can provide your users a simple and easy-to-remember way to get your application in their desired language.