Creating a WCF service for Azure Service Fabric (II)

This post is part of a series about Service Fabric.

  1. Introduction to Service Fabric
  2. Creating a WCF service for Azure Service Fabric

I wrote an article for DotNetCurry about WCF and Service Fabric. You can find it here.

Introduction to Service Fabric (I)

This post is part of a series about Service Fabric.

  1. Introduction to Service Fabric
  2. Creating a WCF service for Azure Service Fabric

In the beginning (in a time not so long ago) there was the on-premises server, and the developer's life was chaos. The IT team (or, if there wasn't one, the developer) was responsible for keeping the server where an application was deployed running as it should. It was their fault if that didn't happen.

Later, building on virtualization, the cloud arrived and brought, among its most notable advantages, the opportunity to blame... er, transfer... the responsibility to someone else.

 

What does all of this have to do with this post? Microsoft Azure Service Fabric is a Platform as a Service offering built from the ground up to support distributed, large-scale, highly available cloud applications. It started as a proposal for cloud databases (CloudDB) and is currently used in flagship Microsoft products such as Cortana, Skype for Business, Power BI, SQL Azure, etc.

Its main advantages lie in how easy it makes it for developers to handle concerns that go beyond functionality, such as:

  • Rolling upgrades
  • Logging
  • Service monitoring and telemetry
  • Failure handling
  • Security

This way, the developer can focus their effort and attention on the code.

 

Microservices

Although it is usually associated with microservices, the advantages of Service Fabric can also be leveraged in multi-tier applications, APIs, and so on. But what are microservices? Although there is no standard definition, they are usually characterized by splitting an application's functionality into smaller pieces. These pieces are versioned independently, can be built with any technology, are scalable, and are focused on solving one specific part of the problem being tackled. It is important to make clear that monolithic is not bad and microservices are not good; everything depends on the scenario and the context.

Since they are deployed independently to different nodes (containers, servers, virtual machines) grouped within a cluster, where replication and partitioning take place, each microservice can be scaled according to its own needs.

 

Cluster

Service Fabric runs the same way on Microsoft Azure, on other clouds such as AWS, and even on private clouds, whether on Linux or Windows. Even at development time the components used are the same, which makes it easy to move from one environment to another when necessary. This is because the components are designed to be standard, and no modifications are needed for the environment where they run. The cluster provides a level of abstraction between the application and the infrastructure it runs on; it is a set of nodes with the components installed and configured to communicate with each other. The main characteristics of the cluster are:

  • It can support thousands of nodes
  • It can be changed dynamically
  • It is a unit of isolation

 

Services

Service Fabric provides a set of system services to make administration easier:

Cluster manager

Responsible for the operations concerning the cluster. By default it can be managed via REST over HTTP on port 19080, and over TCP on port 19000 using PowerShell.
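
The same endpoints can also be used from code; here is a minimal sketch, assuming a local development cluster, that lists the cluster's nodes with the FabricClient class:

using System;
using System.Fabric;
using System.Fabric.Query;
using System.Threading.Tasks;

internal static class ClusterInfo
{
    private static async Task Main()
    {
        // 19000 is the default client connection (TCP) endpoint of a local development cluster
        var client = new FabricClient("localhost:19000");
        NodeList nodes = await client.QueryManager.GetNodeListAsync();
        foreach (Node node in nodes)
        {
            Console.WriteLine($"{node.NodeName}: {node.NodeStatus}");
        }
    }
}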

Failover manager

Responsible for detecting when new nodes are added to the cluster, when they are removed, or when one of them fails, and for rebalancing to ensure high availability of the services.

Naming

Maps the services to their endpoints so that they can communicate with each other.

Fault Analysis

Helps inject faults into the services so that different scenarios can be tested in a controlled way.

Image Store

Holds the bits of the services: the master copy from which the replicas distributed to the nodes are made.

Upgrade

In charge of upgrading the Service Fabric components themselves; available only in Azure.

 

Programming models

When working with Service Fabric, there are three options for creating the services:

Reliable services

Provides a simple way to integrate with Service Fabric when creating services, taking advantage of the platform's tooling.
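
To give an idea of what this looks like in C#, here is a minimal sketch of a stateless Reliable Service (the service and type names are made-up placeholders):

using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Runtime;

// A stateless service derives from StatelessService and overrides RunAsync
internal sealed class HelloService : StatelessService
{
    public HelloService(StatelessServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();
            // The actual work of the service goes here
            await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
        }
    }
}

internal static class Program
{
    private static void Main()
    {
        // Registers the service type with the runtime; Service Fabric decides where instances run
        ServiceRuntime.RegisterServiceAsync("HelloServiceType",
            context => new HelloService(context)).GetAwaiter().GetResult();
        Thread.Sleep(Timeout.Infinite);
    }
}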

Reliable actors

Built on top of Reliable Services, it is a framework that works with single-threaded units called Actors, based on the design pattern of the same name.

Guest executable

It is just that: an executable that can be deployed to a cluster without fully integrating with the platform; Service Fabric only makes sure it keeps running. The language does not matter, which makes it a good option for bringing over existing applications.

Applications and services

An application is basically a set of services, which are defined in the ApplicationManifest.xml file; in Service Fabric terms this is called an Application Type. From it we create an instance called an Application Instance, which is what we talk to at runtime, very much like the class and instance concepts in object-oriented programming. The same goes for Service Type and Service Instance; in addition, a service is made up of three parts: code, data, and configuration.

Each of these elements has its own version; for example, I can have version 2.1.1 of my application, made up of one service at version 1.0.0.

 


That wraps up the introduction; these are the basic Service Fabric concepts we will build on in the following tutorials.

Fortune cookie app built with Azure Functions using Twilio and SendGrid outputs

One of the biggest advantages of working with Azure Functions is how easily you can prototype applications. For this post I am going to build a fairly simple fortune cookie application that sends a fortune by email using SendGrid and by SMS using Twilio. The app is made up of three pieces (a minimal sketch of the second one follows the list):

  • Front-end page served by an HttpTrigger
  • POST request processed by an HttpTrigger
  • Queue processed using the Azure Queue Storage trigger
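
As a rough idea of what the second piece looks like, here is a minimal sketch assuming the C# class-library model; the function name, queue name and form fields are placeholders, and the actual app wires the SendGrid and Twilio outputs to the queue-triggered function:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class FortuneCookieFunctions
{
    // Receives the POST from the front-end page and drops a work item on a queue;
    // a queue-triggered function then sends the fortune by email (SendGrid) and SMS (Twilio)
    [FunctionName("RequestFortune")]
    public static IActionResult RequestFortune(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [Queue("fortune-requests")] out string queueMessage)
    {
        queueMessage = $"{req.Form["email"]};{req.Form["phone"]}";
        return new OkObjectResult("Your fortune is on its way!");
    }
}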

Html.RatingFor: Extending the MVC HtmlHelper

While working on a web application, I needed to add a rating for a product. The rating would be between 1 and 5 and would always be an int, so my model had a property like public int Rating { get; set; }. I decided to add 5 radio buttons, each holding the corresponding rating value.

But then (as always happens) the requirements changed: we no longer wanted just 1 rating property, but 5. Adding 5 radio buttons by hand for each one was something I didn't want to do.

In order to solve this problem, I created an extension method for the HtmlHelper class that we normally use in our MVC applications. As you may notice, in the method I created all the logic for adding the set of radio buttons needed for the rating process.


// Required namespaces: System, System.Globalization, System.Linq.Expressions, System.Text, System.Web.Mvc.
// The method must live inside a static class to be usable as an extension method.
public static MvcHtmlString RatingFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, int from, int to, object htmlAttributes = null)
{
	var builder = new StringBuilder();

	var metadata = ModelMetadata.FromLambdaExpression(expression, htmlHelper.ViewData);

	var model = metadata.Model;
	var name = ExpressionHelper.GetExpressionText(expression);

	var attributes = HtmlHelper.AnonymousObjectToHtmlAttributes(htmlAttributes);

	var fullName = htmlHelper.ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldName(name);

	// Support ratings in either direction (e.g. 1..5 or 5..1)
	int direction = 1;
	if (from > to)
		direction = -1;

	for (var i = from; direction == 1 ? i <= to : i >= to; i += direction)
	{
		var tagBuilder = new TagBuilder("input");
		tagBuilder.MergeAttributes(attributes);
		tagBuilder.MergeAttribute("type", "radio");
		tagBuilder.MergeAttribute("name", fullName, true);
		tagBuilder.MergeAttribute("value", i.ToString(CultureInfo.InvariantCulture));
		// If the model already has a value we need to select the matching radio button
		if (model != null && model.Equals(i))
		{
			tagBuilder.MergeAttribute("checked", "checked");
		}
		tagBuilder.GenerateId(fullName);

		// Flag the input with the validation CSS class when the model state has errors for this field
		ModelState modelState;
		if (htmlHelper.ViewData.ModelState.TryGetValue(fullName, out modelState))
		{
			if (modelState.Errors.Count > 0)
			{
				tagBuilder.AddCssClass(HtmlHelper.ValidationInputCssClassName);
			}
		}

		tagBuilder.MergeAttributes(htmlHelper.GetUnobtrusiveValidationAttributes(name, metadata));

		builder.AppendLine(tagBuilder.ToString(TagRenderMode.SelfClosing));
	}

	return MvcHtmlString.Create(builder.ToString());
}

One important part of this code is

if (model != null && model.Equals(i))
{
 tagBuilder.MergeAttribute("checked", "checked");
}

where we mark the radio button as checked if the property already has a value. This is useful when you use this method in an Edit view.

Now, in your view, instead of having to create all those radio buttons manually, you can have something like this

@Html.RatingFor(model => model.Rating, 1, 5)

in order to add a rating from 1 to 5.

Hopefully you will find this useful. If you have created another useful helper, it would be nice if you shared it with the community 🙂

Public field not bound when posting to WebAPI (or a deep dive into WebAPI model binding)

When trying to create a sandbox project using WebAPI (on MVC4), I was struggling with a weird problem: my data wasn’t being received on the server. I had the following jQuery call

$.post("api/Values", {value1:1, value2:2}, function(result){ console.log(result); })

and the WebAPI service action that I was targeting was something like this

public IEnumerable<string> Post(Dummy value)
{
 return new string[] { value.Value1, value.Value2 };
}

I noticed that even though the instance of Dummy was being created, Value1 and Value2 were always null. The Dummy class was

public class Dummy
{
 public string Value1;
 public string Value2;
}

Pretty simple, right? Well, after doing a lot of research, I accidentally changed one of the Dummy fields into a property

public class Dummy
{
 public string Value1;
 public string Value2 {get;set;}
}

I tested again and voilà!!… well, half a voilà actually… When posting, I was now receiving data in Value2, but still not in Value1. This was really intriguing… how come the property was being assigned correctly but not the field? Both are public, right? Why the difference?
Obviously, I knew the solution was changing both fields to be properties, but I wanted to know why that was happening. I started digging into how WebAPI works and found a really interesting Web API poster that describes the full lifecycle of an HTTP message. There I got my first clue, so I started researching how model binding happens. As described there, one of the binding mechanisms is the MediaTypeFormatter. Since I was sending a JSON object, I tested the deserialization process based on the test methods provided on the WebAPI overview site

T Deserialize<T>(MediaTypeFormatter formatter, string str) where T : class
{
 // Write the serialized string to a memory stream.
 Stream stream = new MemoryStream();
 StreamWriter writer = new StreamWriter(stream);
 writer.Write(str);
 writer.Flush();
 stream.Position = 0;
 // Deserialize to an object of type T
 return formatter.ReadFromStreamAsync(typeof(T), stream, null, null).Result as T;
}

passing the same JSON object that I had in my jQuery call. The result: the method successfully assigned the values for both the field and the property. By inspecting the HTTP request headers, I found out that the data wasn’t actually being sent as JSON but as Content-Type: application/x-www-form-urlencoded; charset=UTF-8, which tells the server that the data is being sent like this: Value1=1&Value2=2. So we need to change the AJAX call to look like this

$.ajax({
  url: "api/Values",
  data: JSON.stringify({Value1:1,Value2:2}),
  type: "POST",
  contentType:"application/json; charset=utf-8"
})

Please notice two things: I changed the contentType of the request AND stringified the JSON object. With these changes, the Dummy public fields were now populated correctly.
Now, I still wanted to know why my values weren’t bound when I wasn’t specifying the request content type. Doing more research, I found this really interesting article by Mike Stall called How WebAPI does parameter binding, which states

There are 2 techniques for binding parameters: Model Binding and Formatters. In practice, WebAPI uses model binding to read from the query string and Formatters to read from the body

If you are not bored yet, you might remember that when we didn’t specify the request content type, the data was being sent as Content-Type: application/x-www-form-urlencoded; charset=UTF-8. This means that WebAPI was using model binding (and not formatters) to populate the Dummy instance. Moreover, the article has another interesting statement:

ModelBinding is the same concept as in MVC, […]. Basically, there are “ValueProviders” which supply pieces of data such as query string parameters, and then a model binder assembles those pieces into an object.

And how does model binding work in MVC? That was my next question. And I was really happy that Microsoft open-sourced the ASP.Net WebStack, because that is where we can find the answer. If we look into the DefaultModelBinder source code, we’ll find that for complex models it only looks at the object's properties to populate the data (maybe because having public fields is considered a bad practice).
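
So, besides adjusting the request on the client, the server-side fix the post alludes to is simply turning both members into properties:

public class Dummy
{
 public string Value1 { get; set; }
 public string Value2 { get; set; }
}
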
Well, I hope you found this post as interesting as I found learning all of this. Sometimes making silly mistakes can lead you to learn really interesting things.

Backing field for automatically implemented property [Field] must be fully assigned before control is returned to the caller

Working with structs in C# gives you a lot of flexibility in the way you design your applications, but since they are not reference types, they have some special behaviors that we need to take into account.
Recently I was working on a web application and created a struct to hold a pair of values that is used very frequently. It is something like this

public struct StringTuple{
    public string Value1 {get; set;}
    public string Value2 {get; set;}
}

After some code changes, I decided it would be a good idea to add a constructor for setting the struct's values

public struct StringTuple
{
 public StringTuple(string value1, string value2)
 {
  Value1 = value1;
  Value2 = value2;
 }
 public string Value1 { get; set; }
 public string Value2 { get; set; }
}

but the compiler started complaining, giving me the following error

Backing field for automatically implemented property Value1 must be fully assigned before control is returned to the caller

It was the first time I had seen that error, so after some time thinking and researching I remembered one of the basic principles of working with structs: all of a struct's members are initialized when the default constructor runs. That is why adding our own constructor caused the problem: a struct constructor must assign every field (including the hidden backing fields of auto-implemented properties) before control returns, and by no longer going through the default constructor we were skipping that initialization.

The solution

Since the problem is that we’re not calling the default constructor, the solution is obviously to call it; we just need to chain that call from the constructor we just introduced.

public struct StringTuple
{
 public StringTuple(string value1, string value2):this()
 {
  Value1 = value1;
  Value2 = value2;
 }
 public string Value1 { get; set; }
 public string Value2 { get; set; }
}

With that, the error message is gone and we can continue happily working with structs.
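
As a quick check, both ways of constructing the struct now compile (the values are just for illustration):

var pair = new StringTuple("hello", "world"); // uses the new constructor
var empty = new StringTuple();                // the default constructor still zero-initializes both properties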

Install XAMPP on an Ubuntu 13.04 virtual machine running on Windows Azure

One of the many great things about Windows Azure is how easily you can create a virtual machine, no matter which OS you prefer. But for better or worse, your virtual machine will be a fresh install, so you need to set up whatever you need in order to get to work.

Recently I needed to set up some web applications that I preferred to run on Linux, and for that I like to use XAMPP because of the simplicity of the installation process. But this time it was not as straightforward as in my previous experiences, so this is how I did it.

I’ll assume that you already have the virtual machine created; I chose an Ubuntu Server 13.04 instance from the gallery. After getting the virtual machine up and running, the first step was to download the latest XAMPP version from the Apache Friends website.

sudo wget "http://sourceforge.net/projects/xampp/files/XAMPP%20Linux/1.8.1/xampp-linux-1.8.1.tar.gz/download?use_mirror=iweb&download=" -O xampp-linux-1.8.1.tar.gz

After that you need to extract the files from the tar, following the process described in the XAMPP installation guide

sudo tar xvfz xampp-linux-1.8.1.tar.gz -C /opt

So far, so good. But when we try to start our XAMPP server using

sudo /opt/lampp/lampp start

We get the following error

XAMPP is currently only availably as 32 bit application. Please use a 32 bit compatibility library for your system.

To solve this, there are two possible solutions; both of them start by running

sudo apt-get update

After this, you can try installing the ia32-libs package

sudo apt-get install ia32-libs

This solution worked for me on previous Ubuntu versions, but not this time. If this solution doesn’t work for you either, then you need to run the following command

sudo dpkg --add-architecture i386 && sudo apt-get update && sudo apt-get install ia32-libs

As stated in this askubuntu.com answer,

(…) installing through WUBI did not correctly detect the available foreign architectures. As tumbleweed suggested printing the foreign architectures probably returns nothing. Add i386 as a foreign architecture, update the apt cache, then install the 32 bit libs.

Now you should be able to start your XAMPP server with

sudo /opt/lampp/lampp start

You should now get something like this

Starting XAMPP 1.8.1...
LAMPP: Starting Apache...
LAMPP: Starting MySQL...
LAMPP started.

With this you have successfully installed your XAMPP server; the next step is to test your web server. When you create a new virtual machine, by default the only open port is the one designated for SSH. In order to access the server on a different port, we need to create a new endpoint. On the virtual machine administration page, go to the Endpoints tab


There you will see the list of the endpoints that we already have. If it is a new VM you might see only the one corresponding to SSH.

Click on the ADD ENDPOINT button at the bottom of the page, and you will see the small window to create a new endpoint.

Click on the Next button and you will see the window where you specify the endpoint data


You can choose any name you want, but it cannot be the same as an existing one; the protocol will be TCP. The public port is the one you will use to access your web server, so it can be anything you want. The private port is the one your XAMPP server uses to serve the content; it is normally port 80, but you can change that in the XAMPP configuration according to your needs.

After clicking the complete button, you should see your new endpoint listed, and now you can access your web server from anywhere in the world with something like this

http://[yourhostname].cloudapp.net:[yourpublicport]

Once the page loads, you will most likely see the following error message

New XAMPP security concept:

Access to the requested object is only available from the local network.

This setting can be configured in the file "httpd-xampp.conf".

If you think this is a server error, please contact the webmaster.

What you need to do is modify the specified file. With our installation, it will be located in the /opt/lampp/etc/extra/ directory. We need to find the section titled “New XAMPP security concept” and either comment out the whole LocationMatch section or adjust the allowed IP addresses if you don’t want to open your site to the public.

Another change we need to make in the same file is in the Directory "/opt/lampp/phpmyadmin" section. We need to add Require all granted there to be able to access the phpMyAdmin site. Remember to add some IP filters so it is not open to anybody who has the URL.

To finish, just restart your XAMPP server

sudo /opt/lampp/lampp restart

And voilà, you are now ready to work with your XAMPP server on the cloud.

Localize your MVC app based on a subdomain

Having an application in multiple languages is now a requirement in many projects. In ASP.NET, you can tell your application to use the language the browser specifies. While this is a really nice feature in the ideal scenario (since the user gets the application in the proper language automatically), there are some scenarios where this might not be the expected behavior, such as:

  • If the user’s computer locale is different from the one he or she prefers for your application (for instance, when using a computer other than his or her own)
  • When the browser settings have been modified to some value different from what the user prefers, and he or she does not know how to adjust that setting in the browser.

In these cases, the user would rather have a “fallback” mechanism so he or she can select his or her preferred language. One of the options to achieve this is selecting the language/locale based on a subdomain. This gives your users the following options:
Desired language    URL address
English             en.myapp.com
Spanish             sp.myapp.com
Finnish             fi.myapp.com

 

In order to support this, you will need to create an ActionFilterAttribute, something like this

// Required namespaces: System, System.Collections.Generic, System.Threading, System.Web, System.Web.Mvc.
public class LocalizationFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Map each supported subdomain to its culture name
        var locales = new Dictionary<string, string>();

        locales.Add("mx", "es-MX");
        locales.Add("sp", "es-ES");
        locales.Add("vi", "vi-VN");
        locales.Add("fi", "fi-FI");

        var subdomain = GetSubDomain();

        if (subdomain != string.Empty && locales.ContainsKey(subdomain))
        {
            Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo(locales[subdomain]);
            Thread.CurrentThread.CurrentUICulture = new System.Globalization.CultureInfo(locales[subdomain]);

            // The Response.Write calls are only here to show which culture was picked
            HttpContext.Current.Response.Write(String.Format("Culture: {0}", Thread.CurrentThread.CurrentCulture.Name));
        }
        else
        {
            HttpContext.Current.Response.Write("Culture: Default ");
        }
        base.OnActionExecuting(filterContext);
    }

    private string GetSubDomain()
    {
        // The HOST header looks like "en.myapp.com" (or "localhost:port" during development)
        var url = HttpContext.Current.Request.Headers["HOST"];
        var index = url.IndexOf(".");

        if (index < 0)
        {
            return string.Empty;
        }

        var subdomain = url.Split('.')[0];
        if (subdomain == "www" || subdomain == "localhost")
        {
            return string.Empty;
        }

        return subdomain;
    }
}

As you may have already noticed, with this code you define the list of locales that will be selected according to the subdomain. The next step is to register this filter so it is applied to every request. You can do this in your Global.asax file

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new LocalizationFilterAttribute());
    filters.Add(new HandleErrorAttribute());
}
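
If you started from the default MVC project template, this method is already called from Application_Start; for reference, a minimal sketch (assuming the MVC 3-style Global.asax, where RegisterRoutes also lives in this file):

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);
}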

Once you have a way to set the locale for the current thread, all that is left is the localization itself, which you can do as you already have it. In my case, I’m using resource files to hold all the translations, with a fallback resource file for any text that has no translation in the language-specific resource files.
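
For example (the resource class and key below are hypothetical), since the filter sets CurrentUICulture for the request, a strongly-typed resource lookup automatically picks the matching language-specific file:

// Hypothetical resource files: Strings.resx (fallback), Strings.es.resx and Strings.fi.resx,
// each containing a "WelcomeTitle" entry; the lookup honors Thread.CurrentThread.CurrentUICulture.
string title = Resources.Strings.WelcomeTitle; // "Welcome", "Bienvenido" or "Tervetuloa" depending on the subdomain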

This way, you can give your users a simple and easy-to-remember way to get your application in their preferred language.