Most likely you have already experienced system downtime, either on an application you have worked on or on some service that you consume. It has happened to Amazon, Netflix, Microsoft, Salesforce, etc. How long did you have to wait? And how long did your users?
If you’re building an application and you ask your boss (or your client) what percentage of the time the application should be working correctly, most likely you’ll get an answer like “always” or “100% (or more)”.
Even though we don’t want bad things to happen, they surely will: bugs, attacks, power outages, natural disasters, etc. are all scenarios that might affect a system. Expecting them not to happen is naive; it’s better to think about and plan for failure, since it is inevitable.
Availability is the capability of an application to keep working after some problem occurs. Again, we are not saying there will be no problems, but rather asking how effectively we will be able to recover from them. This means that we need to a) identify the potential failure points and b) create a strategy to prevent an error from becoming a failure that affects the user (in other words, a tree falling in the forest when no one is around makes no sound).
Many times I’ve heard clients or fellow developers say “Let’s use X framework” or “we will need to apply X technology”. The problem, most of the time, is that there is no prior analysis behind the decision being proposed; it is chosen just because it’s trendy (and cool), and that’s it. As one of my former managers frequently said, “Hype and buzzwords are dangerous friends”.
The main question we need to ask is “What’s the problem we are trying to solve?”. It is surprising how frequently we come up with solutions for problems we barely know. We are eager to jump into the fight without knowing the size of the opponent beforehand. What if it ends up being a monster instead of the small thingy we thought? Or just the opposite?
This post is part of a Service Fabric series:
- Introduction to Service Fabric
- Creating a WCF Service for Azure Service Fabric
In the beginning (not so long ago) there was the local server, and the developer’s life was chaos. The IT team (if one existed; otherwise, the developer) was responsible for ensuring that the server where the applications were installed worked as needed, and took the blame when it didn’t.
Later, with the rise of virtualization, the cloud came along, giving us the chance to transfer that responsibility (and the blame) to someone else.
What does all this have to do with this post? Microsoft Azure Service Fabric is a Platform as a Service option, built from scratch to support distributed, high-scale, high-availability cloud applications. It started as a proposal for cloud databases (CloudDB) and is currently used by rockstar Microsoft products like Cortana, Skype for Business, Power BI, SQL Azure, etc.
Its main advantage is the ease it gives developers in managing concerns beyond functionality, like
- Rolling updates
- Monitoring and telemetry from the services
This way, the developer can focus all their effort and attention on coding.
Even though it is normally associated with microservices, Service Fabric’s benefits can be useful for multi-layer applications, APIs, etc. But what are microservices? Although there is no standard definition, they are normally identified by splitting an application’s functions into small parts. These parts are independently versioned, can be written in any language or technology, and are oriented to solving a concrete piece of the problem the application intends to tackle. It is important to be clear that monoliths are not inherently bad, nor are microservices inherently good; it all depends on the scenario and context.
Because they are deployed independently to different nodes (containers, servers, virtual machines) within a cluster, where the replication and partitioning processes are performed, each microservice can be scaled according to its own needs.
Service Fabric runs the same on Microsoft Azure, on other cloud providers like AWS, and even on private clouds, whether Linux or Windows. Even at development time the required components are exactly the same, which makes it really easy to move from one environment to another when needed. This is because the components were designed to be standard, so no modifications are required for the environment where they will be executed.
The cluster is a set of nodes installed and configured to communicate with each other; it provides an abstraction layer between the application and the infrastructure where it is executed. The main cluster abilities are
- Supporting thousands of nodes
- Dynamic change
- Isolation unit
Service Fabric provides a set of system services to help with infrastructure management:
- Cluster Manager: in charge of cluster operations. By default the cluster can be managed via REST over HTTP through port 19080, and via TCP through port 19000 using PowerShell.
- Failover Manager: in charge of detecting when new nodes are added to the cluster, when they are removed, or when a failure occurs, in order to re-balance for high availability.
- Naming Service: maps the services to their endpoints, so they can communicate with each other.
- Fault Analysis Service: helps you introduce failures into your services so you can test different scenarios in a controlled manner.
- Image Store: contains the actual bits of the services, the master copy used for creating the replicas that are distributed to the nodes.
- Upgrade Service: in charge of updating Service Fabric components; exclusive to Azure.
When working with Service Fabric, you have 3 options for creating your services:

- Reliable Services: provides a simple way to integrate with Service Fabric when creating your services, benefiting from the platform tools.
- Reliable Actors: built on top of the Reliable Services capabilities, it’s a framework that works with single-threaded units called actors, based on the design pattern of the same name.
- Guest Executables: just that, an executable that you can deploy to a Service Fabric cluster without fully integrating with the platform; Service Fabric just ensures the executable stays up and running. The programming language doesn’t matter, so it is a good option for existing applications.
Applications and services
An application is basically a set of services, defined in the ApplicationManifest.xml file; in Service Fabric terms, this is called an Application Type. Based on this type we create an Application Instance, which is what we hit at runtime; this is very similar to the class and instance concepts in OOP. The same goes for Service Type and Service Instance; additionally, each service is composed of 3 parts: code, data and configuration.
Each of these elements has its own version, so we can have an application with version label 2.1.1 composed of one service with version 1.0.0.
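As a sketch, those version numbers might appear in an ApplicationManifest.xml roughly like this (the type and package names here are made up for illustration):

```xml
<ApplicationManifest ApplicationTypeName="FortuneAppType"
                     ApplicationTypeVersion="2.1.1"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <!-- The application (2.1.1) is composed of one service (1.0.0) -->
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="FortuneServicePkg"
                        ServiceManifestVersion="1.0.0" />
  </ServiceManifestImport>
</ApplicationManifest>
```

Because the application version is independent of each service version, a rolling upgrade can bump the application to 2.1.2 while touching only the services that actually changed.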
That’s it for now; we’ve covered Service Fabric concepts that will be used for our next tutorials.
One of the greatest advantages of working with Azure Functions is being able to prototype applications easily. For this post I will create a simple fortune cookie app, which will send a phrase to your email using SendGrid and to your phone via SMS using Twilio. The app will be composed of three pieces:
- Front-end web page using HttpTrigger
- POST request processing with HttpTrigger
- Queue processing using Azure Queue Storage Trigger
Has it ever happened to you that you’re talking with a specialist in some topic, and at a certain point you realize you don’t understand half of the conversation? It has happened to me, a lot of times. Doctors will talk about free radicals, lawyers about alibis, construction engineers about material resistance, advanced math teachers about Laplace transforms, and so on.
As programmers we are no exception. We tend to be so deep in our own world that we sometimes forget we are speaking our own language to people who don’t fully understand it. As Will Rogers said:
Everybody is ignorant, only on different subjects
I’m really ignorant about a lot of topics: when I speak to my accountant, I’d really like to have a guide that puts what accumulated depreciation or bank reconciliation really mean into plain human words.
Thinking about that, I thought it might be useful to write some posts describing, in plain English, some of the terms we commonly use as programmer argot. If you are already in the programming world, you may prefer to jump to my Programming section, which covers more advanced topics.
If you are a non-technical person who frequently speaks with programmers, I hope you find here some answers to the questions that may have arisen in those talks. I’ll base the content of these posts on my previous experience as both an instructor and a programmer, trying to give you a better understanding of each topic.
During my entire career I’ve been in a lot of tech screenings (on both sides of the desk), and one common pattern I’ve noticed is that, in many cases, the interview is just a standard questionnaire about the technologies required or desired for the position. What is the point of that? The best you can achieve with it is to filter out candidates who were too lazy to prepare for the interview.
This is not 1990 anymore, when sharing information globally was a challenge for most people. Many of the questions in this type of interview are already posted on some website; go check for yourself: just search on Google (or your preferred search engine) for “interview questions [fill in the desired technology or language]” and you will get plenty of results (unless your desired technology is some esoteric programming language like LOLCODE).
Scripted interviews help neither the interviewer nor the candidate. On one occasion, when I was being screened for a developer position, the interviewer said, “Well, let’s proceed with the database questions”. Since it had been a long time since I had worked with databases directly, I said: “Well, it’s been a long time since I worked with databases, so I think I’m not in good shape for that”. His answer was: “It doesn’t matter, I need to ask all the questions anyway”. What is the point of this? Why do you want to follow a predefined standard script?
This makes me think that
a) The interviewer is not “tech enough” to be able to make his own questions
b) The interviewer is too lazy to go deeper
c) The interviewer does not care about the real impact of the process (it might be a routine task they must do, out of either ego or obligation)
d) The company does not really care much about the proficiency level of the people it is hiring, or is ignorant of the impact of that
e) All of the above
So I really mean it when I say that tech interviews should be like a terrorist interrogation, because what you really want is to get the truth about what the candidate is capable of doing. You need to push hard to determine a person’s real background, the challenges they have been through, how they sorted them out, etc. How can you get that with a standard questionnaire? You need to follow the path the candidate’s answers open up, not just say “Very well, next question”. They might have memorized some stuff just to appear an expert, but you shouldn’t be fooled by flamboyant answers; they might hide more ignorance than simple ones do.
To be capable of conducting this type of interview, the interviewer really needs a strong technical background, so they can navigate the candidate’s answers and determine whether this is the right person for the position. Sadly, this is not the case at many companies, where senior positions are awarded based on “years-after-college” instead of real “years-of-experience”, but I’ll leave that topic for another post.
If you are a candidate and the interview you are going through is just a bunch of standard questions, follow the wise advice of Scott Hanselman: excuse yourself and run.