Both are orchestrators and can pretty much handle:
Container Images Hosting
Scaling
Healing
Monitoring
Rolling Updates
Service Location
However, Service Fabric support for containers arrived only recently. Initially, Azure Service Fabric was mostly an orchestrator for .NET processes running on Windows only.
In 2017, k8s became an industry standard. Every cloud vendor offers full support, and some offer Kubernetes as a service, such as Azure Kubernetes Service (AKS), where the vendor takes care of cluster creation, maintenance and management.
Given this, it became obvious that if a business is planning to adopt Microservices as a development strategy, it is most likely thinking about employing Kubernetes. Managed services offered by many cloud vendors, such as Azure AKS, make this decision a lot easier as developers no longer have to worry about provisioning or maintaining k8s clusters.
Besides the huge developer and industry support that it is currently receiving, K8s is a joy to work with. Deployments can be described in YAML or JSON and handed to the cluster, which then makes sure that the desired state is realized. Please refer to this post for more information.
Not sure about these:
Can k8s create singletons?
Can I have multiple deployments of the same Docker image with different environment variables?
In my opinion, one of the most differentiating factors for Service Fabric is its developer-friendly programming model. It supports reliable stateless, stateful and actor models in a powerful yet abstracted way, which makes programming in Service Fabric safe and easy. Please refer to earlier posts here and here for more information.
In addition, Service Fabric supports different ways to host code:
Guest Executable i.e. unmodified Win32 applications or services
Guest Windows and Linux containers
Reliable stateless and stateful services in .NET C#, F# and Java
Reliable actors in .NET C#, F# and Java
The concept of an app in Service Fabric is quite sophisticated, allowing developers to create an app type and instantiate many copies of it, which makes the concept work well in multi-tenant deployments.
One of the major setbacks for Service Fabric (or at least the public version that was made available) is that it was conceived at a time when k8s was burgeoning into an industry standard. It is becoming very difficult for Microsoft to convince developers and businesses to adopt this semi-proprietary platform when they can use k8s.
There are also a couple of things that can be improved:
Service Fabric relies on XML documents to describe services and configuration.
Reliance on Visual Studio, although it is possible to do things in Service Fabric without Visual Studio, as demonstrated here
I love Service Fabric! But unfortunately (I say unfortunately because I actually did spend a lot of time on it), I don't think it has a particularly great future given the strength and the momentum of k8s. Ideally, Microsoft should continue to support k8s in a strong way, perhaps port its Service Fabric programming model to work on k8s using .NET Core, and eventually phase out Service Fabric.
The account you use to log in to Azure must be a Global admin.
In case of errors during deployment, please check the Azure Activity Logs. They are pretty good and provide very useful insight into what went wrong.
After a deployment is successful, you can modify the ARM template and re-deploy. This will update the cluster. For example, if you added a new LB (load balancer) port and re-deployed using the PowerShell script, that new port will be available.
Logging in to the Service Fabric Explorer requires that you complete the steps where you go to the Azure AD tenant to which the cluster belongs, select the app that was created, and assign an admin role to it, as described in the above article. This must be done from the classic portal.
To connect using PowerShell, use Connect-ServiceFabricCluster -ConnectionEndpoint ${dnsName}:19000 -ServerCertThumbprint "6C84CEBF914FF489551385BA128542BA63A16222" -AzureActiveDirectory. Please note that, similar to the browser, this requires that the user be assigned as in the previous step.
Please note that securing the cluster does not mean that your own application endpoint is secured. You must do whatever is needed to enable HTTPS in your own application and provide some sort of token authentication.
I noticed that the only VM size that worked reliably was the Standard_D2. Anything less than that causes health issues due to disk space, etc. I heard from Microsoft here that they are working on ways to reduce the cost of the VMs, particularly by allowing us to use smaller VMs and still get the reasonable reliability/durability levels, which would help reduce costs without sacrificing the safety or uptime of our service.
I am compiling notes as I work with Service Fabric from the folks at Microsoft! In this post, I will enumerate a few things that I ran into which I think might be helpful to others who are learning the environment as well. This will not be a complete list, so I will add to it as I go along.
Actors are single threaded! They only allow one thread to be acting on them at any given time. In fact, in Service Fabric terminology, this is referred to as turn-based concurrency. From my observation, this is how the platform guarantees that an actor's state is never touched by more than one thread at a time.
What happens to the clients who are calling the actors? If two clients are trying to access an actor at the same time, one blocks until the actor finishes the first client method. This is to ensure that an actor works on one thing at a time.
Let us say, we have an actor that has two methods like this:
public Task<string> GetThing1()
{
// Simulate long work
Thread.Sleep(5000);
return Task.FromResult<string>("thing1");
}
public Task<string> GetThing2()
{
// Simulate long work
Thread.Sleep(10000);
return Task.FromResult<string>("thing2");
}
If you call GetThing1 from one client and immediately call GetThing2 from another client (or Postman session), the second client will wait at least 15 seconds to get the string thing2 response.
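To make the blocking behavior concrete, here is a minimal sketch of a client exercising the two methods through ActorProxy; the interface name, actor id and service URI are assumptions for illustration only:
IMyActor actor = ActorProxy.Create<IMyActor>(new ActorId("actor-1"), new Uri("fabric:/MyApp/MyActorService"));

Task<string> first = actor.GetThing1();    // takes ~5 seconds inside the actor
Task<string> second = actor.GetThing2();   // queued behind GetThing1 because of turn-based concurrency

Task.WaitAll(first, second);               // 'second' completes after roughly 15 seconds, not 10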
Given this:
I think it is best to front-end actors with a service that can invoke methods on actors when it receives requests from a queue. This way the service is waiting on actors to complete processing while it is in its RunAsync method.
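A rough sketch of what such a front-end service's RunAsync might look like; the DequeueRequestAsync helper, the IProcessorActor interface and the application/service names are assumptions, not code from the sample:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    while (!cancellationToken.IsCancellationRequested)
    {
        // pull the next request from a queue (hypothetical helper)
        SomeItem item = await DequeueRequestAsync(cancellationToken);
        if (item == null)
        {
            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
            continue;
        }

        // the service (not the outside caller) waits for the actor's turn to complete
        IProcessorActor actor = ActorProxy.Create<IProcessorActor>(
            new ActorId(item.Id), new Uri("fabric:/MyApp/ProcessorActorService"));

        await actor.Process(item);
    }
}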
It is important to realize that actors should really not be queried directly, and that actors should employ several approaches:
Back up state to an external store such as DocumentDB or others. This way the external store can be queried instead.
Aggregate results externally, perhaps via an Azure Function, into some outside store so queries can run against that store.
Aggregate results to an aggregator actor that can quickly respond to queries, which relieves the processing actors from worrying about query requests.
Actor reminders are really nice to have. In the sample app that I am working on, I use them to schedule processing after I return to the caller. Effectively, they seem to give me the ability to run things asynchronously and return to the caller right away. Without this, the actor processing throughput may not be at its best if the processing takes a while to complete.
In addition to firing a future event, they do allow me to pack an item or object that can be retrieved when the reminder triggers. This makes a lot of scenarios possible because we are able to pack the item that we want the reminder to work on.
Please note, however, that when a reminder is running in an actor, that actor cannot respond to other method invocations! This is because the platform makes sure that there is only a single thread operating on an actor at any time.
The best way I found out to schedule reminders to fire immediately is something like this:
public async Task Process(SomeItem item)
{
var error = "";
try
{
if (item == null)
{
...removed for brevity
return;
}
await this.RegisterReminderAsync(
ReprocessReminder,               // the reminder name (a string constant)
ObjectToByteArray(item),         // state packed into the reminder
TimeSpan.FromSeconds(0),         // If 0, remind immediately
TimeSpan.FromMilliseconds(-1));  // negative period = fire once, do not repeat
}
catch (Exception ex)
{
error = ex.Message;
}
}
When the reminder triggers, ReprocessReminder is called to process the item that was packed within the reminder: ObjectToByteArray(item). Here is a possible implementation of packing and unpacking the item:
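The implementations were not shown above; a minimal sketch using BinaryFormatter (System.IO and System.Runtime.Serialization.Formatters.Binary) might look like this. It assumes SomeItem is marked [Serializable]; a DataContractSerializer would work just as well:
private byte[] ObjectToByteArray(SomeItem item)
{
    if (item == null)
        return null;

    using (var stream = new MemoryStream())
    {
        // serialize the item into the byte[] that the reminder carries
        new BinaryFormatter().Serialize(stream, item);
        return stream.ToArray();
    }
}

private SomeItem ByteArrayToObject(byte[] bytes)
{
    if (bytes == null)
        return null;

    using (var stream = new MemoryStream(bytes))
    {
        // rebuild the item when the reminder fires
        return (SomeItem)new BinaryFormatter().Deserialize(stream);
    }
}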
From this article: The arguments of all methods, result types of the tasks returned by each method in an actor interface, and objects stored in an actor's State Manager must be Data Contract serializable. This also applies to the arguments of the methods defined in actor event interfaces. (Actor event interface methods always return void.)
In my actor interface, I had many methods and everything was working great until I added these two methods:
Task<SomeView> GetView(int year, int month);
Task<SomeView> GetView(int year);
If you try to compile a Service Fabric solution that has an interface that looks like the above, you will be met with a very strange compilation error from FabActUtil.exe.
What? What is that? Why? After hours, it turned out you can actually turn off this error. From the above Stack Overflow post:
By changing the project file (.csproj) of the project containing the actors and setting the following property:
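I have not reproduced the exact snippet from that answer here; if I recall correctly, the property is the SDK's UpdateServiceFabricManifestEnabled flag, so the override would look something like this (treat the property name as an assumption and verify against the Stack Overflow post):
<PropertyGroup>
  <!-- assumed property name: disables the FabActUtil manifest generation step -->
  <UpdateServiceFabricManifestEnabled>false</UpdateServiceFabricManifestEnabled>
</PropertyGroup>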
So this tool can be disabled!! But still, why is this happening? It turned out that actor interfaces may not have overloaded methods!! So the tool was complaining about the interface containing just that, i.e. overloaded methods. If the above interface is changed to the below, everything will work well:
Task<SomeView> GetViewByYearNMonth(int year, int month);
Task<SomeView> GetViewByYear(int year);
In addition, the actor event methods may not return anything but void. So if you have something like this, you will get the same FabActUtil.exe error:
public interface IMyActorEvents : IActorEvents
{
Task MeasuresRecalculated(....);
}
I am hoping to add to this post as I go along. Hopefully this has been helpful.
The Service Fabric IoT sample app is a great sample to follow for our own Service Fabric apps. In this post, I used code snippets and concepts from the IoT sample to build a small app to demonstrate some fundamental concepts that I feel are important.
The app is called Rate Aggregator where we have an app that monitors hotel rate requests coming in from somewhere (presumably from some site) and aggregates the result by city. I also wanted the app to be multi-tenant so we can have an app instance for each rate service provider i.e. Contoso and Fabrican.
The app is quite simple and consists of two services: Web Service to act as a front-end and a rates service to actually process the rates and aggregate them:
The solution source code consists of 4 different projects:
Common Project - a class library that has the common classes that are shared across the other projects. Please note that this library must be built using the x64 platform.
Web Service - a stateless Service Fabric service created using the Visual Studio ASP.NET Core template.
Rates Service - a stateful Service Fabric service created using the Visual Studio ASP.NET Core template.
An app project to contain the Service Fabric instances and provide application manifests.
A collection of PowerShell scripts that manage the deployment, un-deployment, update, upgrade and test. We will go through those files in this post.
Please note:
I created the solution using the VS 2015 Service Fabric template. But actually the projects are regular projects that include Service Fabric NuGet packages. The only project that is quite specific to Service Fabric is the app project, i.e. RateAggregatorApp, but as demonstrated in a previous post, the app manifests and packaging can easily be generated manually.
The ASP.NET Core template in Service Fabric is still in preview. I noticed some odd things about it:
The template assumes that you are building stateless services! To create stateful services using the ASP.NET template, manual intervention has to take place, which I will note in this post
The useful ServiceEventSource.cs class is not included in the generated project. So if you want to use ETW logging, you must create this file manually (copy it from another SF project)
The template includes, in the Program.cs file, both the Service Fabric registration code and the service class. It is useful to break these apart and create a class (named after the service) to describe the service, i.e. WebService and RatesService
The Service Fabric RateAggregatorApp ApplicationManifest.xml file has a section for DefaultServices which automatically deploys the default services whenever an app is deployed. I usually remove the default services from the manifest file so I can better control the named app instance and service creation process (which I will demo in this post).
The IoT sample includes really nice code utilities that can be used to build URIs for services, especially when the service exposes HTTP endpoints. The most important concepts that I would like to convey are:
For stateful services, this makes total sense! The combination of partitionId and instanceId is great for diagnostics, and the guid makes every endpoint unique, which is really useful because services are sometimes moved around. However, for stateless services, I think we can easily omit the partitionId, the instanceId and the guid since stateless service endpoints are usually load balanced as they accept traffic from the Internet. Examples of stateless service endpoints:
http://localhost:8082/FabricanRateAggregatorApp
http://localhost:8082/ContosoRateAggregatorApp
If you are planning to expose multiple stateless web services in each app instance, then perhaps adding the service name to the end of the URL would make sense, e.g. http://localhost:8082/ContosoRateAggregatorApp/WebService.
The demo app source code Common project includes a WebHostCommunicationListener class (borrowed from the IoT sample) that shows a really good implementation of how to manage this:
string ip = this.serviceContext.NodeContext.IPAddressOrFQDN;
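For context, the listener typically composes the listening address from that IP, the endpoint port and, for stateful services, the partition and replica identifiers plus a unique id; something along these lines (the exact format in the sample may differ, and 'port' here is assumed to come from the endpoint resource in the service manifest):
string listeningAddress =
    $"http://{ip}:{port}/{this.serviceContext.PartitionId}/{this.serviceContext.ReplicaOrInstanceId}/{Guid.NewGuid()}/";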
Using ASP.NET Core to implement the stateless and stateful services has the distinct advantage of allowing the services to expose a Web API layer that can be used by clients to call the services. The Web API layer has regular controllers with normal Web API decoration so the services can be called from regular HTTP clients:
public async Task<IActionResult> GetQueueLengthAsync()
{
....
}
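For reference, here is a minimal sketch of how such a controller might receive its dependencies through constructor injection; the class shape, field names and parameter names are assumptions based on the description that follows:
public class RatesController : Controller
{
    private readonly IReliableStateManager stateManager;
    private readonly StatefulServiceContext serviceContext;
    private readonly CancellationTokenSource cancellationSource;

    // these dependencies are registered by the service when it builds the ASP.NET Core host
    public RatesController(IReliableStateManager stateManager,
                           StatefulServiceContext serviceContext,
                           CancellationTokenSource cancellationSource)
    {
        this.stateManager = stateManager;
        this.serviceContext = serviceContext;
        this.cancellationSource = cancellationSource;
    }
}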
Please note that the service has the IReliableStateManager, the StatefulServiceContext and the CancellationTokenSource injected. This allows the Web API controller to use the service's reliable collections and anything else related to the service context. For example, this is the implementation of the queue length Web API method:
[HttpGet]
[Route("queue/length")]
public async Task<IActionResult> GetQueueLengthAsync()
{
    // the queue name is an assumption; retrieve the reliable queue from the injected state manager
    IReliableQueue<RateRequest> queue =
        await this.stateManager.GetOrAddAsync<IReliableQueue<RateRequest>>("RequestsQueue");

    using (ITransaction tx = this.stateManager.CreateTransaction())
    {
        long count = await queue.GetCountAsync(tx);
        return this.Ok(count);
    }
}
Note how the API controller uses the injected StateManager to gain access to the reliable queue and reports on its length.
Since the service interface is implemented as regular Web API (MVC) controllers, it can also be exposed via Swagger, allowing an API management layer to front-end these services.
To make this possible, the service must override CreateServiceInstanceListeners for stateless services and CreateServiceReplicaListeners for stateful services. Here is an example for the stateful service:
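A sketch of what that override might look like in the stateful RatesService, using the WebHostCommunicationListener from the Common project; the endpoint name, the constructor shape and the BuildWebHost helper are assumptions, so check the actual listener class in the sample for the exact signature:
protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
{
    return new[]
    {
        new ServiceReplicaListener(context =>
            new WebHostCommunicationListener(
                context,
                "ServiceEndpoint",                                       // endpoint resource name in the service manifest
                uri => BuildWebHost(uri, this.StateManager, context)))   // hypothetical helper that builds the ASP.NET Core host
    };
}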
Instead of an HTTP Web API, services (especially stateful ones) can expose an interface using the built-in RPC communication listener. In this case, the service implements an interface, making it easy for clients to call upon the service using that interface. For example, a stateful service might have an interface that looks like this:
public interface ILookupService : IService
{
Task EnqueueEvent(SalesEvent sEvent);
Task<string> GetNodeName();
Task<int> GetEventsCounter(CancellationToken ct);
}
The service will then be implemented this way:
internal sealed class LookupService : StatefulService, ILookupService
{
...
}
The service will override the CreateServiceReplicaListeners as follows:
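A sketch of what that override might look like using the built-in service remoting listener, plus how a client would call the service through the interface; the application and service names in the URI are assumptions:
protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
{
    return new[]
    {
        // exposes ILookupService over the built-in RPC (service remoting) transport
        new ServiceReplicaListener(context => this.CreateServiceRemotingListener(context))
    };
}

// client side: resolve a proxy to a specific partition and call the interface directly
ILookupService lookupService = ServiceProxy.Create<ILookupService>(
    new Uri("fabric:/RateAggregatorApp/LookupService"), new ServicePartitionKey(0));

int counter = await lookupService.GetEventsCounter(CancellationToken.None);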
Although this looks nice and complies with object-oriented programming, I think it should only be used with internal stateful services (those that do not expose an outside interface). Stateless services that are used by external clients are better off using an HTTP Web API interface, which makes them easily consumable by many clients in different languages.
Since we have the state manager injected into the stateful service Web API controllers, all of the service's reliable collections are available to the Web API controllers. In our demo, the RatesService Web API controller, i.e. RatesController, uses the reliable queue to get the queue length and enqueue rate requests to the service. The service processes the incoming RateRequest in its RunAsync method and aggregates the results in a reliable dictionary indexed by city/country:
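A rough sketch of that RunAsync loop follows; the collection names, the RateRequest properties used for the key, and the aggregated value (a simple count here) are assumptions, not the sample's actual code:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    var queue = await this.StateManager.GetOrAddAsync<IReliableQueue<RateRequest>>("RequestsQueue");
    var aggregation = await this.StateManager.GetOrAddAsync<IReliableDictionary<string, int>>("RatesByCity");

    while (true)
    {
        cancellationToken.ThrowIfCancellationRequested();

        using (ITransaction tx = this.StateManager.CreateTransaction())
        {
            ConditionalValue<RateRequest> dequeued = await queue.TryDequeueAsync(tx);
            if (dequeued.HasValue)
            {
                RateRequest request = dequeued.Value;
                string key = request.City + "/" + request.Country;

                // add or increment the aggregate for this city/country
                await aggregation.AddOrUpdateAsync(tx, key, 1, (k, current) => current + 1);
            }

            await tx.CommitAsync();
        }

        await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
    }
}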
In our demo app, we use partitions in the Stateful service i.e. RatesService to partition our data in 4 different buckets:
Rate Requests for the United States
Rate Requests for Canada
Rate Requests for Australia
Rate Requests for other countries
Hence our partition key range is 0 (Low Key) to 3 (High Key). We use a very simple method to select the appropriate partition based on the request's country code:
private long GetPartitionKey(RateRequest request)
{
if (request.Country == "USA")
return 0;
else if (request.Country == "CAN")
return 1;
else if (request.Country == "AUS")
return 2;
else // all others
return 3;
}
To allow for high availability, Service Fabric uses replicas for stateful services and instances for stateless services. In Service Fabric literature, the terms replicas and instances are often used interchangeably.
In order to guarantee high availability of stateful service state, the state for each partition is usually replicated. The number of replicas is decided at the time of deploying the service (as we will see soon in the PowerShell script). This means that if a stateful service has 4 partitions and the target replica count is 3, for example, then there are 12 replicas of that service running in Service Fabric.
In order to guarantee high availability of stateless services, Service Fabric allows the instantiation of multiple instances. Usually the number of instances matches the number of nodes in the Service Fabric cluster, which allows Service Fabric to place an instance on each node. The load balancer then distributes the load across all nodes.
Please note, however, that unlike stateless service instances, a stateful service's partition count cannot be changed at run-time once the service is deployed. The number of partitions must be decided before the service is deployed to the cluster. Of course, if the service state can be discarded, then changing the partition count is allowed. The number of instances of a stateless service, on the other hand, can be updated (up or down) at any time. In fact, this is one of the great features of Service Fabric.
Since the state is partitioned, does this mean that we have the reliable collections (i.e. queues and dictionaries) scattered among the different partitions? The answer is yes! For example, in order to get the queue length of a stateful service, the client has to query all partitions and ask each service instance about the queue length and add them together to determine the overall queue length for the stateful service:
[HttpGet]
[Route("queue/length")]
public async Task<IActionResult> GetQueueLengthAsync()
{
    ServiceUriBuilder uriBuilder = new ServiceUriBuilder(RatesServiceName);
    Uri serviceUri = uriBuilder.Build();

    long count = 0;

    // the service may be partitioned.
    // this will aggregate the queue lengths from each partition
    ServicePartitionList partitions =
        await this.fabricClient.QueryManager.GetPartitionListAsync(serviceUri);

    foreach (Partition partition in partitions)
    {
        // build the partition-specific URL and call the RatesService Web API (details elided)
        HttpResponseMessage response = await this.httpClient.GetAsync(....);

        if (response.StatusCode != System.Net.HttpStatusCode.OK)
        {
            return this.StatusCode((int)response.StatusCode);
        }

        string result = await response.Content.ReadAsStringAsync();
        count += Int64.Parse(result);
    }

    return this.Ok(count);
}
FabricClient is the .NET client used to provide all sorts of management capabilities. It is injected into the Web Service controllers to allow them to communicate with each partition and get the needed results as shown above. The Web Service then adds the count from each partition and returns the total length of all partition queues.
Similarly, the Web Service uses the FabricClient to communicate with each partition to get and aggregate the results for the cities of each country:
[HttpGet]
[Route("cities")]
public async Task<IActionResult> GetCitiesAsync()
{
    ServiceUriBuilder uriBuilder = new ServiceUriBuilder(RatesServiceName);
    Uri serviceUri = uriBuilder.Build();

    // ....enumerate the partitions and aggregate the cities from each one, as shown above
}
One of the great features of Service Fabric is its ability to enable multi-tenant scenarios. In our demo case, we may launch an app for Contoso rates and another one for Fabrican rates. We want these two apps to be of the same type, but they should be completely isolated from each other. So we create two named app instances: ContosoRateAggregator and FabricanRateAggregator. This means that we have a different set of services for each app, operated independently and perhaps scaled, updated and upgraded independently.
This is really useful in many scenarios and allows for many great advantages. In the next section, we will see how easy it is to actually deploy, un-deploy, update and upgrade these named instances.
Given that we have multiple named app instances, how do we pass different parameters for each named instance? In the RatesService, we would like to have the name of the provider (and probably other configuration items) so we can communicate with the provider to pull rates. In our demo app, we are not actually communicating with the provider.
To do this, we define parameters for the RatesService in the Service Settings file as follows:
<!-- Add your custom configuration sections and parameters here -->
<Section Name="ParametersSection">
<Parameter Name="ProviderName" Value="" />
</Section>
</Settings>
The section name can be anything. In our case, it is ParametersSection. To be able to override this value for a specific application instance, we create a ConfigOverride when we import the service manifest in the application manifest:
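A sketch of what that override might look like inside the ServiceManifestImport element of the application manifest; the package name and the RatesService_ProviderName application parameter are assumptions, and the square-bracket value assumes a matching Parameter is declared in the application manifest's Parameters element:
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="RatesServicePkg" ServiceManifestVersion="1.0.0" />
  <ConfigOverrides>
    <ConfigOverride Name="Config">
      <Settings>
        <Section Name="ParametersSection">
          <Parameter Name="ProviderName" Value="[RatesService_ProviderName]" />
        </Section>
      </Settings>
    </ConfigOverride>
  </ConfigOverrides>
</ServiceManifestImport>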
Finally, at deployment time (as you will see in more detail in the deployment script), we specify a PowerShell hashtable to override these parameters per named instance:
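Something along these lines, assuming the application parameter is named RatesService_ProviderName and the application type is named RateAggregatorAppType:
# application parameters for each named app instance (parameter and type names are assumptions)
$contosoParams  = @{ "RatesService_ProviderName" = "Contoso" }
$fabricanParams = @{ "RatesService_ProviderName" = "Fabrican" }

New-ServiceFabricApplication -ApplicationName "fabric:/ContosoRateAggregatorApp" -ApplicationTypeName "RateAggregatorAppType" -ApplicationTypeVersion "1.0.0" -ApplicationParameter $contosoParams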
This PowerShell script assumes that you used Visual Studio to generate the Service Fabric app package (right-click the Service Fabric app and select Package) or that you built the app package manually as demonstrated in a previous post. The created package directory is expected to follow the naming format v1.0.0, where 1.0.0 is the version number.
This PowerShell script copies the package to the cluster, registers the app type and creates two named app instances (i.e. Contoso and Fabrican). In each app instance, it creates two services: the Web Service as a front-end and the Rates Service as a back-end.
$clusterUrl = "localhost"
$imageStoreConnectionString = "file:C:\SfDevCluster\Data\ImageStoreShare" # Use this with OneBox
If ($clusterUrl -ne "localhost")
{
$imageStoreConnectionString = "fabric:ImageStore" # Use this when not using OneBox
}
# Used only for the image store....it can be any name!!!
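# (Sketch of the remaining steps; type names, package paths and replica settings are assumptions.)
Connect-ServiceFabricCluster -ConnectionEndpoint "${clusterUrl}:19000"

# Copy the package to the image store and register the application type
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath ".\v1.0.0" -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore "RateAggregatorApp"
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "RateAggregatorApp"

# Create a named app instance (repeat for Fabrican)
New-ServiceFabricApplication -ApplicationName "fabric:/ContosoRateAggregatorApp" -ApplicationTypeName "RateAggregatorAppType" -ApplicationTypeVersion "1.0.0" -ApplicationParameter @{ "RatesService_ProviderName" = "Contoso" }

# Create the front-end and back-end services in the named app instance
New-ServiceFabricService -Stateless -ApplicationName "fabric:/ContosoRateAggregatorApp" -ServiceName "fabric:/ContosoRateAggregatorApp/WebService" -ServiceTypeName "WebServiceType" -PartitionSchemeSingleton -InstanceCount 1
New-ServiceFabricService -Stateful -HasPersistedState -ApplicationName "fabric:/ContosoRateAggregatorApp" -ServiceName "fabric:/ContosoRateAggregatorApp/RatesService" -ServiceTypeName "RatesServiceType" -PartitionSchemeUniformInt64 -PartitionCount 4 -LowKey 0 -HighKey 3 -MinReplicaSetSize 2 -TargetReplicaSetSize 3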
This PowerShell script removes all named application instances and their services from the selected cluster. It does this based on the application type.
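Roughly, the script does something like this (the type name and version are assumptions):
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

# Remove every named app instance of the type, then unregister the type itself
Get-ServiceFabricApplication |
    Where-Object { $_.ApplicationTypeName -eq "RateAggregatorAppType" } |
    ForEach-Object { Remove-ServiceFabricApplication -ApplicationName $_.ApplicationName -Force }

Unregister-ServiceFabricApplicationType -ApplicationTypeName "RateAggregatorAppType" -ApplicationTypeVersion "1.0.0" -Force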
This PowerShell script updates the web service in each named app instance to have 5 instances. Please note that this works only if the number of instances does not exceed the number of nodes in the cluster.
$clusterUrl = "localhost"
Connect-ServiceFabricCluster -ConnectionEndpoint "${clusterUrl}:19000"

# Update the first application named instance (i.e. Contoso)
$appName = "fabric:/ContosoRateAggregatorApp"
$webServiceName = $appName + "/WebService"

# Dynamically change the named service's number of instances (the cluster must have at least 5 nodes)
Update-ServiceFabricService -Stateless -ServiceName $webServiceName -InstanceCount 5 -Force
This PowerShell script upgrades the application named instances to a higher version i.e. 1.1.0. As noted earlier, this assumes that you have a new folder named v1.1.0 which contains the upgraded application package. The script uses monitored upgrade modes and performs the upgrade using upgrade domains.
$clusterUrl = "localhost"
$imageStoreConnectionString = "file:C:\SfDevCluster\Data\ImageStoreShare" # Use this with OneBox
If ($clusterUrl -ne "localhost")
{
$imageStoreConnectionString = "fabric:ImageStore" # Use this when not using OneBox
}
# Used only for the image store....it can be any name!!!
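# (Sketch of the remaining steps; paths, image store names and the type name are assumptions.)
Connect-ServiceFabricCluster -ConnectionEndpoint "${clusterUrl}:19000"

# Copy and register the new 1.1.0 package version
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath ".\v1.1.0" -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore "RateAggregatorApp110"
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "RateAggregatorApp110"

# Monitored rolling upgrade across upgrade domains, rolling back automatically on failure
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/ContosoRateAggregatorApp" -ApplicationTypeVersion "1.1.0" -Monitored -FailureAction Rollback
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/FabricanRateAggregatorApp" -ApplicationTypeVersion "1.1.0" -Monitored -FailureAction Rollback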
I think Service Fabric has a lot of great and useful features that make it a great candidate for many scenarios. I will post more articles about Service Fabric as I expand my knowledge in this really cool technology.
Service Fabric is a cool technology from Microsoft! It has advanced features that enable many scenarios. But in this post, we will only cover basic concepts that are usually misunderstood by a lot of folks.
For the purpose of this demo, we are going to develop a very basic guest executable service written as a console app. We will use very basic application and service manifests and a PowerShell script to deploy to Service Fabric, and show how Service Fabric monitors services, reports their health and allows for update and upgrade.
The source code for this post is available here. Most of the code and ideas are credited to Jeff Richter of the Service Fabric Team.
The Guest service is a basic Win32 console app that starts an HttpListener on a port passed as an argument. The little web server responds to requests with a simple message that reports where it is running, along the lines of the sketch below.
Note that at this point the service is NOT running in the Service Fabric cluster.
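A minimal sketch of that web server (using System.Net and System.Text); the prefix, message text and structure are assumptions, not the sample's exact code, and listening on http://+ is why the app must run as administrator locally:
// port is passed as the first command-line argument
int port = int.Parse(args[0]);

var listener = new HttpListener();
listener.Prefixes.Add($"http://+:{port}/");
listener.Start();

while (true)
{
    HttpListenerContext context = listener.GetContext();
    HttpListenerRequest request = context.Request;

    // ... command handling shown below ...

    string message = $"Hello from {Environment.MachineName} on port {port}";
    byte[] buffer = Encoding.UTF8.GetBytes(message);
    context.Response.ContentLength64 = buffer.Length;
    context.Response.OutputStream.Write(buffer, 0, buffer.Length);
    context.Response.OutputStream.Close();
}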
That is it!! This simple web server accepts a command called crash which will kill the service completely:
http://localhost:8800?cmd=crash
In fact, it does support multiple commands:
var command = request.QueryString["cmd"];
if (!string.IsNullOrEmpty(command))
{
switch (command.ToLowerInvariant())
{
case "delay":
Int32.TryParse(request.QueryString["delay"], out _delay);
break;
case "crash":
Environment.Exit(-1);
break;
}
}
In order to make this service highly available, let us see how we can package it to run within Service Fabric. Please note that this service is not cognizant of Service Fabric at all. It is purely a simple Win32 service written as a console app.
Please note:
To debug the service locally from Visual Studio, you need to start VS in administrator mode.
Service Fabric requires the projects to be x64! So you must change your projects to target x64 using the Visual Studio Configuration Manager.
An application package in Service Fabric is nothing but a folder that contains certain manifests in specific sub-folders! We will build the directory by hand instead of using Visual Studio so we can find out exactly how these steps work. Let us create a directory called BasicAvailabilityApp (i.e. c:\BasicAvailabilityApp) to describe the Service Fabric application.
The root folder contains the application manifest and a sub-folder for each service it contains. Here is what the application manifest looks like for this demo application:
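A sketch of a minimal application manifest for this demo; the application type name is an assumption, and the schema namespace is the standard Service Fabric one:
<ApplicationManifest ApplicationTypeName="BasicAvailabilityAppType"
                     ApplicationTypeVersion="1.0.0"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="CrashableServiceTypePkg" ServiceManifestVersion="1.0.0" />
  </ServiceManifestImport>
</ApplicationManifest>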
Each service sub-folder contains the service manifest and a sub-folder for each package it contains. Here is what the service manifest looks like for this demo application:
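A sketch reconstructed from the pieces of information listed below; the endpoint name and the argument value are assumptions:
<ServiceManifest Name="CrashableServiceTypePkg"
                 Version="1.0.0"
                 xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceTypes>
    <!-- UseImplicitHost marks this as a guest executable -->
    <StatelessServiceType ServiceTypeName="CrashableServiceType" UseImplicitHost="true" />
  </ServiceTypes>
  <CodePackage Name="CodePkg" Version="1.0.0">
    <EntryPoint>
      <ExeHost>
        <Program>CrashableService.exe</Program>
        <Arguments>8800</Arguments>
      </ExeHost>
    </EntryPoint>
  </CodePackage>
  <Resources>
    <Endpoints>
      <Endpoint Name="CrashableServiceEndpoint" Protocol="http" Type="Input" Port="8800" />
    </Endpoints>
  </Resources>
</ServiceManifest>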
There are several pieces of information in this manifest:
The service package: CrashableServiceTypePkg.
The service version: 1.0.0.
The service type: CrashableServiceType.
The service type is stateless.
The service code package exists in a sub-folder called CodePkg and it is of version 1.0.0.
The service code consists of an executable called CrashableService.exe.
The XML namespaces are not important to us.
The Endpoints must be specified to allow Service Fabric to ACL the port that we want opened for our service to listen on. The Input type instructs SF to accept input from the Internet.
This is what the service folder looks like:
This is what it takes to package an application in Service Fabric.
Please note that the package we created in the previous step needs to be deployed to Service Fabric in order to run. To do this, we will need to use either Visual Studio or PowerShell. Since we want to use the lower level commands, we will use PowerShell instead of Visual Studio:
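The commands look roughly like this; the image store path, type name and endpoint are assumptions based on the package described above:
# connect to the local dev cluster
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

# Copy the package folder to the image store and register the application type
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "C:\BasicAvailabilityApp" -ImageStoreConnectionString "file:C:\SfDevCluster\Data\ImageStoreShare" -ApplicationPackagePathInImageStore "BasicAvailabilityApp"
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "BasicAvailabilityApp"

# Create a named application instance and a named service within it
New-ServiceFabricApplication -ApplicationName "fabric:/BasicAvailabilityApp" -ApplicationTypeName "BasicAvailabilityAppType" -ApplicationTypeVersion "1.0.0"
New-ServiceFabricService -Stateless -ApplicationName "fabric:/BasicAvailabilityApp" -ServiceName "fabric:/BasicAvailabilityApp/CrashableService" -ServiceTypeName "CrashableServiceType" -PartitionSchemeSingleton -InstanceCount 1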
This is extremely significant as it allows us to create multiple application instances within the same cluster, and each named application instance has its own set of services. This is how the named applications and services relate to the cluster (this diagram is taken from a Service Fabric team presentation):
Once the named application and the named service are deployed, the Service Fabric explorer shows it like this:
Now, if we access the service in Service Fabric, we will get a response that clearly indicates that the service is indeed running in Service Fabric:
Note that the service is running in Node 1 of the Service Fabric cluster.
One of the major selling points of Service Fabric is its ability to make services highly available by monitoring them and restarting them if necessary.
Regardless of whether the service is a guest executable or a Service Fabric-cognizant service, Service Fabric monitors the service to make sure it runs correctly. In our case, the service will crash whenever a crash command is submitted. So if you crash the service, you will see that Service Fabric detects the failure and reports bad health in the Service Fabric Explorer:
You will notice that the little web server is no longer available when you try to access it. But if you wait for a few seconds and try again, you will be very happy to find that the web server is available again. This is because Service Fabric detected that the service went down, restarted it and made it available, holding true to the promise of high availability or self-healing.
However, there is one little problem! The unhealthy indicators (warnings or errors) on the explorer may never go away because there isn't anything that resets them. So the health events remain visible even after the service recovers. This could become a bit of a problem if you have an external tool that reads health check state.
The above statement is not entirely true! I have seen the latest versions of Service Fabric remove the warning/errors after a little while.
In any case, I will show a better way (in my opinion) to deal with this shortly in this post. So read on if you are interested.
It turned out that Service Fabric does not really care how you name your versions! If you name your versions as numbers like 1.0.0 or 1.1.0, this naming convention is referred to as Semantic Versioning. But you are free to use whatever version naming convention you want.
Let us use a different version scheme for our simple app. How about alpha, beta and productionV1, productionV2, etc.? Let us clean up our app from the cluster (as shown above), apply some changes to the crashable service, update the manifest files to make the version Beta and re-deploy using the beta version:
Now that the beta version is deployed, let us make another change in the service, change the version to ProductionV1 (in the application and service manifests) and issue the following PowerShell commands to register and upgrade to ProductionV1:
# Copy the application package ProductionV1 to the cluster
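# (The remaining commands are a sketch; the package path, image store names and type name are assumptions.)
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "C:\BasicAvailabilityApp" -ImageStoreConnectionString "file:C:\SfDevCluster\Data\ImageStoreShare" -ApplicationPackagePathInImageStore "BasicAvailabilityAppProductionV1"

# Register the new version and start the rolling upgrade across the upgrade domains
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "BasicAvailabilityAppProductionV1"
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/BasicAvailabilityApp" -ApplicationTypeVersion "ProductionV1" -UnmonitoredAuto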
The upgrade takes place using a concept called Upgrade Domains which makes sure that the service that is being upgraded does not ever become unavailable:
Once the upgrade is done, the new application and service version is ProductionV1:
Now that our service is in production, let us see how we can increase and decrease its number of instances at will. This is very useful for scaling the service up and down depending on parameters determined by the operations team.
You may have noticed that we have always used instance count 1 when we deployed our named service:
# Create a named service within the named app from the service's type
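# (sketch; service and type names are assumptions)
New-ServiceFabricService -Stateless -ApplicationName "fabric:/BasicAvailabilityApp" -ServiceName "fabric:/BasicAvailabilityApp/CrashableService" -ServiceTypeName "CrashableServiceType" -PartitionSchemeSingleton -InstanceCount 1

# Scale the named service out to 5 instances
Update-ServiceFabricService -Stateless -ServiceName "fabric:/BasicAvailabilityApp/CrashableService" -InstanceCount 5 -Force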
Please note that if your test cluster has fewer than 5 nodes, you will get health warnings from Service Fabric because SF will not place more instances than the number of available nodes. This is because SF cannot guarantee availability if it places multiple instances on the same node.
Anyway, if you get health warnings or if you would like to scale back your service, you can downgrade the number of instances using this PowerShell command:
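For example, to go back to a single instance (service name as assumed above):
Update-ServiceFabricService -Stateless -ServiceName "fabric:/BasicAvailabilityApp/CrashableService" -InstanceCount 1 -Force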
In a previous section in this post, we deployed the crashable service and watched it crash when we submitted a crash command. Service Fabric reported the failure, restarted the service and made it available again. Now we will modify the deployment process to provide a better way to take care of the re-start process.
To do so, we will need another service that monitors our crashable service and reports health checks to Service Fabric. This new code is Service Fabric aware and is demonstrated by Jeff Richter of the Service Fabric team.
Let us modify the application package to include this new code. Remember our goal is not to change the crashable service at all.
Our crashable service is still the same. It accepts an argument that tells it which port number to listen on.
ConsoleRedirection is added to allow us to see the console output in the SF log files. This is to be removed in production.
Now there is one service, i.e. CrashableServiceType, but two code packages: one for the original exe and another for the monitor that will watch our crashable service. This is really nice as it allows us to add Service Fabric-aware code to an existing service without much intervention.
The Endpoints must be specified to allow Service Fabric to ACL the port that we want opened for our service to listen on. The Input type instructs SF to accept input from the Internet.
It is also a console app!! But it includes a Service Fabric NuGet package so it can use the FabricClient to communicate health checks to the local cluster. Basically, it sets up a timer to check the performance and availability of our crashable service. It reports to Service Fabric when failures take place.
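A rough sketch of what such a health report might look like; the source id, property, service URI and time-to-live are assumptions, not the actual monitor code:
using System;
using System.Fabric;
using System.Fabric.Health;

// called from the monitor's timer when the crashable service fails to respond in time
static void ReportUnhealthy(FabricClient fabricClient)
{
    var healthInformation = new HealthInformation(
        sourceId: "CrashableServiceMonitor",
        property: "ResponseCheck",
        healthState: HealthState.Error)
    {
        Description = "The crashable service did not respond within the expected time.",
        TimeToLive = TimeSpan.FromSeconds(30),
        RemoveWhenExpired = true   // the report clears itself once the service is healthy again
    };

    fabricClient.HealthManager.ReportHealth(
        new ServiceHealthReport(new Uri("fabric:/BasicAvailabilityApp/CrashableService"), healthInformation));
}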
Doing so makes our crashable service much more resilient to crashes or slow performance, as it is watched by the monitor service and re-started if necessary by Service Fabric. The health checks are also cleared much more quickly.
You can use the Service Fabric cluster explorer to find out where Service Fabric stores services on disk. This is available from the Nodes section:
This directory has a log folder that stores the output of each service. This can be very useful for debugging purposes. To use it, however, you must have ConsoleRedirection turned on as shown above.
In future posts, I will use Service Fabric .NET programming model to develop and deploy stateless and stateful services to demonstrate Service Fabric fundamental concepts.