Service Fabric Fundamentals

Khaled Hikmat

Software Engineer

The Service Fabric IoT sample app is a great reference for building our own Service Fabric apps. In this post, I use code snippets and concepts from the IoT sample to build a small app that demonstrates some fundamental concepts I feel are important.

The source code for this post is available here.

The app scenario#

The app is called Rate Aggregator: it monitors hotel rate requests coming in from somewhere (presumably some site) and aggregates the results by city. I also wanted the app to be multi-tenant so we can have an app instance for each rate service provider, e.g. Contoso and Fabrican.

The app is quite simple and consists of two services: a Web Service that acts as a front-end and a Rates Service that processes the rates and aggregates them:

App Components

Source Code#

The solution source code consists of four projects plus a set of PowerShell scripts:

  • Common Project - a class library that holds the common classes shared across the other projects. Please note that this library must be built for the x64 platform.
  • Web Service - a stateless Service Fabric service created using the Visual Studio ASP.NET Core template.
  • Rates Service - a stateful Service Fabric service created using the Visual Studio ASP.NET Core template.
  • An app project that references the Service Fabric services and provides the application manifest.
  • A collection of PowerShell scripts that manage the deployment, un-deployment, update, upgrade and test steps. We will go through these scripts in this post.

Please note:

  • I created the solution using the VS 2015 Service Fabric template, but the projects are actually regular projects that reference the Service Fabric NuGet packages. The only project that is really specific to Service Fabric is the app project, i.e. RateAggregatorApp ...but as demonstrated in a previous post, the app manifests and packaging can easily be generated manually.
  • The ASP.NET Core template in Service Fabric is still in preview. I noticed a few odd things about it:
    • The template assumes that you are building stateless services! To create stateful services using the ASP.NET Core template, some manual intervention has to take place, which I will note in this post.
    • The useful ServiceEventSource.cs class is not included in the generated project. So if you want to use ETW logging, you must create this file manually (copy it from another SF project).
    • The template puts both the Service Fabric registration code and the service class in the Program.cs file. It is useful to break them apart and create a class (named after the service) to describe the service, i.e. WebService and RatesService.
  • The Service Fabric RateAggregatorApp ApplicationManifest.xml file has a DefaultServices section which automatically deploys the default services whenever an app is deployed. I usually remove the default services from the manifest file so I can better control the named app instance and service creation process (which I will demo in this post).

Fundamental Concepts#

The IoT sample includes really nice utility code that can be used to build URIs for services, especially when a service exposes HTTP endpoints. The most important concepts I would like to convey are:

HTTP Endpoints#

If you would like to expose HTTP Endpoints for your service, Microsoft strongly recommends that you build the URL as follows:

HTTP Endpoint URL
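
In other words, the listening address combines the node address, the endpoint port, the application name and, for stateful services, the partition ID, the replica ID and a fresh GUID. A sketch of the template (derived from the listener code shown later in this post):

{protocol}://{node IP or FQDN}:{port}/{application name}/{partitionId}/{replicaId}/{new guid}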

Examples:

  1. Http://localhost:8084/ContosoRateAggregatorApp/7217642a-2ac8-4b29-b52d-3e92303ce1b2/131262989332452689/f74f07f7-d92f-47b9-8d6b-c86966c78d09
  2. Http://localhost:8084/FabricanRateAggregatorApp/13049e47-9727-4e02-9086-8fd6e2457313/131262989924122573/3b3455d8-487e-4ec4-9bd8-64ba8e658662

For stateful services, this makes total sense! The combination of partition ID and replica ID is great for diagnostics, and the GUID makes every endpoint unique, which is really useful because services are sometimes moved around. However, for stateless services, I think we can easily omit the partition ID, the replica ID and the GUID since stateless service endpoints are usually load balanced as they accept traffic from the Internet. Examples of stateless service endpoints:

  1. Http://localhost:8082/FabricanRateAggregatorApp
  2. Http://localhost:8082/ContosoRateAggregatorApp

If you are planning to expose multiple stateless web services in each app instance, then perhaps adding the service name to the end of the URL would make sense. Examples:

  1. Http://localhost:8082/FabricanRateAggregatorApp/WebService
  2. Http://localhost:8082/ContosoRateAggregatorApp/WebService

The common project in the demo app source code includes a WebHostCommunicationListener class (borrowed from the IoT sample) that shows a really good implementation of how to manage this:

string ip = this.serviceContext.NodeContext.IPAddressOrFQDN;
EndpointResourceDescription serviceEndpoint = this.serviceContext.CodePackageActivationContext.GetEndpoint(this.endpointName);
EndpointProtocol protocol = serviceEndpoint.Protocol;
int port = serviceEndpoint.Port;
string host = "+";
string listenUrl;
string path = this.appPath != null ? this.appPath.TrimEnd('/') + "/" : "";

if (this.serviceContext is StatefulServiceContext)
{
    StatefulServiceContext statefulContext = this.serviceContext as StatefulServiceContext;
    listenUrl = $"{serviceEndpoint.Protocol}://{host}:{serviceEndpoint.Port}/{path}{statefulContext.PartitionId}/{statefulContext.ReplicaId}/{Guid.NewGuid()}";
}
else
{
    listenUrl = $"{serviceEndpoint.Protocol}://{host}:{serviceEndpoint.Port}/{path}";
}

this.webHost = this.build(listenUrl);
this.webHost.Start();

return Task.FromResult(listenUrl.Replace("://+", "://" + ip));

HTTP Web APIs#

Using ASP.NET Core to implement the stateless and stateful services has the distinct advantage of allowing the services to expose a Web API layer that clients can use to call the services. The Web API layer has regular controllers with normal Web API attributes, allowing the services to be called from regular HTTP clients:

[Route("api/[controller]")]
public class RatesController : Controller
{
    private readonly IReliableStateManager stateManager;
    private readonly StatefulServiceContext context;
    private readonly CancellationTokenSource serviceCancellationSource;

    public RatesController(IReliableStateManager stateManager, StatefulServiceContext context, CancellationTokenSource serviceCancellationSource)
    {
        this.stateManager = stateManager;
        this.context = context;
        this.serviceCancellationSource = serviceCancellationSource;
    }

    [HttpGet]
    [Route("queue/length")]
    public async Task<IActionResult> GetQueueLengthAsync()
    {
        ....
    }
}

Please note that the controller has the IReliableStateManager, the StatefulServiceContext and the CancellationTokenSource injected. This allows the Web API controller to use the service's reliable collections and anything else related to the service context. For example, this is the implementation of the queue length Web API method:

[HttpGet]
[Route("queue/length")]
public async Task<IActionResult> GetQueueLengthAsync()
{
    IReliableQueue<RateRequest> queue = await this.stateManager.GetOrAddAsync<IReliableQueue<RateRequest>>(RatesService.RateQueueName);

    using (ITransaction tx = this.stateManager.CreateTransaction())
    {
        long count = await queue.GetCountAsync(tx);
        return this.Ok(count);
    }
}

Note how the API controller uses the injected StateManager to gain access to the reliable queue and reports on its length.

Since the service interface is implemented as regular Web API controllers, it can also be exposed via Swagger, allowing an API management layer to front-end these services.
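
As an aside, if you want the Swagger document generated, the ASP.NET Core Startup can wire it up along these lines; this is a sketch assuming the Swashbuckle.AspNetCore package (not part of the demo app), and the exact types and options vary by package version:

// In Startup.ConfigureServices (sketch)
services.AddMvc();
services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new Info { Title = "Rates API", Version = "v1" });
});

// In Startup.Configure (sketch)
app.UseSwagger();
app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "Rates API v1"));
app.UseMvc();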

To host these Web API controllers inside a Service Fabric service, the service must override CreateServiceInstanceListeners in the case of stateless services and CreateServiceReplicaListeners in the case of stateful services. Here is an example from the stateful service:

protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
{
    return new ServiceReplicaListener[1]
    {
        new ServiceReplicaListener(
            context =>
            {
                string tenantName = new Uri(context.CodePackageActivationContext.ApplicationName).Segments.Last();

                return new WebHostCommunicationListener(
                    context,
                    tenantName,
                    "ServiceEndpoint",
                    uri =>
                    {
                        ServiceEventSource.Current.Message($"Listening on {uri}");

                        return new WebHostBuilder().UseWebListener()
                            .ConfigureServices(
                                services => services
                                    .AddSingleton<StatefulServiceContext>(this.Context)
                                    .AddSingleton<IReliableStateManager>(this.StateManager)
                                    .AddSingleton<CancellationTokenSource>(this._webApiCancellationSource))
                            .UseContentRoot(Directory.GetCurrentDirectory())
                            .UseStartup<Startup>()
                            .UseUrls(uri)
                            .Build();
                    });
            })
    };
}

Please note the use of the WebHostCommunicationListener and how we inject the service context, state manager and the cancellation token.

In our demo app, both the stateless and the stateful services implement their interfaces as Web APIs.

HTTP vs. RPC Endpoints#

Instead of an HTTP Web API, services (especially stateful ones) can expose an interface using the built-in RPC communication listener. In this case, the service implements an interface, making it easy for clients to call upon the service through that interface. For example, a stateful service might have an interface that looks like this:

public interface ILookupService : IService
{
    Task EnqueueEvent(SalesEvent sEvent);
    Task<string> GetNodeName();
    Task<int> GetEventsCounter(CancellationToken ct);
}

The service will then be implemented this way:

internal sealed class LookupService : StatefulService, ILookupService
{
...
}

The service will override the CreateServiceReplicaListeners as follows:

protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
{
    return new[]
    {
        new ServiceReplicaListener(context =>
            this.CreateServiceRemotingListener(context),
            "rpcPrimaryEndpoint",
            false)
    };
}

Although this looks nice and fits object-oriented programming, I think it should only be used for internal stateful services (those that do not expose an outside interface). Stateless services that are used by external clients are better off exposing an HTTP Web API, which makes them easily consumable by many clients in different languages.
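
For completeness, this is roughly what calling such an interface from another service looks like with the remoting client; the app/service name and partition key below are made up for illustration, and the exact remoting setup is not part of the demo app:

// Sketch: resolve a remoting proxy to a specific Int64 partition and call it.
// Requires the Microsoft.ServiceFabric.Services.Remoting package.
ILookupService lookupService = ServiceProxy.Create<ILookupService>(
    new Uri("fabric:/SomeApp/LookupService"),   // hypothetical service name
    new ServicePartitionKey(0));                 // partition low key to target

string nodeName = await lookupService.GetNodeName();
int counter = await lookupService.GetEventsCounter(CancellationToken.None);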

Reliable Collections#

Since the state manager is injected into the stateful service's Web API controllers, all of the service's reliable collections are available to them. In our demo, the RatesService Web API controller, i.e. RatesController, uses the reliable queue to get the queue length and to enqueue rate requests to the service. The service processes the incoming RateRequest items in its RunAsync method and aggregates the results in a reliable dictionary indexed by city/country:

protected override async Task RunAsync(CancellationToken cancellationToken)
{
    cancellationToken.Register(() => this._webApiCancellationSource.Cancel());

    IReliableDictionary<string, RateAggregation> citiesDictionary = await this.StateManager.GetOrAddAsync<IReliableDictionary<string, RateAggregation>>(RateCitiesDictionaryName);
    IReliableQueue<RateRequest> queue = await this.StateManager.GetOrAddAsync<IReliableQueue<RateRequest>>(RateQueueName);

    while (true)
    {
        cancellationToken.ThrowIfCancellationRequested();

        try
        {
            using (var tx = this.StateManager.CreateTransaction())
            {
                var result = await queue.TryDequeueAsync(tx);
                if (result.HasValue)
                {
                    RateRequest request = result.Value;

                    // TODO: Process the request
                    // TODO: Go against the reservation provider to pick up the rate
                    // TODO: How do I determine the reservation provider per tenant?
                    int nights = (request.CheckOutDate - request.CheckInDate).Days;
                    int netAmount = _random.Next(500) * nights;

                    var newAggregation = new RateAggregation();
                    newAggregation.Transactions = 1;
                    newAggregation.Nights = nights;
                    newAggregation.Amount = (double) netAmount;

                    await citiesDictionary.AddOrUpdateAsync(tx, $"{request.City}/{request.Country}", newAggregation, (key, currentValue) =>
                    {
                        currentValue.Transactions += newAggregation.Transactions;
                        currentValue.Nights += newAggregation.Nights;
                        currentValue.Amount += newAggregation.Amount;
                        return currentValue;
                    });

                    // This commits the add to dictionary and the dequeue operation.
                    await tx.CommitAsync();
                }
            }
        }
        catch (Exception e)
        {
        }

        await Task.Delay(TimeSpan.FromMilliseconds(500), cancellationToken);
    }
}

The reliable dictionary is then used in the service controller to return the aggregated results in an API call.
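
To give an idea of what that looks like, here is a sketch of a per-partition cities endpoint that enumerates the dictionary. The route, the shape of the returned items and the assumption that the dictionary name constant is exposed like RateQueueName are mine; the real RatesController returns CityStats objects:

[HttpGet]
[Route("cities")]
public async Task<IActionResult> GetCitiesAsync()
{
    var citiesDictionary = await this.stateManager
        .GetOrAddAsync<IReliableDictionary<string, RateAggregation>>(RatesService.RateCitiesDictionaryName);

    var cities = new List<KeyValuePair<string, RateAggregation>>();

    using (ITransaction tx = this.stateManager.CreateTransaction())
    {
        // Enumerate the "city/country" keys and their aggregations within a transaction
        var enumerable = await citiesDictionary.CreateEnumerableAsync(tx);
        var enumerator = enumerable.GetAsyncEnumerator();
        while (await enumerator.MoveNextAsync(this.serviceCancellationSource.Token))
        {
            cities.Add(enumerator.Current);
        }
    }

    return this.Ok(cities);
}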

Partitions, Replicas and Instances#

In our demo app, we use partitions in the stateful service, i.e. RatesService, to split our data into 4 buckets:

  1. Rate Requests for the United States
  2. Rate Requests for Canada
  3. Rate Requests for Australia
  4. Rate Requests for other countries

Hence our partition key range is 0 (Low Key) to 3 (High Key). We use a very simple method to select the appropriate partition based on the request's country code:

private long GetPartitionKey(RateRequest request)
{
    if (request.Country == "USA")
        return 0;
    else if (request.Country == "CAN")
        return 1;
    else if (request.Country == "AUS")
        return 2;
    else // all others
        return 3;
}
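
As an illustration of how this key gets used, the Web Service can route an incoming rate request to the right partition using the same URI helpers shown later in this post; the exact route and method name below are assumptions rather than the demo app's actual code:

[HttpPost]
[Route("")]
public async Task<IActionResult> PostRateRequestAsync([FromBody] RateRequest request)
{
    Uri serviceUri = new ServiceUriBuilder(RatesServiceName).Build();

    // Pick the partition (0..3) based on the request's country code
    long partitionKey = this.GetPartitionKey(request);

    Uri postUrl = new HttpServiceUriBuilder()
        .SetServiceName(serviceUri)
        .SetPartitionKey(partitionKey)
        .SetServicePathAndQuery("/api/rates/requests")
        .Build();

    HttpClient httpClient = new HttpClient(new HttpServiceClientHandler());
    var content = new StringContent(JsonConvert.SerializeObject(request), Encoding.UTF8, "application/json");
    HttpResponseMessage response = await httpClient.PostAsync(postUrl, content, this.cancellationSource.Token);

    return this.StatusCode((int)response.StatusCode);
}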

To allow for high availability, Service Fabric uses replicas for stateful services and instances for stateless services. In Service Fabric literature, the terms replicas and instances are often used interchangeably.

In order to guarantee high availability of stateful service state, the state for each partition is usually replicated. The number of replicas is decided at the time of deploying the service (as we will see soon in the PowerShell script). This means that, if a stateful service has 4 partitions and the target replica count is 3, for example, then there are 12 replicas of that service running in Service Fabric.

In order to guarantee high availability of stateless services, Service Fabric allows the instantiation of multiple instances. Usually the number of instances matches the number of nodes in the Service Fabric cluster, which allows Service Fabric to place an instance on each node. The load balancer then distributes the load across all nodes.

Please note, however, that, unlike the stateless service instance count, a stateful service's partition count cannot be changed at run-time once the service is deployed. The number of partitions must be decided before the service is deployed to the cluster. Of course, if the service state can be discarded, then changes to the partitions are allowed. A stateless service's number of instances, on the other hand, can be updated (up or down) at any time. In fact, this is one of the great features of Service Fabric.

Result Aggregation#

Since the state is partitioned, does this mean that we have the reliable collections (i.e. queues and dictionaries) scattered among the different partitions? The answer is yes! For example, in order to get the queue length of a stateful service, the client has to query all partitions, ask each partition about its queue length and add the results together to determine the overall queue length for the stateful service:

[HttpGet]
[Route("queue/length")]
public async Task<IActionResult> GetQueueLengthAsync()
{
    ServiceUriBuilder uriBuilder = new ServiceUriBuilder(RatesServiceName);
    Uri serviceUri = uriBuilder.Build();

    // service may be partitioned.
    // this will aggregate the queue lengths from each partition
    ServicePartitionList partitions = await this.fabricClient.QueryManager.GetPartitionListAsync(serviceUri);

    HttpClient httpClient = new HttpClient(new HttpServiceClientHandler());

    long count = 0;
    foreach (Partition partition in partitions)
    {
        Uri getUrl = new HttpServiceUriBuilder()
            .SetServiceName(serviceUri)
            .SetPartitionKey(((Int64RangePartitionInformation)partition.PartitionInformation).LowKey)
            .SetServicePathAndQuery($"/api/rates/queue/length")
            .Build();

        HttpResponseMessage response = await httpClient.GetAsync(getUrl, this.cancellationSource.Token);
        if (response.StatusCode != System.Net.HttpStatusCode.OK)
        {
            return this.StatusCode((int)response.StatusCode);
        }

        string result = await response.Content.ReadAsStringAsync();
        count += Int64.Parse(result);
    }

    return this.Ok(count);
}

FabricClient is the .NET client that provides all sorts of management capabilities. It is injected into the Web Service controllers to allow them to communicate with each partition and get the needed results as shown above. The Web Service then adds up the count from each partition and returns the total length of all partition queues.
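
For reference, the stateless Web Service can make the FabricClient available to its controllers the same way the stateful service injects its state manager; the sketch below follows that pattern, though the demo app's actual listener code may differ in the details (for instance the cancellation source field name):

protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new ServiceInstanceListener[1]
    {
        new ServiceInstanceListener(
            context =>
            {
                string tenantName = new Uri(context.CodePackageActivationContext.ApplicationName).Segments.Last();

                return new WebHostCommunicationListener(
                    context,
                    tenantName,
                    "ServiceEndpoint",
                    uri =>
                        new WebHostBuilder().UseWebListener()
                            .ConfigureServices(
                                services => services
                                    .AddSingleton<StatelessServiceContext>(this.Context)
                                    .AddSingleton<FabricClient>(new FabricClient())
                                    .AddSingleton<CancellationTokenSource>(this._webApiCancellationSource))
                            .UseContentRoot(Directory.GetCurrentDirectory())
                            .UseStartup<Startup>()
                            .UseUrls(uri)
                            .Build());
            })
    };
}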

Similarly, the Web Service uses the FabricClient to communicate with each partition to get and aggregate the city results:

[HttpGet]
[Route("cities")]
public async Task<IActionResult> GetCitiesAsync()
{
    ServiceUriBuilder uriBuilder = new ServiceUriBuilder(RatesServiceName);
    Uri serviceUri = uriBuilder.Build();

    // service may be partitioned.
    // this will aggregate cities from all partitions
    ServicePartitionList partitions = await this.fabricClient.QueryManager.GetPartitionListAsync(serviceUri);

    HttpClient httpClient = new HttpClient(new HttpServiceClientHandler());

    List<CityStats> cities = new List<CityStats>();
    foreach (Partition partition in partitions)
    {
        Uri getUrl = new HttpServiceUriBuilder()
            .SetServiceName(serviceUri)
            .SetPartitionKey(((Int64RangePartitionInformation)partition.PartitionInformation).LowKey)
            .SetServicePathAndQuery($"/api/rates/cities")
            .Build();

        HttpResponseMessage response = await httpClient.GetAsync(getUrl, this.cancellationSource.Token);
        if (response.StatusCode != System.Net.HttpStatusCode.OK)
        {
            return this.StatusCode((int)response.StatusCode);
        }

        JsonSerializer serializer = new JsonSerializer();
        using (StreamReader streamReader = new StreamReader(await response.Content.ReadAsStreamAsync()))
        {
            using (JsonTextReader jsonReader = new JsonTextReader(streamReader))
            {
                List<CityStats> result = serializer.Deserialize<List<CityStats>>(jsonReader);
                if (result != null)
                {
                    cities.AddRange(result);
                }
            }
        }
    }

    return this.Ok(cities);
}

Multi-Tenancy#

One of the great features of Service Fabric is its support for multi-tenant scenarios. In our demo case, we may launch an app for Contoso rates and another one for Fabrican rates. We want these two apps to be of the same type but completely isolated from each other. So we create two named app instances: ContosoRateAggregatorApp and FabricanRateAggregatorApp. This means that each app instance gets its own set of services, operated independently and perhaps scaled, updated and upgraded independently.

Named App Instances

This is really useful in many scenarios and allows for many great advantages. In the next section, we will see how easy it is to actually deploy, un-deploy, update and upgrade these named instances.

Configuration#

Given that we have multiple named app instances, how do we pass different parameters for each named instance? In the RatesService, we would like to have the name of the provider (and probably other configuration items) so we can communicate with the provider to pull rates. In our demo app, we are not actually communicating with the provider.

To do this, we define parameters for the RatesService in the Service Settings file as follows:

<?xml version="1.0" encoding="utf-8" ?>
<Settings xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <!-- Add your custom configuration sections and parameters here -->
  <Section Name="ParametersSection">
    <Parameter Name="ProviderName" Value="" />
  </Section>
</Settings>

The section name can be anything. In our case, it is ParametersSection. To be able to override this value for a specific application instance, we create a ConfigOverride when we import the service manifest in the application manifest:

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="RatesServicePkg" ServiceManifestVersion="1.0.0" />
  <ConfigOverrides>
    <ConfigOverride Name="Config">
      <Settings>
        <Section Name="ParametersSection">
          <Parameter Name="ProviderName" Value="[RatesService_ProviderName]" />
        </Section>
      </Settings>
    </ConfigOverride>
  </ConfigOverrides>
</ServiceManifestImport>

The convention is to name the override parameter ServiceName_ParameterName.

Finally, we must adjust the application manifest to include the required parameter:

<ApplicationManifest xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ApplicationTypeName="RateAggregatorAppType" ApplicationTypeVersion="1.0.0" xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <Parameters>
    <Parameter Name="RatesService_ProviderName" DefaultValue="" />
  </Parameters>
  ...
</ApplicationManifest>

Then, at deployment time (as you will see in more detail in the deployment script), we specify a PowerShell hashtable to override these parameters per named instance:

New-ServiceFabricApplication -ApplicationTypeName $appTypeName -ApplicationTypeVersion $version -ApplicationName $appName -ApplicationParameter @{"RatesService_ProviderName" = "Contoso"}
New-ServiceFabricApplication -ApplicationTypeName $appTypeName -ApplicationTypeVersion $version -ApplicationName $appName -ApplicationParameter @{"RatesService_ProviderName" = "Fabrican"}

The RatesService code can then make use of this parameter to contact the instance-bound provider.
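
Inside the service, reading the overridden value goes through the code package activation context; a minimal sketch (the section and parameter names match the manifests above, everything else is illustrative):

// Read the per-instance ProviderName setting from the Config package (sketch)
ConfigurationPackage configPackage = this.Context.CodePackageActivationContext.GetConfigurationPackageObject("Config");
string providerName = configPackage.Settings
    .Sections["ParametersSection"]
    .Parameters["ProviderName"]
    .Value;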

PowerShell Management Scripts#

Deployment#

This PowerShell script assumes that you used Visual Studio to generate the Service Fabric app package (right-click the Service Fabric app and select Package) or that you built the app package manually as demonstrated in a previous post. The created package directory is expected to be named v1.0.0, where 1.0.0 is the version number.

This PowerShell script copies the package to the cluster, registers the app type and creates two named app instances (i.e. Contoso and Fabrican). In each app instance, it creates two services: the Web Service as a front-end and the Rates Service as a back-end.

$clusterUrl = "localhost"
$imageStoreConnectionString = "file:C:\SfDevCluster\Data\ImageStoreShare" # Use this with OneBox
If ($clusterUrl -ne "localhost")
{
    $imageStoreConnectionString = "fabric:ImageStore" # Use this when not using OneBox
}
# Used only for the image store....it can be any name!!!
$appPkgName = "RateAggregatorAppTypePkg"
# Define the app and service types
$appTypeName = "RateAggregatorAppType"
$webServiceTypeName = "WebServiceType"
$ratesServiceTypeName = "RatesServiceType"
# Define the version
$version = "1.0.0"
# Connect PowerShell session to a cluster
Connect-ServiceFabricCluster -ConnectionEndpoint ${clusterUrl}:19000
# Copy the application package to the cluster
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "RateAggregatorApp\pkg\v$version" -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore $appPkgName
# Register the application package's application type/version
Register-ServiceFabricApplicationType -ApplicationPathInImageStore $appPkgName
# After registering the package's app type/version, you can remove the package
Remove-ServiceFabricApplicationPackage -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore $appPkgName
# Deploy the first application instance (i.e. Contoso)
$appName = "fabric:/ContosoRateAggregatorApp"
$webServiceName = $appName + "/WebService"
$ratesServiceName = $appName + "/RatesService"
# Create a named application from the registered app type/version
New-ServiceFabricApplication -ApplicationTypeName $appTypeName -ApplicationTypeVersion $version -ApplicationName $appName -ApplicationParameter @{"RatesService_ProviderName" = "Contoso"}
# Create a named service within the named app from the service's type
New-ServiceFabricService -ApplicationName $appName -ServiceTypeName $webServiceTypeName -ServiceName $webServiceName -Stateless -PartitionSchemeSingleton -InstanceCount 1
# Create a named service within the named app from the service's type
# For stateful services, it is important to indicate in the service manifest that the service is stateful and that it has a persisted state:
# <StatefulServiceType ServiceTypeName="RatesServiceType" HasPersistedState="true"/>
# Actually all of these switches are important on the PowerShell command:
# -PartitionSchemeUniformInt64 $true -PartitionCount 4 -MinReplicaSetSize 2 -TargetReplicaSetSize 3 -LowKey 0 -HighKey 3 -HasPersistedState
New-ServiceFabricService -ApplicationName $appName -ServiceTypeName $ratesServiceTypeName -ServiceName $ratesServiceName -PartitionSchemeUniformInt64 $true -PartitionCount 4 -MinReplicaSetSize 2 -TargetReplicaSetSize 3 -LowKey 0 -HighKey 3 -HasPersistedState
# Deploy the second application instance (i.e. Fabrican)
$appName = "fabric:/FabricanRateAggregatorApp"
$webServiceName = $appName + "/WebService"
$ratesServiceName = $appName + "/RatesService"
# Create a named application from the registered app type/version
New-ServiceFabricApplication -ApplicationTypeName $appTypeName -ApplicationTypeVersion $version -ApplicationName $appName -ApplicationParameter @{"RatesService_ProviderName" = "Fabrican"}
# Create a named service within the named app from the service's type
New-ServiceFabricService -ApplicationName $appName -ServiceTypeName $webServiceTypeName -ServiceName $webServiceName -Stateless -PartitionSchemeSingleton -InstanceCount 1
# Create a named service within the named app from the service's type
New-ServiceFabricService -ApplicationName $appName -ServiceTypeName $ratesServiceTypeName -ServiceName $ratesServiceName -PartitionSchemeUniformInt64 $true -PartitionCount 4 -MinReplicaSetSize 2 -TargetReplicaSetSize 3 -LowKey 0 -HighKey 3 -HasPersistedState

Obliterate#

This PowerShell script removes all named application instances and their services from the selected cluster. It does this based on the application type.

$clusterUrl = "localhost"
# Define the app and service types
$applicationTypes = "RateAggregatorAppType"
# Connect PowerShell session to a cluster
Connect-ServiceFabricCluster -ConnectionEndpoint ${clusterUrl}:19000
# Remove all named application instances and their services
Get-ServiceFabricApplication | Where-Object { $applicationTypes -contains $_.ApplicationTypeName } | Remove-ServiceFabricApplication -Force
Get-ServiceFabricApplicationType | Where-Object { $applicationTypes -contains $_.ApplicationTypeName } | Unregister-ServiceFabricApplicationType -Force

Update#

This PowerShell script updates the web service in each named app instance to have 5 instances. Please note that this only works if the number of instances does not exceed the number of nodes in the cluster.

$clusterUrl = "localhost"
# Update the first application instance (i.e. Contoso)
$appName = "fabric:/ContosoRateAggregatorApp"
$webServiceName = $appName + "/WebService"
# Dynamically change the named service's number of instances (the cluster must have at least 5 nodes)
Update-ServiceFabricService -ServiceName $webServiceName -Stateless -InstanceCount 5 -Force
# Update the second application instance (i.e. Fabrican)
$appName = "fabric:/FabricanRateAggregatorApp"
$webServiceName = $appName + "/WebService"
# Dynamically change the named service's number of instances (the cluster must have at least 5 nodes)
Update-ServiceFabricService -ServiceName $webServiceName -Stateless -InstanceCount 5 -Force

Upgrade#

This PowerShell script upgrades the named application instances to a higher version, i.e. 1.1.0. As noted earlier, this assumes that you have a new folder named v1.1.0 which contains the upgraded application package. The script uses the monitored upgrade mode and performs the upgrade using upgrade domains.

$clusterUrl = "localhost"
$imageStoreConnectionString = "file:C:\SfDevCluster\Data\ImageStoreShare" # Use this with OneBox
If ($clusterUrl -ne "localhost")
{
    $imageStoreConnectionString = "fabric:ImageStore" # Use this when not using OneBox
}
# Used only for the image store....it can be any name!!!
$appPkgName = "RateAggregatorAppTypePkg"
# Define the new version
$version = "1.1.0"
# Connect PowerShell session to a cluster
Connect-ServiceFabricCluster -ConnectionEndpoint ${clusterUrl}:19000
# Copy the application package to the cluster
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "RateAggregatorApp\pkg\v$version" -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore $appPkgName
# Register the application package's application type/version
Register-ServiceFabricApplicationType -ApplicationPathInImageStore $appPkgName
# After registering the package's app type/version, you can remove the package
Remove-ServiceFabricApplicationPackage -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore $appPkgName
# Upgrade the first application instance (i.e. Contoso)
$appName = "fabric:/ContosoRateAggregatorApp"
# Upgrade the application to the new version
Start-ServiceFabricApplicationUpgrade -ApplicationName $appName -ApplicationTypeVersion $version -Monitored -UpgradeReplicaSetCheckTimeoutSec 100
# Upgrade the second application instance (i.e. Fabrican)
$appName = "fabric:/FabricanRateAggregatorApp"
# Upgrade the application to the new version
Start-ServiceFabricApplicationUpgrade -ApplicationName $appName -ApplicationTypeVersion $version -Monitored -UpgradeReplicaSetCheckTimeoutSec 100

Test#

This PowerShell script defines functions to exercise the Web Service APIs for each named application instance.

Function Generate-RateRequests($appName = 'Contoso', $iterations = 20)
{
    Try {
        Write-Host "Generating $iterations random rate requests against $appName ...." -ForegroundColor Green
        $url = "Http://localhost:8082/$appName" + "RateAggregatorApp/api/requests"
        foreach($i in 1..$iterations)
        {
            $checkInDate = get-date -Year (get-random -minimum 2012 -maximum 2016) -Month (get-random -minimum 1 -maximum 12) -Day (get-random -minimum 1 -maximum 28)
            $nights = get-random -minimum 1 -maximum 30
            $checkOutDate = $checkInDate.AddDays($nights)
            $hotelId = get-random -input "1", "2", "3" -count 1
            $body = @{
                CheckInDate = get-date $checkInDate -Format "yyyy-MM-ddTHH:mm:ss";
                CheckOutDate = get-date $checkOutDate -Format "yyyy-MM-ddTHH:mm:ss";
                HotelId = $hotelId;
                HotelName = "Hotel$hotelId";
                City = "City$hotelId";
                Country = get-random -input "USA", "USA", "USA", "CAN", "CAN", "CAN", "AUS", "AUS", "AUS", "FRA", "GER", "UAE" -count 1
            }
            Write-Host "This is the JSON we are generating for iteration # $i...." -ForegroundColor yellow
            $json = ConvertTo-Json $body -Depth 3
            $json
            $result = Invoke-RestMethod -Uri $url -Headers @{"Content-Type"="application/json" } -Body $json -Method POST -TimeoutSec 600
        }
    } Catch {
        Write-Host "Failure message: $_.Exception.Message" -ForegroundColor red
        Write-Host "Failure stack trace: $_.Exception.StackTrace" -ForegroundColor red
        Write-Host "Failure inner exception: $_.Exception.InnerException" -ForegroundColor red
    }
}

Function View-QueueLength($appName = 'Contoso')
{
    Try {
        Write-Host "View Queue Length for $appName...." -ForegroundColor Green
        $url = "Http://localhost:8082/$appName" + "RateAggregatorApp/api/stats/queue/length"
        $result = Invoke-RestMethod -Uri $url -Headers @{"Content-Type"="application/json" } -Method GET -TimeoutSec 600
        $json = ConvertTo-Json $result -Depth 3
        $json
    } Catch {
        Write-Host "Failure message: $_.Exception.Message" -ForegroundColor red
        Write-Host "Failure stack trace: $_.Exception.StackTrace" -ForegroundColor red
        Write-Host "Failure inner exception: $_.Exception.InnerException" -ForegroundColor red
    }
}

Function View-Cities($appName = 'Contoso')
{
    Try {
        Write-Host "View cities for $appName...." -ForegroundColor Green
        $url = "Http://localhost:8082/$appName" + "RateAggregatorApp/api/stats/cities"
        $result = Invoke-RestMethod -Uri $url -Headers @{"Content-Type"="application/json" } -Method GET -TimeoutSec 600
        $json = ConvertTo-Json $result -Depth 3
        $json
    } Catch {
        Write-Host "Failure message: $_.Exception.Message" -ForegroundColor red
        Write-Host "Failure stack trace: $_.Exception.StackTrace" -ForegroundColor red
        Write-Host "Failure inner exception: $_.Exception.InnerException" -ForegroundColor red
    }
}
Generate-RateRequests -appName Contoso -iterations 100
Generate-RateRequests -appName Fabrican -iterations 100
View-QueueLength -appName Contoso
View-QueueLength -appName Fabrican
View-Cities -appName Contoso
View-Cities -appName Fabrican

What is next?#

I think Service Fabric has a lot of great and useful features that make it a strong candidate for many scenarios. I will post more articles about Service Fabric as I expand my knowledge of this really cool technology.

Service Fabric Basics

Khaled Hikmat

Software Engineer

Service Fabric is a cool technology from Microsoft! It has advanced features that enable many scenarios, but in this post we will only cover basic concepts that are often misunderstood.

For the purpose of this demo, we are going to develop a very basic guest executable service written as a console app. We will use very basic application and service manifests and a PowerShell script to deploy to Service Fabric, and show how Service Fabric monitors services, reports their health and allows for upgrade and update.

The source code for this post is available here. Most of the code and ideas are credited to Jeff Richter of the Service Fabric Team.

Guest Service#

The guest service is a basic Win32 console app that starts an HttpListener on a port passed as an argument. The little web server responds to requests like so:

Web Server

Note that the service is NOT running in the Service Fabric cluster.

That is it!! This simple web server accepts a command called crash which will kill the service completely:

http://localhost:8800?cmd=crash

In fact, it does support multiple commands:

var command = request.QueryString["cmd"];
if (!string.IsNullOrEmpty(command))
{
    switch (command.ToLowerInvariant())
    {
        case "delay":
            Int32.TryParse(request.QueryString["delay"], out _delay);
            break;
        case "crash":
            Environment.Exit(-1);
            break;
    }
}
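
For context, the surrounding listener loop looks roughly like this; it is a sketch rather than the post's actual sample code, so names and the response body are illustrative:

// Minimal console host: listen on the port passed as the first argument (8800 per the manifest)
static int _delay = 0;

static void Main(string[] args)
{
    string port = args.Length > 0 ? args[0] : "8800";
    var listener = new HttpListener();
    listener.Prefixes.Add($"http://+:{port}/");
    listener.Start();

    while (true)
    {
        HttpListenerContext context = listener.GetContext();
        HttpListenerRequest request = context.Request;

        // ... the "delay" and "crash" command handling shown above goes here ...

        if (_delay > 0) Thread.Sleep(_delay * 1000);

        byte[] body = Encoding.UTF8.GetBytes("Hello from the crashable service");
        context.Response.OutputStream.Write(body, 0, body.Length);
        context.Response.Close();
    }
}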

In order to make this service highly available, let us see how we can package it to run within Service Fabric. Please note that this service is not cognizant of Service Fabric at all. It is purely a simple Win32 service written as a console app.

Please note:

  • To debug the service locally from Visual Studio, you need to start VS in administrator mode.
  • Service Fabric requires the projects to be x64! So you must change your projects to target x64 using the Visual Studio Configuration Manager.

Application Package#

An application package in Service Fabric is nothing but a folder that contains certain manifests in specific sub-folders! We will build the directory by hand instead of using Visual Studio so we can see exactly what these steps involve. Let us create a directory called BasicAvailabilityApp (i.e. c:\BasicAvailabilityApp) to describe the Service Fabric application.

The root folder#

The root folder contains the application manifest and a sub-folder for each service it contains. Here is what the application manifest looks like for this demo application:

<?xml version="1.0" encoding="utf-8"?>
<ApplicationManifest ApplicationTypeName="BasicAvailabilityAppType" ApplicationTypeVersion="1.0.0"
                     xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="CrashableServiceTypePkg" ServiceManifestVersion="1.0.0" />
  </ServiceManifestImport>
</ApplicationManifest>

There are several pieces of information in this manifest:

  • The application type: BasicAvailabilityAppType.
  • The application version: 1.0.0.
  • The application imports a single service manifest, CrashableServiceTypePkg, with version 1.0.0.
  • The XML namespaces are not important to us.

This is what the application folder looks like:

Root Application Folder

The service folder#

The service folder contains the service manifest and a sub-folder for each package it contains. Here is what the service manifest looks like for this demo service:

<?xml version="1.0" encoding="utf-8"?>
<ServiceManifest Name="CrashableServiceTypePkg"
                 Version="1.0.0"
                 xmlns="http://schemas.microsoft.com/2011/01/fabric"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ServiceTypes>
    <StatelessServiceType ServiceTypeName="CrashableServiceType" UseImplicitHost="true" />
  </ServiceTypes>
  <CodePackage Name="CrashableCodePkg" Version="1.0.0">
    <EntryPoint>
      <ExeHost>
        <Program>CrashableService.exe</Program>
        <Arguments>8800</Arguments>
      </ExeHost>
    </EntryPoint>
  </CodePackage>
  <!-- ACL the 8800 port where the crashable service listens -->
  <Resources>
    <Endpoints>
      <Endpoint Name="InputEndpoint" Port="8800" Protocol="http" Type="Input" />
    </Endpoints>
  </Resources>
</ServiceManifest>

There are several pieces of information in this manifest:

  • The service package: CrashableServiceTypePkg.
  • The service version: 1.0.0.
  • The service type: CrashableServiceType.
  • The service type is stateless.
  • The service code package exists in a sub-folder called CrashableCodePkg and it is of version 1.0.0.
  • The service code consists of an executable called CrashableService.exe.
  • The XML namespaces are not important to us.
  • The Endpoints element must be specified to allow Service Fabric to ACL the port that we want opened for our service to listen on. The Input type instructs SF to accept input from the Internet.

This is what the service folder looks like:

service Folder

This is what it takes to package an application in Service Fabric.
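
Putting the two manifests together, the package folder on disk ends up looking roughly like this (a sketch; the code package folder name matches the CodePackage name in the service manifest):

C:\BasicAvailabilityApp
    ApplicationManifest.xml
    CrashableServiceTypePkg\
        ServiceManifest.xml
        CrashableCodePkg\
            CrashableService.exe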

Deployment#

Please note that the package we created in the previous step needs to be deployed to Service Fabric in order to run. To do this, we will need to use either Visual Studio or PowerShell. Since we want to use the lower level commands, we will use PowerShell instead of Visual Studio:

Here is the PowerShell script that we can use:

# Define equates (hard-coded):
$clusterUrl = "localhost"
$imageStoreConnectionString = "file:C:\SfDevCluster\Data\ImageStoreShare"
$appPkgName = "BasicAvailabilityAppTypePkg"
$appTypeName = "BasicAvailabilityAppType"
$appName = "fabric:/BasicAvailabilityApp"
$serviceTypeName = "CrashableServiceType"
$serviceName = $appName + "/CrashableService"
# Connect PowerShell session to a cluster
Connect-ServiceFabricCluster -ConnectionEndpoint ${clusterUrl}:19000
# Copy the application package to the cluster
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "BasicAvailabilityApp" -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore $appPkgName
# Register the application package's application type/version
Register-ServiceFabricApplicationType -ApplicationPathInImageStore $appPkgName
# After registering the package's app type/version, you can remove the package from the cluster image store
Remove-ServiceFabricApplicationPackage -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore $appPkgName
# Create a named application from the registered app type/version
New-ServiceFabricApplication -ApplicationTypeName $appTypeName -ApplicationTypeVersion "1.0.0" -ApplicationName $appName
# Create a named service within the named app from the service's type
New-ServiceFabricService -ApplicationName $appName -ServiceTypeName $serviceTypeName -ServiceName $serviceName -Stateless -PartitionSchemeSingleton -InstanceCount 1

The key commands are the last two where we:

  • Create a named application name from the registered application type and version:
# Create a named application from the registered app type/version
New-ServiceFabricApplication -ApplicationTypeName $appTypeName -ApplicationTypeVersion "1.0.0" -ApplicationName $appName
  • Create a named service within the named app from the service type:
# Create a named service within the named app from the service's type
New-ServiceFabricService -ApplicationName $appName -ServiceTypeName $serviceTypeName -ServiceName $serviceName -Stateless -PartitionSchemeSingleton -InstanceCount 1

This is extremely significant as it allows us to create multiple application instances within the same cluster, with each named application instance having its own set of services. This is how the named applications and services are related to the cluster (this is taken from a Service Fabric team presentation):

Naming Stuff

Once the named application and the named service are deployed, the Service Fabric explorer shows it like this:

Success Deployment

Now, if we access the service in Service Fabric, we will get a response that clearly indicates that the service is indeed running in Service Fabric:

Deployed in SF

Note that the service is running in Node 1 of the Service Fabric cluster.

Availability#

One of the major selling points of Service Fabric is its ability to make services highly available by monitoring them and restarting them if necessary.

Regardless of whether the service is a guest executable or a Service Fabric-aware service, Service Fabric monitors the service to make sure it runs correctly. In our case, the service crashes whenever a crash command is submitted. So if you crash the service, you will see that Service Fabric detects the failure and reports bad health in the Service Fabric Explorer:

Error Deployment

You will notice that the little web server is no longer available when you try to access it. But if you wait a few seconds and try again, you will be very happy to find that the web server is available again. This is because Service Fabric detected that the service went down, restarted it and made it available again, holding to the promise of high availability or self-healing.

However, there is one little problem! The unhealthy indicators (warnings or errors) on the explorer may never go away because there isn't anything that resets them. So the health reports will continue to be shown once they are reported. This could become a bit of a problem if you have an external tool that reads health check state.

The above statement is not entirely true! I have seen the latest versions of Service Fabric remove the warnings/errors after a little while.

In any case, I will show a better way (in my opinion) to deal with this shortly in this post. So read on if you are interested.

Cleanup#

In order to remove the named application and its services, you can issue these PowerShell commands:

# Delete the named service
Remove-ServiceFabricService -ServiceName $serviceName -Force
# Delete the named application and its named services
Remove-ServiceFabricApplication -ApplicationName $appName -Force

In order to delete the application type:

# If no named apps are running, you can delete the app type/version
Unregister-ServiceFabricApplicationType -ApplicationTypeName $appTypeName -ApplicationTypeVersion "1.0.0" -Force

Versions & Upgrade#

It turns out that Service Fabric does not really care how you name your versions! If you name your versions with numbers like 1.0.0 or 1.1.0, that naming convention is referred to as semantic versioning. But you are free to use whatever version naming convention you want.

Let us use a different version scheme for our simple app. How about alpha, beta and productionV1, productionV2, etc.? Let us clean up our app from the cluster (as shown above), apply some changes to the crashable service, update the manifest files to make the version Beta and re-deploy using the Beta version:

The Application Manifest#

<?xml version="1.0" encoding="utf-8"?>
<ApplicationManifest ApplicationTypeName="BasicAvailabilityAppType"
                     ApplicationTypeVersion="Beta"
                     xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="CrashableServiceTypePkg" ServiceManifestVersion="Beta" />
  </ServiceManifestImport>
</ApplicationManifest>

The Service Manifest#

<?xml version="1.0" encoding="utf-8"?>
<ServiceManifest Name="CrashableServiceTypePkg"
                 Version="Beta"
                 xmlns="http://schemas.microsoft.com/2011/01/fabric"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ServiceTypes>
    <StatelessServiceType ServiceTypeName="CrashableServiceType" UseImplicitHost="true" />
  </ServiceTypes>
  <CodePackage Name="CrashableCodePkg" Version="Beta">
    <EntryPoint>
      <ExeHost>
        <Program>CrashableService.exe</Program>
        <Arguments>8800</Arguments>
      </ExeHost>
    </EntryPoint>
  </CodePackage>
  <!-- ACL the 8800 port where the crashable service listens -->
  <Resources>
    <Endpoints>
      <Endpoint Name="InputEndpoint" Port="8800" Protocol="http" Type="Input" />
    </Endpoints>
  </Resources>
</ServiceManifest>

Deployment#

# Define equates (hard-coded):
$clusterUrl = "localhost"
$imageStoreConnectionString = "file:C:\SfDevCluster\Data\ImageStoreShare"
$appPkgName = "BasicAvailabilityAppTypePkg"
$appTypeName = "BasicAvailabilityAppType"
$appName = "fabric:/BasicAvailabilityApp"
$serviceTypeName = "CrashableServiceType"
$serviceName = $appName + "/CrashableService"
# Connect PowerShell session to a cluster
Connect-ServiceFabricCluster -ConnectionEndpoint ${clusterUrl}:19000
# Copy the application package to the cluster
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "BasicAvailabilityApp" -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore $appPkgName
# Register the application package's application type/version
Register-ServiceFabricApplicationType -ApplicationPathInImageStore $appPkgName
# After registering the package's app type/version, you can remove the package from the cluster image store
Remove-ServiceFabricApplicationPackage -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore $appPkgName
# Create a named application from the registered app type/version
New-ServiceFabricApplication -ApplicationTypeName $appTypeName -ApplicationTypeVersion "Beta" -ApplicationName $appName
# Create a named service within the named app from the service's type
New-ServiceFabricService -ApplicationName $appName -ServiceTypeName $serviceTypeName -ServiceName $serviceName -Stateless -PartitionSchemeSingleton -InstanceCount 1

Upgrade#

Now that the Beta version is deployed, let us make another change in the service, change the version to ProductionV1 (in the application and service manifests) and issue the following PowerShell commands to register and upgrade to ProductionV1:

# Copy the application package ProductionV1 to the cluster
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "BasicAvailabilityApp-ProductionV1" -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore $appPkgName
# Register the application package's application type/version
Register-ServiceFabricApplicationType -ApplicationPathInImageStore $appPkgName
# After registering the package's app type/version, you can remove the package
Remove-ServiceFabricApplicationPackage -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore $appPkgName
# Upgrade the application from Beta to ProductionV1
Start-ServiceFabricApplicationUpgrade -ApplicationName $appName -ApplicationTypeVersion "ProductionV1" -UnmonitoredAuto -UpgradeReplicaSetCheckTimeoutSec 100

The upgrade takes place using a concept called Upgrade Domains which makes sure that the service that is being upgraded does not ever become unavailable:

Upgrade Domains

Once the upgrade is done, the new application and service version is ProductionV1:

Production V1

Updates#

Now that our service is in production, let us see how we can increase and decrease its number of instances at will. This is very useful for scaling the service up and down depending on parameters determined by the operations team.

You may have noticed that we have always used instance count 1 when we deployed our named service:

# Create a named service within the named app from the service's type
New-ServiceFabricService -ApplicationName $appName -ServiceTypeName $serviceTypeName -ServiceName $serviceName -Stateless -PartitionSchemeSingleton -InstanceCount 1

Let us try to increase the instance count to 5 using PowerShell:

# Dynamically change the named service's number of instances
Update-ServiceFabricService -ServiceName $serviceName -Stateless -InstanceCount 5 -Force

Please note that if your test cluster has fewer than 5 nodes, you will get health warnings from Service Fabric because SF will not place more instances than the number of available nodes. This is because SF cannot guarantee availability if it places multiple instances on the same node.

Anyway, if you get health warnings or if you would like to scale back your service, you can reduce the number of instances using this PowerShell command:

Update-ServiceFabricService -ServiceName $serviceName -Stateless -InstanceCount 1 -Force

Please notice how fast the scaling (up or down) takes place!!

Better High Availability#

In a previous section in this post, we deployed the crashable service and watched it crash when we submitted a crash command. Service Fabric reported the failure, restarted the service and made it available again. Now we will modify the deployment process to provide a better way to take care of the re-start process.

To do so, we will need another service that monitors our crashable service and reports health checks to Service Fabric. This new code is Service Fabric aware and is demonstrated by Jeff Richter of the Service Fabric team.

Let us modify the application package to include this new code. Remember our goal is not to change the crashable service at all.

The Service Manifest#

<?xml version="1.0" encoding="utf-8"?>
<ServiceManifest Name="CrashableServiceTypePkg"
                 Version="Beta"
                 xmlns="http://schemas.microsoft.com/2011/01/fabric"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ServiceTypes>
    <StatelessServiceType ServiceTypeName="CrashableServiceType" UseImplicitHost="true" />
  </ServiceTypes>
  <!-- Code that is NOT Service-Fabric aware -->
  <!-- Remove Console Redirection in production -->
  <!-- https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-existing-app -->
  <CodePackage Name="CrashableCodePkg" Version="Beta">
    <EntryPoint>
      <ExeHost>
        <Program>CrashableService.exe</Program>
        <Arguments>8800</Arguments>
        <ConsoleRedirection FileRetentionCount="5" FileMaxSizeInKb="2048"/>
      </ExeHost>
    </EntryPoint>
  </CodePackage>
  <!-- Code that is Service-Fabric aware -->
  <!-- Remove Console Redirection in production -->
  <!-- https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-existing-app -->
  <CodePackage Name="MonitorCodePkg" Version="Beta">
    <EntryPoint>
      <ExeHost>
        <Program>MonitorService.exe</Program>
        <Arguments>8800</Arguments>
        <ConsoleRedirection FileRetentionCount="5" FileMaxSizeInKb="2048"/>
      </ExeHost>
    </EntryPoint>
  </CodePackage>
  <!-- ACL the 8800 port where the crashable service listens -->
  <Resources>
    <Endpoints>
      <Endpoint Name="InputEndpoint" Port="8800" Protocol="http" Type="Input" />
    </Endpoints>
  </Resources>
</ServiceManifest>

There are several things here:

  • Our crashable service is still the same. It accepts an argument to tell it which port number to listen on.
  • ConsoleRedirection is added to allow us to see the console output in the SF log files. This is to be removed in production.
  • There is still one service, i.e. CrashableServiceType, but now two code packages: one for the original exe and another for the monitor that will watch our crashable service. This is really nice as it allows us to add Service Fabric-aware code to an existing service without much intervention.
  • The Endpoints element must be specified to allow Service Fabric to ACL the port that we want opened for our service to listen on. The Input type instructs SF to accept input from the Internet.

The package folders look like this:

Advanced Service Dir

The Monitor Service#

It is also a console app!! But it includes a Service Fabric NuGet package so it can use the FabricClient to report health checks to the local cluster. Basically, it sets up a timer to check the performance and availability of our crashable service. It reports to Service Fabric when failures take place.

Doing so makes our crashable service much more resilient to crashes or slow performance, as it is watched by the monitor service and re-started if necessary by Service Fabric. The health checks are also cleared much more quickly.
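
Conceptually, the monitor's health reporting boils down to something like the following; this is a sketch rather than Jeff Richter's actual code, and details such as the source/property strings, the probeSucceeded flag and the time-to-live are assumptions:

// Report the latest probe result for the crashable service to the cluster (sketch)
var fabricClient = new FabricClient();

var healthInfo = new HealthInformation("MonitorService", "CrashableServiceProbe",
    probeSucceeded ? HealthState.Ok : HealthState.Warning)
{
    // Expired reports get removed automatically, so stale warnings do not linger in the explorer
    TimeToLive = TimeSpan.FromSeconds(30),
    RemoveWhenExpired = true
};

fabricClient.HealthManager.ReportHealth(
    new DeployedServicePackageHealthReport(
        new Uri("fabric:/BasicAvailabilityApp"),
        "CrashableServiceTypePkg",
        FabricRuntime.GetNodeContext().NodeName,
        healthInfo));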

Console Outputs in the local cluster#

You can use the Service Fabric cluster explorer to find out where Service Fabric stores services on disk. This is available from the Nodes section:

SF Cluster Nodes

Node Disk

This directory has a log folder that stores the output of each service. This can be very useful for debug purposes. To use it, however, you must have the ConsoleRedirection turned on as shown above.

What is next?#

In future posts, I will use Service Fabric .NET programming model to develop and deploy stateless and stateful services to demonstrate Service Fabric fundamental concepts.

Xamarin Forms App using VS for mac

Khaled Hikmat

Software Engineer

I am new to Xamarin development and I also wanted to try the newly announced VS for Mac! So I created a little camera app that can be used to evaluate presentations. The app allows the user to take a selfie. When committed, the picture is sent to an Azure Cognitive Services API to extract the age, gender and smile (a measure of emotion). The app then displays the taken picture and the returned result. It also sends the result to a Power BI real-time stream to allow visualization of the evaluation results.

So in essence, a user uses the app to take a selfie with a smile or a frown to indicate whether the presentation was good, not so good or somewhere in between. For example, if the user submitted a picture that looks like this:

Evaluation

The cognitive result might look like this:

Cognitive Result

and the result will be pushed in real-time to a PowerBI dashboard:

PowerBI

Xamarin App#

Taking a Camera picture#

Using VS for Mac, I created a blank XAML Forms app solution with Android and iOS. I added the following Xamarin plugin to all of its projects (portable, iOS and Droid):

  • Xam.Plugin.Media

This allows me to use the camera without having to deal with iOS or Android specifics. I wrote the following simple XAML in the main page:

<?xml version="1.0" encoding="utf-8"?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:PresentationEvaluation"
             x:Class="PresentationEvaluation.PresentationEvaluationPage">
  <StackLayout>
    <Button x:Name="btnTakePicture" Clicked="btnTakePicture_Clicked" Text="Take selfie with emotion"/>
    <ActivityIndicator x:Name="Indicator" Color="Black"/>
    <StackLayout x:Name="ResultPanel" Padding="10">
      <Image x:Name="Image" HeightRequest="240" />
      <StackLayout x:Name="Age" Orientation="Horizontal">
        <Label>Age</Label>
        <Label x:Name="AgeData"></Label>
      </StackLayout>
      <StackLayout x:Name="Gender" Orientation="Horizontal">
        <Label>Gender</Label>
        <Label x:Name="GenderData"></Label>
      </StackLayout>
      <StackLayout x:Name="Smile" Orientation="Horizontal">
        <Label>Smile</Label>
        <Label x:Name="SmileData"></Label>
      </StackLayout>
      <Label x:Name="Result"></Label>
    </StackLayout>
  </StackLayout>
</ContentPage>

and I had this in the code behind:

namespace PresentationEvaluation
{
    public partial class PresentationEvaluationPage : ContentPage
    {
        public PresentationEvaluationPage()
        {
            InitializeComponent();
            ResultPanel.IsVisible = false;
        }

        private async void btnTakePicture_Clicked(object sender, EventArgs e)
        {
            try
            {
                await CrossMedia.Current.Initialize();

                if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
                    throw new Exception($"There is no camera on the device!");

                var file = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
                {
                    SaveToAlbum = true,
                    Name = "SelfieEvaluation.jpg"
                });

                if (file == null)
                    throw new Exception($"Picture not captured to disk!!");

                Image.Source = ImageSource.FromStream(() => file.GetStream());

                //TODO: Do something with the image
            }
            catch (Exception ex)
            {
                await DisplayAlert("Sorry", "An error occurred: " + ex.Message, "Ok");
            }
            finally
            {
            }
        }
    }
}

Communicating with Cognitive#

Now that we got the picture from the Camera, I wanted to send it to Azure Cognitive to detect the age, gender and smile. I added some NuGet packages:

  • Microsoft.Net.Http
  • Newtonsoft.Json

First I had to convert the media image file to an array of bytes:

public static byte[] GetBytes(MediaFile file)
{
    byte[] fileBytes = null;
    using (var ms = new MemoryStream())
    {
        file.GetStream().CopyTo(ms);
        file.Dispose();
        fileBytes = ms.ToArray();
    }

    return fileBytes;
}

Then submitted to the congnitive APIs:

byte[] picture = GetBytes(file);
float age = -1;
string gender = "";
float smile = -1;
// Submit to Cognitive
using (var httpClient = new HttpClient())
{
httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "get-your-own");
HttpResponseMessage response;
var content = new ByteArrayContent(picture);
content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
response = await httpClient.PostAsync(FacialApi, content);
string responseData = await response.Content.ReadAsStringAsync();
if (!response.IsSuccessStatusCode)
throw new Exception($"Unable to post to cognitive service: {response.StatusCode.ToString()}");
Face[] faces = JsonConvert.DeserializeObject<Face[]>(responseData);
if (faces != null && faces.Length > 0)
{
Face face = faces[0];
age = face.faceAttributes.age;
gender = face.faceAttributes.gender;
smile = face.faceAttributes.smile;
}
}

Where the Face classes are defined as follows (I just special pasted the docs JSON example into my Visual Studio to create these classes):

public class Face
{
public string faceId { get; set; }
public Facerectangle faceRectangle { get; set; }
public Faceattributes faceAttributes { get; set; }
public string glasses { get; set; }
public Headpose headPose { get; set; }
}
public class Facerectangle
{
public int width { get; set; }
public int height { get; set; }
public int left { get; set; }
public int top { get; set; }
}
public class Faceattributes
{
public float age { get; set; }
public string gender { get; set; }
public float smile { get; set; }
public Facialhair facialHair { get; set; }
}
public class Facialhair
{
public float mustache { get; set; }
public float beard { get; set; }
public float sideburns { get; set; }
}
public class Headpose
{
public float roll { get; set; }
public int yaw { get; set; }
public int pitch { get; set; }
}

Because I have a free cognitive account and I could be throttled, I created a randomizer to generate random values in case i don't wato to use the cognitive functions for testing. So I created a flag that I can change whenever I want to test without cognitive:

// Submit to Cognitive
if (IsCognitive)
{
///same as above
...
}
else
{
gender = genders.ElementAt(random.Next(genders.Count - 1));
age = ages.ElementAt(random.Next(ages.Count - 1));
smile = smiles.ElementAt(random.Next(smiles.Count - 1));
}

PowerBI#

Once I get the result back from the cognitive function, I create a real time event (and refer to the smile range as score after I multiply it by 10) and send it to PowerBI real-time very useful feature which displays visualizations in real-time:

using (var httpClient = new HttpClient())
{
var realTimeEvent = new
{
time = DateTime.Now,
age = (int)age,
score = (int)(smile * 10),
gender = gender
};
var data = new dynamic[1];
data[0] = realTimeEvent;
var postData = JsonConvert.SerializeObject(data);
HttpContent httpContent = new StringContent(postData, Encoding.UTF8, "application/json");
HttpResponseMessage response = await httpClient.PostAsync(PowerBIApi, httpContent);
string responseString = await response.Content.ReadAsStringAsync();
if (!response.IsSuccessStatusCode)
throw new Exception("Unable to post to PowerBI: " + response.StatusCode);
}

where PowerBIApi is the real-time API that you must post it. You will get this from PowerPI service when you create your own Real-Time dataset.

This allows people to watch the presentation evaluation result in real-time:

PowerBI

That was a nice exercise! I liked the ease of developing stuff in Xamarin forms as it shields me almost completely from Android and iOS. Visual Studio for mac (in preview), however, has a lot of room of improvement...it feels heavy, clunky and a bit buggy. Finally I would like to say that, in non-demo situations, it is probably better to send the picture to an Azure storage which will trigger an Azure Function that will send to cognitive and PowerBI.

The code is available in GitHub here

Return a file in ASP.NET Core from a Web API

Khaled Hikmat

Khaled Hikmat

Software Engineer

In ASP .NET 4.x, I had this code to return a file from an ASP.NET Web API. This worked well and allowed a client-side JavaScript client to download the file with a progress indicator:

[Route("api/some/file", Name = "SomeFile")]
public async Task<HttpResponseMessage> GetFile()
{
var error = "";
try
{
//TODO: Get the file in a string called contentData
MemoryStream stream = new MemoryStream();
StreamWriter writer = new StreamWriter(stream);
writer.Write(contentData);
writer.Flush();
stream.Position = 0;
var result = new HttpResponseMessage(HttpStatusCode.OK)
{
Content = new StreamContent(stream)
};
result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
result.Content.Headers.ContentLength = stream.Length;
result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
{
FileName = "content.json",
Size = stream.Length
};
return result;
}
catch (Exception e)
{
// The tag = ControllerName.RouteName
error = e.Message;
// TODO: do something with the error
return new HttpResponseMessage(HttpStatusCode.BadRequest);
}
}

Recently I created a new ASP.NET Core project for some other purpose which also had a requirement to download a file from a Web API. So naturally I copied the same code over. But that did not work...I end up getting the result in JSON....it looks something like this:

{
"version": {
"major": 1,
"minor": 1,
"build": - 1,
"revision": - 1,
"majorRevision": - 1,
"minorRevision": - 1
},
"content": {
"headers": [{
"key": "Content-Type",
"value": ["application/octet-stream"]
}, {
"key": "Content-Length",
"value": ["2346262"]
}, {
"key": "Content-Disposition",
"value": ["attachment; filename=content.json; size=2346262"]
}
]
},
"statusCode": 200,
"reasonPhrase": "OK",
"headers": [],
"requestMessage": null,
"isSuccessStatusCode": true
}

After several attempts, I eventually I found out that this below code works well in ASP.NET Core and my JavaScript is able to show a download progress bar:

[Route("api/some/file", Name = "SomeFile")]
public async Task<HttpResponseMessage> GetFile()
{
var error = "";
try
{
//TODO: Get the file in a string called contentData
HttpContext.Response.ContentType = "application/json";
HttpContext.Response.ContentLength = Encoding.ASCII.GetBytes(contentData).Length;
HttpContext.Response.Headers["Content-Disposition"] = new ContentDispositionHeaderValue("attachment")
{
FileName = "content.json",
Size = HttpContext.Response.ContentLength
}.ToString();
HttpContext.Response.Headers["Content-Length"] = "" + HttpContext.Response.ContentLength;
FileContentResult result = new FileContentResult(Encoding.ASCII.GetBytes(contentData), "application/octet-stream")
{
FileDownloadName = "content.json"
};
return result;
}
catch (Exception e)
{
// TODO: Handle error
HttpContext.Response.StatusCode = 400;
...
}
}

I hope this tip helps someone!

Kicking PowerApps Tires

Khaled Hikmat

Khaled Hikmat

Software Engineer

PowerApps is a newly released platform/service to build Line-of-business applications from Microsoft. Reading some documentation and attending some online webcasts, I think the PowerApps product is well positioned for LOB! There is definitely a need to create LOB mobile apps at the enterprise level and distribute them seamlessly without the friction of the app stores.

The thing is that the last two (at least) platforms that Microsoft created to build LOB applications were eventually abandoned i.e. SilverLight and LightSwitch. Hence there could be some resistance from some developers to start learning this new platform knowing that it might also have the same fate as its predecessors. However, from the first couple of hours that I spent on PowerApps, it seems to be a very capable environment and really easy to do stuff in. So I wanted to show off a very simple app that demonstrates some simple but important capabilities.

Another thing to note is that although, at the time of writing, the product had just made it to public preview from a gated preview, the documentation looks really complete and actually quite good. There also seems to be strong and engaging community!!

Simple App#

I am building an app that presents tabular sales data to users and allows them to drill through the hierarchical nature of the data. For example, at the top level, users will see sales data for different regions and then can drill down to see the sales data for individual countries:

Hierarchy Sales Data

So I wanted the ability to navigate to the same screen in PowerApps but with different data. The sample then shows how I solved this particular problem in PowerApps.

Navigation Scheme#

I created two screens: the first is an initial page to present the users certain options and allows them to start viewing the sales data and the second one is the sales data screen that will be navigated to and from in order to view the drill through sales data.

In order to manage the navigation, I created a Stack collection (in PowerApps it is called a collection...but it is like a table) that holds the navigation history of each screen. Upon initial navigation from the initial screen, I clear the collection. When the user drills down, I push (i.e. add) to the collection the screen id of the screen that I just navigated from. When the user drills up, I pop (i.e. remove the last item) from the collection the screen id that I must navigate to. For this app, the only column that I have in this collection is the screen id:

Navigation Scheme

Initial Screen#

In PowerApps studio, this is how the initial screen looks like:

Initial Screen

So when the user taps the initial drill down icon, the following script is executed:

Clear(Stack); Navigate(Screen2, ScreenTransition.Fade, {Screen:{Id: 1}})

This script consists of 3 main things:

  • Clear(Stack); clears the navigation collection that I named Stack.
  • Navigate(Screen2, ScreenTransition.Fade, {Screen:{Id: 1}}) navigates to screen2 (which is the data screen) with a fade
  • {Screen:{Id: 1}} adds a context variable called Screen with a single column called Id that contains the value 1 i.e. Level 1

This context variable is short-lived as it is only available within the boundary of a single screen. I use it to pass the screen id from one screen to another.

Data Screen - Drill Down#

In PowerApps studio, the data screen drill down looks like this:

Data Screen Drill Down

So when the user taps the drill down icon, the following script is executed:

Collect(Stack, {Id: Screen.Id}); Navigate(Screen2, ScreenTransition.Fade, {Screen:{Id: Last(Stack).Id + 1}})

This script consists of 3 main things:

  • Collect(Stack, {Id: Screen.Id}); adds (i.e. collects in PowerApps terminology) the screen id that was passed from the initial screen or the screen that I navigated from. The collection stores the screen ID.
  • Navigate(Screen2, ScreenTransition.Fade, {Screen:{Id: Last(Stack).Id + 1}}) navigates to the same screen (screen2) with a fade
  • {Screen:{Id: Last(Stack).Id + 1}} adds a context variable called Screen with a single column called Id that contains the value of the last item in the Stack i.e. Last(Stack).Id plus one! This what makes the context variable so powerful.

Data Screen - Drill Up#

In PowerApps studio, the data screen drill up looks like this:

Data Screen Drill Up

So when the user taps the drill up icon, the following script is executed:

Navigate(Screen2, ScreenTransition.Fade, {Screen:{Id: Last(Stack).Id}}); Remove(Stack, Last( Stack))

This script consists of 3 main things:

  • Navigate(Screen2, ScreenTransition.Fade, {Screen:{Id: Last(Stack).Id}}); navigates to the same screen (screen2) with a fade
  • {Screen:{Id: Last(Stack).Id}} adds a context variable called Screen with a single column called Id that contains the value of the last item in the Stack i.e. Last(Stack).Id.
  • Remove(Stack, Last( Stack)) removes the last item from the Stack! Effectively we are doing a pop.

In order to prevent un-supported drill ups, the drill up icon has a visibility property that is controlled by the following script:

If (CountRows(Stack) > 0, true)

So as long as there are items in the Stack collection, the drill-up icon is visible.

I hope this is a helpful short post to show how powerful and useful PowerApps can be. I am hoping to be able to add more posts about PowerApps in future posts.

Open Azure VM Port

Khaled Hikmat

Khaled Hikmat

Software Engineer

For a project I was working on, I needed to create a Windows VS2015 VM for testing. It is quit easy to spawn a VM in Azure ...it only takes a couple of seconds to do it from the portal. The next task was to open up port 8080 on that machine as I needed to access that port for testing.

Windows Server 2012#

Since the VM is a Windows Server 2012, all I needed to do is to go the server's Server Manager => Local Server and access the Windows Firewall. At the firewall, I access the advanced setting to add a new inbound rule for protocol type TCP and local port is 8080:

Inbound Rule

Azure Endpoints#

The above step is not enough to expose port 8080! What we also need is to let the VM's Network Security Group about this new endpoint that we want to allow. To do that, you also need to locate the VM's Resource Group. The new ARM-based Azure VMs have several things in the resource group:

  • Virtual Machine
  • Network Interface
  • Network Security Group
  • Public IP Address
  • Virtual Network
  • Storage Account

We access the Network Security Group:

Network Security Group

and add the Inbound Rule for port 8080:

NSG Inbound Rule

This will allow us to access port 8080 in the VM.

Please note that the instructions above are for the Azure ARM-based VMs...not the classic ones.

ASP.NET API Versioning

Khaled Hikmat

Khaled Hikmat

Software Engineer

A while back, I created an ASP.NET Web API 2 to be a back-end for a mobile app. I used basic authentication to make it easier for mobile apps to consume the Web API. I now decided to provide better security so I wanted to move to a token-based authentication. The problem is that if I change the Web API to a token-based, all existing mobile apps in the field will not function as they will be refused Web API connection.

The answer is to use Web API versioning! This way existing mobile users can continue to use the current version that uses basic authentication until the app is upgraded. The updated app version will switch over to use the new version which is based on token authentication. This post will discuss how I accomplished this versioning scheme.

Controller Selector#

The first step is to configure the ASP.NET framework to use a custom controller selector. In the WebApiConfig Register method, we tell the framework to use the custom selector:

config.Services.Replace(typeof(IHttpControllerSelector), new VersionAwareControllerSelector(config));

The selector is coded as follows:

public class VersionAwareControllerSelector : DefaultHttpControllerSelector
{
private const string VERSION_HEADER_NAME = "some-value";
private const string VERSION_QUERY_NAME = "v";
private HttpConfiguration _configuration;
public VersionAwareControllerSelector(HttpConfiguration configuration)
: base(configuration)
{
_configuration = configuration;
}
// This works for Web API 2 and Attributed Routing
// FROM: http://stackoverflow.com/questions/19835015/versioning-asp-net-web-api-2-with-media-types/19882371#19882371
// BLOG: http://webstackoflove.com/asp-net-web-api-versioning-with-media-types/
public override HttpControllerDescriptor SelectController(HttpRequestMessage request)
{
HttpControllerDescriptor controllerDescriptor = null;
// Get a list of all controllers provided by the default selector
IDictionary<string, HttpControllerDescriptor> controllers = GetControllerMapping();
IHttpRouteData routeData = request.GetRouteData();
if (routeData == null)
{
throw new HttpResponseException(HttpStatusCode.NotFound);
}
// Pick up the API Version from the header...but we could also do query string
var apiVersion = GetVersionFromHeader(request);
// Check if this route is actually an attribute route
IEnumerable<IHttpRouteData> attributeSubRoutes = routeData.GetSubRoutes();
if (attributeSubRoutes == null)
{
string controllerName = GetRouteVariable<string>(routeData, "controller");
if (controllerName == null)
{
throw new HttpResponseException(HttpStatusCode.NotFound);
}
string newControllerName = String.Concat(controllerName, apiVersion);
if (controllers.TryGetValue(newControllerName, out controllerDescriptor))
{
return controllerDescriptor;
}
else
{
throw new HttpResponseException(HttpStatusCode.NotFound);
}
}
else
{
string newControllerNameSuffix = String.Concat("V", apiVersion); ;
IEnumerable<IHttpRouteData> filteredSubRoutes = attributeSubRoutes.Where(attrRouteData =>
{
HttpControllerDescriptor currentDescriptor = GetControllerDescriptor(attrRouteData);
bool match = currentDescriptor.ControllerName.EndsWith(newControllerNameSuffix);
if (match && (controllerDescriptor == null))
{
controllerDescriptor = currentDescriptor;
}
return match;
});
routeData.Values["MS_SubRoutes"] = filteredSubRoutes.ToArray();
}
return controllerDescriptor;
}
private HttpControllerDescriptor GetControllerDescriptor(IHttpRouteData routeData)
{
return ((HttpActionDescriptor[])routeData.Route.DataTokens["actions"]).First().ControllerDescriptor;
}
// Get a value from the route data, if present.
private static T GetRouteVariable<T>(IHttpRouteData routeData, string name)
{
object result = null;
if (routeData.Values.TryGetValue(name, out result))
{
return (T)result;
}
return default(T);
}
private string GetVersionFromHeader(HttpRequestMessage request)
{
if (request.Headers.Contains(VERSION_HEADER_NAME))
{
var header = request.Headers.GetValues(VERSION_HEADER_NAME).FirstOrDefault();
if (header != null)
{
return header;
}
}
return "1";
}
private string GetVersionFromQueryString(HttpRequestMessage request)
{
var query = HttpUtility.ParseQueryString(request.RequestUri.Query);
var version = query[VERSION_QUERY_NAME];
if (version != null)
{
return version;
}
return "1";
}
}

As demonstarted above, I chose to send the version information as a header value! There are other options ...one of them is to pass it in the query string such as http://example.com/api/stats?v=2.

Controller Versions#

The above allows us to have versioned controllers with the following naming convention:

Controller Versions

The framework will pick version 1 (i.e. StatsV1Controller) by default unless the request's header contains a version header value. If the value is 2, then StatsV2Controller will be picked.

The V1 controller is defined this way:

[BasicAuthorize()]
public class StatsV1Controller : ApiController
{
[Route("api/stats", Name = "Stats")]
public virtual IHttpActionResult GetStats()
{
....
}
}

while the V2 controller is defined this way:

[TokenAuthorize()]
public class StatsV2Controller : StatsV1Controller
{
[Route("api/stats", Name = "StatsV2")]
public override IHttpActionResult GetStats()
{
return base.GetStats();
}
}
  • The V1 controller uses basic authorization (as decorated by the attribute on top of the controller class) and V2 uses token authentication as decorated.
  • The V2 controller inherits from the V1 controller so there is no need to re-implement the methods.
  • However, there is a need to supply a different route name for the V2 controller otherwise we will get a conflict. This is done by giving the V2 controller a route name that ends with V1 i.e. StatsV2. This is a little unfortunate but this is how it is. Had it not been for this, we could have simply inherited from V1 without having to repeat any method.
  • Since V2 inherits from V1, I noticed that both authentication filters run per request. This means that when V2 is picked, the token authorize filer will run first and then followed by the basic authorize filter. This can cause problems. So what I did is at the end of the token authorize filter, I inject a value in the request properties. In the basic authorize filter, I check if the value exists, and, it if it does, I abort the basic authorize filter since the token filter has already run.

Request Property#

Here is one way to inject a property in the request in the token filter:

actionContext.Request.Properties["some-key"] = "some-value";

Then in the basic filter, I check for this property existence. If it does exist, it means the request is authenticated and there is no need to perform basic authentication.

string accessToken;
if (!actionContext.Request.Properties.TryGetValue("sone-key", out accessToken))
{
}

I hope someone finds this post helpful. Having multiple versions has provided me with a way to transition my mobile app users from basic authentication to token authentication without breaking the existing mobile apps.