Tuesday, August 22, 2017

Using the Work Item Multivalue control in TFS 2017 (On premise)

Today one of my colleagues on our ALM team asked for help. For a customer we upgraded their TFS environment to TFS 2017. One of the changes we had to make along the way was replacing the existing MultiValue control with a newer version (https://github.com/Microsoft/vsts-extension-multivalue-control).


We followed the steps as described here (https://github.com/Microsoft/vsts-extension-multivalue-control/blob/master/xmldetails.md), but the control didn’t appear.

Here is the template XML that we were using (I removed some content):

Although we couldn’t spot any difference from the steps mentioned above, it didn’t work.

When I had a look, I noticed that the extension is only registered in the WebLayout part of the form.

Remark: TFS work item forms have always had two layouts (Layout and WebLayout): the first applies when opening a work item in Visual Studio or Test Manager, the WebLayout applies when opening a work item in the browser.
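For reference, the registration in the WebLayout section looks roughly like this (a sketch; the field reference and contribution ids here are illustrative, check the GitHub instructions above for the exact extension id):

```xml
<WebLayout>
  <Extensions>
    <Extension Id="ms-devlabs.vsts-extensions-multivalue-control" />
  </Extensions>
  <Page Label="Details">
    <Section>
      <Group Label="Classification">
        <ControlContribution Label="Languages"
            Id="ms-devlabs.vsts-extensions-multivalue-control.multivalue-form-control">
          <Inputs>
            <Input Key="FieldName" Value="MyCompany.Languages" />
          </Inputs>
        </ControlContribution>
      </Group>
    </Section>
  </Page>
</WebLayout>
```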

We first tried to add the extension to the layout part as well, but this didn’t work and resulted in some errors.

Then I had an idea: TFS 2017 offers a new work item experience. As mentioned here (https://www.visualstudio.com/en-us/docs/work/reference/weblayout-xml-elements), the WebLayout element is only applied in the new work item experience! This explains why our changes didn’t show up.

The new work item experience is an opt-in experience that has to be enabled by a collection administrator. After enabling it, our MultiValue control appeared on the form…

Remark: For TFS 2017, the new form is automatically available when you add team projects to a new collection.

Friday, August 18, 2017

Tuples in C#7

One of the nice features in C# 7 is the support for tuples as a lightweight data structure through the System.ValueTuple NuGet package. It simplifies code where you previously had to fall back on out parameters or arbitrary objects.

Let’s have a look at a simple example:
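A minimal sketch of what such a method could look like (the method and value names are illustrative):

```csharp
using System;

public class Program
{
    // Return multiple values as a tuple, without out parameters or a helper class
    public static (string, int) GetCity()
    {
        return ("Brussels", 1000);
    }

    public static void Main()
    {
        var city = GetCity();
        // Without element names, the values are only reachable as Item1, Item2, ...
        Console.WriteLine($"{city.Item1} has postal code {city.Item2}");
    }
}
```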


This sample shows how easy it is to return tuples from your methods. The only problem I have with this implementation is that when you access the output of this method call, you still see the Item1 and Item2 properties, which are not really meaningful:


You don’t have to stop there; we can update the method signature with some extra metadata:
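The same method, now with element names in the signature (names again illustrative):

```csharp
using System;

public class Program
{
    // The tuple elements now carry meaningful names in the signature
    public static (string name, int postalCode) GetCity()
    {
        return ("Brussels", 1000);
    }

    public static void Main()
    {
        var city = GetCity();
        // Access the elements by name instead of Item1/Item2
        Console.WriteLine($"{city.name} has postal code {city.postalCode}");
    }
}
```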


If we now try to access the tuple values again, we see the following instead:


NOTE: The names associated with tuple elements are not runtime metadata: there is no property or field with that name on the actual ValueTuple instance. The properties are still Item1, Item2, etc.; all element names exist at design time and compile time only. If we decompile the code with JustDecompile, we see the following:
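The decompiled output looks roughly like this (a sketch; the exact shape depends on the compiler and decompiler versions):

```
[return: TupleElementNames(new string[] { "name", "postalCode" })]
public static ValueTuple<string, int> GetCity()
{
    return new ValueTuple<string, int>("Brussels", 1000);
}
```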


Notice the attribute generated on top of the code. This TupleElementNames attribute is picked up by Visual Studio and the compiler and provides the necessary IntelliSense. This also guarantees that it still works when you reference this DLL from another project…

Thursday, August 17, 2017

Are your if statements not hidden sagas?

This video by Udi Dahan made me rethink all if statements in my code:

In the video Udi uses the following deceptively simple-looking requirement as an example:

“Disallow the user from buying products that are no longer available.”

Doh! This must be the easiest requirement I’ve ever seen. Let’s implement it…

OK, when the user goes to the products page and we show a list of products, let’s add an extra check that only shows the items that are not deleted from our product catalog:

if (item.State == States.Deleted)
    // Filter item from list

OK, perfect. Problem solved! But wait: what if the user leaves the page open for a while and in the meantime the product gets removed from the catalog? What happens if the user then tries to add this product to his shopping cart? OK, let’s add an extra check when the user tries to add an item to his cart:

if (item.State == States.Deleted)
    // Show warning to user that product is no longer available

OK, perfect. Problem solved! But wait: what if the user adds some products to his cart, leaves the cart open for a while and in the meantime the product gets removed from the catalog? What happens if the user tries to check out his order? OK, let’s add an extra check when the user tries to check out his cart:

if (item.State == States.Deleted)
    // Show warning to user that product is no longer available

OK, perfect. Problem solved! But wait: what if the user spends a few minutes searching for his credit card during the checkout process and in the meantime the product gets removed from the catalog? What happens when the user pays for his order?

Wait! Stop! Let’s take a step back. It becomes obvious that there is always a moment where the if check is just too late.

The problem is that we end up with a business-oriented eventual consistency problem that is hard to solve. It turns out that these kinds of if statements are better removed and replaced by long-running processes that can impact the domain in multiple places.

To return to our example: the moment we set the IsDeleted flag to true for a product in our database, we start a long-running process that checks all active shopping carts, removes the deleted item from those carts and displays a message when the user returns to the website and opens his shopping cart:
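A minimal in-memory sketch of such a process (all class, property and message names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Cart
{
    public List<Guid> ProductIds { get; } = new List<Guid>();
    public List<string> Messages { get; } = new List<string>();
}

// Reacts to a product being deleted by cleaning up all active shopping carts
// and queueing a message for the user's next visit.
public class ProductDeletedHandler
{
    private readonly IEnumerable<Cart> _activeCarts;

    public ProductDeletedHandler(IEnumerable<Cart> activeCarts)
    {
        _activeCarts = activeCarts;
    }

    public void Handle(Guid deletedProductId)
    {
        foreach (var cart in _activeCarts.Where(c => c.ProductIds.Contains(deletedProductId)))
        {
            // Remove the product and tell the user why when he opens his cart again
            cart.ProductIds.RemoveAll(id => id == deletedProductId);
            cart.Messages.Add("A product in your cart is no longer available.");
        }
    }
}
```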


Wednesday, August 16, 2017

Chrome HTTPS error on localhost: NET::ERR_CERT_COMMON_NAME_INVALID

If you are a developer using a self-signed certificate for your HTTPS server, you may recently have seen the following error in Chrome (or a non-Dutch equivalent):


Starting from Chrome 58, an extra security check got introduced that requires certificates to specify the hostname(s) to which they apply in the SubjectAltName field. When this change was first introduced, the error message was not very insightful, but today, if you take a look at the Advanced section of the error message or the Security panel in the Developer Tools, you’ll get some more details pointing to the SubjectAltName issue:



Create a new self-signed certificate

To fix it, we have to create a new self-signed certificate. We cannot use the good old makecert.exe utility, as it cannot set the SubjectAltName field in certificates. Instead, we’ll use the New-SelfSignedCertificate cmdlet in PowerShell:

New-SelfSignedCertificate `
    -Subject localhost `
    -DnsName localhost `
    -KeyAlgorithm RSA `
    -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -FriendlyName "Localhost certificate"
Now you have a new certificate with a correct Subject Alternative Name in your Personal certificate store.

The next step is to trust this certificate by moving it to the Trusted Root Certification Authorities. You can either do this by hand using the certmgr tool in Windows or script it with PowerShell as well:
# set certificate password here
$pfxPassword = ConvertTo-SecureString -String "YourSecurePassword" -Force -AsPlainText
$pfxFilePath = "c:\tmp\localhost.pfx"
$cerFilePath = "c:\tmp\localhost.cer"

# path to the certificate created above (replace <thumbprint> with its actual thumbprint)
$certificatePath = "Cert:\CurrentUser\My\<thumbprint>"

# create pfx certificate
Export-PfxCertificate -Cert $certificatePath -FilePath $pfxFilePath -Password $pfxPassword
Export-Certificate -Cert $certificatePath -FilePath $cerFilePath

# import the pfx certificate
Import-PfxCertificate -FilePath $pfxFilePath -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword -Exportable

# trust the certificate by importing the cer certificate into your trusted root
Import-Certificate -FilePath $cerFilePath -CertStoreLocation Cert:\CurrentUser\Root

Import it in IIS

OK, almost there. The last step to get it working in IIS is to import the pfx into IIS:

  • Open IIS using inetmgr.
  • Go to Server Certificates.


  • Click on the Import… action on the right. The Import certificate screen is shown.


  • Select the pfx, specify the password and click OK.
  • Now that the certificate is available in IIS, you can change the bindings to use it. Click on the Default Web Site (or any other site) on the left.
  • Click on the Bindings… action on the right. The Site Bindings screen is shown.


  • Click on the https item in the list and choose Edit… . The Edit Site Binding screen is shown.


  • Select the newly created SSL certificate from the list and click OK.

Monday, August 14, 2017

Using F# in Visual Studio Code

If you are interested in F# and want to start using it inside Visual Studio Code, I have a great tip for you:

Have a look at the F# with Visual Studio Code gitbook. It contains a short guide that explains step by step how to get your Visual Studio Code environment ready for your first lines of pure F# magic.


Happy coding!

Thursday, July 20, 2017

Caching your static files in ASP.NET Core

In ASP.NET Core, static files (images, CSS, …) are typically served using the static file middleware. The static file middleware can be configured by adding a dependency on the Microsoft.AspNetCore.StaticFiles package to your project and then calling the UseStaticFiles extension method from Startup.Configure:
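A minimal sketch of that configuration (assuming a standard Startup class):

```csharp
public void Configure(IApplicationBuilder app)
{
    // Serve files from wwwroot with the default settings (no caching headers)
    app.UseStaticFiles();
}
```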

Unfortunately this code will not do its job in the most efficient way. By default no caching is applied, meaning that the browser will request these files again and again, increasing the load on your server.

Luckily it’s not that hard to change the middleware configuration to introduce caching. In this example we set the caching to one day:
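A sketch of the adjusted configuration, using the StaticFileOptions.OnPrepareResponse hook to emit a Cache-Control header (one day = 86400 seconds):

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseStaticFiles(new StaticFileOptions
    {
        OnPrepareResponse = ctx =>
        {
            // Let the browser cache static files for one day
            ctx.Context.Response.Headers["Cache-Control"] = "public,max-age=86400";
        }
    });
}
```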

Remark: An alternative approach would be to let your proxy server (IIS, …) handle the static file requests, as discussed here.

Wednesday, July 19, 2017

Guaranteeing “exactly once” semantics by using idempotency keys

A few weeks ago I had a discussion with a colleague about the importance of idempotency.

From http://www.restapitutorial.com/lessons/idempotency.html:

From a RESTful service standpoint, for an operation (or service call) to be idempotent, clients can make that same call repeatedly while producing the same result. In other words, making multiple identical requests has the same effect as making a single request. Note that while idempotent operations produce the same result on the server (no side effects), the response itself may not be the same (e.g. a resource's state may change between requests).

A good example of where you can get into trouble is when your API withdraws money from a customer account. If the user accidentally calls your API twice, the customer is double-charged, which I don’t think they’ll like very much…

A solution to this problem is the use of idempotency keys. The idea is that the client generates a unique key that is sent to the server along with the normal payload. The server captures the key and stores it together with the executed action. If a second request arrives with the same key, the server can recognize the key and take the necessary actions.

What situations can happen?

  • Situation 1 – The request didn’t make it to the server; in this case, when the second request arrives, the server will not know the key and will just process the request normally.
  • Situation 2 – The request made it to the server but the operation failed somewhere in between; in this case, when the second request arrives, the server should pick up the work where it failed previously. This behavior can of course vary from situation to situation.
  • Situation 3 – The request made it to the server and the operation succeeded, but the result didn’t reach the client; in this case, when the second request arrives, the server recognizes the key and returns the (cached) result of the succeeded operation.
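A minimal in-memory sketch of the server side (class, method and key names are illustrative): the result of every processed key is stored, so a retried request is answered from the stored result instead of charging twice.

```csharp
using System;
using System.Collections.Concurrent;

public class PaymentService
{
    // Maps an idempotency key to the result of the operation it triggered
    private readonly ConcurrentDictionary<string, string> _processed =
        new ConcurrentDictionary<string, string>();

    public int ExecutedCount { get; private set; }

    public string Withdraw(string idempotencyKey, decimal amount)
    {
        // Known key: return the cached result; unknown key: execute and store
        return _processed.GetOrAdd(idempotencyKey, _ =>
        {
            ExecutedCount++; // the actual side effect happens only once
            return $"Withdrew {amount}";
        });
    }
}
```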

Note: Idempotency keys become important when you are running systems that are not ACID compliant. If you are running an ACID transactional system, you can just re-execute the same operation, as the previous operation should have been rolled back (or at least that’s the theory).

Tuesday, July 18, 2017

Check compatibility between .NET versions

Compatibility is a very important goal of the .NET team at Microsoft. The team has always taken great care to guarantee that newer versions will not break existing functionality. However, sometimes breaking changes are unavoidable to address security issues, fix bugs or improve performance.

To understand the consequences, you have to distinguish between runtime changes and retargeting changes:

  • Runtime changes: changes that occur when a new runtime is placed on a machine and the same binaries are run, but expose different behavior.
  • Retargeting changes: changes that occur when an assembly that was targeting .NET Framework version x is now set to target version y.

To help you identify possible compatibility issues, Microsoft created the .NET Compatibility Diagnostics, a set of Roslyn-based analyzers.

Here is how to use them:

  • First you have to choose whether you want to check for runtime changes or for retargeting changes.
  • Next, select the ‘From .NET Framework version’ and the ‘To .NET Framework version’:


  • After making your selection, you’ll get a list of all changes classified by their impact:


Monday, July 17, 2017

The state of Developer Ecosystem in 2017

JetBrains ran a survey among 5,000 developers to identify the current state of the developer ecosystem.


Here are some interesting facts that came out of it:

  • Java is the most popular primary programming language
  • JavaScript is the most used programming language
  • Go is the most promising programming language
  • Only 55% of the participants write unit tests
  • 50% are full stack developers
  • Of the C# developers 66% are using ASP.NET MVC
  • Of the C# developers 92% use Visual Studio and only 3% use VS Code

Go explore all the results yourself at https://www.jetbrains.com/research/devecosystem-2017/

Friday, July 14, 2017

Improve your HTML and CSS editing skills in Visual Studio by using Emmet

Last week I discovered a great feature in VS Code that made my life so much easier: the support for Emmet abbreviation expansion.


If you don’t know Emmet, I suggest having a look at the demo at https://docs.emmet.io/. Convinced? Let me show you how this works in VS Code:

  • Open an HTML file inside VS Code and type an Emmet abbreviation:



  • Hit ‘Tab’ and let the magic happen:
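For example, typing the abbreviation ul>li*3>a and hitting Tab expands it into:

```html
<ul>
    <li><a href=""></a></li>
    <li><a href=""></a></li>
    <li><a href=""></a></li>
</ul>
```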


Thursday, July 13, 2017

Type check your JavaScript files

Starting from TypeScript 2.3, you can use the type checking and error reporting features of TypeScript not only in your .ts files but also in your .js files.

This new feature is an opt-in option that can be enabled by adding the --checkJs compiler flag, which enables type checking for all .js files by default. Open your tsconfig.json file and add the line "checkJs": true:
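A minimal tsconfig.json sketch (note that allowJs is also required so that the compiler picks up .js files at all):

```json
{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": true
  }
}
```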


If you now create a JavaScript file and introduce a type error, you’ll get a nice error message from the TypeScript compiler:


By default, setting this flag enables the feature for all your .js files. It is still possible to selectively include or exclude specific files or lines by using any of the following comments:

  • Use // @ts-check to enable type checking for a single file.
  • Use // @ts-nocheck to disable type checking for a single file.
  • Use // @ts-ignore to disable type checking for a single line.
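For example, with // @ts-check at the top of a plain JavaScript file, the TypeScript language service flags a type error at design time (variable name illustrative):

```javascript
// @ts-check
let answer = 42;

// The next line is flagged in the editor:
// error TS2322: Type '"forty-two"' is not assignable to type 'number'.
answer = "forty-two";
```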

Wednesday, July 12, 2017

SwashBuckle: Add support for operation overloading

By default, Swashbuckle doesn’t handle overloaded controller methods very well. A solution is to use an OperationFilter that changes the operation name in the generated Swagger document:
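As a sketch (assuming the Swashbuckle.AspNetCore flavor of the library; the filter name and naming scheme are illustrative), such a filter could look like this:

```csharp
// Make the OperationId unique for overloaded actions by appending
// the parameter names to it.
public class OperationNameFilter : IOperationFilter
{
    public void Apply(Operation operation, OperationFilterContext context)
    {
        var suffix = string.Concat(
            context.ApiDescription.ParameterDescriptions.Select(p => "By" + p.Name));
        operation.OperationId += suffix;
    }
}
```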

The next step is to add this OperationFilter to your Swagger configuration:
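The registration could then look roughly like this (again a sketch assuming Swashbuckle.AspNetCore; the filter name is illustrative):

```csharp
services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" });
    // Register the operation filter from above
    c.OperationFilter<OperationNameFilter>();
});
```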

Tuesday, July 11, 2017

WPF Debounce trick

Sometimes you have to appreciate the power and simplicity that WPF (and by extension XAML) has to offer. While you would otherwise have to do a lot of voodoo magic with Reactive Extensions to support debouncing, in WPF it can be reduced to one binding property: the Delay binding property.


You can use the WPF Delay binding property to debounce binding events. For example, the following code debounces the key input until nothing changes for 0.5 seconds:

Text="{Binding UserName, UpdateSourceTrigger=PropertyChanged, Delay=500}"


Monday, July 10, 2017

Installing a Windows Service using Topshelf

For a Windows Service we are building, we are using Topshelf:

Topshelf is a framework for hosting services written using the .NET framework. The creation of services is simplified, allowing developers to create a simple console application that can be installed as a service using Topshelf. The reason for this is simple: It is far easier to debug a console application than a service. And once the application is tested and ready for production, Topshelf makes it easy to install the application as a service.
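For reference, a minimal Topshelf host looks roughly like this (the service and class names are illustrative):

```csharp
// A plain console application that Topshelf can run directly (F5 debugging)
// or install as a Windows service via 'SampleService.exe install'.
public static class Program
{
    public static void Main()
    {
        HostFactory.Run(x =>
        {
            x.Service<SampleService>(s =>
            {
                s.ConstructUsing(name => new SampleService());
                s.WhenStarted(svc => svc.Start());
                s.WhenStopped(svc => svc.Stop());
            });
            x.RunAsLocalSystem();
            x.SetServiceName("SampleService");
            x.SetDisplayName("Sample Service");
        });
    }
}
```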


Great tool, and a lot easier during debugging and development! However, when I tried to install the final exe as a Windows service using installutil.exe, it failed with the following error message:

"No public installers with the RunInstallerAttribute.Yes attribute could be found in the SampleService.exe assembly."

It turns out that you don’t need the installutil.exe tool when you are using Topshelf. Instead you should invoke your executable with the ‘install’ option:

SampleService.exe install

More information:


Friday, July 7, 2017

SQL Server Profiler is deprecated

SQL Server Profiler has been around for a very long time. It is very useful if you are trying to see in real time which SQL queries are being executed against your database. However, with SQL Server 2016 Microsoft announced that the Profiler will be deprecated in future versions.

Does this mean that there are no profiling options left in SQL Server? Luckily no: the SQL Profiler is replaced by Extended Events (XE). Extended Events works via Event Tracing for Windows (ETW), which consumes fewer resources and allows much more flexibility. It can also monitor more events than the Profiler.

Where can I find these Extended Events?

  • Open SQL Server Management Studio (for SQL Server 2014 or higher) and connect to one of your database instances
  • Go to Management>Extended Events:


  • Expand the Extended Events section, right-click on Sessions and choose New Session Wizard


  • The Session wizard will help us to select the events we want to profile. Click on Next >


  • Specify a session name and click Next >


  • Now you can choose an existing template. Leave the ‘Do not use a template’ radio button checked and click on Next >


  • The next screen is where the real work is done. Here you specify the events you want to monitor. Luckily you can search through them. Select some events you want to track and click Next >


  • Now you can specify any global fields(e.g. username, database name,…) you want to capture together with the events. Click Next > to continue


  • Next step is to apply some optional filters to limit the amount of data that is returned. Click Next > to continue


  • The session storage allows you to specify how to collect the data and store it.  Click Next >.


  • On the Summary you get an overview of the selected options. Click Finish to generate the Session.
    • Note: There is an option to generate a script from this configuration. So you can re-use it later.


  • After clicking Finish you end up on a Success page. Here you can start the session immediately and watch the data when it is captured.
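If you use the script-generation option, the generated T-SQL for a simple session might look roughly like this (session, event, action and file names are illustrative):

```sql
CREATE EVENT SESSION [MyProfilerSession] ON SERVER
ADD EVENT sqlserver.sql_batch_completed(
    ACTION (sqlserver.username, sqlserver.database_name))
ADD TARGET package0.event_file(SET filename = N'MyProfilerSession.xel');
GO

ALTER EVENT SESSION [MyProfilerSession] ON SERVER STATE = START;
```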


Thursday, July 6, 2017


A built-in Wiki in VSTS

A long-requested feature finally arrived in VSTS: the introduction of a built-in Wiki. Before, you had to fall back on the Wiki extension in the marketplace. It’s still in preview but offers enough features to start playing with it. Open up your VSTS account and you’ll find the following new menu item:


When you click on it, you are welcomed by a single ‘Create Wiki’ button. Shall we?


After clicking on Create Wiki, an editor (with Markdown support) opens that allows you to create your first Wiki page:


Let’s hit Save, specify a comment, and there is our first Wiki page on VSTS:


Creating a link to another Wiki page is as simple as using a Markdown link; don’t forget to replace spaces with a ‘-’ in the link URL:
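For example, a link to a page called ‘Another page’ would look like this (page name illustrative):

```
[Another page](/Another-page)
```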


What features are coming next?

Our team is working hard to get you the next set of Wiki features, such as…

  • Wiki search across projects
  • Tags
  • Wiki integration with work items
  • Rich editing experience that supports:
    • HTML tags
    • Resizing images
    • Mathematical formulas

More information in the official Wiki announcement and the documentation.

Wednesday, July 5, 2017


HTTP vs HTTPS: which one is faster?

You sometimes hear people say that they’ll keep using HTTP because it’s faster than HTTPS. This doesn’t have to be true (to spoil the surprise: think HTTP/2). Let’s run the test on https://www.httpvshttps.com/:

The first test uses HTTPS, loads 360 unique, non-cached images and completes in 0.716 seconds.


To prove there is no magic, here is the network traffic overview:


Let’s do the same test with HTTP instead. 7.770 seconds! Ouch, that hurts:


And an overview of the related traffic:


So where is the magic? As I already revealed at the beginning of this post, the magic is in the use of HTTP/2. HTTP/2 is the successor of HTTP/1.1 with a focus on speed. Browsers only support it over secured (TLS) connections, but as the tests above prove, it is a lot faster than plain HTTP. So no excuses anymore not to start using HTTPS (and HTTP/2)!