Wrapping Up and Moving On

I've been laid off and the reasons don't matter. It's just business. My last official day is tomorrow. My soon-to-be former employer has a product that enterprises need and good people to back it up. Happily the future is bright and there are opportunities in the world of technology for the taking.

I'm excited for the future and know that soon I'll be doing something new, interesting and challenging. The friends and colleagues I leave behind will do well in the technology economy no matter where they go.

It's a prosperous time to live if you have valued technical skills. I'm grateful for all of the experience I've had that has brought me to this point. I look forward to utilizing it to help other organizations solve their technology problems and advance their organizational goals.

I believe that whatever changes the future holds, I will continue to blog from time to time and work on my personal OSS projects. Stay tuned.

How System.Net.Http 4.3.0 Ruined Everyone's Day

I have not updated the MessageWire library for about a year now. But it still works just fine. Mostly. New Year's resolution #1: Update MessageWire.

Dependency Hell

I'm cleverly using MessageWire to create a customized shared session service for a hybrid set of web applications: old ASP.NET Web Forms sites running on .NET Framework 4.6.1, and newer, upcoming sites built on ASP.NET Core 2.0. So first it had to work with the older Web Forms site. And it did work. Very well. I'll share some of that fun code on another day in another post.

What did not go well: my Web Forms site uses a library that uses HttpClient from System.Net.Http to call an internal web service, and that call suddenly failed.

Here's the HTTP request before pulling MessageWire into the project.

POST http://localhost:53739/api/ProfileDetail HTTP/1.1
Accept: application/json
Content-Type: application/json; charset=utf-8
Host: localhost:53739
Content-Length: 592

{"Id": "…removed actual data…"}

And here's the same request after pulling MessageWire into the project with no other code changes.

POST http://localhost:53739/api/ProfileDetail HTTP/1.1
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
Accept: application/json
Accept-Encoding: gzip, deflate
Host: localhost:53739
 
250
{"Id": "…removed actual data…"}
0

See the problem? The new client dropped the Content-Length header and sent the body with Transfer-Encoding: chunked instead. Yikes, my web service was unable to deserialize the chunked request created with HttpClient v4.3.0, so of course, that led to a huge failure.

Everyone's Worst Nightmare

What's worst about this is that I did not know about the side effect, and the code made it into production, causing a nightmare at a critical moment of the day and the month. Of course we rolled it back to stop the bleeding, but the damage was done and folks were not happy with me in the least. It took me the rest of the day just to find the symptom that pointed to the cause of the failure.

The fact that it took me so long felt like another failure. Even after finally discovering that the HTTP request was borked, I was clueless as to the exact cause. I even engaged several team members to pore over the code I had committed to find the problem. They could not find it.

So today, with less pressure on, I thought about it methodically and traced back through the code that generates the HTTP request. Ultimately it came down to HttpClient in System.Net.Http. This led me to discover that pulling MessageWire into my ASP.NET Web Forms application on .NET Framework 4.6.1 also pulled in the flawed System.Net.Http 4.3.0 package.

Light bulb moment! Check for updates. Sure enough, there was a System.Net.Http 4.3.3 available. And boom goes the dynamite! The request was back to looking normal and working just fine with the service being called.
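
If you're on packages.config, taking an explicit dependency on the patched package is enough to override the transitive 4.3.0. A minimal sketch (the targetFramework value assumes my net461 project; check NuGet for the latest patch version):

<!-- packages.config: reference the patched package explicitly -->
<package id="System.Net.Http" version="4.3.3" targetFramework="net461" />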

MS OSS Buyer Beware!

When you pull in any new NuGet package that brings dependencies with it, be sure to check for updates to those dependencies. Shocking as it may seem, even Microsoft's packages can contain nasty little bugs. And test your assumptions.
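
One cheap way to test those assumptions is an integration test that logs the headers of every outgoing request, so a surprise Transfer-Encoding: chunked shows up in test output rather than in production. A minimal sketch using a standard System.Net.Http DelegatingHandler (Console output here; swap in whatever your test framework provides):

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Logs method, URI, and all request headers before each send.
public class HeaderLoggingHandler : DelegatingHandler
{
   public HeaderLoggingHandler(HttpMessageHandler innerHandler)
      : base(innerHandler) { }

   protected override async Task<HttpResponseMessage> SendAsync(
      HttpRequestMessage request, CancellationToken cancellationToken)
   {
      Console.WriteLine($"{request.Method} {request.RequestUri}");
      Console.WriteLine(request.Headers);
      if (request.Content != null)
      {
         // content headers carry Content-Length or Transfer-Encoding
         Console.WriteLine(request.Content.Headers);
      }
      return await base.SendAsync(request, cancellationToken);
   }
}

// usage:
// var client = new HttpClient(new HeaderLoggingHandler(new HttpClientHandler()));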

Allow Your Models to Cross Only One Boundary

Don't let your models cross more than one boundary. Sure, there may be exceptions, but I haven't found one that justifies giving up the flexibility you gain when requirements change.

Some years ago I worked with a thoughtful and intelligent programmer named Nick. He convinced me that models, or data transfer objects (DTOs), should not cross more than one service or domain boundary. I was skeptical at first. It seemed like a lot of duplicated code.

He talked me into it. And the next time the business changed a requirement, which of course happens often in most software development shops, I became absolutely convinced. We added a field to one model, changed the code that produced that model and the specific code that consumed it, and broke none of the other endpoints that used the underlying models and entities or the services that produced them.

Here's what it looks like in one implementation:

[Diagram: distinct models at each layer, each crossing exactly one boundary]
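
In C# terms, the idea is that each layer owns its own model type and the transformation at its boundary. A minimal sketch with hypothetical types:

using System;

// data layer: the entity never leaves the repository
public class UserEntity
{
   public int Id { get; set; }
   public string Name { get; set; }
   public DateTime CreatedUtc { get; set; }
}

// service boundary: the DTO exposes only what the contract needs
public class UserDto
{
   public int Id { get; set; }
   public string Name { get; set; }
}

// client boundary: the view model is shaped for the UI
public class UserViewModel
{
   public string DisplayName { get; set; }
}

// each boundary owns its own transformation
public static class UserMaps
{
   public static UserDto ToDto(UserEntity e) =>
      new UserDto { Id = e.Id, Name = e.Name };

   public static UserViewModel ToViewModel(UserDto d) =>
      new UserViewModel { DisplayName = d.Name };
}

When the business adds a field, you touch the entity, the one transformation that exposes it, and the consumers that want it; everything else keeps compiling against an unchanged contract.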

Of course, your own architecture may involve more boundaries than shown here. You may have a message queue or services bus. I believe the principle still applies.

The greatest advantage is that you get an abstraction layer at every boundary, which lets you move swiftly when you need to support changing requirements. And I've never met a company or team of software developers that does not deal with regularly changing business requirements.

This is a very opinionated post, and if you balk at the notion at first, I urge you to at least give it a try. You will find that you can enforce separation of concerns. You will find that you can keep a dependency on a framework, such as Entity Framework, out of your application server code and out of your hard-wired client code; the latter needs only a reference to your ViewModels assembly. (This assumes you are working in .NET, but I believe the principles apply universally.)

You will find that by preventing your business logic from knowing anything about the implementation behind the interface provided by the Models assembly, you are forced to inject that dependency and are thus free to replace it with a different implementation, or even a mock for unit testing.

The list of benefits is long and I've surely missed a few important ones, but you get my drift. I hope.

TIP: Use AutoMapper or something like it to help you with the tedium when transformations are an exact 1:1 match.
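
Where the shapes do match 1:1, AutoMapper removes the hand-written mapping entirely. A minimal sketch reusing the hypothetical types from above:

using AutoMapper;

// configure once at startup
var config = new MapperConfiguration(cfg =>
{
   cfg.CreateMap<UserEntity, UserDto>(); // 1:1 property match, no custom code
});
var mapper = config.CreateMapper();

// then at the boundary
var dto = mapper.Map<UserDto>(entity);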

Microsoft REST API Guidelines

I'm a big believer in establishing standards for your REST APIs. I like this one from Microsoft, but there are others. What I like about this particular guideline is its structural organization and its brevity. The introductory paragraph provides the reasoning and justification for offering REST even when language-specific SDKs are made available.

Developers access most Microsoft Cloud Platform resources via RESTful HTTP interfaces. Although each service typically provides language-specific frameworks to wrap their APIs, all of their operations eventually boil down to REST operations over HTTP. Microsoft must support a wide range of clients and services and cannot rely on rich frameworks being available for every development environment. Thus a goal of these guidelines is to ensure Microsoft REST APIs can be easily and consistently consumed by any client with basic HTTP support.

One thing that would be nice to have is a standard for filtering, paging, and the like as query parameters on the URL. These get defined differently by every REST service architect out there. Or so it seems. A good jumping-off point for some research and thought.
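
For illustration, here is one common convention using OData-style operators (the endpoint and parameter values are hypothetical, and the spaces would be URL-encoded in practice):

GET /api/users?$filter=status eq 'active'&$orderby=name&$top=25&$skip=50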

And I want to thank my friend Pablo for sending me the link to this.

Bad Code Leads to Worse Code

It is 100% my fault. I wrote a small library for accessing files in Azure Storage. It has an Exists method. But rather than returning false when the path does not exist, it throws a 404 Not Found exception. This is bad code. It should just return false. And not having time to fix it, I wrote the following worse code.

try
{
   // the broken library throws instead of returning false, so reaching
   // the end of this try block means the file exists (result unused)
   var exists = await _blobIo.Exists(fileName);
}
catch (Exception e)
{
   // sniff the wrapped storage exception for a 404
   if (null != e.InnerException && e.InnerException.Message.Contains("404"))
   {
      // not found: fall back to the alternate path and try again
      fileName = $"other/{key}/{id}.{ext}";
      try
      {
         var exists = await _blobIo.Exists(fileName);
      }
      catch (Exception ie)
      {
         if (null != ie.InnerException && ie.InnerException.Message.Contains("404"))
         {
            // missing in both locations: surface a clean 404 to the caller
            throw new HttpResponseException(
               new HttpResponseMessage(HttpStatusCode.NotFound)
               {
                  Content = new JsonContent(new 
                  { 
                     Error = "Not Found", 
                     Message = "Content not found." 
                  })
               });
         }
      }
   }
}

I hang my head in shame. Lesson: fix the bad code. Don’t write worse code to work around the bad code.

Update
Once the fire was put out, I fixed the original library and now have this code. It's better. It could be improved, but it's not as ugly as before.

if (false == await _blobIo.Exists(fileName))
{
   fileName = $"other/{key}/{id}.{ext}";
   if (false == await _blobIo.Exists(fileName))
   {
      throw new HttpResponseException(
         new HttpResponseMessage(HttpStatusCode.NotFound)
         {
            Content = new JsonContent(new 
               { 
                  Error = "Not Found", 
                  Message = "Content not found."
               })
         });
   }
}
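
For reference, the library-side fix amounted to making Exists swallow the storage 404 and return false. A minimal sketch, assuming the classic Microsoft.WindowsAzure.Storage SDK (the actual library code differs):

using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

public class BlobIo
{
   private readonly CloudBlobContainer _container;

   public BlobIo(CloudBlobContainer container)
   {
      _container = container;
   }

   // Returns false for a missing blob instead of throwing;
   // CloudBlockBlob.ExistsAsync handles the 404 internally.
   public Task<bool> Exists(string fileName)
   {
      var blob = _container.GetBlockBlobReference(fileName);
      return blob.ExistsAsync();
   }
}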

Second lesson: Don't let stinky code lie around too long. Clean up your messes.

Context Switching and Task<Result>.ConfigureAwait Method

While heavily involved in a few ASP.NET Web API 2 projects over the past year, I've learned from experience that context switching, the marshaling between threads that occurs when using the async and await constructs in C#, can be undesirable.

The context switch is undeniably needed in a UI application, where you're offloading work from the UI thread and the continuation must be marshaled back to it; that is why "continueOnCapturedContext" defaults to "true." To avoid it elsewhere, call the ConfigureAwait method on the Task<Result> class with "continueOnCapturedContext" set to false. Like this:

var user = await repo.GetUser(id).ConfigureAwait(false);

There is some debate as to whether this helps in an ASP.NET application, given that the request is handled on a thread pool thread which is returned to the pool while the asynchronous task is running.

The issue I've noticed is that in some cases setting "continueOnCapturedContext" to false eliminates the marshaling of the continuation of your code back to the original context and the use of another thread pool thread. While I have seen no noticeable performance advantage, I do experience fewer threading exceptions when I allow execution to continue on the thread that just completed my asynchronous work.
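
The pattern I settled on: in service and library code that never touches a UI or request context, append ConfigureAwait(false) to every await. A hedged sketch (the class and URL are hypothetical; the HttpClient calls are standard):

using System.Net.Http;
using System.Threading.Tasks;

public class ProfileClient
{
   private readonly HttpClient _http = new HttpClient();

   // Library code: no captured context is needed, so every await
   // opts out of marshaling back to the caller's context.
   public async Task<string> GetProfileJsonAsync(string url)
   {
      var response = await _http.GetAsync(url).ConfigureAwait(false);
      response.EnsureSuccessStatusCode();
      // this continuation may run on any thread pool thread
      return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
   }
}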

Exploring .NET Core 1.0

After a busy year working on coding up REST services with ASP.NET Web API 2.0 and doing zero blogging, I’m finally taking a little time to explore the .NET Core 1.0 bits released recently in RC2 and watching a number of presentations. I’m very excited about what I’m learning and will blog more about it but here’s a list of links that I’ve found most instructive and useful.

First, the announcement of RC2: announcing-asp-net-core-rc2

Next, the install: https://www.microsoft.com/net 

The unforgettable Scotts at TechEd: Introducing ASP.NET Core 1.0 

And the ASP.NET Core Deep Dive into MVC that followed.

After some experimentation with the preview tooling, trying to figure out how to create a .NET Core class library NuGet package was made easier with this very helpful article: how-to-create-a-nuget-package-with-your-library

And understanding the IIS deployment story was made much clearer just the other day by the always helpful Rick Strahl: Publishing-and-Running-ASPNET-Core-Applications-with-IIS

Then I went looking for something similar for using Apache as the reverse proxy in front of a Kestrel app and found this: linuxproduction
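
The gist of that kind of setup, for the record, is mod_proxy forwarding to Kestrel on a local port. A minimal sketch (server name and port are hypothetical; requires mod_proxy and mod_proxy_http enabled):

# Apache virtual host fronting a Kestrel app on port 5000
<VirtualHost *:80>
   ServerName example.com
   ProxyPreserveHost On
   ProxyPass / http://127.0.0.1:5000/
   ProxyPassReverse / http://127.0.0.1:5000/
</VirtualHost>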

All of that finally led me to clone the KestrelHttpServer.

Exploring the code and the repo has been interesting. I especially noted the recent pull request to Downtarget Kestrel to NETStandard 1.3.

The comment that Kestrel needed no functionality beyond 1.3, and that this allowed greater flexibility in current scenarios, caught my eye.

The most important piece of learning so far has been the very brief discussion of the Libuv library used by Kestrel: http://libuv.org/

This amazing little library is used by Node.js and now by Kestrel, the ASP.NET Core web server, which according to reports in Microsoft presentations has served up 3.2 million requests per second on a single bare-metal machine. Even for a beefy machine, that is impressive.

Seeing how Kestrel uses Libuv is even more interesting and makes me think I ought to explore it as the next engine for a ServiceWire and ServiceMq implementation that could be deployed as a Docker container.

I was especially interested to see how .NET Core 1.0 utilizes native libraries in the Kestrel integration and use of Libuv as a NuGet package: KestrelHttpServer Internal Networking

The runtimes contained in the Libuv package used by Kestrel were also interesting to note:

[Image: native runtime folders included in the Libuv NuGet package]

This concludes about a week of research on the side where I could sneak in an hour or two here or there. I’m eager to keep learning.