Tuesday, April 09, 2019

Tuesday Quickie: When closure bites (or how not to configure Newtonsoft.Json)

TLDR: Be very careful using captured closures and anonymous methods - they can leak memory when you don't expect. Also copy-and-paste code is bad.

So yesterday we had an issue where a long-running service within our platform suddenly started throwing StackOverflowExceptions with a new version. It was a blocker to the next release, so inevitably I got tasked with fixing the issue.

A quick profile of a locally running copy replicated the error (phew!) and pointed to one of our HTTP based service clients being at fault. This was confusing, as those components hadn't changed. 

But what had changed was how those clients were being used - previously, they had accidentally been injected from the IoC container as singletons - now they were being injected as transients.

So why would the stack overflow?

It turns out that the service clients all shared copy-and-paste code used to ensure that a custom JSON converter was added to the default JsonSerializerSettings used by the Newtonsoft JSON library, and that code was leaking memory and ultimately causing the StackOverflowException.

But how? Here's the old code:
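The original listing hasn't survived in this copy, but based on the description that follows, the problematic helper looked something like this (the class and method names are my assumptions):

```csharp
using System;
using Newtonsoft.Json;

public static class SerializerConfig
{
    // Hypothetical reconstruction of the leaky helper.
    public static void EnsureConverter(JsonConverter converter)
    {
        // Get (or create) the current default settings instance...
        var defaultSettings = JsonConvert.DefaultSettings?.Invoke()
                              ?? new JsonSerializerSettings();
        defaultSettings.Converters.Add(converter);

        // ...then capture it in a closure assigned back to the static property.
        // A new Func<JsonSerializerSettings> is allocated on every call - these
        // were the instances filling the memory profile.
        JsonConvert.DefaultSettings = () => defaultSettings;
    }
}
```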

Pretty straightforward - and a static method in a static class shouldn't leak?


In that code we're capturing a closure (defaultSettings) and then using it within an anonymous method that we're assigning back to JsonConvert.DefaultSettings.

Because of the captured closure, the compiler couldn't make the anonymous method truly static - so you get a new instance every time the helper method is called - and those instances are pinned (again because of the captured closure) so that they can't be garbage collected.

Since this helper was being called every time a service client was being instantiated, and the clients were no longer singletons, the profile was showing tens of thousands of instances in memory of Func<JsonSerializerSettings>. Not good.

The solution is to be rather more defensive in adding the converter. Here's the corrected version:
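Again the original listing is missing here, so this is a sketch of the shape of the fix as described below (names are my assumptions):

```csharp
using System;
using System.Linq;
using Newtonsoft.Json;

public static class SerializerConfig
{
    private static readonly object SyncLock = new object();

    // Our own default settings instance, and a genuinely static factory for it
    // (the lambda captures nothing, so the compiler caches a single delegate).
    private static readonly JsonSerializerSettings OurDefaultSettings
        = new JsonSerializerSettings();
    private static readonly Func<JsonSerializerSettings> SettingsFactory
        = () => OurDefaultSettings;

    public static void EnsureConverter(JsonConverter converter)
    {
        // Only assign our factory if nothing else has - double-checked lock.
        if (JsonConvert.DefaultSettings == null)
        {
            lock (SyncLock)
            {
                if (JsonConvert.DefaultSettings == null)
                {
                    JsonConvert.DefaultSettings = SettingsFactory;
                }
            }
        }

        // Use whatever factory is registered, and add the converter only once.
        var settings = JsonConvert.DefaultSettings();
        if (!settings.Converters.Any(c => c.GetType() == converter.GetType()))
        {
            lock (SyncLock)
            {
                if (!settings.Converters.Any(c => c.GetType() == converter.GetType()))
                {
                    settings.Converters.Add(converter);
                }
            }
        }
    }
}
```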

We create our own default JsonSerializerSettings instance and a genuinely static Func<JsonSerializerSettings> helper.

If JsonConvert.DefaultSettings is null, we assign that helper (within a double-checked lock for safety).

Finally, we use whatever helper factory was assigned to get the default JsonSerializerSettings instance and add our converter if needed - again within a double-checked lock.

Re-running the repro resulted in exactly 1 instance of Func<JsonSerializerSettings> being created - and our bug is fixed.

The moral of the story? Watch out for anonymous methods and captured closures - they can bite! 

Friday, March 22, 2019

Friday Quickie: Fixing Kubernetes connectivity with Docker for Windows

Another aide-memoire - when using kubectl on Windows against Docker for Windows, you may get the following error:

Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.

This might well just be that the tool can't find its config... and the fix is easy - just set an environment variable:
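The actual snippet hasn't survived in this copy, but the usual fix (an assumption on my part, consistent with the linked GitHub issue) is to point kubectl at the config file Docker for Windows maintains:

```powershell
# Tell kubectl where Docker for Windows writes its cluster config
# (hypothetical path - adjust if your profile lives elsewhere)
$env:KUBECONFIG = "$HOME\.kube\config"

# Or persist it across sessions:
[Environment]::SetEnvironmentVariable("KUBECONFIG", "$HOME\.kube\config", "User")
```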
Restart your PowerShell instance and try again... Simple. (See also this issue on GitHub.)

Thursday, February 14, 2019

Thursday Quickie: When AssemblyBinding redirect doesn't... and how I fixed it

TLDR: If your csproj file has AutoGenerateBindingRedirects set to true, then you MUST include the xmlns on the assemblyBinding node for any custom binding redirects in app.config.

So I've been banging my head against the wall trying to get a piece of code delivered over the last few days, repeatedly hitting an issue where the code ran perfectly locally but, when deployed to the target server, failed with the dreaded error:
Could not load file or assembly 'System.Net.Http, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

I'd used Gertjan van Montfoort's excellent article as a guide, but whatever version of the NuGet package I installed, and whatever version I referenced in my binding redirects in app.config, it just would not fire up.

Looking at the Fusion logs (helped by Scott Hanselman's article) I could see that my app was initially loading System.Net.Http from the GAC for a reference, then successfully loaded from the app directory (the one from the NuGet package I referenced).

But when the assembly was loaded again for other dependencies, Fusion was not honouring the binding redirect, and so the app was failing on startup.

Finally I found a comment on the net that hinted about case sensitivity within the app config, so I went looking for differences between my custom redirects and those generated on build into MyApp.exe.config.

And I found one - the generated redirects all have an xmlns attribute on the assemblyBinding node - and my custom one did not.

The solution - add one in your app.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
                <assemblyIdentity name="System.Net.Http" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
                <bindingRedirect oldVersion="" newVersion="" />
            </dependentAssembly>
        </assemblyBinding>
    </runtime>
</configuration>

Now your binding redirects get merged with the generated ones and it all works.

My interpretation is that Fusion appears to honour the redirects in assemblyBinding nodes only if all have xmlns attributes or none have them - but not if mixed. 

Hopefully others will find this useful - now I can get on and do some actual work!

Wednesday, June 06, 2018

Honey, I shrunk `ConfigureServices()`

The problem with all DI systems is that configuration generally is pulled in two polar-opposite directions. Either your container is configured by convention (with a few rare exception cases), or it's configured directly and declaratively with masses of registrations.
The latter configuration practice has the benefit of not having any side-effects as you bring in third party libraries - you'll not get random Controller classes added to your system, for example. And this is why it's my preference.
BUT, declaring every dependency in your container can lead to the dreaded "God Configuration Method" - where one method has hundreds of lines of container registration. This is bad - it couples your application tightly to all dependencies at all levels, and makes it hard to maintain.
For the last few years, I've been using a discovery system for Unity that moves the registrations for dependencies away from the "God Configuration Method" and into individual "Bootstrappers" that live alongside the implementation of the service that they register.
This "magic" discovery pattern works well - reducing the "God Configuration Method" to a single line - and allowing (for example) client libraries to define the "right" lifecycles themselves, not relying on a programmer getting it right each time they use the client.
Today, I'm releasing an experiment - a port of "magic" bootstrapper discovery for ASPNet Core and the Microsoft DI framework.
It might be useful - it's certainly simple enough to use. The experiment is to see what sort of take-up it gets.

Note: I added the "magic" extensions under the Microsoft.Extensions.DependencyInjection.Discovery namespace deliberately.


Using the discovery library is simple. First install the NuGet package:
install-package CheviotConsulting.DependencyInjection.Discovery
Next, where previously you would add registrations into the Startup.ConfigureServices(...) method, you write an IServiceDiscoveryBootstrapper implementation:
using Microsoft.Extensions.DependencyInjection;

public sealed class WebAppServiceBootstrapper : IServiceDiscoveryBootstrapper
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddTransient<IWebAppService, MyWebAppService>();
    }
}
You can have as many of these as you like, in as many referenced projects as you like - which allows you to put a service registration alongside a service or service client itself.
Then in the Startup.ConfigureServices(...) method, you add the following single line:
using Microsoft.Extensions.DependencyInjection.Discovery;

public class Startup
{
    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.BootstrapByDiscovery();
    }
}
And that's it. The discovery framework will load any assemblies referenced by your application, find all IServiceDiscoveryBootstrapper instances and execute the ConfigureServices(...) method on each in turn - bootstrapping the IServiceCollection used by ASPNet Core / Microsoft.Extensions.DependencyInjection.
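For the curious, the discovery step can be sketched roughly like this - my own illustrative reconstruction, not the library's actual implementation (all names beyond IServiceDiscoveryBootstrapper and BootstrapByDiscovery are assumptions):

```csharp
using System;
using System.Linq;
using Microsoft.Extensions.DependencyInjection;

public static class ServiceDiscoveryExtensions
{
    public static IServiceCollection BootstrapByDiscovery(this IServiceCollection services)
    {
        // Find every concrete bootstrapper in the loaded assemblies...
        var bootstrapperTypes = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(assembly => assembly.GetTypes())
            .Where(type => typeof(IServiceDiscoveryBootstrapper).IsAssignableFrom(type)
                           && !type.IsAbstract && !type.IsInterface);

        // ...resolve each one against the collection as it stands (which is what
        // enables constructor injection in bootstrappers), then let it register
        // its services.
        var provider = services.BuildServiceProvider();
        foreach (var bootstrapperType in bootstrapperTypes)
        {
            var bootstrapper = (IServiceDiscoveryBootstrapper)
                ActivatorUtilities.CreateInstance(provider, bootstrapperType);
            bootstrapper.ConfigureServices(services);
        }

        return services;
    }
}
```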

Advanced Usage

Bootstrapper Dependencies

Unlike the Startup.ConfigureServices method, you can't pass dependencies in the method parameters. However, by default the bootstrapper classes themselves are resolved using the IServiceCollection as it stands just before the call to BootstrapByDiscovery() is made. So you can use constructor dependency injection in your bootstrapper.
public sealed class SampleServicesBootstrapper : IServiceDiscoveryBootstrapper
{
    private readonly IHttpContextAccessor contextAccessor;

    public SampleServicesBootstrapper(IHttpContextAccessor contextAccessor)
    {
        this.contextAccessor = contextAccessor;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton<MyCustomService>(new MyCustomService(contextAccessor));
    }
}

Manual Bootstrapping

If you don't want to rely on the "magic" discovery, you can always bootstrap each of your bootstrappers individually within your Startup.ConfigureServices(...) method.
public class Startup
{
    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        // Bootstrap services manually
        services.BootstrapFrom(new WebAppServiceBootstrapper());
    }
}

Tuesday, April 24, 2018

DDDSW 2018 - Retrospective

So it's now the Monday after the Saturday before and I've had time to chill and think back on the excellent event that was DDDSW 2018.

Held in the Redcliffe Sixth Form Centre again, my day started early as I was driving down and back in the day. Fortunately the M4 was clear and in spite of the roadworks around Bristol Temple Meads, I rolled into the car park just after 0830.

Once again, Landmark was sponsoring the Cream Teas, so I was able to bag a sponsor's parking space behind the building (thanks Ross) and get the goodies in via the back door.

Setting up the Landmark stand-up and handing the box full of goodies for the delegate bags took but moments, and then it was up to the Speaker Room to do a quick run-through and check connectivity via my phone. 

I was scheduled for Track 5 just before lunch, so I would have had plenty of time to prepare had I needed it - as it was I could chat and catch up with old friends.

Speaker Briefing was followed by the usual intro and housekeeping talk in the main room. It was really good to see a strong turnout again too, right from the start

With these over, it was off to the sessions - and first up was one that was really just for me with no real bearing on my day-to-day job (tho' I've got a few ideas now).

Session 1 - Unity 101 with Andy Clarke

I was interested in this as I'd played a little with Unity 3d already, but hadn't really got past attaching standard behaviours to pre-fabs. Andy started simply with an intro to the product (used for everything from simple 3d games up to Neill Blomkamp's "Adam" series of shorts). He then gave a quick once-over of the UI (and the left-hand coordinate system), and soon had balls rolling and boxes bouncing off each other as they spun.

Next up was "Tank Dominoes", using pre-fabs from the Unity Store to indulge in a bit of monolith-toppling. 

With a tank. 😁 

It was only a shame that Andy's laptop refused to play the sound effect of the explosion.

Then Andy demonstrated the Inverse Kinematic Skeleton system available with a humanoid model.

By the end he had a two-player networked sample running where the characters could pass a baton between each other. The amount of code required - or rather the lack of it - is a real testament to what a powerful authoring environment Unity is.

Thoroughly enjoyable, Andy's talk will give me a good head-start against my 11 year old who wants to try Unity himself for game development (Scratch is so Junior School, apparently).

Session 2 - "Give it a REST - Tips for designing and consuming public APIs" with Liam Westley

You've got to give it to Liam - a day or so before DDDSW he'd been knocked off his bike (and had possibly broken a bone in his wrist), but that didn't stop him giving a polished presentation on the good, the bad and the ugly sides of public facing APIs.

Liam explaining (with video) his cycling accident
Liam started by describing some of the history and development of RESTful best practice - and how HATEOAS was a crucial upcoming technology. REST is not CRUD, and HATEOAS is a REST constraint, not an independent thing.

He described how Google ReCaptcha was exceptionally clever when deprecating service versions by using canary deployments to show a message designed to get users to prod web site owners to upgrade.

Versioning came under the microscope - here be dragons - as did effective patterns for genuinely asynchronous operations using HTTP 202 Accepted and a HATEOAS response payload.

Liam finally explained how a public API is effectively a contract between owner and user, and needs to be handled as such if you don't want to hack your users off.

Lots of good stuff to take back to work here, although I'm not sure I captured it all in my "sketchy" notes.

Next up was my own talk.

Session 3 - Azure in Action: CosmosDB, Functions and ServiceBus in perfect harmony

I'd had a humbling experience submitting this year - not one, but two talks accepted for DDDSW, with absolutely identical numbers of votes! I could only do one, so I'd decided to do my Azure talk rather than the soft-skills one.

Paraphrasing my own blurb, there's so many parts to Azure, it's sometimes hard to decide what to use and when. Do I use TableStore or CosmosDB? Would BlobStore be better? Should I host a full-fat .Net service in a VM, or stand up an ASP.Net Core WebApi? What about functions? The choices are myriad. 

Speaker's Eye View

I described how at Landmark we made these kind of decisions as we implemented some new features in our product. I talked about the questions you need to ask to make those decisions, where we went wrong, and how we succeeded in the end. 

Finally I described in detail how we used CosmosDB, Azure Functions and Service Bus together to provide a Compliance Audit trail feature that would scale properly, work reliably, be trivial to use, and that wouldn't break the bank, using a complete end-to-end sample code-base, lifted-and-shifted from the code we wrote at Landmark.

For a first outing, I'm really pleased with how this went - the slides gave the background and context I wanted, and the code demos pretty much all worked perfectly. 

Certainly the questions asked by the audience and the feedback I've received has been great. Thanks all.

It's been great to have such support from Landmark too - allowing me to effectively open-source our Compliance Audit framework as the reference implementation to go along with this talk.

Once again, Ian Johnson captured the essence in one of his legendary sketchnotes - thanks, Ian!

For the curious, the slides and code are all available online:

After the obligatory pastie lunch (sponsored by Dyson this year), punctuated with interesting conversations as always, it was on to the next session:

Session 4 - "What's That Smell" with Anthony Dang

Anthony's session was an entertaining review of a series of "gotchas" he's experienced in a Digital Agency environment, specifically when taking on other people's projects.

Whilst focused a little on Umbraco (The CogWorks is an Umbraco Gold Partner), his advice is applicable to anyone scratching their head about web application performance.

One thing that jumped out at me was his suggestion that the simple act of turning off all caching can actually improve performance, and certainly show up the real underlying issues. It's all too easy as a developer to over-use (or over-engineer) caching in your application and bring in all sorts of un-testable issues.

Another break, more coffee, more chat and most importantly the Cream Teas (sponsored by Landmark). 

I was good and only had a couple. Really.

Then it was time for

Session 5 - "Lessons learned - ingesting, processing and securing 200 million messages / day" with Jess Panni

This was probably the talk most relevant to my day-to-day job, and boy did Jess deliver.
Jess described how Endjin ingest Radius data via a UDP-listener service, and pipe this to Azure Data Lake via Functions. He described the "Swiss Cheese" model of providing more and more security along the pipeline, how Shared Access Signature Tokens provide better security than Shared Access Policies (at the expense of somewhat more complicated code flows), particularly when used with Key Vault.

Jess also demonstrated how a "2-key launch control" could be implemented for software - requiring two independent operators to start an update to a system.

Finally, he showed how Azure Data Lake Analytics is used to process the 200 million events into useful data points for reporting and monitoring using U-SQL.

All in all, a really interesting - and very fast paced - talk to finish up the day before the final wrap up and swag.

Wrap up and swag

Afterwards, it was off to Just Eat towers for more chat over beer and pizza - although I had to forgo both as I was driving home. 😞

All credit has to go to the DDDSW team - another excellent event this year - and to all the attendees and speakers who made it a great day.

Friday, November 17, 2017

Meetup: Leaders in Tech, Reading

Last night, I had the pleasure of attending the inaugural Meetup of "Leaders in Tech: Reading" at Austin Fraser's offices in central Reading.

Billed as a group "for CTOs, CIOs, VPs, Heads of IT and other senior technology leaders to get together and discuss current tech trends", this first event was a 20 minute whistle-stop tour by Andy Smith on the "What, Why & How of Whole Enterprise Agility".

Preparing a blog post interviewing the speaker prior to the event was genius - giving an insight into what to expect at the event.
The central Reading venue works perfectly for me (and many others, based on conversations I had) - just a short bus trip from work and handy for the station for that late train home.

Greeted in the impressive lobby of Thames Tower, we were guided up to the 11th floor offices of Austin Fraser, where drinks and hors d'oeuvres awaited, with the opportunity to meet the other attendees before the event.

Unlike I've experienced at some of these sort of events, the hosts made a great effort to engage the attendees before the presentation - getting conversations started so that everyone was comfortable and the "wall-flower" effect was minimised.

The presentation itself was held in the break-out area between the foosball tables and the putting green, underneath a hanging garden (I kid you not). Austin Fraser have clearly taken to heart the need to provide a work environment that will appeal.

Andy started with a little history, comparing enterprise organisation in the 20th Century (slow, regulated, competitive) with what's emerged in the 21st - fast, disruptive, collaborative enterprises, where millennials demand a vibrant working environment whilst expecting to move at least 4 times before they're halfway through their 30s.

He discussed the "Elastic Band of Culture" - and how unless agile transformation is invoked across an entire organisation, then those business functions not involved will only be a drag on those others.

There were so many things to take away from Andy's talk, but it could all be distilled into one soundbite:

"Organisational Agile is a change in the mindset and culture 
for a whole organisation."

After the presentation, the Q&A session turned into an excellent round-table discussion of the issues raised - lots of interaction with and contribution from the attendees that lasted longer than the presentation itself.

Finally, more networking and casual discussion (as well as more drinks and nibbles) rounded the evening off.

All credit has to go to the Austin Fraser team for hosting this event and making it a success - I'm going to be going again. 

Thursday, November 09, 2017

The 6 Step Happy Path to HTTPS on Amazon S3 and Cloudfront

Troy Hunt called it back in July, and now the HTTPS tipping point is here. From Chrome 62 onwards, sites are going to be flagged as dangerous if they don't have strong security in place. 

If you haven't already, read his article now - I'll wait.

Of course it doesn't have to be hard to implement - Troy has himself blogged on the "The 6-Step 'Happy Path' to HTTPS" - but I'm hosting my websites out of Amazon AWS, so my "6-step 'Happy Path' to HTTPS on Amazon S3 and Cloudfront" is a little different.

Step 1 - Get a free certificate

Difficulty level: Easy

So whilst I can't use LetsEncrypt, Amazon gives us the tools to add a custom SSL certificate to my Cloudfront distribution.

Go to the AWS Console and the Cloudfront Management module.

Select the distribution for your website, and click on "Edit" on the General tab. Amongst all the settings, you get the options shown right for selecting what certificate to use.

We want a custom SSL certificate, and all we have to do is click on "Request or Import a Certificate with ACM" to start the request process in a new window / tab
NOTE: You must have your AWS console configured for the N. Virginia region when going through the certificate request process. Whilst this should happen automatically, it didn't always for me. YMMV.
Working through the ACM wizard to get a certificate is simple enough that I'll not detail it here - but remember to add both the www.mydomain.com entry and the *.mydomain.com wildcard if you've got sub-domains.

When your cert has been created and validated, go back to the Cloudfront distribution page and hit the refresh button beside the certificate drop-down. Your shiny new certificate should be shown, so select it and save changes.

When the distribution has updated, you'll now be able to access your website using https and the ACM certificate.

Step 2 - Add a 301 "Permanent Redirect"

Difficulty level: Easy

This step is all about telling browsers to always use HTTPS - and Cloudfront has you covered here too.

Select the distribution for your website in the Cloudfront Management module again and this time choose the "Behaviours" tab. I had only a single default behavior, you may have more - if so, then you'll need to make the following change for each.

Check the checkbox and click on the "Edit" button to edit the behaviour. 

The setting we're interested in is "Viewer Protocol Policy" (shown right). 

Set this to "Redirect HTTP to HTTPS" and click on the save button (which is helpfully labelled "Yes, Edit") at the bottom - when the distribution finishes updating your website will now redirect HTTP requests to HTTPS.

Step 3 - Add HSTS

Difficulty level: Medium-Hard

This step is actually the meat of this blog post. Serving your website out of S3 and CloudFront may be cheap, but you don't get all the self-serve features offered by CloudFlare for adding standard security headers.

But all is not lost - we can use AWS Lambda to post-process all responses as they leave CloudFront.

First, open the Lambda Management module (ensuring you're in the N. Virginia region).

We need to create a new function for each website you're serving from S3 / CloudFront - I've got 3 websites, and have completed this exercise on 2 so far as you can see right.

Click on the "Create Function" button and you'll be presented with a "Blueprints" page shown right. 

We want the cloudfront-modify-response-header blueprint, so click on the title of that card.

Now we're going to have to add some information about the Lambda function before we can create it. Interestingly, we're not actually able to edit the code for the function until after it's been created - we have to take the boiler-plate code as is for now.

Enter a name for your function - remembering that you'll create a new function within your account for each website you host. Something like AddSecurityHeadersForMyDomain might be a good choice here.

If you've never created a Lambda, you'll need to create a role, so select "Create New Role from Template", give it a name and choose "Basic Edge Lambda permissions" as the policy template.

Once you've done that, you can select "Choose an existing role" and pick the role you previously created - roles can be shared across Lambda functions.

Next, we need to configure how the Lambda links to CloudFront.

Critical here is to select the correct CloudFront distribution - which is of course just a nice long code string. (sigh)

Leave the "Cache Behavior" option set to "*" (the default), and for "CloudFront Event" select "Viewer Response".

You have to check the "Enable trigger and replicate" option at this point to proceed - even though the "Create Function" (Save) button is way down the page below the boilerplate code. Click on that and you've successfully created your Lambda and bound it to your CloudFront distribution.

But, of course, we've yet to actually edit the code for this function to do what we want - namely add the HSTS header.

Click on the "Configuration" tab and you can see the boilerplate code. Helpfully AWS tells us that we can't edit the V1 function we just created, but have to switch to $LATEST - Lambda functions are versioned.

Let's pause though to have a look at what the boilerplate function is doing before we change it.

The function modifies the outbound headers - it takes the value from the "X-Amz-Meta-Last-Modified" header set by S3 as the origin and pastes it into the more standard "Last-Modified" header. 

It's all fairly obvious Node.js stuff, so let's add the HSTS header. Click on "Click here to go to $LATEST" and you'll be presented with an editable code pane.

The code we want to add is almost trivial, and we need to add it just above the callback(null, response) line:

    const hstsName = 'Strict-Transport-Security';
    const hstsValue = 'max-age=31536000; includeSubDomains';

    headers[hstsName.toLowerCase()] = [{
        key: hstsName.toLowerCase(),
        value: hstsValue,
    }];


Click on "Save" (in the activity bar at the top of the page) to save the Lambda.

AWS Lambdas have an in-built test harness, so we should configure this - but it's not automated or obvious.

Click on the "Select a test event..." dropdown and click on "Configure test events" to bring up the Create / Edit dialog.
Give your test a name (you can have 10 per function) and click on "Create" to save the test.

Now click on "Save and test" and your Lambda function is run - you should get a "success" banner to say all's well.

Expanding the details section lets you see the input and output of the function - and scrolling down the output area we see our HSTS header has been correctly added.

We're nearly there, honestly.

We have to publish and re-bind the function for it to take effect on our CloudFront distribution. Click on the "Actions" drop-down and click on "Publish new version". 

Enter a descriptive name for this version and click on "Publish".

You'll now be back to the Function details page, but with V2 selected. Click on the "Triggers" tab - and there's nothing there! Our new version needs to be bound to CloudFront, replacing the obsolete V1 version.

Click on "+ Add Trigger" and you get the trigger dialog. This should be pre-populated from the V1 settings so all you have to do is click on "Submit" to rebind to the V2 function. 

Load your site in a browser (you may need a hard-refresh) after a couple of seconds, and using the developer tools you should be able to see the HSTS header has been added.

Step 3 completed - finally.

Step 4 - Change insecure scheme references

Difficulty level: Boring

Yes, it's boring - but also very easy - to go through your website looking for insecure scheme references.

Most of mine were relative references anyway, so it was only the few external ones that caused any issues - on the home page specifically my LinkedIn badge GIF.

Now you could, of course, use the Lambda we created in Step 3 to replace any 'http://' found in the response body with 'https://' to get the same effect as flicking the switch in CloudFlare does, but for my noddy sites that's overkill.

A quick check using Chrome DevTools very quickly digs out the references - the Security tab is your friend here.

Actually getting the CloudFront distribution pushed so that the latest build of the codebase was being served was more problematic than anything else. Go figure.

Step 5 - Add the "upgrade-insecure-requests" CSP

Difficulty level: Easy

This step is actually easier in S3 and Cloudfront than in CloudFlare, in my opinion.

Now we've got a Lambda that modifies headers, all we need to do is add a couple more lines to add the CSP header:

    const cspName = 'Content-Security-Policy';
    const cspValue = 'upgrade-insecure-requests';

    headers[cspName.toLowerCase()] = [{
        key: cspName.toLowerCase(),
        value: cspValue,
    }];


Of course, we have to go round the loop of creating a new version of the function and re-binding it, but I'll leave that to you as an exercise.

Step 6 - Monitor CSP reports

Difficulty level: Trivial

Things have progressed since Troy wrote his article - he's recently joined Scott Helme as a partner in Report-Uri to build out that service.

So all we do for this step is sign up for the Report-Uri service and get a reporting URL from there. Implementing monitoring is then another simple change to our Lambda to add another header:

    const csprName = 'Content-Security-Policy-Report-Only';
    const csprValue = 'default-src https:;report-uri https://mysecretapikey.report-uri.com/r/d/csp/enforce';

    headers[csprName.toLowerCase()] = [{
        key: csprName.toLowerCase(),
        value: csprValue,
    }];


And we're done - that's the 6-step Happy Path to HTTPS on Amazon S3 and Cloudfront.

Of course, you should go further - running your site through Scott Helme's SecurityHeaders.io gives a load of advice on headers you can add with your Lambda. My personal site got an 'F' rating before I started this exercise - now it's an 'A'. Win!

So here's the full code for the Lambda that gets me the 'A' rating...