Friday, November 17, 2017

Meetup: Leaders in Tech, Reading

Last night, I had the pleasure of attending the inaugural Meetup of "Leaders in Tech: Reading" at Austin Fraser's offices in central Reading.

Billed as a group "for CTOs, CIOs, VPs, Heads of IT and other senior technology leaders to get together and discuss current tech trends", this first event was a 20 minute whistle-stop tour by Andy Smith on the "What, Why & How of Whole Enterprise Agility".

Preparing a blog post interviewing the speaker prior to the event was genius - giving an insight into what to expect at the event.
The central Reading venue works perfectly for me (and many others, based on conversations I had) - just a short bus trip from work and handy for the station for that late train home.

Greeted in the impressive lobby of Thames Tower, we were guided up to the 11th floor offices of Austin Fraser, where drinks and hors d'oeuvres awaited, with the opportunity to meet the other attendees before the event.

Unlike at some events of this sort I've attended, the hosts made a great effort to engage the attendees before the presentation - getting conversations started so that everyone was comfortable and the "wall-flower" effect was minimised.


The presentation itself was held in the break-out area between the foosball tables and the putting green, underneath a hanging garden (I kid you not). Austin Fraser have clearly taken to heart the need to provide a work environment that will appeal.

Andy started with a little history, comparing enterprise organisation in the 20th Century (slow, regulated, competitive) with what's emerged in the 21st - fast, disruptive, collaborative enterprises, where millennials demand a vibrant working environment whilst expecting to move at least four times before they're halfway through their 30s.

He discussed the "Elastic Band of Culture" - and how, unless agile transformation is invoked across an entire organisation, the business functions left out will only be a drag on the ones that are involved.

There were so many things to take away from Andy's talk, but it could all be distilled into one soundbite:

"Organisational Agile is a change in the mindset and culture 
for a whole organisation."

After the presentation, the Q&A session turned into an excellent round-table discussion of the issues raised - lots of interaction with and contribution from the attendees that lasted longer than the presentation itself.

Finally, more networking and casual discussion (as well as more drinks and nibbles) rounded the evening off.

All credit has to go to the Austin Fraser team for hosting this event and making it a success - I'm going to be going again. 

Thursday, November 09, 2017

The 6-Step Happy Path to HTTPS on Amazon S3 and CloudFront

Troy Hunt called it back in July, and now the HTTPS tipping point is here. From Chrome 62 onwards, pages served over plain HTTP are going to be flagged as "Not secure" whenever users enter data.

If you haven't already, read his article now - I'll wait.



Of course it doesn't have to be hard to implement - Troy has himself blogged on "The 6-Step 'Happy Path' to HTTPS" - but I'm hosting my websites out of Amazon AWS, so my "6-Step 'Happy Path' to HTTPS on Amazon S3 and CloudFront" is a little different.


Step 1 - Get a free certificate


Difficulty level: Easy


So whilst I can't use Let's Encrypt (there's no server here for me to run a client on), Amazon gives us the tools to add a custom SSL certificate to my CloudFront distribution.

Go to the AWS Console and the Cloudfront Management module.

Select the distribution for your website, and click on "Edit" on the General tab. Amongst all the settings, you get the options (shown right) for selecting which certificate to use.

We want a custom SSL certificate, and all we have to do is click on "Request or Import a Certificate with ACM" to start the request process in a new window / tab.
NOTE: You must have your AWS console configured for the N. Virginia region when going through the certificate request process. Whilst this should happen automatically, it didn't always for me. YMMV.
Working through the ACM wizard to get a certificate is simple enough that I'll not detail it here - but remember to add both the www.mydomain.com entry and the *.mydomain.com wildcard if you've got sub-domains.
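If you prefer the command line, the same request can be made with the AWS CLI - a sketch with placeholder domain names (and note the N. Virginia / us-east-1 constraint again):

    aws acm request-certificate \
        --domain-name www.mydomain.com \
        --subject-alternative-names "*.mydomain.com" \
        --validation-method DNS \
        --region us-east-1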

When your cert has been created and validated, go back to the Cloudfront distribution page and hit the refresh button beside the certificate drop-down. Your shiny new certificate should be shown, so select it and save changes.

When the distribution has updated, you'll now be able to access your website using https and the ACM certificate.

Step 2 - Add a 301 "Permanent Redirect"

Difficulty level: Easy

This step is all about telling browsers to always use HTTPS - and Cloudfront has you covered here too.

Select the distribution for your website in the Cloudfront Management module again and this time choose the "Behaviors" tab. I had only a single default behaviour; you may have more - if so, you'll need to make the following change for each.

Check the checkbox and click on the "Edit" button to edit the behaviour. 

The setting we're interested in is "Viewer Protocol Policy" (shown right). 

Set this to "Redirect HTTP to HTTPS" and click on the save button (which is helpfully labelled "Yes, Edit") at the bottom. When the distribution finishes updating, your website will redirect HTTP requests to HTTPS.

Step 3 - Add HSTS

Difficulty level: Medium-Hard

This step is actually the meat of this blog post. Serving your website out of S3 and CloudFront may be cheap, but you don't get all the self-serve features offered by CloudFlare for adding standard security headers.

But all is not lost - we can use AWS Lambda to post-process all responses as they leave CloudFront.

First, open the Lambda Management module (ensuring you're in the N. Virginia region).

We need to create a new function for each website served from S3 / CloudFront - I've got three websites, and have completed this exercise on two so far, as you can see on the right.


Click on the "Create Function" button and you'll be presented with a "Blueprints" page shown right. 

We want the cloudfront-modify-response-header blueprint, so click on the title of that card.

Now we're going to have to add some information about the Lambda function before we can create it. Interestingly, we're not actually able to edit the code for the function until after it's been created - we have to take the boiler-plate code as is for now.


Enter a name for your function - remembering that you'll create a new function within your account for each website you host. Something like AddSecurityHeadersForMyDomain might be a good choice here.

If you've never created a Lambda, you'll need to create a role, so select "Create New Role from Template", give it a name and choose "Basic Edge Lambda permissions" as the policy template.

Once you've done that, you can select "Choose an existing role" and pick the role you previously created - roles can be shared across Lambda functions.


Next, we need to configure how the Lambda links to CloudFront.

Critical here is to select the correct CloudFront distribution - which is of course just a nice long code string. (sigh)

Leave the "Cache Behavior" option set to "*" (the default), and for "CloudFront Event" select "Viewer Response".

You have to check the "Enable trigger and replicate" option at this point to proceed - even though the "Create Function" (Save) button is way down the page below the boilerplate code. Click on that and you've successfully created your Lambda and bound it to your CloudFront distribution.

But, of course, we've yet to actually edit the code for this function to do what we want - namely add the HSTS header.


Click on the "Configuration" tab and you can see the boilerplate code. Helpfully AWS tells us that we can't edit the V1 function we just created, but have to switch to $LATEST - Lambda functions are versioned.

Let's pause though to have a look at what the boilerplate function is doing before we change it.

The function modifies the outbound headers - it takes the value from the "X-Amz-Meta-Last-Modified" header set by S3 as the origin and pastes it into the more standard "Last-Modified" header. 
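For reference, the blueprint handler looks roughly like this (paraphrased rather than copied verbatim, so expect minor differences in the console):

    'use strict';

    exports.handler = (event, context, callback) => {
        // The response CloudFront is about to return to the viewer
        const response = event.Records[0].cf.response;
        const headers = response.headers;

        const headerNameSrc = 'X-Amz-Meta-Last-Modified';
        const headerNameDst = 'Last-Modified';

        // Copy the S3 metadata value into the standard Last-Modified header
        if (headers[headerNameSrc.toLowerCase()]) {
            headers[headerNameDst.toLowerCase()] = [{
                key: headerNameDst,
                value: headers[headerNameSrc.toLowerCase()][0].value,
            }];
        }

        // Hand the (possibly modified) response back to CloudFront
        callback(null, response);
    };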

It's all fairly obvious Node.js stuff, so let's add the HSTS header. Click on "Click here to go to $LATEST" and you'll be presented with an editable code pane.

The code we want to add is almost trivial, and we need to add it just above the callback(null, response) line:

    const hstsName = 'Strict-Transport-Security';
    const hstsValue = 'max-age=31536000; includeSubDomains';

    headers[hstsName.toLowerCase()] = [{
        key: hstsName.toLowerCase(),
        value: hstsValue,
    }];

Click on "Save" (in the activity bar at the top of the page) to save the Lambda.


AWS Lambdas have an in-built test harness, so we should configure this - but it's not automated or obvious.


Click on the "Select a test event..." dropdown and click on "Configure test events" to bring up the Create / Edit dialog.
Give your test a name (you can have 10 per function) and click on "Create" to save the test.

Now click on "Save and test" and your Lambda function is run - you should get a "success" banner to say all's well.


Expanding the details section lets you see the input and output of the function - and scrolling down the output area we see our HSTS header has been correctly added.


We're nearly there, honestly.



We have to publish and re-bind the function for it to take effect on our CloudFront distribution. Click on the "Actions" drop-down and click on "Publish new version". 

Enter a descriptive name for this version and click on "Publish".

You'll now be back to the Function details page, but with V2 selected. Click on the "Triggers" tab - and there's nothing there! Our new version needs to be bound to CloudFront, replacing the obsolete V1 version.

Click on "+ Add Trigger" and you get the trigger dialog. This should be pre-populated from the V1 settings so all you have to do is click on "Submit" to rebind to the V2 function. 

After a couple of seconds, load your site in a browser (you may need a hard refresh) and, using the developer tools, you should be able to see that the HSTS header has been added.
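A quick curl from the command line shows it too (www.mydomain.com standing in for your own site) - you should see the strict-transport-security header with the max-age value set in the Lambda:

    curl -sI https://www.mydomain.com | grep -i strict-transport-security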

Step 3 completed - finally.


Step 4 - Change insecure scheme references

Difficulty level: Boring

Yes, it's boring - but also very easy - to go through your website looking for insecure scheme references.

Most of mine were relative references anyway, so it was only the few external ones that caused any issues - specifically, on the home page, my LinkedIn badge GIF.

Now you could, of course, use the Lambda we created in Step 3 to replace any 'http://' found in the response body with 'https://' to get the same effect as flicking the switch in CloudFlare does, but for my noddy sites that's overkill.

A quick check using Chrome DevTools digs out the references - the Security tab is your friend here.

Actually getting the CloudFront distribution pushed so that the latest build of the codebase was being served was more problematic than anything else. Go figure.



Step 5 - Add the "upgrade-insecure-requests" CSP

Difficulty level: Easy

This step is actually easier in S3 and Cloudfront than in CloudFlare, in my opinion.

Now we've got a Lambda that modifies headers, all we need to do is add a couple more lines to add the CSP header:

    const cspName = 'Content-Security-Policy';
    const cspValue = 'upgrade-insecure-requests';

    headers[cspName.toLowerCase()] = [{
        key: cspName.toLowerCase(),
        value: cspValue,
    }];

Of course, we have to go round the loop of creating a new version of the function and re-binding it, but I'll leave that to you as an exercise.



Step 6 - Monitor CSP reports


Difficulty level: Trivial



Things have progressed since Troy wrote his article - he's recently joined Scott Helme as a partner in Report-Uri to build out that service.

So all we do for this step is sign up for the Report-Uri service and get a reporting URL from there. Implementing monitoring is then another simple change to our Lambda to add another header:

    const csprName = 'Content-Security-Policy-Report-Only';
    const csprValue = 'default-src https:;report-uri https://mysecretapikey.report-uri.com/r/d/csp/enforce';

    headers[csprName.toLowerCase()] = [{
        key: csprName.toLowerCase(),
        value: csprValue,
    }];

And we're done - that's the 6-step Happy Path to HTTPS on Amazon S3 and Cloudfront.

Of course, you should go further - running your site through Scott Helme's SecurityHeaders.io gives a load of advice on headers you can add with your Lambda. My personal site got an 'F' rating before I started this exercise - now it's an 'A'. Win!




So here's the full code for the Lambda that gets me the 'A' rating...
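Pieced together from Steps 3, 5 and 6 above, it looks something like this - note that the addHeader helper and the extra X-Content-Type-Options, X-Frame-Options and Referrer-Policy headers are my illustrative additions of the kind SecurityHeaders.io suggests, so adjust the exact set and values to taste:

    'use strict';

    exports.handler = (event, context, callback) => {
        const response = event.Records[0].cf.response;
        const headers = response.headers;

        // Small helper to add a header in the shape CloudFront expects
        const addHeader = (name, value) => {
            headers[name.toLowerCase()] = [{ key: name.toLowerCase(), value: value }];
        };

        // Step 3 - HSTS
        addHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');

        // Step 5 - upgrade insecure requests
        addHeader('Content-Security-Policy', 'upgrade-insecure-requests');

        // Step 6 - report-only policy pointing at Report-Uri
        addHeader('Content-Security-Policy-Report-Only',
            'default-src https:;report-uri https://mysecretapikey.report-uri.com/r/d/csp/enforce');

        // Extras of the sort SecurityHeaders.io recommends (illustrative values)
        addHeader('X-Content-Type-Options', 'nosniff');
        addHeader('X-Frame-Options', 'SAMEORIGIN');
        addHeader('Referrer-Policy', 'no-referrer-when-downgrade');

        callback(null, response);
    };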


Enjoy.

Tuesday, October 04, 2016

DDDNorth 2016 - A Retrospective

Another year, and another amazing DDD North event.

This time I was day-tripping it over from family in Manchester, so had an early start (i.e. completely in the dark) to catch the train over the Pennines. A brisk 20-minute walk up the hill from Leeds station and I was at the Mechanical Engineering building of the University of Leeds.

A couple of coffees and a brief speaker briefing and I was ready for the day.

First up was Martin Kearn from Microsoft with "Machine Learning for Muggles". 

Martin showed how ML is used to find patterns in data - the bigger the sample set, the more interesting patterns can be found. After some fun samples, he used Azure ML Studio to create a car pricing model, based on first a few, and then many parameters - and then making that usable via a web API with a few drag-and-drop-and-clicks. Impressive stuff.

Next, he introduced HowHappy.co.uk - an ML experiment that used Azure LUIS and Azure Facial Recognition to assess his audience. Martin has blogged about this in detail - very cool stuff.

For a change I was on in the second session of the morning. My "10 more things" talk was a second new one this year - clearly the appetites of the DDD North audience were very different to those of the DDD (Reading) audience.

It all went well - bang on time, not too rushed, lots of interaction when the audience warmed up - and I'm really pleased with the feedback. Thanks again to everyone that came to see me - links to the slide deck can be found on my speaking page.

After the second break, it was time for some containerisation. 

Naeem Sarfraz's session on "Developing Apps in Windows Containers on Docker" was a great introduction to the current state of play with Dockerisation (is that a thing?) on Windows. The newly released Windows Core / .NET 4.6.2 image is going to be useful at work for a start.

Lunch was the usual brown-bag affair, with lots of catching up with old friends. The Onion Bhaji rolls were a revelation to a lot of people, I think!

First session after lunch was Garry Shutler's "Designing an API for Developer Happiness", where he replayed some very sensible lessons learned from creating the Cronofy API. Three solid pages of notes (and 19 individual items to consider) later, and I've got a load of work to do to bring those learnings to the teams at work.

Finally, was one of the stand-out talks of the day - Chris Alexander's "Software Development for Formula 1". 

As he works at McLaren F1, Chris's talk was always going to have an immediate draw for me - and his use of classic F1 imagery (as well as amazing pictures of amazing McLaren road cars) was very much "toys for the boys". But he also gave an insight into the way software is developed there - not quite Agile, and very much tailored to delivering in time for the next race weekend.

With the swag given out and thanks paid to the organisers, it was home time - in a Saturday night deluge. (Although that swift pint and final chat did warm me for the trip).

Roll on next year.

Friday, August 19, 2016

Friday Quickie - Setting up PowerShell as an App on macOS

So yesterday, Microsoft announced that PowerShell is open source and runs on macOS. Cool!

But the default installer doesn't make it available as an app within macOS - you have to open a terminal first. :(

It's actually pretty easy to set this up tho'... 

TLDR

Create an Automator script and save it to Applications.

Step by Step:


Open Automator and File -> New. 

In the New Script dialog, select Application.

Add a "Run AppleScript" action from the Utilities section to the script by dragging it onto the design surface.



Then add the following to the script.
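Something along these lines does the trick - a minimal sketch, assuming the default install has put powershell on your PATH and that you're happy launching it in Terminal:

    on run {input, parameters}
        -- Open a new Terminal window and start PowerShell in it
        tell application "Terminal"
            activate
            do script "powershell"
        end tell
        return input
    end run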

 

Finally, save the script to the Applications folder and you're done - PowerShell is available as an app through Finder.

For bonus points, find an icon you like on the web, copy the image to your clipboard, Get Info on the script you just created, select the icon at the top left (it'll get a blue outline), and you can paste in the new icon for extra shininess.

Job done.

Monday, July 04, 2016

Monday Quickie: Git Aliases for Proxy Settings

If, like me, you find yourself working from home occasionally, flipping the proxy setting on and off for Git becomes tiresome.

So here's a snippet to give you two new Git commands for setting and resetting the http.proxy setting that Git uses.

git config --global alias.noproxy 'config --global --unset http.proxy'
git config --global alias.setproxy 'config --global http.proxy http://<proxyUrl>:<proxyPort>'


Now you can just use 'git noproxy' when at home to turn the proxy off and 'git setproxy' when you're back in the office.

Friday, October 30, 2015

Friday Quickie - Search, Filter and Copy matching files in Powershell

Another little aide-memoire - I want to find all files in a directory containing a specific string that were created on a specific date and copy them to another directory.

Using Powershell it's quite easy, with just a little wrinkle in the copy-item syntax:

PS C:\SourceFolder> get-childitem | where-object { $_.CreationTime -ge "10/29/2015" -and $_.CreationTime -le "10/30/2015" } | select-string -pattern "80029" | group path | select name | % { $_.Name | copy-item -destination C:\temp\TargetFolder }


Tuesday, September 22, 2015

HTML5, AngularJS and hosting on AWS S3 - Oh my!

I've not done a big "how to" post in a l-o-n-g while, so I thought it'd be useful to document the process of moving from an effectively static ASP.Net MVC web site to an actually static web site that can be hosted directly from an S3 bucket.

Why? Well, my "toy" sites have no real dynamic content, so why maintain a micro-VM on Azure just to host them?

So this will be a step-by-step guide - partly for my own recollection, and also because finding some of the incantations needed to publish a web site successfully to AWS S3 took a fair bit of effort.


Step 0 - Setup


As I'm going to try and maintain these sites 'properly', I'm going to put the source code into GitHub.
joel$ cd Projects/
joel$ mkdir mywebsite.co.uk 
joel$ cd mywebsite.co.uk 
joel$ git init 
Initialized empty Git repository in /Users/joel/Projects/mywebsite.co.uk/.git/
So, I set up a new repository on GitHub with an Apache license and a default README.md file, and connected my empty project folder to that:
joel$ git remote add origin https://github.com/Me/mywebsite.co.uk  
joel$ git pull origin master 
From https://github.com/Me/mywebsite.co.uk
 * branch            master     -> FETCH_HEAD
joel$ ls 
LICENSE README.md

Step 1 - Scaffolding

Scaffolding a sensibly structured HTML5/AngularJS site is amazingly easy using Yeoman. A quick check first that we're good to go...


joel$ yo --version && bower --version && grunt --version 
1.3.2 
1.3.12 
grunt-cli v0.1.13

then a whole AngularJS web site scaffolded with one command!
joel$ yo angular  
... 

Commit and push to GitHub gives me a baseline against which I can start working on the site:
joel$ git add . 
joel$ git commit -m "Initial scaffolding" 
[master 995f8ba] Initial scaffolding 
 26 files changed, 1640 insertions(+) 
... 

joel$ git push origin master 
... 
To https://github.com/Me/mywebsite.co.uk.git 
   f3525d2..995f8ba  master -> master



Step 2 - Working on the site


I've got to admit I really like the workflow that's enabled by using VSCode and grunt file watching - a quick grunt serve and then just edit and save. With a two monitor setup, this is an absolute dream.

Capturing small changes as individual git commits feels "just right" too.


Step 3 - Setting up publishing to AWS S3


This is where things get interesting. 

Setting up a new bucket in S3 is easy - name the bucket after the web site url (mywebsite.co.uk in this example).

We then need to configure a grunt task to publish to that bucket - Rob Morgan has a very good walkthrough here of how to do this using the grunt-aws package.


joel$ npm install grunt-aws-s3 --save-dev
...

And then we add some lines to the Gruntfile.js file:

grunt.loadNpmTasks('grunt-aws-s3'); 
// Configurable paths for the application
var appConfig = {
    app: require('./bower.json').appPath || 'app',
    dist: 'dist',
    s3AccessKey: grunt.option('s3AccessKey') || '',
    s3SecretAccessKey: grunt.option('s3SecretAccessKey') || '',
    s3Bucket: grunt.option('s3Bucket') || 'mywebsite.co.uk',

  };
 
grunt.initConfig({
...
aws_s3: {
    options: {
        accessKeyId: appConfig.s3AccessKey,
        secretAccessKey: appConfig.s3SecretAccessKey,
        bucket: appConfig.s3Bucket,
        region: 'eu-west-1',
    },
    production: {
        files: [
            {
                expand: true,
                dest: '.',
                cwd: 'dist/',
                src: ['**'],
                differential: true
            }
        ]
    }
}

});
 
grunt.registerTask('deploy', ['build', 'aws_s3']);
Notice that my AWS secrets are injected via grunt command line parameters - so no chance of committing them into GitHub!

Step 4 - Configuring AWS permissions


The biggest headache I found in this whole process was setting AWS permissions up correctly. I don't really want to push via my super-user account, and if I ever get a build server for all this working, I'd rather have a single user per web site with VERY limited permissions to push changes to AWS S3.

Create a deployment user 

In AWS IAM Management, I created a new user called mywebsite.deploy, with an associated Access Key / Secret pair that I downloaded and saved somewhere secure. 

There's no way to get the secret back again later, so be careful not to forget this step, or you'll have to regenerate the key pair!


Actually, Amazon recommend rotating keys on a regular basis, so you'll be doing that anyway - but it's still not what you want to be doing every morning before you start.

Create a deployment group

Again in AWS IAM Management, I created a new group called mywebsite_deployment and added the mywebsite.deploy user to that group. 

Next up - permissions.


Grant permissions on the bucket to the group

To do this, we have to add an "Inline policy" to the mywebsite_deployment group to grant basic access to any users in the group.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mywebsite.co.uk"
    }
  ]
}

Grant restricted rights on the bucket to the deployment user


We don't want the mywebsite.deploy user to be able to do anything to the bucket (such as change permissions), so we restrict their access rights to the bucket contents by applying a policy to the bucket itself:

{ "Id": "Policy1438599268262", "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1438599259521", "Action": [ "s3:DeleteObject", "s3:GetObject", "s3:GetObjectAcl", "s3:PutObject", "s3:PutObjectAcl" ], "Effect": "Allow", "Resource": "arn:aws:s3:::mywebsite.co.uk/*", "Principal": { "AWS": [ "arn:aws:iam::765146773618:user/mywebsite.deploy" ] } } ]}

Step 5 - Deploying to AWS


With all that set up (phew!), deploying the site to AWS S3 is a one-liner:


joel$ grunt deploy --s3AccessKey=<<your access key>> --s3SecretAccessKey=<<your secret>> 
... 
16/16 objects uploaded to bucket mywebsite.co.uk/ 
Done, without errors.

Step 6 - Set up Static Website Hosting

In the AWS S3 console, select the bucket and click on "Properties" to open the properties pane for the bucket.

Open the "Static Web Site Hosting" section and it's easy to enable hosting just by checking the option. Enter index.html as the default document.

Click "Save", and your content is served from the default endpoint.

Now's a good time to check that your web app runs nicely by just hitting that endpoint in a browser - and get a warm fuzzy feeling.

Step 7 - Domain setup


The last thing to do is to switch over the DNS for the target domain so that www.mywebsite.co.uk is a CNAME for the AWS S3 endpoint.
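In zone-file terms that's a single record along these lines - the endpoint hostname is illustrative and depends on your bucket name and region (eu-west-1 here):

    www.mywebsite.co.uk.  300  IN  CNAME  mywebsite.co.uk.s3-website-eu-west-1.amazonaws.com.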

You can, if you want, set up AWS CloudFront delivery as well, but that's beyond the scope of this how-to.