Monday, October 21, 2013

DDDNorth 2013 - A retrospective

So it's the morning after the DDDNorth before, and for a change I'm going to write my retrospective with the event fresh in my mind.


Update & Admission: It took more than that morning to finish this post!!

I'd flown up to Newcastle the night before, and stayed the night in The Roker Hotel along with a few of the usual suspects, so in spite of a fairly late pre-DDD night in the bar, a bracing early morning run along the front and a good Full-English breakfast (with proper Black Pudding) ensured I was refreshed and ready for the day.


Catching a lift (thanks Dave) got me to the venue bang on time.



Early Morning on a Saturday


The venue for DDDNorth was the University of Sunderland campus at St Peters - using the library and its cafe as the central meeting point, sponsor showcase and general chat area, with sessions in that building, the David Goldman Informatics Centre and the Sir Tom Cowie Lecture theatre. 

The speaker room was in the Informatics Centre, so that was where I headed first, just in time to hear the speaker briefing and grab my (very nice, and Sage-sponsored) purple speaker's t-shirt. Pleasantries over, it was straight across to the library building for my first session. This time I was much more comfortable with my presentation and equipment, and was on after lunch, so I could properly enjoy DDDNorth as a delegate as well as a speaker.

First up was Phillip Trelford.


F# Eye for the C# Guy


Given that F# was added to VS2010, I'm amazed I've not even written "Hello World" with the language, so this introduction was always going to be new information to me.

Phil's presentation style is one full of energy, and I think he bagged the first ponies of the day in about the first 5 minutes, making the point that whilst it's often considered a language for "Financial / City" types, the 'F' is actually for FUN.


"F# is a statically typed, functional-first, object-orientated, open-source .NET language, based on OCaml and available in VS and Xamarin Studio."

Phil then went on to compare a 40-50 line C# implementation of an immutable POCO with first a 12-line and then a ONE line F# implementation. He then followed this up with examples of unit testing F# using effectively plain-English tests via the TickSpec package, and "metaprogramming" samples using the F# quotation syntax ( <@ some code @> ) and the Unquote NuGet package.
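To give a flavour of the comparison - this is my own sketch rather than Phil's exact example, and the type and field names are made up:

```fsharp
// An immutable "POCO" as an F# record - one line buys you immutability,
// a constructor and structural equality, all generated by the compiler.
type Person = { Name : string; Age : int }

// Usage: create a value, then derive an "updated" copy rather than mutating.
let phil = { Name = "Phil"; Age = 21 }
let older = { phil with Age = phil.Age + 1 }
```

The equivalent hand-written C# class needs a constructor, read-only properties and Equals/GetHashCode overrides to match - which is where the 40-50 lines go.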


"F# code is consistently shorter, easier to read, easier to refactor
and contains fewer bugs than its C# equivalent."

I was fairly skeptical when Phil stated that 30,000 lines of C++ code (for an unnamed financial system) had been replaced by only 200-300 lines of F# until he demonstrated a fully functional spreadsheet written in less than that!

Finally, he showed some of the "even funner" aspects of F# - including Pacman and Mario written in F# and cross-compiled to Javascript using Funscript.

I think that the F# eco-system is one to watch, and am very glad to have had such an engaging introduction from Phil - even without his "F# language jobs pay more" slide!


Scaling Systems: Architectures that Grow


Kendal Miller's session was a no-code, all-content affair that was entertaining, enlightening, and provided a 4-point guide to the issues surrounding scaling enterprise-level systems. The fact that I ended up with over 6 pages of notes is testament to how well he engages his audience.

First up was a mantra to code by:


"Time you spend making your software scale is time you're NOT spending delivering functionality."

Kendal then described how most applications (web or otherwise) will fail to scale out of the box, but when considering scalability you need to target the lowest practical numbers for response and loading as "scaling costs REAL money". The techniques usually applied to the problems of scalability he described as only "tactics" - understanding the key principles first is more important.

Kendal then described the four key factors in scaling - three that enable you to scale, and one that throws a spanner in the works of the other three - Kendal's ACD/C factors:

  • A - Asynchronicity

    Do work - just NOT in your critical path. Defer it to later, or do it ahead of time.

  • C - Caching

    Don't do any work you don't have to - the fastest query you can ever run is the one you only ever have to run once.

  • D - Distribution

    Share the work between as many workers as you can - this is the easy route.

and the kicker

  • C - Consistency

    Agreeing on the level of consistency REQUIRED is the compromise that has to be made.

Kendal then went on to explain these all in much more detail, with great anecdotes to support his opinions - including describing how Amazon does basically NO work at all at the point you place an order, and how he once lost an entire tractor in spite of what his inventory system said.

I loved Kendal's session, and will be pushing ACD/C hard at work.

Last up before lunch was Richard Fennel's

Automation is not the end of the Story

Richard started by setting the scene with what he considers the minimum for build automation:


  • Continuous Integration
    • Unit Tests
  • Nightly FULL builds
    • Static analysis
      • FxCop, CAP.Net, SpCop, StyleCop
    • Signing & Obfuscation
    • Deployment packaging
  • Manual release builds

He then went on to discuss what "deployment workflow" considerations should be made - in particular who can sign off a build as acceptable and who can promote between environments.

"Your build should be a bridge to the operations team - a shared language"

Finally, Richard gave a whistle-stop tour of some of the tools that are available when taking your build process beyond the desktop, including


  • ALM Rangers TFS Build Best Practice
  • Lab Management
  • OctopusDeploy
  • MS Virtual Machine Manager 2012
  • ALM Rangers VM Factory
  • Puppet
  • Chef
  • DevOps Workbench (Beta)

Richard's talk was informative and thought-provoking, and gave pointers to some tools I wasn't aware of and will definitely investigate - and it was a good precursor to my talk on OctopusDeploy.

Lunch was the usual brown-bag sandwich affair (with very nice sandwiches too), which I spent in the Library Cafe catching up with Eric Nelson and others. I had to make my excuses and leave a bit early because I wanted to check out the equipment in my room, as I was up next with

An Introduction to Octopus Deployment

This was my talk in which I covered the basics of WHY you need automated, repeatable, controlled deployments and WHAT OctopusDeploy is and HOW it provides a very nice out-of-the-box solution.

I demonstrated deploying a web app using OctoPack to package it and the Tentacle deployment method for "local" servers, alongside the SFTP method for "remote" servers (in my case an Azure WebSite).

Finally I showed how Azure deployments require a specific flavour of OctopusDeploy package, and how my NuGet.PackageNPublish tooling could be tweaked to create it.

All the resources, projects, slide decks, etc can be found in the GitHub repository.

I must admit to a certain amount of trepidation - amongst the 30 or so audience were the inimitable Liam Westley (who kindly acted as my room monitor) and Richard Fennel. Given how attentive (and non-heckling) both were I think it went well - and it was lovely to receive a tweet the next day from another attendee saying that they would be using OctopusDeploy soon because of my talk.


Update: The feedback scores are in - and very positive. I promise to talk louder next time tho'!

Finally, I moved just into the next room to hear Liam Westley talk about

Event Store


Event Store was created by Greg Young (of CQRS fame), and is an open-source database for immutable streams of events, written in C# with an embedded Javascript (V8) engine, that can be installed in single-node or high-availability modes.

Liam quoted Jeff Atwood, saying

"OR Mapping was the Vietnam of Computer Science"

and went on to describe how metadata within a software system can often be as important as, or even more important than, the data on which the system operates - and yet it is most often lumped together with that data in some kind of relational store.

Event Store addresses this by providing a tighter focus - it's create and read ONLY, there are NO updates and NO deletes - the data is immutable in perpetuity.

It provides a simple, performant RESTful API over HTTP using AtomPub as the representation, alongside a native TCP interface. As such it's designed to be cached, and provides automatic versioning and an implicit CQRS / message-queueing architecture.

"Indexes" (and you have to use that description lightly) are provided via data projections implemented in a Javascript-derived domain language.

Liam then went on to demonstrate how to set up Event Store - including the oh-so-important

netsh http add urlacl url=http://*:2113/ user=<serviceUser>

to enable access to the Event Store service for non-administrator accounts.

Next up were demonstrations of projections against the DDDNorth agenda data that showcased how data can be transformed into "streams" of filtered, sorted or aggregated derived data - which is where the power of the software is really manifest. And not forgetting to start Event Store with the --run-projections=ALL option.
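To illustrate the idea, here is my own simulation of what a projection does - note this is NOT Event Store's actual projection API (real projections use server-provided functions such as fromStream and when() inside the embedded engine), and the agenda events below are hypothetical:

```javascript
// A minimal simulation of a projection: fold an immutable stream of
// events through per-event-type handlers to build a derived "state".
function runProjection(events, handlers, initialState) {
  return events.reduce(
    (state, event) =>
      handlers[event.type] ? handlers[event.type](state, event) : state,
    initialState
  );
}

// Hypothetical agenda events - the stream is append-only and never edited.
const events = [
  { type: "SessionScheduled", data: { room: "Lecture Theatre" } },
  { type: "SessionScheduled", data: { room: "Informatics Centre" } },
  { type: "SessionScheduled", data: { room: "Lecture Theatre" } },
];

// Derived data: a count of sessions per room, built without ever
// updating or deleting the underlying events.
const sessionsPerRoom = runProjection(
  events,
  {
    SessionScheduled: (state, e) => ({
      ...state,
      [e.data.room]: (state[e.data.room] || 0) + 1,
    }),
  },
  {}
);

console.log(sessionsPerRoom); // → { 'Lecture Theatre': 2, 'Informatics Centre': 1 }
```

Rerunning the fold with different handlers gives a different derived stream from the same immutable events - which is exactly the filtering, sorting and aggregation Liam demonstrated.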

All in all, Liam managed to pack a lot of demos of a new and exciting addition to the software developer's toolkit into a short time - yet more to investigate.

Close

The day ended with the usual thanks, swag and farewells - including Sage giving away a Surface Pro, an NDCLondon Golden Ticket (won by Dave!), and Microsoft an MSDN license.

The young lad who won the latter was a student at Sunderland, and was initially nonplussed by the small package. When he asked "So what's this worth?" and was given the answer, it was a joy to see his reaction - first shock at the monetary value, then what looked like abject terror at the value of the associated licenses, and then finally the slow realisation that he'd been given the tools to really make the most of his calling to software development.

I could think of no greater illustration, then, of how powerful the DDD movement can be - bringing together devs from all backgrounds and levels to learn and share in our community.

Andy Westgarth had once again put on a perfect day, marshalling a great array of sponsors, a great venue and a great line-up of speakers. The crowd rounded it off by giving him a rousing - and absolutely deserved - ovation.

Roll on next year - somewhere in the North West.








Tuesday, October 08, 2013

Announcing NuGet.TfsBuild - Private package repository support for TfsBuild

TLDR: NuGet.TfsBuild is a new NuGet package that works with NuGet Package Restore so that private (protected) package repositories can be used with TF Build Services.



A couple of months ago, as an experiment at work, we set up a real project on the cloud-hosted TFService. The goal was to see whether the entire project could be run in the cloud - from work item management and source control through to builds and automated testing.


Because this was a "REAL" project, we were leveraging NuGet packages from our internal NuGet server - ones that contain proprietary code and couldn't just be shoved onto a public NuGet server. So the packages had to be sourced from a private, protected NuGet server - specifically a private MyGet feed created for the purpose.

But that's where we hit a bit of a snag - we wanted to use TF Build Services to avoid having any specific build server (virtual or physical) - but that means not being able to configure additional NuGet package sources.

You'd think that you could just add the package source to the NuGet.config that gets checked in when Package Restore is enabled - the problem is that the credentials for a private package source are encrypted in the file using a key specific to the particular user on the particular machine doing the build.

And that would never work with TF Build.

Our solution - a NuGet package that adds an additional build step to the project that reads the package source and credentials from the MSBuild parameters - i.e. from the build DEFINITION rather than either the source, or machine configuration.

The upshot - it just works. The private package source is configured at the start of the build, just before PackageRestore kicks in for the first project being built (usually your "primary" project).
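For the curious, the shape of that injected step is roughly the following MSBuild fragment. This is a sketch only - the property and target names here are illustrative rather than NuGet.TfsBuild's actual ones - with the values supplied as /p: arguments in the build definition:

```xml
<!-- Sketch: register a private feed before NuGet Package Restore runs.
     Property names are illustrative, not the package's actual ones. -->
<Target Name="AddPrivatePackageSource" BeforeTargets="RestorePackages"
        Condition="'$(PrivateFeedUrl)' != ''">
  <Exec Command="&quot;$(NuGetExe)&quot; sources add -Name PrivateFeed -Source &quot;$(PrivateFeedUrl)&quot; -UserName $(PrivateFeedUser) -Password $(PrivateFeedPassword)" />
</Target>
```

Because the feed URL and credentials only ever live in the build definition, nothing sensitive is checked in and no per-machine encrypted config is needed.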

So today I'm very pleased to announce that Landmark Information Group (http://www.landmark.co.uk / http://twitter.com/LandmarkUK) have released this little helper package on NuGet.org, with the source code released under the Apache 2.0 license on GitHub.




Fork it, fix it, raise issues, generate pull requests, use it and enjoy it!