Microsoft Ignite Day 4

The fourth day of Ignite started with some intense sessions around using Chef with Azure and using Docker both locally and in the cloud. Not surprisingly, I think the two technologies can work together to orchestrate cloud-scale deployments. More on the day after the break.


My first session of the day covered running Chef in Azure. It turns out this is surprisingly simple because Microsoft now includes the Chef client as one of the VM extensions available for Azure VMs. When provisioning a new VM using the portal, the Azure CLI, or knife, you can specify that the Chef client should be installed, which Chef server it should register with, the node name, and a bunch of other information. There is also a preconfigured Chef server in the Azure Marketplace running on Ubuntu, so if you don't already have a Chef server you can spin one up using the portal or the Azure CLI. Otherwise, you can sign up for a hosted Chef account or install a Chef server on-premises. For demo and testing purposes I spun up a hosted Chef account, and it was super easy. You're limited to 25 nodes, which should be plenty for a non-production environment.
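To make that a little more concrete, here's a rough sketch of attaching the Chef extension to an existing Azure VM from PowerShell. This isn't from the session; the resource group, VM name, file paths, and run-list are all made up, and I'm assuming the Chef extension cmdlet from the Azure PowerShell module (Set-AzVMChefExtension in current modules, with the older Set-AzureVMChefExtension working along the same lines), so check the parameter names against whatever module version you're running.

```
# Rough sketch only - names and paths below are hypothetical, not from the demo.
$chefParams = @{
    ResourceGroupName = 'rg-ignite-demo'               # hypothetical resource group
    VMName            = 'chef-node-01'                 # hypothetical VM to bootstrap
    ValidationPem     = 'C:\chef\my-org-validator.pem' # validator key from your Chef server or hosted Chef org
    ClientRb          = 'C:\chef\client.rb'            # client.rb pointing at your Chef server, node name, etc.
    RunList           = 'recipe[base]'                 # hypothetical run-list
    Linux             = $true                          # use -Windows for a Windows VM
}
Set-AzVMChefExtension @chefParams
```

The nice part is that the extension handles installing the Chef client and registering the node, so there's no separate knife bootstrap step over SSH or WinRM.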

The second session I attended was all about using Docker on VMs in Azure. Much like Chef, there are preconfigured VMs in Azure that already have the Docker software installed. The demo was run by Tom Hauburger, and he took us through building an entire multi-tier application using Dockerfiles. The whole thing was very impressive, and seeing it spun up in real time gave me a sense of how Docker can quickly enable scale-out architectures. The entire thing was running on a single VM, though, which would obviously be a single point of failure. What I would want to see is a demo that provisions multiple VMs and understands failure domains. Each VM could run multiple Docker containers, and the provisioning of containers should be intelligent enough to scale out across failure domains. There was a recent post by Scott Lowe on doing exactly that with Docker Swarm and Consul.
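To give a flavor of what that kind of demo looks like, here's a minimal sketch of a two-tier app on a single Docker host. This is not Tom's actual demo code; the image and container names are made up, and it uses the plain docker CLI with 2015-style container links.

```
# Minimal two-tier sketch on one Docker host (illustrative, not the session demo).
# "mywebapp" is a hypothetical image built from your own Dockerfile.
docker run -d --name appdb redis                                  # data tier: stock redis image
docker build -t mywebapp .                                        # build the web tier from a Dockerfile in this directory
docker run -d --name web1 --link appdb:db -p 8080:80 mywebapp    # web tier instance 1
docker run -d --name web2 --link appdb:db -p 8081:80 mywebapp    # web tier instance 2, scaled out on the same host
```

Spinning up that second web container takes seconds, which is exactly the appeal; the missing piece is something like Swarm deciding that web1 and web2 should land on hosts in different failure domains.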

My third and fourth sessions of the day were Experts Unplugged sessions with the Microsoft Exchange product team. Both sessions were entirely Q&A from the audience, and the panel kept it informative and lively. It's difficult to capture in a blog post, so I would highly recommend watching the sessions online. The main takeaway for me was how serious the Exchange team is about their preferred architecture. They want physical servers running JBOD, end of story. That's a tough pill for me to swallow, especially since I am a big advocate of virtualization. The drumbeat from the virtualization camp has been clear for years, and that message has been to virtualize everything. Does it make sense to buy physical servers for Exchange? I'll give that the old consultant answer: "it depends." There isn't enough room in this blog post to go through the entire debate, but I will say that things are not as cut and dried as the Exchange team would like. I think virtualization and SANs have a place in the world of Exchange, but it's important to understand where the recommendations from the Exchange team are coming from and how they apply to your environment.

My last session was an overview of the upcoming features in PowerShell 5. Jeffrey Snover, the creator of PowerShell, and Don Jones, the founder of powershell.org, ran the session. It was highly entertaining and informative. There are so many amazing features coming in Windows Management Framework 5. The main thing the team is trying to do is make PowerShell friendlier to developers. What started out as a scripting language to manage the Windows operating system has morphed into a robust environment for creating complex functions and automation. Essentially, something that started out as operational tooling has begun infiltrating the development sphere; in fact, there's a name for that, isn't there? DevOps. That's a bit of a marketing buzzword right now, and I'm wary of using it, but PowerShell with DSC is becoming a true DevOps tool. To that end, support for classes is being added to PowerShell. They are also adding support for the Pester tool, which lets you run automated unit tests against scripts and modules after making changes. There is also a script analyzer that checks your scripts against the conventions of well-written PowerShell. Finally, they are adding support for pulling applications and modules from a repository, either online or internal. During the demonstration they pulled a module from the online PowerShell Gallery and an application from the Chocolatey repository. In the background PowerShell is using NuGet, which, hey, is a developer tool!
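I didn't capture the demo code, but here's a minimal sketch of a few of those features together: a PowerShell 5 class, a Pester test against it, and pulling a module from the online gallery. None of this is from the session; the class, the service it checks, and the test are made up for illustration, and the Pester assertion uses the older "Should Be" syntax.

```
# A small WMF 5 class (illustrative sketch, not the session demo).
class ServiceCheck {
    [string] $Name

    ServiceCheck([string] $name) {
        $this.Name = $name
    }

    [bool] IsRunning() {
        # Returns $true only if the service exists and is currently running.
        $svc = Get-Service -Name $this.Name -ErrorAction SilentlyContinue
        return ($null -ne $svc -and $svc.Status -eq 'Running')
    }
}

# A Pester test for the class, saved as something like ServiceCheck.Tests.ps1
# and run with Invoke-Pester after the Pester module is installed.
Describe 'ServiceCheck' {
    It 'remembers the service name it was given' {
        $check = [ServiceCheck]::new('wuauserv')
        $check.Name | Should Be 'wuauserv'
    }
}

# Pulling a module straight from the online repository via PowerShellGet:
Install-Module -Name Pester
```

That combination of classes, unit tests, and a package repository is exactly the developer workflow being grafted onto PowerShell.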

The message of the day was clear: DevOps and automation are the next generation of IT. Jeffrey made no bones about the fact that we are IT professionals, and it is our job to learn the emerging technologies and how to implement them. It certainly appears that DevOps is THE new technology, and if you aren't actively learning how your organization can benefit from it, don't be surprised if your organization leaves you behind.

And that's day 4. Thanks for reading!
