My third day of Microsoft Ignite draws to a close. Today was evenly split between Exchange and Hyper-V. Microsoft has a lot of excellent technology coming out in the next year from both camps, and the sessions I attended gave me a little more insight into how they plan to make good on their promises of building a more intelligent cloud.
My first session of the day was actually a repeat of a session I missed on day 1. Deploying Exchange 2013 using PowerShell Desired State Configuration was a serious, in-depth look at the experimental xExchange module for DSC and how it can be leveraged to automatically deploy, install, and configure Exchange 2013. It can also be used to enforce the current configuration of an Exchange environment and prevent configuration drift caused by unscrupulous admins. Michael Hendrickson led us through the basics of a DSC configuration and then extended those principles to the xExchange module. He then ran through two examples, a simple one-node config and a more complex four-node multisite config. Both configurations and their supporting files are attached to his slide deck, and he said he will be posting them on his blog shortly as well. One of the coolest pieces to me was that the configuration files can pull information from a CSV generated by the Exchange calculator, including the drive mount points, DAGs, and databases with activation preference. For consultants like me who do a lot of Exchange deployments, this is the type of thing that could save time and ensure that all settings are correct from the get-go.
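To give a flavor of what that looks like, here is a minimal sketch of an xExchange configuration, assuming the experimental module is installed. These are not Michael's actual files; the server name, paths, setup arguments, and CSV columns below are all hypothetical stand-ins.

```powershell
# A minimal sketch, assuming the experimental xExchange DSC module is installed.
# Server name, paths, setup arguments, and CSV columns are hypothetical.
Configuration ExchangeSingleNode
{
    param
    (
        [Parameter(Mandatory)]
        [PSCredential]$InstallCredential
    )

    Import-DscResource -Module xExchange

    Node "EX01"
    {
        # Run Exchange setup unattended from a local copy of the binaries
        xExchInstall InstallExchange
        {
            Path       = "C:\Binaries\Exchange2013\Setup.exe"
            Arguments  = "/mode:Install /role:Mailbox,ClientAccess /IAcceptExchangeServerLicenseTerms"
            Credential = $InstallCredential
        }

        # Create databases from a CSV exported by the Exchange calculator
        # (read at compile time on the authoring machine)
        foreach ($db in (Import-Csv "C:\DSC\Databases.csv"))
        {
            xExchMailboxDatabase $db.Name
            {
                Name              = $db.Name
                Server            = $db.Server
                EdbFilePath       = $db.EdbFilePath
                LogFolderPath     = $db.LogFolderPath
                DatabaseCopyCount = [uint32]$db.CopyCount
                Credential        = $InstallCredential
                DependsOn         = "[xExchInstall]InstallExchange"
            }
        }
    }
}

# Allow the credential to be embedded in the MOF for this lab-style sketch
$configData = @{
    AllNodes = @(
        @{ NodeName = "EX01"; PSDscAllowPlainTextPassword = $true }
    )
}

# Compile to a MOF and push it to the target server
ExchangeSingleNode -InstallCredential (Get-Credential) -ConfigurationData $configData -OutputPath "C:\DSC\MOF"
Start-DscConfiguration -Path "C:\DSC\MOF" -Wait -Verbose
```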
Michael and I talked briefly after the session, and I suggested that he zip up the Exchange installer files before trying to transfer them with the File resource in DSC. There are so many tiny files that the transfer can easily fail, and it tends to take longer. The files can be zipped, and then the Archive resource in DSC can be leveraged to unzip them once they make it to the target Exchange server. Ideally, I would like Microsoft to host an application repository online with the latest versions of their products available for easy download from a package manager. You know, kind of like the entire world of Linux? But I digress. We also discussed Exchange 2016 and the likelihood of these scripts being easily ported from Exchange 2013. I think we both agreed that Exchange 2016 is going to be sufficiently similar that porting the scripts should be a trivial affair, with the exception of the Office Web Apps Servers, which don't exist today in Exchange 2013.
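As a rough illustration of that approach, here is what the staging step could look like using the built-in File and Archive resources; the share and folder paths are made up.

```powershell
# A sketch of the zip-then-unzip approach, using the built-in File and Archive
# DSC resources. Share and folder paths are hypothetical.
Configuration StageExchangeBinaries
{
    Node "EX01"
    {
        # Copy a single zip instead of thousands of small installer files
        File CopyInstallerZip
        {
            SourcePath      = "\\FileServer\Software\Exchange2013.zip"
            DestinationPath = "C:\Binaries\Exchange2013.zip"
            Type            = "File"
            Ensure          = "Present"
        }

        # Unpack the zip on the target server once the copy completes
        Archive UnpackInstaller
        {
            Path        = "C:\Binaries\Exchange2013.zip"
            Destination = "C:\Binaries\Exchange2013"
            Ensure      = "Present"
            DependsOn   = "[File]CopyInstallerZip"
        }
    }
}
```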
After that session I attended Deploying Exchange 2016, hosted by Brian Day. It was a great session that delved into the challenges and recommendations for deploying Exchange 2016 in a number of different coexistence scenarios. Brian ran through Exchange 2010/2016, 2013/2016, and even 2010/2013/2016. You'll notice that Exchange 2007 is not in that list, and that is because it is not supported for coexistence with 2016. So if you are still on 2007, which I know some people are for one reason or another, I would highly recommend skipping Exchange 2010 and going straight to Exchange 2013. The new functionality exposed in Exchange 2013 will make all future migrations much simpler, including the migration to Exchange 2016. I really recommend watching the session, but the crux of the matter is that Exchange 2013 can up-proxy client connections to Exchange 2016 mailboxes, and Exchange 2016 can down-proxy client connections to Exchange 2013 mailboxes. Notice that I specified mailboxes, not CAS. The client access front-end website acts as a proxy to the back-end website where your active mailbox is hosted, and the proxy behavior is written in such a way that Exchange 2013 and 2016 can talk to each other. The migration process is simply deploying Exchange 2016 servers, rolling mailboxes over, and adding your Exchange 2016 servers to the load balancer pool. Then you cut over mail flow and roll your Exchange 2013 servers out of the load balancer!
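To make the "rolling mailboxes over" step concrete, here is a quick sketch using the standard move request cmdlets from the Exchange Management Shell; the database names and batch name are hypothetical.

```powershell
# A sketch of the mailbox-roll step from the Exchange Management Shell.
# Database names and the batch name are hypothetical.
Get-Mailbox -Database "EX2013-DB01" |
    New-MoveRequest -TargetDatabase "EX2016-DB01" -BatchName "2016-Wave1"

# Check on progress as the moves churn through
Get-MoveRequest -BatchName "2016-Wave1" | Get-MoveRequestStatistics |
    Select-Object DisplayName, StatusDetail, PercentComplete
```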
They also spent a good amount of time talking about namespace planning and certificates. That always seems to be a point of confusion with clients, and I am glad that they addressed it in depth and simplified the requirements overall. The new wrinkle is the Office Web Apps Server, which is required if you plan to do in-line document viewing or editing in OWA. The OWAS VIP name also needs to be on the Exchange certificate, and there needs to be a distinct OWAS URL for each datacenter. If you already have a wildcard certificate, then you're all set. Otherwise you're going to need to update that SAN certificate again.
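For illustration, a request for a SAN certificate covering distinct per-datacenter OWAS names might look something like this; every DNS name below is a hypothetical placeholder.

```powershell
# A sketch of generating a SAN certificate request that includes a distinct
# OWAS name per datacenter. All DNS names here are hypothetical.
$request = New-ExchangeCertificate -GenerateRequest `
    -SubjectName "cn=mail.contoso.com" `
    -DomainName mail.contoso.com, autodiscover.contoso.com, `
                owas-east.contoso.com, owas-west.contoso.com `
    -PrivateKeyExportable $true

# Hand the Base64 request off to your certificate authority
Set-Content -Path "C:\Certs\exchange2016.req" -Value $request
```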
After lunch I attended the Windows Containers session. Ever since Microsoft announced that they would be supporting Docker and containers in the next release of Windows, I have been extremely curious to see what that would look like. Although a lot of the information is pre-release, they have the basics figured out and were able to give some live demos. If you aren't familiar with Docker and containers, now would be a good time to watch the session video and maybe visit Docker's website. Windows is going to support containers in two ways: either by running the container within Windows Server or by running it in Hyper-V. The Windows Server instance could be on a physical box or running as a VM on your hypervisor of choice. If you are planning to run containers in Windows Server, you might want to take advantage of the newly announced Nano Server, a stripped-down version of Windows that includes only the services essential to run. It can provide a lean platform for containers in the same way that ESXi provides a lean platform for running VMs. There are some limitations, however, and if you plan to run more traditional applications in containers, Nano probably won't have all the libraries you'll need. If you're planning to run containers in Hyper-V, Microsoft is recommending the next version of Server Core. Again, the idea is to provide a lean and clean platform for the container to run on. The next technical preview of Windows Server 2016 will include support for Windows Server containers, and support for Hyper-V containers will be added to the preview sometime in Q4.
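Since the Docker client is the common interface, a first session with a Windows Server container might look roughly like this. This is a sketch based on the preview demos, and the base image name is an assumption on my part.

```powershell
# A sketch based on the preview demos; the base image name is an assumption.
# Start an interactive container from a Windows Server Core base image
docker run -it --name demo windowsservercore cmd

# After making changes inside the container, capture them as a new image
docker commit demo demoimage

# List local images to confirm the capture
docker images
```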
Speaking of Hyper-V, the last session I attended for the day was What's New in Windows Hyper-V. Ben Armstrong and Sarah Cooley gave a demo-rich presentation of all the new features included in the next version of Hyper-V. And when I say demo-rich, I mean they had around 14 different scripted demos to run over the course of 75 minutes. Live demos are always a tricky proposition, and doing 14 of them is tantamount to insanity, but they did it and every demo worked. Hats off to those two. They had a series of goals they wanted to meet with the next release of Hyper-V, and they fit into the following categories:
- Security
- Isolation
- Availability
- Operational improvements
- Working better at cloud scale
- Enhancing the platform
You can watch the linked session for the full breakdown, but I wanted to highlight a couple of features I thought were very important. First and foremost, the Hyper-V host can now run PowerShell commands on a guest VM without any network connectivity. That's right, you can now script out configurations for VMs and run them straight from the host. I can see the potential for some seriously simplified automation and management leveraging this tool. More importantly, it allows the host to push changes into the VM through the existing PowerShell library, including changes to the virtual hardware configuration. They are also going to support ReFS as a file system for VM disks. That lets Hyper-V take advantage of features built into the file system, so a fixed disk of any size can be provisioned in a matter of seconds. Likewise, the process of merging differencing disks drops to a few seconds, even with a massive number of changes to merge.
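Here is a quick sketch of both capabilities as I understand them: the host-to-guest PowerShell channel and fast fixed-disk creation on ReFS. The VM name, credential, and paths are hypothetical.

```powershell
# A sketch of the host-to-guest PowerShell capability, run on the Hyper-V host.
# The VM name and guest credential are hypothetical.
$guestCred = Get-Credential
Invoke-Command -VMName "Web01" -Credential $guestCred -ScriptBlock {
    # Runs inside the guest over the VM bus, with no network path required
    Get-Service | Where-Object Status -eq "Running"
}

# On an ReFS volume, a fixed disk of any size is provisioned in seconds
# because the file system does not have to zero out every block first.
New-VHD -Path "R:\VMs\Data01.vhdx" -Fixed -SizeBytes 500GB
```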
One of the themes I noticed across the sessions at Ignite is that ReFS is supported for almost all the new Windows applications. That makes a lot of sense, since ReFS is meant to be the long-term replacement for NTFS. I doubt that NTFS is going away anytime soon, but more and more I think you'll see the OS drive on NTFS and all data drives on ReFS.
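In practice that split is a one-liner when you provision a data volume; the drive letter and label here are hypothetical.

```powershell
# Format a data volume as ReFS while the OS volume stays NTFS.
# The drive letter and label are hypothetical.
Format-Volume -DriveLetter D -FileSystem ReFS -NewFileSystemLabel "Data"
```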
And that was day 3! Thanks for reading!
Labels: Containers, Docker, DSC, Exchange 2016, hyper-v, Microsoft Ignite, PowerShell Desired State Configuration, ReFS