Recently I have been involved in a project with a DevOps component, which in turn has led to a significant amount of PowerShell scripting for automated, sequenced application deployment across multiple scenarios. That's great when the servers already have their prerequisites and base configuration in place, but what is the easiest way to get a vanilla OS ready for application deployment at minimal cost? Enter a new feature in Windows Server 2012 R2: PowerShell Desired State Configuration. What is it? And what is it good for?
PS DSC (PowerShell Desired State Configuration) is a new feature in Windows Server 2012 R2 that applies a desired configuration to a member server. If you are familiar with Puppet or Chef, this is a similar capability implemented in PowerShell, building on CIM and the MOF file format. In its most basic form, PS DSC takes a configuration and a set of configuration data and produces MOF files, each specific to a particular node (member server). The MOF files can be applied to the node through either a push or a pull mechanism. In theory you could hand-craft the MOF files yourself, but PowerShell takes over most of that heavy lifting and lets you parameterize your configuration and data, so adding another node is a simple affair.
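As a minimal sketch of that flow (the node names, resources, and output path here are illustrative, not from my project), a configuration block compiles into one MOF per node:

```powershell
# Hypothetical example: node names, resources, and paths are illustrative
Configuration BaseServer
{
    param ([string[]]$NodeName)

    Node $NodeName
    {
        WindowsFeature WebServer
        {
            Name   = "Web-Server"
            Ensure = "Present"
        }
        File AppDirectory
        {
            DestinationPath = "C:\App"
            Type            = "Directory"
            Ensure          = "Present"
        }
    }
}

# Compiling produces WEB01.mof and WEB02.mof under the output path
BaseServer -NodeName "WEB01","WEB02" -OutputPath "C:\PSDSCConfigs\BaseServer"
```

Each resulting MOF can then be pushed to (or pulled by) the matching node.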
The requirements on both the server and node side are relatively light. There is no client agent to install, unlike Puppet or Chef. You do need to enable Windows Remote Management and install Windows Management Framework 4.0 (included by default on Server 2012 R2 and Windows 8.1). That's it. As long as you have an account that can execute PowerShell scripts on the remote node, you are ready to start creating and pushing configurations.
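On a node, that readiness check amounts to a couple of commands (the target computer name below is a placeholder; on Server 2012 R2 the WMF step is already done):

```powershell
# Confirm WMF 4.0 (PowerShell 4.0) is present
$PSVersionTable.PSVersion

# Enable WinRM with default listeners and firewall rules
Enable-PSRemoting -Force

# Quick connectivity test from the management server
Test-WSMan -ComputerName TARGETNODE
```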
The application I am supporting has six server types: web front-end, web database server, application front-end, application database server, ETL front-end, and ETL database server. The goal was to be able to deploy a new instance of any of these server types with minimal manual configuration. The entire solution is virtualized, and each server can be deployed from a base image. That base image then needs prerequisites installed before the application install script can take over. These prerequisites include file and directory structures, registry settings, Windows features, and unattended installs of SQL Server where applicable. They also include a set of local users to run specific processes. Some of this can be controlled via Group Policy, but not all of it.
DSC has built-in resource sets for all of these items and many more. One of the immediate issues I ran into was the creation of local user accounts with an encrypted password, and access to network shares with encrypted credentials. The solution is the use of certificates, which should be familiar to anyone who has worked with Puppet before. The first thing you must do in Puppet after installing the agent is generate a certificate request and send it to the puppet master. On the puppet master, you approve the request and each puppet node ends up with a signed certificate. The puppet master can use the public key of each node to encrypt its configuration payload at rest and across the wire. DSC can do the same thing. If you already have an Enterprise PKI with autoenrollment for your domain-joined servers, then this is a trivial portion of the process. If not, you can install Certificate Services on your DSC Pull Server and have each node request a certificate from it. That was the case for me, so I scripted out the process so it can run as part of the deployment, prior to applying the DSC configuration. Here's the PowerShell script:
New-Item -ItemType Directory -Path "C:\CertFiles"
Copy-Item "\\[localfileserver]\ScriptShare\Files\CertRequestTemplate.txt" -Destination "C:\CertFiles\CertRequestTemplate.txt"
$reqtext = Get-Content "C:\CertFiles\CertRequestTemplate.txt"
foreach ($line in $reqtext) {
    if ($line -like "Subject*") {
        $line.Replace("hostname", $env:COMPUTERNAME) | Add-Content "C:\CertFiles\CertRequest$env:COMPUTERNAME.inf"
    }
    else {
        Add-Content -Value $line -Path "C:\CertFiles\CertRequest$env:COMPUTERNAME.inf"
    }
}
certreq -new "C:\CertFiles\CertRequest$env:COMPUTERNAME.inf" "C:\CertFiles\CertRequest$env:COMPUTERNAME.req"
certreq -Submit -config "[DSCServer]\[CA_Name]" "C:\CertFiles\CertRequest$env:COMPUTERNAME.req" > "C:\CertFiles\CertResponse$env:COMPUTERNAME.txt"
$reqid = ((Get-Content "C:\CertFiles\CertResponse$env:COMPUTERNAME.txt")[0]).Split(" ")[1]
# Approve the pending request on the CA
$CertAdmin = New-Object -ComObject CertificateAuthority.Admin
$CertAdmin.ResubmitRequest("[DSCServer]\[CA_Name]", $reqid)
certreq -Retrieve -config "[DSCServer]\[CA_Name]" $reqid "C:\CertFiles\DSC-Cert-$env:COMPUTERNAME.cer"
certreq -accept -f -machine "C:\CertFiles\DSC-Cert-$env:COMPUTERNAME.cer"
Remove-Item "C:\CertFiles" -Recurse -Force
And here's the request template:
[Version]
Signature="$Windows NT$"
[NewRequest]
Subject = "CN=hostname.vertex.cloud"
Exportable = TRUE ; TRUE = Private key is exportable
KeyLength = 2048
KeySpec = 1 ; Key Exchange
KeyUsage = 0xA0 ; Digital Signature, Key Encipherment
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
FriendlyName = "DSC signing certificate"
; Omit entire section if CA is an enterprise CA
[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; Server Authentication
The script creates a local certificate file folder and copies a certificate request text file to that directory. Then it replaces the hostname placeholder in the text file with the actual hostname of the node. After that it generates the certificate request, submits the request, approves the request, retrieves the certificate, and installs the certificate. Lastly, it removes the local certificate file folder. The CertificateAuthority COM object that is used to approve the certificate remotely is part of the Remote Server Administration Tools pack, so if you're going to run this on client systems you will need to install RSAT. The alternative is to split this script into a request phase and a retrieval phase, and approve the requests remotely. Or you could use Invoke-Command and a script block. Or you could use winrs and the certutil.exe command. Or... well, there are a lot of ways to skin that cat. I'll leave it to your imagination.
Once you have a certificate on your node, you can use the public key on the DSC Pull Server to encrypt portions of the MOF files. How do you get the public key? By using pieces of the script published here by the excellent PowerShell Team bloggers, specifically Travis Plunk.
Here is the relevant code sample:
function Get-EncryptionCertificate
{
    [CmdletBinding()]
    param ($computerName)
    $returnValue = Invoke-Command -ComputerName $computerName -ScriptBlock {
        $certificates = dir Cert:\LocalMachine\my
        $certificates | % {
            # Verify the certificate is for Encryption and valid
            if ($_.PrivateKey.KeyExchangeAlgorithm -and $_.Verify())
            {
                # Create the folder to hold the exported public key
                $folder = Join-Path -Path $env:SystemDrive\ -ChildPath $using:publicKeyFolder
                if (!(Test-Path $folder))
                {
                    md $folder | Out-Null
                }
                # Export the public key to a well-known location
                $certPath = Export-Certificate -Cert $_ -FilePath (Join-Path -Path $folder -ChildPath "EncryptionCertificate.cer")
                # Return the thumbprint and exported certificate path
                return @($_.Thumbprint, $certPath)
            }
        }
    }
    Write-Verbose "Identified and exported cert..."
    # Copy the exported certificate locally
    $destinationPath = Join-Path -Path "$env:SystemDrive\$script:publicKeyFolder" -ChildPath "$computerName.EncryptionCertificate.cer"
    Copy-Item -Path (Join-Path -Path \\$computerName -ChildPath $returnValue[1].FullName.Replace(":", "$")) $destinationPath | Out-Null
    # Return the thumbprint
    return $returnValue[0]
}
This function takes the remote node name, runs remote commands to export the public key, copies the certificate to a local location, and returns the thumbprint of the certificate to use for encryption.
Now that the DSC Pull Server has the public certificate, it can encrypt credentials. The rest of that post shows how to use the script to encrypt credentials for accessing a network share, but the downside is that it will prompt you for those credentials every time. From a security perspective that is good, but from an automation perspective it stinks. The compromise for me is to store the credentials as a secure string saved to a text file. The string is encrypted via the Windows Data Protection API, so only the account that created it (on that machine) can decrypt it, and if you lock down the permissions on the directory that holds the text files, other users cannot even read them. I wrote a script to store the credentials:
Function New-PasswordTextFile {
    param(
        [string]$filename
    )
    Read-Host -AsSecureString | ConvertFrom-SecureString | Out-File $filename
}
Not much going on there. You feed the function a string with the filename you want to use and it prompts you for the secure credentials. In order to use the credentials, I put this into the Configuration Data portion of my config script:
$LocalUserAccountPass = Get-Content "C:\PSDSCConfigs\ReturnsService.txt" | ConvertTo-SecureString
$LocalUserCred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "LocalUserAccount", $LocalUserAccountPass
I now have a credential object built from a text file. That credential is passed to the configuration block as a parameter and then in the User feature block I reference it this way:
User LocalUserAccount
{
Username = "LocalUserAccount"
Disabled = $false
Ensure = "Present"
FullName = "Local User Account"
Description = "Account for running Local Service"
Password = $LocalUserCred
PasswordNeverExpires = $true
}
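To make DSC encrypt that credential with the node's public key, the configuration data for each node points at the exported .cer file and its thumbprint. A sketch of that data block (the node name, configuration name, and paths are illustrative; the thumbprint is the one from my meta MOF below):

```powershell
$ConfigData = @{
    AllNodes = @(
        @{
            NodeName        = "WEB01"
            # Public key file exported by Get-EncryptionCertificate
            CertificateFile = "C:\publicKeys\WEB01.EncryptionCertificate.cer"
            # Thumbprint returned by the same function
            Thumbprint      = "A4B03B35274A9F5804C1AAB8EE976F522F4F75F1"
        }
    )
}

# Compile with the data; credentials in the resulting MOF are encrypted
MyAppConfig -ConfigurationData $ConfigData -OutputPath "C:\PSDSCConfigs\MyAppConfig"
```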
After running the configuration process against my nodes, two MOF files are generated for each node. The first is the main MOF file and the second is a meta MOF for the Local Configuration Manager on the node. The meta MOF contains the thumbprint of the certificate used to encrypt the credentials in the main MOF file.
instance of MSFT_DSCMetaConfiguration as $MSFT_DSCMetaConfiguration1ref
{
CertificateID = "A4B03B35274A9F5804C1AAB8EE976F522F4F75F1";
};
Looking at the main MOF file, here is the relevant portion:
instance of MSFT_Credential as $MSFT_Credential1ref
{
Password = "SPZTuIZ/GlW9ZP0/Pqre4L3UvktZd0dsyhNKZWLPZ8fNuQkfb72x0z9JyxfHGxMaobe5GqpNNIJ0NOxNuhU8b6arJEGLuTnWdqClxAoNVmMYIvArvoMuAjpYCDzndz3seF/Es6s60XCe0O+Ev9sd3a5lSOukq0kt+jvEnwgwTeNAmzphtWv78iRNPkySnBBG5TT+gW3DTJOhsRWi74b7MPgi9d2WqiK/BNBDX6CXj6O3C5w2O7vW3vfVlX7ttocUM1H4GouvsTjd+z9x1A0FcbHEJFX/yx5rP/PrQf65HIbdkZ+C7RRZMI8w+sw6K9sao9XGNEbj0XibWzo7NSRwDg==";
UserName = "LocalUserAccount";
};
instance of MSFT_UserResource as $MSFT_UserResource1ref
{
ResourceID = "[User]LocalUserAccount";
FullName = "Local User Account";
UserName = "LocalUserAccount";
Ensure = "Present";
Password = $MSFT_Credential1ref;
Description = "Account for running Local Service";
PasswordNeverExpires = True;
SourceInfo = "C:\\PSDSCConfigs\\CredentialSample.psm1::13::9::User";
Disabled = False;
ModuleName = "PSDesiredStateConfiguration";
ModuleVersion = "1.0";
};
The User instance references the credential above, and the Credential instance has an encrypted password using the certificate referenced in the meta MOF file. Voila!
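With the MOF pairs in hand, applying them in push mode comes down to two commands (the output path and node name here are illustrative):

```powershell
# Apply the meta MOF to the node's LCM (this sets the CertificateID)
Set-DscLocalConfigurationManager -Path "C:\PSDSCConfigs\MyAppConfig" -ComputerName WEB01

# Push the main MOF and watch progress
Start-DscConfiguration -Path "C:\PSDSCConfigs\MyAppConfig" -ComputerName WEB01 -Wait -Verbose
```

The node decrypts the credential with its private key at apply time, so the plaintext password never crosses the wire.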
This was a pretty major detour off the beaten path for DSC, but it's great that PowerShell provides the ability to accomplish my goal. For more info on DSC and setting it up, check out these resources: