Blog

The musings of an independent consultant specialising in the intersection of DevOps and ALM.

Configuring PowerShell DSC Pull Mode

After a few weeks' preoccupation with non-tech stuff I finally got some time to go back to my playtime with Desired State Configuration. However, the RTM version of Windows Server 2012 R2 is now available, so I thought it sensible to use the latest bits.

I was really interested in getting the Pull Mode functionality working, as this simplifies the process of distributing your custom DSC Resource Providers (amongst other benefits). If you want to know more about Pull mode then I'd recommend checking out some of these links:

Between the general lack of documentation covering how to do this and the various changes between the Preview and RTM versions (obsoleting much of what documentation did exist), this turned out to be a bit of a battle.

Getting Started

I used the TechEd demos as my starting point; they are available here:

Specifically, I downloaded the Windows Server 2012 R2 Preview demos (http://blogs.msdn.com/cfs-file.ashx/_key/communityserver-blogs-components-weblogfiles/00-00-00-63-74-metablogapi/3124.Demo5F00WindowServer2012R22D00Preview5F00_4677B514.zip). Once unzipped, you should have the following directory structure:

DSC-Pull-DemoFolders.png

Updating the Scripts for Windows Server 2012 RTM

Navigating to PullServer\Setup\Scripts you should see:

DSC-Pull-SetupScripts.png

This provides a script for installing the web-based PullServer; however, due to the changes between Preview and RTM, InstallPullServerConfig.ps1 requires some changes to work correctly.

In the RTM release a DLL has been renamed from Microsoft.Powershell.DesiredConfig.PullServer.dll to Microsoft.Powershell.DesiredStateConfiguration.Service.dll, so the reference to it in InstallPullServerConfig.ps1 at line 57 must be updated:

Before:

-dependentBinaries "$pathPullServer\Microsoft.Powershell.DesiredConfig.PullServer.dll"

After:

-dependentBinaries "$pathPullServer\Microsoft.Powershell.DesiredStateConfiguration.Service.dll"

Install the Pull Server Components

Having made the above change and saved the script, you can now run it:

.\InstallPullServerConfig.ps1 -DSCServiceSetup

The -DSCServiceSetup switch tells the script to install the 'DSC-Service' Windows Feature (and associated dependencies).
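The feature itself can also be installed (or checked) by hand from an elevated PowerShell session if you prefer:

Install-WindowsFeature DSC-Service
Get-WindowsFeature DSC-Service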

After a couple of minutes the script should complete and you will be left with a fully-configured PullServer setup in IIS. The IIS management tools are not installed, but you can check that the new site is there using Get-WebSite:
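For example (the WebAdministration module is available even without the GUI tools):

Import-Module WebAdministration
Get-Website | Select-Object Name, State, PhysicalPath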

DSC-Pull-PullWebSite.png

Adding Content to the Pull Server

The Pull Server will offer 2 types of content for download:

  • Server configurations: the .mof files generated when executing a DSC configuration script
  • Packaged DSC resources - specially crafted .zip files (and I mean special!)

These files need to be placed in the relevant folder under the following location:

DSC-Pull-ContentDir.png

Let's create a simple DSC configuration script that uses the Demo_Computer custom DSC Resource from the Windows 2012 R2 Preview demos downloaded earlier.

Setup your Dev Environment

NOTE: These steps assume that you are doing your DSC development on Windows 8.1 RTM or Windows Server 2012 R2 RTM (not the preview versions).

In order to generate a configuration script using the Demo_Computer resource we need to do the following:

  1. copy <extractPath>\PreReq\Resources\Demo_Computer $PSHome\Modules\PSDesiredStateConfiguration\PSProviders\Demo_Computer -recurse
  2. cd $PSHome\Modules\PSDesiredStateConfiguration\PSProviders\Demo_Computer
  3. Changes in the RTM version mean we have to patch the Demo_Computer.schema.mof file in the above folder; the following script fragment should do the job:

Now we can create a simple configuration script that references the custom resource:
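(A minimal sketch: the resource keyword and its Name property are assumptions here, so check Demo_Computer.schema.mof for its FriendlyName and actual property names.)

Configuration PullDemo
{
    Node "Server01"
    {
        # Assumed property - consult the resource's schema.mof for the real ones
        Demo_Computer MachineName
        {
            Name = "Server01"
        }
    }
}

# Generates .\PullDemo\Server01.mof
PullDemo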

As per usual, executing the above will give us the generated .mof configuration file inside a folder called 'PullDemo' with the filename Server01.mof.

Prepare the Server(s) to be Managed via DSC Pull Mode

Each machine that is to use the Pull Server needs to be configured accordingly, as well as being allocated a unique identifier (i.e. a GUID). For the purposes of this example I'm just going to assign an arbitrary GUID; however, Johan Åckerström (@Neptune443) has a nice example of using an Active Directory value, which could be useful for domain machines that already have a machine domain account: http://blog.cosmoskey.com/powershell/desired-state-configuration-in-pull-mode-over-smb/.

This was another area that needed some experimentation after having issues with what was in the Windows 2012 R2 Preview demos. I started with this:

... and eventually ended up with the following, which worked for me (I highly recommend the above linked blog post for more detailed information about these settings):
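(A sketch rather than a copy-paste: the endpoint URL and port below are placeholders to match however your pull server site is configured.)

Configuration SetPullMode
{
    param ([string] $NodeGuid)

    Node "Server01"
    {
        LocalConfigurationManager
        {
            ConfigurationID           = $NodeGuid
            RefreshMode               = "Pull"
            ConfigurationMode         = "ApplyAndAutoCorrect"
            DownloadManagerName       = "WebDownloadManager"
            DownloadManagerCustomData = @{
                # Placeholder endpoint - match the site/port your pull server install created
                ServerUrl               = "http://pullserver:8080/PSDSCPullServer.svc"
                AllowUnsecureConnection = "True"
            }
        }
    }
}

SetPullMode -NodeGuid "e528dee8-6f0b-4885-98a1-1ee4d8e86d82"
Set-DscLocalConfigurationManager -Path .\SetPullMode -Verbose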

NOTE: If you are using non-domain connected machines then you may get a WinRM authentication error when trying to run the above (even if you have matching credentials on both machines). You will need to add the remote machine to your WinRM Client's TrustedHosts property:

set-item WSMan:\localhost\Client\TrustedHosts * -force
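Note that * trusts every host; outside of a throwaway lab you would want to list the specific machine(s) instead, e.g.:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "Server01" -Force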

If all has gone well you should see output similar to this:

DSC-Pull-PrepareServer.png

Preparing the Assets for the Pull Server

As mentioned earlier, the pull server will host the generated .mof configuration files for each server and the custom resources; however, each needs to be packaged in a particular way for things to work properly.

Rename the .mof file to match the GUID for the server it relates to:

  1. copy PullDemo\Server01.mof C:\ProgramData\PSDSCPullServer\Configuration\e528dee8-6f0b-4885-98a1-1ee4d8e86d82.mof

  2. Create a checksum file for the above file - the remote server seems to use this file for two purposes:

    • determine whether the configuration file has changed since the last Pull
    • rudimentary validation of the configuration file (i.e. not corrupted during the download etc.)
    • NOTE: If you update the configuration but do not update the checksum then the remote server assumes the configuration file is unchanged.
  3. Package each custom resource into its own ZIP file and copy it to C:\ProgramData\PSDSCPullServer\Modules

  4. Create a checksum file for each of the above ZIP files

The following script handles steps 2-4:
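(A simplified sketch: the hard-coded 1.0 version and the <ModuleName>_<Version>.zip naming are assumptions to adjust for your own resources.)

param (
    [string[]] $resources,
    [string]   $pullServerRoot = 'C:\ProgramData\PSDSCPullServer'
)

Add-Type -AssemblyName System.IO.Compression.FileSystem

function New-PullServerChecksum([string] $filePath)
{
    # The pull server expects a '<file>.checksum' file containing the file's SHA256 hash
    $hash = (Get-FileHash -Path $filePath -Algorithm SHA256).Hash
    [System.IO.File]::WriteAllText("$filePath.checksum", $hash)
}

# Step 2 - checksum each configuration .mof already copied to the pull server
Get-ChildItem (Join-Path $pullServerRoot 'Configuration') -Filter *.mof |
    ForEach-Object { New-PullServerChecksum $_.FullName }

# Steps 3 & 4 - zip each resource module, then checksum the zip
$moduleDir = Join-Path $pullServerRoot 'Modules'
foreach ($resource in $resources)
{
    $name    = Split-Path $resource -Leaf
    $zipPath = Join-Path $moduleDir ('{0}_1.0.zip' -f $name)   # assumes version 1.0

    if (Test-Path $zipPath) { Remove-Item $zipPath -Force }
    [System.IO.Compression.ZipFile]::CreateFromDirectory($resource, $zipPath)

    New-PullServerChecksum $zipPath
}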

Run the above script like this:

.\PublishToPullServer.ps1 -resources $PSHome\Modules\PSDesiredStateConfiguration\PSProviders\Demo_Computer

... and it populates the Pull Server area as shown below:

DSC-Pull-ContentFiles.png

Testing It

You can manually force a remote machine to perform a pull using a script provided as part of the Windows Server 2012 R2 Preview demos; it can be found here:

<extractPath>\PullServer\Invoke-PullonNode.ps1

However, this also needs to be patched to work with the RTM version as the required CIM method has been renamed:

Before:

After:

It is a short script that just requires the name of the remote machine on which to trigger a pull:

.\Invoke-PullonNode.ps1 -computerName Server01

DSC-Pull-InvokePullOutput.png

If you have a poke around the remote machine you'll notice that the DSC resources downloaded from the Pull Server are not installed alongside the built-in ones; instead they are placed in C:\Program Files\WindowsPowerShell\Modules (which seems to be the new trend for where to install system modules):

dir 'C:\Program Files\WindowsPowerShell\Modules'

    Directory: C:\Program Files\WindowsPowerShell\Modules

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
d----        01/10/2013     20:07            Demo_Computer

You don't get the same level of output when triggering a Pull operation; however, you can track any errors etc. by querying the event log on the remote machine:

Get-WinEvent -ProviderName Microsoft-Windows-DSC -ComputerName NewHostName | select TimeCreated,LevelDisplayName,Message -first 10 | ft -Wrap -AutoSize

That's about it. I'd be really interested to hear if this works for you or whether I've inadvertently glossed over something specific to my lab environment that makes it work.

PowerShell DSC Credential Puzzles

In my previous post I was experimenting with the ScriptResource provider, albeit on a rather simplistic level. I had intended to move on to writing a custom resource provider as my next challenge, but instead decided that I wanted to dig a little deeper on some of the core aspects surrounding DSC and look at them through more of a 'real world' lens.

For this post I'm focussing on DSC's use of credentials.

I had expected this to be a fairly straightforward exercise, albeit an important one to fully understand if you want to consider deploying DSC outside of a lab environment, where you might have to concern yourself with things like multiple domains, non-admin users, delegation of responsibilities etc.

However, in the end it proved to be all rather frustrating and this post is as much a cry for help as anything else.

The Theory

Let's start at the beginning. As I understand things, there are two places where credentials come into play:

  1. When enacting a configuration with Start-DscConfiguration
  2. When defining a resource in a configuration script

1. Start-DscConfiguration

This cmdlet has an optional -credential parameter that is used to authenticate against the nodes being configured. Administrator-level access to these nodes is required in order to initiate the configuration run.

Therefore, this parameter is useful when the user you are logged-in as does not have administrative access to said nodes.

In the absence of admin access you'll get the following error:

VERBOSE: Perform operation 'Invoke CimMethod' with following parameters, ''methodName' =
SendConfigurationApply,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' =
root/Microsoft/Windows/DesiredStateConfiguration'.
Access is denied.
+ CategoryInfo  : PermissionDenied: (root/Microsoft/...gurationManager:String) [], CimException
+ FullyQualifiedErrorId : HRESULT 0x80070005
+ PSComputerName: LAB-WEB01

VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 0.205 seconds

One point worth considering is that this also implies that you need a single credential that has admin access to all nodes included in a given configuration run. In certain scenarios I could see this being a constraint that would force you to split-up configuration scripts to accommodate it.

2. Resource Providers

Some of the built-in resource providers support a credential property and, as far as I can tell, the intent here is to enable the resource to execute in the context of a specific credential.

Clearly, it is entirely down to the individual resource provider to implement the necessary logic to support its execution under a custom identity.

Getting Practical

I wanted to understand, on a more practical level, what happened when using these different credential scenarios, so I hatched a plan to use a simple configuration that used a ScriptResource to output the current identity information:
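(In essence it boils down to this; the same scriptblocks show up again later inside the generated .mof.)

Configuration DscSecurity
{
    Node "LAB-WEB01"
    {
        Script UserInfo
        {
            GetScript  = { return $null }
            SetScript  = { Write-Verbose ("Empty SetScript") }
            TestScript = {
                Write-Verbose ("ENV username: '{0}'" -f $env:USERNAME)
                Write-Verbose ("Windows Identity: '{0}'" -f [System.Security.Principal.WindowsIdentity]::GetCurrent().Name)
                Write-Verbose ("Thread Principal: '{0}'" -f [System.Threading.Thread]::CurrentPrincipal.Identity.Name)
                # always return $false so the (empty) SetScript runs too
                return $false
            }
        }
    }
}

DscSecurity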

As in earlier posts I'm still using a 2 VM setup (no AD):

  1. DSC01 - where I write and enact the configuration scripts
  2. LAB-WEB01 - the target of said configuration scripts

Test 1: No Credentials

For this first test I specified no explicit credential information; however, I was logged in under the local Administrator account, which had the same password on both VMs:

Start-DSCConfiguration -Path .\DscSecurity -Verbose -wait

For brevity I'll just include the output we're interested in:

VERBOSE: '[Script]UserInfo': ENV username: 'LAB-WEB01$'
VERBOSE: '[Script]UserInfo': Windows Identity: 'NT AUTHORITY\SYSTEM'
VERBOSE: '[Script]UserInfo': Thread Principal: ''

From this we can see that, by default, the configuration run is executed as the local system account - which is important if you have configuration resources that need to access/authenticate to network resources (oh dear, another overloaded term for us to deal with).

Test 2: Start-DscConfiguration Credentials

For this test I added the -credential parameter to the above command; additionally, I was logged into DSC01 using a local, unprivileged account (that didn't exist on LAB-WEB01):

$cred = Get-Credential
Start-DSCConfiguration -Path .\DscSecurity -Verbose -Wait -Credential $cred

Again, the abridged output:

VERBOSE: '[Script]UserInfo': ENV username: 'LAB-WEB01$'
VERBOSE: '[Script]UserInfo': Windows Identity: 'NT AUTHORITY\SYSTEM'
VERBOSE: '[Script]UserInfo': Thread Principal: ''

As you can see, the -credential parameter had no effect on the identity of the process that actually executed the resource provider - though if I hadn't provided it then I would have got the 'access denied' error message I showed earlier.

Test 3: Resource Provider Credentials

Before moving on to testing this scenario I had to modify the configuration script as follows:
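(Essentially, feeding a credential into the configuration and assigning it to the Script resource's Credential property.)

$cred = Get-Credential

Configuration DscSecurity
{
    param ([System.Management.Automation.PSCredential] $Credential)

    Node "LAB-WEB01"
    {
        Script UserInfo
        {
            Credential = $Credential
            GetScript  = { return $null }
            SetScript  = { Write-Verbose ("Empty SetScript") }
            TestScript = {
                # same identity-reporting scriptblock as before
                Write-Verbose ("Windows Identity: '{0}'" -f [System.Security.Principal.WindowsIdentity]::GetCurrent().Name)
                return $false
            }
        }
    }
}

DscSecurity -Credential $cred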

This was invoked using no credentials (whilst logged back in as the local Administrator):

Start-DSCConfiguration -Path .\DscSecurity -Verbose -Wait

Sadly, it failed (and thus began a harrowing-ish tale of trial and tribulation):

VERBOSE: Perform operation 'Invoke CimMethod' with following parameters, ''methodName' =
SendConfigurationApply,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' =
root/Microsoft/Windows/DesiredStateConfiguration'.
VERBOSE: An LCM method call arrived from computer DSC01 with user sid S-1-5-21-3860571724-2810567899-1303093401-500.
VERBOSE: 'DSCEngine': Starting to process the Set request.
VERBOSE: 'DSCEngine': Starting to process resource. '[Script]UserInfo'
VERBOSE: 'DSCEngine': Performing the test operation. '[Script]UserInfo'
VERBOSE: 'DSCEngine': [Script]UserInfo: The Test operation took 0.4840 seconds.
VERBOSE: 'DSCEngine': Set request completed.
PowerShell provider MSFT_ScriptResource  failed to execute Test-TargetResource functionality with error message:
Failure to get a valid result from the execution of TestScript. The Test script should return True or False.
+ CategoryInfo  : InvalidOperation: (root/Microsoft/...gurationManager:String) [], CimException
+ FullyQualifiedErrorId : ProviderOperationExecutionFailure
+ PSComputerName: LAB-WEB01

VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 0.991 seconds

So, about that password?

Before I get embroiled in recounting my attempts to get the above to work, you may be wondering what happens to that credential that is set up in the configuration script - good question!

Given the above script, when I execute it (i.e. to generate the .mof file before running Start-DscConfiguration) the Get-Credential call prompts me to supply a credential. This is then available within the configuration script, but given that DSC is generating a .mof file that gets interpreted on the target machine... how does that credential actually get there?

Upon inspecting the generated .mof file I was somewhat surprised to see the cleartext password staring back at me:

/*
@TargetNode='LAB-WEB01'
@GeneratedBy=Administrator
@GenerationDate=07/23/2013 18:38:23
@GenerationHost=DSC01
*/

instance of MSFT_Credential as $MSFT_Credential1ref
{
Password = "R2Preview!";
 UserName = "Administrator";

};

instance of MSFT_ScriptResource as $MSFT_ScriptResource1ref
{
ResourceID = "[Script]UserInfo";
 GetScript = " return $null ";
 SetScript = " Write-Verbose (\"Empty SetScript\") ";
 TestScript = "\n            Write-Verbose (\"ENV username: '{0}'\" -f $env:USERNAME)\n         Write-Verbose (\"Windows Identity: '{0}'\" -f [System.Security.Principal.WindowsIdentity]::GetCurrent().Name)\n         Write-Verbose (\"Thread Principal: '{0}'\" -f [System.Threading.Thread]::CurrentPrincipal.Identity.Name)\n          return $false\n            ";
 Credential = $MSFT_Credential1ref;
 SourceInfo = "C:\\_DATA\\DscSecurity.ps1:7:3:Script";
 Requires = {
};

};

instance of MSFT_ConfigurationDocument
{
 Version="1.0.0";
 Author="Administrator";
};

I know very little about these .mof files and the underlying specification that defines their structure and format, so I can't say whether this is a limitation of the standard or of the DSC implementation. I can, however, appreciate that having a generalised, cross-platform mechanism for securely storing sensitive data is not necessarily straight-forward (i.e. without certificates etc.).

This is definitely something you need to be aware of as the .mof files will hang around on the machine that generated them until they are deleted.

Trials and Tribulations

Returning to the error above:

PowerShell provider MSFT_ScriptResource  failed to execute Test-TargetResource functionality with error message:
Failure to get a valid result from the execution of TestScript. The Test script should return True or False.
+ CategoryInfo  : InvalidOperation: (root/Microsoft/...gurationManager:String) [], CimException
+ FullyQualifiedErrorId : ProviderOperationExecutionFailure
+ PSComputerName: LAB-WEB01

It didn't give me much to go on, so I checked the event log on the remote machine to see what it had to say for itself - a single error event logged under the DSC Operational log:

This event indicates that failure happens when DSCEngine is processing the configuration.
ErrorId is 0x1. ErrorDetail is The SendConfigurationApply function did not succeed..
ResourceId is [Script]UserInfo and SourceInfo is C:\_DATA\DscSecurity.ps1:7:3:Script.
Force is false

Hmmm, not terribly informative.

I was happy that the scriptblock was syntactically correct (it ran fine without credentials), so perhaps it was something to do with the Script resource provider's implementation when given a credential?

Cracking open the module that implements the Script resource (found here: C:\Windows\System32\WindowsPowerShell\v1.0\Modules\PSDesiredStateConfiguration\PSProviders\MSFT_ScriptResource\MSFT_ScriptResource.psm1), I narrowed things down to the following interesting little function:

function ScriptExecutionHelper
{
    param
    (
        [ScriptBlock]
        $ScriptBlock,

        [System.Management.Automation.PSCredential]
        $Credential
    )

    $scriptExecutionResult = $null;

    try
    {
        $executingScriptMessage = $($LocalizedData.ExecutingScriptMessage) -f ${ScriptBlock} ;
        Write-Debug -Message $executingScriptMessage;

        if($null -ne $Credential)
        {
            $scriptExecutionResult = Start-Job @psboundparameters -ErrorAction Stop | Receive-Job -Wait
        }
        else
        {
            $scriptExecutionResult = &$ScriptBlock;
        }
        $scriptExecutionResult;
    }
    catch
    {
        # Surfacing the error thrown by the execution of Get/Set/Test script.
        $_;
    }
}

From here we can clearly see the different code path taken when an explicit credential has been supplied. Interestingly, it uses a PowerShell job as the mechanism for invoking the scriptblock in the context of the credential.

So I decided to clone the Script resource provider so I could hack around with adding diagnostic output to try and figure out what was going wrong.

First off I added the following after the Start-Job call:

Get-Job | ForEach-Object {
    Write-Verbose ("JobCmd: {0}`n" -f $_.Command)
    Write-Verbose ("JobStateInfo: {0}`n" -f ($_.JobStateInfo | Out-String))
}

This showed that the job had failed, but little else:

VERBOSE: '[ScriptEx]UserInfo': JobStateInfo:
                                   State Reason
                                   ----- ------
                                  Failed

Running the job interactively, using parameters equivalent to those the resource provider passes, worked fine:

Start-Job -ScriptBlock { Write-Output ("Windows Identity: '{0}'" -f [System.Security.Principal.WindowsIdentity]::GetCurrent().Name); return $false } -ErrorAction Stop -Credential (Get-Credential) | Receive-Job -wait

Windows Identity: 'LAB-WEB01\Administrator'
False

However, this was running as my logged-in user, not the local SYSTEM account that we have already established is the default user context. Using PSEXEC.EXE I launched a PowerShell session running as LocalSystem:

psexec.exe -i -s Powershell.exe

Running the above command now gave the following error and also reported the job as failed:

[localhost] An error occurred while starting the background process. Error
reported: Access is denied.
+ CategoryInfo  : OpenError: (localhost:String) [], PSRemotingTran
   sportException
+ FullyQualifiedErrorId : -2147467259,PSSessionStateBroken

Once you appreciate the remoting aspect of starting a job, the problem becomes a little more understandable. The local SYSTEM account (whilst having full administrative permissions) is not able to perform any network operations that require authentication - though I think you could be forgiven for wondering why starting a local job should involve an authenticated network hop.

All-in-all rather disappointing.

Conclusions

At this point I'm not really sure whether I'm missing something obvious or if this is just something that isn't fully baked yet - it is a preview/beta version after all.

From a security perspective, you will want to safeguard the file system of any machine that generates configuration scripts (i.e. the .mof files) to ensure that any per-resource credentials are not seen by prying eyes, as well as perhaps having a secure purging process for old .mof files.

Whilst the ability to define credentials on a per-resource basis is undoubtedly flexible, the lack of a way to specify an identity to be used for the whole configuration run (ideally, on a per-target-machine basis) is disappointing - this seems like a far more typical use case (IMO).

Finally, I should note that I have done no DSC testing using machines that are joined to Active Directory, which may have some effect on this behaviour.

As ever, I'd be really interested in any further thoughts or suggestions on this.

Using PowerShell DSC Script Resource Provider

In my last post I created a simple DSC configuration but had claimed my focus was to understand more about DSC's extensibility story - here's where I start making good on that.

I chose something simple to start with - changing the system timezone from PST to GMT. The most straightforward option for achieving this is to use the built-in ScriptResource.

A little context before we get into it - all resource providers have to implement 3 core operations:

  • Get - returns a hashtable containing information about the current configuration
  • Test - determines whether the current configuration matches the required configuration, simply returning true or false
  • Set - this knows how to apply the required configuration (DSC takes care of only calling this if the above Test returns false)

The ScriptResource allows you to implement these operations as inline PowerShell scriptblocks or strings that DSC handles invoking at runtime - I'll come back to this later.

Using the tzutil.exe tool that comes with Windows, my first attempt was something along the lines of this:

Script SetTimezone
{
   GetScript = { return @{ Timezone = ("{0}" -f (& tzutil /g)) } }
   TestScript = { return (& tzutil /g) -ieq "GMT Standard Time" }
   SetScript = { & tzutil /s "GMT Standard Time" }
}

However, when trying to run it with Start-DSCConfiguration I got this error:

VERBOSE: 'DSCEngine': Performing the test operation. '[Script]SetTimezone'                                
WinRM cannot process the request because the input XML contains a syntax error.                 
+ CategoryInfo          : ParserError: (root/Microsoft/...gurationManager:String) [], CimExc    
+ FullyQualifiedErrorId : HRESULT 0x80338043                                                

This seemed to indicate that the & in the scriptblocks was causing a conflict with some later part of the process. The .mof files generated when executing these Configuration functions are not XML, so my best guess is that it's related to the over-the-wire format used as part of WinRM. In any event this feels like a bug, so I've raised it on the PowerShell Connect Site.

My first workaround attempt was to try various methods of escaping the alleged offender:

Script SetTimezone
{
   GetScript = { return @{ Timezone = ("{0}" -f (&amp`; tzutil /g)) } }
   TestScript = { return (&amp`; tzutil /g) -ieq "GMT Standard Time" }
   SetScript = { &amp`; tzutil /s "GMT Standard Time" }
}

However, this initially returned a different error:

Cannot invoke the SendConfigurationApply method. The SendConfigurationApply method is in progress and must return
before SendConfigurationApply can be invoked.
+ CategoryInfo  : NotSpecified: (root/Microsoft/...gurationManager:String) [], CimException
+ FullyQualifiedErrorId : MI RESULT 1
+ PSComputerName: LAB-WEB01

It seemed that the first error had left things in a half-cocked state. I couldn't find a DSC-related cmdlet that looked like it might help, so I resorted to restarting the WinRM service (on the remote node) - this did the trick.

NOTE: On subsequent occurrences of this issue, I noticed that there seemed to be a timeout in play here too: if I left it a little while then I could re-run the script without further intervention.

Having gotten past this issue, I found it still didn't work - same XML syntax error.

I tried this rather ugly looking alternative:

Script SetTimezone
{
   GetScript = { return @{ Timezone = ("{0}" -f (Invoke-Expression "&amp; tzutil /g")) } }
   TestScript = { return (Invoke-Expression "&amp; tzutil /g") -ieq "GMT Standard Time" }
   SetScript = { Invoke-Expression '&amp; tzutil /s "GMT Standard Time"' }
}

Which still didn't work, but at least failed with a different error:

PowerShell provider MSFT_ScriptResource  failed to execute Test-TargetResource functionality with error message: The
term 'amp' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling
of the name, or if a path was included, verify that the path is correct and try again.
+ CategoryInfo  : InvalidOperation: (root/Microsoft/...gurationManager:String) [], CimException
+ FullyQualifiedErrorId : ProviderOperationExecutionFailure
+ PSComputerName: LAB-WEB01

This led me to changing the scriptblocks to strings:

Script SetTimezone
{
   GetScript = 'return @{ Timezone = ("{0}" -f (&amp; tzutil /g)) }'
   TestScript = 'return (&amp; tzutil /g) -ieq "GMT Standard Time"'
   SetScript = '&amp; tzutil /s "GMT Standard Time"'
}

This resulted in:

PowerShell provider MSFT_ScriptResource  failed to execute Test-TargetResource functionality with error message: The
expression after '&' in a pipeline element produced an object that was not valid. It must result in a command name, a
script block, or a CommandInfo object.
+ CategoryInfo  : InvalidOperation: (root/Microsoft/...gurationManager:String) [], CimException
+ FullyQualifiedErrorId : ProviderOperationExecutionFailure
+ PSComputerName: LAB-WEB01

So this time the escaping had worked, but the TestScript function wasn't executing properly. I soon realised that passing these scripts around as strings raised other complications with string interpolation and having to escape quotes etc.

Rather anti-climactically, I realised the scriptblocks would run fine without the '&' at all, which resulted in this much cleaner version:

Script SetTimezone
{
   GetScript = { return @{ Timezone = ("{0}" -f (tzutil.exe /g)) } }
   TestScript = { return (tzutil.exe /g) -ieq "GMT Standard Time" }
   SetScript = { tzutil.exe /s "GMT Standard Time" }
}

It's all pretty straightforward really:

  • the /g switch outputs the current timezone setting, which we can capture as a string
  • the /s switch takes a string representation of the required timezone and sets it accordingly

Finally, the fanfare moment (such as it is): running it using:

Start-DscConfiguration -Path .\TimezoneTest -Wait -Credential (Get-Credential) -Verbose

resulted in:

VERBOSE: Perform operation 'Invoke CimMethod' with following parameters, ''methodName' =
SendConfigurationApply,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' =
root/Microsoft/Windows/DesiredStateConfiguration'.
VERBOSE: An LCM method call arrived from computer DSC01 with user sid S-1-5-21-3860571724-2810567899-1303093401-500.
VERBOSE: 'DSCEngine': Starting to process the Set request.
VERBOSE: 'DSCEngine': Starting to process resource. '[Script]SetTimezone'
VERBOSE: 'DSCEngine': Performing the test operation. '[Script]SetTimezone'
VERBOSE: 'DSCEngine': [Script]SetTimezone: The Test operation took 18.2390 seconds.
VERBOSE: 'DSCEngine': Performing Set operation. '[Script]SetTimezone'
VERBOSE: 'DSCEngine': [Script]SetTimezone: The Set operation took 1.4780 seconds.
VERBOSE: 'DSCEngine': The resource finished processing. '[Script]SetTimezone'
VERBOSE: 'DSCEngine': Set request completed.
VERBOSE: DSCEngine: The total operation took 21.0690 seconds.
VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 22.336 seconds

NOTE: It's worth pointing out that in the normal operation of applying a configuration script (i.e. using Start-DscConfiguration) the GetScript never actually gets executed.
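As far as I can tell it only comes into play if you explicitly query the current state, with something along the lines of:

Get-DscConfiguration -CimSession (New-CimSession -ComputerName LAB-WEB01)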

Putting It Together

Adding this Script resource to what may become an evolving sample from my previous post gives this:
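(In skeleton form - the resources from the previous post are elided here for brevity.)

Configuration SimpleConfig
{
    Node "LAB-WEB01"
    {
        # ...the User/Group/Registry/File resources from the previous post...

        Script SetTimezone
        {
            GetScript  = { return @{ Timezone = ("{0}" -f (tzutil.exe /g)) } }
            TestScript = { return (tzutil.exe /g) -ieq "GMT Standard Time" }
            SetScript  = { tzutil.exe /s "GMT Standard Time" }
        }
    }
}

SimpleConfig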

Wrapping Up

To summarise, you can hopefully see that the Script resource is a quick and easy way of having DSC run some custom functionality outside the scope of the built-in resource providers.

Whilst in theory those scriptblocks could be complex multi-line affairs I think that would result in a lot of poorly factored, unDRY and difficult to maintain code. Personally I will be steering clear of using them for such scenarios - except maybe as an initial prototyping mechanism.

For these more complex scenarios you will likely be better off writing your own custom provider - which will hopefully be the topic of my next post in this series.

My First DSC Configuration Script

Following my last post I have spent some time playing with the Desired State Configuration bits and in particular trying to get a handle on what it can do out-of-the-box and the effort involved in extending it.

I've been using the pre-prepared Windows Server 2012 R2 Preview VMs that Microsoft have made available for my testing (they report themselves as Windows 6.3.9431), so I can't vouch for whether these provide the same experience as a clean install from the preview ISO or the WMF v4 preview bits.

Before I get into things, you should realise that everything you see here is just based on my experiences of using DSC to-date and generally poking around the test VMs - as a result I'm sure that plenty of what I'm going to mention is incomplete.... at best!

In C:\Windows\System32\WindowsPowerShell\v1.0\Modules\PSDesiredStateConfiguration\PSProviders there is a set of modules that implement the built-in resource providers:

  • MSFT_ArchiveResource - extracting ZIP files
  • MSFT_EnvironmentResource - managing environment variables
  • MSFT_GroupResource - managing local Windows groups
  • MSFT_LogResource - lets you write a log message (seems a bit strange this one)
  • MSFT_PackageResource - install/remove a Windows Installer (MSI) package
  • MSFT_ProcessResource - ensure that a given process is running (or not)
  • MSFT_RegistryResource - managing registry entries
  • MSFT_RoleResource - managing Windows features & roles
  • MSFT_ServiceResource - managing Windows services
  • MSFT_UserResource - managing local Windows user accounts
  • MSFT_ScriptResource - the initial extensibility hook that allows you to provide arbitrary PowerShell for DSC to execute

Interestingly, the documentation includes a reference to a FileResource, which doesn't seem to reside alongside the others (at least not following the same convention), though it does exist somewhere, as I was able to use it.

I spun up another VM and started experimenting with some simple configurations; here's one using some of the above resources:
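(Treat this as a sketch rather than the exact script, but the resources and their dependencies match the output below.)

Configuration SimpleConfig
{
    Node "LAB-WEB01"   # substitute the name of your target VM
    {
        User FooBarUser
        {
            UserName = "foobar"
            Ensure   = "Present"
        }

        Group FooGroup
        {
            GroupName        = "Foo"
            Ensure           = "Present"
            MembersToInclude = "foobar"
            Requires         = "[User]FooBarUser"
        }

        Registry EnableRdp
        {
            Key       = "HKLM:SYSTEM\CurrentControlSet\Control\Terminal Server"
            ValueName = "fDenyTSConnections"
            ValueType = "Dword"
            ValueData = "0"
        }

        File MarkerFile
        {
            DestinationPath = "C:\marker.txt"
            Contents        = "Configured by DSC"   # placeholder content
            Ensure          = "Present"
            Requires        = @("[Group]FooGroup", "[Registry]EnableRdp")
        }
    }
}

SimpleConfig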

Using the following commands:

.\SimpleConfig.ps1
Start-DSCConfiguration -Path SimpleConfig -Verbose -Wait

produced the following output:

VERBOSE: Perform operation 'Invoke CimMethod' with following parameters, ''methodName' =
SendConfigurationApply,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' =
root/Microsoft/Windows/DesiredStateConfiguration'.
VERBOSE: An LCM method call arrived from computer DSC01 with user sid S-1-5-21-3860571724-2810567899-1303093401-500.
VERBOSE: 'DSCEngine': Starting to process the Set request.
VERBOSE: 'DSCEngine': Starting to process resource. '[User]FooBarUser'
VERBOSE: 'DSCEngine': Performing the test operation. '[User]FooBarUser'
VERBOSE: 'DSCEngine': [User]FooBarUser: The Test operation took 17.0040 seconds.
VERBOSE: 'DSCEngine': Performing Set operation. '[User]FooBarUser'
VERBOSE: '[User]FooBarUser': Configuration of user foobar started.
VERBOSE: '[User]FooBarUser': User foobar created successfully.
VERBOSE: '[User]FooBarUser': Configuration of user foobar completed successfully.
VERBOSE: 'DSCEngine': [User]FooBarUser: The Set operation took 4.0300 seconds.
VERBOSE: 'DSCEngine': The resource finished processing. '[User]FooBarUser'
VERBOSE: 'DSCEngine': Starting to process resource. '[Group]FooGroup'
VERBOSE: 'DSCEngine': Performing the test operation. '[Group]FooGroup'
VERBOSE: 'DSCEngine': [Group]FooGroup: The Test operation took 2.6790 seconds.
VERBOSE: 'DSCEngine': Performing Set operation. '[Group]FooGroup'
VERBOSE: '[Group]FooGroup': Group Foo created successfully.
VERBOSE: 'DSCEngine': [Group]FooGroup: The Set operation took 6.4840 seconds.
VERBOSE: 'DSCEngine': The resource finished processing. '[Group]FooGroup'
VERBOSE: 'DSCEngine': Starting to process resource. '[Registry]EnableRdp'
VERBOSE: 'DSCEngine': Performing the test operation. '[Registry]EnableRdp'
VERBOSE: '[Registry]EnableRdp': Registry key value 'HKLM:SYSTEM\CurrentControlSet\Control\Terminal
Server\fDenyTSConnections' of type 'DWord' does not contain data '0'
VERBOSE: 'DSCEngine': [Registry]EnableRdp: The Test operation took 1.4280 seconds.
VERBOSE: 'DSCEngine': Performing Set operation. '[Registry]EnableRdp'
VERBOSE: 'DSCEngine': [Registry]EnableRdp: The Set operation took 0.3280 seconds.
VERBOSE: 'DSCEngine': The resource finished processing. '[Registry]EnableRdp'
VERBOSE: 'DSCEngine': Starting to process resource. '[File]MarkerFile'
VERBOSE: 'DSCEngine': Performing the test operation. '[File]MarkerFile'
VERBOSE: '[File]MarkerFile': The system cannot find the file specified.
VERBOSE: '[File]MarkerFile': The related file/directory is: C:\marker.txt.
VERBOSE: '[File]MarkerFile': The path cannot point to the root directory or to the root of a net share.
VERBOSE: 'DSCEngine': [File]MarkerFile: The Test operation took 0.1560 seconds.
VERBOSE: 'DSCEngine': Performing Set operation. '[File]MarkerFile'
VERBOSE: '[File]MarkerFile': The system cannot find the file specified.
VERBOSE: '[File]MarkerFile': The related file/directory is: C:\marker.txt.
VERBOSE: '[File]MarkerFile': The path cannot point to the root directory or to the root of a net share.
VERBOSE: '[File]MarkerFile': C:\marker.txt was successfully created.
VERBOSE: 'DSCEngine': [File]MarkerFile: The Set operation took 0.0160 seconds.
VERBOSE: 'DSCEngine': The resource finished processing. '[File]MarkerFile'
VERBOSE: 'DSCEngine': Set request completed.
VERBOSE: DSCEngine: The total operation took 33.7450 seconds.
VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 34.609 seconds

Notice how, despite the ordering of the resources in the underlying script, the MarkerFile resource was last to run due to how its Requires element was set up to make it dependent upon FooGroup and EnableRdp - it also had an implicit dependency on FooBarUser via FooGroup's Requires element.

If you've seen the TechEd session you may also notice that the output is different - there was some discussion during the session about how log output should be recognisable rather than simply readable, which I thought was an interesting distinction... but I guess we'll need to wait for a later build before we get to see that!

In my next post I'll discuss my experience of using the ScriptResource to make a simple customisation to manage the system timezone.

Desired State Configuration: Initial Thoughts

How apt that the technology that seems like it might stir my blogging mojo should follow-on so seamlessly from my last post here, almost 2 years ago.  In that post I talked about why, despite the great software already available in the configuration management space, there was still a strong case for something more accessible for those pure Microsoft shops.

Fast forward to June 2013 and Microsoft announced the new Desired State Configuration (DSC)  feature that will ship as part of the Windows Management Framework (WMF) v4, which in turn will be available out-of-the-box for Windows 8.1 and Windows Server 2012 R2.  There are plenty of posts out there that summarise the technology so I won't add to that here, but at a high-level it offers a PowerShell-based DSL for defining how you want machines configured which then interacts with some new OS services (part of WMF) to apply said desired state.  The logic that applies this desired state is encapsulated inside a set of convention-based PowerShell modules.

As you might imagine there has been a mixed response to Microsoft's foray into this space.  The main criticism is that Microsoft has re-invented the wheel rather than adopting one of the existing tools in this space.  Microsoft argue that they have created a standards-based platform that others can build upon and integrate with.

As usual, the truth seems to lie somewhere in-between: 

  • Had Microsoft adopted an existing tool, then they would have faced an inevitable backlash from those aligned with the other tools.
  • Whilst Microsoft have arguably created a platform that can integrate with existing CM tools, it is clear that the native DSC resource providers (the things that actually implement configuration management logic) are going to become increasingly preferable to those defined in the other tools, where you are always living within the constraints of your PowerShell scripts being called out-of-process by the underlying technology of the CM system in question (e.g. Ruby etc.)

I can definitely sympathise with those people who have invested time and effort in building out the Windows-specific extensions to Chef, Puppet et al.  It seems likely to me that these will become less preferred if a native PowerShell option is available - how long before people just systematically port all those Puppet modules & Chef recipes to DSC?

Just to be clear, DSC is not a drop-in replacement for Chef or Puppet, though it does certainly overlap in a number of areas.

Of course, it should be noted that WMF v4 (and hence DSC) is slated to only be available for Windows 7/Windows Server 2008 R2 or later - which will be an issue for many (at least in terms of it providing a full-coverage solution).

Additionally, DSC is very much a low-level tool - it will be interesting to see the extent to which Microsoft's other tools in this space, System Center *, move to adopt DSC.

I suspect people will view this low-level entry from Microsoft in this space through one of three lenses: 

  1. Annoying: in the sense of it being an incomplete solution for those that just want an off-the-shelf solution
  2. An Opportunity: for those inclined to build value-add software to plug these gaps that Microsoft often leave
  3. A Hiding to Nothing: for those who see the gap in the market, but also see the Microsoft juggernaut crushing all in its path 18-24 months down the road as their own product line catches up

Ultimately, as a PowerShell fan I view DSC as a very interesting tool to have at my disposal and one that is at the top of my list for putting through its paces.  With any luck I'll chart my adventures in subsequent posts.

As ever, I'll be really interested to hear other people's perspective on this.

Cheers, 

James. 

 

A Vision for Configuration Management on Windows

In my first proper post here I talked about how despite the plethora of DevOps-oriented tools out there, the sysadmin in a pure Microsoft shop was more than likely at a serious disadvantage when it came to either using those tools or trying to find similar tools that specifically targeted the Microsoft platform.

After posting this I got a tweet from the DevOps Jedi himself, effectively laying down the challenge to go make it happen - time has ticked by and a follow-up post is overdue, so here goes…

Configuration Management, whilst a large space to tackle out-of-the-gate so to speak, seems like a sensible place to start – it is arguably the single biggest building block you can have to underpin the running of your infrastructure. It is also perhaps the area that folks in a Microsoft-only environment most view with envy as they see the options available to their *nix brethren (at least those who are aware of what’s happening in that world, but that’s another issue!), and all without a ‘big vendor’ anywhere to be seen.

Granted there are commercial configuration management tools out there, not least from Microsoft themselves, but from what I've seen none of them take the approach of tools like Chef and Puppet (though feel free to correct me on that, I'm no expert on them all) and of course there is the cost factor. The cost is less of an issue for large corporates; they will far more readily loosen the purse strings to fund an enterprise rollout of XYZ, not forgetting another chunk of cash for the associated consulting required to plan, implement and train.

Don’t get me wrong, I’m a consultant so I can hardly sit here and ridicule company’s that spend out on consulting – it’s what keeps a roof over my family’s head – the point I’m making is that this approach does not work for smaller companies, and therefore is not going to make the broader world of Microsoft DevOps a better place. That’s not to say that an alternative shouldn’t necessarily be a commercial offering, just that any pricing model needs to reflect the target audience and that it shouldn’t require a huge engineering effort just to implement.

Here’s what I propose….

A toolset built on the .NET Framework that combines the configuration management principles of Chef/Puppet with the virtualised provisioning of Vagrant (perhaps with less emphasis on the distribution-related features).

I realise I risk the wrath of those communities with charges of ‘Not Invented Here’, but once you accept that there is a swathe of IT practitioners for whom the existing products in this space are simply not viable solutions (whether you agree with their reasoning for the unviability is irrelevant really), it seems clear to me that this is the only way to really tackle this ‘poverty gap’.

The more I thought about it, the more I realised that developing such a solution for a Windows audience, and only a Windows audience, makes the size of the problem several orders of magnitude smaller than what the Chef and Puppet projects have to deal with:

  1. Fewer abstraction layers are required because all systems share the same underlying platform
  2. Being able to rely on features built into the Windows/.NET Framework platform means that we can get some serious heavy lifting done with very little effort.

Here’s an archetypal ‘big box’ diagram to try and illustrate what I mean.

ConceptualView_thumb.png

The aim here is that working with the tool should feel immediately familiar, as it uses technologies that you are likely already using. If not, the benefits you will reap from getting familiar with, say, MSBuild will far outstrip simply understanding this tool – you will now be familiar with a fundamental tool used by your .NET developer colleagues, which can't hurt.

  • MSBuild gives us a declarative orchestration engine that will manage resolving our dependency tree of required actions on a given client, for free.
  • Heavy use of PowerShell is, of course, a no-brainer: not only can we perform deep system administration natively, but we also get the excellent remoting facility – in fact this gives us the option of not necessarily needing a client footprint (at least for some scenarios).
  • Windows Workflow Foundation ought to be good enough to orchestrate multi-server runs, ensuring that dependencies between servers can be resolved allowing them to be built in the correct order. For example, let’s not try to configure a SharePoint server before the SQL Server has finished - or perhaps, let’s just not try to build a SharePoint server!

Obviously there is a lot of detail and hard work behind those boxes (and more boxes besides) even when utilising so much of the underlying platform, but this is my initial premise and I have started work on a prototype in what passes for my spare time.

I would really love to hear your thoughts on whether you believe the Microsoft eco-system needs this and your thoughts on how I’m going about it.

>jd

DevOps Tooling: The Microsoft Ghetto

I wrote about my discovery of the DevOps movement in a previous blogging life, and in the intervening time I've been trying to keep up with some of the excellent material being produced by the Community, while at the same time becoming slightly frustrated. The bulk of my work revolves around the Microsoft platform and, to put it bluntly, it is very much a second-class citizen in terms of the available tooling.

Now I've fanned the flames, let me put some context around that. I don't mean that as a criticism; in fact I view the status quo as an entirely natural result given where the movement grew out of and, to be frank, the mindset of the typical Microsoft IT shop.  In a Microsoft environment there tends to be far greater reliance on big vendor products, whereas in the Linux/BSD world it is far more common to integrate a series of discrete tools into a complete tool chain that meets the needs of a given scenario.

The problem with the reliance on big vendor products is that it becomes almost a state of mind, where if the preferred vendor's product A doesn't do Y then it just isn't possible - or, more insidiously, the underlying requirements become largely defined by the capabilities of product A rather than, say, the actual requirements!

You can draw a reasonable parallel using the Java and .NET development platforms as an example.  Java, the more mature platform, had tools and frameworks that simply didn't exist in .NET (e.g. Maven, Hibernate, JUnit, CruiseControl, MVC web frameworks to name but a few).  Over time this inspired similar tools to be created by the .NET community, and then a vocal minority was spawned to champion the use of them (e.g. the Alt .NET movement) until eventually Microsoft started shipping some of these things as part of the core .NET development environment.

For the remainder of this post I want to consider Configuration Management (CM) tooling in particular.

I first came across Chef about the same time as DevOps and was smitten by it, not because it was packed full of innovative concepts per se, but because here was a product that was aiming to do everything that a self-respecting infrastructure professional knows should be done.  Up until then we'd been stringing together our own partial systems on a per-case basis, or perhaps having to put up with one of the alleged CM 'beast' products from a big vendor if the customer was willing to stump up the cash – above all Chef struck me as a shining example of codifying best practice.  In that regard, whilst there is clearly a lot of innovation in Chef, the core principles it applies are not new; they simply reflect what has become a common understanding of what works amongst those in the space.

So back to my frustration....

I have lost count of the number of clients that were crying out for a Chef-like solution (even if they didn't know it).  However, for a Microsoft-only IT shop there are barriers to entry for Chef adoption - although with OpsCode broadening their commercial offerings they are whittling them down.

Whilst Chef has a cross-platform client, the server itself must run on a Linux platform, which can present significant problems for pure Microsoft shops, both in terms of approval red tape and longer-term technical supportability.  Of course OpsCode's Hosted and Private offerings are designed to address those concerns and certainly, for some, they do.

However, the second barrier is more philosophical than technical.  Even if an organisation is willing and able to adopt, or at least trial, such a tool, there is still the impedance mismatch that a Windows person experiences when having to work with a tool that originated in the Linux world.

I don’t really want to get into the rights and wrongs of an organisation not being willing to step outside the Microsoft walled garden, IT teams shying away from broadening their skills or companies not willing to invest in suitable training for their IT teams.  The fact is it does happen, whether we like it or not.

That being the case should we just shrug our shoulders and say “bad luck, we told you Windoze sucks” and continue down this path where an entire subclass develops? Ironically, I would suggest that the likely members of that subclass would be precisely those who could most benefit from having access to such tools – small to medium businesses with small budgets and small IT teams whose professional lives primarily consist of fire-fighting.

I want to see the values espoused by DevOps spread far and wide, including the quietest backwaters of corporate IT, where Windows, Office and IE 6 reign supreme. To that end, the Microsoft infrastructure community needs to take a similar approach as the .NET community did and start bringing some of the goodness that we see in the Linux world to the Microsoft platform in a way that facilitates adoption for all and actually takes advantage of the platform’s innate richness and strengths.

>jd