Server 2012 R2 KMS error STATUS_SUCCESS

Found an interesting one when activating my Server 2012 R2 KMS host with the KMS key.

All was going well and I got as far as committing the changes, when the following rather strange error popped up:

[Screenshot: the KMS activation wizard reporting the error STATUS_SUCCESS]

Despite the error text (STATUS_SUCCESS), the commit was not successful and the configuration changes were not saved.

It turns out to be a rather simple fix. On the commit page, for some reason, the wizard defaults the KMS TCP listening port to 0. For KMS this should be 1688; changing the port number to 1688 resolves the error and allows the configuration to be saved.
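For what it's worth, the same port can also be set and checked outside the wizard using slmgr from an elevated PowerShell prompt. This is just a sketch of the alternative route, not part of the original fix:

# Set the KMS TCP listening port to 1688
cscript //nologo $env:windir\System32\slmgr.vbs /sprt 1688

# Display the detailed licence information for the KMS host to verify the change
cscript //nologo $env:windir\System32\slmgr.vbs /dlv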

 

System state backups failing

I'm currently working on a script to automate system state backups, and in my testing I encountered an issue: system state backups fail on my 2008 domain controller with the following error message.

ERROR – Volume Shadow Copy Service operation error (0x800423f4). The writer experienced a non-transient error. If the backup process is retried, the error is likely to reoccur.

Where to start with this one……..

Well, the hex error code indicates that the problem is with VSS failing to complete the read of the data, so the next port of call is to check VSS. This can be done with the following command, run from an administrative command prompt (or PowerShell session):

vssadmin List Writers

which produces the following output

[vssadmin output showing the NTDS writer in a failed state]

As you can see, this confirmed that the NTDS VSS writer had failed, which would be expected as we were backing up the system state. The first step in troubleshooting VSS failures is basic enough: restart the services and test, and if that doesn't help, restart the server. This had no effect on the problem, so it was time to dig a little deeper.
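As an aside, that restart-and-recheck step can be scripted in a couple of lines if you want to repeat it quickly. This is my own sketch rather than part of the original troubleshooting, and it assumes the standard service names VSS and swprv:

# Restart the Volume Shadow Copy and Software Shadow Copy Provider services
Restart-Service -Name VSS, swprv -Force

# List just the writer names and their states for a quick check
vssadmin list writers | Select-String -Pattern 'Writer name:|State:'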

As always, the best place to start is the event logs; Microsoft have really increased the level of logging on their servers and it is far more useful than in 2003. A quick perusal of the event logs showed that the backup ran until it tried to use the Extensible Storage Engine API (ESENT) to read the shadow copy headers of the Active Directory database, at which point it logged the following error:

Log Name: Application
Source: ESENT
Date: <date & time>
Event ID: 412
Task Category: Logging/Recovery
Level: Error
Keywords: Classic
User: N/A
Computer: <computer name>
Description:
Lsass(640) Unable to read the header of logfile \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\NTDS\edb.log  Error -546.

This error points to a known issue with Windows Server 2008 (which my domain controller runs) and applications that use ESENT. Microsoft have released a hotfix for this issue: http://support.microsoft.com/kb/2470478
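Before downloading it, you can check whether the update is already present with Get-HotFix. A quick sketch; note it throws an error rather than returning nothing if the KB is not installed:

# Check for the hotfix from the KB article above (requires PowerShell 2.0 or later)
Get-HotFix -Id KB2470478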

Once this hotfix was applied there were no further ESENT errors logged, and the VSS portion of the backup completed successfully.
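If you want to confirm the fix the same way, the system state backup can be kicked off manually with wbadmin. The target drive letter below is just an example:

# Run a system state backup to the E: volume without prompting
wbadmin start systemstatebackup -backupTarget:E: -quiet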

 

File manipulation with PowerShell

Some time ago I needed to move some regularly generated files off a disk on a nightly basis due to storage concerns, while a long-term solution (a new SAN) was installed. In addition to moving them, I only needed to keep them for 14 days. The best way of managing both of these tasks is with PowerShell.

Moving the Files

Moving the files is quite simple: it is literally Move-Item. You can narrow down which files are moved by specifying extensions with *.extension.

#region Moving the files to the remote location

Move-Item d:\test\*.bak Y:\test
Move-Item e:\test\*.trn Y:\test1
Move-Item d:\test\*.txt Y:\test2

#endregion

Cleaning up files older than 14 days

As the files are not required to be retained for longer than 14 days, it is best to keep the clutter down, and it's a simple matter to extend the above script to clean up after itself.

The best way of doing this is to get the current date and then add -14 days to it using the AddDays method on the Get-Date output. This gives us the date from 14 days ago.

$Now = Get-Date

$Days = 14

$TargetFolder = "Y:\test"

$LastWrite = $Now.AddDays(-$Days)

The next step is to recursively query the Y:\test folder to get a list of files that are older than 14 days. In this sample code you can see I limited the results to just *.bak, *.trn and *.txt files; this is a safety mechanism to prevent mass deletions.

$Files = Get-ChildItem $TargetFolder -Include *.bak,*.trn,*.txt -Recurse | Where-Object { $_.LastWriteTime -le $LastWrite }

Then we do the removal, using a foreach loop to iterate through the list of files and delete each one.

if ($Files -ne $null)
{
	foreach ($file in $Files)
	{
		Remove-Item -path $File.Fullname -Verbose
	}
}

The Else block handles the case where there are no files to delete; without the check above, the loop would run against an empty list and Remove-Item would throw an error.

Else
{
	Write-Host "Nothing to clean up"
}
#endregion

So there we have it: a simple script to move files to a folder and delete them once they are older than the required date. I've used this many times and it has always been quite fast and efficient. The full script can be downloaded from here.

Problems and symptoms

I came across an interesting problem today, one which illustrates a point I've always tried to get across when discussing troubleshooting:

The issue being reported may just be a symptom of the actual problem. Never assume you have the full picture.

On to the problem:

This was a Citrix Presentation Server 4.5 environment with Internet Explorer, Outlook and some industry-specific applications. Virtual IPs were being used in a hosted environment to present IPs to Internet Explorer, allowing filtered external internet access. Some users began to experience errors at launch and their sessions were failing to start.

So what could be the problem? The only symptom we had so far was that users could not log on because the pool of virtual IPs was exhausted. Not much to go on, so the best first step was to check the VIP configuration as per the following article: http://support.citrix.com/article/CTX111898

To sum up the steps carried out here:

  1. Checked the VIPs were configured correctly in the farm and assigned to the servers
  2. Checked the registry keys mentioned in the article:
    HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\VIP\HookProcessesVIP
    HKLM\SOFTWARE\Citrix\CtxHook\AppInit_Dlls\VIPHook\<Process name>
  3. Checked that viphook.dll was loaded into the processes using Process Monitor

Nothing was wrong, however, so it was time to look at the problem a different way. Looking at the Access Management Console for the affected servers and grouping the users by username displayed the following:

 

Although the usernames are obscured, each red box in the image is a single user. As you can see, each user is getting two IPs rather than one, exhausting the IP pool twice as fast. But why?

There is a common misconception with Virtual IPs that they only apply to the applications they are assigned to in the console. This isn't the case: the IPs can be assigned to applications in the console, but in practice one is assigned to each session the user opens on the server. The configuration in the console merely allows that application to communicate with the network using the virtual IP address.

Looking at the session list gave a clue as to where the problem was coming from: all the users who had the session sharing problem had Outlook open. Session sharing depends on the applications being published with identical settings and on the session sharing key matching. The key is composed of the following settings: Colour Depth, Screen Size, Access Control Filters (for SmartAccess), Sound, Drive Mapping and Printer Mapping.

Checking the published settings for Outlook confirmed that the session sharing problem was being caused by its Colour Depth being different from the other applications. Changing this back resolved the issue and the sessions started sharing again, which eliminated the secondary issue of the virtual IP pool being exhausted and preventing logons.

By making no assumptions and treating every reported problem as a symptom, what initially appeared to be a complicated and obtuse problem was actually rather simple to resolve.

Shane

Configuring application crash dumps with PowerShell

In a Windows environment applications crash for many reasons, and the best way to troubleshoot a crash is to collect an application crash dump: a snapshot of what the application was doing when it crashed.

From Windows Vista and Windows Server 2008 onwards Microsoft introduced Windows Error Reporting, or WER. This allows the server to be configured to automatically generate and capture application crash dumps, and the configuration is discussed here. The main problem with the default configuration is that the dump files are created and stored under the profile of the account running the process (%LOCALAPPDATA%\CrashDumps), which can make it awkward to collect dumps as they end up spread all over the server. There are other problems with it too, but the main one I always had is that configuring it properly is a simple task that is very repetitive and easy to do incorrectly. That makes it a perfect task to automate.

I wrote this little script in PowerShell:

app_crashdump.ps1

This script does three things:

  1. Creates a folder to put the crash dumps in
  2. Gives the appropriate accounts access to this folder
  3. Configures the registry with the appropriate settings

Part 1: Creating the folder

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.VisualBasic') | Out-Null
$Folder=[Microsoft.VisualBasic.Interaction]::InputBox("Specify where to store crashdumps (not network location)", "Path", "c:\Crashdump")

New-Item $Folder -Type Directory -ErrorAction SilentlyContinue

### Verify the folder the user specified is a valid folder. Otherwise fall back to C:\Crashdump

$validatepath = Test-Path $Folder
if ($validatepath -eq $false)
{
	New-Item C:\Crashdump -Type Directory
	Set-Variable -Name Folder -Value C:\Crashdump -Scope Script
}

This piece of code asks the user where to put the folder and then creates it. If it cannot create the folder the user specified, it falls back to the default path of C:\Crashdump.

Part 2: Specifying the permissions

$Acl= get-acl $Folder
$machinename = hostname
$querydomain = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()
$domain = $querydomain.name

#Setting ACLs

$Acl.SetAccessRuleProtection($true, $false)
$acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("Network","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")))
$acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("Network Service","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")))
$acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("Local Service","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")))
$acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("System","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")))
$acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("Everyone","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")))

Set-Acl $folder $Acl

This code defines some variables and then grants the following accounts permission to write to the folder: Network, Network Service, Local Service, System and the Everyone group. It then writes the ACL back to the folder.
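If you want to double-check the result, the ACL can be read back and displayed. This is just a quick verification sketch, not part of the script itself:

# List who has access to the crash dump folder and with what rights
(Get-Acl $Folder).Access | Format-Table IdentityReference, FileSystemRights, InheritanceFlags -AutoSize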

Part 3: Actually configuring WER

$verifydumpkey = Test-Path "HKLM:\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps"

if ($verifydumpkey -eq $false)
{
	New-Item -Path "HKLM:\Software\Microsoft\Windows\Windows Error Reporting\" -Name LocalDumps
}

##### adding the values

$dumpkey = "HKLM:\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps"

New-ItemProperty $dumpkey -Name "DumpFolder" -Value $Folder -PropertyType "ExpandString" -Force
New-ItemProperty $dumpkey -Name "DumpCount" -Value 10 -PropertyType "Dword" -Force
New-ItemProperty $dumpkey -Name "DumpType" -Value 2 -PropertyType "Dword" -Force

This part of the script checks whether the LocalDumps registry key exists, creates it if it doesn't, and then adds the necessary values. You have probably noticed a potential gotcha with PowerShell and registry entries: PowerShell treats registry values as properties of the key they live in, as discussed here.
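A quick way to see that gotcha in action, and to confirm the three values were written, is to read them back as properties of the key. This readback isn't part of the script, just an illustration:

# Registry values come back as properties of the key object
Get-ItemProperty "HKLM:\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps" | Select-Object DumpFolder, DumpCount, DumpType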

The full script can be downloaded from here. To run this script you need to allow unsigned scripts by running the following command from an administrative PowerShell window:

Set-ExecutionPolicy -ExecutionPolicy Unrestricted

Alternatively you can sign the script, which isn't very difficult but does involve a number of steps. The TechNet Scripting Guy blog has a very good guide here and part 2 here.
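Assuming you already have a code signing certificate in your personal certificate store (the guides above cover getting one), the signing itself boils down to a couple of lines. Treat this as a sketch rather than the full procedure:

# Pick up the first code signing certificate from the current user's store
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1

# Sign the script with it
Set-AuthenticodeSignature -FilePath .\app_crashdump.ps1 -Certificate $cert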

I use this script in a couple of ways, but mainly to simplify the job of getting users and admins set up to collect application crash dumps for analysis.

Hope this helps

Shane

3 Pipe Problems

As I mentioned on the about page, I am a virtualisation engineer with a particular focus on Citrix, VMware, Active Directory and Exchange. Lately I've caught the PowerShell bug that is going around the internet and have been working the kinks out of my scripting skills.

You're probably all wondering where the name of this blog came from. Everyone has hobbies, and one of mine is endlessly re-reading my Sherlock Holmes collection. In one of the Adventures of Sherlock Holmes he is presented with a difficult problem and describes it as follows:

“It is quite a three pipe problem, and I beg that you won’t speak to me for fifty minutes.”

Sherlock Holmes, The Red-Headed League

In all the time I've worked with technology, the most enjoyable problems were always the ones Holmes would have described as "three pipe problems". On this blog I hope to discuss some of these problems, along with other topics that interest me, such as:

  • Virtualisation
  • Scripting
  • Troubleshooting methodologies
  • Deployment strategies
  • General science topics

Shane