To NUC or not to NUC

The Intel Next Unit of Computing, or NUC: you’ve probably heard of them. Essentially it is a very small form factor computer that Intel produces and sells as a reference design to show that regular PCs can be small and relatively affordable. NUCs come with a range of processors (Atom, Celeron, i3, i5) and a range of specs (here, here and here being some examples). All they need is RAM and a hard drive to get going.

Recently my wife needed her own computer, so I decided to go with the Intel NUC as it would be small and tidy. The monitor had a VESA mount which allowed the NUC to be mounted on the back, and with a wireless keyboard and mouse the end result was extremely neat, making everyone happy.

The spec of the NUC was as follows:

NUC: Intel DN2820FYKH Barebone Desktop (Celeron N2820 2.39GHz, HD Graphics, WLAN, Bluetooth 4.0)

Hard Drive: Intel 530 Series 120GB SSD

Ram: Kingston ValueRam 8 GB DDR3L 1600 MHz SODIMM CL11 Memory Module

Monitor: BenQ GL2250HM 21.5-inch Widescreen LED Multimedia Monitor

A couple of gotchas about Intel NUCs generally and this model specifically:

  1. The RAM must be low-voltage (1.35 V) SODIMM modules; standard-voltage RAM will not work in an Intel NUC. Intel publish a supported memory list with the full specs: http://www.intel.com/support/motherboards/desktop/sb/CS-034475.htm
  2. For this specific model of NUC you have to use a 2.5-inch hard drive rather than an mSATA drive, and to fit the NUC’s drive bay the drive must be no thicker than 9.5 mm.

Installing Windows 8.1

Honestly, this was the most painful part of the process. The Windows installer loaded fine; all the pain came from the NUC.

Issues:

  1. SSD not detected
  2. BIOS screen not displaying on monitor

To install an operating system on the Intel NUC DN2820FYKH you need to upgrade the BIOS from the factory-shipped version 0015 to a minimum of 0025, along with making some fairly minor changes to the BIOS settings. As you can probably imagine, the inability to see the BIOS screen made flashing the BIOS less straightforward, but not impossible. Intel supply the BIOS updates in four packages:

  • Recovery BIOS update: used for flashing directly from the BIOS screen
  • iFlash BIOS update: a DOS-based utility for updating the BIOS
  • Self-extracting Windows-based update file
  • Self-extracting Windows PE update file

In my case the simplest fix was to make a Windows PE boot USB (instructions here), copy the appropriate BIOS update to the drive, boot into Windows PE and run the .exe, which updated the BIOS. Everything proceeded smoothly from there: the SSD was detected and the OS installed.

 

vCenter appliance fails to boot after changing hostname

A follow-up to yesterday’s post about changing the certs/appliance name.

After regenerating the certs the appliance fails to boot…oops. The boot process hangs at “Waiting for embedded database to start up [OK]”.

[Screenshot: boot console hung at the embedded database startup message]

I suppose that’s what test environments are for.

To recover from this you need to get to the command line of the VCA. To do that, stop the boot process at the GRUB boot loader by hitting the “Down” arrow at the boot screen, then make the following changes:

  1. Press “p” and enter the root password
  2. Select the “VMware vCenter Server Appliance” entry and hit “e” to edit the boot settings
  3. Highlight the kernel line and hit “e” to edit its settings
  4. Add a space and the number 1 at the end of the boot string so that it ends with “showopts 1”; this forces the machine to boot to a console (single-user mode)
  5. Hit Enter and then press “b” to boot

Once it boots, log in with the root password and remove the allow_regeneration file with the following command:

rm /etc/vmware-vpx/ssl/allow_regeneration

Then reboot the virtual machine.

This clears the “Toggle certificate setting” flag and allows the VCA to boot normally.

The correct way to regenerate certificates on the vCenter virtual appliance

I have been working with the virtual appliance and had to regenerate its certificates. The trials of getting this done are covered here, but the following is how to properly regenerate the certificates without hangs at boot:

  1. Enable certificate regeneration either by hitting “Toggle certificate setting” in the web console or by logging onto the VCA via SSH and running the following from the command line: touch /etc/vmware-vpx/ssl/allow_regeneration
  2. Stop all the vCenter and SSO services on the vCenter appliance
  3. Regenerate the certificates
    source vpxd_commonutils; regenerate_certificates
    The result of this should be VC_CFG_RESULT=0
  4. Replace all the certs
    source vpxd_commonutils; generate_all_certificates replace
  5. Clean up the regeneration file by deleting the allow_regeneration file
    rm /etc/vmware-vpx/ssl/allow_regeneration
  6. Reboot the machine and check it comes up cleanly

This should resolve the issue

Changing the hostname on the vCenter appliance

Just a quick one

In my lab environment I use the vCenter virtual appliance. As it was set up quite quickly, I never bothered adding the VCA to my testing domain at initial setup. I needed to test some domain functionality, so I decided to add it today.

The process of adding the VCA to the domain is quite simple:

  1. In your Active Directory DNS, create both a forward and a reverse lookup entry for the VCA
  2. Under the Network configuration, ensure your AD DNS server is configured
  3. On the same screen, change the hostname of the VCA to the FQDN you have created (it has to be the FQDN rather than just the appliance name, i.e. in the form VCA.domainname.tld)
  4. A reboot is required
  5. After the reboot, go to the Authentication screen and enter the AD credentials and domain name

After doing all of this you will notice that you can no longer log into the vSphere Client; you get the following error:

[Screenshot: vSphere Client login error after the hostname change]

If you are using the built-in certs, then to fix this issue you have to go to the Admin tab and toggle the “regenerate SSL certificate” setting.

If you are using third-party certs, they need to be updated to reflect the new hostname.

Full documentation on this issue is here.

E2EVC Rome 2013: a wrap-up

This year I attended the twentieth E2EVC conference (formerly known as PubForum) in Rome, and all in all it was a fantastic conference. It’s a bit different from other conferences in that it is organised by the community for the community, so it is largely vendor neutral and there are next to no marketing presentations, which is always excellent. Marketing has its place, but it is best to understand the technology before trying to sell it; spinning the technology into marketing jazz is never a good approach.

Speakers and sessions

Over the course of the weekend I attended most of the sessions it was possible to attend (we had two rooms of sessions, so unfortunately attending them all was not possible). All of the sessions were worth attending; however, in my opinion there were a number of stand-out sessions that, for both content and presentation, rose above the others (but only slightly).

  • Andrew Wood and Jim Moyle’s Atlantis IO presentation
    The technology they were demonstrating was fascinating, and the two of them presented it really well, playing off each other to get their point across with some humour in the mix too, plus the added fun of a live demo.
  • Shawn Bass’s multi factor authentication presentation
    I have never seen anyone pack more information into a 45-minute presentation without melting everyone’s head and losing the audience along the way. A fascinating look at passwords and passphrases.
  • Andrew Wood and Jim Moyle’s keynote “What’s new with Citrix”
    Again they got their point across with a minimum of death by PowerPoint; most of the slides were images or single lines that provided a base to talk from, which is my favourite form of presentation. The content was a stark look at where Citrix and the market are now and where they appear to be going. It was an overall positive direction, but they didn’t pull any punches.
  • Jeff Wouters’s Powershell DSC deep dive
    An overview of a topic I haven’t had an opportunity to look at myself. Jeff is good at making complex topics appear easy through the use of examples and metaphor.
  • Wilco Van Bragt and Ingmar Verheij’s PVS design decisions
    A really interesting take on this topic. They each talked through the design phases of a project and showed how one or two constraints early on can radically change how you design an environment, and they discussed how they dealt with those constraints. An excellent demonstration that there is rarely a 100% correct answer and that every project is a learning exercise, because there is always something you can do better.
  • Carl Webster’s Documentation scripts
    The amount of work that has gone into these scripts is enormous and the level of community involvement is just staggering. I first came across these scripts when I was trying to work with Citrix’s PowerShell cmdlets and found the explanatory blog posts that Carl publishes with his scripts enormously helpful. The scripts do an excellent job at their actual function (documentation), and Carl’s presentation style was entertaining.

It was a great conference and I’d highly recommend going to the next one in Brussels, which is on May 30 – June 1, 2014. The exact venue details have still to be announced; all information for the event will be here.

Can’t wait for the next one.

 

 

Server 2012 R2 KMS error STATUS_SUCCESS

I found an interesting one when activating my Server 2012 R2 KMS host with the KMS key.

All was going well until I got as far as committing the changes, when the following rather strange error popped up:

[Screenshot: the STATUS_SUCCESS error dialog shown while committing the KMS configuration]

Despite the error text, the commit was not successful and the configuration changes had not been saved.

It turns out to be a rather simple fix. On the commit page, for some reason the wizard defaults the KMS TCP listening port to 0. For KMS this should be 1688; changing the port number to 1688 resolves the error and allows the configuration to be saved.
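If you prefer not to re-run the wizard, the same port can also be set from an elevated prompt using the standard slmgr.vbs licensing script. The snippet below is just a sketch of that alternative, run from PowerShell, using slmgr’s own /sprt and /ato switches:

# Set the KMS host TCP listening port to 1688, then re-attempt activation
cscript //nologo "$env:windir\System32\slmgr.vbs" /sprt 1688
cscript //nologo "$env:windir\System32\slmgr.vbs" /ato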

 

System state backups failing

I’m currently working on a script to automate system state backups, and in my testing I encountered an issue: system state backups fail on my 2008 domain controller with the following error message:

ERROR - Volume Shadow Copy Service operation error (0x800423f4). The writer experienced a non-transient error. If the backup process is retried, the error is likely to reoccur.

Where to start with this one……..

Well, the hex error code indicates that the problem is VSS failing to complete reading the data, so the next port of call is to check VSS. This can be done with the following command, run from an administrative shell:

vssadmin List Writers

which produces the following output

[Screenshot: vssadmin list writers output showing the NTDS writer in a failed state]

As you can see, this confirmed that the NTDS VSS writer had failed, which would be expected as we were backing up the system state. The first step in troubleshooting VSS failures is basic enough: restart the services and test, and if that doesn’t help, restart the server. This had no effect on the problem, so it was time to dig a little deeper.
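To save scrolling through the full vssadmin output when there are a lot of writers, a quick sketch like the following (run from an elevated PowerShell prompt) pulls out just each writer’s name, state and last error by parsing the plain-text output:

# Show only the name, state and last error of each VSS writer
vssadmin list writers |
    Select-String -Pattern 'Writer name:|State:|Last error:' |
    ForEach-Object { $_.Line.Trim() }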

As always, the best place to start is the event logs; Microsoft have really increased the level of logging on their servers and it is far more useful than in 2003. A quick perusal of the event logs showed that the backup ran until it tried to use the Extensible Storage Engine API (ESENT) to read the shadow copy headers of the Active Directory database, at which point it logged the following error:

Log Name: Application
Source: ESENT
Date: <date & time>
Event ID: 412
Task Category: Logging/Recovery
Level: Error
Keywords: Classic
User: N/A
Computer: <computer name>
Description:
Lsass(640) Unable to read the header of logfile \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\NTDS\edb.log  Error -546.

This error points to a known issue with Windows Server 2008 (which is what my domain controller runs) and applications that use ESENT. Microsoft have released a hotfix for this issue: http://support.microsoft.com/kb/2470478
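If you want to check whether that update is already on the box before going any further, Get-HotFix is a quick first look (the KB number below is taken from the link above; manually applied hotfixes do not always show up here, so treat a miss as a hint rather than proof):

# Check whether KB2470478 is already installed
Get-HotFix -Id KB2470478 -ErrorAction SilentlyContinue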

Once this hotfix was applied there were no further ESENT errors logged and the VSS portion of the backup completed successfully.

 

File manipulation with PowerShell

Some time ago I needed to move some regularly generated files off a disk on a nightly basis due to storage concerns, while a long-term solution (a new SAN) was installed. In addition to moving them, I only needed to keep them for 14 days; the best way of managing both of these tasks is with PowerShell.

Moving the Files

Moving the files is quite simple; it is literally Move-Item. You can specify extensions using *.extension and in that way narrow down which files are moved.

#region Copying the databases to the remote location

Move-Item d:\test\*.bak Y:\test
Move-Item e:\test\*.trn Y:\test1
Move-Item d:\test\*.txt Y:\test2

#Endregion

Cleaning up files older than 14 days

As the files are not required to be retained for longer than 14 days, it is best to keep the clutter down, and it’s a simple matter to extend the above script to clean up after itself.

The best way of doing this is to get the current date and then add -14 to it using the AddDays method on the Get-Date output. This gets us the date 14 days ago:

$Now = Get-Date
$Days = 14
$TargetFolder = "y:\test"
$LastWrite = $Now.AddDays(-$Days)

The next step is to recursively query the Y:\test folder to get a list of files older than 14 days. In this sample code you can see I limited $Files to just *.bak, *.trn and *.txt files; this is a safety mechanism to prevent mass deletions.

$Files = Get-ChildItem $TargetFolder -Include *.bak,*.trn,*.txt -Recurse | Where-Object { $_.LastWriteTime -le $LastWrite }

Then we do the removal, using a foreach loop to iterate through the list of files and delete them.

if ($Files -ne $null)
{
	foreach ($file in $Files)
	{
		Remove-Item -path $File.Fullname -Verbose
	}
}

The if check prevents the error that would otherwise occur if there were no files to delete, and adding an else gives us some feedback when there is nothing to remove:

Else
{
Write-Host "Nothing to clean up"
}
#EndRegion
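Putting the pieces together, a consolidated sketch of the whole nightly job looks like this (the paths and extensions are the example values used above; adjust them to suit your environment):

#region Move the generated files to the target drive
Move-Item d:\test\*.bak Y:\test
Move-Item e:\test\*.trn Y:\test1
Move-Item d:\test\*.txt Y:\test2
#endregion

#region Remove anything in the target folder older than 14 days
$Days = 14
$TargetFolder = "Y:\test"
$LastWrite = (Get-Date).AddDays(-$Days)

$Files = Get-ChildItem $TargetFolder -Include *.bak,*.trn,*.txt -Recurse |
         Where-Object { $_.LastWriteTime -le $LastWrite }

if ($Files)
{
    foreach ($File in $Files)
    {
        Remove-Item -Path $File.FullName -Verbose
    }
}
else
{
    Write-Host "Nothing to clean up"
}
#endregion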

So there we have it: a simple script to move files to a folder and delete them once they are older than the required date. I’ve used this many times and it has always been quite fast and efficient. The full script can be downloaded from here.

Configuring application crash dumps with PowerShell

In a Windows environment applications crash for many reasons, and the best way to troubleshoot them is to collect an application crash dump: a snapshot of what the application was doing when it crashed.

From Windows Vista and Windows Server 2008 onwards, Microsoft introduced Windows Error Reporting, or WER. This allows the server to be configured to automatically generate and capture application crash dumps. The configuration of this is discussed here. The main problem with the default configuration is that the dump files are created in the %LOCALAPPDATA%\CrashDumps folder of the account running the process, which can make it awkward to collect dumps as they end up spread all over the server. There are other issues too, but the main problem I always had with it is that it’s a simple task that is very repetitive yet easy to do incorrectly. That makes it a perfect task to automate.

I wrote this little script in PowerShell:

app_crashdump.ps1

This script does three things:

  1. Creates a folder to put the crash dumps in
  2. Gives the appropriate accounts access to this folder
  3. Configures the registry with the appropriate settings

Part 1 : Creating the folder

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.VisualBasic') | Out-Null
$Folder=[Microsoft.VisualBasic.Interaction]::InputBox("Specify where to store crashdumps (not network location)", "Path", "c:\Crashdump")

New-Item $Folder -Type Directory -ErrorAction SilentlyContinue

### Verify the folder the user specified is valid; otherwise fall back to C:\Crashdump

$validatepath=Test-Path $Folder
	if ($validatepath -eq $false)
	{
	New-Item C:\Crashdump -Type Directory
	Set-Variable -Name Folder -value C:\Crashdump -Scope Script
	}

This piece of code asks the user where to store the crash dumps and creates that folder. If it cannot create the folder the user specified, it falls back to the default path of C:\Crashdump.

Part 2 : Specifying the permissions

$Acl= get-acl $Folder
$machinename = hostname
$querydomain = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()
$domain = $querydomain.name

#Setting ACLs

$Acl.SetAccessRuleProtection($true, $false)
$acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("Network","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")))
$acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("Network Service","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")))
$acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("Local Service","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")))
$acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("System","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")))
$acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("Everyone","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")))

Set-Acl $folder $Acl

This code defines some variables and grants the following accounts full control of the folder so that processes running under them can write dumps there: Network, Network Service, Local Service, System and Everyone. It then writes the ACL back to the folder.

Part 3: Actually configuring WER

$verifydumpkey = Test-Path "HKLM:\Software\Microsoft\windows\Windows Error Reporting\LocalDumps"

	if ($verifydumpkey -eq $false )
	{
	New-Item -Path "HKLM:\Software\Microsoft\windows\Windows Error Reporting\" -Name LocalDumps
	}

##### adding the values

$dumpkey = "HKLM:\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps"

New-ItemProperty $dumpkey -Name "DumpFolder" -Value $Folder -PropertyType "ExpandString" -Force
New-ItemProperty $dumpkey -Name "DumpCount" -Value 10 -PropertyType "Dword" -Force
New-ItemProperty $dumpkey -Name "DumpType" -Value 2 -PropertyType "Dword" -Force

This part of the script checks whether the LocalDumps registry key exists, creates it if it doesn’t, and then adds the necessary values: DumpFolder (where the dumps go), DumpCount (how many dumps to keep, 10 here) and DumpType (2, which means a full dump). You have probably noticed a potential gotcha with PowerShell and registry entries: PowerShell treats registry values as properties of the key they live in, as discussed here.
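Once the script has run, reading the key back is a quick way to confirm the values were actually written (same path as above):

# Confirm the LocalDumps values were written correctly
Get-ItemProperty "HKLM:\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps" |
    Select-Object DumpFolder, DumpCount, DumpType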

The full script can be downloaded from here. To run this script you need to allow unsigned scripts by running the following command from an administrative PowerShell window:

Set-ExecutionPolicy -ExecutionPolicy Unrestricted

Alternatively, you can sign the script, which isn’t very difficult but has a number of steps. The TechNet Scripting Guy blog has a very good guide here and part 2 here.
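For reference, if you already have a code-signing certificate in your user certificate store, the signing itself boils down to a couple of lines; this is only a sketch, and creating or obtaining the certificate is covered in the guides linked above:

# Sign the script with an existing code-signing certificate from the current user's store
$Cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
Set-AuthenticodeSignature -FilePath .\app_crashdump.ps1 -Certificate $Cert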

I use this script in a couple of ways, but mainly to simplify the task of enabling users and admins to collect application crash dumps for analysis.

Hope this helps

Shane