
First Impressions – AD FS and Windows Server 2012 R2 – Part II

DanJ_DSAblog:

This blog has really helped me in recent months while working with the latest features of AD FS; this is another great post.

Originally posted on The Access Onion:

Hi folks. Welcome back to Part II of our first look at the new AD FS release in Windows Server 2012 R2. This one has been a while in the making and for those who have been waiting, thanks for your patience. This is a pretty long post and the longest to date. For the most part it emphasises what is new and good in the Windows Server 2012 R2 incarnation of AD FS, in particular concentrating on the authentication and UI changes in the latest release.

In the last post we looked at some of the new architectural changes in AD FS with Windows Server 2012 R2, such as the Web Application Proxy, Extranet soft lockout and a lightweight domain join facility, otherwise known as Workplace Join. In this post we’ll extend the look to some of the authentication/UI changes and how their application embraces a more conditional access…


NT4Crypto Settings in 2008R2 onwards

I am writing a doc for a client: a questionnaire for their application teams to provide some information about how their applications interact with AD. This will help the client determine the compatibility of their applications with a forthcoming 2003 > 2012 AD upgrade. Much of what they need to worry about is documented, but there still remained the question of the NT 4.0 cryptography setting. This KB article explains how to disable it, but not what it actually does, beyond saying that it disallows ‘old’ or ‘weak’ or ‘NT4-style’ algorithms. Since the apps could be UNIX/Linux-based using Samba, or middleware platforms, I needed to ask those teams exactly what their app was doing, not whether it was ‘using NT4-style algorithms’ (which algorithms? used for what?).

I decided to take a look through MS-NRPC (the Netlogon Remote Protocol specification), and it turns out that this setting controls the RejectDES parameter, which is used during secure channel creation (NetrServerAuthenticate3):

RejectDES SHOULD be initialized in an implementation-specific way and SHOULD default to TRUE. Implementations that use Windows registry to persistently store and retrieve the RejectDES variable SHOULD use the HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Netlogon\Parameters registry path and AllowNT4Crypto key set to negation of the RejectDES variable.

RejectDES: A Boolean variable that indicates whether the server MUST reject incoming clients using DES encryption in ECB mode.

So that explains it. NT4 (and older versions of Samba) will attempt to create a secure channel with a DC using DES encryption. Prevention of this is analogous to the 2008R2 settings which disallow DES for Kerberos, and which are another key question on the questionnaire for any app support team.
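For the questionnaire (and for checking a DC build) it is handy to see what a given DC is actually configured to do. Here is a minimal PowerShell sketch that reads the registry value named in the MS-NRPC extract above and interprets it in terms of RejectDES; if the policy has never been configured, the value will simply be absent:

# Check the NT4Crypto / RejectDES policy value described in MS-NRPC
$path  = 'HKLM:\SOFTWARE\Policies\Microsoft\Netlogon\Parameters'
$value = (Get-ItemProperty -Path $path -Name AllowNT4Crypto -ErrorAction SilentlyContinue).AllowNT4Crypto

if ($value -eq $null) {
    "AllowNT4Crypto is not set - RejectDES takes its default (TRUE per the spec, so DES secure channels are rejected)"
}
elseif ($value -eq 0) {
    "AllowNT4Crypto = 0 - RejectDES is TRUE, so NT4-style (DES) secure channels are rejected"
}
else {
    "AllowNT4Crypto = $value - RejectDES is FALSE, so NT4-style (DES) secure channels are allowed"
}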

There is also a RejectMD5Clients value listed in MS-NRPC that isn’t really explained, apart from being able to deduce that it only applies to Win7/2008R2 onwards. However, I found this on a Samba mailing list: an explanation from an MSFT tech about how these two values are implemented:

http://lists.samba.org/archive/cifs-protocol/2011-July/001984.html

So that setting even prevents 2000/XP/2003/Vista/2008 clients from establishing a secure channel, although I don’t think there is a corresponding GPO option for it (yet).

MCM/MCSM/MCA withdrawn :(

Well, I have been busy offline for a week, then suddenly found out that the Microsoft Master programme has been withdrawn :(

http://blogs.technet.com/b/neiljohn/archive/2013/08/31/retiring-the-microsoft-master-certifications-and-training.aspx

I was due to do the 7th-26th Oct Directory Services rotation but changed my mind at the last minute as my wife is due to give birth around that time. I’m glad I didn’t now as I’m not even sure I would have got my $18,500 back, and I’m certain I wouldn’t have been able to get the airfares refunded (and tbh 3 weeks in Seattle is probably a little long for a holiday). Certainly at no time discussing this with the advanced cert team did anyone tell me that this was on the cards.

I am already annoyed at the loss of TechNet subscriptions, and quite frankly worried about the way Microsoft are headed and the corresponding implications for tech professionals. “Devices and Services”? wtf? MSFT are a software company that make great software products, as well as hosted services based on them. Every attempt they have made to crack mobile has ended badly, and I don’t see the Nokia acquisition changing things.

By assuming that all their clients want to put things in the cloud, they are making a fundamental mistake. My current client will never be allowed to do this because of regulatory requirements, and others I have worked at will never do it for other reasons. Sure, offer it as a service, but don’t assume you can hoover up all this business and let the professionals who were your former technology evangelists become de-skilled and disillusioned.

Reducing the top level of certification offered means that there is no way to distinguish between people who crammed their MCSE from some braindump and those who really know what they are doing. MSFT will end up having their products badly configured and managed by people who claim that MSFT have verified their skills. That leads to the people who make technology purchasing decisions distrusting MSFT products and seeking alternatives, which will severely hit MSFT revenues; the policy is a one-way race to the bottom, if you like.

There are loads more opinions about this from some clever people both inside and outside MSFT and the MCM programme:

http://www.stevieg.org/2013/08/are-microsoft-losing-friends-and-alienating-it-pros/

http://www.bhargavs.com/index.php/2013/09/03/microsoft-kills-microsoft-certified-solutions-master-mcmmcsm/

http://www.devinonearth.com/2013/08/aint-nobody-microsoft-learning-got-time/

http://www.iamberke.com/post/2013/08/31/Microsoft-killed-MCMMCSMMCA-certifications.aspx

http://michaelvh.wordpress.com/2013/08/31/microsoft-is-retiring-the-mcsmmca-program/

and plenty more links in those posts too.

Here is Microsoft’s Tim Sneath on the decision:

http://www.devinonearth.com/2013/08/defending-a-bad-decision/

I am not convinced.

PowerShell 3.0 arrays of length zero or one

Here’s something I noticed today about arrays of length zero or one:

PS C:\Users\dan> # in a powershell 2.0 console
PS C:\Users\dan> (1).count
PS C:\Users\dan> ($null).count
PS C:\Users\dan> (1,2).count
2
PS C:\Users\dan> @(1).count
1
PS C:\Users\dan> @().count
0

So a single object doesn’t have a Count defined (nor does it have a Length). If you explicitly define it as an array with @(), then it does have a Count of 1 (or 0). This is annoying, since if you get a result set of length 1 (or 0), your code must deal with it differently.

But in PowerShell 3.0:

PS C:\Users\dan> # in a powershell 3.0 console
PS C:\Users\dan> (1).count
1
PS C:\Users\dan> ($null).count
0

That’s a really useful change. The about_Arrays page has a little more info. It’s worth noting that existing scripts written in PS2 will behave differently if they are run on a PS3 host.
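In the meantime, if a script still has to support a PowerShell 2.0 host, the usual defensive pattern is to wrap the pipeline in @() so that Count is always defined. A quick sketch (the process names are just illustrative examples):

PS C:\Users\dan> # in a powershell 2.0 console
PS C:\Users\dan> $procs = @(Get-Process -Name lsass)
PS C:\Users\dan> $procs.count
1
PS C:\Users\dan> $none = @(Get-Process -Name nosuchprocess -ErrorAction SilentlyContinue)
PS C:\Users\dan> $none.count
0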

Microsoft Certified Solutions Expert Server 2012: Server Infrastructure

Passed my Microsoft Certified Solutions Expert Server 2012: Server Infrastructure cert today :)

[MCSE: Server Infrastructure certification badge]

So, not surprisingly, I was OK with the Identity and Access Solutions bit (that’s my job after all), but I knew I needed to learn more about System Center 2012, particularly Virtual Machine Manager (SCVMM). I bought some new kit for my lab and have learned a lot about this stuff; it’s pretty cool, actually. I think it’s even ready to start displacing VMware (assuming you can get over the “Microsoft virtualization? No chance!” cries from the enterprise architecture teams). SMB 3.0 is pretty cool too: you can just run your VMs off a network share now (cf. ESX and NFS).

All in all, this “new” MCSE is tougher to achieve than the “old” MCSE, or the MCITP for that matter, which I assume was MSFT’s intention.

Install, Template and Deploy Server 2012 in ESXi 4.1

I hit a snag attempting to build a 2012 server in a client’s vSphere 4.1 dev environment, where the installer hit a BSOD (now with added 2012 ‘sad face’ icon) and kept restarting. jmattson on the VMware forums suggests that you need to do a quick hack on the VM’s virtual BIOS, as Server 2012 doesn’t support the PIIX4 southbridge emulation that ESXi 4.x uses.

This is done by copying the BIOS file from the link above to your virtual machine’s config directory and appending the following lines to the .vmx file:

bios440.filename = "bios.440.rom"
mce.enable = "TRUE"
cpuid.hypervisor.v0 = "FALSE"
vmGenCounter.enable = "FALSE"

This won’t magically enable VM-Generation ID, so you won’t be able to take advantage of the advanced virtualization capabilities that depend on it, but you will at least be able to install and run the OS.

I have no idea whether this is an MSFT- or VMware-supported config for your production environment; I have no requirement for this to be in production, as the client is in the process of building a series of new ESXi 5.1-based clusters for that. YMMV, so check with the vendors first.

NB: if you do this for a machine which you subsequently convert to a template, you will need to re-add the ROM file to the config directory of each new VM that you deploy from that template, as it isn’t copied across automatically. Otherwise the deployed VM will not start, with an error:

Could not open bios.440.rom (No such file or directory).

An alternative is to put the ROM file in a ‘central’ location and then specify the full path in the .vmx entry, e.g.

/vmfs/volumes/mydatastore/bios.440.rom
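With that approach, the bios440.filename entry would look something like this (using the example datastore path above; substitute your own datastore name):

bios440.filename = "/vmfs/volumes/mydatastore/bios.440.rom"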

Server 2012 ROI study by Forrester

MSFT have just released a study they commissioned from Forrester, on the ROI from implementing Server 2012. Handy for convincing management that now is the time to leave behind 2008 R2.

Download here
