
Creating a Windows 10 internet kiosk using Microsoft Edge


We’re all familiar with the use of Windows PCs as internet browsing kiosks. I can recall building many a kiosk on Windows 7 using the imaginatively-titled Internet Explorer “kiosk mode”. There was also a raft of third-party tools used to enable this functionality for those of us who couldn’t be bothered to spend the time locking the machines down by hand. The idea is that users are given a throwaway, cut-down machine that provides nothing but basic access to a browser for idly leafing through the goodness of the modern web.

So I wasn’t particularly worried when I came across the first instance I’d seen of someone wanting this on Windows 10, running Microsoft’s new browser, Edge, rather than the old warhorse Internet Explorer. After all, in the immortal words of Andy Wood and Jim Moyle, “how hard can it be?”

Well. Hindsight is a beautiful thing, let me assure you.

But anyway – let’s start with a list of the requirements we had for our new shiny Windows 10 internet kiosk.

  • It must run Microsoft Edge, not Internet Explorer
  • The user should not have to remember or be given a password – logon should be automatic
  • The user should be presented with nothing more than a full-screen browsing session upon logon, branded as necessary
  • There should be no way for the user to activate other applications, browse the filesystem, or otherwise delve into or use other parts of the operating system
  • Ideally, the user’s settings (such as websites visited, bookmarks, cookies, etc.) should be purged at the end of the browsing session

Doesn’t sound too bad, no? Especially as I’d done this with Internet Explorer on Windows 7 many times. So this should be a nice, short and succinct article…

Yeah, right.

Part #1 – Assigned Access and Custom UI

Now, anyone who has ever scanned through a list of Windows 10’s features (or attended one of my many sessions on the subject) is probably quite aware that the new Microsoft operating system ships with a feature called Assigned Access. And Assigned Access is – well, it’s a way to enable a kiosk mode that can run only one application! Sounds like just what we need, eh?

One small problem, though. Assigned Access only allows you to assign Universal Windows Platform apps (Modern Apps, if you prefer) to the kiosk mode. Well, Edge is a UWP app, isn’t it? Actually, it’s not quite a UWP app – it doesn’t update through the Windows Store – and it turns out it isn’t available as an option in Assigned Access. And neither is any other browser, for the record. So the built-in Windows 10 “kiosk mode” feature doesn’t actually allow you to run it with an internet browser of any sort. Now, I’m not going to hold myself up as the world’s foremost “kiosk expert”, but in my experience, all of the kiosks I’ve built have been with “internet cafe”-style functionality in mind. I’ve yet to see a kiosk for running Remote Desktops, or the Microsoft Mail app. So these restrictions built into Assigned Access seem, well, not to mince words, absolutely and utterly ridiculous.

For browsers, the Assigned Access documentation recommends you use the old Microsoft method of a Group Policy Custom User Interface (User Configuration | Administrative Templates | System | Custom User Interface). You know, even on Windows 7 I used to avoid this setting when building kiosks, but that’s not going to be an issue here, because this approach doesn’t fly with Microsoft Edge anyway. You see, even though Edge is a UWP app that isn’t really quite a UWP app (in that it won’t run via Assigned Access), it’s still enough of a UWP app to be unsuitable for Custom UI. Edge cannot be invoked by simply running the MicrosoftEdge.exe executable – it either never appears, or throws an error. So if you set up the Custom UI GPO and point it at Edge, you’re no further forward.

At this point, I should have seen the warning signs and quit. But nobody ever accused me of being sensible.

Part #2 – shortcuts

Now, if you create a shortcut to Edge by pointing to the executable in %WINDIR%\SystemApps\Microsoft.MicrosoftEdge_8wekyb3d8bbwe called MicrosoftEdge.exe, it just doesn’t work. It either crashes or simply doesn’t respond.

However, if you drag the Microsoft Edge shortcut from the Start Menu to the Desktop, then you do get a working shortcut. There are some differences between the shortcuts (see the image below), but I’m not entirely clear how to interpret them. Certainly, the left-hand one fails, the right-hand one works.

OK, that’s by the by – so can we perhaps copy the working shortcut into a file share, and then maybe insert it into the user’s Startup folder when they log in, and get Edge to run that way?

Sounds good…but then so did David Moyes at one stage. For some reason, what is a working shortcut on a user desktop becomes useless when transported into the Startup folder. Scratch that idea as well!

Part #3 – Running Edge from the shell

OK, so I can’t trigger Edge through Assigned Access or the old GPO method or by using a working shortcut in Startup. Any other ideas as to how I can make it run when my user logs in?

What you can do with UWP apps is call them from within the Windows shell by an unfamiliar method. You simply call the UWP app’s URI name and stick a colon on the end. Here are some examples, courtesy of Rod Trent:

  • Action Center – ms-actioncenter:
  • Clock – ms-clock:
  • Mail – mailto:
  • OneNote – onenote:
  • Edge – microsoft-edge:

Give them a try – Start | Run | command. They do work! So that’s at least something to concentrate on, eh? Maybe we can combine this way of invoking Modern Apps with the Custom User Interface GPO and give ourselves a way to make Edge run at logon…

…well, I take it that’s a no then. Looks like invoking Edge as a “shell” isn’t really suitable either. Poor old Internet Explorer could live with it easily enough. OK, what’s next?

You can also invoke Edge from the command prompt or PowerShell, but as you can see from the image above (when we moved from “microsoft-edge:” to “cmd.exe /c start microsoft-edge:”), it needs to be done after the user has already logged in. Now, Group Policy has a default logon script delay on Server 2012 R2 and up (set out of the box to five minutes!!!), which we could use to run a command after the logon has finished – but the value can only be set in whole minutes. Which means that if we tried to do it through Group Policy as a delayed logon script, the user would have to wait at least a minute after logging in to the kiosk to get their browser window. Not a real starter.

The next thought that occurred to me was a Scheduled Task, but that’s getting even murkier. You can’t set a Scheduled Task to run after logon specifically, although you could maybe use an event as a trigger – but I’m really starting to delve into the depths here. What I needed, at this point, was some more tooling – some help.

Part #4 – enter AppSense Ivanti DesktopNow

I really must get used to referring to AppSense as Ivanti. It just doesn’t seem to roll off the tongue so easily, although that may have something to do with spending nearly five years of my life writing the word “AppSense” multiple times into two hundred or so blog articles. I’m digressing here.

This was intended to be something anyone could set up, but Edge’s behaviour has sent me straight back to my old pal AppSense Ivanti DesktopNow for some extra help, which means this is now becoming a little bit specialized in terms of tooling. But that’s part and parcel of the challenge – you need to pick the right tools for the job. I’m sure RES or some of the other higher-end products in the endpoint management market could also manage this requirement – I just use DesktopNow because it’s my personal preference.

I’ve written on several occasions – here and here at least – about how you can use DesktopNow to set up a “delayed” trigger that runs a specified time after the user’s logon completes. We won’t go into this functionality in this article – it’s documented in the links, and we will make a configuration available at the end that contains the Actions for reference purposes. In this case, I’ve set the delay after logon to 0.5 seconds because I want the commands to be processed as soon after logon finishes as possible. So after logon finishes, we call this command from PowerShell:-

start microsoft-edge:

and hey presto! We have Edge launching automatically for our user. Wasn’t that a hell of a lot harder than automatically launching any other Windows process you’ve ever had to deal with? Anyway – now we can crack on with the rest of our requirements.

Part #5a – maximizing a window

I’ve already had my fingers burned with a reference to “how hard can it be?” so I’m not repeating it, even though all I want to do is run the Edge window maximized when it starts. It’s easy enough for Internet Explorer – so why am I so apprehensive?

Well, probably something to do with the fact that we’ve already established we can’t use a traditional shortcut for this in any way, shape or form, so running a shortcut with the “Run maximized” flag is out for starters. Maybe if we just open Edge, maximize it, and then find out where it writes the Registry keys that control the window size, that will do the trick…

This doesn’t look too bad, actually. Process Monitor reveals that on close, Edge writes to two DWORD values called LastClosedHeight and LastClosedWidth in HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\AppContainer\Storage\microsoft.microsoftedge_8wekyb3d8bbwe\MicrosoftEdge\Main.

But imagine my surprise at another false dawn. No matter what these values are set to, or when it is done, Edge always opens in the same size window if the user hasn’t run it previously. We want it to always run maximized, so no good.

Cue more digging. So then I discover another Registry value (a BINARY one), this time at HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\ApplicationFrame\Positions\Microsoft.MicrosoftEdge_8wekyb3d8bbwe!MicrosoftEdge and going by the name of PositionObject, that seems to control Edge’s window position.

Now this one does actually do the trick if you maximize the window and grab the settings from the value before reimporting them – but only for specific resolutions. Try and port it to a larger or smaller screen resolution, and you’re straight back to the default size. FFS.

Another method that came to mind was forcing Windows 10 into tablet mode – which, incidentally, does actually make Edge launch maximized. But the UX is awful (Start Menu overlaid on the screen), so that’s out as well.

OK. It’s clear that tried-and-tested methods are no use here. What if we try something a little more radical? We know that you can maximize a window by clicking certain buttons in the Windows interface, but automating that is very hard. However, you can also maximize a window using keystrokes – and, last I heard, you could send keystrokes using PowerShell. Is this a potential way of getting around this requirement?

Part #5b – sending key presses

We’ve really gone down a rabbit hole for this, and I’m not sure how much further it has to go. Maybe this should have been a series like the Windows 10 one (which still isn’t finished, as far as I’m concerned, but I’m off on a tangent again). Focus.

PowerShell and keystrokes, that’s where we’re at. Can you send keystrokes to a particular area of the session?

Indeed you can. Here is the code we will be using. To maximize a focused window, we normally use the key sequence “Alt + <SPACE>” and then “x” to activate the Maximize command (give it a try yourself).

# PowerShell to launch Edge maximized - starting the page and then using PS to send "Alt + Space" and "x" to the Edge window to maximize it

# This launches Edge with a specific page destination (Google in this case)
start microsoft-edge:http://google.co.uk

$wshell = New-Object -ComObject wscript.shell

# The next line uses the window title to target the keystrokes, so it's essential the string matches the title of whatever page you are opening in Edge
$wshell.AppActivate('Google')

# First pause is one second
start-sleep 1

$wshell.SendKeys('(%(" "))')

# Next one is three seconds - test your own delays to get this working optimally and change as required
start-sleep 3

$wshell.SendKeys('(x)')

The key parts are the window title, and the sleep commands. The keystrokes can only be sent to an active window (or a process ID), so it is essential we get the window title consistent so that we can send the key presses to the Edge process. This is why we are using start microsoft-edge:http://www.google.co.uk to launch Edge, as this ensures that the window title is ‘Google’ and can be picked out as such for the keystrokes.

The pause between the keystrokes is a bit of trial and error to ensure that the key presses are sent in the right order. In my environment, one second after the initial window activation and then three seconds after the “Alt + <SPACE>” seems to work most of the time, but you may need to give this a test to make sure it works in the same way for yourselves.
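If the fixed delays prove too fragile in your environment, a slightly more defensive variant is to poll AppActivate until the Edge window actually exists before sending anything. This is just a sketch under the same assumptions as above (the window title is ‘Google’, and SendKeys is still doing the heavy lifting), and it uses the plain Alt+Space form of the SendKeys strings – it isn’t part of the configuration we’re sharing, so treat it as optional.

# Sketch only: poll for the Edge window instead of relying purely on fixed delays
$wshell = New-Object -ComObject wscript.shell
start microsoft-edge:http://google.co.uk

# Keep trying to focus the window (by title) for up to 30 seconds
$deadline = (Get-Date).AddSeconds(30)
while (-not $wshell.AppActivate('Google') -and ((Get-Date) -lt $deadline)) {
    Start-Sleep -Milliseconds 500
}

# Alt+Space opens the window menu, x selects Maximize
Start-Sleep -Seconds 1
$wshell.SendKeys('% ')
Start-Sleep -Seconds 1
$wshell.SendKeys('x')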

We can apply this using DesktopNow – in fact, we can do it just after we have started the Edge process through PowerShell. So Edge launches in its typical windowed mode, and then the keystrokes are sent which make the process run maximized. Cool! Not the most elegant way of doing things, but it seems to work, and that’s all we want. In hindsight, it’s not a huge issue if it maybe fails one time every ten or so – it just looks neater when the window is maximized. But I wasn’t stopping until I found a solution, no siree.

Part #6 – branding fun

It’s nice to make your internet kiosk look smart and presentable, which could easily be achieved by picking a suitable desktop background and copying it down onto the endpoint, then forcing it as the wallpaper through Group Policy. But because we’ve started out on this wonderful adventure of discovery, why not take it a notch further? So for this part, we are going to introduce you to an awesome trick that allows you to create specific images “on-the-fly” through your AppSense EM configurations. No more relying on files in file shares and/or copying them down to your endpoints based on Conditions – just create them with an Action.

We’re going to do this from a perspective of dropping an image to be used as the desktop background, but you could extend this to any file you need to leverage into the AppSense Ivanti DesktopNow configuration from outside of it. Essentially you could make the whole thing self-contained for distribution into different environments.

Mr Guy Leech deserves credit for this, and it is documented thoroughly on his blog over here. Basically, in this example, first you pick an image that you want to use as your desktop background. Then run some lines of PowerShell to encode the file into base64 in a text file (change the paths as required below)

$inputFile = 'c:\users\jrankin\downloads\InternetKiosk.jpg'
[byte[]]$contents = Get-Content $inputFile -Encoding Byte
[System.Convert]::ToBase64String($contents) | Set-Content -Path c:\users\jrankin\downloads\encoded.txt

Don’t pick a file that’s too big – I started on an 11MB image file and it nearly crashed my machine. In this case the file was 800KB and ran through quite passably.

Next, you need to copy the entire contents of the output file (in this case, encoded.txt) and paste it into the PowerShell command that we are going to call at Computer Startup (because this seems the ideal time to create files that will be used in the user session for this example). Don’t worry about the huge amount of text you’re copying from the output file into the PowerShell command – it will fit! Here’s an example below – replace the placeholder text with your own encoded data, and change the output path as required.

$encoded = 'Paste the humungous amount of encoded data in here'
$newfile = ( $env:WINDIR + '\InternetKiosk.jpg' )
[System.Convert]::FromBase64String($encoded) | Set-Content -Path $newfile -Encoding Byte

Now, when this trigger runs, the PowerShell will create the file with the image in it from nothing more than the base64 encoding that we have embedded into the command line. Awesomely cool! I’ve made a configuration available at the end, and should you use it in testing, you will get the same image set as the desktop background that I used in my testing – without ever having to locate or download it. The image is created on-the-fly, and then a Group Policy Action sets it as the default desktop background.

I’ve made this Action Conditional on the fact that the file doesn’t already exist, as it is (not surprisingly) quite an intense command. It’s a super-cool trick and is very handy for making configurations completely modular and portable. Kudos to Guy – nice one.

Part #7 – automatic logon

A key part of any successful kiosk implementation is putting together an automatic logon. You don’t want to have to stick a password to the monitor or tell users what the password is – it’s much easier just to let it log on automatically.

The Registry values for automatic logon have existed since the Windows NT days and haven’t changed yet. They are as follows:-

  • HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\AutoAdminLogon
  • HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\DefaultDomainName
  • HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\DefaultUserName
  • HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\DefaultPassword

They are all REG_SZ values. The first needs to be set to “1” to activate it, and the rest should be self-explanatory – populate them with domain name (not FQDN), username and password as required. Obviously as the password is going to be stored in clear text in the system Registry, it pays to make sure this user account does not have any privileges beyond that required to log on to the kiosk. Generally, I restrict the logon access rights for this account to kiosk machines only.
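If you’d rather script these than set them by hand, a minimal PowerShell sketch is below – the domain, username and password are placeholders for your own kiosk account, and Set-ItemProperty will create the values if they don’t already exist.

# Sketch: set the classic Winlogon autologon values (remember the password sits in clear text)
$winlogon = 'HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon'
Set-ItemProperty -Path $winlogon -Name AutoAdminLogon -Value '1' -Type String
Set-ItemProperty -Path $winlogon -Name DefaultDomainName -Value 'YOURDOMAIN' -Type String
Set-ItemProperty -Path $winlogon -Name DefaultUserName -Value 'kioskuser' -Type String
Set-ItemProperty -Path $winlogon -Name DefaultPassword -Value 'YourKioskPassword' -Type String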

Interestingly, until you disable the Lock Screen on Windows 10 (via GPO), the automatic logon doesn’t work. So you will need this disabled in your Group Policy settings. In addition to this, if you’re using Hyper-V, an Enhanced console session won’t run the automatic logon (because that is essentially an RDP logon, under the hood – you need to use a Basic session).

Finally – also a Windows 10 thing – the automatic logon doesn’t work when you log the user out; it only seems to activate after a boot. That’s really annoying, but we’ve got around it by setting the kiosk machine to restart whenever the browser is closed, rather than logging the user out. This is done by using a Process Stopped trigger for MicrosoftEdge.exe in DesktopNow, which then calls a shutdown /r /t 0 command.

Part #8a – lockdown (Start Menu)

Now let’s get into locking down our Edge kiosk so that users can’t break out into the filesystem, other applications or the Windows user interface. The first thing we want to nail down is the Start Menu.

In Windows 7, we could hobble the functionality of the Start Menu via Group Policy Objects and then redirect the user to a custom (blank) Start Menu, which meant all they got was a blank Programs menu and a Log Off command. In Windows 10, though, we have these pesky UWP apps on the Start Menu, which aren’t controlled by the filesystem entries we traditionally associate with the Start Menu, and we have the WinX menu as well (the “right-click” menu under the Start button). How do we deal with this?

You could remove all of the UWP apps from the image at build time, but this means that if you aren’t doing this en masse then you’re already getting into the realms of making your kiosk a separate image from the rest of the desktop estate. But seeing as we’re already leaning on AppSense Ivanti DesktopNow to help us launch Edge, why can’t we use the tooling further to help us here? Bring out DesktopNow’s mighty Lockdown Tool…

If you’re not familiar with the Lockdown tool in DesktopNow’s Environment Manager product, here’s a quick rundown of how to use it. The process hasn’t changed much since that article was penned, so it’s all still perfectly relevant.

Fantastically, this works flawlessly when unleashed on the Windows 10 Start Menu, and also kills off the right-click access to the WinX menu as well (see the red section highlighted in the image below). Go AppSense Ivanti!

Part #8b – lockdown (all the rest)

So to lock down the rest of the shell, and Edge itself, we’ve (not surprisingly) put together a whole host of Group Policy Objects and Registry items that turn our Windows 10 machine into a nailed-down sandbox that only runs a browser.

The configuration we’ve made available at the end has all of these built in. We are applying the user settings only if the user name matches the one we’ve set up (in this example it was JRR\kioskuser, our auto-logon user account), and the machine settings only if the computer name matches our kiosk naming format (anything matching KIOSK*, in this case). I don’t understand why some Edge settings and some shell settings are only available as Computer Configuration items (e.g. prevent OneDrive from being used, turn off web search, etc.). Both of the “driver” Conditions are inserted as Reusable Conditions, so if you want to change them for your own environment, you only need to do it in one place.

I’ve also (in the Edge Process Started node) applied some Lockdown items which get rid of the Share, Web Note and More options in Edge, which only provide entry points into UWP apps like OneNote. If you want stuff like Web Note to work, then remove the Lockdown item pertaining to it. There’s also some stuff to clear Edge data at application close – but I’m not sure if that works as intended. More on that in the next section.

This set of policies is all contained within the AppSense Ivanti DesktopNow Environment Manager configuration (including the background, as from the cool PowerShell above), so you should be able to drop it onto a Windows 10 1607 build with the default set of ADMX files and it should work out of the box. As far as I can tell 🙂

Part #9 – purge the user’s browsing data

Maybe it’s just me but I wouldn’t like to open the address bar and be confronted by a previous user’s web history (especially if they read The Guardian). And there’s also the problem that if a user accidentally saves passwords or other security information into the browser, we don’t want it to persist into the next session. So how can we deal with this?

It (again!) sounds fairly simple, but the key problem we have here is that as soon as Edge closes, we’re initiating a reboot because otherwise the autologon doesn’t work. So if we try to purge the user’s profile at logoff (by marking it as temporary, or dropping the user in the Guests group), it won’t necessarily finish deleting before the system restarts. And because it logs straight on using the same user account, we can’t purge it at boot time, either. We really need to find another way rather than dropping the whole profile, as not only does this not work, but it also means the next logon takes exponentially longer.

Fortunately, there is a way, and it seems to work first time (OMG!). Simply set the “clear browser history at logoff” option within Edge, and then capture and import these Registry values at each logon, to ensure that all evidence of previous misdemeanours is nicely swept away when the application closes.

  • Key – HKEY_CURRENT_USER\SOFTWARE\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\AppContainer\Storage\microsoft.microsoftedge_8wekyb3d8bbwe\MicrosoftEdge\Privacy
  • Value – ClearBrowsingHistoryOnExit DWORD 1
  • Value – CleanDownloadHistory DWORD 1
  • Value – CleanForms DWORD 1
  • Value – CleanPassword DWORD 1
  • Value – InProgressFlags DWORD 0
  • Value – ClearBrowsingHistoryOnStart DWORD 0

I’m not sure why “ClearBrowsingHistoryOnStart” needs to be included, but when the option is changed in Edge both the Start and Exit values are toggled, so set both in order for this to work effectively.
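For reference, a rough PowerShell equivalent of that Registry import is sketched below – in the configuration we’re sharing this is actually done with Environment Manager Registry Actions at logon rather than a script, so treat this purely as an illustration.

# Sketch: set the Edge privacy values in the kiosk user's hive
$privacy = 'HKCU:\SOFTWARE\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\AppContainer\Storage\microsoft.microsoftedge_8wekyb3d8bbwe\MicrosoftEdge\Privacy'
New-Item -Path $privacy -Force | Out-Null
Set-ItemProperty -Path $privacy -Name ClearBrowsingHistoryOnExit -Value 1 -Type DWord
Set-ItemProperty -Path $privacy -Name CleanDownloadHistory -Value 1 -Type DWord
Set-ItemProperty -Path $privacy -Name CleanForms -Value 1 -Type DWord
Set-ItemProperty -Path $privacy -Name CleanPassword -Value 1 -Type DWord
Set-ItemProperty -Path $privacy -Name InProgressFlags -Value 0 -Type DWord
Set-ItemProperty -Path $privacy -Name ClearBrowsingHistoryOnStart -Value 0 -Type DWord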

Final stage – the $64,000 question

That was an awfully long and quite frankly painful investigation into the feasibility of an Edge-based internet kiosk. So if I spin up a new Windows 10 1607 instance, patch it, install the AppSense Ivanti DesktopNow agents, and then apply the configuration we’ve created, will it actually work?

As this is quite a complicated thing to demonstrate working, we’ve recorded a video of it which is embedded below. Of course, if you don’t have the time to listen to my Alan Partridge-esque commentary as I run through an example of this in action, then just skip ahead…

Of course it works. Well, most of it. The keystroke delays to maximize the window are the one bit where I see intermittent issues. Occasionally the “x” keystroke I’m sending gets lost and hops over to the search bar, so if you see an autocomplete suggestion for Xabi Alonso or The X Factor, then that’s what has happened. You can get around this by messing with the delays in the PowerShell that sends the keystrokes, which appear to be dependent on how many other actions are running in there too. Weird, but it’s not a total showstopper, and if you’ve watched the video, most of the time it works absolutely fine.

So yes, in summary, it is possible to set up a Microsoft Edge-based kiosk. But it’s very hard compared to running Internet Explorer in kiosk mode, and without AppSense Ivanti DesktopNow or another high-end endpoint management tool you’d be struggling, to be honest. Even if you lifted the policies to an IE-based kiosk, you’d still have the Start Menu as a big fat entry point for the user to all their other apps. And even though in my Windows 10 deployments it appeared trivially easy to break the Start Menu, when I wanted to do it deliberately, as in this case, I just couldn’t manage it. C’est la vie!

If you want an Edge-based kiosk (or any locked-down kiosk on Windows 10 that doesn’t use third-party kiosk tools), then this is the way to do it. It’s not straightforward, it needs powerful tooling, you’re going to do some scripting, and to top it all off the browser itself ain’t that great anyway (I’m not going to go into how to enable Google as a search provider in Edge – that’s another article for the future). But it can be done. Yes it can. The video proves it, and I’ve shared the configuration that we used (including Guy’s awesome PowerShell to build files on-the-fly) to help you get started easily should you wish to create something like this.

And now to try and get back the four days of my life I’ve spent doing this. Stubbornness is a terrible thing.



Printing in the modern world, part #1 – King Kong, Godzilla, and….printers?


I recently took my kids to see Kong: Skull Island at the cinema. Not the most cerebrally-challenging film I’ve ever watched, but for two eight-year-old boys, the prospect of a whole bunch of monster-on-monster tear-ups was probably quite enthralling. I have to admit that the toil of Kong’s daily routine, which involves beating up enormous monsters from dawn till dusk, left me feeling worn out, but as long as the kids were happy with the film, so was I.

But it did set me thinking. Two hours of monster battles without the need for anything as insignificant as an engaging story will make that happen, I can assure you.

The new Kong is absolutely huge compared to his previous incarnations (and he damn well needs to be, because apparently, he is going to fight the ultimate leviathan known as Godzilla in his next movie outing). Yet his existence has (apparently) remained a secret for many years because of a storm system surrounding his home island. In my mind (because I always insist on applying real-world logic to fantastical works of fiction), it would be very unlikely that something of such an enormous size could remain hidden in this way – it would be much more likely that the governments of the world had chosen to pretend it wasn’t there, hoping that the storm system kept casual travellers from finding out the truth. I mean, why bother dealing with a two-hundred foot beast when you can simply bury your head in the metaphorical sand and pretend it doesn’t exist?

And in the cloud-focused world of today, Kong reminds me of the issue of printing. Tenuous link alert? No, honestly, bear with me. Printing is the two-hundred foot ape in the room (or on the island, if you prefer) when it comes to cloud adoption. Seriously.

We’ve managed to start moving most things into the cloud. Microsoft’s partnership with Citrix around XenDesktop Essentials and the XenDesktop Cloud Service has finally allowed us to start delivering pure Windows 10 client desktops from Azure or your hybrid infrastructure at a reasonable price. We can put all of the component parts into the mix – applications, data, profiles, directory services, etc. – without the need to compromise on the cloud-based vision. But printing – because it deals with something physical that the end-user needs access to, wherever they may be – remains something stubbornly local, and that gives us problems.

Like Kong, printing is ugly. Most of the areas that make up our virtual desktop infrastructure or cloud-hosted services are cool and slick. They’re probably the equivalent of Godzilla – exciting, awesome-looking, laden with interesting and surprising features (atomic breath? How cool is that?)

But no-one gets excited about the smelly ugly brute that represents printing. And that’s a shame, because in the same way that Kong manages to keep the Skull Crawlers at bay, we couldn’t do without printing.

Oh sure, printing has come down in the hierarchy of needs, but it still sits there. We’re all familiar with the “think before you print” tagging and the days of storing vast reams of paper-based documentation have slowly started fading away with the advent of cloud-based services (for the most part, anyway). But even the arrival of e-signing and the like still doesn’t mean that we can do away with printing altogether. There are still occasions when you absolutely need to print (like the time I had to print out a 35-page car lease agreement, sign it on every page, and then scan it back in to send back – that was not an enjoyable task!) And in the main, the fact that we have always had the capability to print means that we need to build it into the solutions that we provide. How many applications do you know of that don’t have a print function? Users usually don’t take kindly to the removal of functionality they always assumed would be present – just look at Windows 8 and how the whole “Start Menu removal” turned out for a good example of this.

But printing also evolves – like the new Kong, it’s still growing (thought I couldn’t milk the hell out of this analogy? Think again.) We don’t just print paper documents any more, and I’m not just referring to glossy photos or brochures. Oh no. We now have 3D printing, and this is something that manufacturing is positively beating its metaphorical chest about. In our cloud-based nirvana, we need to take into account this new wave of printing capability and provide the functionality that enables it.

And that brings us back to the issues we mentioned earlier. When all of our infrastructure is in the cloud (public or private), or in the process of moving to it, how do we deal with the fact that we need to send print jobs back to the print devices on the local network? Like the giant squid and the Skull Crawlers on Skull Island, how do we overcome these challenges?

With all of our workloads sitting pretty in the cloud, with applications, data and user settings all bunched nicely together, we can have a good user experience. But when we invoke the print function, the data that makes up that print job has to come back down the pipe and render on a device that sits on a remote local network. That takes bandwidth, and bandwidth is precious. Not only are (potentially) our user desktops now sitting in that cloudy infrastructure, we’ve got other technology in the mix like VoIP that also consumes bandwidth. So when we send big print jobs (and as everyone knows, print jobs can get very large these days – Kong-large), are we going to have a massive impact on the quality of the user experience, on the quality of VoIP traffic, on the whole process of printing? And that’s without even considering the potential impact of 3D printing jobs – if your ordinary print job can get Kong-large, then 3D printing moves firmly into Godzilla territory. Imagine waiting for a 3D print job to render from the remote datacenter in, for example, a JIT manufacturing environment where everything is time-sensitive? Let’s not go there…

So if cloud is on your radar – public or private – then you’ve got to deal with the King Kong in the room that is printing. And for some, like Kong himself, this can be a big problem. So much of a problem that often people will reconsider sending their workloads to the cloud when faced with the impact of printing. Whether you remediate it before or after your cloud strategy takes shape, it needs remediating. Like Kong, it isn’t going to go away – well, not without a hell of a fight, and one that you haven’t got much chance of winning.

In the movie, Kong was dealt with by a combination of mutual respect and a bit of friendship (and maybe even – spoiler alert – a hint of love). We don’t need to get so cerebral with our printing issues, though. Samuel L Jackson tried to deal – unsuccessfully – with the new Kong by means of napalm, but maybe we can leverage a tool to help us combat the beast that is printing. For napalm, we will be substituting UniPrint Infinity, an enterprise print management solution – I think napalm would have sounded cooler, but that’s what we’ve got 🙂

Over the next couple of months we will be publishing a set of articles showing how we can use UniPrint Infinity and their vendor-agnostic vPad devices to overcome the gargantuan, Kong-style challenges we face with printing via the cloud (although, sadly, I cannot guarantee a Hollywood film analogy for each instalment). We are going to cover several areas with some case studies, which should address all of the issues around this, including, but not limited to, bandwidth compression, mobile and BYOD printing, security, cost control, print policies, print visibility, user tools, workflow, resources and virtualization. The key element, though, is maintaining the quality of the user experience. When we go through digital transformation, when we make our infrastructure cloud-oriented, mobile and agile, we should never sacrifice the user experience on the altar of the latest buzzwords. UniPrint is a key tool in addressing the two-hundred foot ape that is hiding in plain sight in the cloudy environs of your new infrastructure paradise – printing.

Hopefully this should have stirred your interest, just like the Skull Island trailer clearly did to my kids. I can’t promise you a sequel with Godzilla in it (although I will make a concerted effort), but stay tuned for part #2!


Cloud-based roaming profiles in Azure with FSLogix Profile Containers


So, we’ve started the process of cloud-enabling our applications and our data. For most people the future will be hybrid cloud, but for the applications and data that we can put up there, we have in most cases already begun this process. Exchange in the cloud via Office365 is the most popular, but other services – such as IM – are also being added. From a data perspective, we’re all familiar with Enterprise File and Sync (EFSS) services like DropBox, DataNow, OneDrive and many, many more. Applications-wise, preparing for Windows 10 by virtualizing applications is becoming increasingly common, and tech like Cloudhouse, Numecent Cloudpaging and Turbo.net are leading the way in making these applications available anywhere. But of course, sitting between our applications and our data, intersecting them and providing the key part of the user experience, is the profile itself.

What makes up the user profile? Here’s a slide lifted from my recent presentation at the XenAppBlog Virtual Expo where I did a quick dive into the guts of the Windows profile itself.

Now, without applications and data the profile itself is essentially useless, but because of all the configuration files, supplementary data and other key parts of the user experience that sit within the profile, it is vital in tying the applications, data and configuration together. You’d be surprised how many of the support calls we saw during a recent Windows 10 rollout actually dealt with profile-related issues. Here’s another slide from my Virtual Expo presentation (which, on a side note, is a really great free event and definitely worth a few hours of any techie’s time).

Now some software is moving away from storing settings within the profile and tying them to specific accounts, but for the majority of enterprise stuff out there, the profile still has a big part to play. And because of this – and the relentless focus on apps and data that often means the profile is left out as a poor relation – it’s clear that managing our profiles needs to be done better. There’s a wealth of solutions in this area that can deal with this, on a number of different levels, but I’m not about to get into having a bake-off or comparison. What I want to concentrate on is whether we can move these profiles into Azure or another cloud-based system with a minimum of effort and maintenance. From this, we could gain an essentially cloud-native profile management solution, so users could log onto an on-premises device and make settings changes, then pick up a laptop which hasn’t touched the corporate network (or maybe even can’t touch it) and then see their settings updated down onto it.

Now of course, because the profile ties in so closely with applications and data they need to be consistent too, but if you’re adopting tech along the lines of anything like App-V, Cloudhouse, Unidesk (now Citrix App Layering), AppVolumes, FlexApp, Turbo, Numecent, Frame or any one of admittedly loads of vendors, you should already be able to deliver applications to different form factors and connectivity profiles with a minimum of effort. The devices we are using need to be domain-joined in order to activate the user context, but apart from that there shouldn’t be too many pre-requisites. We are going to take some Windows 10 machines built from the same image and try to set up a cloud-based method for dealing with the profiles.

Software

We’re going to use one of the “lighter” profile management solutions to enable this, because we want it to be straightforward. In my mind, it came down to a straight fight between Microsoft User Profile Disks (free), and FSLogix Profile Containers (cheap). I wrote an article here about how to enable User Profile Disks for Windows 10 VDI, but to be honest UPD can be a bit fiddly and error-prone (as you can see from the comments on the article I did!), and Microsoft appear not to really care about them as a technology. Profile Containers uses more or less the same method as UPD but has the added bonus of compatibility back to Windows 7, some helpful management tweaks and features, and the fact that they’re well-supported by the vendor. However, a big factor was that I found it very difficult to configure UPD to connect to the cloud-based file share, whereas with FSLogix (as I shall show you) there is a tweak you can use to get around this fairly easily.

Cloud selector

We’re going to use Azure for this, because we have a Microsoft partnership and free Azure credits 🙂 However, it should hopefully be portable to any of the major cloud players.

Pre-requisites

So, you may be thinking that first of all we need to build a cloud-based Windows file server in Azure, then stand up a federated Active Directory domain controller in there to support authentication, then set permissions and…but no, we don’t need any of that. To be honest, that all sounds like a bunch of work (and potential charges for the server instances) anyway – there is a simpler way. So before we start we’ve set up:-

  • Windows 10 image (CBB, fully patched) with the latest FSLogix Apps agent installed and joined to the domain, deployed to required number of test endpoints
  • An Azure account
  • A working Active Directory (can be on-premises, no need for Azure DCs)
  • Firewall rules configured to allow traffic to your Azure Storage Account share (the crucial one being outbound port 445, as far as I can tell)

And that’s it!

Azure storage accounts

Azure Storage Accounts provide an SMB-enabled file share in Azure intended for application access. We are going to leverage these to hold our user profiles.

Log on to the Azure Portal

Click on Storage Accounts

Click on Add

Set up the options as required, paying particular attention to those options highlighted

The Performance option can be set to Standard or Premium. You may need to test between these two, but I found Standard to be OK in my testing.

Encryption is obviously desirable; in my testing I left it turned off, but you may very well want to activate this option.

Location is very important because it deals with where your user data is being stored. I’ve chosen UK because obviously we are UK-based.

It’s also important to note at this stage that you should always configure Folder Redirection for data folders like Documents, Pictures, Music and the like; otherwise all of that data will end up being written into the Azure file share, which is possibly not the best place to be sending such files. Other folders should be considered for redirection too, such as Downloads and Favorites. One folder not to redirect, though, is the AppData folder – leave that within the profile itself.

Once this is set up the way you want it click the Add button and the storage account will be deployed.

Next you need to set up a file share within your storage account by going to the Files section within the storage account overview screen and clicking File share. At this stage you also need to put a quota on the file share up to a maximum of 5120 GB.
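If you prefer scripting to portal clicks, the share and quota can also be created with the classic Azure.Storage PowerShell cmdlets – this is just a sketch, and the account name, share name and key are placeholders for your own details.

# Sketch: create the SMB file share in the storage account and cap it at 5120 GB
$ctx = New-AzureStorageContext -StorageAccountName 'profilestore' -StorageAccountKey '<your-storage-account-key>'
New-AzureStorageShare -Name 'profilestore' -Context $ctx
Set-AzureStorageShareQuota -ShareName 'profilestore' -Context $ctx -Quota 5120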

The name of the “server” you will connect to is usually profilestorename.file.core.windows.net. To get the command for access, you need to view the file share you have just created and click on the Connect button, which will provide a net use (yes, old school!) command for connecting to the file share using a dedicated access key. Whatever you do, DON’T share the access key with anyone you shouldn’t – the one shown in the image below has been changed, before anyone decides to get up to any mischief with it 🙂

Take a copy of this net use command because you will need it later.

FSLogix setup

Now, to get FSLogix to be able to write the FSLogix profile to the Azure file share we need to make a few changes.

Firstly, you should set up a Group Policy Preference that populates the FSLogix Profile Include List local group with the users you want to deploy this to. You also need to make sure these users don’t have roaming profiles defined in AD or any other profile management tool associated with them.

Next, because FSLogix is going to attach to an Azure file share rather than a standard on-premises file server, there are a couple of extra steps.

Firstly, create a subfolder in your Azure file share to hold the profiles. I’ve made one called Profiles (simply use the Add Folder button as below)

Next, set the Registry value via GPP that tells the FSLogix software where to try and store the profiles for the users.

  • HKLM\Software\FSLogix\Profiles
  • Value – VHDLocations
  • Type – REG_MULTI_SZ
  • Data – \\YOURPROFILESTORE.file.core.windows.net\YOURSHARENAME\YOURFOLDERNAME

Now, the next part is key. In order to be able to access this share, we need to set up a few extra things.

Remember that net use command we saved earlier? You need to copy this into Notepad, replacing [drive letter] with the one you’ve chosen (I used X:). Then save it with a .cmd or .bat extension and place it on the network somewhere it can be accessed by all machines. Here’s the command we used

net use X: \\profilestore.file.core.windows.net\profilestore /u:profilestore IfYouThinkIAmPostingTheRealKeyHereYouAreOffYourRockerMate

Next, set up a Group Policy Object Startup Script to run the .bat or .cmd command as your target endpoints boot up. This will authenticate to the file share as drive X: for the SYSTEM context.

Finally (and this is the kicker to make it work), set the following Registry value so that FSLogix connects to the file share in the machine context rather than the user context.

  • HKLM\Software\FSLogix\Profiles
  • Value – AccessNetworkAsComputerObject
  • Type – REG_DWORD
  • Data – 1
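If GPP isn’t your thing, both of these values can also be laid down with a couple of lines of PowerShell – a sketch is below, with the UNC path a placeholder for your own storage account, share and folder names.

# Sketch: point FSLogix at the Azure file share and make it connect as the computer object
$profiles = 'HKLM:\Software\FSLogix\Profiles'
New-Item -Path $profiles -Force | Out-Null
Set-ItemProperty -Path $profiles -Name VHDLocations -Type MultiString -Value @('\\YOURPROFILESTORE.file.core.windows.net\YOURSHARENAME\YOURFOLDERNAME')
Set-ItemProperty -Path $profiles -Name AccessNetworkAsComputerObject -Type DWord -Value 1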

Once these settings are deployed along with the software and your Azure file storage account, you should be ready to test.

Testing

Log on to the machine with FSLogix installed as a user configured to use it. Make sure the user doesn’t have a pre-existing profile. A new Windows 10 profile should be created. All being well, you should see a folder with the user’s SID and username created within the folder in the Azure share.

Make a bunch of changes to your desktop – anything you like, Pinned Items, Start Tiles, desktop background, whatever takes your fancy. Log out and the user profile should not be saved onto the local machine.

Now log on to another Windows 10 image. You should see your profile replicated exactly onto the second machine! Any changes you make are saved up automatically into your Azure file share and can be accessed whenever this user logs onto a domain-joined Windows 10 system!

Caveats and considerations

Obviously the machines need network connectivity to access the profile. Laptop users may need to have the FSLogix Registry value of KeepLocalDir set to a DWORD of 1, to mitigate situations where a network connection has to be specifically connected to (like hotel wireless).

From a security perspective the user doesn’t have access to the parent folder of their user profile so they should be unable to access other profiles. The X: drive mapped for SYSTEM is visible in the filesystem but is inaccessible without the access key. To remove this, I configured a Group Policy Preference to hide the X: drive for all users.
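If you’re curious what that GPP “hide drive” setting is doing under the hood, it boils down to the long-standing NoDrives Explorer policy – the sketch below assumes X: is the only drive you want hidden.

# Sketch: hide the X: drive from Explorer for the current user
# NoDrives is a bitmask of drive letters (A=1, B=2 ... X = 2^23 = 8388608)
$explorerPol = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer'
New-Item -Path $explorerPol -Force | Out-Null
Set-ItemProperty -Path $explorerPol -Name NoDrives -Type DWord -Value 8388608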

From a security perspective encryption is desirable as well, and you absolutely need to keep the access key away from intruders! You can request a new key for the share via the Azure portal if you feel it has been compromised, and then you simply need to update the script that maps the drive for the SYSTEM account.

Cost-wise, Azure Storage Accounts are much better than standing up VM infrastructure to host this, and a lot less effort is involved. If someone suddenly uploads 30,000 user profiles to one of these, Microsoft may reconsider their usage, but right at the moment, they’re a great option.

FSLogix Profile Containers also support multiple concurrent sessions if you use the ConcurrentUserSessions Registry value (see here for all the Registry keys). They basically use a differencing disk to overcome this.
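As a sketch, that’s just another DWORD alongside the values we’ve already set – check the FSLogix documentation linked above before relying on it.

# Sketch: allow concurrent sessions against the same profile container (uses differencing disks)
Set-ItemProperty -Path 'HKLM:\Software\FSLogix\Profiles' -Name ConcurrentUserSessions -Type DWord -Value 1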

Obviously this tech only supports profiles of the same version (v6 as used here works for Windows 10 AU and Server 2016). If you needed multiple profile versions, I’d recommend creating subfolders under the main Profiles folder. So redirect Windows 7 machines to \\AZURE\SHARENAME\Profiles.v2, Windows 8.1 to \\AZURE\SHARENAME\Profiles.v4, Windows 10 to \\AZURE\SHARENAME\Profiles.v6, and so on, to keep everything neat. You could use Group Policy Preferences to set the Registry value based around the client operating system version with no trouble at all.

I think this is a great way of using Profile Containers to make your profiles truly “cloud-native”. I haven’t tested this at scale or with complicated applications, and you will need to measure the performance before sending it live, but I didn’t notice any appreciable difference here. Domain-joined machines can now log on from anywhere and get their profiles from a highly-available, low-cost Azure-based SMB file share. Give it a try and let me know what you think!

I will be making a video of this that probably will get posted in the next few days, although I am attending the Citrix User Group in Birmingham so may be slightly delayed in this.


How to replace Settings with Control Panel on the WinX menu in Windows 10 Creators’ Update (or customize it in any way, really)


That’s a big fat mouthful of a title right there. But hopefully it tells you what we are trying to do here!

The “WinX” or “Win+X” menu arrived in Windows 8, and has since become a sort of “Power User” menu with shortcuts to various useful functions for the more administratively-inclined amongst us. Personally, I always favoured either “Run” or “Control Panel” as the most useful links in this menu.

Windows 10 1607, proudly showing the “Control Panel” link

All well and good. But Windows 10 Creators Update (1703 build) removed the “Control Panel” entry and replaced it with “Settings”, taking us into the Settings Universal Windows App instead.

Windows 10 1703, and Settings has now appeared instead

Now, I’m well aware – much like what they’re trying to do with GPO and Intune MDM – that Microsoft are quite keen for us to move away from the old-school Control Panel and adopt the Settings app instead. More and more functions are slowly migrating into the Settings app. However, once you dig deeper into the advanced settings, you tend to get redirected to the old Control Panel applets anyway, and some applets will probably take a long time to migrate into Settings (particularly third-party stuff like Flash and Java). And the Control Panel is nice and familiar, having been around since Windows 3.x, in my memory. What’s more, I don’t see why we shouldn’t be able to have a bit of choice – classic Control Panel for the old-school users, funky new Settings for the millennials and Microsofties. Windows used to be about personalization and choice, didn’t it? Let’s not start that debate…

Of course, you could just right-click for the WinX menu, hit Run, type control, press <Enter>, and you’re still there anyway, yeah? But anyone who knows me probably knows that I hate stuff being arbitrarily removed that I find useful, and I’m stubborn enough to keep chasing after it even if it seems impossible. Besides, those eight extra keystrokes are a big productivity loss…

Digging under the hood

So first, we need to understand how the WinX menu is created.

The entries that control the WinX menu are found at %LOCALAPPDATA%\Microsoft\Windows\WinX, and are subdivided into folders with names of GroupX (see below)
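If you want to poke around at this on your own machine, a quick PowerShell listing (just a sketch) shows the same layout:

# Sketch: list the WinX group folders and the shortcuts inside them for the current user
Get-ChildItem "$env:LOCALAPPDATA\Microsoft\Windows\WinX" -Recurse -Filter *.lnk | Select-Object Directory, Name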

Now, the Group folders refer to different areas of the right-click menu, as shown by the colour-coded outlines in the diagram below

So you’re probably thinking, as I did, that you can simply utilize this filesystem and just customize the folders according to what you want on the WinX menu by removing and adding shortcuts to programs, yeah?

Wrong.

To be fair, Microsoft don’t really want to make it that easy – imagine if every application you installed plonked itself into the WinX menu. It could get nasty.

But you can’t even edit the existing shortcuts. I tried to subvert this by changing the properties of one of the pre-populated shortcuts and pointing it somewhere else. Now initially it worked – but as soon as the user logged off or Explorer was restarted, the changed shortcut disappeared. WTF?

Cracking the issue

There appears to be some hashing going on. A guy called Rafael Rivera from WithinWindows did an excellent write-up on how it is done. Here’s a reproduction of what Rafael found.

An approved shortcut…is a .lnk file that has the appropriate markings to indicate to Windows “Hey, I’m special.” The marking is a simple 4-byte hash of several pieces of information. From the .lnk itself, two points are collected:

The link’s target application path/file (e.g. C:\Games\Minecraft.exe)
The link’s target application arguments (e.g. –windowed)

The third ingredient is simply a hard-coded chunk of text, or a salt if you will, to keep things interesting. That string is, literally, “Do not prehash links.  This should only be done by the user.”

With these three strings in hand, Windows then glues them together, lowercases everything, and runs them through the HashData function. But you’re probably wondering at this point, what does it compare to?

Let’s shift our focus to .lnk files. We know them as shortcuts to things. But they’re officially called Shell Links and can store a lot of information on other data objects in Windows. More specifically, they support storing a structure of data called a PropertyStoreDataBlock that acts as a container for arbitrary string or numeric key/value pairs. Yep, the “WinX hash” is stored in here. If you’re curious, the key can be defined as such:

DEFINE_PROPERTYKEY(PKEY_WINX_HASH, 0xFB8D2D7B, 0x90D1, 0x4E34, 0xBF, 0x60, 0x6E, 0xAC, 0x09, 0x92, 0x2B, 0xBF, 0x02);

So to tie it all together, Windows – the Shell specifically – iterates through the .lnk files in each GroupN folder; opens them up; pulls out and concatenates the target path, args, and an arbitrary string; then finally hashes the result. This hash is then compared with the one stored in the .lnk to determine if it’s approved. Rinse and repeat.

If you find that a bit TL;DR: Windows basically hashes the shortcut and stores the hash inside the .lnk file itself as metadata. So if we want shortcuts within those WinX folders to appear on the WinX menu, the hash stored in the shortcut file has to match. If we edit an existing shortcut or create a new one, the hash doesn’t match and the shortcut disappears the next time the user logs in or the shell restarts.

So how do we get the files correctly hashed so we can load them into our WinX folders and have them appear?

Mr Rafael Rivera deserves a whole load of credit – he created a command-line tool called hashlnk which allows us to “patch” our shortcuts. Unfortunately it is a little difficult to find on the Internet, so I’ve stored a copy of it here for those of you who may want to download it.

Firstly, create your shortcut. We want a shortcut to Control Panel, so that’s fairly easy to do

Next, run the hashlnk executable from a command prompt and supply the full path to your shortcut file as the parameter
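As a rough example of those two steps, the sketch below creates the shortcut with WScript.Shell and then runs hashlnk against it – the paths are just placeholders, and remember the hashing itself has to be run from an older OS, as noted below.

# Sketch: create a Control Panel shortcut and then stamp it with the WinX hash
$ws = New-Object -ComObject WScript.Shell
$lnk = $ws.CreateShortcut('C:\Temp\Control Panel.lnk')
$lnk.TargetPath = "$env:WINDIR\System32\control.exe"
$lnk.Description = 'Control Panel'   # this is the Comment field, which can drive the menu text
$lnk.Save()

# Run Rafael's tool against the new shortcut (from Windows 8.1 / Server 2012 R2, not Windows 10)
& 'C:\Tools\hashlnk.exe' 'C:\Temp\Control Panel.lnk'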

Now interestingly, you can’t use the hashlnk executable from Windows 10 itself – it crashes with a dll error. However, as long as you have an older version of Windows that you can create the hashed shortcut from (Windows 8.1 and Server 2012 R2 work absolutely fine), you won’t have any issues.

So now we simply need to take the “patched” shortcut file and pop it somewhere it can be used to populate the user’s WinX menu. The obvious way to do it (because the files are read and used when the shell starts, so it usually has to be done before logon, although more on this later) is to edit the folders in c:\Users\Default\AppData\Local\Microsoft\Windows\WinX, because these will be used to create each user’s profile as they log in for the first time.

We’ve opted, in this case, to remove the Settings shortcut from the Group2 folder, and replace it with our new “patched” Control Panel shortcut. An interesting thing to note is that sometimes the name that appears on the WinX menu may be different from the actual shortcut name – if this happens, edit the shortcut and put the name you want into the Comment field.
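A sketch of that swap is below – the path to the patched shortcut is a placeholder, and the exact filename of the existing Settings .lnk can vary by build, so check the Group2 folder contents on your own image before removing anything.

# Sketch: replace the Settings shortcut in the default profile's Group2 folder
$group2 = 'C:\Users\Default\AppData\Local\Microsoft\Windows\WinX\Group2'
Get-ChildItem $group2   # identify the Settings shortcut name on your build first
Remove-Item (Join-Path $group2 '4 - Settings.lnk')   # assumed filename - adjust to what you see above
Copy-Item '\\fileserver\kiosk\Control Panel.lnk' -Destination $group2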

So now, when any new user logs on to our Windows 10 1703 build, they will get “Control Panel” instead of “Settings”. Here’s an image to show exactly that.

Awesome-ness – we now have Control Panel back where it was before! Sweet!

Taking it further

What you can do is leverage something like Group Policy Preferences to set the Default User folders up precisely the way you want them. But because we are using the default user folder to populate this, we’re essentially making this device-specific. Is it possible that we could find a way to create different WinX folder layouts based around different sets of users? Could we even add additional folders?

To demonstrate this we are going to try and add a fourth folder section with a couple of shortcuts within it. For the first user, we will try and put Google Chrome in there. For the second user, we will put Internet Explorer. Simple enough for a quick test 🙂

Firstly, you need to hash each shortcut using the hashlnk tool and put the “patched” files into a folder that is accessible from the network. Don’t forget you can’t run hashlnk from Windows 10!

Next we need to find a way of copying these shortcuts into the user profile before the shell actually starts, and making the file copy conditional. To do this we could try Group Policy Preferences or Ivanti DesktopNow – many other methods may work, but I tested these two because they were the easiest for me 🙂

In Ivanti DesktopNow, we use a Pre-Desktop node which should execute before the shell starts. We simply create the Group4 folder and then copy “patched” shortcuts into it from a network share dependent on a Username Condition.

In Group Policy Preferences, use a File item in User | Preferences | Windows Settings to copy each of the “patched” shortcuts into the required area, with Item-Level Targeting set up for the specific usernames.
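If you have neither of those tools to hand, the same effect could be approximated with a plain script, provided you can guarantee it executes before the shell reads the WinX folders (a synchronous logon script, for instance). A rough sketch – the share path and usernames are purely illustrative:

set WINX4=%LOCALAPPDATA%\Microsoft\Windows\WinX\Group4
if not exist "%WINX4%" mkdir "%WINX4%"
if /i "%USERNAME%"=="user1" copy "\\server\share\WinX\Google Chrome.lnk" "%WINX4%\"
if /i "%USERNAME%"=="user2" copy "\\server\share\WinX\Internet Explorer.lnk" "%WINX4%\"

I stuck with the two methods above for my testing, though.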

 

Both of these methods work equally well. For our first user, the one who gets Google Chrome, their WinX menu now looks like this

And for the second user who was allocated Internet Explorer as the additional shortcut, their WinX menu now looks like this

That’s awesome – we can use GPP or a third-party tool to give ourselves custom WinX menus for different sets of users on the same devices. Uber-cool!

Summary

The WinX menu is intended by Microsoft to be unchangeable, but sometimes we might want to change it for reasons of productivity, or just to tailor our user environments to keep the users happy. Big credit to Rafael Rivera for his hashlnk tool, which lets us get around these restrictions and use Windows 10 in the way we’re used to using Windows – as an environment we can customize to the needs of our business.

Of course, the next feature update for Windows 10 will probably break this (or worse – change the hashing mechanism!), so as always, make sure you test thoroughly before deploying and check all your customizations when the next iteration of Windows comes along.

The post How to replace Settings with Control Panel on the WinX menu in Windows 10 Creators’ Update (or customize it in any way, really) appeared first on HTG | Howell Technology Group.

WannaCry strikes – it’s Groundhog Day for IT security


Back in the day

In 2003, I was a server administrator working on some of the largest distributed server estates across the world. The concept of applying “patches” for security purposes was at that time in its infancy. Generally, software hotfixes were applied in response to bugs and failings in applications, rather than as part of any kind of security-oriented process.

Of course, we’d had widespread viruses before – mass mailers generally, like Melissa or I Love You. But Microsoft, realizing that Windows in its default configuration was particularly insecure, were slowly beginning to adapt their model to cope with the possibility of a large-scale network “worm” attack – a virus that spread across global networks without any interaction on the part of the user.

At this time Microsoft had begun to issue “alerts” as vulnerabilities in their software were discovered, along with mitigations, workarounds and the patch to fix. The vulnerability in question – the RPC/DCOM flaw addressed by MS03-026, which Blaster would later exploit – was “alerted” on July 16 2003, and the hotfix issued at the same time. However, automated tools for deploying hotfixes were few and far between at the time – Microsoft had SUS, which updated clients only, and SMS, the early ancestor of SCCM – and the “Windows Update” tool was not yet part of the operating system. In fact, short of running the tool manually or via a script, the only real option for applying patches was to tell the user to visit www.windowsupdate.com.

Additionally, enterprises had an aversion to change. Change controls at the time were concerned primarily with minimizing business impact, and security wasn’t a major concern of any kind. We highlighted the vulnerability disclosure to our client as part of our weekly infrastructure meeting – this being the days before dedicated security teams existed – but they continually pushed back on the widespread deployment of an operating system patch. Some rather infamous quotes were thrown out in the meeting that stuck with me – “we’re not entirely sure that this could be used in the way Microsoft are indicating”, “let’s see if anyone else gets hacked first” and my personal favourite “we have antivirus – what’s the problem?”

On August 11 2003 – while we were in yet another meeting trying to impress upon the client the need to deploy the patch, only to be told that they were still pulling together a testing process – the infamous Blaster worm made its appearance, and promptly shot straight up the proverbial backside of our client’s biggest inbound sales call center. From there, it spread across the server estate as well, causing endless restarts and huge floods of network traffic as it tried to launch denial-of-service attacks against Microsoft websites. Our network team panicked somewhat and shut down huge numbers of ports, rendering our response to the attack somewhat limited as SMB and ICMP traffic was all being blocked. The whole organization went into crisis mode, and we were eventually tasked with working through the night applying the patch to the server estate while field services did the same for the endpoint devices.

Without automated tools, we were reduced to using a quick-and-dirty patch script that I produced in order to apply the patch without visiting every server. However, the huge floods of network traffic produced by the worm as it propagated made this very difficult, with the feeling of “treading water” pervading every remote session we initiated. As tiredness set in when we moved to the second day, we found ourselves patching duplicate systems because there was no centralized way to identify a secured endpoint. Eventually, we pushed the patch out to the estate, cleaned the Blaster worm from all of our managed devices, and we were done. It took nearly 48 hours of effort from a six-man team just to secure the servers, and the damage in terms of lost revenue was going to be quite extreme.

The Blaster worm was followed by other famous, fast-propagating worms like Sasser, MyDoom, NetSky and Bagle, and although it had been preceded by notorious events like CodeRed and SQL Slammer, it was the first one that really woke up many of the world’s IT departments to the need for better patching processes and better security software to perform endpoint remediation and threat detection. Not long after, Microsoft released Windows Server Update Services, a fully-integrated Windows patching tool that allowed tight control over the release and installation of hotfixes, patches and service packs. Security software came on in leaps and bounds, with application management and intrusion detection being added to the defense strategies of enterprise environments. Patch management became a monthly cycle of assessment, testing, deployment and reporting that ensured compliance was in place to avoid any future network worms.

The modern world

And, as always happens, slowly we began to let our diligence slip.

“Worms” fell out of fashion, having been the staple of young hackers looking to make a name for themselves by spreading as fast as possible and causing great disruption. Viruses moved on to new forms – keyloggers to steal banking credentials, ransomware to encrypt files and demand payment, bot armies to launch huge distributed-denial-of-service attacks for extortion purposes. Organized crime got in on the game and made it a business, rather than an anarchic pastime. Highly targeted hacks – spear phishing – were the route into networks to steal data and intellectual property. It seemed that the day of the self-propagating network worm was officially over.

Alongside this change, our reliance on applications and Internet services has grown ever greater. Changes to our environments are ever more tightly controlled because operational impact has grown so much greater than it was previously. Moving away from legacy software has become a huge challenge because enterprises are terrified of having an adverse effect on existing revenue streams or processes. Toothless regulations have ensured that often the cost of preventing data breaches is higher than that of paying fines and dealing with bad publicity in the event it does happen. More and more of our devices are internet-connected, yet not designed with security in mind.

And amidst all of this we’ve become somewhat inured to the daily output of security vulnerabilities and vendor disclosures. I’m sure not many batted an eyelid when it was announced that the Shadow Brokers had got hold of a set of exploits allegedly used by America’s National Security Agency in a nation-state hacking toolkit. Microsoft responded by announcing they’d already made patches available for the vulnerabilities. Even though some of the vulnerabilities were critical and affected huge numbers of Windows endpoints, they were merely passed into the neverending security cycle for processing – because no-one would ever release a network worm any more, after all; that’s so 2003, right?

Well, the events of 12 May 2017 indicate that we’re guilty of massive complacency. Hackers weaponized a vulnerability to spread ransomware, and added network worm capability to ensure self-propagation. Within a few hours, 75,000 systems in 57 countries had been infected with ransomware that encrypted all accessible data and demanded a payment to release the files. With hindsight, it seems incredibly naïve to think that no-one would ever consider putting together the insidious, revenue-generating attack vector of encryption ransomware and combine it with self-propagating worm capability – after all, the more people you hit, the more money you stand to make.

What makes this worse is that it appears to have affected UK NHS systems quite badly. Disrupting day-to-day workloads is bad enough, but in healthcare, lives are at stake. A huge number of non-emergency appointments have been cancelled, making this probably the first worm attack that has done significant damage to a country’s actual social infrastructure.

Notwithstanding the fact that this vulnerability – MS17-010 – was a result of a nation state hoarding vulnerabilities for their own cyber warfare and spying purposes, rather than practicing responsible disclosure, the initial alert was issued on March 14 2017. That means we’ve had 59 days to deal with the vulnerability from disclosure to weaponization. Those people referring to it as a “zero-day” attack are completely wrong – almost two months has gone by in which the issue could have been remediated. Now contrast this with the Blaster worm, where there were only 26 days from disclosure to weaponization. In 2003, we had 26 days to respond, at a time when we had little or no automated tools to use, no standard processes for dealing with security, no buy-in from management, and very little experience of such an attack. In 2017, we had 59 days to respond, with a plethora of deployment tools and security software at our disposal, fourteen years of honing security processes, and plenty of anecdotal experience around the issue. So despite having all of the historical data and experience, the tools, the fine-tuned processes, and nearly three times longer to deal with the problem, what was the end result?

Exactly the same. Groundhog Day. We got owned by a network worm – again.

What did we do wrong?

Of course, we can put this in context. I’m sure many enterprises weren’t affected by this because they had a defense-in-depth strategy that provided mitigation. Defense-in-depth is key. The old anecdote tells of the man who leaves his expensive watch in his trouser pocket, and it gets broken when it goes through the washing machine. Who should have checked – the man himself, or his wife when loading the washing machine? In security, there is only one answer that protects your assets – they both should have checked, each providing mitigation in case the other layer fails.

So I suppose that the WannaCry worm – for that is the name that it has been given – has been somewhat reduced in its scope because many of the enterprises in the world have learned from the past fourteen years. But it’s clear that not everyone is on board with this. The NHS, in particular, have suffered badly because they are still wedded to older systems like Windows XP, despite the operating system being out of support for several years now. And this is because of legacy applications – ones that don’t port easily to the latest versions of Windows, forcing them to be persisted on older, vulnerable systems.

But of course there is also a failure to effectively patch other, supported systems that has contributed to this. After Blaster and Sasser, patches were deployed quickly and without fuss, because the pain of the attack was still raw in the memory. Emphasis was placed more on backout plans in the event that application issues were caused, rather than on insisting on rigorous testing prior to deployment. As network worm attacks decreased, the time given to testing increased dramatically. Many organizations now have a three-month window for patching client systems, which, although it has served them well in the intervening time, is not good enough if a vulnerability can be weaponized inside that window. In many cases, the window for patching servers is longer still.

Administrative rights are still an issue as well. Whilst this particular vulnerability didn’t need administrative rights, the scope of potential damage to a system is always increased by the end-users having administrative rights, no matter what the reason. I still see many environments where end-users have simply been given administrative rights without any investigation into why they need them, or any possible mitigation.

Now I’m not going to sit here and cynically suggest that merely buying some software can save you from these sorts of attacks, because that would be a) a bit insensitive given that healthcare outlets are suffering the brunt of this outbreak, and b) because it simply isn’t true. Of course, adopting the right tools and solutions is a fundamental part of any defense-in-depth solution, but it isn’t anywhere near the whole thing.

What can we do better?

Traditional antivirus isn’t great at catching exploits like this, so there is a rethink needed about how you approach security technologies. Bromium vSentry is an example of the “new wave” of security tech that we’ve had great success with, along with things like Cylance. But that’s not enough on its own; although solutions like these offer you a good starting point, it’s important to adopt a holistic approach.

Getting away from legacy systems and legacy apps is paramount. There’s no point having a comprehensive patching system in place if patches aren’t being issued for your core operating environment any more! There are many application remediation technologies that can help with moving incompatible legacy apps to supported – and more secure – operating systems like Windows 10. Once they’re remediated, they are portable, modular, even cloud-based, and can easily cope with the rolling cadence of Windows feature updates. Cloudhouse, Numecent, Turbo.net, etc. – we’ve had great success with these across many different verticals.

Ensuring you have a comprehensive patching process is also vital. It’s not just the OS itself – tech like Java, Flash, Office and the browser are examples of key areas that are often exploited. This has to encompass testing, deployment, mop-up and reporting in order to be fully effective, and if your users are mobile and/or use their own devices, you may need something beyond the traditional tools to accomplish this.

Application management – preventing untrusted code from running – is another key approach. You can use whitelisting technologies like AppLocker or Ivanti DesktopNow to create trusted lists of executable code, and combine this with new security features in Windows 10 like Device Guard and Credential Guard to produce a highly secure, but adaptable, solution. Many malware attacks such as WannaCry would be effectively mitigated by using an approach like this as part of an overall security strategy.

But security and usability don’t have to be polar opposites. Being secure doesn’t mean sacrificing user experience. There are some trade-offs to be made, but precise monitoring combined with robust processes and documentation can ensure that providing the right level of security doesn’t have to come at the cost of compromising user productivity. Striking that balance cuts to the very heart of an effective security policy.

And naturally, there are many other areas a security strategy needs to concentrate on for a true defense-in-depth approach. The risk that one control fails or is bypassed needs to be mitigated by others. Locking down USB devices, removing administrative rights, detecting threat-based behaviour, managing firewalls, web filtering, removing ads – the list of things to cover is very broad in scope. What’s more, all of this needs to merge into an effective business process that not only covers all of the things mentioned above but deals with incident response and learning. When the GDPR regulations come along in 2018, the penalties for security breaches will increase to a level where there is a real financial risk associated with failure. It’s a favourite saying in the security community that breaches are a case of “not if, but when”, and recent events should remind us that this has never rung truer.

Summary

All in all, we should have learned from the mistakes of the past, but the WannaCry worm has shown us that in many areas, we are still failing. Security needs to be done by default, not just when we get exposed. With more and more devices coming online and integrating themselves into our enterprise infrastructure, the potential for malware like this to not only cost us money, but to disrupt our entire society and even threaten people’s lives, should act as a well-overdue wake-up call. Security is an ongoing process, and it’s not something we’re ever going to fix – it’s a mindset that needs to run through IT operations from top to bottom. But if you take a diligent approach, choose the right technologies, implement your processes correctly and make sure you have good people on your team – then there’s no reason you shouldn’t be able to stop your enterprise becoming a victim of the next high-profile attack.

Update – there is a very interesting write-up on how one security researcher’s instinctive response to seeing the behaviour of the malware effectively stopped WannaCry in its tracks due to a poorly-executed analysis-evasion technique. Details here

The post WannaCry strikes – it’s Groundhog Day for IT security appeared first on HTG | Howell Technology Group.

How to create mandatory profiles in Windows 10 Creators Update (1703)


I wrote a comprehensive post a few years ago (God, it’s been that long?) on how to create mandatory profiles. When Windows 10 came along, mandatory profiles had been completely and utterly forgotten about, and simply didn’t work. After a while, they got around to fixing this, and I ended up recording a (rather long!) video about how to create them.

Unfortunately this had some issues around UWP apps, in that they seemed not to work very well when using a mandatory profile. And then, just as I was getting around to having a look at the UWP issue, Microsoft released the Creators’ Update (1703). This, although it ostensibly brought back the capability to use the Copy Profile command to create a mandatory profile, also had the annoying effect of breaking the Start Menu when you used a mandatory profile (thanks to Pim for the heads-up on this issue). So yesterday I set about cracking these issues – we needed to create a mandatory profile and test:-

a) Whether the Start Menu functions

b) Whether the UWP apps function

c) If both of the above still work OK when the user logs in to a second machine

Now, the only officially supported way to create a mandatory profile is by using Audit Mode to create a custom default user profile, and then using the Copy Profile command to move the customized default user profile to a network share. This is the way I’ve attacked it in the new video I’ve recorded. This article is intended to supplement that – and if you choose to do it the old-fashioned way, by copying an existing profile directly into a network share, you’re going to get problems. Believe me, I’ve tried!

Pre-requisites

We need:-

a) a network share to hold our user profile

b) a Windows 10 1703 machine to create the custom default profile on

c) a functional Active Directory environment

d) Ensure this Registry value is set on your devices – HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer\SpecialRoamingOverrideAllowed REG_DWORD value 1

Hopefully you should have all that checked!
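If you’d rather push that last value out from a script or a build task than through Group Policy, the equivalent command is simply:

reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer" /v SpecialRoamingOverrideAllowed /t REG_DWORD /d 1 /f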

The hard work – creating the mandatory profile

Build a Windows 10 1703 machine and enter Audit Mode. You trigger Audit Mode when it reaches the screen that asks you which regional layout you want, and do it by pressing Ctrl-Shift-F3. The machine will then log you in and put up a sysprep prompt – click Cancel on this.

Once logged on, customize the environment how you want your mandatory profile to look. How much or how little you do really depends on what you are using the mandatory profile for. If you are using it as a base for a UEM product, then you probably don’t want much customization. If you’re using it for a kiosk or similar device, you may want a lot. Some of the things I find it handy to set are the browser home page, the browser search provider, “show file extensions” in Explorer, and changing the default view in “This PC” away from Quick Access – it’s entirely up to you how much or how little you customize. Here’s how much I did – complete with “odd” icon placement so I can tell if it has worked 🙂

Next, create an XML file with the following text:

<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">

<settings pass="specialize">
<component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<CopyProfile>true</CopyProfile>
</component>
</settings>
<cpi:offlineImage cpi:source="wim:D:/sources/install.wim#Windows 10 Enterprise" xmlns:cpi="urn:schemas-microsoft-com:cpi" />
</unattend>

Change the settings to suit your environment – specifically the cpi:source path (pointing at your install.wim) and the Windows 10 edition name.

Make a note of the XML file name and path – I normally copy it to C:\unattend.xml

Next, open an administrative command prompt and run the following command

%windir%\system32\sysprep\sysprep.exe /generalize /oobe /reboot /unattend:c:\unattend.xml

where c:\unattend.xml is the path to the XML file you created.

Now, this will restart the system and complete the installation, copying your user profile into the default user profile area.

After this, I normally apply all patches and join the domain. Once this is done, log on with a domain account that has access to the network share where you intend to store the profile, and open up the Advanced System Properties dialog. Click on the Advanced tab, then Settings under User Profiles. Highlight the Default Profile, and click Copy To.

Enter the path that you wish to copy to, change “Permitted to use” to Authenticated Users, and check the box for “Mandatory profile” (not that it appears to do anything, but hey, check it anyway).

This copies the Default Profile across to our file share – but it’s not done properly, sadly. Firstly, we need to set the permissions correctly. The filesystem needs to have the permissions set as below:-

  • ALL APPLICATION PACKAGES – Full Control (this is mega-important – without this set the Start Menu will fail)
  • Authenticated Users – Read and Execute
  • SYSTEM – Full Control
  • Administrators – Full Control

Once you have set these permissions on the parent folder ENSURE that you cascade them all the way down the filesystem, and also MAKE SURE that Administrators is the owner of all the files and folders as well.
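If you’d rather script the ACLs than click through the Security dialogs, an icacls sketch along these lines should match the list above – the share path is just an example (note the .v6 extension, which is the profile version that Windows 10 1607/1703 clients look for), and “ALL APPLICATION PACKAGES” resolves by name on English-language systems (elsewhere you can use its SID, *S-1-15-2-1, instead):

icacls "\\fileserver\profiles$\mandatory.v6" /inheritance:r
icacls "\\fileserver\profiles$\mandatory.v6" /grant "ALL APPLICATION PACKAGES:(OI)(CI)F" "Authenticated Users:(OI)(CI)RX" "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F" /T
icacls "\\fileserver\profiles$\mandatory.v6" /setowner "Administrators" /T

Run it from an elevated prompt, then spot-check the resulting ACLs in the GUI to make sure everything cascaded as expected.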

Next we need to set the Registry permissions as well. Open up regedit.exe, select the HKEY_USERS hive, and choose the Load Hive option from the File menu. Browse to the network share where you copied the files to, and open up the ntuser.dat file that is in there. Give it a name, and you will see the named hive loaded under HKEY_USERS.

Right-click on the root of the hive you have loaded and select Permissions. The permissions in here will be wrong. Change them to match those set below exactly.

You must ensure that the RESTRICTED group is removed, otherwise you will be unable to log on and will get an Access Denied error. When you apply these permissions, you will get an error saying “unable to set security in some keys” – just ignore this.

Now, search the Registry hive for any instances of the username and delete them. If you want to be really thorough, search for the SID of the user too and remove any references to that.

After this I normally delete any Registry keys which I think are unnecessary. Policies keys can definitely go, I also tend to remove APPDATALOW from \Software and the (huge amount!) of Google references you will find within the Registry. It’s up to you how much you do here – certainly there are lots of redundant objects related to gaming, XBox and SkyDrive that could easily be taken out.

Once you’ve done this, highlight the root of the loaded hive again and choose File | Unload Hive from the menu in regedit.exe, otherwise you will lock the file and it will be unusable – VERY IMPORTANT!
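Incidentally, if you prefer the command line, the hive load and unload can be scripted too – the share path and temporary hive name below are just examples, and the permission changes, username searches and key deletions described above still happen against the loaded hive in between:

reg load HKU\MandProfile "\\fileserver\profiles$\mandatory.v6\ntuser.dat"
rem ...edit permissions, remove username/SID references and prune keys under HKEY_USERS\MandProfile...
reg unload HKU\MandProfile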

After this, you can highlight the Registry transaction logs in the root of your file share and delete them – they’re not needed.

Next you can trim down the filesystem. Because the Copy Profile command ignores the AppData\Local and AppData\LocalLow folders, you shouldn’t have too much to do here. I normally just get rid of \AppData\Roaming\Adobe.

This usually takes the size of the mandatory profile down to just over 1MB, which is about right.

For the penultimate steps, rename the ntuser.dat file to ntuser.man (why the hell did the Mandatory check box not do this bit????), and then set a test user to use the mandatory profile in AD or GPO.
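If you’re scripting the build, the rename is a one-liner, and dsmod can set the test user’s profile path without opening ADUC – the paths and user DN here are purely illustrative, and note that the profile path is entered without the .v6 extension, because Windows appends the version suffix itself:

ren "\\fileserver\profiles$\mandatory.v6\ntuser.dat" ntuser.man
dsmod user "CN=Test User,OU=Staff,DC=contoso,DC=com" -profile "\\fileserver\profiles$\mandatory"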

But there is one final step we need to take to ensure that UWP apps work in our mandatory profile. You need to set a GPO that allows roaming profiles (because mandatory profiles are simply read-only roaming profiles) to deploy UWP apps. The GPO in question is “Allow deployment operations in special profiles”, under Computer Configuration | Administrative Templates | Windows Components | App Package Deployment – if this isn’t set, no UWP apps will work (they will just hang indefinitely).
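For scripted or imaged builds, my understanding is that this policy maps to a value under the Appx policies key – treat the value name below as something to verify against your own GPO reports rather than gospel:

reg add "HKLM\Software\Policies\Microsoft\Windows\Appx" /v AllowDeploymentInSpecialProfiles /t REG_DWORD /d 1 /f

A domain GPO is still the cleaner way to set it, of course – this is just the equivalent for a quick test.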

Once you’ve got this set, you can now test your mandatory profile – and it should work perfectly. If you want to reduce the logon time, then removing as many UWP apps as possible from the image will be your best bet – see many of my other articles for guides on how to do this.

Summary

I’m hoping this is the last time I have to go down the mandatory profiles route. But I’m willing to bet it’s not. Welcome to Windows 10 and the fast release schedule!

The post How to create mandatory profiles in Windows 10 Creators Update (1703) appeared first on HTG | Howell Technology Group.

Diesel fuel, speed, and achieving enterprise nirvana by combining FSLogix with Insentra PIA


Ok, what?

Exactly – you’re probably thinking to yourself, just what can he be on about now? I thought this article was supposed to be about enterprise IT? Well yes, but I do so like a good analogy. I recently managed to pull one off using King Kong and Godzilla, so now I’m going to talk about cars. Bear with me…

I’m a big car fan. Not that I can actually do anything with cars besides drive them, I’m not the sort of car fan who tunes his own engine or anything like that. But I do like a good sports car. As my wife can surely testify, I have wasted lots of money on expensive (mainly German) sports cars.

In the car world, as in many other areas, there are often accepted norms that don’t get challenged. For years, diesel cars were thought of as economic and environmentally friendly, much more so than cars that ran on petrol (or gasoline, depending on which side of the pond you’re sat). And (especially in the UK) we’ve been force-fed a mantra of “speed kills” for a very long time, contributing to a mass criminalization of ordinary motorists to feed huge revenue streams for police forces.

In the IT world, sometimes we also come across similar “accepted norms”, because everyone is talking about them, to the extent that we almost accept this as gospel. For a while now, I’ve been telling anyone who will listen that FSLogix Office365 Containers are the best way to manage OST files, Skype for Business, OneDrive and other cached files within pooled VDI or RDSH environments, or that FSLogix Profile Containers is an easy, low-maintenance method of controlling your user’s profiles. They’re not the only way, but I definitely think that from a perspective of simplicity, management, cost-effectiveness, performance and user experience, they’re the best around.

However, what we don’t want to do, is hype something up to the extent that it becomes an accepted norm without being challenged. “Surely that doesn’t happen in the IT world?” You’d be surprised. Humans are easily influenced in all walks of life, and if you hear something often enough it can generate not only a lot of buzz, but an almost unthinking concurrence. It’s like the perception of diesel cars as environmentally-friendly and the whole “speed kills” movement – they become embedded in the collective consciousness and accepted as hard facts.

IT has sometimes suffered from over-hyping of technologies and solutions that have not really been suitable for particular environments. As an example, let’s look at virtual desktop infrastructure (VDI), which has been hyped to high heaven and pushed down our throats at every single opportunity over the last ten years. How many of us have seen VDI proposed – or even deployed – as a solution to problems that it really wouldn’t help with? I’ve personally seen enterprises deploy VDI where they have already had computing power situated on every desk, or where their applications are very much unsuited to the virtualized delivery model. And worse, not only does it not address the problems at hand, it makes them worse by introducing great layers of complexity to the managed environment that previously didn’t exist. Now this isn’t an anti-VDI tirade – I have seen it perform excellently in many businesses and bring great value to users – but the point I am trying to make is no-one should adopt a technology simply because all and sundry are talking it up as the magic bullet that will solve all of your problems and take you to the next step of desktop evolution. Cloud is another prime example – it can do great things, but it’s very important to choose horses for courses and not get caught up in throwing money at a new paradigm just because everyone you talk to appears to be embracing it as the technological equivalent of paradise itself.

Let’s jump back to cars for a second. How did we come to the perception that diesel-powered engines were so much cleaner and environmentally-friendly than petrol-driven ones? Even though the evidence at hand seemed to contradict this perception – I’ve always referred to diesel cars as “smokers” due to the occasional cloud of fumes they will spit out – the general consensus, for a long time, always seemed to be that diesel was better.

In the case of diesel, it was a poor understanding of emissions and their effects. Legislation was focused very much on CO2 as the environmental threat, and petrol engines emit more CO2 than diesel. Diesel also involved less refining effort, and therefore there was less environmental pressure associated with its actual production.

However, over the years, we’ve become aware that diesel engines produce more nitrogen oxides and sulphur dioxides, which are associated heavily with poor urban air quality. And then there was the whole “diesel-gate” scandal, as Volkswagen were caught out putting “defeat devices” into their cars to make them reduce their emissions to legal levels only when in a testing situation, and pumping out up to forty times the legal amount when in normal driving mode. An interesting aside, from a technical perspective, is that the “defeat device” would treat a situation where all non-driving wheels were moving but there was no pressure on the steering wheel as an indication that the car was under test, and engage the “reduced emissions” mode based on that set of parameters.

So the reason for diesels becoming known as environmentally-friendly was down to, in the main, a poor understanding of the actual effects of emissions on the environment. Naturally, your average man in the street would not have an intimate knowledge of the science around air pollution, and it was only when experts in the field brought their knowledge up-to-date (and when VW’s charade was exposed), that the perception of diesel as a cleaner fuel began to unravel. See below for an example of a now-irony-laced advert for their diesel engines…

You can equate this with IT. How many IT departments have an expert-level understanding of the internals of the devices, operating systems and software under their management? In a lot of cases, IT departments are responsible for operational administration, and a great deal of their understanding of how their systems work on a deeper level is obtained by reading blogs and articles by vendors or community industry experts. And the consensus among vendors and the community can change over time, just like it did with the understanding of the impact of diesel fuel on the environment. Let’s take Folder Redirection as an example here – there are experts within the IT community who will happily argue the case either for or against Folder Redirection as a solution, with very compelling standpoints from both sides. What IT departments need is to be given the right information, relevant to their environment, which allows them to understand their systems better – and receive this information in a clear, concise manner.

And let’s go back to cars again to discuss the second analogy I raised – speed. Somewhere in the 90s, there was a conscious decision made (at least in the UK) to come up with the mantra “speed kills”. Unlike the diesel engines, there was no science behind this that could be erroneously interpreted. Speed is speed – you put your foot down, you go faster, a concept that just about anyone can easily grasp.

Now anyone with half a brain can concede that driving a car at 90mph in a 20mph speed zone on a housing estate in the evening with kids and pets around is pretty goddamned stupid. It’s out of examples like this that the – initially noble – idea of “speed kills” was born. But it slowly grew into something far beyond this, an arms race against the motorists themselves, entwined deeply into the media machine. Turn on any British TV channel and you can see shows like “Police Interceptors” or “Police Camera Action!”, complete with traffic officers standing next to the carriageway looking at a pile of crumpled metal, and declaring “this accident was caused by speeding, pure and simple”.

Initially some people (myself probably included) railed against this and actually asked the government of the day to provide statistics to back this up. The problem was, this was a huge job. Collating data from individual police forces and stitching it together involved a lot of time and effort, and there was no central agency that could provide the resource to do it.

Eventually though, the government commissioned an organization called the Transport Research Laboratory, based in Crowthorne. This was to be their official vehicle research centre, where they collate data on traffic accidents as well as slinging all manner of cars around high speed test tracks, crash rooms, drag strips and banked curves in the midst of the lush forests surrounding sunny Bracknell. And it was here that they decided to do some huge studies on crash analysis and speed, taking into account some enormous sample sets.

Now, excessive speed makes any accident more violent, but does it specifically cause the accident in the first place? Take a guess at what percentage of accidents the TRL actually found were caused directly by speeding? 40%? 30%? (It’s interesting how high people’s initial guesses will go, indicating a high level of penetration for the government’s “speed kills” message) In actual fact, the rate was around 4%. If you then factored in “loss of control” accidents such as black ice, wheels coming off, etc. the percentage of accidents where speed was a contributory factor was under 2%. The government’s own research proved, quite tellingly, that the highest cause of accidents on the road is quite simply what police forces lump under the catch-all definition of “driving without due care and attention” – failure to judge path or speed of other road users, inattention to approaching vehicles, failing to spot a hazard, etc.

Does the 1.9% justify the increased usage of speed cameras? Probably not, but police forces are still expanding their camera presence, and it’s mainly because of revenue, not safety. Speed cameras can generate thousands of pounds an hour when active, and several individual cameras in Britain posted earnings of well over the £1 million mark last year! But the TRL also did a study on speed cameras, and one of the things it looked at, amongst many others, was the presence of speed cameras and their effect on accidents. And what did the statistics come up with? Shockingly, precisely the opposite of what the anti-speed campaigners were hoping for – they actually found that speed cameras made accidents more, not less, likely to occur. At road works, the accident rate increased by a mammoth 55% with the presence of speed cameras. On open motorway, the cameras increased it by a not-inconsiderable 32%. This study was done over a period of eighteen months and covered 4.1 million vehicle kilometres – a pretty large and comprehensive sample set.

Before I go off on a real rant, let’s get back to the point I was trying to make. IT is the same, in that much of the time, it is very difficult to collect, collate and compare operational metrics, for reasons of time, skills and resource. The British public were lucky that an organization like the TRL was commissioned to pull together the statistics for road safety (even if its findings were unceremoniously ditched for coming up with the “wrong” conclusions!) The IT world isn’t as politically motivated, but what it does have in common is that we’re often left to our own devices when it comes to gathering the numbers required for monitoring performance, testing systems and building business cases. If you get inaccurate data or metrics, then you’re going to be in trouble.

So what we need is good, relevant, concise data about our environments, and we need it delivered without a huge overhead in terms of skills and management that disrupts our daily workloads. And if we can get data delivered of this quality in this way, we can then make proper informed decisions about the latest buzz we’re hearing in the IT community. Such as people like me telling everyone that FSLogix is the solution to performance issues on Office365 and dealing with the whole Windows profile in general 🙂

Is there a way we can get data of this quality, focused to our specific environmental requirements, without the huge effort?

Well, there are a plethora of monitoring solutions in the world. But the problem I generally find with them is that they all involve a significant investment in setup, learning and ongoing tuning. It’s almost like asking the general public to do the TRL’s research themselves – it can be a huge task. And if you don’t have a deep understanding of the internals of modern computer systems, then you’re going to be in the position of possibly looking in the wrong places for your data anyway, much like when we concentrated on CO2 as the harmful exhaust emission gas without taking into account all of the other poisons being spewed out.

This is where – yes, I’ve got to my point eventually – a company called Insentra comes in, with an offering called Predictive Insights and Analytics (PIA for short). I’m so glad they didn’t call it Predictive Insights, Trends and Analytics 🙂

The importance of metrics

Many – I’d say up to 85% – of the projects I’ve worked on recently have little or no metrics covering the whole environment. The metrics that are being monitored (if any) are infrastructure-focused – networks, storage, databases. But in today’s world it’s the holistic view, the components that contribute to end-user experience, that needs to be monitored. Very few places monitor the entire estate with the same level of detail reserved for back-end infrastructure components.

And even though there are great technologies out there that can provide this level of monitoring across the entire estate, there are problems. Technology like Lakeside SysTrack offers unparalleled visibility into your environment, but it needs to be purchased, implemented, supported, and continually tuned. This is a not-insignificant investment of time and resource – and as such there needs to be a compelling business case that can justify the investment beyond the scope of the current project. This is often the stage where interest fades – companies love the technology, they see the need, but they struggle to build the business case that will turn interest into investment.

Insentra’s PIA offering ticks a number of boxes that will make it so much easier. Firstly, they build the dashboards for you, using their own tech on top of a base Lakeside SysTrack core, and can tailor this to as many – or as few – metrics as you require. Secondly, it can all be cloud-based (although it doesn’t have to be), reducing the overhead of deploying the solution either as a PoC or in production. Thirdly, you can consume the offering for a specific period of time, so if you just need to invest in a monitoring solution for the specific duration of a project, it doesn’t have to be something you’re signing up to pay for on a continual basis. But the most important point is that monitoring doesn’t become an ongoing headache for the IT department – it just works. And that frees your staff up to innovate, to find new ways for IT to make the business more profitable and productive – not leave them fighting fires all of the time.

Combining features of FSLogix and Insentra gives you a lot of freedom to concentrate on developing IT services and enabling your users. Management complexity of applications and profiles is stripped away, performance problems with the Office365 suite are removed, and you get proactive alerts on the health of your environment and continual justification of whether your infrastructure is performing as expected. You don’t need to build a business case – the service itself can build it for you 🙂

We can easily spit out this “auto-justification” by pointing the PIA engine at FSLogix itself to find out exactly how much of a business benefit you’re getting. PIA also calculates a cumulative metric called “service quality”, which is an attempt to quantify that measure so important to every enterprise out there – the sometimes-intangible known as “user experience”.

Insentra did what they do, and collated custom dashboards that run within their managed service to provide information on KPIs for systems both with and without the FSLogix solutions installed. There was no need for any learning of the product on our part – we simply told Insentra what we wanted, and they came back with the dashboards for us to plug our test systems into. As easy as that – one agent installation, and we are good to go, and now we can define the KPIs we want to see on our dashboards.

With regard to understanding Outlook performance both with and without FSLogix in use, we plumped for three fairly straightforward indicators.

  • Logon times – this is usually the #1 KPI of your average user, so a very pertinent metric to measure
  • Outlook launch times – especially in non-persistent environments, ensuring solid launch times of key applications is also vital, and email is one of the most commonly-used ones out there
  • Outlook latency – we need to measure the performance of our key applications in-session, so this KPI is appropriate because it measures the latency between the Outlook client and any other system outside of the session. Because FSLogix maps a VHD, this should be seen as “local” to the session and show lower latency, which would translate into better application performance

The stats

Here are some of the insights that PIA provided for us with regard to FSLogix, and this is an example of just how Insentra and FSLogix together could also work for your enterprise.

Lab 1 – XenApp 7.6 published desktops on Server 2012 R2, 200 users over 5 day monitoring period

Logon times

We can see a full 31% improvement in the logon time KPI.

Outlook launch times

There is a 51% improvement in Outlook launch time KPI.

Outlook latency

45% improvement with regard to the Outlook latency KPI.

Service quality / user experience

There was an overall improvement of 14% in total service quality, and within this increase, we observed most of the improvements around disk and latency.

Lab 2 – Microsoft RDSH published desktops on Server 2012 R2, 450 users over 5 day monitoring period

Logon times

There was a 23% improvement in the logon time KPI.

Outlook load times

Outlook load time KPI was improved by 23%.

Outlook latency

Latency KPI showed an improvement of 13%.

Service quality / user experience

The overall service quality / user experience metric improved by 49%.

So now we’ve got all the stats, it’s up to us how we interpret them, but what we can see is that with the FSLogix solution enabled we have improvements in all of our key performance indicators. Given that lab environments with test users are often quite simple and uniform, it’s encouraging that we see improvement at all, because that improvement can only increase when the solution is scaled to large numbers of “real” users. Interestingly, the improvement in service quality on RDSH (49%) was way higher than that observed on XenApp (14%). This is probably because XenApp itself makes some improvements around the handling of system resources and the user experience, so there was less of a marked increase when FSLogix was applied to a XenApp endpoint. But XenApp or not, we can see that the performance is better in every area, which is an important first step in building out our business case.

The real world and “user entropy”

Once you move away from test environments – which are by their nature very clean and uniform – into the “real” world, you start to see more noticeable improvements. Some of the FSLogix guys refer to this phenomenon as “Windows rot”, the unpredictable nature of endpoints once they start layering swathes of applications and processes on top of the underlying system. I prefer the term “user entropy” – the decline into disorder you get as users are let loose on the provisioned environment. Whichever term you prefer, it definitely is something you will become only too familiar with. Take a look at the before-and-after statistics we’ve collected in this way with a real-world production deployment of the FSLogix solution…

…and you can see what we are talking about. We can observe percentage improvements of between 50% and a mammoth 95% in the customer-designated KPIs. That is the sort of incredible improvement that makes a huge difference not just in productivity but also user faith in the enterprise environment, and more than justifies the effort put into the solution which has made this possible.

Summary

In much the same way as the challenges from members of the public to justify the “war on speed” made the government commission big surveys from the TRL, IT departments need to be challenging software vendors to show exactly how their solutions make the improvements that they’re claiming as benefits.

But we want to throw that challenge back the customer’s way a little bit. We are going to challenge you to put FSLogix’s software and Insentra’s PIA service together in your environment, on a small scale or large, and have a look at the benefits that you get. If you don’t see those improvements, then lift the solutions back out. It’s as simple as that.

What you will get is:-

  • Simplified management of applications and profiles
  • Improvement of in-session performance and user experience
  • A solution to the common problems of Office365 deployments (such as poor performance of Outlook, Skype for Business and OneDrive)
  • Monitoring of specific KPIs customized to your enterprise’s needs
  • A cloud or on-premises based monitoring solution that requires no infrastructure, training or specialist skills
  • Proactive alerting of issues within your environment
  • The potential to back up business cases with real-time data and analytics
  • More time for IT teams to spend on innovating and enabling the business

If you don’t see these improvements, then you simply don’t continue to consume the services or use the software. Straightforward, simple, no strings attached. And that’s the way modern IT environments should be.

So to draw the last drop of blood out of my motoring analogy, you want your IT environment to go from being something like this…

…to something like this…(although, it should be stressed, not with a comparable price tag!)

In my humble opinion, adopting technologies like those of FSLogix alongside services like Insentra’s PIA is a large step forward to achieving this level of enterprise nirvana. It’s a win-win – what’s stopping you getting involved?

 

The post Diesel fuel, speed, and achieving enterprise nirvana by combining FSLogix with Insentra PIA appeared first on HTG | Howell Technology Group.

Changes to Windows 10 servicing model


One of the changes precipitated by the latest Windows 10 version, the Creators’ Update, is that this version (1703) will be the last version that fits into what we knew as the Current Branch/Current Branch for Business model.

The first thing that’s going to change is the “branches” nomenclature. We won’t be talking about servicing branches any more; instead they will be referred to as servicing channels.

What we previously referred to as feature upgrades have now become feature updates. This is despite the fact that feature updates are whole new copies of the operating system – the terminology now rather cynically classes them as updates when in fact they are complete operating system upgrades. To be fair, this change came in before the Creators’ Update, but it is worth bringing it to your attention as part of the wider changes happening.

Windows Update has now become Windows Update with Unified Update Platform (UUP). The changes in UUP are intended to make downloads smaller and less time-consuming, introducing the concept of “canonical” (full) and “differential” builds. This is now extending onto other devices such as Windows mobile, Xbox and HoloLens as well.

The Current Branch (CB) and Current Branch for Business (CBB) channels will change – making 1703 the last version of Windows 10 that is delivered through the servicing “branches”. When the next version is released, we will see a change in the nomenclature that will now refer to CB (or Release Ready) as Semi-Annual Channel (Pilot), and CBB (or Business Ready) as Semi-Annual Channel (Broad). I don’t know whether it’s the official usage, but I’ve already started referring to these two easy-reading roll-off-the-tongue acronyms as SACP and SACB.

Interesting that Microsoft have now admitted that the CB users (of which many are home, consumer users) are now actually really a “pilot” group whose purpose is to identify issues that can be fed into the release channel. How consumer users react to being classified as glorified beta testers is unclear, but it is certainly refreshing that Microsoft have admitted that the purpose of the former CB channel is as a pilot test group.

The use of the term Semi-Annual was a little confusing at first, as my interpretation of the word “semi” had always, in my head at least, been “partly”. However, it was pointed out to me on Twitter (thanks Rob!) that the official meaning of “semi” actually translates to “twice”, although that’s very much a literal interpretation that in my opinion has changed somewhat over time. If Microsoft meant it to be “twice-annually”, then I don’t see why “bi-annual” couldn’t have been used instead – but after thinking about it, “bi-annual” would have nailed them down to a twice-yearly release, whereas “semi-annual” remains woolly enough to allow them to alter this schedule as they see fit. Maybe I’m being a little cynical (hide your surprise!), but time will tell.

Now, assuming Microsoft are going to adopt the twice-yearly release schedule, and stick to it, the servicing window (starting from the 1709 release) should look something like this, as Microsoft are now telling us there will be a specific 18-month support lifecycle.

As noted in the image above (which was lifted straight from one of my presentations), the grace period of 60 days at the end of a support window for a release has now been removed, fixing the servicing window at 18 months from release. The GPO which allowed you to delay feature updates by a further 35 days may still work, but I’m willing to bet it will be deprecated pretty shortly, so you have exactly 18 months to fit in all of the required testing, remediation, vendor engagement and implementation that goes along with moving to a new feature update.

Concurrency-wise, this means that if Microsoft stick to the six-month release schedule, we should at any given point have no more than three Windows 10 versions in support, as shown in the diagram below.

It also means that if you choose to upgrade, for instance, from 1709 to 1809, the 1809 release will have passed through SACP before 1709 goes out of support, ensuring that you can move in jumps of two releases at a time, which will be good news for enterprises that are struggling for resources within operations and testing teams.

Finally, Long-Term Servicing Branch (LTSB) still sits outside of this and as of now (June 2017) is still referred to as such. Whilst some are predicting an LTSB release in September of this year to coincide with some updates to Server 2016, the official word of Microsoft (again, as of this moment in time) is that the next LTSB release is not due until 2019.

Summary

Microsoft’s changes to the terminology used within the servicing model are somewhat unwelcome, given that it took a very long time to both divine and understand the nature of the previous terminologies they introduced us to.

However, the move towards a fixed 18-month supported servicing window – if it is stuck to – is very welcome. Prior to this, there was a lot of confusion and misunderstanding as to precisely how long a servicing window would be. Whilst still a little aggressive for my liking, the 18-month window at least ensures that we know where we stand and how we need to approach the management of our Windows 10 platform.

I will stand by my assertion that virtualizing applications and user settings is key to being able to maintain an agile Windows 10 environment. Coupling virtualization with the requisite mindset changes and additional monitoring required will allow you to keep on top of it, even if you choose to go the SACB route.

The post Changes to Windows 10 servicing model appeared first on HTG | Howell Technology Group.


Launching locally-installed applications from Citrix Storefront using keywords

$
0
0

Recently I did some work in an environment where users had been migrated from a thin-client environment where they used hosted applications through Citrix Storefront, onto a Windows 10 fat-client environment with a number of their applications locally-installed. The challenge we faced was getting users to use the local copies of applications rather than the ones hosted on Citrix XenApp, as they’d grown quite accustomed to using Storefront during the years of thin client technology.

Now, I hear you say, this isn't a technical issue, it's a training one. As the venerable Ed Crowley once said, "there are seldom technological solutions to behavioural problems". I agree – to a point. However, Citrix Storefront (and many other methods of application presentation, to be fair) is an excellent way of delivering applications to users, particularly in mixed environments where some are hosted and some are installed locally. It fits nicely into the "app store" model that users are intimately familiar with from similar interfaces on Apple, Android and (dare I say it?) modern Windows devices (I should also give honourable mentions to products like the VMware View Portal, Software2 Hub and RES IT Store, which do an excellent job of very similar functionality). If users are used to this model for discovering, provisioning and launching applications, we shouldn't discourage them from it.

So, in this situation, shouldn’t we try to find some way of launching (if available) the locally-installed copy of certain applications when invoked from the Citrix Storefront interface? Particularly for this example, it offered a way of reducing load on the Citrix XenApp servers and improving user experience quite seamlessly.

The solution to this is the use of Keywords within Citrix Storefront, so let’s explore it a bit here.

Keywords

Keywords are a kind of tag that you associate with published applications or desktops within the properties of the resources in XenApp or XenDesktop. When launched or enumerated, the keywords invoke specific behaviours that you can use to customize the delivery solution.

The one we are interested in is the Prefer keyword. This works by iterating through the local Start Menu and searching for the string specified alongside the Prefer keyword. If a match is found, that local application is launched instead of the published application; if no match is found, the original published resource is launched.

Naturally, this means you can search for partial matches or folder paths, but bear in mind that the first one matched will be launched. If you’re searching for specific versions (and bear in mind that Microsoft Office is notorious for changing the shortcut name strings from version to version), it’s best to be as precise as possible so as to avoid possible screw-ups.

Testing

We are going to do this with some Windows 10 1703 clients and both a XenApp 6.5 and a XenApp 7.x server (7.13 at the time of writing – it's Friday and I can't be bothered with a 7.14 upgrade just now). The Storefront instance we're using is actually installed on the XenApp 6.5 server, but it doesn't matter where it sits.

Personally I’d use Storefront 3.x for this. I think it can be done with 2.x but I’ve read a number of articles saying it was a bit hit and miss and upgrading to 3.x solved the issues. If you’ve got a 6.5 environment, you can easily upgrade to Storefront 3.x, although if you’ve got 1.x Storefront you will need to uninstall it and then reinstall the latest version, as there is no upgrade path from 1.x to anything except 2.0, which is notoriously difficult to find for download (at least in my efforts).

We are going to publish an instance of Outlook 2007 on XenApp 6.5 and an instance of Internet Explorer on XenApp 7.13. The idea is that Outlook 2007 should launch locally rather than from XenApp, as it is installed on the Windows 10 clients, but Internet Explorer should launch on the XenApp server for one client (which we’ve removed IE from), and launch locally on the other (which, unsurprisingly, we haven’t removed IE from). If you’re wondering why I used Office 2007 – I don’t want to use my entitlements for Office 2016 for testing, and I happened to have an old copy of 2007 lying about 🙂 The principles remain the same, though!

Also, if you’re wondering how we removed IE from one of the Windows 10 instances, this video should tell you what you need to know.

Preparation

Firstly, let’s publish the applications and set them up with the required keywords. You need to check the Start Menu of the client desktop for the string you’re trying to match. For Outlook 2007 on Windows 10, this appears to be “Microsoft Office Outlook 2007”.

So if we’re being precise (and it generally pays to be), we will add the string “Microsoft Office Outlook 2007” as the prefer keyword to our Outlook 2007 published application on XenApp 6.5. If you’re just looking for any locally-installed version of Outlook, you could just use “Outlook”. Test thoroughly though if you’re using partial matches. The keyword on XenApp 6.5 sits as part of the application properties under Name.

Note also the use of the second keyword “Mandatory”. This is to ensure that a user with an entitlement to this application gets it added to their Receiver subscriptions without having to search for it themselves.
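For reference, the full string you end up with in the published application's properties should look something like the line below. I'm writing the syntax from memory, so double-check the exact casing and quoting against the Citrix documentation for your version before relying on it:

KEYWORDS:prefer="Microsoft Office Outlook 2007" Mandatory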

Now we are going to do the same on the XenApp 7.13 server, for our Internet Explorer published application. This time, of course, the search string will be “Internet Explorer”. But we need to bear in mind that on Windows 10, a shortcut to Internet Explorer doesn’t exist on the Start Menu by default, so firstly we will need to add one. The problem with this is you can’t add shortcuts to C:\ProgramData\Microsoft\Windows\Start Menu\Programs unless you elevate, and doing so directly from Explorer is difficult. The solution is simply to log on, create one on the desktop, then move it to the Programs folder. It should then show in the Windows 10 Start Menu (as below). Ideally you’d want to do this in your base image or through Group Policy or another tool to make sure it is there, otherwise the search will fail!
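If you'd rather script the shortcut creation (for a base image, a build task or a startup script) than copy it around by hand, a minimal PowerShell sketch run from an elevated prompt might look like this – the shortcut name and target path are the standard Windows 10 ones, but verify them against your own build:

# Create an all-users Start Menu shortcut for Internet Explorer (run elevated)
$shell   = New-Object -ComObject WScript.Shell
$lnkPath = Join-Path $env:ProgramData 'Microsoft\Windows\Start Menu\Programs\Internet Explorer.lnk'
$lnk = $shell.CreateShortcut($lnkPath)
$lnk.TargetPath = "$env:ProgramFiles\Internet Explorer\iexplore.exe"
$lnk.Save()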

Then we simply need to change the keywords setting on the published application on XenApp 7.13 to match our shortcut for Internet Explorer.

Now, simply access the Citrix Receiver on the Windows 10 clients. IMPORTANT NOTE – this only works with the native Receiver, the one you can install. The Receiver for Web, at this current moment, doesn’t support keywords.

Once I’ve logged in to the Citrix Receiver (or accessed it through pass-through authentication), I can now see the two published applications presented to me. And yes, I did brand my Receiver with the VMware logo. Don’t you know yet that I’m a complete child?

Another key point to make here is that if a user had an existing subscription to one of these published applications before the keywords were added (i.e. they had added it into their Favourites list within the native Receiver), the keywords will not take effect until the subscription is removed and re-added. You can do this manually or remove it through a script (more on this later this week), as I think subscription data is not stored within the user profile (but as I said, more on this later).

Once the user has successfully subscribed to the application with the keywords (and if you’ve used Mandatory, it should be done automatically for them once any existing subscriptions are removed, if there were any), they can now test and launch it. When the user invokes the Outlook 2007 application from Storefront, you should see that instead of running it on the XenApp session host server, it launches locally

And the same for the browser, if we launch it on the machine with IE installed…

…but then if we jump to the machine with IE removed, we can see it runs a hosted version (the lack of the “launch Edge” tab shows it is a non-Windows 10 version that is running)

Naturally, if the user logs on to a machine where the Outlook 2007 app and shortcuts aren’t present, it also runs the hosted copy. So this is an ideal solution for users that roam around lots of different and disparate devices but need access to the best-performing available version from a single pane of glass. I like it! Main drawback is having to remove subscriptions each time you edit the keywords, but if you make them Mandatory this should only bite you once.

Summary

I think doing things like this is cool – allowing users to use a single interface to launch all of their apps. It’s a shame it only works through native Receiver currently – Receiver for Web support would make it really neat. What would be ultra-cool is detecting the user state and tailoring an application launch type specifically to that – think using hosted apps when at a site with a poor connection and the like. I’m going to have a look at this in a few days, work permitting.

I’m also going to record a video of how to do this and a few other XenApp bits, as well as a follow-up article about editing subscriptions and maybe have a look at branding the Receivers. I think Citrix has been a bit neglected around here in favour of Windows 10 recently – let’s bring it back to parity.


QuickPost – resolving “there was a problem sending the command to the program” message when opening Microsoft Office documents


Just a quick post today because I’ve recently re-encountered an error I first had a run-in with back in 2011. I’ve seen it happen a number of times and it does seem to occur in environments where profile management tools are in use more than others, but I really can’t speculate on the actual root cause.

Problem

What normally happens is you will see a particular document type for a particular Office application begin throwing errors when opening documents directly from Windows Explorer. Last instance of it I had, it was affecting .xlsx files in Excel 2010 and .rtf files in Word 2007. If you double-clicked one of these files from a Windows Explorer window, the handler application would launch but the file would fail to open with the error shown below.

Once this fails, though, browsing through the application to the file’s location and opening it from within the app works just fine. This isn’t a viable workaround, though – users will soon get very frustrated with having to do this.

Resolution

Firstly, you need to identify the particular file extension(s) and application(s) you're having a problem with. This shouldn't take too long – just recreate the error you (or the user) are seeing and note which file extensions and Office applications it occurs with. Usually it's just one, although I have seen it affecting up to three in some cases. In the last example I had of it, it was Excel 2010 .xlsx files and Word 2007 .rtf files.

Next, marry up the file type extension with a file type handler. For instance, the .xlsx extension maps to Excel.Sheet.12, whereas the old binary .xls extension from pre-2007 versions (although still very much supported) maps to Excel.Sheet.8. Sadly, the numbers in these handlers don't map neatly to Office versions (that would be too easy!), so sometimes it is a case of educated guesswork. The .rtf extension in Word maps to Word.RTF.8, so most of the time they are pretty easy to spot. Make a note of these as you will need them. As a guide, the older binary file types generally resolve to 8s, the Open XML formats introduced with Office 2007 resolve to 12s, and there are a few 16s kicking about for new file formats in the latest Office versions.
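If you'd rather not guess at the handler, you can read it for a given extension straight from the Registry (or via the old assoc command); a quick example from PowerShell:

# Which ProgID handles .xlsx? Returns something like Excel.Sheet.12
(Get-ItemProperty -Path 'HKLM:\Software\Classes\.xlsx').'(default)'

# The same lookup via the classic command-line tool
cmd /c assoc .xlsx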

Now we need to do some Registry editing, preferably on-the-fly. We need to remove and edit a bunch of HKLM entries before the user completes login, so you could use Group Policy Preferences, or a script, or Citrix WEM, or Ivanti DesktopNow or RES (which are now part of the same company, still not got used to that), or any one of a bunch of third-party tools. You could simply make the change once to resolve it, but in my experience this error has a habit of reappearing after a period of time, so it's best to keep it applied somehow.

For simplicity, we will quickly run through how to do it in Group Policy Preferences. This goes in Computer Configuration | Preferences | Windows Settings | Registry. We will use the example above, so we want to reset the settings for Excel.Sheet.12 and Word.RTF.8. Change the target Registry values and file paths as necessary for the problematic file type and application.

Firstly you need to set up Delete rules as below. Delete these KEYS completely

HKLM\Software\Classes\Excel.Sheet.12\shell\open\ddeexec

HKLM\Software\Classes\Word.RTF.8\shell\open\ddeexec

and delete these VALUES (not the entire key as in the above two lines)

KEY HKLM\Software\Classes\Excel.Sheet.12\shell\open\command VALUE command

KEY HKLM\Software\Classes\Word.RTF.8\shell\open\command VALUE command

Then we need to Replace some settings within these keys as well. Set the (DEFAULT) value in these keys to that specified. The value needs to point at the right numerical path for the version of Office you’re opening the file type in – usually 2003 (11), 2007 (12), 2010 (14), 2013 (15) and 2016 (16). Note they missed out 13!

KEY HKLM\Software\Classes\Excel.Sheet.12\shell\open\command VALUE (Default) DATA "C:\Program Files (x86)\Microsoft Office\Office14\EXCEL.EXE" "%1"

KEY HKLM\Software\Classes\Word.RTF.8\shell\open\command VALUE (Default) DATA "C:\Program Files (x86)\Microsoft Office\Office12\WINWORD.EXE" "%1"

Just for posterity, here’s a list of the settings in the Group Policy Preference that would handle this. Don’t forget the two REPLACE actions in here are operating on the (DEFAULT) value, even though it appears as blank
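If you'd rather apply the same fix with a script – at startup, or through whichever UEM tool you have to hand – rather than Group Policy Preferences, a rough PowerShell equivalent of the Excel.Sheet.12 changes might look like the sketch below. Adjust the handler names and Office paths to match the versions that are causing you grief:

# Remove the DDE launch entries for the problematic handler
Remove-Item -Path 'HKLM:\Software\Classes\Excel.Sheet.12\shell\open\ddeexec' -Recurse -ErrorAction SilentlyContinue
Remove-ItemProperty -Path 'HKLM:\Software\Classes\Excel.Sheet.12\shell\open\command' -Name 'command' -ErrorAction SilentlyContinue

# Point the (Default) command value straight at the executable
Set-ItemProperty -Path 'HKLM:\Software\Classes\Excel.Sheet.12\shell\open\command' -Name '(default)' -Value '"C:\Program Files (x86)\Microsoft Office\Office14\EXCEL.EXE" "%1"'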

Once you've got this in place, you should see that your problems opening certain file types from the Windows Explorer interface are gone forever. This issue can also occasionally manifest when trying to open links or attachments from email, so it does have the potential to be a pretty major pain in the proverbials. It's easy enough to work around, but the workaround isn't great, and it really interrupts user experience and increases frustration.

More posts coming soon as we finish our adventures in Citrix Cloud Land – in fact, we might have a bit of a series on the go. Stay tuned.

 


HTG staff named in Citrix Technology Advocate and VMware vExpert programs


The I.T. community has always been a hugely important part of the information technology industry. In an industry where solutions can be engineered in myriad different ways, using software from many different vendors, having a healthy and thriving community where people share ideas, techniques and guides is absolutely vital.

The I.T. community is composed of thousands of online blogs, YouTube channels, product forums and busy websites like Spiceworks and Experts Exchange, all of which are out there doing great work in the various sectors of the I.T. industry. And it's not just online resources – there are hundreds of user groups (examples I'm familiar with include MyCUGC, TUG and VMUG) where community volunteers share presentations on their experiences and pass on their knowledge to their peers. My old dad could never understand this – he thinks it's completely self-destructive to publicize your knowledge, secrets and experience. But the I.T. community is something we have all relied on for years to help us solve problems and overcome persistent issues, and it's only right that we put something back in to keep the cycle of success going.

At HTG we’ve always been very keen on community, and that’s why we have our company blog, The Enterprise Eightfold Path, which shares solutions, technical know-how, and useful guides to overcoming common I.T. problems. But I.T. community isn’t just about blogging – it covers presenting at user groups and conferences, writing books and guides, evangelizing new technology to peers, working with vendors to iron out problems, develop new features and produce examinations, helping out on forums and providing documentation, writing scripts to automate common tasks…the list of community ventures is almost endless. I think it’s very appropriate at this time to call out Carl Webster as a shining example of a brilliant community contributor, who has produced fantastic PowerShell scripts for documenting Citrix, Microsoft and VMware resources that I know consultants all over the world use on a regular basis.

Big vendors like Microsoft, Citrix and VMware have been very keen to recognize and encourage the contributions of their community contributors through award programs, and even smaller vendors like Ivanti (formerly AppSense and RES), Veeam and Nutanix have got in on this, realizing that having a healthy community is integral to developing and steering a successful line of products. Some programs (such as Citrix Technology Professional and VMware EUC Technology Champion) are quite exclusive and insist on a high bar of acceptance, whereas others (such as VMware vExpert and Microsoft MVP) are much broader in their acceptance criteria. Think of one set as Special Forces, the others as the Marines 🙂 But no matter how high the bar is set, they are all about recognizing the excellence and selfless contributions of their members to the community, which is almost always done for no financial reward of any kind. Once on these programs, members are afforded the opportunity to help steer and shape the products they work with based around their experiences, which is greatly rewarding for those who work with these technologies on a day-to-day basis.

Here at HTG we believe that a key to being the best in what we do is to make sure that the people we employ are key contributors to I.T. community, not only ensuring that our staff will go the extra mile but that we have the contacts within the community to allow us to adapt to any specific set of requirements we have. A thriving and engaged community means that questions can be answered more quickly, more accurately and more honestly than without it, and we can both benefit from and contribute to the pool of knowledge that exists within it.

HTG consultants James Rankin and Kevin Howell contributing to the community at the UK Citrix User Group in London

So we are pleased to announce that three of our staff have recently been named on vendor award programs within the community, which helps us bridge the gap between customer and vendor so that the solutions we provide more closely fit the needs of the enterprise.

  • James Rankin has been awarded Citrix Technology Advocate for 2016/2017 and VMware vExpert for 2017, adding to the AppSense Community Advisor award he has held since 2012
  • Kevin Howell has been awarded Citrix Technology Advocate for 2017 and VMware vExpert for 2017
  • Jane Cassell has been awarded Citrix Technology Advocate for 2016/2017

It’s a great achievement to make it onto these award programs, so here at HTG we’d like to extend our best wishes to all new and returning CTAs and vExperts on the recent intakes – well done to all!

https://blogs.vmware.com/vmtn/2017/08/vexpert-2017-second-half-announcement.html

https://www.mycugc.org/blog/community-champions-cta


Citrix printing is solved. Isn’t it?


I made the point in an earlier article that printing can often be the 64,000-pound gorilla in the room. But if recent experience is anything to go by, there's still a long way to go before Citrix architects, consultants and administrators catch up with this. Printing, compared to the blue-screening, spooler-crashing days of years ago, is often regarded as "solved" by people designing Citrix solutions. "Most people don't print any more," I hear a lot. "The Citrix Universal Print Driver has gotten rid of most of our issues," is another refrain. Printing is often the last thing on the mind of those designing and implementing a Citrix solution, because there is a perception that it isn't the critical bugbear that it once was. And this attitude, not surprisingly, is breathtakingly naïve.

Firstly, to say “people don’t print” is a dangerous path to go down. Just because there is less of a demand on a service doesn’t mean that the service itself should be deliberately deprecated. The very fact that people are now printing less often means that when they do print, they’re printing something pretty vital (the last thing I had to print was my lease car agreement, and I would have been extremely annoyed had I not been able to do so!) And there are always departments and users that rely specifically on printing functions.

Also, we're moving into a brave new world where the users who connect to corporate resources are increasingly outside of the traditional boundaries of the corporate network. Even the old concept of an Active Directory domain doesn't encompass the enterprise any more. Active Directory has become much less a monolithic, ring-fenced collection of managed devices and users and much more a federated identity and authentication service that transcends specific physical locations. And more often than not, we have users with mobile devices and tablets connecting to our corporate resources. These users will need to print as well, with a minimum of fuss and complexity.

It doesn’t just need to be simple to print though – it needs to be simple for all users, on all devices, to be able to access advanced printing properties such as duplex, colour choice, stapling, hole-punching, the list can go on and on. It’s probably unfair, but users in the year 2017 will compare a company-provided hosted infrastructure directly to the devices that they use at home. Your Citrix solution has to perform on a par with this – in terms of available features, as well as ease of use. It’s comparing apples with oranges, but the key to acceptance lies in the experience of the user, and that means that this is a cross architects, consultants and administrators all have to bear.

And it's not just features and ease of use that will come into play here – when talking about a hosted Citrix environment, another area where users will always draw a comparison is speed. To cope with this perception, it's very important to do as much work as possible to make functions in the Citrix environment as responsive as those in a traditional fat-client environment. Speed of printing is a particular bottleneck here, and an area where the local fat client environment can be considerably faster than a hosted one.

Cost is also a major factor. In the current climate, reducing cost, where possible, is always a win for IT departments who increasingly find themselves pitched against outsourced or cloud-hosted solutions in the name of saving money. Cutting down on the volume and overhead of printing helps cut back on wasted capital. And it applies to up-front expenditure as well – you don’t want to have to rip out a fleet of mixed printers and replace them with a single vendor just to put together your print service either.

Cost also runs down into maintenance. You don’t want to put in a complicated, many-moving-part solution because that’s going to improve user experience at the expense of increasing the cost of resources needed to keep the service running. But at the same time (and this is typical of modern IT’s “must have it all” approach) the solution, while easy to maintain and simple to use, must support high availability. And in an ideal world, can it also provide statistics to support maintenance and planning?

And no article about IT can miss out the word of the moment – security. Printing, due to the same complacency that relegates it to the “last thing to think about” when designing Citrix infrastructure, is also dismissed by those raising concerns about IT security. But it still needs to be taken into account when securing an environment. How do you make sure print jobs aren’t rendered on the wrong printer? How can you make sure that sensitive personal information and intellectual property is released for printing only for people authorized to deal with it? How can you ensure that print jobs can’t be hijacked “on-the-wire”? If the need arises, can you track who printed what, where and when, and is this information easily queried?

It all seems like a hell of a lot to think about for something as "simple" as printing, something that many people would have you believe has been solved a long time ago, no? The fact of the matter is, printing was simple, when it was done simply, in simple environments. But the cloudy, multi-device, bring-your-own world we live in now has moved firmly on from the previous methods of operation. It's not that printing has become complicated – users and their expectations have evolved firmly upwards. We're no longer providing a simple "print button" that renders a text-based print job onto the device in the corner of a dedicated office. We're committed to providing an agile, adaptable, feature-rich, simple and secure printing service to users on any device in any location, and that's why printing isn't so easily "solved" any more.

So how do we “solve” printing in a Citrix environment in 2017?

Well, we’ve been solving it using UniPrint Infinity. UniPrint’s solution ticks all of the boxes we’ve mentioned earlier, and also does it in a simple, easy-to-deploy method that allows you to deliver extra printing features without an associated uptick in cost.

  • Virtual print queue, using VPAD technology, means printer mapping and queue/driver management becomes a thing of the past.
  • Mobile Printing allows anyone, whether within the corporate structure or not, to print from any device or location. Print servers are no longer needed for remote or cloud-based users.
  • Spool files are compressed by up to 95%, drastically increasing the speed of printing and making associated user experience improvements.
  • High availability module ensures maximum printing uptime.
  • Printer profiles allow advanced features to be wrapped up and deployed to specific users or groups of users.
  • Two-factor authentication and 256-bit encryption ensure maximum security for print services.
  • Statistics and archiving module provides tracking and visibility.
  • Vendor-agnostic technology – no need to standardize print devices to take advantage of the features.
  • All settings are remotely deployed through Active Directory Group Policy Objects, simplifying management and overhead.

This is just a subset of the available features – the solution can be engineered in many different ways to provide a fully-customized print service that meets the customer requirements. Combining the features together to produce secure, flexible and modern printing solutions is the key strength of the product set.

 

So in summary, it’s important for us to start addressing the popular misconception that printing – and Citrix printing – is an easy process since we got through the dark days of Terminal Server crashes and buggy drivers. Designing the print service of today is difficult, complex and intensive – but if you choose the right supporting technology, you can make your life so much easier.

[sponsored blog]


Using Skype for Business with a mandatory profile


I've had some email comments recently regarding Skype For Business 2016 with mandatory profiles. When you use Skype for Business and log in for the first time, it needs to install a personal certificate into the user profile. As those of you who have used mandatory profiles before will know, personal certificates can't be used in mandatory profiles, as they are not intended to be shared. This means that users with mandatory profiles who try to use Skype for Business will be unable to sign in.

Technology like Ivanti DesktopNow and Ivanti RES used various methods of profile spoofing to avoid this issue, but for simple implementations, adopting third-party technology isn’t really an option. People who use mandatory profiles for kiosk or access area machines may well want to give the users the option to sign into Skype for Business, but also to purge the profile from the machine at log off.

There have been a couple of articles I have seen referenced by Microsoft with regard to this issue, but there is no solution offered (see this article for an example). However, it is possible to use Group Policy to achieve this.

The Windows operating system gets the profile type from a Registry value called State stored in HKLM\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\[SID] (where [SID] equals the security identifier of the user). If the State is detected as a DWORD decimal value of 5, it (usually) indicates a mandatory profile. By manipulating this value with logon and logoff scripts, we can trick the operating system into thinking the profile is non-mandatory during the session (allowing the Skype for Business certificate to be installed), while still purging the profile at logoff because by then the operating system sees the profile as mandatory again. There are a few steps needed to achieve this:

  1. Set the ACLs on the \ProfileList key

Users need to be given access to the ProfileList key in the Registry. The easiest way to do this is to use a Group Policy Object to set permissions for Authenticated Users. Set up a GPO and set the values under Computer Configuration | Windows Settings | Security Settings | Registry to the below

KEY – MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList

Once this GPO is applied and propagated, you should see Authenticated Users have Special permissions to that Registry key.
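If you'd rather script the permission change than use a GPO (for a base image, say), a rough PowerShell sketch is below. The rights granted are my interpretation of what the logon/logoff scripts need (read the key, write values on the per-SID subkeys), so validate it in a test environment rather than taking it as gospel:

# Grant Authenticated Users read access plus the ability to set values on the key and its subkeys (run elevated)
$path = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList'
$acl  = Get-Acl -Path $path
$rule = New-Object System.Security.AccessControl.RegistryAccessRule -ArgumentList 'Authenticated Users', 'ReadKey, SetValue', 'ContainerInherit', 'None', 'Allow'
$acl.AddAccessRule($rule)
Set-Acl -Path $path -AclObject $acl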

2. Ensure “Logon script delay” is set to 0

This is the bit I missed out of the video and had to append to the end 🙂 From Server 2012 and up, logon scripts don’t run at logon, they run five minutes afterwards (yes, I know). So set the delay to 0 via Group Policy to make your logon scripts run when you expect them to. The policy is in Computer Config | Admin Templates | System | Group Policy and is called Configure Logon Script Delay, set it to 0.

3. Set up a GPO with logon and logoff scripts

You need to set up two PowerShell scripts, one for logoff and one for logon, and apply them via a Group Policy Object. The logon script should look like this:-

$USERSID = ([Security.Principal.WindowsIdentity]::GetCurrent()).User.Value
set-variable -Name key -Value "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\$USERSID"
$state = (Get-ItemProperty -Path $key -Name State).State
if ($state -eq 5) {Set-ItemProperty -Path $key -Name State -Value 9000}

The script reads the user SID, reads the State value from the user, and if it is equal to 5, changes it.

Note we are setting the State value to 9000. The OS will still interpret this as non-mandatory, but it will be a specific value that couldn’t happen by accident. This is to ensure that when we are resetting the profile to mandatory at logoff, we don’t accidentally run it on a profile that wasn’t mandatory to begin with. Checking for this unusual value (9000) will make sure it only resets on accounts we’ve already changed.

The logoff script is very similar and should look like this:-

$USERSID = ([Security.Principal.WindowsIdentity]::GetCurrent()).User.Value
set-variable -Name key -Value "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\$USERSID"
$state = (Get-ItemProperty -Path $key -Name State).State
if ($state -eq 9000) {Set-ItemProperty -Path $key -Name State -Value 5}

Essentially it is just working in reverse, checking the State value and if it is 9000, resetting back to 5.

4. Deploy and test

Once these GPOs propagate, a user logging on with a mandatory profile should be able to use Skype for Business without getting a certificate error. I have recorded a video of the process in action here.


QuickPost: Multiple service failures on boot with no errors logged on Citrix XenApp servers


So, just a quick post to document an issue we experienced recently regarding service failures on boot, without any errors being logged, on Citrix XenApp servers.

The problem manifested itself, in this instance, on PVS targets running XenApp 6.5, although it can be replicated on other XenApp versions as well (and may well affect XenDesktop too, especially given that it is now the same code base), and doesn’t appear to be tied to anything to do with Provisioning Services. After an overnight scheduled reboot, we noticed that various critical services had stopped on the target devices. The most common ones are listed below:-

  • Citrix Independent Management Architecture
  • Citrix XTE Service
  • User Profile Service
  • AppSense User Virtualization Service
  • Sophos Antivirus Service
  • Network Store Interface Service

Now, I'm sure the more savvy amongst you can probably guess the culprit area straight away, but we didn't quite grasp the correlation from the off. One thing common to these service failures, though, was that they all involved critical components. If the Network Store Interface Service didn't start, the Netlogon service would fail and the PVS target was unable to contact AD. If the Citrix or User Profile services failed, the server would be up but users would be totally unable to log on and use applications. If AppSense was down, policies and personalization would not be applied. Whatever failed, the net result was disruption to, or failure of, core services.

Another common denominator was the fact that in most cases, there was nothing written to the event logs at all. Occasionally you would see the Network Store Interface Service or the User Profile Service log an error about a timeout being exceeded while starting, but mainly, and almost exclusively for the Citrix and AppSense services, there was literally no error at all. This was very unusual, particularly for the Citrix IMA service, which almost always logs a cryptic error about why it has failed to start. All the other Citrix services could be observed starting up, but this one just didn't log anything at all.

Now in the best principles of troubleshooting, we were aware we had recently installed the Lakeside SysTrack monitoring agent onto these systems, ironically enough, to work out how we could improve their stability. So the first step we took was to disable the service for this monitoring agent within the vDisk. However, the problems persisted. But if we actually fully uninstalled the Lakeside systems monitoring software, and then resealed the vDisk, everything went back to normal. It appeared clear that the issue lay somewhere within the Lakeside software, although not necessarily within the agent service itself.

Now what should have set us down the right track is the correlation between the Citrix, AppSense, Sophos and User Profile services – that they all hook processes to achieve what they're set up for. We needed to look in a particular area of the Registry to see what was being "hooked" into each process as it launched.

The key in question is this one:-

HKLM\Software\Microsoft\Windows NT\CurrentVersion\Windows

And the value is a REG_SZ called AppInit_DLLs

In a nutshell, every DLL specified in this value is loaded by each Microsoft Windows-based application running in the current logon session. Interestingly, Microsoft's own documentation on this (which is admittedly eleven years old) makes the following statement: "we do not recommend that applications use this feature or rely on this feature". Well, that is clearly either wrong or widely ignored, because a lot of applications use this entry to achieve their "hooking" into various Windows processes.

In our instance, we found that the list of applications here contained Sophos, Citrix, AppSense and a few others. But more importantly, the Lakeside agent had added its own entry here, a reference to lsihok64.dll (see the detail from the value below)

lsihok64.dll c:\progra~1\appsense\applic~1\agent\amldra~1.dll c:\progra~2\citrix\system32\mfaphook64.dll c:\progra~2\sophos\sophos~1\sophos~2.dll

Now the Lakeside agent obviously needs a hook to do its business, or at least some of it. It monitors thousands of metrics on an installed endpoint, which is what it’s there for. But it seemed rather obvious that the services we were seeing failures from were also named in this Registry value – and that the presence of the Lakeside agent seemed to be causing some issues. So how can we fix this?

If you remove the entry from here, the Lakeside agent will put it back when it initializes. That in itself is not a problem, but we need the hook never to be present at boot time. There is an option to remove it entirely from within the Lakeside console, but this loses various aspects of the monitoring toolset. So how you approach the fix depends on whether you're using a technology like PVS or MCS, which restores the system to a "golden" state at every restart, or whether your XenApp systems are more traditional server types.

If you’re using PVS or other similar technology:-

  • Open the master image in Private Mode
  • Shut down the Lakeside agent process
  • Remove lsihok64.dll from the value for the AppInit_DLLs
  • Set the Lakeside agent service to “Delayed Start”, if possible
  • Reseal the image and put into Standard Mode

If you’re using a more traditional server:-

  • Disable the “application hook” setting from the Lakeside console
  • Shut down the Lakeside agent process
  • Remove lsihok64.dll from the value for the AppInit_DLLs
  • Set the Lakeside agent service to “Delayed Start”, if possible
  • Restart the system

There is a caveat to the latter of these – with the “application hook” disabled from the console, you will not see information on application or service hangs, you won’t get detailed logon process information, applications that run for less than 15 seconds will not record data, and 64-bit processes will not appear in the data recorder. For PVS-style systems, because they “reset” at reboot, the agent hook will never be in place at bootup (which is when the problems occur), so you can allow it to re-insert itself after the agent starts and give the full range of metric monitoring.

Also, be very careful when editing the AppInit_DLLs key – we managed to inadvertently fat-finger it and delete the Citrix hook entry in our testing. Which was not amusing for the testers, who lost the ability to run apps in seamless windows!
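One way to avoid fat-fingering it is to script the removal so that only the Lakeside entry is touched. The sketch below assumes the hook DLL is called lsihok64.dll, as it was in our environment, so check the contents of the value on your own systems before using anything like this:

# Strip only the Lakeside hook from AppInit_DLLs, leaving the other hooks intact
$key = 'HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Windows'
$current = (Get-ItemProperty -Path $key -Name AppInit_DLLs).AppInit_DLLs
$cleaned = ($current -split '\s+' | Where-Object { $_ -and ($_ -notmatch 'lsihok64\.dll') }) -join ' '
Set-ItemProperty -Path $key -Name AppInit_DLLs -Value $cleaned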

Once we removed the hook on our systems and set the Lakeside service to “Delayed Start” (so that the Citrix, AppSense and Sophos services were all fully started before the hook was re-inserted), we got clean restarts of the servers every time. So, if you’re using Lakeside Systrack for monitoring and you are seeing unexplained service failures, either removing this Registry hook from the Lakeside console or directly from regedit.exe and then delaying the service start should sort you out.


Directing Citrix XenApp 6.5 or 7.x users to run their applications on specific servers (using Load Balancing Policies, or Tags)


Forcing users to execute XenApp applications on specific sets of servers is something you might want to do for a number of reasons. In my case, I primarily run into this requirement during phased migrations, but there are many situations that may push you towards it.

Often I do projects where components or software within the XenApp infrastructure are being upgraded, and customers wish to take a slow migration path towards it, to deal with issues as they arise, rather than a “big bang” approach. Take, for instance, the latest example of this I came across, where AppSense (Ivanti) DesktopNow was being upgraded for the whole Citrix farm. The customer wished to start by updating a small number of Citrix XenApp servers which would then get the new agents and point to the new database. A small number of users would migrate across, run their applications, and feed back any issues.

Over time, more users would be migrated and more servers pointed to the new Ivanti infrastructure, and as this happened, more XenApp servers would be moved over. Eventually, the “rolling” upgrade would finish, hopefully with all problems ironed out as they occurred. The idea was to reduce the impact to the business, to not swamp the IT department with migration issues, and to allow quick rollback if anything went wrong.

Of course, this all depends on whether you can force the “migrated” users to open their XenApp applications on the “migrated” servers, whilst the “non-migrated” users continue to use the “non-migrated” servers! Now, the first thought everyone has in this situation is simply – “duplicate the applications”. Duplicate all the apps, assign one set of applications to “migrated” and one set to “non-migrated” – easy enough, right?

Unfortunately, it can get messy, and with lots of applications there is often a lot of time and resource involved in the duplication anyway. I've seen enterprises where a lot of migrations and testing have left Citrix XenApp farms chock full of duplicated, redundant and orphaned applications. I've also seen farms where vigorous duplication has also duplicated keywords to lots of applications that shouldn't have had them! In short – it would be cleaner, easier and less hassle in the long run if there were an easy way of maintaining one set of applications while forcing subsets of users to run those applications on particular subsets of servers.

So how can we achieve this? It’s not as simple as setting up something like Worker Groups in XenApp 6.5, because even with two Worker Groups assigned to a single application, there’s no way to preferentially direct users to one or the other. We will look at this for both Citrix XenApp 7.x and Citrix XenApp 6.5, because I have had to do both recently, and it makes sense to document both ways for posterity.

Pre-requisites

Obviously, you can’t get away from the fact that you need to separate one set of users from the other! 🙂 So the first task is to set up two Active Directory groups, one for migrated users, one for non-migrated users, in this example. And also obviously – make sure there are no users that are members of both groups.

So, how do we achieve this?

XenApp 7.x

On XenApp 7.x, there is no native Worker Group functionality. What is present is a feature called Tags, which can be used to create the same delineations between sets of machines in a site.

I’ve already set up a Delivery Group (called, imaginatively, Delivery Group 001) and added two VDA machines to it. I’ve also created a test application (cmd.exe) within the Delivery Group. But as it stands, publishing the application would run it on either of the VDAs within the Delivery Group.

First of all, we need to Tag the VDAs so that they are able to be treated as disparate groups. We do this by setting Tags for the machines in the Search area of the Citrix Studio console.

Right-click on the first machine, and choose the Manage Tags option. On the next dialog box, choose Create to set a new tag

Enter a name and, optionally, a description for the tag before clicking OK. Repeat this until you have as many tags as are necessary.

Now, apply the first “worker group” tag to the first server by checking the box next to it

Once you click Save the tag will now be applied to the machine. Apply the tags as necessary to all of your XenApp servers to separate them into what are now effectively “worker groups”

So now we have tagged the first machine, UKSLRD002 in this case, as belonging to “Worker group 1”, and the second machine, UKSLRD003, as belonging to “Worker group 2”.

We already mentioned that we have an application published to the Delivery Group, in this case cmd.exe

This application is obviously published to all the users of the Delivery Group, but we want to make sure that our users from the “Non-migrated users” group only run their applications on the first server, and the users from the “Migrated users” group only run their applications on the second server.

To do this we use Application Groups. Right-click on the Applications node and choose Create Application Group. After the initial screen, check the box to “Restrict launches to machines with tag” and select the first tag group we set up.

On the next screen, select the user group who will have access to the application through this group.

Finally, we need to add the application which we have already created to this application group.

Once you have set all this up, review the Summary, give the group a name, and click Finish.

Repeat the above process for the second server, but change the tag to the second “worker group” instead, and apply it to the second group of users.

 

Once the Application Groups are set up, you should now be able to launch the applications from Storefront and see them directed to the required server, irrespective of load. So now you know why I chose cmd.exe as the test application, so I could grab the server name easily enough! 🙂 Here we see user jrankin, who is in the non-migrated users group, and every time they launch the published application it is running on the server from the first “worker group” we set up using tags

And naturally when you log in as the jrankin2 user which is in the migrated users group and run the same application, it launches on the other server

So there it is – in XenApp 7.x, you can use tags and application groups to replicate Worker Group functionality, and have specific groups of users launching the same application on specified groups of servers.
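If you prefer to drive this from PowerShell rather than clicking through Studio, the Broker SDK exposes the same objects. The sketch below is written from memory with made-up machine and group names, so treat it as a starting point and check the parameters with Get-Help before running it in anger:

# Load the Citrix snap-ins (run from a machine with Studio/the SDK installed)
Add-PSSnapin Citrix*

# Create a tag and apply it to a VDA
New-BrokerTag -Name 'Worker group 1'
Add-BrokerTag -Name 'Worker group 1' -Machine (Get-BrokerMachine -MachineName 'DOMAIN\UKSLRD002')

# Create an application group restricted to that tag, then add the published app to it
$appGroup = New-BrokerApplicationGroup -Name 'Non-migrated users apps' -RestrictToTag 'Worker group 1'
Add-BrokerApplication -Name 'cmd' -ApplicationGroup $appGroup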

XenApp 6.5

There’s still a lot of XenApp 6.5 out there in the wild, so it makes sense to discuss how to do this in the older IMA version of the product suite as well.

It's a lot simpler on XenApp 6.5 – it still has the direct "Worker Group" functionality that XenApp 7.x only approximates with Tags. Create two Worker Groups and assign the servers to them as required.

Our test application (again, cmd.exe) should be published to both Worker Groups

Next, we need to set up Load Balancing Policies (not to be confused with Load Evaluators) to direct the users to the required server. These are accessed from the Load Balancing Policies area of the AppCenter console

Create a load balancing policy and give it an appropriate name

Set the Filters to Users, enable it, and match it to the AD group created earlier

Now apply the Worker Group Preference to the required Worker Group

Click on OK to save it.

Now repeat the process, but this time set the Filter to the other user group, and the Worker Group Preference to the other Worker Group.

These policies will apply to any application that the user launches, which is the main difference between this and the XenApp 7.x implementation.

So, when the user hits Storefront and launches the application, we should see user jrankin from the non-migrated users group launch the application on the first XA 6.5 server (UKSLXA003)

And every time user jrankin2 from the migrated users group launches the application, it will launch on the migrated server (UKSLXA004)

Summary

So, we should now be able to route our users to specific servers from single instances of applications, without having to duplicate those applications and create a mess for ourselves in the future.

You can also use both these methods to do other things, such as route sessions to a specific datacenter in an active-active configuration, and probably a lot of other uses you can think of. I never really dug too deeply into Load Balancing Policies and Tags/Application Groups previously, but they are very useful features that you can use to avoid extra work within your environment.

I should be recording a video on this very soon, I will update this post with a link when completed.

 

 

 



Managing device-based licensing on XenApp and RDSH using FSLogix Apps


Happy New Year all, it’s good to be back at work and have so many things lined up to blog about in 2018! Where shall we start? Well, let’s take a dive into the minefield that is per-device software licensing on XenApp or RDSH.

Per-device licensing model – the background

For a long time, XenApp or RDSH administrators have struggled against the problems brought by per-device licensing agreements. When an application is installed on a XenApp server or RDSH server, technically that application can be run by thousands of client devices. If the application uses per-device licensing, the idea is that a license needs to be purchased for every device that can access and run the application. If you then install this application onto a XenApp or RDSH server to allow users to access it in this way, then you may find you need to buy a huge number of licenses, far in excess of the number of users you have that actually want to use the application. Many pieces of software behave like this, but perhaps the two best-known ones are Microsoft Project and Microsoft Visio.

Controlling access to the published application through things like Active Directory groups isn’t enough to satisfy the licensing requirements. This is understandable, because technically a user could use a separate published application to browse through the server filesystem and launch the application that they’re not supposed to have access to. Locking down the filesystem on a per-application basis is messy, time-consuming, and ultimately doesn’t satisfy the per-device licensing model.

Introducing the per-device licensing model into RDSH environments can cost a huge amount of money, as you can probably imagine. Take, for example, a company with Microsoft Visio installed on a XenApp 7.15 infrastructure and 1800 users with thin client machines. This application is only used by 70 users but as the servers with the app installed can be accessed from any thin client machine, the company must purchase 1800 licenses for Visio – 1730 more than will actually be using the application, making a mockery of the cost-savings associated with their XenApp and thin client infrastructure.

Microsoft did modify this approach slightly with the addition of “Roaming Use Rights” for Software Assurance customers, but the fact remains that the per-device licensing model is murky and potentially costly (especially in an audit situation). If you lock down your RDSH environment heavily with Group Policies and technologies like AppLocker, you can try to prove to an auditor that you have taken the required steps to restrict access to a specific subset of devices, but the problem remains that if they find any way in which a user could circumvent this, you will be on the hook for the extra licenses required, as well as an audit fine.

It's not just Microsoft who take this approach – some of the bigger database vendors are known to do it too, and there are myriad other pieces of software out there that adopt the same model. It's much better to be safe than sorry, arguably!

Now, for a long time people used tech like AppSense Application Manager (now known as Ivanti Application Control) to mitigate this issue. I even wrote a blog post about it, what seems like a lifetime ago 🙂 (in fact that was published on my very first day of blogging!) Ivanti had a sort of quasi-official response from Microsoft that allowed them to run with this as a feature, but Microsoft have now distanced themselves somewhat from any specific third-party endorsement for per-device licensing compliance.

This doesn’t mean that tech like Ivanti Application Control is now invalid – just that Microsoft (and probably other vendors too) are now going to judge each case on its individual merits. Or to put it another way – if there’s any way you haven’t covered your backside with the solution you choose to deploy, they’re going to bill you for it. However, Microsoft have issued a set of guidelines that will drive the approach to auditing per-device licensed software in XenApp/RDSH situations.

  1. The software must be proven to be restricted to a specific set of client devices, without any possible way to circumvent the restrictions.
  2. The licenses must be transferable between devices.
  3. Reporting must be available on the current and historical license usage.

So with these requirements in mind, you need to put together a solution that ticks all of these boxes to allow you to avoid big bills from non-compliance with per-device licensing.

The Ivanti Application Control method is still probably perfectly valid (as far as I know it should be able to satisfy the requirements, although you will need to validate this). However, if you're not an existing Ivanti customer, getting this tech in just to handle licensing is probably a bit of overkill from a functionality and cost perspective, to be fair – although there are some other really cool features of Application Control you may also find useful. For the purposes of this blog, though, I'm going to look at handling the licensing through FSLogix Application Masking, because FSLogix is quick and easy to set up and get running, and simplicity is one of my target areas for 2018 🙂

Managing licenses with FSLogix

The Application Masking feature is ideal for this because it actually physically hides the filesystem and Registry entries from devices that aren’t allowed to run the application. If the user can’t actually see the executables, they are going to have a job running them! This makes FSLogix perfect to satisfy the first of the requirements above.

Firstly, you will need to install the FSLogix Apps software onto your XenApp/RDSH server (if you want to use the trial version like me, just download the latest version from the FSLogix website and crack on)

Then you will need to install the FSLogix Apps Rule Editor onto a management server or client. This console allows you to configure and deploy the rules and rule assignments that will get the licensing restrictions to work. Of course (pro tip), if you install the Rules Editor onto a server that runs from the same image as your production XenApp/RDSH servers, it will be much easier to set up the masking rules (because the applications are already present), so that’s what I’m going to do 🙂

Next, let’s install a per-device piece of software onto our XenApp server. In this instance, we will use Microsoft Visio 2016. If you’re like me and scatter-brained, make sure you get the version that runs on Remote Desktop Services! 🙂

We will now publish the application so it is available to our XenApp users

Next, let's run the FSLogix Apps Rule Editor and create a Masking Rule for Visio 2016. If we've installed the Rules Editor onto a spare XenApp system with the applications already pre-loaded or deployed, we can use the Add/Remove Programs method to read out the relevant Registry values and filesystem entries and save ourselves some work. Create a new rule, give it a name, and then choose Visio (in this case) from the "Choose from installed programs" list.

 

The application will be scanned and the rule populated with the required filesystem entries and Registry values

We can use the function in the Rules Editor under File | Change Licensing Parameters to set a minimum time that a license will be assigned

Setting the minimum time allows us to fulfil the requirement for allocation and transferral that is stipulated. If a license is moved or deleted (by removing or changing the assignments) before the minimum allocation time, a warning will be shown to remind the administrator that they may be violating the licensing requirements.

The actual assignment of the licenses to devices is done by using the Manage Assignments function for the Masking Rule. Click on File | Manage Assignments to start this. Firstly, we will Remove the Everyone rule. Then click on Add.

Add an Assignment for Environment Variable. We are going to use the variable CLIENTNAME (this will be a variable within the user’s XenApp or RDSH session, that corresponds to the name of the connecting client). In this example, we are going to allow the client machine UKSLD205 to run the Visio software via Citrix and block it for all other connecting devices.

Firstly, you need to remember that FSLogix rules are evaluated fully from the top down. So this doesn’t mean it reads the first matching rule and then stops processing – it reads them all and evaluates the outcome. So to allow the application to be run by specific devices, first we need to add a wildcard rule.

Add an Assignment for Environment Variable CLIENTNAME, set the value to * (wildcard), and select Ruleset does apply

Next add an Assignment for Environment Variable CLIENTNAME, set the value to (in this case) UKSLD205, and select Ruleset does not apply

So the rule will look at both Assignments and decide to apply the ruleset to all machines except UKSLD205, in this instance.

To deploy the rules, you simply copy the .fxr and .fxa files into the C:\Program Files\FSLogix\Apps\Rules folder on any target servers. The service then picks up and processes the changed files. In this example I simply did it manually, but in production environments I normally use a script to push changed files to XenApp or RDSH farms.
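Something along these lines would do the trick as a push script (a minimal sketch only – the server names and source folder are placeholders, not anything from my environment):

$servers = @('XAPROD01', 'XAPROD02', 'XAPROD03')       # placeholder server names
$source  = '\\FILESERVER\FSLogixRules\*'               # placeholder folder holding the .fxr and .fxa files

foreach ($server in $servers) {
    # Copy only the rule and assignment files into the Rules folder on each target server
    $dest = "\\$server\c$\Program Files\FSLogix\Apps\Rules"
    Copy-Item -Path $source -Include '*.fxr','*.fxa' -Destination $dest -Force
}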

Now, if we log onto Storefront and launch the published version of Visio from UKSLD205 (the machine that we specified to be allowed), we should see it launches successfully.

However, if we run the published resource from anywhere else, we get this error

Realistically, it could probably do with a custom error in this case, one that tells the user that they are restricted from running the application because of licensing rules. I will feed this back to FSLogix as a feature request.

What you could do (and I know the guys at FSLogix do this) is redirect the file type associations to Visio Reader, so that if a user isn't allowed to run Visio, instead of getting this error message they are able to open Visio files in the installed Reader version. I may well do a follow-up to this article covering how to do this, once my backlog is cleared 🙂

The rules cannot be subverted, even in a situation where the user actively changed their environment variable for CLIENTNAME whilst logged in, because the FSLogix rules are read in at logon time and the application masked or unmasked as required at that point. A change to the variable during the session does not affect how they are processed.

To satisfy the last requirement, you can run reports on the current or historical usage of FSLogix App Masking by using the function File | Licensing Report

Finally, there is a function within the software to easily import large numbers of machines to allow as Assignments. When adding an Assignment for Environment Variable, the From file function allows you to select a text file (one entry per line) that you can use to add rules for a number of machines at once. This functionality is a bit rudimentary (it would be nice to be able to add machines from particular OUs, Sites or IP address ranges, for instance) but certainly reduces the effort required to add a number of endpoints to the configuration.
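In the meantime, you could generate that text file yourself from an OU with something like this (a sketch assuming the ActiveDirectory RSAT module; the OU path and output file are placeholders):

Import-Module ActiveDirectory

# Build the one-entry-per-line text file of client names, ready for the "From file" import
Get-ADComputer -SearchBase 'OU=Visio Devices,OU=Workstations,DC=lab,DC=local' -Filter * |
    Select-Object -ExpandProperty Name |
    Sort-Object |
    Set-Content -Path 'C:\Temp\VisioDevices.txt'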

Summary

So, if you want to exercise control over your per-device licensed software (be it Visio, Project, or one of the many others out there that adopt this model) in a XenApp, RDSH or VDI environment, FSLogix Application Masking offers you a quick, easy and audit-compliant way of achieving this (although, as I said, there is no absolute certainty on that final point – you really need to construct the solution and verify it with the software vendor). However, based on my own experiences, I'm pretty sure that as long as you can demonstrate how the solution you've chosen meets the requirements specified, there should be no reason why a vendor wouldn't allow you to use it for these purposes.

With regards to addressing this through FSLogix, there are a couple of rough edges, namely the error that appears when the application is restricted and the flexibility around adding large numbers of allowed devices, but these should be easy enough to iron out. The core functionality is solid and does exactly what we need it to – I'd recommend it as an excellent way to deal with per-device licensing for any environment that experiences these issues.

I will be recording a video on this subject later on today – link will be posted here as soon as it is done!

 

The post Managing device-based licensing on XenApp and RDSH using FSLogix Apps appeared first on HTG | Howell Technology Group.

QuickPost: Windows 10 1709 UWP applications fail to deploy at first logon


Just a quick one for starters today (as I have a bunch of stuff I want to get out there). I want to quickly run through a problem I had over the last few days with the latest iteration of Windows 10.

Everyone (hopefully!) knows what UWP applications are – Universal Windows Platform apps (also known as Store apps, Metro apps, Modern apps, Universal apps). They are the “self-contained” applications that are deployed to Windows 10 when a user logs on for the first time, and are gradually increasing in number and scope. The user can visit either the Windows Store or the Windows Store for Business to add more UWP apps to the ones that are automatically deployed when they log on.

Recently, during some testing on the latest version of Windows 10 (version 1709, fully patched as of Jan 18), I noticed that the UWP app deployment was not happening properly. Normally I remove most of the auto-deployed UWP apps from the image, but in this case I was trying to test with fully vanilla Windows 10 images. Intermittently, a user would log on for the first time and receive a desktop and Start Menu that looked like this

A few of the UWP apps had deployed, but most of them (including the Store itself) were missing. It wasn’t a timing or connectivity issue – leaving the VM for a long period of time made no difference, multiple logins made no difference, and Internet connectivity was good.

Oddly enough, running the following PowerShell (which normally reinstalls and repairs all UWP apps in the image) also made no difference at the next initial logon

Get-AppXPackage | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml"}
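As an aside, if you want to see exactly which apps are missing for an affected user, a quick comparison like this (run elevated; a sketch only, not part of any fix) will show what is provisioned in the image but not registered for the current user:

# Compare what the image has provisioned against what the logged-on user has registered
$provisioned = (Get-AppxProvisionedPackage -Online).DisplayName | Sort-Object
$registered  = (Get-AppxPackage).Name | Sort-Object

# Anything with a '<=' side indicator is provisioned but missing from the user's profile
Compare-Object -ReferenceObject $provisioned -DifferenceObject $registered |
    Where-Object SideIndicator -eq '<=' |
    Select-Object -ExpandProperty InputObject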

However, when logging on with some other user accounts to the same machine, we would observe that all of the UWP apps were deploying correctly, as shown in the image below

Investigation revealed that the user account that didn't work was identical to the one that did work, except for one difference: the errant user account had a network home drive defined on the user object in Active Directory

Bizarrely enough, setting the Home folder attribute back to “Local path” stopped the deployment from failing (once the local copy of the errant user account was removed).

Digging deeper, I found that when the Home folder attribute was set, a number of errors appeared in Event Viewer at logon from source ESENT, reading something like this

Error ShellExperienceHost (5584,P,0) TILEREPOSITORYS-1-5-21-2950944927-1203068717-1704750700-2614: An attempt to open the device with name “\\.\C:” containing “C:\” failed with system error 5 (0x00000005): “Access is denied. “. The operation will fail with error -1032 (0xfffffbf8).

Clearly, it seems there is an attempt to read or write somewhere that is failing. However, these home folders had worked fine on all previous versions of Windows 10, and the permissions had not changed.
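If you want to check whether your own machines are throwing the same errors, something like this will pull recent ESENT errors relating to the tile repository from the Application log (a sketch only):

# Grab the 50 most recent ESENT errors and filter for the tile repository messages
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'ESENT'; Level = 2 } -MaxEvents 50 |
    Where-Object { $_.Message -like '*TILEREPOSITORY*' } |
    Select-Object TimeCreated, Message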

However, having vast experience with Microsoft changing the context of how various tasks run, I elected to add the “Domain Computers” group to the ACL for the home drives as a test…

…and to my surprise, everything now started working as expected: a user with a home folder defined in AD could now log on and get the full set of UWP apps correctly deployed.
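For reference, the change I made was along these lines – a sketch only, with a placeholder share path and domain, and Read access chosen simply as a starting point (the exact rights you grant should be down to your own testing):

$homeRoot = '\\FILESERVER\HomeDrives$'   # placeholder for the share hosting the home folders

# Add "Domain Computers" with read access, inherited down to the individual home folders
$acl  = Get-Acl -Path $homeRoot
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule('LAB\Domain Computers', 'ReadAndExecute', 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path $homeRoot -AclObject $acl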

I’m not entirely clear why this was happening – nothing appeared to be written to the home drive that I could see. If I’d had time, I’d have liked to change the permissions back and run a session 0 Process Monitor to find exactly why I was seeing this behaviour. However, as with all things Windows 10, I expect this will probably be rectified by Microsoft in an update without ever telling us that an issue has been fixed (that’s the way now – there is never any detail).

So if you find you’re having trouble with the Store or bunches of UWP apps missing when you log on to Windows 10 1709, check the permissions on your networked home folders.

The post QuickPost: Windows 10 1709 UWP applications fail to deploy at first logon appeared first on HTG | Howell Technology Group.

Using host-to-client redirection in Citrix XenApp


I came across a problem recently where users were having problems opening links to certain SharePoint sites from within a Citrix XenApp environment. For some reason, the XenApp hosted sessions were asking for authentication when hitting the SharePoint front end. Whilst this certainly indicated some sort of misconfiguration within the environment itself – client desktop sessions, for instance, had no problem accessing the links through single sign-on – we decided to use XenApp host-to-client redirection to solve it. Not only would it address the issue at hand whilst further investigation of the root cause took place, it would also achieve one of our other goals, which was to offload browser traffic, where possible, away from the XenApp environment. Users were to continue to use Office applications such as Outlook from within XenApp, but browser sessions would ideally be sent back to the local machine for processing.

Host-to-client redirection is exactly what it says on the tin – redirection of content (URLs) from the XenApp host back to the Citrix client. The host-to-client redirection process only works on server VDAs, not on desktop VDAs. In a lot of situations, host-to-client redirection is not commonly used – the VDA already optimizes multimedia, and both multimedia and Flash redirection can be configured individually through policies where necessary. However, in certain situations, host-to-client redirection can be used to get around problems with performance, compliance or compatibility. It is supported on Citrix Receiver for Windows, Receiver for Mac, Receiver for Linux, Receiver for HTML5, and Receiver for Chrome.

An example of host-to-client redirection that I had used prior to this was to open a search engine on the client rather than in a hosted app: the search engine in question used geolocation for some of its features, and the users were in India, whereas their hosted applications were in a European datacenter. Redirecting the link to the search engine back to the client browser allowed the site to geolocate properly.

XenApp 7.13 and higher introduce a policy for “URL redirection” with accompanying whitelists and blacklists, but if you’re on an older version this is not an option. In this situation you need to use host-to-client content redirection, which is a type of server FTA (File Type Association) redirection. This FTA redirects certain protocols, such as http, https, rtsp, or mms, back to the client. For example, an embedded http link will open directly in the client’s default application for that protocol. There is no URL blacklist or whitelist support as with the newer URL redirection policy (although you can configure some level of control using Registry keys, which we will show in this article).

Considerations

It is important to understand which particular situations will trigger host-to-client redirection to function. It activates when URLs are:-

Embedded as hyperlinks in an application (for example, in an email message or document, as in my example)
Selected through a VDA application’s menus or dialogs, provided that the application uses the Windows ShellExecuteEx API
Entered in the Windows Run dialog

It does not activate for URLs entered into a web browser, links clicked within a web browser, standard Windows shortcuts, browser Favourites or Bookmarks, or URLs passed as a parameter to an application command line. If you wish to work with URLs entered in the web browser, then URL redirection is the preferred method. I will do an article on “bidirectional content redirection” (to give it the correct name) as a follow-up to this one, but as mentioned earlier that requires version 7.13 or above.

The default application on the client for the URL type specified is used to open the link. If there is no default app available, then the link will be redirected back to the host again.

The final consideration is that when users have their URLs redirected locally, they may not have access to environmental features that they get in the hosted application, such as drive mappings and printers. It is important to think about how users may try to use the redirected site and ensure that the expected functions are available to them.

Configuration

The host-to-client redirection is done with a Citrix policy, whether on 6.5 or 7.x

 

You can filter the policy in any way you require, using standard Citrix methods

Once the policy is applied, this will attempt to redirect all of the following URL types that are launched in the above-mentioned fashions (specifically, http, https, rtsp, rtspu, pnm and mms). I did wonder if the RTSP protocol redirection would affect App-V 4.x applications being launched that use the RTSP protocol. Fortunately, it does not seem to affect these, although I would advise testing if you use links to RTSP App-V applications from within your published resources.

If you wish to restrict or extend the particular URL types that are launched, you can set Registry values that allow you to customize the URL types that are redirected.

To restrict the URL types use these two values:-

Key:  HKLM\Software\Wow6432Node\Citrix\SFTA
Name: DisableServerFTA
Type: REG_DWORD
Data: 1

Name: NoRedirectClasses
Type: REG_MULTI_SZ
Data: Specify any combination of the values: http, https, rtsp, rtspu, pnm, or mms.  Enter multiple values on separate lines. For example:

http
https
rtsp

To extend the URL types, use this value:-

Key:  HKLM\Software\Wow6432Node\Citrix\SFTA
Name: ExtraURLProtocols
Type: REG_MULTI_SZ
Data: Specify any combination of URL types. Each URL type must include the :// suffix; separate multiple values with semicolons as the delimiter. For example:

customtype1://;customtype2://

Combining these Registry values will allow you to produce specific sets of URL types to be processed.
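If you prefer to script these, here is a minimal sketch of setting the values above via PowerShell (the URL types shown are only examples – substitute the combination you actually need):

$key = 'HKLM:\Software\Wow6432Node\Citrix\SFTA'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }

# Restrict: the two values described above (example URL types only)
New-ItemProperty -Path $key -Name 'DisableServerFTA' -PropertyType DWord -Value 1 -Force | Out-Null
New-ItemProperty -Path $key -Name 'NoRedirectClasses' -PropertyType MultiString -Value @('http','https','rtsp') -Force | Out-Null

# Extend: custom URL types, semicolon-delimited and including the :// suffix
New-ItemProperty -Path $key -Name 'ExtraURLProtocols' -PropertyType MultiString -Value @('customtype1://;customtype2://') -Force | Out-Null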

However, if you wish to use specific sites instead of URL types, you can use a different Registry value (this is the method I prefer). This will only redirect the sites you specify in the Registry value. This works great for the example I had, in which I want to redirect Sharepoint sites back to the local browser. The Registry value is:-

Key: HKLM\Software\Wow6432Node\Citrix\SFTA
Name: ValidSites
Type: REG_MULTI_SZ
Data: Specify any combination of fully-qualified domain names (FQDNs). Enter multiple FQDNs on separate lines. An FQDN may include a wildcard in the leftmost position only. This matches a single level of domain, which is consistent with the rules in RFC 6125. For example, we used this:-

*.microsoftonline.com

*.sharepoint.com

www.office.com

When using this value, it is important to remember that it’s not where the link points that is compared to the Registry settings, it is the actual destination page. For instance, redirecting *.google.com does not work when clicking on the www.google.com URL if the actual page is redirected to www.google.co.uk.

In the same vein, www.yahoo.co.uk could be whitelisted, but this redirects to http://uk.yahoo.com so would not work unless you specifically added uk.yahoo.com or *.yahoo.com to the Registry value.
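For completeness, here is a minimal sketch of setting the ValidSites value from PowerShell rather than by hand – the FQDNs are simply the ones from my example above:

$key = 'HKLM:\Software\Wow6432Node\Citrix\SFTA'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }

# One FQDN per entry, wildcard allowed in the leftmost position only
New-ItemProperty -Path $key -Name 'ValidSites' -PropertyType MultiString -Value @('*.microsoftonline.com', '*.sharepoint.com', 'www.office.com') -Force | Out-Null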

If you’re using a version of Windows on the server end prior to 2012, then this is all the configuration you need to do, so for XenApp 6.5, you’d be ready to deploy right now.

However, Server 2012 and upwards introduce some very annoying changes to how FTAs are handled (which is essentially what URL types translate to). For host-to-client redirection to work on these platforms you will need to follow these steps.

First, create an XML file with the following content:-

<?xml version="1.0" encoding="UTF-8"?>
<DefaultAssociations>
<Association Identifier="http" ProgId="ServerFTAHTML" ApplicationName="ServerFTA" />
<Association Identifier="https" ProgId="ServerFTAHTML" ApplicationName="ServerFTA" />
</DefaultAssociations>

Save this file in a network location that the VDAs can access.

Next, configure the GPO for Computer Config | Admin Templates | Windows Components | File Explorer | Set a default associations configuration file to Enabled, and point the setting to the file you created above. The downloadable example GPO below contains this setting, but obviously you must point it to the right path!
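If you'd rather not use a GPO for this (in an image build script, for example), as far as I'm aware the policy just writes a single registry value behind the scenes, so something like this should be equivalent – the UNC path is a placeholder, point it at your own XML file:

$policyKey = 'HKLM:\Software\Policies\Microsoft\Windows\System'
if (-not (Test-Path $policyKey)) { New-Item -Path $policyKey -Force | Out-Null }

# Point this at the default associations XML file you created above
New-ItemProperty -Path $policyKey -Name 'DefaultAssociationsConfiguration' -PropertyType String -Value '\\FILESERVER\Config\ServerFTA.xml' -Force | Out-Null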

Note – if you are on Server 2016, make sure that the OS is fully patched. There was a bug in Server 2016 whereby a default associations file, as above, would not apply until second logon. The bug is fixed by the latest patches for the OS.

Next, you need to set a whole bunch of Registry values on your VDA machines. The easiest way to configure this is through Group Policy Preferences, although you can use many different ways to do this. The downloadable example GPO below contains all of these settings ready for you.

Downloadable GPO with settings

The settings are listed here also:-

Note that the GPO above also contains a setting for ValidSitesList, so you will need to change this as well if you wish to use it for the sites you require, or remove it.

Once you’ve applied the Registry values and the GPO for file type associations with the XML file provided, you can now deploy it to machines running XenApp 7.x.

If you’ve configured everything correctly, you should see that for the sites and/or URL types you have specified (or all of them), you will now be redirected out of your XenApp application or session and back to the local client browser.

I will be recording a video on this and will post the link here as soon as it is uploaded.

Summary

So, that’s how to configure host-to-client redirection on XenApp. Ideally, this method is best for situations requiring simple URL redirection of embedded links from published applications, or on older XenApp versions such as 6.5. You can use the provided Registry values to filter it as necessary to your needs.

 

The post Using host-to-client redirection in Citrix XenApp appeared first on HTG | Howell Technology Group.

Automating common tasks part #1 – automating user provisioning


One of the key areas I am concentrating on for 2018 is automation. With the advent of hybrid cloud services that mix traditional on-premises infrastructure with SaaS, IaaS and PaaS, it is critically important that we avoid an unsustainable rise in administrative tasks by doing as much automation as possible. Hopefully, I’m going to write a series of blog articles that deal with ways you can automate some of your common tasks and manage your evolving systems much more effectively.

Active Directory

AD, and ADFS, are probably among the most admin-intensive areas of your enterprise. It’s here that many configuration settings are defined and modified, often in quite high volumes. Many companies I work alongside either have manual processes for AD admin tasks, which are time-consuming and can introduce user errors if anything is missed, or have scripted their way around some of the admin tasks, which brings its own maintenance overhead and requires resources to be assigned to keeping the scripts up to date.

When it comes to automating within AD or ADFS, I like to use Softerra’s Adaxes product to enable this. It has a huge amount of automation and extension power tied up in a straightforward GUI, and the advantages it gives in removing labour-intensive tasks and extending AD functionality more than make it worth the investment. There are a huge number of things you can do with it (I am going to have to do quite a few blog articles to cover all the bits I like!), and I’m not going to list them all here – we can cover them off one by one. An example of how it also extends AD functionality is the concept of a “Business Unit”, which gets around the limitations of Organizational Units that often plague administrators.

User provisioning

Provisioning of new users is often a long-winded manual process that involves a lot of specific actions. In the brave new world of cloud services, we can use not just group memberships and built-in AD attributes to provide access, we can use customized AD attributes as well (such as in claims-based access). If you’ve embraced these methods, then you will be aware that any mistake at the user provisioning stage can result in a bad experience for the user. For instance, in my current project I see a lot of service desk calls because new users haven’t been added to the right group, or had their profile path set, or don’t have a telephone number set on their user object. Because these attributes drive access to applications and services, new users have a frustrating time trying to get up and running. The only way to avoid these issues is to have a robust, automated provisioning process, and Adaxes can help us drive that through with no mistakes, giving us much happier users and freeing up time for staff to deal with other things.

Let’s just run through an example of using Adaxes to provision new users in my lab domain.

Initial configuration

Obviously you need to install the Adaxes software somewhere! 🙂 It doesn’t have to be onto a domain controller, although there is no reason why you couldn’t do it this way if you wanted to. I used a dedicated application server, but given that it is simply a web service that integrates with Active Directory, it could easily be piggybacked onto another lightweight app server. Resiliency can be provided in a number of common ways – something I will cover at a later date.

You simply run the installer, select the components you need (there are self-service aspects to the software as well, again which I will cover at a later date), and provide a service account that has the right to Create all child objects and Delete all child objects in AD. That’s it! Once you’ve done that, simply connect it to your domain, run the console, and you’re ready to rock.

An important point, though: in order for the Adaxes rules to work properly, new objects need to be created in the Adaxes console rather than in dsa.msc (AD Users and Computers) or the AD Administrative Center. So you really should install at least the console onto domain controllers for your users to use, if they’re in the habit of running tools from there. The Adaxes AD console is very similar to ADUC, so there is no training or familiarisation needed. There is also an enormous amount of delegation you can do through the Adaxes console, so you should gain from using it for AD management anyway!

Requirements

OK, to demonstrate the provisioning process in action, we are going to list our requirements for new users. They’re probably fairly simple compared to the average enterprise, but they should be enough to give you an idea of what is possible.

When a new user is created, I would like to:-

Create a home drive pointing to \\UKSLDC003\FileStore\HomeDrives\%username%
Set the permissions on this home folder so the user has Full Control
Map this home drive to H: on the AD user object
Set the Remote Desktop Services profile path to \\UKSLDC003\FileStore\Mandatory (RDSH sessions use a mandatory profile)
Set the Profile Path to \\UKSLDC003\FileStore\HomeDrives\%username%\Profile (for a roaming profile on desktops)
If the user is in the Power Users OU, assign it to the Group SW_Set1 (which deploys software for the user)
If the user is in the Standard Users OU, assign it to the Group SW_Set2 (which deploys software for the user)
If the username begins with the prefix -temp-, assign an expiry date three months in the future
If the username begins with the prefix -admin-, add it to the Domain Admins group

In normal operation, this would have to be done manually by first-line support following a process (although normally you find service desks simply use the Copy function to avoid some of this, which introduces its own set of security and operational problems). But in this case, we’re going to automate it all (hopefully!)
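For comparison, here's a rough sketch of what the manual equivalent of some of those requirements looks like in PowerShell (ActiveDirectory RSAT module assumed; the username and the LAB domain prefix are placeholders, and the RDS profile path is deliberately left out because it isn't a plain AD attribute) – exactly the sort of thing we want to stop doing by hand:

Import-Module ActiveDirectory

$user     = 'jbloggs'                                   # placeholder username
$homePath = "\\UKSLDC003\FileStore\HomeDrives\$user"

# Create the home folder and give the user Full Control over it
New-Item -Path $homePath -ItemType Directory -Force | Out-Null
$acl  = Get-Acl -Path $homePath
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("LAB\$user", 'FullControl', 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path $homePath -AclObject $acl

# Map the home drive to H: and set the roaming profile path on the user object
Set-ADUser -Identity $user -HomeDrive 'H:' -HomeDirectory $homePath -ProfilePath "$homePath\Profile"

# Handle the -temp- and -admin- prefixes
if ($user -like '-temp-*')  { Set-ADAccountExpiration -Identity $user -DateTime (Get-Date).AddMonths(3) }
if ($user -like '-admin-*') { Add-ADGroupMember -Identity 'Domain Admins' -Members $user }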

We are going to divide this into a couple of areas. The first is a set of actions that run after the user object is created. The second will be actions that run when an OU is updated.

Create a Business Rule that will run after a user is created. There is a built-in one provided, but we will create it from scratch

The rule should then be scoped to the precise target event that is required, which in this case is to be run after a new user is created

Next, we need to assign the Actions that we want to take place after a new user is created. So, we will start with the mapping of a home drive, and the setting of the required permissions. This is all done by interfacing directly with the functions in Active Directory itself.

And now we can add the next requirement for an Action, which is setting the Remote Desktop Services profile path to a specific area, to pick up a mandatory profile when using RDSH. You choose an Action of “Update user”, click Add, choose Remote Desktop Services Settings from the list, click on the button next to Update value, and enter the profile path in the correct field

You can also use the same Update user function to update the non-RDSH Profile Path setting which we have as a requirement. Choose Profile Path and enter the required path in the field

Now, to set the user group memberships based around the prefix of the user logon name, firstly, click on Add Action To A New Set. Set up an Action for, in this case, adding the user to the Domain Admins group.

Now, simply add an additional “AND” Condition to the newly-created second set, so that this will only apply when the user is created AND if the username matches a defined format. Firstly, we will create one for the prefix of “-admin-“. Right-click on “If the operation succeeded” in the right-hand pane, and choose “If <Property> <Relation> <Value>” and populate as required

You should now have two sets of Conditions and Actions in the Business Rule as shown here, one for new user creation in general, and one with an AND clause to check if the user also matches the prefix pattern

Now we can simply repeat the previous couple of steps to set a specific account expiry if the user is a temp (beginning with -temp-). For the Action for the new set, choose Update User | Account Expiry and set a date three months past the processing date

And simply add a Condition for the matching of the user logon name pattern again, this time to “-temp-”

So now your entire Business Rule for new user account creation would look like this

Our last requirements are that if a user is in a particular OU, they are added to a security group which facilitates software deployment of a specific application set. For this, we will need to add a couple of Business Rules for when a user account is moved. To scope the trigger to a move into a specific OU rather than any move, we simply apply the Business Rule only to the OU we are concerned with. So for the “Power Users” OU we create a new Business Rule and name it

And then apply it to trigger after a user is moved

Add the required Action for the trigger (in this case, adding the user to a security group called SW_Set1)

And then scope it so it only applies to the target OU (i.e. when a user is moved into it)

Then just repeat the steps, substituting the Power Users OU for Standard Users, and the target group from SW_Set1 to SW_Set2. This should leave you with these two new rules

It’s as simple as that – it’s a very intuitive GUI, and it is easy to build expressions together into your Rules to do quite granular things.

Deployment

These rules should take effect immediately, assuming they’re all Enabled in the console.

So, when an admin creates a new user through the Active Directory section of the Adaxes console, we will get a dialog box informing us that there are rules to be processed (you can turn this off when you wish).

The rules for “admin” and “temp” users have not been processed, because the user object does not meet the criteria. However if we then check the properties of the user object we can see that the requirements we stipulated have been applied.

And if we then move the user into the “Standard Users” OU, we will see the group memberships being updated as required

If we create a user with the prefix -admin-, we can see it is automatically added to Domain Admins

And a user with the prefix -temp- is automatically set to expire three months from the current date

Boom! So now our service desk users don’t have to do anything manual, bar creating users and moving them to the required OU. If they are to be admins, just prefix their username with -admin-, if they’re temps, prefix their username with -temp-. Apart from these simple steps, everything is driven by our automation rules. Now just imagine (in a complicated enterprise environment) how much time you can save, how much you can simplify your provisioning processes, and how many errors and misconfigurations can be avoided. And it is dead easy to get to grips with – I spent about an hour and a half installing it, setting up these rules, testing them and blogging it, and this is what I’ve managed to set up 🙂

I’m barely scratching the surface of the capabilities of Adaxes with this, there’s loads more we can do, so we will be revisiting it quite a bit as I do this series on automating bits and pieces of your environment. But hopefully this might whet your appetite a bit. If you want to know more about it or fire me some questions, find me on Twitter @james____rankin, drop a comment below, or fire an email to james [at] htguk [dot] com

The post Automating common tasks part #1 – automating user provisioning appeared first on HTG | Howell Technology Group.

QuickPost: “Cannot connect to server version prior to 7.xx” when running Citrix PVS Imaging Wizard


I recently upgraded my lab environment from XenApp 7.12 to 7.17. As part of this process, I also upgraded the Provisioning Server infrastructure to 7.17 as well. The upgrade appeared to go through without any issues. The only deviation in my configuration from standard was that I used an AD account for the PVS services to run under.

I then spun up a Windows 10 1803 instance which I intended to run the Imaging Wizard on and create a new vDisk ready for deployment. The Target Device software installation went fine, however, as soon as I ran the Imaging Wizard and tried to connect to the PVS farm, I got this error message:-

Naturally, I first wondered if this was an occurrence of the old bug that only allowed connections to PVS server IP addresses instead of host names, but using the IP address gave the same error.

Checking the firewall ports via Telnet revealed that 54321 and 54322 on the PVS server were responding correctly.
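(If you prefer PowerShell to Telnet, Test-NetConnection does the same job – PVSSERVER below is a placeholder for your PVS server name.)

# Check that the PVS SOAP service ports are reachable
54321, 54322 | ForEach-Object {
    Test-NetConnection -ComputerName 'PVSSERVER' -Port $_ | Select-Object ComputerName, RemotePort, TcpTestSucceeded
}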

Next stop was to check if anything had gone wrong in the upgrade process. Repairing the PVS Server software made no difference, and neither did a full uninstall, reboot and reinstall.

Next I tested on Windows 10 1709 in case it was an issue with the latest version – still no luck, the same error persisted.

The PVS Imaging Wizard logs in C:\ProgramData\Citrix\Imaging Wizard showed the following information:-

From this log, it seems that for some reason the PVS SoapServer version is not being returned when the Imaging Wizard is connecting to the PVS system, and because it does not match the required version (7.17), the wizard is halting with an error.

I’d used a specific account (-service-pvs) to run the PVS SOAP and Stream services, so I tried changing this to a different account, but that just killed the PVS responsiveness entirely 🙂

Finally, I thought it must be something to do with the database. I’d used an SQL Express 2017 instance for the PVS database, but after checking the supported databases list, it was clear that this was a valid configuration.

So I checked the permissions for the service account under SQL. Expanding Instance | Databases | DatabaseName | Security | Users, I checked the properties of the service account specified and looked at the Membership tab. The service account only had db_datareader and db_datawriter assigned, so I switched this to db_owner and tried again
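If you'd rather make that change in T-SQL than through the SSMS GUI, something like this should be the equivalent (a sketch assuming the SqlServer PowerShell module; the instance, database and database user names are placeholders for whatever yours are called):

# Grant db_owner on the PVS database to the service account's database user
Invoke-Sqlcmd -ServerInstance 'SQLSERVER\SQLEXPRESS' -Database 'PVSDatabase' -Query 'ALTER ROLE db_owner ADD MEMBER [LAB\-service-pvs];'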

After making this change, the Imaging Wizard proceeded as expected and connected successfully to the PVS server. So if you get this error message (Cannot connect to server less than version 7.x), checking the service account permissions on the SQL database should help you resolve it.

 

 

The post QuickPost: “Cannot connect to server version prior to 7.xx” when running Citrix PVS Imaging Wizard appeared first on HTG | Howell Technology Group.
