
GDPR Compliance and how you can prepare


The General Data Protection Regulation (GDPR) entered into force on the 24th of May 2016 and applies to all EU Member States from the 25th of May 2018. This gave organisations two years to prepare for compliance, and now only a matter of weeks are left. Only one in five businesses are GDPR ready. Here we have some information on how to prepare your business for GDPR, and more specifically how to prepare your company’s technology to be GDPR compliant.

“GDPR states that businesses must handle personal data securely, transparently, and in a lawful manner throughout the entire data processing lifecycle.”

A common misconception of the GDPR is that it only applies to organisations established in the European Union. This is incorrect: the regulation applies not only to those organisations but also to any non-EU organisation that handles or processes the data of individuals in the EU. Whether that data is used to offer them goods and services or to monitor their behaviour within the EU, it must be handled in line with the new regulation.

The GDPR is a replacement for the old data protection laws, and its purpose is to protect people’s personal data at all stages of data processing. It irons out the creases of the old data protection laws and creates a thorough, strong and unified set of rules for data privacy and security. The old data protection procedures were confusing and could cause problems for businesses trading across borders; the new regulation unifies the data protection laws for these businesses, including guidance on how to correctly handle EU individuals’ data from outside the EU.

The main difference between the old and new regulations is the entity held liable. The GDPR names two responsible roles: data controllers and data processors. Under the old regulation, the EU Data Protection Directive, only data controllers could be held liable; under the new regulation, data processors also have strict data protection requirements and obligations to follow. This is to ensure the protection of the privacy rights of data subjects. A data subject is an “identifiable natural person”: any person that a business collects information on in connection with the business and its operations.

The difference between a data controller and a data processor is this: a data controller determines why the company is in possession of the data in the first place. If the business handles data for its own purposes and needs, then it is a data controller. It must be able to justify the purpose of the data, the conditions in which it can be used, and the procedure put in place for how the data is handled. Any business that manages the personal data of its employees and customers is a data controller.

A data processor, by contrast, works on behalf of the controller and processes the personal data for them. Examples of data processors are cloud providers and Software-as-a-Service companies such as CRM vendors. A company can be both a data processor and a data controller depending on the type of data it is handling and how it plans to use that data. A cloud-based software company, for instance, acts as a data controller when handling the data of its own employees, but acts as a data processor when handling the data that its clients process with its software.

There are no set retention periods for data under the GDPR; however, the data controller must be able to justify the purpose of the data, how it is going to be used, and the handling procedure. If they do not have this information, then the data must be deleted, as it is being kept unlawfully.

Article 5 of the regulation summarises the six most important principles regarding the management of personal data:

  • Processing – Personal data shall be processed lawfully, fairly and transparently.
  • Purpose – Personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner which is incompatible with those purposes.
  • Relevance – Personal data shall be adequate, relevant and limited to what is necessary for the purposes of collection.
  • Accuracy – Personal data shall be accurate and kept up-to-date; inaccurate data must be erased or rectified without delay.
  • Retention – Personal data shall be kept in a form which permits identification of data subjects for no longer than is necessary.
  • Security – Personal data shall be processed in a manner that ensures appropriate security, using appropriate technical or organisational measures.

Within the regulation there are different classifications of personal data. Personal data under the GDPR is any information, in any format, that can directly or indirectly identify a data subject. The regulation separates data into two categories: personal data and special categories of personal data. The inclusion of genetic and biometric data is new, as these were not mentioned in the old regulation.

The fines under the new regulation are much higher than those under the current EU Data Protection Directive, and while this is causing people to panic about the GDPR, it may not be as much of a cause for concern as it looks. Last year (2016/2017) there were 17,300 breach cases and only 16 of them resulted in fines for the organisations concerned. Clearly, a breach does not necessarily mean a fine, provided the company can show the process it has in place for handling the data and the process it has for handling a breach.

However, the fines are considerably larger under the new regulation. A good example of this is the telecoms company TalkTalk, which suffered a data breach in October 2015. The company admitted to a security failure in which some personal details of customers had not been encrypted, and admitted it had not taken the basic steps it could have to protect customer information. The information stolen included bank account details, birth dates and addresses. Because the company did not have even basic protection in place for the personal information of its customers, the breach resulted in a fine of £400,000, which for a large company like TalkTalk is not an unduly damaging amount. Under the new GDPR, however, this fine would have been £73,000,000 – more than 180 times the amount they paid (£73m divided by £400,000 is roughly 182). This shows why everyone is understandably worried about the new fines, as the numbers are far more damaging to a business. However, as stated before, if your company’s data handling process is justifiable, along with its process for handling a breach, the likelihood is that a breach will not result in a fine.

A fine is not the only thing a company has to worry about when suffering a data breach; another thing to consider is the company’s share price. Publicly listed companies that had a data breach saw, on average, an immediate 5% drop in share price, reflecting the damage to the company’s reputation and resulting in a decrease in revenue. Response time is essential to limiting the damage a breach can do: with a strong security response, a company’s share price generally recovered within 7 days, but with a weak response it still hadn’t recovered after 90 days. Lastly, the company’s customer base can shrink because of its tarnished reputation; a 2-5% loss of customers could be expected after a breach (an average £2.08m to £3.07m loss).

One aspect to consider when looking at how you can prepare for the GDPR is your company’s technology stack, and the fact that GDPR compliance should be driven by a global business process. The regulation specifies that security should be “by design, and by default”, meaning that it should be included in the business process from the beginning and not just bolted on after a breach occurs. There are three things you can do to prepare your technology for the GDPR:

  • Prepare for compliance audit – To prepare for a compliance audit, IT teams should ensure they can effectively monitor their entire IT infrastructure including endpoint devices like PCs and printers. They should also schedule regular assessments to keep every endpoint device, including the entire printer fleet, in compliance with the policy.
  • Carry out a complete audit – IT teams must identify every device that can access their company and customer data and assess the level of security it has built in.
  • Embrace security by design – IT teams must put the right IT policies in place so that compliance requirements are not an afterthought but an intrinsic way that new devices and services are introduced into the network. Ensure you can monitor every device and feed anomalies or incident information into your network-wide vulnerability assessment and monitoring tools.
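To make the audit step above slightly more concrete, here is a deliberately minimal sketch of the kind of per-device check you might script. It records only the OS version and whether the system drive is encrypted; a real compliance audit would of course cover far more (printers, mobile devices, data flows, retention and so on). It assumes a Windows endpoint with the BitLocker PowerShell module available:

# Minimal per-device audit sketch: OS version and system-drive encryption.
# Run elevated; requires the BitLocker module (Windows 8 / Server 2012 and up).
$os = Get-CimInstance Win32_OperatingSystem
$bitlocker = Get-BitLockerVolume -MountPoint $env:SystemDrive

[pscustomobject]@{
    ComputerName   = $env:COMPUTERNAME
    OSVersion      = $os.Version
    LastBoot       = $os.LastBootUpTime
    DriveEncrypted = ($bitlocker.ProtectionStatus -eq 'On')
}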

When researching the GDPR it is common to come across sales pitches stating that by deploying encryption you will become GDPR compliant; some even state that through encryption alone you will be 70% compliant. While encryption is mentioned in the regulation, it is not offered as a solution, and the regulation gives no instruction on the type of encryption to use or where you should be using it. Encryption alone will not make you GDPR compliant. The GDPR is far-reaching, and complying with it is not just a technical challenge; it needs to be addressed as a business.

One way to help your technology become more secure is to move your data over to a cloud environment. There are steps you can take when moving into a cloud or hybrid cloud environment to support your business’s GDPR compliance. One option is a secure digital workspace, which is a flexible and integrated way to deliver and manage the apps, desktops, data and devices your users need in a contextual and secure fashion. A unified, contextual and secure digital workspace enables you to do all of this and realise the full benefits of hybrid- and multi-cloud environments while simplifying management and overcoming security challenges. A complete secure digital workspace must be exactly that: unified, contextual and secure.

The GDPR is not the end of the world. Embrace the change: if you have basic processes in place already, you are well on the way to being GDPR compliant anyway. It is simply best practice to protect your business data and your customers!




Using Skype for Business with a mandatory profile


I’ve had some email comments recently regarding Skype for Business 2016 with mandatory profiles. When you use Skype for Business and log in for the first time, it needs to install a personal certificate into the user profile. As those of you who have used mandatory profiles before will know, personal certificates can’t be used in mandatory profiles, as they are not intended to be shared. This means that users with mandatory profiles who try to use Skype for Business will be unable to sign in.

Technologies like Ivanti DesktopNow and Ivanti RES used various methods of profile spoofing to avoid this issue, but for simple implementations, adopting third-party technology isn’t really an option. People who use mandatory profiles for kiosk or access-area machines may well want to give users the option to sign in to Skype for Business, but also to purge the profile from the machine at log off.

There are a couple of articles from Microsoft that reference this issue, but no solution is offered (see this article for an example). However, it is possible to use Group Policy to achieve this.

The Windows operating system gets the profile type from a Registry value called State stored in HKLM\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\[SID] (where [SID] is the security identifier of the user). If State is detected as a DWORD decimal value of 5, it (usually) indicates a mandatory profile. By manipulating this value using logon and logoff scripts, we can trick the operating system into thinking the profile is non-mandatory during the session (allowing the Skype for Business certificate to be installed), but still purge the profile at logoff because the operating system sees the profile as mandatory again. There are a few steps needed to achieve this:

  1. Set the ACLs on the \ProfileList key

Users need to be given access to the ProfileList key in the Registry. The easiest way to do this is to use a Group Policy Object to set permissions for Authenticated Users. Set up a GPO and set the values under Computer Configuration | Windows Settings | Security Settings | Registry to the below

KEY – MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList

Once this GPO is applied and propagated, you should see Authenticated Users have Special permissions to that Registry key.
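If you’d rather script the permission change (or verify what the GPO has applied), a minimal sketch along these lines should grant the same access. The rights listed are my assumption of the minimum the scripts below need – test before using in production:

# Sketch: grant Authenticated Users the rights the logon/logoff scripts need
# on the ProfileList key. Run elevated; test before production use.
$keyPath = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"
$acl = Get-Acl -Path $keyPath
$rule = New-Object System.Security.AccessControl.RegistryAccessRule(
    "Authenticated Users",
    "QueryValues, SetValue, EnumerateSubKeys, ReadPermissions",
    "ContainerInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl -Path $keyPath -AclObject $acl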

2. Ensure “Logon script delay” is set to 0

This is the bit I missed out of the video and had to append to the end 🙂 From Server 2012 and up, logon scripts don’t run at logon; they run five minutes afterwards (yes, I know). So set the delay to 0 via Group Policy to make your logon scripts run when you expect them to. The policy is in Computer Config | Admin Templates | System | Group Policy and is called Configure Logon Script Delay – set it to 0.
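If you’d rather bake this directly into an image, the policy is generally reported to map to the registry values below (taken from GroupPolicy.admx as I understand it – treat the value names as an assumption and verify against your own ADMX files):

# Assumed registry equivalent of "Configure Logon Script Delay" - value names
# from GroupPolicy.admx; verify in your environment before relying on this.
$policyKey = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
# Enable the policy with a delay of 0 minutes so logon scripts run immediately
Set-ItemProperty -Path $policyKey -Name EnableLogonScriptDelay -Value 1 -Type DWord
Set-ItemProperty -Path $policyKey -Name AsyncScriptDelay -Value 0 -Type DWord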

3. Set up a GPO with logon and logoff scripts

You need to set up two PowerShell scripts, one for logoff and one for logon, and apply them via a Group Policy Object. The logon script should look like this:-

# Get the SID of the current user
$USERSID = ([Security.Principal.WindowsIdentity]::GetCurrent()).User.Value
# Path to this user's ProfileList entry
$key = "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\$USERSID"
# Read the profile State; if it is mandatory (5), flip it to our marker value
$state = (Get-ItemProperty -Path $key -Name State).State
if ($state -eq 5) {Set-ItemProperty -Path $key -Name State -Value 9000}

The script reads the user’s SID, reads the State value for that user’s profile, and if it is equal to 5, changes it.

Note we are setting the State value to 9000. The OS will still interpret this as non-mandatory, but it will be a specific value that couldn’t happen by accident. This is to ensure that when we are resetting the profile to mandatory at logoff, we don’t accidentally run it on a profile that wasn’t mandatory to begin with. Checking for this unusual value (9000) will make sure it only resets on accounts we’ve already changed.

The logoff script is very similar and should look like this:-

# Get the SID of the current user
$USERSID = ([Security.Principal.WindowsIdentity]::GetCurrent()).User.Value
# Path to this user's ProfileList entry
$key = "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\$USERSID"
# If the State carries our marker value (9000), restore it to mandatory (5)
$state = (Get-ItemProperty -Path $key -Name State).State
if ($state -eq 9000) {Set-ItemProperty -Path $key -Name State -Value 5}

Essentially it is just working in reverse, checking the State value and if it is 9000, resetting back to 5.

4. Deploy and test

Once these GPOs propagate, a user logging on with a mandatory profile should be able to use Skype for Business without getting a certificate error. I have recorded a video of the process in action here.
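If you want to sanity-check the switch, run something like this in the test user’s session – while they are logged on, State should read 9000:

# Read the current user's profile State; expect 9000 during the session
$sid = ([Security.Principal.WindowsIdentity]::GetCurrent()).User.Value
(Get-ItemProperty -Path "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\$sid" -Name State).State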


QuickPost: Multiple service failures on boot with no errors logged on Citrix XenApp servers


So, just a quick post to document an issue we experienced recently regarding service failures on boot, without any errors being logged, on Citrix XenApp servers.

The problem manifested itself, in this instance, on PVS targets running XenApp 6.5, although it can be replicated on other XenApp versions as well (and may well affect XenDesktop too, especially given that it is now the same code base), and doesn’t appear to be tied to anything to do with Provisioning Services. After an overnight scheduled reboot, we noticed that various critical services had stopped on the target devices. The most common ones are listed below:-

  • Citrix Independent Management Architecture
  • Citrix XTE Service
  • User Profile Service
  • AppSense User Virtualization Service
  • Sophos Antivirus Service
  • Network Store Interface Service

Now, I’m sure the more savvy amongst you can probably guess the culprit area straight away, but we didn’t quite grasp the correlation from the off. But one thing that was common to these service failures is that they were all of critical components. If the Network Store Interface Service didn’t start, the Netlogon service would fail, and the PVS target was unable to contact AD. If the Citrix or User Profile services failed, the server would be up but users totally unable to log on and use applications. If AppSense was down, policies and personalization would not be applied. Whatever failed, the net result was disruption to, or failure of, core services.

Another common denominator was the fact that in most cases, there was nothing written to the event logs at all. Occasionally you would see the Network Store Interface Service or the User Profile Service log an error about a timeout being exceeded while starting, but mainly, and almost exclusively for the Citrix and AppSense services, there was literally no error at all. This was very unusual, particularly for the Citrix IMA service, which almost always logs a cryptic error about why it has failed to start. All the other Citrix services could be observed starting up, but this one just didn’t log anything at all.

In keeping with the best principles of troubleshooting, we were aware we had recently installed the Lakeside SysTrack monitoring agent onto these systems – ironically enough, to work out how we could improve their stability. So the first step we took was to disable the service for this monitoring agent within the vDisk. However, the problems persisted. But if we fully uninstalled the Lakeside systems monitoring software and then resealed the vDisk, everything went back to normal. It appeared clear that the issue lay somewhere within the Lakeside software, although not necessarily within the agent service itself.

Now what should have set us down the right track is the correlation between the Citrix, AppSense, Sophos and User Profile services – they all hook processes to achieve what they’re set up for. We needed to look in a particular area of the Registry to see what was being “hooked” into each process as it launched.

The key in question is this one:-

HKLM\Software\Microsoft\Windows NT\CurrentVersion\Windows

And the value is a REG_SZ called AppInit_DLLs

What it does, in a nutshell, is cause every DLL specified in this value to be loaded by each Microsoft Windows-based application running in the current logon session. Interestingly, Microsoft’s own documentation on this (which is admittedly eleven years old) makes the following statement: “we do not recommend that applications use this feature or rely on this feature”. Well, it’s clear that advice is either wrong or widely ignored, because a lot of applications use this entry to achieve their “hooking” into various Windows processes.

In our instance, we found that the list of applications here contained Sophos, Citrix, AppSense and a few others. But more importantly, the Lakeside agent had added its own entry here, a reference to lsihok64.dll (see the detail from the value below)

lsihok64.dll c:\progra~1\appsense\applic~1\agent\amldra~1.dll c:\progra~2\citrix\system32\mfaphook64.dll c:\progra~2\sophos\sophos~1\sophos~2.dll

Now the Lakeside agent obviously needs a hook to do its business, or at least some of it. It monitors thousands of metrics on an installed endpoint, which is what it’s there for. But it seemed rather obvious that the services we were seeing failures from were also named in this Registry value – and that the presence of the Lakeside agent seemed to be causing some issues. So how can we fix this?

If you remove the entry from here, the Lakeside agent will put it back when it initializes. This is not a problem, but we need it never to be present at restart. There is an option to remove it entirely from within the Lakeside console, but this loses various aspects of the monitoring toolset. So how you approach the fix depends on whether you’re using a technology like PVS or MCS, that restores the system to a “golden” state at every restart, or your XenApp systems are more traditional server types.

If you’re using PVS or other similar technology:-

  • Open the master image in Private Mode
  • Shut down the Lakeside agent process
  • Remove lsihok64.dll from the value for the AppInit_DLLs
  • Set the Lakeside agent service to “Delayed Start”, if possible
  • Reseal the image and put into Standard Mode

If you’re using a more traditional server:-

  • Disable the “application hook” setting from the Lakeside console
  • Shut down the Lakeside agent process
  • Remove lsihok64.dll from the value for the AppInit_DLLs
  • Set the Lakeside agent service to “Delayed Start”, if possible
  • Restart the system

There is a caveat to the latter of these – with the “application hook” disabled from the console, you will not see information on application or service hangs, you won’t get detailed logon process information, applications that run for less than 15 seconds will not record data, and 64-bit processes will not appear in the data recorder. For PVS-style systems, because they “reset” at reboot, the agent hook will never be in place at bootup (which is when the problems occur), so you can allow it to re-insert itself after the agent starts and give the full range of metric monitoring.

Also, be very careful when editing the AppInit_DLLs key – we managed to inadvertently fat-finger it and delete the Citrix hook entry in our testing. Which was not amusing for the testers, who lost the ability to run apps in seamless windows!

Once we removed the hook on our systems and set the Lakeside service to “Delayed Start” (so that the Citrix, AppSense and Sophos services were all fully started before the hook was re-inserted), we got clean restarts of the servers every time. So, if you’re using Lakeside Systrack for monitoring and you are seeing unexplained service failures, either removing this Registry hook from the Lakeside console or directly from regedit.exe and then delaying the service start should sort you out.

Update – there is actually a second hook within the Registry that deals specifically with 32-bit processes on 64-bit platforms. You may need to remove the hook reference from here as well; the value is

HKLM\Software\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs
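If you’d rather script the removal than risk editing the values by hand (see the fat-finger warning above), a sketch like the one below covers both hook locations. The ‘lsihok’ match pattern and the commented service name are assumptions from our environment – check yours first:

# Sketch: strip the Lakeside hook from both AppInit_DLLs values, leaving the
# Citrix/AppSense/Sophos entries intact. The 'lsihok' pattern is an assumption
# from our environment - verify your own value contents before running.
$hookKeys = @(
    "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Windows",
    "HKLM:\Software\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows"
)
foreach ($key in $hookKeys) {
    $current = (Get-ItemProperty -Path $key -Name AppInit_DLLs -ErrorAction SilentlyContinue).AppInit_DLLs
    if ($current) {
        # Entries are space-separated; keep everything that isn't the Lakeside DLL
        $cleaned = ($current -split '\s+' | Where-Object { $_ -notmatch 'lsihok' }) -join ' '
        Set-ItemProperty -Path $key -Name AppInit_DLLs -Value $cleaned
    }
}
# Then set the agent service to delayed start (service name is hypothetical -
# check with Get-Service first), e.g.:
# sc.exe config "SysTrack Agent" start= delayed-auto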


Directing Citrix XenApp 6.5 or 7.x users to run their applications on specific servers (using Load Balancing Policies, or Tags)


Forcing users to execute XenApp applications on specific sets of servers is something you might want to do for a number of reasons. In my case, I primarily run into this requirement during phased migrations, but there are many situations that may push you towards it.

Often I do projects where components or software within the XenApp infrastructure are being upgraded, and customers wish to take a slow migration path towards it, to deal with issues as they arise, rather than a “big bang” approach. Take, for instance, the latest example of this I came across, where AppSense (Ivanti) DesktopNow was being upgraded for the whole Citrix farm. The customer wished to start by updating a small number of Citrix XenApp servers which would then get the new agents and point to the new database. A small number of users would migrate across, run their applications, and feed back any issues.

Over time, more users would be migrated and more servers pointed to the new Ivanti infrastructure, and as this happened, more XenApp servers would be moved over. Eventually, the “rolling” upgrade would finish, hopefully with all problems ironed out as they occurred. The idea was to reduce the impact to the business, to not swamp the IT department with migration issues, and to allow quick rollback if anything went wrong.

Of course, this all depends on whether you can force the “migrated” users to open their XenApp applications on the “migrated” servers, whilst the “non-migrated” users continue to use the “non-migrated” servers! Now, the first thought everyone has in this situation is simply – “duplicate the applications”. Duplicate all the apps, assign one set of applications to “migrated” and one set to “non-migrated” – easy enough, right?

Unfortunately, it can get messy, and with lots of applications there is often a lot of time and resource involved in the duplication anyway. I’ve seen enterprises where a lot of migrations and testing have left Citrix XenApp farms chock full of duplicated, redundant and orphaned applications. I’ve also seen farms where vigorous duplication has also duplicated keywords to lots of applications that shouldn’t have had them! In short – it’s cleaner, easier and less hassle in the long run if there were an easy way of maintaining one set of applications, but forcing subsets of users to run said applications on particular subsets of servers.

So how can we achieve this? It’s not as simple as setting up something like Worker Groups in XenApp 6.5, because even with two Worker Groups assigned to a single application, there’s no way to preferentially direct users to one or the other. We will look at this for both Citrix XenApp 7.x and Citrix XenApp 6.5, because I have had to do both recently, and it makes sense to document both ways for posterity.

Pre-requisites

Obviously, you can’t get away from the fact that you need to separate one set of users from the other! 🙂 So the first task is to set up two Active Directory groups, one for migrated users, one for non-migrated users, in this example. And also obviously – make sure there are no users that are members of both groups.

So, how do we achieve this?

XenApp 7.x

On XenApp 7.x, there is no native Worker Group functionality. What is present is a function called Tags, which can be used to create the same delineations between sets of machines in a site.

I’ve already set up a Delivery Group (called, imaginatively, Delivery Group 001) and added two VDA machines to it. I’ve also created a test application (cmd.exe) within the Delivery Group. But as it stands, publishing the application would run it on either of the VDAs within the Delivery Group.

First of all, we need to Tag the VDAs so that they are able to be treated as disparate groups. We do this by setting Tags for the machines in the Search area of the Citrix Studio console.

Right-click on the first machine, and choose the Manage Tags option. On the next dialog box, choose Create to set a new tag

Enter a name and, optionally, a description for the tag before clicking OK. Repeat this until you have as many tags as are necessary.

Now, apply the first “worker group” tag to the first server by checking the box next to it

Once you click Save the tag will now be applied to the machine. Apply the tags as necessary to all of your XenApp servers to separate them into what are now effectively “worker groups”

So now we have tagged the first machine, UKSLRD002 in this case, as belonging to “Worker group 1”, and the second machine, UKSLRD003, as belonging to “Worker group 2”.
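As an aside, the same tagging can be scripted with the Citrix PowerShell SDK rather than done through Studio. This is a sketch based on my understanding of the Broker cmdlets – the machine names are the ones from this lab, so adjust for your own site:

# Sketch: create the "worker group" tags and apply one to each VDA.
# Run where the Citrix snap-ins are installed.
Add-PSSnapin Citrix*

New-BrokerTag -Name "Worker group 1"
New-BrokerTag -Name "Worker group 2"

Add-BrokerTag -Name "Worker group 1" -Machine "DOMAIN\UKSLRD002"
Add-BrokerTag -Name "Worker group 2" -Machine "DOMAIN\UKSLRD003"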

We already mentioned that we have an application published to the Delivery Group, in this case cmd.exe

This application is obviously published to all the users of the Delivery Group, but we want to make sure that our users from the “Non-migrated users” group only run their applications on the first server, and the users from the “Migrated users” group only run their applications on the second server.

To do this we use Application Groups. Right-click on the Applications node and choose Create Application Group. After the initial screen, check the box to “Restrict launches to machines with tag” and select the first tag group we set up.

On the next screen, select the user group who will have access to the application through this group.

Finally, we need to add the application which we have already created to this application group.

Once you have set all this up, review the Summary, give the group a name, and click Finish.

Repeat the above process for the second server, but change the tag to the second “worker group” instead, and apply it to the second group of users.
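The Application Group steps can also be sketched in PowerShell. Application Groups arrived in 7.9, and the parameter names below are per my understanding of that SDK – verify against your version:

# Sketch: an Application Group restricted to a tag, with a user group and the
# existing published application attached (names are examples from this lab).
New-BrokerApplicationGroup -Name "Non-migrated cmd" -RestrictToTag "Worker group 1"
Add-BrokerUser -Name "DOMAIN\Non-migrated users" -ApplicationGroup "Non-migrated cmd"
Add-BrokerApplication -Name "cmd" -ApplicationGroup "Non-migrated cmd"

# Repeat for the migrated side
New-BrokerApplicationGroup -Name "Migrated cmd" -RestrictToTag "Worker group 2"
Add-BrokerUser -Name "DOMAIN\Migrated users" -ApplicationGroup "Migrated cmd"
Add-BrokerApplication -Name "cmd" -ApplicationGroup "Migrated cmd"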


Once the Application Groups are set up, you should now be able to launch the applications from Storefront and see them directed to the required server, irrespective of load. So now you know why I chose cmd.exe as the test application, so I could grab the server name easily enough! 🙂 Here we see user jrankin, who is in the non-migrated users group, and every time they launch the published application it is running on the server from the first “worker group” we set up using tags

And naturally when you log in as the jrankin2 user which is in the migrated users group and run the same application, it launches on the other server

So there it is – in XenApp 7.x, you can use tags and application groups to replicate Worker Group functionality, and have specific groups of users launching the same application on specified groups of servers.

XenApp 6.5

There’s still a lot of XenApp 6.5 out there in the wild, so it makes sense to discuss how to do this in the older IMA version of the product suite as well.

It’s a lot simpler on XenApp 6.5 – firstly, it still has the direct “Worker Group” functionality that is somewhat hidden in XenApp 7.x. Create two Worker Groups and assign the servers to them as required.

Our test application (again, cmd.exe) should be published to both Worker Groups

Next, we need to set up Load Balancing Policies (not to be confused with Load Evaluators) to direct the users to the required server. These are accessed from the Load Balancing Policies area of the AppCenter console

Create a load balancing policy and give it an appropriate name

Set the Filters to Users, enable it, and match it to the AD group created earlier

Now apply the Worker Group Preference to the required Worker Group

Click on OK to save it.

Now repeat the process, but this time set the Filter to the other user group, and the Worker Group Preference to the other Worker Group.
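For reference, the XenApp 6.5 SDK can script the same policies. I haven’t re-verified every parameter name here, so treat this strictly as a sketch against the Citrix.XenApp.Commands snap-in:

# Sketch only - cmdlet and parameter names per my recollection of the XA 6.5 SDK.
Add-PSSnapin Citrix.XenApp.Commands

New-XALoadBalancingPolicy -PolicyName "Migrated users"
# Filter the policy to the AD group, then prefer the migrated Worker Group
Set-XALoadBalancingPolicyFilter -PolicyName "Migrated users" -AccountFilterEnabled $true -AllowedAccounts "DOMAIN\Migrated users"
Set-XALoadBalancingPolicyConfiguration -PolicyName "Migrated users" -WorkerGroupPreferenceEnabled $true -WorkerGroupPreferences "1=Migrated Worker Group"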

These policies will apply to any application that the user launches, which is the main difference between this and the XenApp 7.x implementation.

So, when the user hits Storefront and launches the application, we should see user jrankin from the non-migrated users group launch the application on the first XA 6.5 server (UKSLXA003)

And every time user jrankin2 from the migrated users group launches the application, it will launch on the migrated server (UKSLXA004)

Summary

So, we should now be able to route our users to specific servers from single instances of applications, without having to duplicate those applications and create a mess for ourselves in the future.

You can also use both these methods to do other things, such as route sessions to a specific datacenter in an active-active configuration, and probably a lot of other uses you can think of. I never really dug too deeply into Load Balancing Policies and Tags/Application Groups previously, but they are very useful features that you can use to avoid extra work within your environment.

I should be recording a video on this very soon, I will update this post with a link when completed.



Windows Virtual Desktop Goes Live (and HTG was there to see it!)


Last week HTG had the pleasure of being one of only a small number of UK-based Microsoft Gold Partners invited to the first global Technical Airlift for the newly launched Windows Virtual Desktop (WVD) in Seattle, and what a week it was.

Rather than wax lyrical in a single post, and as there is so much to share, over the coming weeks we’ll publish a series of articles covering everything from the basics of what WVD is, its benefits and licensing requirements, to how to deploy it and more.

But for now, here’s a very quick summary of just what WVD is….

WVD is not just Microsoft’s new Azure-hosted DaaS service. That is to say, they haven’t simply taken Windows 10 and made it available as an IaaS VM – that offering has been available for a while. Instead, in creating WVD, Microsoft has taken Windows 10 and, for the first time, layered into it a true multi-session experience, meaning you no longer have to deploy and maintain a VM per user or make do with a ‘Windows Server with a desktop skin’ environment. With WVD, Windows 10 now includes the Remote Desktop Services (RDS) technology that has underpinned multi-session environments such as Citrix XenApp, and more recently Virtual Apps and Desktops, for years (pause, more on Citrix+WVD later). This means you can deploy one or more pooled Windows 10 WVD instances, either from the Azure Marketplace or from your own gold build with your standard app stack pre-installed on an appropriately sized VM, and have those presented to your users via the newly developed dedicated clients for Windows, Mac, Linux and mobile OSs, or via any HTML5-capable web browser.

But that’s not all: Windows 10 WVD now includes the award-winning and much-applauded FSLogix profile management technologies for an optimised Office 365 ProPlus experience, but more on that in later posts!

Here are the topics we’ll cover in the upcoming series:

  1. Windows 10 Multi-Session In-Depth
  2. Licensing Requirements for WVD
  3. Correct VM Sizing
  4. Location and Latency, and how to get the best experience
  5. WVD and FSLogix Profile Management In-Depth
  6. WVD Identity Management – Active Directory and Azure AD
  7. App Assure for Windows 10 application compatibility testing
  8. App Attach – A new way of managing apps
  9. Citrix and WVD

Keep checking back for the latest articles in the series, and if you have any questions about WVD and how it can benefit your business please don’t hesitate to get in touch at wvd@htguk.com

PS – Don’t miss out on the HTG Future of Work event in Newcastle on October 17th where Microsoft WVD will be a key topic covered by Microsoft’s Jim Moyle, places are filling fast so register here to avoid disappointment: https://g.co/kgs/nXikBU

Thanks,
Dean Lawrence
Principal Consultant, HTG


How to configure and protect your end-user devices using Microsoft Endpoint Manager


Throughout 2019, HTG saw a rapid rise in customers who adhere to a cloud-first strategy and thus have little to no on-premises services. They ask us how they should configure and protect their end-user computing devices in situations where a) they don’t have the use of traditional on-premises services such as Active Directory, Group Policy and SCCM, and b) they have all of their corporate data in Office 365 but their staff are using poorly managed Windows 10 devices. More often than not, our answer is Microsoft Intune, or to use its newly rebranded moniker, Endpoint Manager.

What is Microsoft Endpoint Manager?

Put simply, Microsoft Endpoint Manager is a product born from the marriage of Microsoft Intune and Configuration Manager (SCCM). Until recently, Microsoft had made it clear that SCCM, its long-standing stalwart of on-premises device management, was on notice – that is to say, it would no longer be developing it. Instead, it would slowly migrate all the functionality into Intune, its fully cloud-based sister product. However, at the annual Ignite conference in November 2019, Microsoft announced something of a u-turn: it would no longer be retiring SCCM in favour of Intune, and would instead amalgamate the two products into what we now know as Endpoint Manager.

Endpoint Manager provides a vast array of services, protects not only Windows devices but macOS, iOS and Android as well, and is closely integrated with Azure AD, as you would expect. However, rather than provide an overview of the product, this article will address, and hopefully clarify, one of the common questions we get from our customers (especially those who have attempted to deploy the product themselves): what are the differences between Configuration, Compliance and Security Policies, and which one should be used to secure devices?

So, let’s dig into it. I’ll cover each policy type in turn and in an order that should hopefully help tie together the relationship between the policies.

What are Configuration Policies?

The best way to think of a Configuration Policy is as Endpoint Manager’s implementation of Group Policy. In fact, Microsoft has engineered Configuration Policies in such a way as to allow you to import and utilise ADMX files in the same way you would in on-premises Group Policy.

Configuration Policies are therefore what you would use to apply predefined settings to a user or device, such as homepage and other browser settings in IE and Edge (and even Chrome and other browsers, but that’s for another blog!), or to set a custom wallpaper on the Windows 10 lock screen. Like Group Policy, Configuration Policies can be applied to a targeted set of users or devices using groups within Azure AD.

What are Security Policies?

Put simply, Security Policies, or Security Baselines as they are interchangeably referred to, are pre-configured Windows settings that help you apply a known group of settings and default values recommended by Microsoft. When you create a security baseline, you’re creating a template that consists of multiple Configuration Policies.


Microsoft routinely releases new Security Baselines: thorough, pre-defined sets of policies that can be quickly and easily deployed to secure your environment. That is not to say you shouldn’t apply your standard test and release management processes to them.

What are Compliance Policies?

Compliance Policies tie Configuration and Security Policies together, and then apply an additional layer of protection over not only the device or user but other company resources such as SharePoint sites.

Compliance Policies are used to evaluate a device’s compliance against a pre-defined baseline, such as the requirement for a device to be encrypted or to be on a defined minimum OS version – especially useful with Windows 10 to stop devices falling too far behind on major updates.

Compliance Policies are often deployed alongside Conditional Access Policies, which control what a device can and cannot access should it be deemed as non-compliant, for example, non-compliant devices can be blocked from accessing corporately owned data.
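Compliance policies are also exposed through Microsoft Graph, so you can audit what is deployed programmatically. A minimal sketch, assuming you already hold an access token for an app registration granted DeviceManagementConfiguration.Read.All:

# Sketch: list Intune compliance policies via Microsoft Graph.
# $token is a placeholder - obtain one via your preferred auth flow.
$token = "<access token>"
$headers = @{ Authorization = "Bearer $token" }
$uri = "https://graph.microsoft.com/v1.0/deviceManagement/deviceCompliancePolicies"
$policies = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
# Show each policy's display name and concrete type
$policies.value | Select-Object displayName, '@odata.type'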

Summary

The key takeaway of this article is that each policy type, when individually configured and deployed correctly, can add great value in securing a plethora of OS and device types. When configured and deployed together, however, they can not only enforce an entire collection of settings championed by Microsoft but also provide the assurance that, should a device fall foul of the required compliance baseline, that device and its user will not be able to access company resources and potentially, if inadvertently, open the company up to malicious exploit.



Understanding Intune Policies


This blog post will address, and hopefully demystify, a topic I struggled with when first starting out with Intune (or Endpoint Manager, to use its new moniker): the difference between Configuration, Compliance and Security Policies, and the scenarios in which to use them. So, let’s dig into it. I’ll cover each policy type in turn, in an order that should help tie the relationship between the policies together.


Configuration Policies

The best way to think of a Configuration Policy is as Intune’s implementation of Group Policy. In fact, Microsoft has engineered Configuration Policies in such a way as to allow you to import and utilise ADMX files in the same way you would with a traditional Group Policy Object.


Configuration Policies are therefore what you would use to apply predefined settings to a user or device, such as defining a set homepage or other browser settings in IE and Edge (and even Chrome and other browsers, but that’s for another blog!), or enforcing a custom desktop wallpaper or lock screen behaviour in Windows 10. Like Group Policy Objects, Configuration Policies can be applied to a targeted set of users or devices using groups within Azure AD.


Security Policies

Security Policies, or Security Baselines as they are interchangeably referred to, are pre-configured Windows settings that help you apply a known group of settings and default values recommended by Microsoft. That is to say, when you create a security baseline, you’re creating a template that consists of hundreds of individual Configuration Policies.


Microsoft routinely releases new Security Baselines: thorough, pre-defined sets of policies covering all facets of the target technology, such as Windows 10, that can be quickly and easily deployed to secure your environment.

Note: Security Baselines are extremely exhaustive, and I would advise caution over adding them without careful testing. They are, however, extremely useful for locking down an environment to a given standard quickly.

Compliance Policies

Compliance Policies are used to evaluate a device’s compliance against a pre-defined baseline, such as the requirement for a device to be encrypted or to be within a defined minimum OS version.


Compliance Policies are a good tool for alerting on configuration drift, and when deployed alongside Conditional Access Policies they can control what a device can and cannot access should it be deemed non-compliant; for example, non-compliant devices can be blocked from accessing corporately owned data.


Summary

Each policy type, when deployed correctly on its own, can add great value in securing a plethora of OS and device types. When configured and deployed together, however, they can not only enforce an entire collection of settings championed by Microsoft but also provide the assurance that, should a device fall foul of the required compliance baseline, that device and its user will not be able to access company resources and potentially, if inadvertently, open the company up to malicious exploit.

Finally, I’d highly recommend following Intune Training on YouTube where Steve and Adam (and others) share some great content on all things Intune.


I also maintain a List on Twitter for the key folk I follow in the MDM space, feel free to follow that here.


6 IT expert Twitter accounts you need to follow in 2021


“…a wealth of information creates a poverty of attention…” ― Herbert A. Simon, American economist, political scientist and cognitive psychologist.

I wonder if Herbert knew just how much information we’d have at our fingertips, today.

The internet and social media are jam-packed with IT experts wanting to share their opinions, advice and knowledge with you. If you follow the right people, you’ll get a steady stream of valuable IT insights. The question is, how do you find these professionals?

With around 330 million monthly active users and 145 million daily active users on Twitter alone, it can be difficult to decide who deserves your time and attention.

Six must-follow IT experts on Twitter



1. Richard Corbridge

Richard is the Chief Information Officer at Boots. He is an expert in healthcare strategy and technology. He has over 20 years’ experience in the Health and Clinical Research Information sectors. Richard has a keen interest in healthcare business change and benefits management and focuses on customer experience, engagement and other advantages that come from implementing technology.

2. Kimberly Bryant

Kimberly is the founder of Black Girls Code. Her goal is to increase the number of women of colour in the digital space by teaching girls about computer science and technology. She talks about how minorities have helped shape technological advancements and the importance of amplifying their innovations for the future.

3. Chris Skinner

Chris is one of the most well-known technology influencers in the FinTech industry. He provides insights on his blog, thefinanser.com. He’s a bestselling author and his latest book, ‘Doing Digital’, shares digital transformation lessons learned through interviews with BBVA, China Merchants Bank, DBS, ING and JPMorgan Chase.

4. Kara Swisher

Kara Swisher is the co-founder and editor-at-large of Recode, and is the producer and host of the Recode Decode and Pivot podcasts. She talks about disruptive technology and is a contributing opinion writer for The New York Times. Kara also hosts the new New York Times podcast, Sway.

5. Werner Vogels

Amazon’s Vice President & Chief Technology Officer, Werner Vogels, covers a broad range of tech topics on his blog All Things Distributed. He’s the man responsible for pushing Amazon’s tech innovation on behalf of Amazon’s customers at a global scale.

Werner’s keynotes are well known and a ‘must-attend’ at AWS re:Invent. The 2020 session is available to view on YouTube now.

6. Laura Dawson

Laura is the Chief Information Officer at the London School of Economics and a trustee of Charity IT Leaders. Previously, she was CIO at the British Council, and she tweets about technology in the charity sector, digital transformation and leadership.



These six IT professionals post meaningful content often enough to inspire interest and sustain valuable conversations with their social followers. They are great examples of how IT experts can share their knowledge effectively and harness the power of thought leadership on social media.

Talk to the friendly IT experts at HTG

For even more industry-leading IT expertise (if we do say so ourselves), stay up to date with the team here at HTG. You’ll find us sharing our knowledge on our blog, over on Twitter, and on LinkedIn. If you prefer a direct line, get in touch.


A place at the top table: why you need EUC specialists to guide your IT roadmap


After the expedited digital transformation of 2020, you know how difficult it can be to navigate your IT evolution at speed – but also that sometimes, it’s necessary. In the case of remote working, respondents to a McKinsey survey said their companies moved 40 times faster than they thought possible before the pandemic. End User Computing (EUC) is at the heart of that digital acceleration.

For many businesses, this process was no mean feat. The main challenges reported were that IT infrastructure was insufficient, or that organisational silos impeded commitment to – and execution of – the required changes.

In this blog, we’ll take a look at how you can avoid those challenges by hiring an EUC specialist to design your IT roadmap. Here’s why they should be involved.

The benefits of having an EUC specialist

An unmanaged and uncontrolled EUC strategy can be a significant source of problems for businesses. Organisations often start down the wrong path without realising it. Often, it’s because they don’t know enough about implementing EUC, and find it hard to think in those terms.

It’s crucial to have an expert on board to guide the strategy. For your EUC strategy to run smoothly, you need someone who can:

  • Assess application usage and determine how important each application is
  • Support your migration strategy from legacy applications into new cloud-based environments
  • Perform functionality testing with end-users
  • Pre-empt user issues, and resolve them when they arise

While you might think doing those tasks yourself will save money on salary, there are some specific benefits that come with hiring a specialist:

Secure and flexible infrastructure

There’s no ‘one-size fits all’ when it comes to EUC. By tailoring a strategy to your business, an EUC specialist will create the kind of infrastructure flexibility and security that your business needs. In a recent blog post, Brian Madden from VMware mentions the importance of said flexibility and how, for some, it ‘…will be expressed by going “all in” to the public cloud. For others, it will be doubling-down on on-prem infrastructure… And for others still, flexibility means a hybrid approach.’

An EUC specialist experienced in cloud, on-prem and hybrid solutions will be able to identify the best course for your business to take – and will ensure any transition or migration is as secure as possible. Get a great EUC specialist on board, and they’ll do this without sacrificing the end-user experience.

Reduced EUC risk

Research provider Chartis estimated that the EUC Value at Risk for the 50 largest financial institutions was over £8.9 billion. This highlights the degree to which EUC risk can impact a business. Beyond financial loss, EUC risk can lead to a number of other worst-case scenarios: the massive data blunder in NHS contact tracing last year was a stark reminder of the consequences.

Evidently, there’s much at stake when it comes to EUC risk. It’s critical that businesses assess, review and manage this risk accordingly. An EUC specialist can guide you through these steps and spot the small but fast-compounding risks to your business that others might miss.

Competitive advantage

In an unstable, unpredictable economic landscape, the ability to change and adapt to new requirements defines an organisation – and determines whether you retain or lose your competitive edge.

Achieving the required level of desktop mobility will often require new, cloud-optimised infrastructures. Once you’ve got the infrastructure, you can scale on-demand with simple, automated processes and tools. An EUC specialist with multiple partners will have the right knowledge, connections and experience with these tools to determine which solution is right for your business and employee needs.

Mobility is the NOW, not the future

Investing in your EUC strategy today will help you increase your organisation’s adaptability for the future.

With 82% of company leaders planning to allow employees to work remotely at least some of the time, you’ll need to get your ducks in a row. The strategy you pulled together last March may have served your business in the short term, but you’ll need something more robust to stand the test of time.

An EUC specialist will ensure you build a more secure, nimble and user-friendly IT infrastructure. Want to learn more? Here at HTG, we align EUC solutions and strategies with tangible business outcomes. For more information on how we can help you, get in touch.

The post A place at the top table: why you need EUC specialists to guide your IT roadmap appeared first on HTG.

3 business benefits of virtual desktops (and the saving grace of WVD)


The unpredictable pandemic only accelerated an entirely predictable trend: the rise of remote work. In March, as organisations scrambled to adapt, it became clear that securely managing access to IT would be a challenge and business continuity was at risk. Or rather, it would have been if Windows Virtual Desktop (WVD) hadn’t been released about six months earlier.

WVD (and other virtual desktop solutions) have proved to be a saving grace for IT departments the world over. Their popularity has grown alongside that of remote work: some sources report that there were six times more WVD users than expected in 2020. According to research conducted by Forrester, those users have seen massive improvements to cost efficiency and productivity.

Cloud-based virtual desktops are to traditional IT infrastructure what remote work is to the office – the safer, more efficient future. Here’s how:

1. Productivity

That same Forrester survey found that businesses adopting WVD saw an annual productivity increase of 22 hours per user. How is that possible? When it comes to the benefits of virtual desktops, it’s probably less about gaining those hours and more about reclaiming them through centralised IT management.

ComputerWorld’s Steven J. Vaughan-Nichols highlights the inefficiency of a traditional, decentralised approach in decidedly un-traditional circumstances: ‘Most companies are dealing with the astronomical rise in telecommuting by trying to manage Windows 10 users remotely. But it hasn’t been pretty. To quote a sysadmin friend of mine, “I’ve had about a billion calls on how to use the VPN, and don’t talk to me about securing and patching Windows 10 remotely.”’

The old remote management model doesn’t work at volume. Timely scalability is a pipe dream because attempting to troubleshoot Windows 10 issues remotely on top of VPN access provision is an efficiency vacuum. It ties up IT managers and holds employees back from getting work done. Yet it’s completely avoidable with a centrally-managed, cloud-based Virtual Desktop Infrastructure (VDI).

2. Familiarity

It takes an average of 66 days for a behaviour to become automatic. For many of us, using the OS we’re familiar with has become second nature, and we’re able to reap the productivity rewards that come with that. We’re advocates for a more modern approach to remote IT, but we’re not advising a behaviour change for end-users. According to Microsoft, over one billion devices run Windows 10. Let’s not create a tidal wave of lost productivity by forcing their owners to learn a new interface.

WVD has proved popular because it presents the end-user with the same interface they’re used to. It’s the most seamless option for shifting your IT infrastructure to the cloud and centrally managing your employees’ IT. For all intents and purposes, nothing about the actual use of their OS has changed (aside from the fact that it’s more secure and has fewer connectivity issues).

3. Security

56 percent of people have been using their personal devices to work remotely. That’s a significant number of potential leaks in your data security bucket, especially if you’re relying on decentralised IT and allowing employees to store company data at home.

Virtual desktops allow for near-instant access provision, but they also allow managers to rapidly deny users access in the face of security threats. Centralised VD management also gives IT professionals the benefit of security and connectivity reports that reflect the entire company’s activity, making it far easier to track down and plug any potential breaches.
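
To illustrate how fast revocation can be in practice, here's a hedged sketch that removes a user's access to a Windows Virtual Desktop application group by calling the Azure CLI from Python. WVD access is granted through the 'Desktop Virtualization User' role on the application group; the account, subscription, resource group and group names below are placeholders.

    # Sketch: revoke a user's WVD access by deleting their role assignment.
    # Assumes the Azure CLI ('az') is installed and already authenticated.
    import subprocess

    def revoke_wvd_access(user_upn, app_group_scope):
        # Deleting the 'Desktop Virtualization User' assignment on the
        # application group removes the user's access to its desktops/apps.
        subprocess.run([
            "az", "role", "assignment", "delete",
            "--assignee", user_upn,
            "--role", "Desktop Virtualization User",
            "--scope", app_group_scope,
        ], check=True)

    revoke_wvd_access(
        "leaver@example.com",  # placeholder account
        "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
        "Microsoft.DesktopVirtualization/applicationGroups/<app-group>",
    )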

Windows Virtual Desktop and the future of work

If the future of work is remote – and there’s a good amount of evidence to suggest it will be – IT infrastructure will have to evolve. If it doesn’t, ‘remote’ will become a byword for ‘insecure and unproductive’.

To preserve any semblance of business continuity in the face of instability, businesses will need to be able to scale their IT up and down without sacrificing security. That’s a tall order, but virtual desktop systems like WVD have been meeting it for over a year now.

The post 3 business benefits of virtual desktops (and the saving grace of WVD) appeared first on HTG.

International Women’s Day 2021


The UN’s theme for International Women’s Day 2021 is “Women in leadership: Achieving an equal future in a COVID-19 world.” The theme celebrates the tremendous efforts by women and girls around the world in shaping a more equal future and recovery from the COVID-19 pandemic.

But, despite the progress that’s been made, there is still significant work to be done. Here are five facts about women in tech we should be talking about this International Women’s Day.

  1. “None of us will see gender parity in our lifetimes, and nor likely will many of our children.” The WEF (World Economic Forum) found that the gender pay gap will take roughly 257 years to close, up from the 202 years it predicted in 2018. This is even more sobering given that, unless more women are encouraged to enter fields such as science, technology and engineering, the gap could widen further. The same WEF report found that the UK has slipped from 15th to 21st place, leaving it behind a few developing countries and most rich ones.
  2. In one survey, only 27% of female students said they would consider a career in technology, compared to 61% of males – and only 3% said it was their first choice. Over 25% of female students said they’d been put off a career in technology because it’s too male-dominated, and only 22% of students could name a famous #womanintech, compared to over 66% who could name a man in the tech industry.
  3. A study carried out by the WEF showed that among professionals in fields such as artificial intelligence, women only make up 22 per cent of roles, creating a gender gap three times larger than other industries. “In an era when human skills are increasingly important and complementary to technology, the world cannot afford to deprive itself of women’s talent in sectors in which talent is already scarce.”
  4. One study found that women in the industry are more likely to be in roles termed as ‘execution’ roles, which are generally non-technical. Men, on the other hand, are more likely to be assigned the more technical ‘creator’ roles. For instance, the top position for women in tech is Project Manager, whereas the top position for men in tech is Software Engineer.
  5. According to data from the European Union’s statistics agency, Eurostat, male graduates in science, mathematics, computing, engineering, manufacturing and construction outnumber female graduates almost two to one.
    Even though more women than men now go into higher education, the same is not true of STEM subjects. The rate of women graduating in STEM subjects has been slowly increasing, which is encouraging, but it’s not enough. According to UCAS data, just 35% of STEM students in UK higher education are women.
    As digital transformation drives up the number of STEM programmes, the number of male graduates has grown with it – meaning little has changed in the overall representation of female STEM graduates.

Find out more about International Women’s Day and how to support the cause, here.

The post International Women’s Day 2021 appeared first on HTG.

The virtual desktop is dead: Long live the virtual desktop


In early 2020, Reuters reported a major spike in IT hardware sales:

‘With more employees working from home to help slow the spread of coronavirus, demand is surging for laptops and network peripherals as well as components along the supply chain…as companies rush to build virtual offices.’

That was in March, months before a record-high 38 percent of UK employees were working from home. While that number has since dropped, the experience has convinced many to adopt flexible working in the long term.

The pandemic accelerated an already-developing paradigm shift in the workplace. It pushed businesses to prioritise adaptability, flexibility and mobility for their employees. As a result, virtual desktop infrastructure (VDI) was one of the pillars of business continuity throughout 2020.

Why are we claiming that it’s dead, then? It has something to do with that spike in laptop sales back in March.

Thick and thin

In many offices, thin clients (basic computers designed to provide access to an operating system via a server) were the traditional choice. The server did all the real processing, and the computer was more of an access point.

Thin clients still have their place as desktops for basic line-of-business (LOB) applications, but they don’t meet the requirements of a newly-remote workforce that leans heavily on video conferencing. Most don’t have a camera, built-in microphone or the graphical power needed to run Microsoft Teams or Zoom. Even if they did, though, there would still be a problem.

Thin clients served well when everyone had access to strong, consistent internet connections in office settings. Employees are now having to share WiFi with partners and children. Servers would have to be a lot more powerful than they usually are to run video software through VDI.

It’s become increasingly clear that the end-user’s client has to become ‘thicker’, and to take on some of the processing load locally. That begins to explain some of those laptop sales. Most of our customers have deployed laptops to replace thin clients and plug the processing gap, allowing us to implement a ‘hybrid approach’ that’s more than the sum of its parts.

VDI another day

The shift towards thick clients doesn’t eliminate the need for virtual desktops. In fact, it makes them more valuable than ever.

To navigate processing challenges and enable collaborative remote work, we’ve found that virtual desktops are an indispensable part of a hybrid whole. By running more demanding software locally, employees make the most of more powerful laptop hardware. Critical, proprietary, or security-sensitive LOB applications are delivered via VDI.

Thin clients were secure because there was no sensitive data stored on the device itself; it was all in the server. That’s incredibly important from a data loss prevention perspective.

Laptops present a far greater security risk for remote teams – especially if employees are using their own. Sectioning off LOB applications in the virtual desktop while running Teams calls directly from the laptops is a ‘best of both worlds’ approach. That’s why VDI is still so valuable, and why (as we mentioned in another post), Microsoft saw six times more WVD users than they expected in 2020.

No going back?

Do thin clients still have their role to play when people can go back into the office, or will thick clients prevail? When it comes to the future of work, laptops and personal desktops managed with a hybrid approach look to be the more practical option.

After all, research suggests that employees are more productive when allowed to use their own devices, and productivity is good for business continuity. It’s unlikely that staff will want to return to less functional, less familiar thin clients.

Whether businesses adopt BYOD policies or provide laptops for their staff, managed VDI can work in tandem with locally-processed software to make remote work a viable option. End-users may be accessing virtual desktops on different devices, but virtual desktops don’t seem to be going anywhere but up.

The post The virtual desktop is dead: Long live the virtual desktop appeared first on HTG.

3 reasons why your digital transformation roadmap will fail at the first hurdle


If the last year has taught us anything, it’s that your business needs to expect the unexpected. In 2020, many of our customers’ digital transformation roadmaps followed the same three steps:

  1. Enable remote working for all staff as soon as possible
  2. Improve hastily put-together remote working plans
  3. Implement fully optimised remote working policies and processes (e.g. remote onboarding and offboarding).

This may seem simple on paper, but 70 percent of digital transformations fail.

With expert advice from the right partner, you can prevent your process from becoming just another statistic.

Here are the main ways your digital transformation can fail and our advice on how to avoid these pitfalls.

1. There’s no plan

We cannot stress this enough – without an effective plan, your digital transformation will fail.

If you don’t outline the scope and manage expectations, no one will know what you are trying to achieve. Without a clear purpose, you run the risk of doing potentially expensive busywork that brings no real-world benefit.

A comprehensive plan enables you to get buy-in from your entire business. And if everyone is on the same page, you’re more likely to succeed.

To prevent any scope creep down the line, we also recommend appointing a transformation leader – an internal senior champion. They help to keep the project on track by reminding people of what you are trying to achieve, and why.

2. Technical solutions overshadow business outcomes

Every activity you carry out should be aligned with overarching business goals. If technical solutions lead your digital transformation, rather than business outcomes, it will not succeed.

After all, digital transformation isn’t something you just ‘do’. You need to know the implications for your business.

For example, although getting 50 new laptops to your teams might be a vital step in your transformation efforts, that’s not the point. What you’re actually doing is enhancing customer and end-user experience by using modern technology.

Establish what you want to achieve through digital transformation, then explore technical solutions. After all, technology is a tool, not an end goal.

3. Running before you can walk

We love ambitious businesses. But, it’s important you don’t get ahead of yourself when carrying out any workplace modernisation.

You may have a plan for your digital transformation; you may know what you want to achieve, and why. But remember, this is a nuanced process, not a ‘one-size-fits-all’ solution.

So, before you take any steps in your digital transformation roadmap, you need to look inward.

Carry out a ‘pre-flight’ check on your technology. Check legacy systems, hardware, software and anything that may require attention during your process. See where your current pain points lie when it comes to your business technology. Then, codify these pain points and use them to inform your plan and the goals you want to achieve through your digital transformation.

This ensures you will always be addressing real needs in an objective, data-driven way.
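
As an illustration, a first pass at that pre-flight check can be automated. The sketch below flags devices in a hypothetical inventory.csv (columns: device, os_version) that aren't on an OS build you still support – the file format and the supported list are assumptions, so substitute your own asset data.

    # Illustrative pre-flight check: flag devices on unsupported OS builds.
    import csv

    SUPPORTED_OS = {"Windows 10 20H2", "Windows 10 21H1"}  # example policy

    def preflight_check(inventory_path):
        flagged = []
        with open(inventory_path, newline="") as f:
            for row in csv.DictReader(f):
                if row["os_version"] not in SUPPORTED_OS:
                    flagged.append((row["device"], row["os_version"]))
        return flagged

    for device, os_version in preflight_check("inventory.csv"):
        print(f"Needs attention: {device} is running {os_version}")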

Expert transformation requires expert advice

At HTG, we have more than 25 years of experience in planning and implementing effective digital transformations. So, we understand what’s required to achieve a smooth transformation that uses technical solutions to support business outcomes.

We hope that our insights help you in your transformation efforts. Avoid potential pitfalls; speak with our team of experts.

The post 3 reasons why your digital transformation roadmap will fail at the first hurdle appeared first on HTG.


How hybrid cloud solutions could be the answer to balancing your books


As the popularity of cloud computing grows, so does the range of cloud solutions available to you. What’s known as a ‘hybrid’ approach is becoming more and more prominent, and for good reason: it can deliver measurable cost savings, flexibility, agility and scalability at speed. It’s become so popular that the Flexera 2021 State of the Cloud Report found that around 80 percent of enterprises already have a hybrid cloud strategy.

So, why are hybrid cloud solutions so widespread? When implemented correctly they offer your business the best of both the private and public cloud. Let’s take a look at a quick definition, then break down how a hybrid solution could be the answer to balancing your books.

What is a hybrid cloud?

A hybrid cloud is composed of a combination of private (on-premise infrastructure) and public clouds. It allows businesses to implement a hybrid strategy. This involves some workloads being managed in the public cloud, and sensitive data and business-critical applications being processed in a secure private cloud. Hybrid solutions are designed to meet client needs and/or regulatory requirements.
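
As a toy illustration of that split, the sketch below routes workloads to a public or private target based on simple sensitivity flags. The rules are entirely hypothetical – in practice they'd be derived from your data governance and regulatory requirements.

    # Toy sketch of hybrid cloud placement driven by data sensitivity.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        handles_personal_data: bool
        business_critical: bool

    def placement(w):
        # Sensitive or business-critical workloads stay on the private cloud;
        # everything else can run on cheaper public capacity.
        if w.handles_personal_data or w.business_critical:
            return "private"
        return "public"

    for w in (Workload("CRM database", True, True),
              Workload("Marketing site", False, False)):
        print(f"{w.name} -> {placement(w)} cloud")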

How can a hybrid cloud solution save your business money?

The hybrid cloud approach allows you to easily scale your computing resources, as well as reduce costs and make local resources available for your company to store more sensitive data or applications. Here’s how it can save you money:

Less upfront investment

Hybrid clouds are scalable without excessive upfront investment. You can manage workloads and resources across multiple cloud instances or vendor services. This enables your business to access virtually unlimited capacity in the public cloud in the event you experience a sudden surge in your computing needs. That means you won’t need to invest in large-scale in-house servers to increase capacity for temporary peaks in demand.
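
A back-of-the-envelope sketch shows how that 'burst' works: only the overflow beyond your on-prem capacity needs public cloud instances, and only while the spike lasts. The capacity figures here are invented for illustration.

    # Toy calculation: how many public cloud instances a demand spike needs.
    ON_PREM_CAPACITY = 200         # concurrent sessions on-prem can handle
    SESSIONS_PER_INSTANCE = 25     # hypothetical per-instance sizing

    def instances_to_burst(expected_sessions):
        overflow = max(0, expected_sessions - ON_PREM_CAPACITY)
        # Round up: a partial instance still means paying for a whole one.
        return -(-overflow // SESSIONS_PER_INSTANCE)

    print(instances_to_burst(180))  # 0  - on-prem absorbs the load
    print(instances_to_burst(460))  # 11 - burst capacity for the spike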

Reduction of on-prem server maintenance

A hybrid approach also saves businesses money when it comes to general hardware maintenance. Implementing a combination of public and private cloud, versus 100 percent private, means you’ll have less hardware to manage and update – reducing what you spend on utilities and facilities-related overheads, security and risk management, and IT staff labour.

Better data loss protection

Data backup and disaster recovery are other key areas where the hybrid cloud provides potential cost savings. There is no single point of failure because you store your organisation’s data in multiple locations. As a result, it provides a cheaper – and more reliable – alternative to completely private IT infrastructure.

Long-term savings

Hybrid cloud solutions provide businesses with long-term savings by reducing the need to purchase pre-emptive storage capacity. Instead, you can draw on public cloud offerings with consumption-based payment models and competitive pricing from multiple providers.

A step in the modern direction

A shift to hybrid can support your digital transformation strategy, fuel growth and boost innovation whilst lowering costs. It’s a valuable option when it comes to modernising your IT infrastructure.

For any IT transition to be successful, CIOs, managers and business owners need to plan carefully. Your team has to account for the best practices unique to hybrid cloud solutions – which is where we come in. Our experts can help guide your transformation strategy with our cloud design service, which takes your business goals and KPIs as its starting point. Let’s work together and figure out what’s right for your company. Find out more about our cloud services here.

The post How hybrid cloud solutions could be the answer to balancing your books appeared first on HTG.

4 steps to ensure your staff are deploying remote working solutions securely


For your digital workspace to be a success, it has to enable employees to do everything they could in the office from anywhere in the world – securely. That last part is arguably the most important. Would you rather have slower, clunkier processes, or open your company’s data to increased risk?

With the right security policies and remote working tools in place, that’s not a choice you’ll be forced to make. Security and functionality do not have to be mutually exclusive. Here are four steps you can take to ensure your staff are working from home without compromising data security.

1. Adopt a hybrid approach

Thin clients are often thought of as more secure than laptops, but if you’ve read our blog on the future of VDI, you’ll know that they’re not as well-suited to the requirements of the modern workplace. We’ve seen clients investing in laptops over thin clients as it becomes increasingly clear that remote work is a lasting trend.

To make up for the increased security risk associated with locally-hosted software, it’s a good idea to provision line-of-business (LOB) applications through VDI rather than having them processed on the laptop. A hybrid approach allows for greater visibility over the security of your business-critical apps. Crucially, it does so without hampering your employees’ ability to collaborate using locally-hosted video conferencing software like Microsoft Teams.

2. Refresh your security policies

Like many businesses, yours may not have been a remote-working organisation before 2020. A lot will have changed – especially your reliance on IT. Whether employees are taking advantage of a BYOD policy or you’ve provided them with laptops, your security policies will now have to account for the unique challenges presented by a dispersed workforce.

Revisit your existing policy to ensure that you’ve covered basics like:

  • Antivirus software requirements
  • Password best practices
  • Acceptable use
  • Network security requirements

It’s also worth going beyond the cybersecurity fundamentals to counter record-high numbers of malware and phishing attacks. Measures like two-factor authentication are no longer optional, especially if you’re taking our advice and making the most of a hybrid system. LOB applications will have their data safely stored elsewhere, but Teams conversations – which will likely be processed locally – may be at risk without the proper precautions.
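
Two-factor authentication needn't be heavyweight, either. As a minimal sketch, the open-source pyotp library can generate and verify the time-based one-time passwords (TOTP) used by most authenticator apps – this illustrates the mechanism, not a production login flow, and the account and issuer names are placeholders.

    # Minimal TOTP sketch using pyotp (pip install pyotp).
    import pyotp

    # In practice, generate the secret once per user and store it securely.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # Users enrol this URI (usually rendered as a QR code) in their app.
    print(totp.provisioning_uri(name="user@example.com",
                                issuer_name="ExampleCorp"))

    # At sign-in, check the six-digit code the user submits.
    code = totp.now()  # stand-in for the user's typed code
    print("Accepted" if totp.verify(code) else "Rejected")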

3. Update legacy platforms and processes

CSO’s Susan Bradley makes an important point about access provision:

‘Those who use geoblocking in the firewall to restrict access…will need to review and revise those policies given that your firm’s employees will be coming in from various locations.’

In our experience, it’s not just firewall policies that need updating. We’ve seen our fair share of businesses that are still running virtual desktop instances on outdated, vulnerable operating systems. If you haven’t already, now is the time to audit your cloud resources and plug any potential leaks.
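
On the geoblocking point specifically, part of that review can be as simple as comparing recent sign-in locations against the countries you expect staff to connect from. The sketch below does this over a hypothetical signins.csv export (columns: user, country) – real firewall or identity-provider logs will have their own formats.

    # Illustrative geoblocking review: sign-ins from unexpected countries.
    import csv

    EXPECTED_COUNTRIES = {"GB", "IE"}  # adjust to where your staff work

    def unexpected_signins(path):
        with open(path, newline="") as f:
            return [(row["user"], row["country"])
                    for row in csv.DictReader(f)
                    if row["country"] not in EXPECTED_COUNTRIES]

    for user, country in unexpected_signins("signins.csv"):
        print(f"Review: {user} signed in from {country}")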

4. Educate your employees

Don’t leave your security policy on the intranet to collect dust. Instead, actively engage your workforce in the security conversation. That could mean hosting virtual workshops, or, if you’re part of an enterprise-level organisation, asking team leaders to review policies with their colleagues. As threats mature and adapt to the meteoric rise of remote work, so should your team’s vigilance and security education.

Meeting in the middle

To a certain degree, your cybersecurity strengths and weaknesses are down to your employees. Without trusting them to follow best practices and use good judgement, you won’t get far as a remote organisation. You will reduce the risk of a breach several times over, however, by doing some of the thinking for them.

Don’t force your team to choose between functionality and security, and refresh policies to reflect the changes that have taken place over the last year. That way, there’s little to hinder successful, secure remote work.

The post 4 steps to ensure your staff are deploying remote working solutions securely appeared first on HTG.

It’s 2021: Is the digital workspace here to stay?


It’s no secret that 2020 was a year defined by change. Not the strategic digital transformation dreams of ambitious IT professionals, but change of a different sort. Businesses were forced to cobble together a digital workspace that would allow their employees to work remotely. As many as 60 percent of the UK workforce are still working from home in 2021.

And yes, it was the right thing to do. But the results? Duplicated EUC services, costly contracts, last-minute hardware provisioning and unsecured, lagging systems – all set up and maintained by IT teams under pressure, without the time to plan or train staff.

For many organisations, this is where they stand today.

Sound familiar?

Redefining the ‘digital workspace’

The above scenario isn’t what we’d call a thriving ‘digital workspace’. Here’s our definition:

A digital workspace gives you the mobility to do the work you do in the office, from anywhere in the world. It means having fast, secure access to everything you need, from any device.

Alternatively, Citrix uses the term ‘single pane of glass’. That is, you have a clear line of access to your suite of applications from a single touchpoint. Set up correctly, the digital workspace represents a holistic shift in how your team uses real-time collaboration tools to deliver better business results.

Of course it’s here to stay


The HTG position on the subject is, overwhelmingly: yes, digital workspaces are the future.

By our standards, any organisation looking to remain competitive should consider them a fixture of work, moving forward. But, we predict the ‘rush job’ version of 2020 will quickly be forgotten.

Instead, businesses will prioritise enlisting expert help to put reliable, frictionless virtual desktop solutions in place.

And those that don’t? Well, they won’t have the necessary agility, resilience or levels of productivity to thrive in tough times.

Don’t take our word for it…

You need a digital workspace for distributed teams to work together effectively. A remote working framework is the ‘new normal’ for millions of employers:

  • Companies like Amazon, VMware and Salesforce have all said they will continue with a flexible working model from now on.
  • Willis Towers Watson’s recent study shows the number of people working from home in 2021 will be seven times higher than three years ago.
  • The Institute of Directors has reported 74 percent of companies expect to maintain increased remote working levels, even after coronavirus.

From analysts to business leaders, the consensus is that the digital workspace isn’t going anywhere.

Make room


Right now, the ‘struggle to unplug’ is the main difficulty people are facing with remote working. In Buffer’s annual survey, 27 percent cite this factor, above others like ‘communication issues’ or ‘loneliness’ (both 16 percent). Compared to previous years, this is a massive shift in pain points.

Yes, people require easy access to documents, project management and communications tools. But, one advantage of a virtual desktop setup for your digital workspace is that all your work tools are ‘collected’ in one place. This, then, provides a clear line in the sand, so it’s easier to maintain a work-life balance.

Make flexible working truly sustainable, from today. Use virtual desktop technology to help your employees maintain a healthy separation between ‘personal’ and ‘work’ spaces.

Good boundaries make good work.

The post It’s 2021: Is the digital workspace here to stay? appeared first on HTG.
