Windows Virtual Desktop – First Thoughts – Part 2

A few weeks ago, I published the first part of this post; you can find it here.

In the first part, I wrote about the initial setup and config experience for creating and accessing Windows Virtual Desktop (WVD). Overall, I found the experience to be good, but at times, slightly basic. This is to be expected with a brand new service that is in preview, but for the second part of this post, I wanted to explore the more advanced configuration options that are currently available.

So in this post, I am going to discuss the following:

  • FSLogix profiles
  • Load balancing
  • Deploying using a custom image
  • Availability/configuration of SSO

Starting with FSLogix, I think it’s kinda cool that you are given a free license to use the service as part of WVD. The actual install was quick and easy: download the client, run it, set up two reg keys, done. However, I wasn’t overly familiar with FSLogix and initially thought it wasn’t working, despite a solid background in desktop virtualisation and a good understanding of profile redirection. The Microsoft docs for this are currently light, but clicking through to the FSLogix docs I spotted the issue: the local profile cannot exist first, or FSLogix will ignore it. A quick tidy up and my profiles were redirecting, loading quickly and generally behaving as expected. So far so good. I’d like to see how it scales with a tonne of users, but the expectation is performance similar to Citrix UPM. One thing that is perhaps a bit annoying out of the box: when you choose sign out from the web client, it simply disconnects the user from an app group. This can be fixed with some Group Policy, but I would expect sign out to mean sign out.
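
For reference, those two reg keys are all it takes to light the feature up. A minimal sketch in PowerShell, run on the session host, assuming your profile share lives at \\fileserver\profiles (a placeholder, swap in your own path):

```powershell
# FSLogix profile containers: enable the feature and point it at a share.
# Assumes the FSLogix agent is already installed on the session host.
$regPath = "HKLM:\SOFTWARE\FSLogix\Profiles"
New-Item -Path $regPath -Force | Out-Null

# Turn the profile container feature on
New-ItemProperty -Path $regPath -Name "Enabled" -Value 1 `
    -PropertyType DWord -Force | Out-Null

# Where the profile VHD(X) files will be stored (placeholder share)
New-ItemProperty -Path $regPath -Name "VHDLocations" `
    -Value @("\\fileserver\profiles") -PropertyType MultiString -Force | Out-Null
```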

A little tip: check out the reg key “FlipFlopProfileDirectoryName” for a quick way to make finding your user profiles within your file share a bit easier. There are also more advanced options like “SIDDirNamePattern”; more here.
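
If you want to try it, it’s just another value under the same key; something like this (a DWORD of 1, as the FSLogix docs describe):

```powershell
# Name profile folders <username>_<SID> rather than <SID>_<username>,
# which makes eyeballing the file share far easier
Set-ItemProperty -Path "HKLM:\SOFTWARE\FSLogix\Profiles" `
    -Name "FlipFlopProfileDirectoryName" -Value 1
```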

Next up is load balancing the session hosts. This obviously only applies to non-persistent session hosts, as the persistent relationship is 1:1. You also need to understand two concepts:

  • Breadth-first
    • This distributes sessions evenly across all session hosts, a max session limit is optional.
  • Depth-first
    • This fills up a session host first before distributing sessions, a max session limit is required.

Both options are simple to set up via PowerShell and behave exactly as outlined. Instructions here.
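
For reference, here’s roughly what each looks like with the WVD preview PowerShell module (Microsoft.RDInfra.RDPowerShell); the tenant and host pool names are placeholders:

```powershell
# Breadth-first: spread sessions evenly, the session cap is optional
Set-RdsHostPool -TenantName "MyTenant" -Name "MyHostPool" `
    -BreadthFirstLoadBalancer -MaxSessionLimit 10

# Depth-first: fill a host before moving on, the session cap is required
Set-RdsHostPool -TenantName "MyTenant" -Name "MyHostPool" `
    -DepthFirstLoadBalancer -MaxSessionLimit 10
```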

Third, I created a new host pool. To do this, I first needed a custom image, so I deployed a VM to Azure to convert later. I skipped a bit here by using the new Windows 10 image with Office 365 ProPlus preinstalled for you, very handy. However, I then realised this wasn’t the multi-session version, and even though all is good with image creation, it fails when trying to register with WVD (30 mins later…). So to save you some time, just use the Windows 10 multi-session image instead! I installed my apps, then I made the following changes:

  • FSLogix installed and enabled
  • Configured session timeout policies
  • Additional language pack and region settings

Once I had this done, I captured the custom image by following the usual docs here.
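
The capture itself boils down to generalising the VM and creating a managed image from it; a rough sketch with the Az PowerShell module (resource names are placeholders):

```powershell
# Step 1, inside the VM: generalise with sysprep, which shuts the VM down
#   C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

# Step 2, from your management machine:
$rg = "WVD-Images"
$vmName = "wvd-gold-01"

Stop-AzVM -ResourceGroupName $rg -Name $vmName -Force        # deallocate
Set-AzVM  -ResourceGroupName $rg -Name $vmName -Generalized  # flag as generalised

$vm = Get-AzVM -ResourceGroupName $rg -Name $vmName
$imageConfig = New-AzImageConfig -Location $vm.Location -SourceVirtualMachineId $vm.Id
New-AzImage -ResourceGroupName $rg -ImageName "wvd-gold-image" -Image $imageConfig
```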

Then, you can simply specify it as part of the same steps you followed previously to deploy a host pool.

Once your new host pool is deployed, you need to assign users. Don’t forget, you currently can’t assign a user to more than one App Group at any one time, so I removed one of my test users from its previous group and added it to the new one.
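
Again via the preview module, the swap is two cmdlets (all names are placeholders):

```powershell
# Pull the test user out of the old app group...
Remove-RdsAppGroupUser -TenantName "MyTenant" -HostPoolName "OldHostPool" `
    -AppGroupName "Desktop Application Group" -UserPrincipalName "testuser@contoso.com"

# ...then add them to the group in the new host pool
Add-RdsAppGroupUser -TenantName "MyTenant" -HostPoolName "NewHostPool" `
    -AppGroupName "Desktop Application Group" -UserPrincipalName "testuser@contoso.com"
```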

Once logged in, everything is as expected: profiles, custom settings and my newly added apps. For my own terrible fun, I added a special app. Yes, I’m sorry, that is Windows 95 running in the HTML5 client on WVD!

Windows95 running via HTML5 client on Windows Virtual Desktop

One tip that could save you some time is relevant to SSO. You may notice that when signing in and launching an app/desktop you are prompted for credentials twice. This is the current expected experience. In the comments on the docs, I spotted the following response from the program group:

So we’ll just have to wait and see how good/bad SSO functionality will be once released!

If you have any questions or would like to see a third part to this series, let me know!

AZ-302: Microsoft Azure Solutions Architect Certification Transition Study Guide

The first thing to note about this exam is that it is intended only for those who have previously sat and passed the 70-535: Architecting Microsoft Azure Solutions exam. So if you’ve passed that exam, read on!

Next thing to note, this exam isn’t here to stay. It’s a transition exam to earn the more up to date certification. As such, Microsoft are retiring it on June 30, 2019.

What’s great is that if you pass, you will earn the Microsoft Certified: Azure Solutions Architect Expert with just the one exam.

Here is what Microsoft have to say about this exam:

The transition exam is intended for people who have already demonstrated skills in the content domain by passing the existing exam(s) that the new role-based certification exams will be replacing. They cover the delta between the current certification and what we expect people who earn the new certification to be able to do. We don’t want to retest people on the same content where they have already demonstrated competence by passing the existing exam.

Transition exams cover net new content, content that wasn’t covered in enough depth, and content on aspects of the technology that have likely changed since someone took the exam. As a result, the transition exam is not shorter than a typical exam but more focused on the key tasks and skills that were not assessed in the existing exam or certification that is being replaced.

Candidates for this exam are Azure Solution Architects who advise stakeholders and translate business requirements into secure, scalable, and reliable solutions.

Candidates should have advanced experience and knowledge across various aspects of IT operations, including networking, virtualization, identity, security, business continuity, disaster recovery, data management, budgeting, and governance. This role requires managing how decisions in each area affect an overall solution.

Candidates must be proficient in Azure administration, Azure development, and DevOps, and have expert-level skills in at least one of those domains.

Below I’ve put together a collection of links relevant to the sections highlighted as being part of the skills measured for this exam. As always, these are only guide links, sometimes you need to explore a topic much more deeply if you are not familiar with it.

One final important note: as this is a solution architecture exam, there is a presumption that you are aware of service SLAs, performance tiers, dependencies etc. This sort of knowledge will only come with experience and practice. Again, you would have needed to know these to pass 70-535, so nothing new there!

If you spot something, or have a better link for a topic, get in touch! I will update this post regularly as I work my way towards taking this exam and appreciate any feedback.

Determine workload requirements

Determine feasibility and refine requirements

  • There are no real links that help with this section as it is so broad. This requires that prior over-arching knowledge of the platform I mentioned earlier.

Optimize consumption strategy

Design for identity and security

Design authorization

Design a business continuity strategy

Design a site recovery strategy

Design for high availability

Implement workloads and security

Configure serverless computing

Implement authentication and secure data

Implement secure data solutions

Develop for the cloud

Develop long-running tasks

Configure a message-based integration architecture

Develop for asynchronous processing

Develop for autoscaling

Implement distributed transactions

Develop advanced cloud workloads

This section is vast. A lot of practice and reading required.

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/home

https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/overview

https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-tutorial-basic-deploy?view=azure-bot-service-4.0&tabs=csharp

https://docs.microsoft.com/en-us/azure/machine-learning/service/samples-notebooks

https://docs.microsoft.com/en-us/azure/iot-fundamentals/iot-services-and-technologies

Windows Virtual Desktop – First Thoughts – Part 1

Last week, Microsoft released Windows Virtual Desktop (WVD) to the public in preview. The service was first announced back at Ignite 2018. Microsoft describe the service as follows:

Windows Virtual Desktop is a desktop and app virtualization service that runs on the cloud.

Here’s what you can do when you run Windows Virtual Desktop on Azure:

  • Set up a multi-session Windows 10 deployment that delivers a full Windows 10 experience with scalability
  • Virtualize Office 365 ProPlus and optimize it to run in multi-user virtual scenarios
  • Provide Windows 7 virtual desktops with free Extended Security Updates
  • Bring your existing Remote Desktop Services (RDS) and Windows Server desktops and apps to any computer
  • Virtualize both desktops and apps
  • Manage Windows 10, Windows Server, and Windows 7 desktops and apps with a unified management experience

One important point not mentioned there: Azure is the only public cloud where you can run Windows 10 workloads.

There are a couple of pre-requisites to deploying WVD. First up is licensing; below are the requirements for running WVD:

  • Windows 10 Enterprise multi-session or Windows 10 single-session
    • Microsoft E3, E5, A3, A5, Business or Windows E3, E5, A3, A5
  • Windows 7
    • Microsoft E3, E5, A3, A5, Business or Windows E3, E5, A3, A5
  • Windows Server 2012 R2, 2016, 2019
    • RDS Client Access License (CAL) with Software Assurance

Next, you’ll need the following infrastructure components:

  • Azure AD tenant to register the service against
  • AD Domain Services reachable by the VMs in the WVD pool, so either a domain controller in the vnet or Azure AD Domain Services (AAD DS) enabled.
  • An Azure subscription to host and pay for the above 🙂

Once the above is all ready to go, you’ll want to start your deployment. First, you need to register your AAD tenant with the WVD service. This requires Global Admin rights and your tenant ID, full details here. I found the process quick, simple and well documented.
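
Once consent is granted, creating the tenant is only a couple of lines with the preview module; a sketch, with placeholder names and GUIDs:

```powershell
# Connect to the WVD management plane
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"

# Create the WVD tenant, linking your Azure AD tenant and subscription
New-RdsTenant -Name "MyWvdTenant" `
    -AadTenantId "00000000-0000-0000-0000-000000000000" `
    -AzureSubscriptionId "00000000-0000-0000-0000-000000000000"
```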

Second, you need to create a host pool. This links the IaaS resources to your domain and your WVD service. I opted for an isolated vnet with AADDS activated to domain join the VMs, using a server pool for applications. Full details for this step here.

After a little time, my host pool was deployed and I could access the service via the web client.

A Windows 10 desktop session running via the HTML5 client right in the browser

The experience was good, but I wouldn’t call it seamless. Simple things jumped out straight away on the authentication side with the HTML5 client. After logging into the Azure AD app, I then had to log in again to the desktop; I would have expected SSO here. The same lack of SSO is present in the RDS client on Windows 10.

However, once connected, performance and latency were good. Exactly as expected in fact. Even via the HTML5 client.

Next, I wanted to test some individual apps. Namely, the powerhouse of app virtualisation, Notepad. I created a new RemoteApp Group following the instructions here; again, they were easy to follow. Notepad didn’t show up in the list of available apps, but I just entered the location where I knew it was installed and it worked.
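
Publishing by path looks something like this in the preview module (names are placeholders):

```powershell
# Publish Notepad by its file path since it wasn't in the discovered app list
New-RdsRemoteApp -TenantName "MyTenant" -HostPoolName "AppHostPool" `
    -AppGroupName "RemoteApps" -Name "Notepad" `
    -FilePath "C:\Windows\System32\notepad.exe"
```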

Again, performance was as expected. However, the issue I ran into here was that I couldn’t assign the same user to multiple groups; it was one group or the other, as I had a “desktop” group and an “app” group. Hopefully this is fixed, or a workaround is in place, for GA.

Next on my list will be to test the FSLogix option for profiles, load balancing options and creating a pool with a customised image. But so far, I am impressed with the simplicity of deployment. I will follow this post up with impressions relative to that next level of customisation required for a production environment.

One final note is that all of the customisation of the Windows Virtual Desktop service is done via PowerShell. If you’re not familiar or comfortable with this, you may struggle to get a working POC in place. My advice is to follow the published guides exactly or ask on Twitter for help!

AZ-500 Microsoft Azure Security Technologies (beta) – Study Guide

A new exam has been announced for Azure, AZ-500 Microsoft Azure Security Technologies (beta). Passing this single exam will allow you to earn a Microsoft Certified: Azure Security Engineer Associate certification. So, interested and wondering if you should take this exam? Here is what Microsoft has to say:

Candidates for this exam are Microsoft Azure security engineers who implement security controls, maintain the security posture, manage identity and access, and protect data, applications, and networks. Candidates identify and remediate vulnerabilities by using a variety of security tools, implement threat protection, and respond to security incident escalations. As a Microsoft Azure security engineer, candidates often serve as part of a larger team dedicated to cloud-based management and security and may also secure hybrid environments as part of an end-to-end infrastructure.

Candidates for this exam should have strong skills in scripting and automation, a deep understanding of networking, virtualization, and cloud N-tier architecture, and a strong familiarity with cloud capabilities, Microsoft Azure products and services, and other Microsoft products and services.

Below I’ve put together a collection of links relevant to the sections highlighted as being part of the skills measured for this exam. As always, these are only guide links, sometimes you need to explore a topic much more deeply if you are not familiar with it.

If you spot something, or have a better link for a topic, get in touch! I will update this post regularly as I work my way towards taking this exam and appreciate any feedback.

Don’t forget, there are beta codes at the exam link above for the first 300 people who register to take the exam!

A good place to start is the Azure Security Documentation page. This has several of the links from below and includes most of the key concepts and services covered in this exam as well as several best practice approaches you should consider.

Manage Identity and Access

App registration

MFA

Groups

Users

AD Connect

CA

AADIP

PIM

Tenant

Implement Platform Protection

VNET

NSG & ASG

Firewall

Remote Access Management

Baseline Security

Resource Firewall

Endpoint Security VM

VM Security

Harden VM

Container Networks

Container Isolation & Security

AKS Security

Container Registry Security & Authentication

Container Instance Security – Working on it!

Resource Locks

Resource Group Security

Azure Policy

Custom RBAC

Manage Security Operations

Azure Monitor

Log Analytics

Diagnostic Logs

Vulnerability Scanning

Security Center Policies & JIT

Security Alerts

Secure Data and Applications

Data Security Policies

Data Infrastructure

Data at Rest

Application Delivery

Application Security

Key Vault

What are Azure Blueprints?

I’m sure most of you have seen the recent announcements about Azure Blueprints, as well as multiple Microsoft posts about the service and what it can do to improve your environments. However, what if you’re not sure what they are and whether they are usable in your environment? Hopefully, that’s where this post comes in. I’m going to explain exactly what they are and why you might use an Azure Blueprint. This should allow you to make a decision on whether you need them or not.

Following on from that, I think that’s the first basic point about Azure Blueprints. Similar to several other new services in Azure, the functionality is great and could help progress a lot of environments, but that doesn’t mean they help, or are even useful, in a lot of other environments. Never feel guilted into using a new service because there is a “buzz” about it at launch. Assess the service, understand it, assess its usability versus your requirements, then TEST TEST TEST! Don’t forget, Blueprints are still in preview, so no production workloads yet.

So, what is an Azure Blueprint? To explain it plainly, it is a collection of governance and resource services, defined in such a way as to allow you to repeat deployments to a set standard.

Azure Blueprints overview

The collection of governance and resource services within a Blueprint are referred to as Artifacts. Within each Blueprint, you can make use of any combination of the following:

  • Resource Groups
    • Hierarchy options: Subscription
    • Create a new resource group for use by other artifacts within the blueprint. This enables you to organize resources and provides a scope for other artifacts.
  • Azure Resource Manager template
    • Hierarchy options: Subscription, Resource Group
    • Templates are used to create resources. This could range from individual deployments to entire environments.
  • Policy Assignment
    • Hierarchy options: Subscription, Resource Group
    • Allows assignment of a policy or initiative to the subscription and/or resource group the blueprint is assigned to. Any parameters are assigned at creation of the blueprint or during blueprint assignment.
  • Role Assignment
    • Hierarchy options: Subscription, Resource Group
    • Role assignments can be defined for the entire subscription or nested to a specific resource group included in the blueprint.

As you can see above, artifacts can be deployed/assigned at different levels. However, the Blueprint itself must be located in either a subscription you have at least Contributor access to, or a Management Group. If located within a Management Group, the Blueprint is available to any of the child subscriptions of that group.

When defining your Blueprint, several artifact options allow you to choose parameters that are passed from Blueprint to artifact. For example, when defining a Resource Group, you can choose to specify the name and location. You don’t have to specify these parameters within the Blueprint, you can also allow these to be passed when the Blueprint is assigned.

Once you have your Blueprint defined, your next step is to publish it. When publishing, you must indicate a version. I found it odd that this isn’t restricted in some way; you can literally name one version “1.0” and the next “B”, so I’d recommend adding notes with each version and trying to stick to a pattern. However, it makes sense if you’re going to use different versions for different assignments (I’ll explain that next), so choose relative to your requirements.

When your Blueprint is published, you can then assign it. A nice feature is the ability to assign different versions of a Blueprint to different subscriptions. For example, you could have two versions of a Blueprint with different artifact definitions (think test version and production version) assigned to different subscriptions. They can be independently updated too.

At assignment, there are some options to choose as well as the subscription. They are Resource Locking and Managed Identity.

For Managed Identity, it’s recommended you simply choose System Assigned as the Blueprints service will then manage the security lifecycle. More on Managed Identities to help you understand and choose what’s right for your environment.

The Resource Locking feature really allows you to maintain control of your governed deployment. If you’re not familiar with Resource Locks, check out this post. The familiar statuses apply to resources deployed by a Blueprint assignment:

  • Not locked
  • Read Only
  • Do not delete

However, once a status is applied, not even a user/object with the Owner role can modify it. This is due to how these statuses are applied: an RBAC deny assignment is applied to artifact resources during assignment of a blueprint if the assignment selected the Read Only or Do Not Delete option. The deny assignment is added by the managed identity of the blueprint assignment and can only be removed from the artifact resources by the same managed identity.

So, how do you edit or delete your resources? Update your Blueprint to “Not locked” and push the update to the relevant assignment. This method prevents unwanted and unexpected changes occurring outside of the scope of the Blueprint.
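
If you prefer to script the assignment, locks included, the Az.Blueprint module covers it. A hedged sketch, assuming the module is installed and using placeholder names and GUIDs:

```powershell
# Grab the latest published version of a blueprint definition
$bp = Get-AzBlueprint -SubscriptionId "00000000-0000-0000-0000-000000000000" `
    -Name "my-blueprint" -LatestPublished

# Assign it with resources locked read-only; a system-assigned managed
# identity is used, so Blueprints manages the security lifecycle
New-AzBlueprintAssignment -Name "my-assignment" -Blueprint $bp `
    -SubscriptionId "00000000-0000-0000-0000-000000000000" `
    -Location "northeurope" -Lock AllResourcesReadOnly
```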

I think there is quite a learning curve for Blueprints, as they combine several other services you must be familiar with, so for me, that’s where you have to start. Understand each of the artifacts fully so you can see how they may work well if defined in your environment.

Recently, sample Blueprints have been released to allow you to deploy governed, pre-designed environments with a couple of clicks. One sample is the ISO27001 Shared Services blueprint, which I think is good for understanding the service, even if it might be slightly complex for your first test.

Again, Blueprints are still in Preview. So be as cautious as always with your production environments. I look forward to seeing what changes come with GA, which shouldn’t be much longer considering Blueprints were announced back at Ignite. I will update this post relevant to GA when it happens.

As always, if you have questions, leave a comment, mail me, or ping me on Twitter!

Azure Backup – New! Instant Restore

Azure Backup is a service offered within the platform that allows you to back up and restore your IaaS data, both from within Azure and on-premises. The out-of-the-box integration within Azure and the attractive cost point make it a great option for many deployments. Simplicity is also key; for example, you can enable backup right from within the VM blade or, for several VMs at once, from the recovery services vault itself.
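
To illustrate that simplicity, enabling protection for an existing VM is only a few lines with the Az module (vault, policy and VM names are placeholders):

```powershell
# Point the session at the recovery services vault
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-backup" -Name "my-vault"
Set-AzRecoveryServicesVaultContext -Vault $vault

# Enable protection for a VM using the vault's default policy
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
Enable-AzRecoveryServicesBackupProtection -Policy $policy `
    -Name "my-vm" -ResourceGroupName "rg-workload"
```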

Another benefit is the speed and efficiency with which you can test and confirm your required restore processes. This has been recently improved on again by the introduction of Instant Restore. This introduces the ability to use snapshots taken as part of a backup job, which are then available for recovery without waiting for any data transfer to the vault. This greatly reduces the wait time to trigger a restore.

While Instant Restore is the quickest option, it does incur additional cost for the storage required to hold the snapshots. Also, it is currently only available in four regions: West Central US, South India, Australia East, and North Europe.

There are other restore alternatives included in the cost of Azure Backup. They are as follows:

  • Create a new VM
    • This quickly creates a basic VM from a chosen restore point. Configuration options are limited.
  • Restore disk
    • This creates new disks from a restore point. These disks can then be attached manually or via script. You pay for the disks while they are active but this option offers great flexibility.
  • Replace existing
    • This is more automated. Azure Backup takes a snapshot of the currently active VM, powers it off, then detaches current disks. While this is happening, it creates new disks from a restore point. Once restore is complete, it attaches them to the VM and powers it back on. There are some limitations here due to the automation used, it’s not supported for unmanaged disks, generalized VMs, or for VMs created using custom images.

There is also a fourth option that combines some automation for file level recovery. This is well detailed here – Azure Backup File Recovery.

Based on the previous options available, I think the new Instant Restore capability is a welcome addition. It brings Azure Backup more in line with traditional snapshot-as-a-quick-backup solutions people regularly leverage on-premises. If you are running significant IaaS workloads in Azure that require regular changes, I would strongly consider upgrading your Recovery Services Vault to support Instant Restore.

Azure Migrate – Where Should I Start?

If you’re thinking about making a move to Azure, it’s important to first understand how to approach it. With the correct approach and sufficient planning, a migration can be straightforward, efficient and free of surprises.

Therefore, the place to start is the Microsoft Cloud Operating Model. This is a detailed white paper that allows you to create a strategy for migration. Covering cloud readiness, people strategy and technical analysis, it’s a comprehensive document. Once you have an understanding of your business strategy (read: “Why am I moving to Azure?”) and your people strategy (read: “Who is moving us to Azure?”), you can progress to the technical phase.

The vast majority of initial moves to Azure are re-host migrations, or “lift and shift”. As these are most common, I will reference this scenario as an example. There are four stages: Assess, Migrate, Optimise, and Secure & Manage.

The first step of the technical phase is to Assess. This means understanding what it is that you are moving and what the best process will be. This includes everything from involving the business stakeholders, to cost calculation, to application evaluation. This analysis should give you an output that not only details where the application could go but, more importantly, where it should go.

Microsoft offer several tools to help with some of this. First up is the Azure TCO calculator. This allows you to estimate the cost savings you could make by migrating to Azure. Next is Azure Migrate, a FREE assessment tool that allows you to discover, document and assess your workloads and their dependencies. You can then create cost estimates for running them in Azure.

Azure Migrate Dependencies Example

Now that you have your environment discovered, grouped and sized correctly, you can begin to migrate your workloads. Microsoft provide a service for this also: Azure Site Recovery (ASR). This service allows you to replicate your servers from your on-premises environment. For most services it is application aware, meaning it can replicate workloads like SQL Server without any data loss. Before you implement ASR, it is important to use your data from Azure Migrate to capacity plan for your replication requirements. Taking this step allows for greater speed and efficiency during replication and migration of workloads.

Microsoft also provide a script repository for migrating large numbers of VMs at once. These can be from VMware, AWS, GCP or physical servers. There are some limitations, most restrictive is lack of support for Managed Disks, but you can always flip these manually later. The scripts and guide can be found here.

How long it takes to migrate your workloads is determined by your business requirements. However, once complete, it is vital that you revisit these workloads for optimisation. Azure Advisor can provide recommendations but the key areas to focus on are:

  • VM sizing – Ensure the VM is running on an appropriate size to gain maximum cost efficiency
  • Storage tier – Ensure the disks associated with the VM are using the correct tier to balance performance requirements against cost.
  • Reserved Instances – Once the VM is sized correctly, purchase Reserved Instances to achieve the maximum discount to run your workload for one to three years.

Now that your workloads are migrated and optimised, your final step is to ensure they’re secure and managed correctly. The best place to start with this process is Azure Security Center. This provides unified security management and allows you to take action to mitigate risk and implement actionable recommendations. This will include common requirements like disk encryption and antivirus. More advanced and platform-specific features like Just In Time Access are also available.

So to recap, there is one prerequisite, then four main steps:

  1. Understand and create your Cloud Operating Model
  2. Assess your current environment
  3. Migrate it!
  4. Optimise your utilisation
  5. Secure and Manage it

If all of the above is completed and optimisation and security are reviewed regularly you can be confident in the quality of your environment state. If you have any questions, feel free to tweet me @wedoAzure or leave a comment!

Azure Monitor – Show me all the data!

A significant but possibly less exciting update to Azure over the past few months has been the revamp and update of Azure Monitor (AzMon). In my opinion, this is a resource that should be included in your design for all deployments. It is now a comprehensive solution for collecting, analysing, and acting on your telemetry from Azure as well as on-premises environments.

The below graphic gives an overview of AzMon and its three key areas. Left to right, you start with the six possible ingestion sources for telemetry data. As you can see, everything from application to infrastructure is supported. The two key forms of data, Metrics and Logs, come next. The final area highlights the processing functions available.

Azure Monitor overview

AzMon collects the following data for processing:

  • Application monitoring data: Data about the performance and functionality of the code you have written, regardless of its platform.
  • Guest OS monitoring data: Data about the operating system on which your application is running. This could be running in Azure, another cloud, or on-premises.
  • Azure resource monitoring data: Data about the operation of an Azure resource.
  • Azure subscription monitoring data: Data about the operation and management of an Azure subscription, as well as data about the health and operation of Azure itself.
  • Azure tenant monitoring data: Data about the operation of tenant-level Azure services, such as Azure Active Directory.

As mentioned previously, the data slots neatly into two categories:

  1. Metrics – You can use Metrics Explorer to view resource performance, among other things, over periods of time. These can be presented immediately within the portal, or additional dashboards can be created for visualisation.
  2. Logs – Using Log Analytics helps to quickly retrieve, consolidate, and analyse the collected log data. This can then be re-used in visualisations as well as alerting.
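
To make the Metrics side concrete, here’s a sketch of pulling CPU figures for a VM with the Az.Monitor cmdlets (the resource ID is a placeholder):

```powershell
# Resource ID of the VM to query (placeholder values throughout)
$id = "/subscriptions/00000000-0000-0000-0000-000000000000" +
      "/resourceGroups/rg-workload/providers/Microsoft.Compute/virtualMachines/my-vm"

# Average CPU over the last day, in one-hour buckets
Get-AzMetric -ResourceId $id -MetricName "Percentage CPU" `
    -StartTime (Get-Date).AddDays(-1) -EndTime (Get-Date) `
    -TimeGrain (New-TimeSpan -Hours 1) -AggregationType Average
```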

The Docs site for AzMon is fantastic. It is filled with guidance on setup as well as tutorials relating to common scenarios. Rather than regurgitate it all, below are my personal picks:

Visualise your log data

Respond to events using alerts

Application health alerts

Also hugely important is the level of integration with other services that AzMon offers, take a look here for the currently supported list.

Azure – Protect My App

If you’ve taken a path to adopt public cloud, and part of that adoption is a public-facing application, you need to review how you are protecting it within Azure. When your application was on-premises, there was most likely only a single, well-managed solution for granting external access. In Azure, there are several different solutions available, and each carries its own set of functionality and risk.

Rather than try to define that entire list, let us look at how best to protect your application, regardless of how you have deployed it. With Azure, these are my three preferred options:

  1. Azure AD Application Proxy
  2. Azure Application Gateway
  3. 3rd Party Network Virtual Appliance

These arguably run from least to most secure. An important tip is to treat each application as unique, because it is. There is no single best solution for all of your applications, and as Azure operates a shared responsibility model for security, it is up to you to protect your data!

Awkward stuff out of the way, let’s look at Application Proxy; we’ve already had a post on setting this up here. The key point with this service is that you do not have your site exposed externally at a network level. Of course, the application itself will be, and therefore possibly your server and data, but there is not an open endpoint accepting traffic.

Next is Application Gateway. This is actually a load-balancing solution but you can enable the Web Application Firewall tier easily. This provides protection of your applications from common exploits and vulnerabilities. However, this protection is based on rules from the OWASP core rule set and while it is configurable, this only means you can disable certain rules, not add custom ones. Application Gateway can seamlessly integrate into your environment whether you are running PaaS or IaaS solutions and is economical from a cost perspective. However, that point regarding customisation can often be the deciding factor in choosing our third option instead.
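
Enabling WAF on an existing gateway is only a few lines of PowerShell; a sketch assuming the gateway is already on a WAF-capable SKU (names are placeholders):

```powershell
# Fetch the existing gateway
$gw = Get-AzApplicationGateway -ResourceGroupName "rg-web" -Name "my-appgw"

# Turn on WAF with the OWASP core rule set in prevention mode
$gw = Set-AzApplicationGatewayWebApplicationFirewallConfiguration `
    -ApplicationGateway $gw -Enabled $true -FirewallMode "Prevention" `
    -RuleSetType "OWASP" -RuleSetVersion "3.0"

# Commit the change to the running gateway
Set-AzApplicationGateway -ApplicationGateway $gw
```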

Finally, a 3rd party appliance. Azure offers solutions from the majority of the major providers in this space, easily deployable from the Azure Marketplace within minutes. Integration options are good but require some work (see the post on routing here), and cost can be a factor in meeting required availability levels. But if you need maximum protection and customisation, this is your best option.

Overall, I think there is definitely a solution in Azure that will meet your requirements. Take the time to understand your application, consult with your SMEs and you won’t go wrong!


Azure Compute Updates at Ignite

If you thought there were a lot of networking updates at Microsoft Ignite, you won’t believe how many there were when it comes to Azure Compute. Here I will try to round up those I am most excited about. Some of the features announced have been on a wish list of mine for quite a while… I’m looking at you, Managed Disks!

First up, several new VM sizes have been announced. The ND and NV series have been updated in preview. These series offer powerful GPU capabilities and are now running cutting-edge tech from NVIDIA.

HPC can often face several blockers on-premises but is easily workable in Azure. Building on this, Microsoft have added two new ranges to the H series offering: the HB and HC series will be in preview before the end of the year. These allow for staggering amounts of compute power and bandwidth.

Storage is sometimes overlooked when considering VM performance; that is something Microsoft are attempting to correct with the announcement of Ultra SSD Managed Disks in preview. These disks will offer sub-millisecond latency and can hit up to 160,000 IOPS on a single disk. There were no typos there: the fastest disks available in any cloud.

Standard SSDs and larger sizes across the board were also announced. This allows greater flexibility in performance and cost management when designing and deploying solutions.

As mentioned earlier, my wish list item, Managed Disks can now be moved between resource groups and subscriptions. This finally allows better management and flexibility with deployments. This update allows you to also move managed images and snapshots. We had access to the private preview of this functionality and it works exactly as expected.
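
The move itself is a one-liner per resource once you have its ID; a sketch with placeholder names:

```powershell
# Move a managed disk to a different resource group; moving across
# subscriptions works the same way with -DestinationSubscriptionId
$disk = Get-AzDisk -ResourceGroupName "rg-old" -DiskName "data-disk-01"
Move-AzResource -DestinationResourceGroupName "rg-new" -ResourceId $disk.Id
```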

Not directly Compute, but important in relation to it, is the announcement of Windows Virtual Desktop. Azure is already the only cloud where you can run Windows 10 workloads, and this service is going to improve the deployment and management of them. Essentially, Azure will run the RDS Gateway and Broker service for you. You will have full control of, and responsibility for, the infrastructure this connects to and which applications and desktops are presented. We’ve chatted to the Product Team here at Ignite and they are excited for people to get their hands on the preview and really test it out. My favourite piece of functionality is that the service will be agent-less when using Windows 10 to connect, which should make deployment and adoption as painless as possible for admins!

Finally, to encourage older workload migration, Microsoft announced that if you migrate Windows Server or SQL Server 2008/R2 to Azure, you will get three years of free extended security updates on those systems. This could save you some money when Windows Server and SQL Server 2008/R2 reach end of support (EOS).

So many announcements, so little time. Expect more detailed posts on most, if not all, of the above over the coming weeks and months.