Azure Migrate – Where Should I Start?

If you’re thinking about making a move to Azure, it’s important to first understand how to approach it. With the correct approach and sufficient planning, a migration can be straightforward, efficient and free of surprises.

Therefore, the place to start is the Microsoft Cloud Operating Model. This is a detailed white paper that allows you to create a strategy for migration. Covering cloud readiness, people strategy and technical analysis, it’s a comprehensive document. Once you understand your business strategy (“Why am I moving to Azure?”) and your people strategy (“Who is moving us to Azure?”), you can progress to the technical phase.

Most initial moves to Azure are re-host migrations, or “lift and shift”, so I will use this scenario as the example. There are four stages:

The first step of the technical phase is to Assess. This means understanding what it is that you are moving and what the best process will be. This includes everything from involving the business stakeholders, to cost calculation, to application evaluation. This analysis should give you an output that details not only where the application could go but, more importantly, where it should go.

Microsoft offer several tools to help with some of this. First up is the Azure Total Cost of Ownership (TCO) Calculator, which allows you to estimate the cost savings you could make by migrating to Azure. Next is Azure Migrate, a free assessment tool that allows you to discover, document and assess your workloads and their dependencies. You can then create cost estimates for running them in Azure.
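
To make the idea of a cost estimate concrete, here is a minimal sketch in Python comparing an assumed on-premises monthly run cost against an assumed Azure estimate for the same workloads. The workload names and all figures are purely hypothetical placeholders, not output from the TCO Calculator or Azure Migrate.

```python
# Hypothetical on-premises vs Azure monthly cost comparison.
# All figures are illustrative placeholders, not real TCO Calculator output.
workloads = {
    # name: (assumed on-prem monthly cost, assumed Azure monthly estimate)
    "web-frontend": (420.0, 310.0),
    "sql-backend": (980.0, 760.0),
    "file-server": (150.0, 95.0),
}

total_on_prem = sum(on_prem for on_prem, _ in workloads.values())
total_azure = sum(azure for _, azure in workloads.values())
saving = total_on_prem - total_azure

print(f"On-premises total: {total_on_prem:.2f}/month")
print(f"Azure estimate:    {total_azure:.2f}/month")
print(f"Estimated saving:  {saving:.2f}/month ({saving / total_on_prem:.0%})")
```

The real tools do far more than this (licensing, storage, networking, people costs), but the output you are looking for is essentially this kind of side-by-side view per workload.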

Azure Migrate Dependencies Example

Now that you have your environment discovered, grouped and sized correctly, you can begin to migrate your workloads. Microsoft provide a service for this too: Azure Site Recovery (ASR). This service allows you to replicate your servers from your on-premises environment. For most services it is application aware, meaning it can replicate services like SQL Server without any data loss. Before you implement ASR, it is important to use your data from Azure Migrate to capacity plan for your replication requirements. Taking this step allows for greater speed and efficiency during replication and migration of workloads.
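
As a very rough illustration of that capacity-planning step, the sketch below estimates how long an initial replication might take for a given amount of data and available bandwidth. The numbers are hypothetical and the maths ignores compression, throttling and ongoing change rates, which the real Azure Migrate and ASR planning tooling accounts for.

```python
# Very rough initial-replication time estimate (hypothetical inputs).
data_to_replicate_gb = 2_000      # total data on the servers being replicated
usable_bandwidth_mbps = 200       # bandwidth you can dedicate to ASR traffic

data_bits = data_to_replicate_gb * 8 * 1024**3          # GB -> bits
seconds = data_bits / (usable_bandwidth_mbps * 10**6)   # Mbps -> bits/second
hours = seconds / 3600

print(f"Initial replication of {data_to_replicate_gb} GB over "
      f"{usable_bandwidth_mbps} Mbps takes roughly {hours:.1f} hours")
```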

Microsoft also provide a script repository for migrating large numbers of VMs at once. These can be from VMware, AWS, GCP or physical servers. There are some limitations, the most restrictive being the lack of support for Managed Disks, but you can always convert these manually later. The scripts and guide can be found here.

How long it takes to migrate your workloads is determined by your business requirements. However, once complete, it is vital that you revisit these workloads for optimisation. Azure Advisor can provide recommendations, but the key areas to focus on are:

  • VM sizing – Ensure the VM is running at an appropriate size to gain maximum cost efficiency (a toy sizing check is sketched after this list).
  • Storage tier – Ensure the disks associated with the VM are using the correct tier to balance performance requirements against cost.
  • Reserved Instances – Once the VM is sized correctly, purchase Reserved Instances for a one- or three-year term to achieve the maximum discount on running your workload.
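
As a toy illustration of the VM sizing check above, the snippet below flags VMs whose average CPU utilisation sits under an arbitrary threshold as downsize candidates. The VM names, utilisation figures and the 20% threshold are all made up; in practice the data would come from Azure Monitor metrics or Azure Advisor rather than hard-coded values.

```python
# Toy right-sizing check: flag under-utilised VMs as downsize candidates.
# The utilisation figures and the 20% threshold are illustrative only;
# in practice the data would come from Azure Monitor or Azure Advisor.
avg_cpu_utilisation = {
    "app-vm-01": 12.5,   # percent, averaged over the review period
    "app-vm-02": 64.0,
    "db-vm-01": 8.3,
}

DOWNSIZE_THRESHOLD = 20.0  # percent

for vm, cpu in avg_cpu_utilisation.items():
    if cpu < DOWNSIZE_THRESHOLD:
        print(f"{vm}: average CPU {cpu}% - consider a smaller size")
    else:
        print(f"{vm}: average CPU {cpu}% - size looks appropriate")
```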

Now that your workloads are migrated and optimised, your final step is to ensure they’re secure and managed correctly. The best place to start with this process is Azure Security Center. This provides unified security management and gives you actionable recommendations to mitigate risk. These will include common requirements like disk encryption and antivirus. More advanced and platform-specific features like Just-in-Time VM access are also available.

So to recap, there is one prerequisite and then four main steps:

  1. Understand and create your Cloud Operating Model
  2. Assess your current environment
  3. Migrate it!
  4. Optimise your utilisation
  5. Secure and Manage it

If all of the above is completed, and optimisation and security are reviewed regularly, you can be confident in the state of your environment. If you have any questions, feel free to tweet me @wedoAzure or leave a comment!

Azure Monitor – Show me all the data!

A significant but possibly less exciting update to Azure over the past few months has been the revamp and update of Azure Monitor (AzMon). In my opinion, this is a resource that should be included in your design for all deployments. It is now a comprehensive solution for collecting, analysing, and acting on telemetry from Azure as well as on-premises environments.

The below graphic gives an overview of AzMon and its three key areas. Left-to-right, you start with the six possible ingestion sources for telemetry data. As you can see, everything from application to infrastructure is supported. Next are the two key forms of data used: Metrics and Logs. The final area highlights the processing functions available.

[Image: Azure Monitor overview]

AzMon collects the following data for processing:

  • Application monitoring data: Data about the performance and functionality of the code you have written, regardless of its platform.
  • Guest OS monitoring data: Data about the operating system on which your application is running. This could be running in Azure, another cloud, or on-premises.
  • Azure resource monitoring data: Data about the operation of an Azure resource.
  • Azure subscription monitoring data: Data about the operation and management of an Azure subscription, as well as data about the health and operation of Azure itself.
  • Azure tenant monitoring data: Data about the operation of tenant-level Azure services, such as Azure Active Directory.

As mentioned previously, the data slots neatly into two categories:

  1. Metrics – You can use Metrics Explorer to view resource performance, among other things, over periods of time. These can be presented immediately within the portal, or additional dashboards can be created to visualise them.
  2. Logs – Using Log Analytics helps to quickly retrieve, consolidate, and analyse the collected log data. This can then be re-used in visualisations as well as alerting (a minimal query sketch follows below).
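
As an illustration of pulling log data programmatically, here is a minimal sketch assuming the azure-identity and azure-monitor-query Python packages; the workspace ID is a placeholder and the KQL query is just an example, so check the current SDK documentation for the exact client behaviour before relying on it.

```python
# Minimal Log Analytics query sketch. Assumes the azure-identity and
# azure-monitor-query packages are installed and you can access the workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<your-log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Example KQL: count heartbeats per computer over the last 24 hours.
query = "Heartbeat | summarize beats = count() by Computer"

response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(hours=24))

for table in response.tables:
    for row in table.rows:
        print(row)
```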

The Docs site for AzMon is fantastic. It is filled with guidance on setup as well as tutorials relating to common scenarios. Rather than regurgitate them, below are my personal picks:

Visualise your log data

Respond to events using alerts

Application health alerts

Also hugely important is the level of integration with other services that AzMon offers; take a look here for the currently supported list.

Azure – Protect My App

If you’ve taken a path to adopt public cloud and part of that adoption is a public-facing application, you need to review how you are protecting it within Azure. When your application was on-premises, there was most likely only a single, well-managed solution for granting external access. In Azure, there are several solutions available, and each carries its own set of functionality and risk.

Rather than try to define that entire list, let us look at how best to protect your application, regardless of how you have deployed it. With Azure, these are my three preferred options:

  1. Azure AD Application Proxy
  2. Azure Application Gateway
  3. 3rd Party Network Virtual Appliance

These arguably run from least to most secure. An important tip is to treat each application as unique, because it is. There is not a single best solution for all of your applications, and as Azure follows a shared responsibility model for security, it is up to you to protect your data!

Awkward stuff out of the way, let’s look at Application Proxy; we’ve already had a post on setting this up here. The key point with this service is that your site is not exposed externally at a network level. Of course, the application itself will be, and therefore possibly your server and data, but there is not an open endpoint accepting traffic.

Next is Application Gateway. This is actually a load-balancing solution, but you can easily enable the Web Application Firewall tier. This protects your applications from common exploits and vulnerabilities. However, this protection is based on rules from the OWASP core rule set, and while it is configurable, that only means you can disable certain rules, not add custom ones. Application Gateway can integrate seamlessly into your environment whether you are running PaaS or IaaS solutions and is economical. However, that point regarding customisation can often be the deciding factor in choosing our third option instead.

Finally, a 3rd party appliance. Azure offers solutions from the majority of the major providers in this space, easily deployable from the Azure Marketplace within minutes. Integration options are good but require some work (see the post on routing here), and cost can be a factor in meeting required availability levels. But if you need maximum protection and customisation, this is your best option.

Overall, I think there is definitely a solution in Azure that will meet your requirements. Take the time to understand your application, consult with your SMEs and you won’t go wrong!

Azure Compute Updates at Ignite

If you thought there were a lot of networking updates at Microsoft Ignite, you won’t believe how many there were when it comes to Azure Compute. Here I will try to round up those I am most excited about. Some of the features announced have been on a wish list of mine for quite a while… I’m looking at you, Managed Disks!

First up, several new VM sizes have been announced. The ND and NV series have been updated in preview. These series offer powerful GPU capabilities and are now running cutting-edge tech from NVIDIA.

HPC can often face several blockers on-premises but is easily workable in Azure. Building on this, Microsoft have added two new ranges to the H-series offering: the HB and HC series will be in preview before the end of the year. These allow for staggering amounts of compute power and bandwidth.

Storage is sometimes overlooked when considering VM performance; that is something Microsoft are attempting to correct with the announcement of Ultra SSD Managed Disks in preview. These disks will offer sub-millisecond latency and can hit up to 160,000 IOPS on a single disk. There were no typos there: these are the fastest disks available in any cloud.

Standard SSDs and larger sizes across the board were also announced. This allows greater flexibility in performance and cost management when designing and deploying solutions.

As mentioned earlier, my wish-list item: Managed Disks can now be moved between resource groups and subscriptions. This finally allows better management and flexibility with deployments. This update also allows you to move managed images and snapshots. We had access to the private preview of this functionality and it works exactly as expected.

Not directly Compute, but important in relation to it, is the announcement of Windows Virtual Desktop. Azure is already the only cloud where you can run Windows 10 workloads, and this service is going to improve their deployment and management. Essentially, Azure will run the RDS Gateway and Broker service for you. You will have full control of, and responsibility for, the infrastructure this will connect to and which applications and desktops are presented. We’ve chatted to the Product Team here at Ignite and they are excited for people to get their hands on the preview and really test it out. My favourite piece of functionality is that the service will be agent-less when using Windows 10 to connect, which should make deployment and adoption as painless as possible for admins!

Finally, to encourage older workload migration, Microsoft announced that if you migrate Windows Server or SQL Server 2008/R2 to Azure, you will get three years of free extended security updates on those systems. This could save you some money when Windows Server and SQL Server 2008/R2 reach end of support (EOS).

So many announcements, so little time. Expect more detailed posts on most, if not all, of the above over the coming weeks and months.

Azure Networking Updates at Ignite

To say there have been a lot of Azure announcements at Ignite is an understatement. Several important services have hit GA and several exciting new services have been announced. Rather than re-list them all (I’ll link through at the end), I am going to highlight those I am most excited about, and think will be relevant to most clients.

Below is a graphical overview of the new approach to Azure networking, which contains four pillars: Connect, Protect, Deliver and Monitor.

[Image: Azure networking update overview]

Starting with Connect, two ExpressRoute-related announcements immediately grabbed my attention. Microsoft now offer 100 Gbps connectivity via ExpressRoute Direct, which is the fastest cloud connectivity available.

Helping customers with geo-distributed offices, ExpressRoute Global Reach was announced, allowing customers to connect offices using their existing circuits. In addition, Azure Virtual WAN is now GA. With that move, Microsoft also added new preview features, including P2S VPN and ExpressRoute connectivity, shown below:

[Image: Virtual WAN preview features]

The final announcement in Connect that really will make a difference to clients is Public IP Prefix. This means that you can now request a range of static IP addresses for your resources that you will “own”. This will make white-listing and administration of your public presence a lot simpler.

Moving to the Protect pillar, Azure Firewall has now moved to GA. This is a great service for those looking to have control over their breakout point in Azure without deploying and maintaining an NVA.

Rounding out Protect are Service Endpoint Policies. These allow more granular control over access to Azure resources over your VNet service endpoints. See the previous post on endpoints for more.

From the Deliver pillar, the announcement of Azure Front Door in preview is the big one. AFD is a global entry point for your applications that is scalable and secure. The ability to scale and the level of integration with existing services will make this widely used by customers with a large public-facing presence.

There are also updates to Application Gateway which improve performance, introduce redundancy and increase the level of integration with other services.

From the final pillar, Monitor, the main announcement is Virtual Network TAP in preview. This allows for continuous mirroring of VM traffic to a collector, without any agents. This is the first such service in a public cloud and will allow for greater analytics and granularity of environment traffic.

For the entire round-up from Microsoft, head here.

Azure Certification Updates and Announcing Microsoft Learn

We’re lucky enough to be at Microsoft Ignite in Orlando this week. Trying to wrap all of the announcements into immediate reactions is almost impossible, so I’m going to take some time to give each announcement the detail it deserves. With that said, I think the best place to start is with learning and certification.

At Inspire, Microsoft announced new learning paths for Azure; see here for the previous post. Yesterday, they confirmed the new certification options. Microsoft have shifted their certification focus to be role based. This was first seen with the Azure Administrator certification and is continued with the Developer and Architect tracks. Azure is the first technology platform Microsoft are making this change for; Microsoft 365 will follow later this year.

One interesting point to take from the announcement is that there will no longer be an MCSA or MCSE qualification; there will be individual awards for each path. This is best explained in the diagram below:

Azure Apps and Infrastructure certs

So depending on your specific, or hoped-for future, job role, you have some options. As you can see in the bottom right-hand corner, there are Transition exams available if you have previously sat the relevant Azure exams. I’ve already sat the AZ-102 beta exam; it was very good, everything you would expect from an Administrator perspective. I am going to attempt to sit the AZ-302 exam here at Ignite as well, so watch social for thoughts on that if there are no NDAs.

The second and more impressive announcement is the introduction of Microsoft Learn. This is an entirely new platform that allows you to learn interactively right in the browser. Want to know the best thing? It’s all FREE.

If you’re trying to learn more about Azure, or you are just starting, there is an absolute wealth of learning available. The paths are accurate and beneficial; they allow you to learn at your own pace, and if you’re into achievements, they offer plenty! See below for an example of some of the paths offered:

[Image: Microsoft Learn Azure learning paths]

Part of the brilliance of the platform is direct Cloud Shell integration, allowing you to actually deploy and test what you are learning about, again right in the browser. Genuinely, I am loving this new Learn platform. I spent some time yesterday speaking to the product team and trying out different paths myself, and the service is excellent. What are you waiting for? Get over there and start learning now!

There are discount codes for the beta exams: the Architect track here and the Developer track here.

Understanding Azure Reserved Virtual Machine Instances

One of the main benefits of Azure’s billing model is that it offers per-minute billing. This means that if you have an application, service or environment that isn’t required 24/7, you can reduce your costs by using automation so that you only pay for what you consume.
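
To put a rough number on that, the sketch below compares running a VM around the clock against only running it during business hours on weekdays. The hourly rate is a made-up placeholder, not a real Azure price.

```python
# Illustrative saving from deallocating a VM outside business hours.
# The hourly rate is a made-up placeholder, not a real Azure price.
hourly_rate = 0.20                   # assumed compute cost per hour
hours_always_on = 730                # roughly 24/7 for a month
hours_business_only = 12 * 22        # 12 hours/day, ~22 working days

always_on_cost = hourly_rate * hours_always_on
business_only_cost = hourly_rate * hours_business_only
saving = always_on_cost - business_only_cost

print(f"24/7 running:        {always_on_cost:.2f}/month")
print(f"Business hours only: {business_only_cost:.2f}/month")
print(f"Saving:              {saving:.2f}/month ({saving / always_on_cost:.0%})")
```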

However, if your environment requires you to run a VM constantly, the cost can start to mount up. To help alleviate this, Microsoft offer a solution in the form of long-term, fixed-price Virtual Machine instances.

These Reserved Instances (RIs) help save money by allowing you to pre-pay for a VM size for a one-year or three-year term. Paying up front allows you to make significant savings against Pay-As-You-Go pricing.

[Image: Reserved Instance pricing example]

The most common subscription offers have the ability to purchase RIs, but there are some restrictions in terms of how the purchase is approached. The options are as follows:

  • Enterprise agreement subscriptions. To purchase reservations in an enterprise enrollment, the enterprise administrator must enable reservation purchases in the EA portal.
  • Pay-As-You-Go subscriptions, but you must have the “Owner” role on the subscription to buy a reservation.
  • Cloud Solution Provider subscriptions. However, the providing partner must make the purchase on behalf of the customer.

Once purchased, the discount is applied to the resource usage that matches the RI capacity purchased. For example, if you purchase a one-year RI for a DS4v3-size VM and you are running a DS4v3, the discount will apply against that usage.

A good strategy is to determine the sizing before purchasing the RI, so my advice would be to run your VMs without an RI for a few months to ensure your sizing is suitable. However, if this proves difficult, there is a degree of flexibility offered within your RI scope.

With instance size flexibility, you don’t have to deploy the exact same VM size to get the benefit of your purchased Azure Reserved Instances (RIs), as other VM sizes within the same VM group also get the RI discount. As a rough example, see the below table from the Microsoft announcement.

VM name             VM group      Ratio
Standard_D2s_v3     DSv3 Series   1
Standard_D4s_v3     DSv3 Series   2
Standard_D8s_v3     DSv3 Series   4
Standard_D16s_v3    DSv3 Series   8
Standard_D32s_v3    DSv3 Series   16
Standard_D64s_v3    DSv3 Series   32

This means that if you buy an RI for a D2s_v3, it would cover half of a D4s_v3 instance, and so on. More on how this can be applied and the options available to you is here.
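
The ratio arithmetic is simple enough to sketch: divide the ratio of the size you bought by the ratio of the size you are running to see what fraction of that instance the RI covers. The ratios below are taken from the table above; the function itself is just an illustration of the idea, not how Azure performs the calculation internally.

```python
# Instance size flexibility: what fraction of a running VM's usage does one
# RI cover? Ratios are taken from the DSv3-series table above.
DSV3_RATIOS = {
    "Standard_D2s_v3": 1,
    "Standard_D4s_v3": 2,
    "Standard_D8s_v3": 4,
    "Standard_D16s_v3": 8,
    "Standard_D32s_v3": 16,
    "Standard_D64s_v3": 32,
}

def ri_coverage(purchased_size: str, running_size: str) -> float:
    """Fraction of the running VM covered by one RI of the purchased size."""
    return DSV3_RATIOS[purchased_size] / DSV3_RATIOS[running_size]

print(ri_coverage("Standard_D2s_v3", "Standard_D4s_v3"))  # 0.5 -> half a D4s_v3
print(ri_coverage("Standard_D8s_v3", "Standard_D4s_v3"))  # 2.0 -> two D4s_v3s
```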

In general, I think an RI purchase is something that most deployments should be taking advantage of. Once sized correctly and with the ability to leverage flexibility, there are huge savings to be made with relatively low amounts of administrative effort.

More on how to buy an RI here

More on how the discount is applied here

What is Azure B-Series Compute?


If you’ve provisioned a Virtual Machine (VM) via the Azure Portal in recent months, I am sure you have noticed a new series of VM that sits at a cheaper price point than some of the regular alternatives. This is the B-series, and it has some special features.

Immediately, based on the specifications presented, you would think this series was a relative bargain in comparison to the D-series. Take a look at the table below to see a comparison of prices for B and Dv3 series VMs. (Note: these are from an EA subscription; prices are relative.)

[Image: B-Series vs Dv3 Series VM price comparison]

Comparing a B2ms and a D2s_v3, there is a saving of approximately ten euro per month. You can see they have the same amount of vCPU and RAM, which are the most common deciding factors when sizing a VM. However, if you look closer, you’ll notice the B-series actually has a higher max IOPS. How or why is that possible? (Read a previous post on IOPS etc. here.)

The B-series VMs are designed to offer “burstable” performance. They leverage flexible CPU usage, making them suitable for workloads that run for long periods using a small fraction of the available CPU performance and then spike to needing its full power due to incoming traffic or required work.

This burst isn’t unlimited though. While a B-series VM runs below the baseline performance of the CPU, the VM instance builds up credits. When the VM has accumulated enough credit, you can burst, using up to 100% of the vCPU for the period when your application requires the higher CPU performance.

Here is a great example from Microsoft Docs of how credits are accumulated and spent.

“I deploy a VM using the B1ms size for my application. This size allows my application to use up to 20% of a vCPU as my baseline, which is 0.2 credits per minute I can use or bank.

My application is busy at the beginning and end of my employees’ work day, between 7:00–9:00 AM and 4:00–6:00 PM. During the other 20 hours of the day, my application is typically at idle, only using 10% of the vCPU. For the non-peak hours I earn 0.2 credits per minute but only consume 0.1 credits per minute, so my VM will bank 0.1 x 60 = 6 credits per hour. For the 20 hours that I am off-peak, I will bank 120 credits.

During peak hours my application averages 60% vCPU utilization. I still earn 0.2 credits per minute but I consume 0.6 credits per minute, for a net cost of 0.4 credits a minute, or 0.4 x 60 = 24 credits per hour. I have 4 hours per day of peak usage, so it costs 4 x 24 = 96 credits for my peak usage.

If I take the 120 credits I earned off-peak and subtract the 96 credits I used for my peak times, I bank an additional 24 credits per day that I can use for other bursts of activity.”
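
The arithmetic in that example is easy to reproduce. The sketch below uses the same figures from the Docs walkthrough above (a B1ms baseline of 0.2 credits per minute, roughly 10% vCPU usage for 20 off-peak hours and 60% average usage for four peak hours) to show the daily net credit balance.

```python
# Reproduce the B1ms credit arithmetic from the Docs walkthrough above.
BASELINE_EARN = 0.2  # credits earned per minute (20% vCPU baseline for B1ms)

def net_credits(vcpu_usage_fraction: float, hours: float) -> float:
    """Credits banked (positive) or spent (negative) over the given duration."""
    spend_per_minute = vcpu_usage_fraction  # e.g. 10% vCPU ~ 0.1 credits/minute
    return (BASELINE_EARN - spend_per_minute) * 60 * hours

off_peak = net_credits(0.10, 20)  # 20 idle hours at ~10% vCPU
peak = net_credits(0.60, 4)       # 4 busy hours averaging 60% vCPU

print(f"Off-peak banked: {off_peak:+.0f} credits")         # +120
print(f"Peak spent:      {peak:+.0f} credits")             # -96
print(f"Net per day:     {off_peak + peak:+.0f} credits")  # +24
```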

So, there was quite a bit of maths there. What are the important points?

  • Baseline vCPU performance. This dictates your earn/spend threshold. There is a chart here.
  • Peak utilisation consumption. If this is not allowing you to bank credits, you will eventually end up in a situation where you cannot burst. Size up your VM.
  • Automation doesn’t help here; you only earn credits while the VM is allocated.

Overall I think the B-series is a good option, but only in specific scenarios. A different way of thinking is required to size and manage your application. If you get it right, you can make some savings on running a standard VM.

There is a Q&A on some common topics here.

Azure App Service and Windows Containers

Containerisation of applications is becoming more and more common. Allowing developers to “wrap” all requirements into an individual element, which the infrastructure team can then deploy wherever resources are available, opens the door to the most modern options in application deployment and management.

Enter Azure App Service, which for years now has been removing the need for an infrastructure management layer and allowing teams to focus on deployment and performance. Traditionally, you had to deploy your apps within the allowed parameters of your App Service Plan (ASP). However, you can now run containers as part of this platform.

Combine this with a Container Registry, such as Azure Container Registry and you can deploy images within minutes. These images can then be scaled within your ASP to meet demand and can be updated as required using your current CI/CD processes.

This had been limited to Linux-based containers, but Microsoft have recently announced a public preview of the ability to run Windows containers within your ASP. This is targeted towards customers interested in migrating .NET applications to Azure and hoping to use a PaaS service to get the many productivity benefits, such as high availability within and across Azure regions. It can also increase application redundancy options through integrated backup/restore and app-cloning features.

[Image: Web App for Containers example deployment scenario]

The preview capabilities are appropriate for testing and POC environments, but there are of course some limitations and preview deployments are not recommended for production workloads in any scenario.

Within the preview the following is supported:

  • Deploy containerized applications using Docker Hub, Azure Container Registry, or private registries.
  • Incrementally deploy apps into production with deployment slots and slot swaps.
  • Scale out automatically with auto-scale.
  • Enable application logs and use the App Service Log Streaming feature to see logs from your application.
  • Use PowerShell and Win-RM to remotely connect directly into your containers.

For a quick start/how-to see the following link.

First Impressions – Azure Firewall Preview

Recently, Microsoft announced that a new Azure Firewall service was entering a managed public preview. Azure Firewall is a managed network security service that protects your Azure Virtual Network resources. It is a fully stateful firewall as a service with built-in high availability and scalability.

[Image: Azure Firewall overview]

The service uses a static public IP, meaning that your outbound traffic can be identified by third-party services if required. Worth noting: only outbound rules are active within this preview. Inbound filtering will hopefully be available by GA.

The following capabilities are all available as part of the preview:

  • Stateful firewall as a Service
  • Built-in high availability with unrestricted cloud scalability
  • FQDN filtering
  • Network traffic filtering rules
  • Outbound SNAT support
  • Centrally create, enforce, and log application and network connectivity policies across Azure subscriptions and VNETs
  • Fully integrated with Azure Monitor for logging and analytics

As with all previews, it should not be used for production environments, but for testing purposes, here is how to register your tenant for deployment.

To enable the Azure Firewall public preview follow the guide here: Enabling the preview

Once enabled, follow this tutorial for a sample implementation: Deployment Tutorial

Now that you’re familiar with the deployment, you should apply it to your specific test scenarios. Be wary of some operations that could be limited by applying a default route to your VM. There is an updated FAQ for the service here: Azure Firewall FAQ

Overall, this is a welcome addition to Azure networking. As the preview progresses and more service options are added, especially inbound options, I see this becoming as common as deploying an NSG. Combining it with peering and the right set of rule collections allows for an easily managed, scalable and, most importantly, secure environment within Azure with minimal cost and infrastructure footprint.