Azure Compute Updates at Ignite

If you thought there were a lot of networking updates at Microsoft Ignite, you won’t believe how many there were for Azure Compute. Here I will try to round up those I am most excited about. Some of the features announced have been on a wish list of mine for quite a while…I’m looking at you, Managed Disks!

First up, several new VM sizes have been announced. The ND and NV series have been updated in preview. These series offer powerful GPU capabilities and are now running cutting-edge tech from NVIDIA.

HPC can often face blockers on-premises but is easily workable in Azure. Building on this, Microsoft have added two new ranges to the H series offering: the HB and HC series will be in preview before the end of the year. These allow for staggering amounts of compute power and bandwidth.

Storage is sometimes overlooked when considering VM performance; that is something Microsoft are attempting to correct with the announcement of Ultra SSD Managed Disks in preview. These disks will offer sub-millisecond latency and can hit up to 160,000 IOPS on a single disk. That is not a typo; these are the fastest disks available in any cloud.

Standard SSDs and larger disk sizes across the board were also announced. This allows greater flexibility in performance and cost management when designing and deploying solutions.

As mentioned earlier, my wish-list item: Managed Disks can now be moved between resource groups and subscriptions. This finally allows better management and flexibility with deployments. This update also allows you to move managed images and snapshots. We had access to the private preview of this functionality and it works exactly as expected.

Not directly Compute, but important in relation to it, is the announcement of Windows Virtual Desktop. Azure is already the only cloud where you can run Windows 10 workloads, and this service is going to improve the deployment and management of them. Essentially, Azure will run the RDS Gateway and Broker services for you. You will have full control of, and responsibility for, the infrastructure this will connect to and which applications and desktops are presented. We’ve chatted to the Product Team here at Ignite and they are excited for people to get their hands on the preview and really test it out. My favourite piece of functionality is that the service will be agent-less when connecting from Windows 10, which should make deployment and adoption as painless as possible for admins!

Finally, to encourage migration of older workloads, Microsoft announced that if you migrate Windows Server or SQL Server 2008/R2 to Azure, you will get three years of free extended security updates on those systems. This could save you some money when Windows Server and SQL Server 2008/R2 reach end of support (EOS).

So many announcements, so little time. Expect more detailed posts on most, if not all, of the above over the coming weeks and months.

Azure Networking Updates at Ignite

To say there have been a lot of Azure announcements at Ignite is an understatement. Several important services have hit GA and several exciting new services have been announced. Rather than re-list them all (I’ll link through at the end), I am going to highlight those I am most excited about and think will be relevant to most clients.

Below is a graphical overview of the new approach to Azure networking, which contains four pillars: Connect, Protect, Deliver and Monitor.

[Image: Azure networking fall update overview]

Starting with Connect, two ExpressRoute-related announcements immediately grabbed my attention. Microsoft now offer 100 Gbps connectivity via ExpressRoute Direct, which is the fastest cloud connectivity available.

Helping customers with geo-distributed offices, ExpressRoute Global Reach was announced, allowing customers to connect offices using their existing circuits. In addition, Azure Virtual WAN is now GA. With that move, Microsoft also added new preview features, including P2S VPN and ExpressRoute connectivity, shown below:

[Image: Virtual WAN preview features]

The final announcement in Connect that will really make a difference to clients is Public IP Prefix. This means that you can now request a contiguous range of static public IP addresses for your resources that you will “own”. This will make whitelisting and administration of your public presence a lot simpler.
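To illustrate why a contiguous prefix simplifies whitelisting, here is a small sketch using Python’s standard ipaddress module; the /28 block below is a documentation range used for illustration, not a real Azure prefix.

```python
import ipaddress

# A single /28 prefix covers 16 consecutive public addresses, so a
# partner only needs one whitelist entry instead of 16 individual IPs.
prefix = ipaddress.ip_network("203.0.113.16/28")

print(prefix.num_addresses)   # number of addresses in the block
print(prefix[0], prefix[-1])  # first and last address in the range

# Membership checks are a one-liner, which is what makes the
# administration of a public presence simpler.
print(ipaddress.ip_address("203.0.113.20") in prefix)
```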

Moving to the Protect pillar, Azure Firewall has now moved to GA. This is a great service for those looking to have control over their breakout point in Azure without deploying and maintaining an NVA.

Rounding out Protect are Service Endpoint Policies. These allow more granular control over access to Azure resources via your VNet service endpoints. See my previous post on endpoints for more.

From the Deliver pillar, the announcement of Azure Front Door in preview is the big one. AFD is a global entry point for your applications that is scalable and secure. Its ability to scale and its level of integration with existing services will make it widely used by customers with a large public-facing presence.

There are also updates to Application Gateway which improve performance, introduce redundancy, and increase the level of integration with other services.

From the final pillar, Monitor, the main announcement is Virtual Network TAP in preview. This allows for continuous mirroring of VM traffic to a collector, without any agents. This is the first such service in a public cloud and will allow for greater analytics and granularity of environment traffic.

For the entire round-up from Microsoft, head here.

Azure Certification Updates and Announcing Microsoft Learn

We’re lucky enough to be at Microsoft Ignite in Orlando this week. Trying to wrap all of the announcements into immediate reactions is almost impossible, so I’m going to take some time to give each announcement the detail it deserves. With that said, I think the best place to start is with learning and certification.

At Inspire, Microsoft announced new learning paths for Azure; see here for my previous post. Yesterday, they confirmed the new certification options. Microsoft have shifted their certification focus to be role-based. This was first seen with the Azure Administrator certification and is continued with the Developer and Architect tracks. Azure is the first technology platform Microsoft are making this change for; Microsoft 365 will follow later this year.

One interesting point to take from the announcement is that there will no longer be an MCSA or MCSE qualification; there will be individual awards for each path. This is best explained in the diagram below:

[Image: Azure Apps and Infrastructure certifications]

So, depending on your specific (or hoped-for future) job role, you have some options. As you can see in the bottom right-hand corner, there are transition exams available if you have previously sat the relevant Azure exams. I’ve already sat the AZ-102 beta exam; it was very good, everything you would expect from an Administrator perspective. I am going to attempt to sit the AZ-302 exam here at Ignite as well, so watch social for thoughts on that if there are no NDAs.

The second and more impressive announcement is the introduction of Microsoft Learn. This is an entirely new platform that allows you to learn interactively, right in the browser. Want to know the best thing? It’s all FREE.

If you’re trying to learn more about Azure, or you are just starting, there is an absolute wealth of learning available. The paths are accurate and beneficial; they allow you to learn at your own pace, and if you’re into achievements, they offer plenty! See below for an example of some of the paths offered:

[Image: example Microsoft Learn Azure paths]

Part of the brilliance of the platform is direct Cloud Shell integration, allowing you to actually deploy and test what you are learning about, again right in the browser. Genuinely, I am loving this new Learn platform. I spent some time yesterday speaking to the product team and trying out different paths myself, and the service is excellent. What are you waiting for? Get over there and start learning now!

There are discount codes for the beta exams: the Architect track here and the Developer track here.

Understanding Azure Reserved Virtual Machine Instances

One of the main benefits of Azure’s billing model is that it offers per-minute billing. This means that if you have an application/service/environment that isn’t required 24/7, you can reduce your costs by using Automation so that you only pay for what you consume.

However, if your environment requires you to run a VM constantly, the cost can start to mount up. To help alleviate this, Microsoft offer a solution in the form of long-term, fixed-price Virtual Machine instances.

These Reserved Instances (RIs) help save money by allowing you to pre-pay for a VM size for a one-year or three-year term. Paying up front allows you to make significant savings on the Pay-As-You-Go pricing.
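As a rough sketch of the arithmetic, the snippet below compares a Pay-As-You-Go total with a pre-paid RI. The hourly rate and discount level are made-up figures for illustration only; always check the Azure pricing calculator for real numbers.

```python
# Hypothetical comparison of Pay-As-You-Go vs a pre-paid Reserved
# Instance. All rates here are invented for illustration.

HOURS_PER_MONTH = 730  # common billing approximation for a month

def payg_cost(hourly_rate: float, months: int) -> float:
    """Total Pay-As-You-Go cost for running a VM 24/7."""
    return hourly_rate * HOURS_PER_MONTH * months

def ri_saving(hourly_rate: float, ri_upfront: float, months: int) -> float:
    """Saving (positive = cheaper) from buying an RI instead of PAYG."""
    return payg_cost(hourly_rate, months) - ri_upfront

# Assume a VM at 0.20/hour and a hypothetical three-year RI priced at
# 40% of the equivalent PAYG total (i.e. a 60% discount).
payg_total = payg_cost(0.20, 36)
ri_price = payg_total * 0.40
print(f"PAYG over 3 years: {payg_total:.2f}")
print(f"RI upfront price:  {ri_price:.2f}")
print(f"RI saving:         {ri_saving(0.20, ri_price, 36):.2f}")
```

The exact break-even depends on whether the VM really does run constantly, which is why the sizing advice later in this post matters.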

[Image: Reserved Instance pricing example]

The most common subscription offers have the ability to purchase RIs, but there are some restrictions in terms of how it is approached. The options are as follows:

  • Enterprise agreement subscriptions. To purchase reservations in an enterprise enrollment, the enterprise administrator must enable reservation purchases in the EA portal.
  • Pay-As-You-Go subscriptions, but you must have the “Owner” role on the subscription to buy a reservation.
  • Cloud Solution Provider subscriptions. However, the providing partner must make the purchase on behalf of the customer.

Once purchased, the discount is then applied to the resource usage that matches the RI capacity purchased. For example, if you purchase a one-year RI for a DS4v3-size VM and you are running a DS4v3, the discount will apply against that usage.

A good strategy is to determine the sizing before purchasing the RI, so my advice would be to run your VMs without an RI for a few months to ensure your sizing is correct. However, if this is proving difficult, there is a range of flexibility offered within your RI scope.

With instance size flexibility, you don’t have to deploy the exact same VM size to get the benefit of your purchased Reserved Instances, as other VM sizes within the same VM group also get the RI discount. As a rough example, see the below table from the Microsoft announcement.

VM name             VM group      Ratio
Standard_D2s_v3     DSv3 Series   1
Standard_D4s_v3     DSv3 Series   2
Standard_D8s_v3     DSv3 Series   4
Standard_D16s_v3    DSv3 Series   8
Standard_D32s_v3    DSv3 Series   16
Standard_D64s_v3    DSv3 Series   32

This means that if you buy an RI for a D2s_v3, it would cover half of a D4s_v3 instance, and so on. More on how this can be applied, and the options available to you, here.
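The ratio logic can be sketched as follows. The ratios are taken from the table above, but the coverage function is a simplified illustration, not Azure’s exact billing algorithm.

```python
# Sketch of instance size flexibility within a VM group: RI discount
# coverage is proportional to the ratio of the purchased size versus
# the running size. Simplified for illustration.

RATIOS = {
    "Standard_D2s_v3": 1,
    "Standard_D4s_v3": 2,
    "Standard_D8s_v3": 4,
    "Standard_D16s_v3": 8,
    "Standard_D32s_v3": 16,
    "Standard_D64s_v3": 32,
}

def coverage(purchased_size: str, running_size: str, quantity: int = 1) -> float:
    """Fraction of the running VM's usage covered by the purchased RIs."""
    covered = RATIOS[purchased_size] * quantity
    needed = RATIOS[running_size]
    return min(covered / needed, 1.0)

# One D2s_v3 RI covers half of a running D4s_v3...
print(coverage("Standard_D2s_v3", "Standard_D4s_v3"))    # 0.5
# ...while two of them cover it fully.
print(coverage("Standard_D2s_v3", "Standard_D4s_v3", 2)) # 1.0
```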

In general, I think an RI purchase is something that most deployments should be taking advantage of. Once sized correctly and with the ability to leverage flexibility, there are huge savings to be made with relatively low amounts of administrative effort.

More on how to buy an RI here.

More on how the discount is applied here.

 

What is Azure B-Series Compute?


If you’ve provisioned a Virtual Machine (VM) via the Azure Portal in recent months, I am sure you have noticed a new series of VM that sits at a cheaper price point than some of the regular alternatives. This is the B-series, and it has some special features.

Immediately, based on the specifications presented, you would think this series is a relative bargain in comparison to the D series. Take a look at the table below for a comparison of prices for B-series and Dv3-series VMs (note: these are from an EA subscription; prices are relative):

[Image: B-series vs Dv3-series VM price comparison]

Comparing a B2ms and a D2s_v3, there is a saving of approximately ten euro per month. You can see they have the same amount of vCPU and RAM, which are the most common deciding factors when sizing a VM. However, if you look closer you’ll notice the B-series actually has a higher max IOPS. How, or why, is that possible? (Read a previous post on IOPS here.)

The B-series VMs are designed to offer “burstable” performance. They leverage flexible CPU usage, suitable for workloads that run for long periods using a small fraction of the available CPU performance and then spike to needing its full power due to incoming traffic or required work.

This burst isn’t unlimited, though. While B-series VMs are running below the baseline performance of the CPU, your VM instance builds up credits. When the VM has accumulated enough credits, it can burst, using up to 100% of the vCPU, for the period of time when your application requires the higher CPU performance.

Here is a great example from Microsoft Docs of how credits are accumulated and spent:

“I deploy a VM using the B1ms size for my application. This size allows my application to use up to 20% of a vCPU as my baseline, which is 0.2 credits per minute I can use or bank.

My application is busy at the beginning and end of my employees’ work day, between 7:00-9:00 AM and 4:00-6:00 PM. During the other 20 hours of the day, my application is typically at idle, only using 10% of the vCPU. For the non-peak hours I earn 0.2 credits per minute but only consume 0.1 credits per minute, so my VM will bank 0.1 x 60 = 6 credits per hour. For the 20 hours that I am off-peak, I will bank 120 credits.

During peak hours my application averages 60% vCPU utilization; I still earn 0.2 credits per minute but I consume 0.6 credits per minute, for a net cost of 0.4 credits a minute, or 0.4 x 60 = 24 credits per hour. I have 4 hours per day of peak usage, so it costs 4 x 24 = 96 credits for my peak usage.

If I take the 120 credits I earned off-peak and subtract the 96 credits I used for my peak times, I bank an additional 24 credits per day that I can use for other bursts of activity.”

So, there was quite a bit of maths there; what are the important points?

  • Baseline vCPU performance. This dictates your earn/spend threshold. There is a chart here.
  • Peak utilisation consumption. If this is not allowing you to bank credits, you will eventually end up in a situation where you cannot burst. Size up your VM.
  • Automation doesn’t help here; you only earn credits while the VM is allocated, and accumulated credits are lost when the VM is deallocated.
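The credit arithmetic from the Microsoft example can be reproduced in a few lines. The 20% baseline figure comes from the example above; the utilisation values are simply inputs.

```python
# Reproducing the B1ms credit arithmetic from the worked example above.

BASELINE_PERCENT = 20  # B1ms baseline: 20% of one vCPU = 0.2 credits/min

def net_credits_per_hour(cpu_percent: int) -> float:
    """Credits banked (positive) or spent (negative) per hour when the
    VM runs at the given average vCPU utilisation percentage."""
    return (BASELINE_PERCENT - cpu_percent) * 60 / 100

off_peak = net_credits_per_hour(10) * 20  # 20 idle hours at 10% CPU
peak = net_credits_per_hour(60) * 4       # 4 busy hours at 60% CPU

print(off_peak)         # 120.0 credits banked off-peak
print(peak)             # -96.0 credits spent at peak
print(off_peak + peak)  # 24.0 net credits banked per day
```

If the peak figure outweighs the off-peak figure, the daily net goes negative and the VM will eventually be pinned at its baseline, which is the “size up your VM” scenario from the bullet list.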

Overall, I think the B-series is a good option, but only in specific scenarios. A different way of thinking is required to size and manage your application, but if you get it right, you can make some savings compared to running a standard VM.

There is a Q&A on some common topics here.

Azure App Service and Windows Containers

Containerisation of applications is becoming more and more common. Allowing developers to “wrap” all requirements into an individual element, which the infrastructure team can then deploy wherever resources are available, opens the door to the most modern options in application deployment and management.

Enter Azure App Service, which for years now has been removing the need for an infrastructure management layer and allowing teams to focus on deployment and performance. Traditionally, you had to deploy your apps within the allowed parameters of your App Service Plan (ASP). However, you can now run containers as part of this platform.

Combine this with a container registry, such as Azure Container Registry, and you can deploy images within minutes. These images can then be scaled within your ASP to meet demand and can be updated as required using your current CI/CD processes.

This had been limited to Linux-based containers, but Microsoft have recently announced a public preview of the ability to run Windows containers within your ASP. This is targeted at customers interested in migrating .NET applications to Azure who hope to use a PaaS service to get the many productivity benefits, such as high availability within and across Azure regions. It can also increase application redundancy options through the integrated backup/restore and app-cloning options.

[Image: example deployment scenario for Windows containers on App Service]

The preview capabilities are appropriate for testing and POC environments, but there are of course some limitations, and preview deployments are not recommended for production workloads in any scenario.

Within the preview the following is supported:

  • Deploy containerized applications using Docker Hub, Azure Container Registry, or private registries.
  • Incrementally deploy apps into production with deployment slots and slot swaps.
  • Scale out automatically with auto-scale.
  • Enable application logs and use the App Service Log Streaming feature to see logs from your application.
  • Use PowerShell and WinRM to remotely connect directly into your containers.

For a quick start/how-to see the following link.

First Impressions – Azure Firewall Preview

Recently, Microsoft announced that a new Azure Firewall service was entering a managed public preview. Azure Firewall is a managed network security service that protects your Azure Virtual Network resources. It is a fully stateful firewall-as-a-service with built-in high availability and scalability.

[Image: Azure Firewall overview]

The service uses a static public IP, meaning that your outbound traffic can be identified by third-party services as/if required. Worth noting: only outbound rules are active within this preview. Inbound filtering will hopefully be available by GA.

The following capabilities are all available as part of the preview:

  • Stateful firewall as a Service
  • Built-in high availability with unrestricted cloud scalability
  • FQDN filtering
  • Network traffic filtering rules
  • Outbound SNAT support
  • Centrally create, enforce, and log application and network connectivity policies across Azure subscriptions and VNETs
  • Fully integrated with Azure Monitor for logging and analytics
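Conceptually, FQDN filtering in an application rule boils down to matching the target FQDN of outbound traffic against an allowed list, including wildcard entries. The sketch below illustrates that matching logic only; the rule entries are made up, and this is in no way how Azure Firewall is implemented internally.

```python
# Conceptual illustration of FQDN filtering: outbound requests are
# allowed only when the destination FQDN matches an allowed entry.
# Rule entries here are invented for the example.

allowed_fqdns = ["*.microsoft.com", "docs.example.com"]

def fqdn_allowed(fqdn: str) -> bool:
    """Return True if the FQDN matches an exact or wildcard entry."""
    for pattern in allowed_fqdns:
        if pattern.startswith("*."):
            # "*.microsoft.com" matches any subdomain of microsoft.com
            if fqdn.endswith(pattern[1:]):
                return True
        elif fqdn == pattern:
            return True
    return False

print(fqdn_allowed("update.microsoft.com"))  # True
print(fqdn_allowed("badsite.example.net"))   # False
```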

As with all previews, it should not be used for production environments, but for testing purposes, here is how to register your tenant for deployment.

To enable the Azure Firewall public preview follow the guide here: Enabling the preview

Once enabled, follow this tutorial for a sample implementation: Deployment Tutorial

Now that you’re familiar with the deployment, you should apply it to your specific test scenarios. Be wary of some operations that could be limited by applying a default route to your VM. There is an updated FAQ for the service here: Azure Firewall FAQ

Overall, this is a welcome addition to Azure networking. As the preview progresses and more service options are added, especially inbound options, I can see this becoming as common as deploying an NSG in your environment. Combining it with peering and the right set of rule collections allows for an easily managed, scalable and, most importantly, secure environment within Azure with minimal cost and infrastructure footprint.

New Azure Certifications

As many of you are probably aware, there is already a Microsoft certification path for those looking for Azure skill set recognition. Recently at the Microsoft Inspire partner conference, Microsoft announced that this would be replaced.

There will now be three roles defined, each with a new qualification to earn.

  1. Azure Administrator
  2. Azure Developer
  3. Azure Solutions Architect

[Image: new role-based certifications]

These new roles and certifications are an attempt to better line up with industry demands and standards.

[Image: overall certification structure]

As with most current MCSA qualifications, it will take two exams to earn a certification. The beta exams for Azure Administrator are live right now. AZ-100 and AZ-101 test a broad range of skills across the Azure sphere, including Compute, Networking, Identity and Storage, among others.

If you already have the previous exam, 70-533, you can take a transition exam, AZ-102, to earn the Azure Administrator certification. Again, this is in beta, so places are limited.

[Image: certification timeline]

The time frame for the remaining roles and certifications was also shown, so expect a follow-up post on this later this year.

Also, thanks to Microsoft MVP Thomas Maurer, here is a link to discount codes should you wish to book any of the beta exams. The link is to the Microsoft Learning blog, so I will paste it here in full in case anyone is worried about dodgy code pages! 🙂

https://www.microsoft.com/en-us/learning/community-blog-post.aspx?BlogId=8&Id=375147

If you would like to watch the announcement in full, you can do so here – https://myinspire.microsoft.com/videos/fb7c3db2-1c65-4a69-aceb-fd06c19bf971

Securing Azure PaaS

When considering Azure as a platform, part of the conversation should revolve around transformation; that is, how do we transform our approach from what is viewed as traditional to something more modern. Often this could lead to redesigning how your application/service is deployed, but for some workflows, a simple change from IaaS to PaaS is viewed as a quick win.

This change isn’t suitable in all scenarios, but depending on your specific requirements it could allow for greater resiliency, a reduction in costs, and simpler administration. One service that is often considered is SQL. Azure has its own PaaS SQL offering, which removes the need for you to manage the underlying infrastructure. That alone makes the transformation a worthy consideration.

However, what isn’t often immediately apparent to some administrators is that PaaS offerings are, by their nature, public-facing. For Azure SQL to be as resilient as possible and scale responsively, it sits behind a public FQDN. Therefore, how this FQDN is secured must be considered a priority to ensure your data is protected appropriately.

Thankfully, Azure SQL comes with a built-in firewall service. Initially, all Transact-SQL access to your Azure SQL server is blocked by the firewall. To allow traffic, you must specify one or more server-level firewall rules that enable access. The firewall rules specify which IP address ranges from the internet are allowed. There is also the ability to choose whether Azure applications can connect to your Azure SQL server.

It is also possible to grant access to just one of the databases within your Azure SQL server; you simply create a database-level rule for the required database. However, while this limits the traffic to specific IP ranges, the traffic still flows via the internet.

To communicate with Azure SQL privately, you will first need an Azure VNet. Once that is in place, you must enable the service endpoint for Azure SQL, see here. This will allow communication directly between listed subnets within your VNet and Azure SQL via the Azure backbone. This traffic is more secure and possibly faster than via the internet.

Once your endpoint is enabled, you can then create a VNet firewall rule on Azure SQL for the subnet which has a service endpoint enabled. All endpoints within the subnet will have access to all databases. You can repeat these steps to add additional subnets. If your VNet rules replace the previous IP rules, remember to remove the IP rules from your Azure SQL firewall.
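Conceptually, a server-level IP rule check boils down to testing whether the client’s address falls within any allowed range. The sketch below illustrates that logic only; the rule names and ranges are made up, and this is not Azure’s actual implementation.

```python
# Conceptual sketch of server-level IP firewall rule evaluation:
# a client is allowed if its address falls inside any allowed range.
# Rule names and ranges below are invented for illustration.
import ipaddress

server_rules = {
    "office-hq": ("203.0.113.0", "203.0.113.31"),      # example range
    "vpn-egress": ("198.51.100.10", "198.51.100.10"),  # single address
}

def is_allowed(client_ip: str) -> bool:
    """Return True if the client IP matches any server-level rule."""
    ip = ipaddress.ip_address(client_ip)
    for start, end in server_rules.values():
        if ipaddress.ip_address(start) <= ip <= ipaddress.ip_address(end):
            return True
    return False

print(is_allowed("203.0.113.20"))  # True: inside the office-hq range
print(is_allowed("192.0.2.50"))    # False: no matching rule
```

The narrower the ranges, the smaller your exposure, which is exactly why the VNet rules above are preferable to broad internet-facing IP rules.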

Also worth noting is the option to “Allow all Azure services”. The presumption here is that this would somehow only allow access from Azure services within your subscription, but this is not the case. It means every single Azure service in all subscriptions, even mine! My recommendation is to avoid this whenever possible; however, there are some cases where it is required, and this access should be noted as a risk.

More on Azure SQL Firewall – https://docs.microsoft.com/en-us/azure/sql-database/sql-database-firewall-configure

More on Azure SQL with V-Nets – https://docs.microsoft.com/en-us/azure/sql-database/sql-database-vnet-service-endpoint-rule-overview

 

Azure IaaS Disaster Recovery

The ability to recover your IaaS VMs in Azure to a different region has been a logical requirement for quite some time. Microsoft made the feature available in preview last year and this week made it GA.

Azure DR allows you to recover your IaaS VMs in a different Azure region should their initial region become unavailable. For example, if you run your workloads in North Europe and the region experiences significant downtime, you are able to recover your workloads in West Europe.

In this post I will go through setting up an individual VM to replicate from North Europe to West Europe. However, it’s worth pointing out that DR should be a business discussion, not just a technical one. All scenarios that could occur, within reason, should be discussed to decide whether DR is warranted. For example, if your business relies entirely on your premises for production, then losing the premises means there is no production capability regardless of system recovery. The idea is to scope what DR actually means for your business and remember: DR is only valid if it is tested!

Enabling DR for a VM is straightforward. Open your VM blade and scroll down to Operations; you will see an option for Disaster Recovery.

[Image: Disaster Recovery option in the VM blade]

You select a Target Region, which must be different from your current region; you can then choose the default settings for a POC. In my screenshot, I have already created a Resource Group and Recovery Services Vault in West Europe, so I will use those. Once submitted, replication for your VM will be enabled. You can then view the configured options:

[Image: configured replication settings]

And that’s it! Once synchronisation completes, your VM is protected in a different region. However, for it to be valid, you need to design and confirm your Recovery Plan, then complete both a Test Failover and a Complete Failover and Failback.

More reading on the overall concept and Azure-Azure DR specifics here.