Festive Tech Calendar: Adopt AI like a PRO with Azure Essentials

Another year, and another fantastic Festive Tech Calendar. While it wasn’t the first event I participated in, I do think it is my longest-running annual event. I have been a fan since its inception and am delighted to see it continue. This year, the team are raising funds for Beatson Cancer Charity. Donations are appreciated via the Just Giving page.

Now, this post is all about Azure AI adoption via the new Azure Essentials offering. So, we will start off by explaining what that is and why it matters. Over the years, Microsoft has introduced new ways of doing things, new approaches or methods; sometimes these have been simple renames, and sometimes they have been a completely different vision of Azure. Either way, they can often be confusing. This post aims to help you understand Azure Essentials better, using the “tech of the moment”, Azure AI.

So – let’s get started. What exactly is Azure Essentials? As we’re working with AI, let’s set the scene using Copilot…

Copilot for M365 in Teams (please don’t @ me about the structure of that name, I cannot keep up with how to reference Copilot!) was helpful:

Copilot in Azure…not so much:

What you need to take away at this stage is that, rather than being an entirely new thing, Azure Essentials consolidates existing good work so that consuming it is simpler and more refined. In theory this makes sense; however, we all know the implementation of these things can be very tricky.

With this in mind, how to use or approach Azure Essentials shifts a bit. The first point that struck me was that this is most useful for people new to Azure. That is not to say it isn’t useful for the experienced. But we make a lot of assumptions that newcomers will know about and make use of offerings like CAF and WAF, will reference the Architecture Centre for design guidance, and so on, when that is likely not the case.

Centralising core guidance as Azure Essentials is a great idea in my opinion. However, it hasn’t just centralised guidance. Disclosing at this point that I work for a Microsoft Partner. Essentials also includes recommendations for finding a partner, leveraging funding programs, which products are useful, and customer testimonials. This is nice for companies like mine as a marketing/contact channel, but I am not sure I would define it as “essential”.

What is essential, though, is how it frames guidance and aligns customers to the right approach. The site is a touch confusing on this point. The new resource kit is right at the top; it’s the first link on the page. But scenario or use case guidance is further down and brings you elsewhere. Sticking with our original idea regarding AI adoption, there is a use case listed, and it brings you to an Azure Architecture blog from July – this is not what we want…

Whereas if we open the Resource Kit, then check its contents, we get a ‘common scenario’ with click-through links.

Now before we dig in there, one item I noted when researching this was that some messaging implies, or potentially confuses some elements of this with changes to, or improvements upon the Cloud Adoption Framework (CAF). In my opinion, Azure Essentials doesn’t change CAF, it’s not even listed on the What’s New page. However, it is an improvement to how people may be guided to CAF. And anything that brings more people to CAF and allows for efficient and more well governed deployments is a positive to me!

So, what exactly does Essentials recommend as the ideal detail required for AI adoption? Six steps and some learning material. I am delighted to see the inclusion of learning material; it’s becoming more and more important as the rate of change increases. Let’s have a look at the six steps:

  1. Assess your Azure AI readiness
  2. Explore Azure AI pricing
  3. Prepare your AI environment
  4. Design your AI workloads
  5. Develop Well-Architected AI workloads
  6. Deploy, Manage, and operate your AI workloads

At first glance this looks like a good set to me. I don’t think I would have ranked pricing as high in the sequence, but perhaps it’s important to get that out of the way early! 🙂

The first ask here is to take an assessment. The Azure AI readiness assessment focusses on core areas of adoption strategy within your business. It can be a lengthy process – it notes 45 minutes, but if you choose all of the areas available, you face 100+ pages of questions to attain your score. Anyone familiar with Azure Well-Architected Reviews, or the old Governance Assessment, will see the immediate similarities here and understand the usefulness of something that asks people to think about things in the correct way and offers a score to guide expectations.

Next, it’s pricing. Again, this is tricky for me. To be remotely accurate with pricing, I think you need some form of design, which dictates resources, which in turn lead to a price. You are then happy, or shocked, and rework your design. Rinse and repeat until you get where you need to be. Unfortunately, the link in the resource kit lands you on the default pricing page for Azure, nothing AI-specific, so you really are starting at the bottom. Some more AI-specific guidance here would be a great inclusion for the next version. For example, this placement link brings you to the menu item for AI pricing on this page – a small but helpful direction.

Next, we’re onto preparation. There is a good note on a Landing Zone, but as this is Azure Essentials, I would have expected it to link through to some guidance on Landing Zones. We then get two links to design architectures for Azure AI in the Architecture Centre. This could be more confusing than helpful, and it’s not the preparation guidance I would expect. This is Azure Essentials, and here is the first AI architecture Visio you see…

My concern here is complexity. I know people may have more interest in using OpenAI models and the whole chat functionality, but I would have gone a different route here. Most likely document-based: something that uses one of the more mature services, like Document Intelligence, and a simpler architecture for guidance. Make it easier to see the objective rather than the mountain presented above. I don’t think there is actually a perfect set of links here; there are too many variables and too much information dependent on where the user’s perception of AI is. It will be very interesting to see how this progresses, and it may always require further expertise and information to be properly impactful.

Next, design, one of my favourite areas. No other aspect of Azure excites me like creating the solution design. With a vast platform, you start with everything and toil away until you have what works for what is needed. Here we get a note to choose from reference architectures – a good point, but which ones? No links are provided, though in fairness there is no single link that works here; the reference architectures are spread out amongst the different products. Next, we get a great link to the AI architecture design overview page. I think I might have switched steps 3 and 4 here, actually. Doing this first gives people a much better starting point to learn from, and then to understand step 3 more comprehensively. Bookmark this page for your AI adoption journey; it’s like a TOC of what to read for each service/product.

The penultimate step guides us to well-architected workloads. The note is simply a note; the point is valid, but I think it should have included this link as the starting point for this step. It’s really useful and helps you quickly jump to where you need to be within the Well-Architected Framework (can anyone else just not call it WAF? Too confusing for me with Web Application Firewall). The included link, which focusses on Azure OpenAI, is good. It has the expected pillar guidance for Well-Architected, and a comprehensive set of accurate click-through links. I think this step is important and placed correctly too, so it flows well at this point of the resource kit.

Finally, we have a deploy and manage step. This feels like the weakest of the six steps. First of all, the title is repeated as the first bullet point – not great.

Then it notes we should use best practice – again, with no guidance as to what that means, or what it looks like in practice when it comes to deployment and management. Finally, it links to a guide page regarding responsible use of AI. Responsible use is incredibly important and valid when operating AI workloads, but it is useless as the single link for this step. There is a literal AI management page on CAF already that could be used. I have waited until this step to link to this area of CAF, as it hasn’t been updated since the start of 2024, but it has a lot of detail this kit should include and, with an update, would make much more sense than some of the links included.

In conclusion, I think the kit needs some work, a revision so to speak. First, I would tweak the steps to be as follows:

  1. Assess your Azure AI readiness
  2. Develop Well-Architected AI workloads
  3. Design your AI workloads
  4. Prepare your AI environment
  5. Deploy, Manage, and operate your AI workloads
  6. Explore Azure AI pricing

Next, I would rely more heavily on CAF and the Architecture Center, with context for links, or link to overview pages with a note to use the links within – like a ‘further reading’ note or similar. I know it is meant to be Essentials, but let’s give essential guidance rather than minimal guidance, perhaps?

Finally, if you want to adopt AI like a pro – I think Essentials is useful as a sanity check, but you are better off investing your time in the already existing material on Learn, CAF and WAF.

How to – Control Azure Regions with Azure Policy and Bicep

Updated: March 2024 – repo link now updated to new GitHub structure!

A common requirement for many tenants is to control or restrict the regions in which Azure can be used; this is most commonly done via Azure Policy. Wanting or needing to restrict regions can be for several reasons, the most common of which are below:

  • Alignment – Align to local presence for performance etc.
  • Governance – Align to compliance, residency and data restriction requirements.
  • Features – Not all data regions are created equally – restrict to those with features required.

For anyone who has created a Policy like this in the past, the experience is quite straightforward. Azure provides an out-of-the-box template; you simply pick which regions are allowed from a drop-down. However, there are actually three policies you should consider for this in my opinion, all of which are included here.

  • Allowed Locations – when a resource deploys, it must align
  • Allowed Locations for Resource Groups – when an RG deploys, it must align
  • Audit resource location matches resource group location – just nice to have for governance!

So, with multiple policies to deploy, a controlled and accurate solution is to define these Policies via code – in my case, Bicep. How you manage this (i.e. deployment, repo management, etc.) is not for this blog post, but is achievable with this as your base Policy. Two elements are required for this to work correctly:

  1. Bicep file to deploy our Policy resources
  2. Parameters file for policy definition IDs and regions required.

Item 1 is simple. I’ve included the code below, and will link off to a repo with all of this at the end too.
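A sketch of what that Bicep file can look like, assuming the three built-in definition IDs and the region list arrive via the parameters file (the parameter and assignment names here are my own illustrative choices, not fixed requirements):

```bicep
targetScope = 'subscription'

// Regions/locations permitted, passed in from the parameters file
param listOfAllowedLocations array

// Built-in policy definition IDs, also passed in from the parameters file
param allowedLocationsDefinitionId string
param allowedLocationsRgDefinitionId string
param auditLocationMatchDefinitionId string

// Variable defining the Policy assignment names
var policyNames = {
  locations: 'allowed-locations'
  locationsRg: 'allowed-locations-resource-groups'
  auditMatch: 'audit-location-match'
}

// One resource object per policy, combined into one deployment
resource allowedLocations 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
  name: policyNames.locations
  properties: {
    displayName: 'Allowed locations'
    policyDefinitionId: allowedLocationsDefinitionId
    parameters: {
      listOfAllowedLocations: {
        value: listOfAllowedLocations
      }
    }
  }
}

resource allowedLocationsRg 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
  name: policyNames.locationsRg
  properties: {
    displayName: 'Allowed locations for resource groups'
    policyDefinitionId: allowedLocationsRgDefinitionId
    parameters: {
      listOfAllowedLocations: {
        value: listOfAllowedLocations
      }
    }
  }
}

resource auditLocationMatch 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
  name: policyNames.auditMatch
  properties: {
    displayName: 'Audit resource location matches resource group location'
    policyDefinitionId: auditLocationMatchDefinitionId
  }
}
```

Note the `targetScope = 'subscription'` line; policy assignments like these sit at subscription level, which is why the deployment later in this post is a Subscription-level deployment.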

To explain the above:

  • We will pass an array of chosen regions/locations, and the strings of the built-in Policy definitions.
  • We use a variable to define the Policy names.
  • We deploy a resource object per policy, combining each of these into one Policy deployment.

The only real decision required here is which regions you want to allow. Some pointers from me: always include Global, otherwise some features, like Traffic Manager, cannot be deployed; and only include the regions you need now – you can always update later.

How do you decide on your regions? That’s probably a whole blog post by itself (adds idea to drafts), but my advice would be to choose local, and choose as mature a region as possible. This should offer you the best mix of features, performance, and reliability. Once that is decided, also allow the paired region, so BCDR is possible when needed.

Once you have your list completed, that is the detail we will use in the parameter file to pass our array of regions for deployment. Note these must be in the exact format required by ARM.
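For illustration, a parameters file along these lines would do the job. The region values are examples only, and the definition ID placeholders need replacing with the actual built-in definition IDs from your tenant (the parameter names are my assumed names, matching whatever your Bicep file declares):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "listOfAllowedLocations": {
      "value": [
        "northeurope",
        "westeurope",
        "global"
      ]
    },
    "allowedLocationsDefinitionId": {
      "value": "/providers/Microsoft.Authorization/policyDefinitions/<allowed-locations-definition-guid>"
    },
    "allowedLocationsRgDefinitionId": {
      "value": "/providers/Microsoft.Authorization/policyDefinitions/<allowed-locations-rg-definition-guid>"
    },
    "auditLocationMatchDefinitionId": {
      "value": "/providers/Microsoft.Authorization/policyDefinitions/<audit-location-match-definition-guid>"
    }
  }
}
```

The region strings must use the short ARM names (e.g. `northeurope`, not “North Europe”), which is the “exact format” caveat above.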

Now to deploy, simply pick your method! For this post, I am using PowerShell (as I am already logged in) and will complete a Subscription-level deployment, as this is a Policy. I will pass both files as command parameters; Azure will do the rest! The below should work for you – I will include a PS1 file in the repo too for reference – but adjust as per your files, tenant, etc.

New-AzSubscriptionDeployment -Name 'az-region-policy-deploy' -Location northeurope -TemplateFile 'C:\folderthatexists\region-restrict-policy.bicep' -TemplateParameterFile 'C:\folderthatexists\policy.parameters.json'

Once that runs successfully, you can verify all is correct via the Portal. Again, as this is Bicep, you can run it over and over and it will simply update the Policy if there are changes, meaning all it requires is an update of your location parameters to add or remove allowed regions.

And that’s it! As promised, here is a repo with the files for reference. Please note – as always, all code is provided for reference only and should not be used on your environment without full understanding and testing. I cannot take responsibility for your environment and use of code. Please do reach out if you have questions.

How to – Control VM SKUs with Azure Policy and Bicep

Updated: March 2024 – repo link now fully working!

A common Azure Policy requirement for many tenants that deal with Virtual Machine infrastructure is to restrict the sizes allowed for deployment. This can be for several reasons, the most common of which are below:

  • Cost prevention – remove high cost VM SKUs
  • Alignment – align to existing SKUs for reservation flexibility
  • Governance – align to a governance policy of VM family (e.g. AMD only)

For anyone who has created a Policy like this in the past, the experience seems quite straightforward. Azure provides an out-of-the-box template; you simply pick which VM SKUs are allowed from a drop-down. Several years ago this really was straightforward – pick X SKUs from a list of about 100. But that list has grown exponentially and only continues to get larger, which makes implementing the Policy, and modifying it, exceedingly cumbersome via the Portal.

So, one technical and accurate solution is to define this Policy via code. How you manage this (i.e. deployment, repo management, etc.) is not for this blog post, but is achievable with this as your base Policy. Two elements are required for this to work correctly:

  1. Bicep file to deploy Policy resource
  2. Parameters file for VM SKUs

Item 1 is simple. I’ve included the exact code below, and will link off to a repo with all of this at the end too.
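A sketch of that Bicep file, assigning the built-in “Allowed virtual machine size SKUs” policy (the parameter and assignment names are my own illustrative choices; verify the built-in definition ID against your tenant before using it):

```bicep
targetScope = 'subscription'

// VM SKUs permitted, passed in from the parameters file
param listOfAllowedSKUs array

// Built-in "Allowed virtual machine size SKUs" definition ID – confirm this GUID in your tenant
var vmSkuDefinitionId = '/providers/Microsoft.Authorization/policyDefinitions/cccc23c7-8427-4f53-ad12-b6a63eb452b3'

// Variable defining the Policy assignment name
var policyName = 'allowed-vm-skus'

// Single resource object, combining everything into one Policy deployment
resource allowedVmSkus 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
  name: policyName
  properties: {
    displayName: 'Allowed virtual machine size SKUs'
    policyDefinitionId: vmSkuDefinitionId
    parameters: {
      listOfAllowedSKUs: {
        value: listOfAllowedSKUs
      }
    }
  }
}
```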

To explain the above:

  • We will pass an array of chosen VM SKUs, and the string of the built-in Policy definition.
  • We use a variable to define the Policy name.
  • We deploy a single resource object, combining these into one Policy deployment.

Now, to get to the point where this can be deployed, we need to define our list of chosen VM SKUs. But to get that, we need a list of possible VM SKUs to filter down from. There are a couple of ways to achieve this, but to avoid the misery of formatting a hundred or so lines, here is how I did it.

First, not all VM SKUs are available in all regions, so this is our starting point. Figure out the region(s) in scope for this Policy, and work from there. For this example I will use North Europe.

Get-AzVMSize -Location 'northeurope'

The above gets us a big, long list of the VM SKUs available in North Europe via PowerShell. However, it’s a nightmare to use and format, so a small change and an export will help…

Get-AzVMSize -Location 'northeurope' | Select-Object Name | Export-Csv C:\folderthatexists\vmsku.csv

Now, that gives me a list of 500+ VM sizes. Again, not very manageable directly, but importantly, if you open that CSV file in VS Code, the entries are formatted exactly as you need for the JSON parameters file! You can simply copy and paste the lines needed. I actually found it quicker to remove the lines I didn’t want than to select those I did.

Once you have your list completed, that is the detail we will use in the parameter file to pass our array of SKUs for deployment.
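For illustration, the resulting parameters file takes this shape. The SKU names below are examples only; paste in your own filtered list, and match the parameter name to whatever your Bicep file declares (I am assuming `listOfAllowedSKUs` here):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "listOfAllowedSKUs": {
      "value": [
        "Standard_B2ms",
        "Standard_D2s_v5",
        "Standard_D4s_v5"
      ]
    }
  }
}
```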

Now to deploy, simply pick your method! For this post, I am using PowerShell (as I am already logged in) and will complete a Subscription-level deployment, as this is a Policy. I will pass both files as command parameters; Azure will do the rest! The below should work for you – I will include a PS1 file in the repo too for reference – but adjust as per your files, tenant, etc.

New-AzSubscriptionDeployment -Name 'az-sku-policy-deploy' -Location northeurope -TemplateFile 'C:\folderthatexists\vm-sku-policy.bicep' -TemplateParameterFile 'C:\folderthatexists\policy.parameters.json'

Once that runs successfully, you can verify all is correct via the Portal too. Again, as this is Bicep, you can run it over and over and it will simply update the Policy if there are changes, meaning all it requires is an update of your VM SKU parameters to add or remove allowed VMs.

And that’s it! As promised, here is a repo with the files for reference. Please note – as always, all code is provided for reference only and should not be used on your environment without full understanding and testing. I cannot take responsibility for your environment and use of code. Please do reach out if you have questions.

How to – Control your Network with Azure Policy

In many Azure environments, the network is core to the solution. As such, it is one of the most critical components to design correctly, with the required configuration relative to posture and security.

For example, you create a well-architected network with a focus on security and performance. It works well, but then changes occur outside of your original design – and it starts to come apart.

Azure Policy is one solution to ensure that this does not happen. For this specific topic, Azure Policy will focus on two areas:

  1. Enforce standards
  2. Systematically assess compliance

It can also action remediation using DINE (deploy-if-not-exists) policies. That may be your preference, but it’s not something I like at the network layer 🙂

So first, my preference is to create a core policy set derived from your core network design. In Azure Policy, this is policy definitions grouped as an initiative. Where possible, it is efficient to use existing built-in definitions; however, if required, you can create custom definitions and include them in your initiative too.

For this post, I have created a simple example initiative that includes several built-in policies based on some network and security requirements for an app architecture.

Within this set, you can see “Not allowed resource types”. Here I have included several public or connection-focussed resources such as:

  • Public IP addresses
  • VNET Peerings
  • Local Network Gateways
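As a sketch, the parameter value for the built-in “Not allowed resource types” definition would look something like this (the resource type strings correspond to the three items above; confirm the parameter name against the built-in definition in your tenant):

```json
{
  "listOfResourceTypesNotAllowed": {
    "value": [
      "Microsoft.Network/publicIPAddresses",
      "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
      "Microsoft.Network/localNetworkGateways"
    ]
  }
}
```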

The idea behind the above aligns with the purpose of Azure Policy: restrict and define the platform. This core initiative obviously focuses on network resource types, but the same basic principle applies elsewhere. Similar in approach is the ability to allow exclusions where necessary – for example, allowing VNET peering in a network resource group. However, don’t forget or ignore RBAC; it should, at minimum, complement your policy requirements.

While this example is quite simple, as I mentioned, you can layer in complexity via custom policies (creation tutorial here). One I have always liked is a policy to ensure that your subnets have the appropriate Route Table applied, so traffic is directed appropriately – for example, to a firewall. The example below is just an audit for subnets without any route table, but you can expand on it to check for a specific route table.

{
  "properties": {
    "displayName": "AINE Route Table for subnets in vnets",
    "policyType": "Custom",
    "mode": "All",
    "description": "Custom policy to audit if a route table exists on appropriate subnets in all vnets",
    "metadata": {
      "version": "1.1.0",
      "category": "Network"
    },

    "policyRule": {
      "if": {
        "allOf": [
          {
            "field": "type",
            "equals": "Microsoft.Network/virtualNetworks/subnets"
          },
          {
            "field": "name",
            "notEquals": "AzureBastionSubnet"
          },
          {
            "field": "Microsoft.Network/virtualNetworks/subnets/routeTable.id",
            "exists": false
          }
        ]
      },
      "then": {
        "effect": "audit"
      }
    }
  }
}

The above is stored in a repo on GitHub for you to re-use if needed. And don’t forget, if you are using VS Code, to install the Azure Policy extension to make life a bit simpler!