So first, what is Bicep? If you haven’t heard of it, I have to ask – how!? Microsoft’s new deployment language for Azure has made waves since its launch. Continuously improving and taking in a tonne of community feedback, it is an interesting offering from Microsoft. To be honest, at first I wasn’t convinced by Bicep. I was slightly confused as to why it was needed; I had put in the time to understand and use ARM templates. I don’t find them super confusing, but I do understand they can be frustrating and quite complex.
That exact point is what Bicep aims to simplify. It uses declarative syntax to deploy Azure resources. This provides concise syntax, reliable type safety, and support for code reuse. Bicep is a transparent abstraction over ARM template JSON and doesn’t lose any of the JSON template capabilities. In plain English, that means that Bicep hides the complexity of ARM templates. Perhaps think of it like shorthand templates 🙂
During deployment, the Bicep CLI converts a Bicep file into ARM template JSON. This means that Bicep has full feature alignment out of the box with all resource types, API versions, and properties that are valid in an ARM template.
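Because that conversion is transparent, you can hand a Bicep file straight to the standard deployment cmdlets and let the tooling transpile it for you. A minimal sketch, assuming you have the Bicep CLI installed alongside Az PowerShell; the file and resource group names are placeholders:

```powershell
# Requires the Az PowerShell module and the Bicep CLI on the PATH.
# 'main.bicep' and 'rg-bicep-lab' are placeholder names for this sketch.
Connect-AzAccount

New-AzResourceGroup -Name 'rg-bicep-lab' -Location 'northeurope'

# Az transpiles the Bicep file to ARM template JSON before submitting the deployment.
New-AzResourceGroupDeployment `
  -ResourceGroupName 'rg-bicep-lab' `
  -TemplateFile './main.bicep'
```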
This simplicity, combined with a common need to create a small IaaS test area, is what led me to create this post. Below I am going to outline a version of the deployment I use to create a quick and simple test environment, all documented and deployed via Bicep.
First up, what will this environment contain? I’m including resources I find helpful, with the configurations I most commonly need. I am leaving out certain resources that are less cost-effective or less frequently required (DDoS Protection Standard, for example), and I will allow for conditional deployment of some that I just don’t want to wait on every time. I am looking at you, Virtual Network Gateway 🙂
Subnets – Bastion, Gateway, Firewall, Windows, Linux
Windows VM – Server 2019
Ubuntu VM – 20.04-LTS
Azure Firewall – Standard | Premium – Conditional based on Parameter
VNG – Conditional based on Parameter
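To give an idea of how those conditional resources are driven, here is a sketch of a deployment call that toggles them via parameters. The parameter names (deployFirewall, deployVpnGateway) are hypothetical; the real names are defined in the Bicep files in the repo:

```powershell
# Sketch only - 'deployFirewall' and 'deployVpnGateway' are hypothetical parameter names,
# check the Bicep files in the repo for the actual parameters they expose.
New-AzResourceGroupDeployment `
  -ResourceGroupName 'rg-bicep-lab' `
  -TemplateFile './main.bicep' `
  -TemplateParameterObject @{
    deployFirewall   = $true    # include Azure Firewall
    deployVpnGateway = $false   # skip the VNG to keep the deployment quick
  }
```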
So why does Bicep help me with the above? Genuinely, I just never got the time to create the same in ARM. When learning some Bicep, I decided to use it as an opportunity to create something useful for myself.
All of the above is written in Bicep and stored in a public repo here. This includes a YAML pipeline that allows you to test and, if successful, deploy the environment to Azure using Azure DevOps. For more on that test stage, see my other post here.
You can see a high-level view of the resources that can be deployed below, which I have pulled from the Visualiser function in VS Code:
Without the VNG included, you should see the entire environment built in under seven minutes.
Adding the VNG, however, will most commonly increase this to at least 20 minutes.
As always, if there are any questions or feedback, get in touch! Happy Bicep-ing! 💪
However, I noticed that it includes Azure DDoS Protection on the Standard tier and does not carry a warning about the cost of its pricing model. This is the number one prohibitive factor I see with customers when discussing enabling DDoS Standard. While the resources provisioned as a whole are by no means “cheap”, DDoS is the only one that carries a monthly, as opposed to hourly, rate.
Don’t get me wrong, this isn’t a negative post about DDoS Standard; I think it’s a good service. Take a look at the benefits below that it offers over the Basic tier:
A list of the features and functionality it provides is also available here on Docs.
As I said though, the cost is prohibitive, but what does that mean? Well, take a look at the breakdown of how the service is charged versus something like Azure Firewall (also included in the lab).
Notice the main difference? It’s how the run rate is calculated: Firewall is per hour and DDoS is per month. This is restrictive, especially on a platform often advertised as offering “per-minute billing”. However, if you need the services offered, at least you have a set price. The main challenge is for companies willing to pilot this who are then met with this cost, and similarly for labs that include it in a demo you may only need for a day or even a couple of hours.
Hat tip to Peter De Tender, who spotted a change in the FAQs stating that if the service is only active for a portion of time, you will receive a pro-rated bill. I haven’t seen this in practice yet, but if it’s public information you should be able to count on it!
So, based on cost alone, perhaps that is justification to block DDoS Standard from being deployed in your subscription? If that is your requirement, it is easily met via Azure Policy. The built-in policy below is perfect for this job.
You can then simply choose the DDoS options as your parameters to prevent the service from being activated and avoid a cost shock!
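As a rough sketch of that assignment in PowerShell, assuming the built-in “Not allowed resource types” definition is the one you settle on (verify the exact definition and parameter names in your own tenant before relying on this):

```powershell
# Sketch: assign a built-in policy that blocks DDoS protection plans from being created.
# The display name and parameter name below are assumptions - confirm them with
# Get-AzPolicyDefinition in your tenant.
$definition = Get-AzPolicyDefinition -Builtin |
  Where-Object { $_.Properties.DisplayName -eq 'Not allowed resource types' }

New-AzPolicyAssignment `
  -Name 'deny-ddos-standard' `
  -Scope '/subscriptions/<subscription-id>' `
  -PolicyDefinition $definition `
  -PolicyParameterObject @{
    listOfResourceTypesNotAllowed = @('Microsoft.Network/ddosProtectionPlans')
  }
```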
Conversely, regardless of cost, perhaps you need it as part of a compliance or regulatory requirement. If so, Azure Policy can help here too! There are two built-in policies related to DDoS enablement; however, if you need to remediate rather than just audit, choose the policy below, ensuring you have a DDoS Protection Plan created in advance.
The final point to note here is that DDoS is recommended as part of the Azure Security Benchmark if you are using Azure Security Center on the Defender tier. It requires Standard to be enabled to meet the control requirement, so be aware of this cost before committing to adhere to that standard!
As always, if there are any questions or feedback please get in touch! And remember, keep using #AZNet for your Azure Networking content!
If you work with ARM templates and Azure DevOps, you know that there is already tight integration between the two, giving you a pretty simple method of deploying a template via a YAML pipeline just by plugging in a few details. However, as your pipelines grow in complexity, or perhaps importance, the need for additional features like triggers, filters, and testing becomes apparent.
Familiarity with ARM templates most likely means you are aware of the test toolkit; if not, here is a link to the docs page explaining what it is and how it works.
This post presumes you have knowledge of ARM template deployment and Azure DevOps Pipelines, and have a Project and Repo set up. However, as with any code on a blog, please be careful and use a sandbox first; I cannot help with your production environment. 🙂 If you’re new to this, there is a good tutorial here by Microsoft.
So, with your ARM template ready to go, you can deploy in a single task from a Pipeline. However, if this fails it can cause problems. Generally, more complex Pipelines will include the use of Stages and Jobs. In this example, we’re going to use a multi-stage Pipeline that will validate, test, report test results and if all successful, deploy our ARM template to Azure.
First, let’s take a look at the pipeline at a high level:
This breaks out as follows:
Two stages – Test and Deploy
Two jobs in Test
A single job in Deploy
Within each job we may have multiple tasks and we will see that a bit later.
First up, let’s look at the first job in the Test Stage, “testARM”. This job includes multiple tasks, some built-in and some imported. The first thing to address here is the task that is imported.
This takes the ARM Template Test Toolkit and allows you to import its functionality into Azure DevOps. My personal experience is with Sam Cogan’s build linked here, although there may be others. Once imported, you can then use it via the task assistant:
As an added bonus, the extension supports both ARM and Bicep file testing. I have both included in the repo, but this post focuses on ARM. You can check the repo for Bicep details, or get in touch!
So with our task imported, let’s look at our code:
First up, we’re going to validate the template using the built-in ARM deploy task, switching the deployment mode to Validation.
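The same check the task performs can also be run locally before you push anything, which is handy while iterating. A sketch with hypothetical file names:

```powershell
# Local equivalent of the pipeline's validation-mode deployment (hypothetical file names).
Test-AzResourceGroupDeployment `
  -ResourceGroupName 'rg-arm-test' `
  -TemplateFile './azuredeploy.json' `
  -TemplateParameterFile './azuredeploy.parameters.json'
```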
This is a lightning quick test to ensure everything is OK with your template. Next, we’re going to use our imported task to run multiple checks against the template using the approved toolkit. This needs to run on a Windows pool by the way!
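If you want to see what the extension is doing under the hood, the toolkit itself ships as a PowerShell module you can run locally too. A sketch, assuming you have downloaded the arm-ttk module; the paths are placeholders:

```powershell
# The ARM Template Test Toolkit is distributed as a PowerShell module (arm-ttk).
# Paths below are placeholders for this sketch.
Import-Module './arm-ttk/arm-ttk.psd1'

# Runs the toolkit's test suite against the templates in the given folder.
Test-AzTemplate -TemplatePath './templates'
```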
Back in the pipeline, note we have two tasks here: the first runs the tests and outputs the results. The second is a built-in task to publish your results to Azure DevOps, giving users a graphical representation in the portal as well as test history. The condition ensures the publish task will run regardless of whether the previous task fails.
Next, we move onto the second job of our Test Stage.
This runs an Azure PowerShell task, submitting your template using the built-in What-If result cmdlet. I really like this one; it outputs full detail of the changes a deployment would make. If you haven’t tried it with an ARM template, you really should. Below is a sample output from my pipeline:
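For reference, the underlying cmdlet (presumably Get-AzResourceGroupDeploymentWhatIfResult) can also be run locally outside the pipeline; a sketch with hypothetical file names:

```powershell
# Produces the same what-if change summary the pipeline task prints (hypothetical file names).
Get-AzResourceGroupDeploymentWhatIfResult `
  -ResourceGroupName 'rg-arm-test' `
  -TemplateFile './azuredeploy.json' `
  -TemplateParameterFile './azuredeploy.parameters.json'
```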
Now, if all the above passes, the Pipeline will move on to the next Stage, Deploy. I’ve added a DependsOn to ensure this is the case. If the Test Stage doesn’t complete, this Stage will be skipped.
This again uses the built-in ARM deployment task with a deployment mode of Incremental. The logic here is that if all of my Test Stage passes, I have a high percentage chance of my template deploying fully without issue.
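The deployment itself boils down to the same call you would run by hand, just with the mode set explicitly; a sketch with hypothetical names:

```powershell
# Incremental mode only adds or updates resources defined in the template;
# anything else already in the resource group is left untouched. Names are placeholders.
New-AzResourceGroupDeployment `
  -ResourceGroupName 'rg-arm-test' `
  -TemplateFile './azuredeploy.json' `
  -TemplateParameterFile './azuredeploy.parameters.json' `
  -Mode Incremental
```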
The above has worked well for me in testing and, with some variations, in production environments. Feel free to experiment as needed here. One thing I have learned is that there is an ever-evolving set of methods and best practices. Alignment, in my opinion, should be to what works for you.
As always, if there are any questions or suggestions, get in touch!
Back in March 2021 at Ignite, Microsoft announced a new network feature in Preview, Azure Route Server. As the name suggests, this service is designed to act as a routing service within your Azure footprint. Specifically, it should simplify the management of dynamic routing between your network virtual appliance (NVA) and your Virtual Network. It does so by allowing you to exchange routes via BGP between the NVA and the Azure Software Defined Network (SDN) in your Virtual Network. As it’s a fully managed service, you no longer need to implement and maintain multiple route tables.
This service has just gone GA, and as a result, I thought I would share some thoughts relative to its implementation and functionality.
A note: if you’re on a CSP subscription, you will see an error in the portal if you try to deploy. However, you can work around this by using PowerShell or any of the other deployment options.
However, the key to understanding this service is having familiarity with implementing an NVA in an environment without it. So, say we build a Palo Alto, or any vendor’s NVA, in our VNET. It is a flat VNET with several subnets, and we want all our traffic to route via the NVA for filtering. Without Route Server, we have to implement one or more route tables on the subnets and manually manage route modifications on the NVA, meaning a lot of admin overhead and room for potential user error.
Route Server removes this requirement, which believe me, on complex NVA hub-spoke designs is very welcome. The service leverages BGP to expose the system routing table of the VNET, allowing the NVA to write to it directly. Similarly, the NVA can learn routes from the VNET as they update, for example, when a peering is added.
Creation of the service itself is very simple: a dedicated subnet of /27 or larger named RouteServerSubnet and a Public IP, essentially a similar creation process to Azure Bastion. Guides for deployment are well documented and include IaC options (although no Bicep example as yet).
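If you’re taking the PowerShell route (for example, to get around the CSP portal error mentioned earlier), the build boils down to the subnet, the Public IP, and the Route Server resource. A sketch with placeholder names; New-AzRouteServer is relatively new, so double-check its parameter names against your installed Az.Network version:

```powershell
# Placeholder names throughout. Verify New-AzRouteServer's parameters with Get-Help,
# as they have shifted between Az.Network versions.
$vnet = Get-AzVirtualNetwork -Name 'vnet-lab' -ResourceGroupName 'rg-routeserver'

# Route Server needs a dedicated subnet named RouteServerSubnet, /27 or larger.
Add-AzVirtualNetworkSubnetConfig -Name 'RouteServerSubnet' -AddressPrefix '10.0.255.0/27' -VirtualNetwork $vnet
$vnet   = Set-AzVirtualNetwork -VirtualNetwork $vnet
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'RouteServerSubnet' -VirtualNetwork $vnet

# A Standard SKU Public IP is also required.
$pip = New-AzPublicIpAddress -Name 'pip-routeserver' -ResourceGroupName 'rg-routeserver' `
  -Location 'northeurope' -AllocationMethod Static -Sku Standard

New-AzRouteServer -RouteServerName 'rs-lab' -ResourceGroupName 'rg-routeserver' `
  -Location 'northeurope' -HostedSubnet $subnet.Id -PublicIP $pip
```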
Once Route Server is built, you then peer with and complete configuration as required on your NVA(s). Route Server advertises an ASN and IPs from the dedicated subnet for this, as below.
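On the Azure side, the peering is a single call per NVA. A sketch; the ASN and IP of the appliance are placeholders, and the cmdlet parameters are worth confirming with Get-Help Add-AzRouteServerPeer:

```powershell
# Registers the NVA as a BGP peer of the Route Server (placeholder ASN and IP).
Add-AzRouteServerPeer -PeerName 'nva-01' -PeerAsn 65001 -PeerIp '10.0.1.4' `
  -RouteServerName 'rs-lab' -ResourceGroupName 'rg-routeserver'
```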
I’ve only mentioned NVAs to this point, but Route Server also works with Virtual Network Gateways, supporting both VPN and ExpressRoute. Integration here is even tighter: you don’t need to configure the peering, you just enable a switch within Route Server. This switch is called Route Exchange, and when enabled, your NVA and ExpressRoute will learn each other’s routes.
One thing I have found so far is that this seems more like a service that should be enabled on the VNET itself, like DNS or Service Endpoints, rather than a resource you have to build and configure. Having it “built into” every VNET would seem like the better option to me, and would definitely be cleaner.
Also, there doesn’t appear to be any integration offered with Azure Firewall as yet, however, I want to dig a bit more on that and will update this post accordingly.
Perhaps someday we will get a VNET Version 2.0 which includes the likes of Route Server, Bastion etc all within the one blade and configuration item!
Until then, however, if you want to try out Route Server, the absolute best option is to follow the Docs tutorial, which covers everything except the Virtual Network Gateway, although you could add that yourself manually if required, as I did (ping me if you need help). The full tutorial is here; it is a really clean implementation and removal, which gives you the ability to test each feature.
At the end of July, Microsoft announced the general availability of geomatching via Custom Rules in Web Application Firewall. This is a feature I am quite fond of, and it is excellent at reducing your attack surface. Thankfully, it’s also quite simple to implement!
First, what is Web Application Firewall (WAF)? Well, it’s a service that provides protection for your web apps from common exploits and vulnerabilities. It can be deployed with Azure Application Gateway, Azure Front Door, and Azure Content Delivery Network (CDN is in preview for now). WAF allows for central management, meaning you can react to threats faster, instead of securing each individual web application. There are a couple of specific differences depending on the service you attach a WAF policy to, and you can read about them here.
Next, what are Custom Rules? As part of each WAF policy, you have to configure a set of standard settings, such as Prevention or Detection mode, and managed rules (OWASP). However, you can also create your own rules, and in WAF these are simply called Custom Rules (CR). A CR is made up of one or more conditions followed by an action. All CRs for a WAF policy are match rules. You can have multiple CRs per policy, and they are processed in priority order in a range of 1-100, with 1 being the highest priority, i.e. processed earliest.
Now that we have the foundation, let’s look at the geomatch option. This is available within your CR as a match Condition.
Create your CR with an appropriate name and priority, then choose ‘Geo location’ from the Match type drop-down as above. Next, you’ll want to ensure you choose RemoteAddr as the match variable, and decide what logic you want to apply. By logic, I mean the pattern that will fire the rule. In this example, I want all traffic except Ireland blocked, so I will choose the Operation ‘Is not’, then location Ireland, then Deny. If I wanted all traffic allowed and Ireland blocked, I would simply choose the Operation ‘Is’. I recommend figuring out your pattern first, then working your way through the final section of the CR.
So, based on my example with Ireland, my CR now looks like this:
Now, most commonly I would see a geomatch policy applied to an Application Gateway, but it can be applied to Front Door also. One of the nice features of a WAF policy when using it with Application Gateway is that you get association choices. This means you can be as granular as having different policies per listener, or even per path rule, giving you huge flexibility when combined with CRs.
Finally, I have obviously shown you how to create this CR in the portal, but there are other methods, like PowerShell. These can be very handy if you need to recreate a geomatch CR, especially if you have multiple conditions with many countries (there is a max of 10 per condition). Note that if you’re using these methods, you need to use country codes, like IE, and these are all listed here.
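As a sketch of that PowerShell route, using my Ireland example with placeholder resource names (the cmdlets live in Az.Network; verify the current parameter sets before relying on this):

```powershell
# Build the geomatch condition: block anything whose client IP is NOT in Ireland.
# Resource names are placeholders for this sketch.
$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr

$condition = New-AzApplicationGatewayFirewallCondition `
  -MatchVariable $variable `
  -Operator GeoMatch `
  -MatchValue 'IE' `
  -NegationCondition $true   # the 'Is not' Ireland logic

$rule = New-AzApplicationGatewayFirewallCustomRule `
  -Name 'GeoBlockNonIE' `
  -Priority 10 `
  -RuleType MatchRule `
  -MatchCondition $condition `
  -Action Block

# Attach the custom rule to a new WAF policy (or add it to an existing one).
New-AzApplicationGatewayFirewallPolicy `
  -Name 'waf-geo-policy' `
  -ResourceGroupName 'rg-waf' `
  -Location 'northeurope' `
  -CustomRule $rule
```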
And that’s it! You have your CR created and associated. WAFs take effect pretty quickly so you should be able to test within a couple of minutes. If you want to perform a simple verify, you can switch the action from Deny traffic to Allow and retest.
Finally, a nice tip once you’re happy with your configuration is to set up an alert against your WAF should a change be made or, worse, should it be deleted (apply resource locks, people). This runs as a signal alert from Azure Monitor against an action group, as per standard.
That’s it for this post, as always, if you have any questions, please get in touch!