What is – Azure Route Server

Back in March 2021 at Ignite, Microsoft announced a new network feature in Preview, Azure Route Server. As the name suggests, this service is designed to act as a routing service within your Azure footprint. Specifically, it should simplify the management of dynamic routing between your network virtual appliance (NVA) and your Virtual Network. It does so by allowing you to exchange routes via BGP between the NVA and the Azure Software Defined Network (SDN) in your Virtual Network. As it’s a fully managed service, you no longer need to implement and maintain multiple route tables.

This service has just gone GA, and as a result, I thought I would share some thoughts on its implementation and functionality.

First up, a little bit more on how it works. Microsoft have an excellent graphic to show you the functionality at a high-level:

Diagram showing Azure Route Server configured in a virtual network.

However, the key to understanding this service is having familiarity with implementing an NVA in an environment without it. So, say we build a Palo Alto, or any vendor NVA, in our VNET. It is a flat VNET with several subnets, and we want all our traffic to route via the NVA for filtering. Without Route Server, we have to implement one or more Route Tables on the subnets and manually manage route modifications on the NVA, which means a lot of admin overhead and room for user error.

Route Server removes this requirement, which, believe me, is very welcome on complex NVA hub-and-spoke designs. The service leverages BGP to expose the system routing table of the VNET, allowing the NVA to write to it directly. Similarly, the NVA can learn routes from the VNET as they update, for example, when a peering is added.

Creation of the service itself is very simple: you need a dedicated subnet of /27 or larger named RouteServerSubnet and a Public IP. Essentially a similar creation process to Azure Bastion. Guides for deployment are well documented, and include IaC options (although no Bicep example as yet).
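
If you prefer to script it, a rough sketch of that deployment with the Azure CLI looks like this (the resource group, resource names and subscription ID below are placeholders for your own values, and the RouteServerSubnet plus a Standard Public IP need to exist first):

az network routeserver create \
  --resource-group rg-network \
  --name rs-hub \
  --public-ip-address pip-rs-hub \
  --hosted-subnet "/subscriptions/<subscription-id>/resourceGroups/rg-network/providers/Microsoft.Network/virtualNetworks/vnet-hub/subnets/RouteServerSubnet"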

Once Route Server is built, you then peer with your NVA(s) and complete configuration as required. Route Server advertises an ASN and IPs from the dedicated subnet for this, as below.
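
For reference, Route Server always presents ASN 65515, and you can read its two peer IPs off the overview blade once it is deployed. Scripting the Azure side of the peering is a one-liner; the NVA name, IP and ASN below are made-up examples, and the NVA side is configured per your vendor's BGP documentation:

az network routeserver peering create \
  --resource-group rg-network \
  --routeserver rs-hub \
  --name peer-nva-01 \
  --peer-ip 10.0.1.4 \
  --peer-asn 65001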

I’ve only mentioned NVAs to this point, but Route Server also works with Virtual Network Gateways, supporting both VPN and ExpressRoute. Integration here is even tighter: you don’t need to configure the peering, you just enable the switch within Route Server. This switch is called Route Exchange, and when enabled, your NVA and ExpressRoute will learn each other's routes.
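
If you'd rather flip that switch from the CLI than the portal, I believe it surfaces as the branch-to-branch flag on the routeserver commands; treat the snippet below as a sketch and confirm the flag name against the current docs:

az network routeserver update \
  --resource-group rg-network \
  --name rs-hub \
  --allow-b2b-traffic true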

One thing I have found so far is that this seems more like a service that should be enabled on the VNET itself, like DNS or Service Endpoints, rather than a resource you have to build and configure. Having it “built into” every VNET would seem like the better option to me, and would definitely be cleaner.

Also, there doesn’t appear to be any integration offered with Azure Firewall as yet, however, I want to dig a bit more on that and will update this post accordingly.

Perhaps someday we will get a VNET Version 2.0 which includes the likes of Route Server, Bastion etc all within the one blade and configuration item!

Until then however, if you want to try out Route Server, the absolute best option is to follow the Docs tutorial, which covers everything except Virtual Network Gateway, although you could add that on yourself manually if required, as I did (ping me if you need help). The full tutorial is here; it’s a really clean implementation and removal that lets you test each feature.

One thing that did catch me out here was advertising the default route. I realised my mistake and corrected it with a UDR, and I was glad to see that this configuration issue, and others, are already documented.
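
My exact route isn't important here, but for reference a UDR is nothing more than an entry like this (the route table name, prefix and next hop IP are placeholders, not my actual fix):

az network route-table route create \
  --resource-group rg-network \
  --route-table-name rt-spokes \
  --name default-via-nva \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4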

Before deploying to production, consider the FAQ to ensure you understand limits and supported scenarios. Peer and route limits are specifically important here for larger designs!

As always, if you have any questions, get in touch!

How To – Troubleshoot Azure Virtual Network Peerings

Recently, Microsoft announced a new preview feature for Virtual Network Peerings. This preview allows you to resize the address space of a peered virtual network. Until this preview, any resize operation on a Virtual Network with an active peering would fail. The only previous workaround was to delete the peering, complete the required address space modifications, then recreate the peering. While this can be completed quite quickly, it does cause downtime. This preview feature is definitely a welcome addition.
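
If you want to try the preview, my understanding from the docs is that you update the address space and then re-sync the peering rather than recreating it; in CLI terms, something roughly like this (names and prefixes are examples; check the preview docs for exactly which side of the peering needs the sync):

az network vnet update \
  --resource-group rg-network \
  --name vnet-a \
  --address-prefixes 10.1.0.0/16 10.9.0.0/24

az network vnet peering sync \
  --resource-group rg-network \
  --vnet-name vnet-a \
  --name peer-a-to-hub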

Address space change error example due to peering

So, while there is a new feature available, it’s still in preview, so let’s have a look at the current state of Virtual Network Peering. Starting off with what exactly they are. A peering allows you to seamlessly connect two Virtual Networks. There are a couple of the usual caveats, like non-overlapping address spaces, but configuration is as simple as can be. One item of note: peerings are not transitive. That means that in the graphic below, even though the Hub Virtual Network is peered to both A and B, traffic cannot traverse from A directly to B or vice versa.

virtual network peering transit
Hub/spoke network example with non-transitive peering

Using peerings allows for the creation of well architected, and secure, network footprints in Azure. If you’re new to networking on Azure, and/or new to peerings, I really recommend reading this routing page on Docs; it gives clear examples and explanations of common scenarios. And for more specifically on peering, check out this page on Docs. Both will help set the context for some of the areas we are going to cover in this post, which may otherwise be a challenge if you’re not familiar with them.

Configuration

For all explanations we will use the following architecture:

Hub/spoke virtual networks with peerings

The first step to create a peering is also the first place you should check when troubleshooting. Often, it can simply be a missed setting in this configuration that causes your issue. A peering contains three elements of configuration, and they are repeated on both sides, leaving you with six settings in total that can impact your peering. When creating a peering, we have the following choices. These choices impact how your peering will function and should be your first check, every time, when you have an issue with a peering.

Let’s break those down a little. First, “Traffic to remote virtual network”: this enables communication between the virtual networks and allows resources connected to either virtual network to communicate with each other with the same bandwidth and latency as if they were connected to the same virtual network. So, you may ask, why is there a block option if I am enabling a peering!? Good question! It’s mostly there to facilitate temporary blocks, saving you from deleting and recreating the peering. How this works is slightly complicated, as it’s based on manipulating the “VirtualNetwork” service tag rather than explicitly blocking traffic.

If you want to block all traffic – delete the peering.

The VirtualNetwork service tag for network security groups encompasses the virtual network and peered virtual network when this setting is Enabled. Read more about network security group service tags here. When this setting is disabled, traffic doesn’t flow between the peered virtual networks by default; however, traffic may still flow if explicitly allowed through a network security group rule that includes the appropriate IP addresses or application security groups.
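
In practice, that means even with the setting disabled you can still allow a specific flow with an ordinary NSG rule keyed on address prefixes rather than the VirtualNetwork tag. A rough example, assuming the peered virtual network uses 10.1.0.0/16 and the NSG sits on the subnets in the other virtual network (all names and prefixes here are mine, not from the docs):

az network nsg rule create \
  --resource-group rg-network \
  --nsg-name nsg-vnet-b \
  --name AllowFromVnetA \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes 10.1.0.0/16 \
  --destination-address-prefixes '*' \
  --destination-port-ranges '*'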

Next, “Traffic forwarded from remote virtual network”: when enabled, this allows traffic that didn’t originate from the peered virtual network. This is best explained using a routing example; take our three virtual networks, VNET A is peered to VNET B and to VNET C, however, B and C are not peered. Don’t forget, peerings are not transitive, so without additional configuration, B and C cannot communicate. We can create a service chain by using VNET A as the next hop for B to reach C and vice versa; we do this by using route tables and a network virtual appliance, such as a router or firewall. However, for that traffic to be allowed to use the peering, we need the forwarded traffic flag enabled on BOTH sides of the peering.
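
The route table half of that service chain is nothing exotic. Assuming VNET C uses 10.3.0.0/16 and the NVA in VNET A sits at 10.1.1.4 (both made-up values), the route you associate with B's subnets looks like this, with a mirror-image route on the C side pointing at B's prefix:

az network route-table route create \
  --resource-group rg-network \
  --route-table-name rt-vnet-b \
  --name to-vnet-c-via-nva \
  --address-prefix 10.3.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.1.1.4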

Finally, “Virtual network gateway or Route Server”: this can be enabled depending on whether those services exist in your peered virtual networks. Using our architecture, creating a peering between A and B: on the A side, we could choose “Use this virtual network’s gateway…” and on B “Use the remote virtual network’s gateway…”. This would allow B to use the connections terminated on the gateway in A. Without this enabled, for example on our peering between A and C, traffic cannot use the connections on the gateway.
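
If you build your peerings with the CLI, those toggles map directly onto flags. For the A to B pair with gateway transit, a sketch looks roughly like this (virtual network and peering names are examples):

az network vnet peering create \
  --resource-group rg-network \
  --vnet-name vnet-a \
  --name peer-a-to-b \
  --remote-vnet vnet-b \
  --allow-vnet-access \
  --allow-forwarded-traffic \
  --allow-gateway-transit

az network vnet peering create \
  --resource-group rg-network \
  --vnet-name vnet-b \
  --name peer-b-to-a \
  --remote-vnet vnet-a \
  --allow-vnet-access \
  --allow-forwarded-traffic \
  --use-remote-gateways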

Important note here: you cannot use a remote gateway across a peering if both virtual networks have their own gateways. You can peer and use the other two options, but not the gateway.

One often overlooked feature is the ability to use the VNG in A to route traffic from B to C. It’s a supported configuration, requiring route tables etc., and isn’t very well documented, but it works! Ping me if you need help with it.

Constraints

Now, if you have confirmed that all of these settings are correct, but you are still facing issues, there are some common constraints that may be the cause of your trouble.

  • Problem Resources – There is a set of resources that do not work across global peering – listed here.
  • Classic Virtual Network – while you can peer an ARM and a classic virtual network, you cannot peer two classics. Upgrade those VNETs to ARM!
  • Peering status – Every peering has a status when viewed within the virtual network. If it shows Initiated, traffic won't flow. Check both sides of the peering and update configuration until it shows Connected (a quick CLI check is sketched after this list).
  • DNS – If you’re using the default DNS with your virtual network, you cannot resolve names in a peered virtual network. You will need custom DNS, or Azure Private DNS.
  • P2S – If you have P2S configured and then add a peering, you must download the client config again to pick up the peered virtual network routes.
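
A quick way to eyeball those statuses, rather than clicking into each peering, is to list them per virtual network (names are examples):

az network vnet peering list \
  --resource-group rg-network \
  --vnet-name vnet-a \
  --query "[].{Name:name, State:peeringState}" \
  --output table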

Scenarios

While 99% of the issues I have seen are resolved with some combination of the above, there are a few specific scenarios that require a slightly different focus, Microsoft have documented these all on one page.

Finally

Don’t forget, peerings are all about the system route table of the virtual networks, that’s how they function. Understanding and validating your route tables is key to successful troubleshooting. Having said that, if you are ever stuck, please get in touch, I will do my best to help!

How To – Enable Web Application Firewall Geomatch Custom Rules

At the end of July, Microsoft announced the general availability of geomatching via Custom Rules in Web Application Firewall. This is a feature I am quite fond of, and is excellent at reducing your attack surface. Thankfully, it’s also quite simple to implement!

First, what is Web Application Firewall (WAF)? Well, it’s a service that provides protection for your web apps from common exploits and vulnerabilities. It can be deployed with Azure Application Gateway, Azure Front Door, and Azure Content Delivery Network (CDN is in preview for now). WAF allows for central management, meaning you can react to threats faster, instead of securing each individual web application. There are a couple of specific differences depending on the service you attach a WAF policy to, and you can read about them here.

WAF overview
Nice graphical explainer of WAF from Microsoft

Next, what are Custom Rules? As part of each WAF policy, you configure core settings, such as Prevention or Detection mode, and managed rule sets (OWASP). However, you can also create your own rules, and in WAF these are simply called Custom Rules (CR). A CR is made up of one or more conditions followed by an action. All CRs for a WAF policy are match rules. You can have multiple CRs per policy and they are processed in order of priority in a range of 1-100, with 1 being the highest priority, or processed earliest.

Now that we have the foundation, let’s look at the geomatch option. This is available within your CR as a match Condition.

Create your CR with an appropriate name and priority, then choose ‘Geo location’ from the Match type drop down as above. Next, you’ll want to ensure you choose RemoteAddr as the match variable, and decide what logic you want to apply. By logic I mean the pattern that will fire the rule. In this example, I want all traffic except Ireland blocked. So I will choose the Operation ‘Is not’, then location Ireland, then Deny. If I wanted all traffic allowed and Ireland blocked, I would simply choose the Operation ‘Is’. I recommend figuring out your pattern then working your way through the final section of the CR.

So, based on my example with Ireland, my CR now looks like this:

Now most commonly, I would see a geomatch policy applied to an Application Gateway, but it can be applied to Front Door also. One of the nice features of a WAF policy when using it with Application Gateway is that you get association choices. This means you can be as granular as having different policies per listener, or even per path rule, giving you huge flexibility when combined with CRs.

Finally, I have obviously shown you how to create this CR in the portal, but there are other methods, like PowerShell or the CLI. These can be very handy if you need to recreate a geomatch CR, especially if you have multiple conditions with many countries (there is a max of 10 per condition). Note, if using these methods, you need to use country codes, like IE, and these are all listed here.
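
As a sketch of what that looks like outside the portal, here is my Ireland example via the Azure CLI. The resource group and policy names are placeholders, GeoMatch with negate set to true mirrors the 'Is not' operator, and Block is the CLI's name for the portal's Deny action:

az network application-gateway waf-policy custom-rule create \
  --resource-group rg-waf \
  --policy-name waf-policy-prod \
  --name GeoBlockNonIE \
  --priority 10 \
  --rule-type MatchRule \
  --action Block

az network application-gateway waf-policy custom-rule match-condition add \
  --resource-group rg-waf \
  --policy-name waf-policy-prod \
  --name GeoBlockNonIE \
  --match-variables RemoteAddr \
  --operator GeoMatch \
  --values "IE" \
  --negate true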

And that’s it! You have your CR created and associated. WAFs take effect pretty quickly so you should be able to test within a couple of minutes. If you want to perform a simple verification, you can switch the action from Deny traffic to Allow and retest.

Finally, a nice tip when you’re happy with your configuration is to set up an alert against your WAF policy should a change be made or, worse, the policy be deleted (apply resource locks, people). This runs as a standard Azure Monitor alert on an activity log signal, targeting an action group.
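
This kind of alert can also be scripted. The sketch below is how I would approach it with the CLI, but the scope, names and action group are all placeholders and the exact parameters are worth double-checking against the az monitor docs before relying on it:

az monitor activity-log alert create \
  --resource-group rg-waf \
  --name alert-waf-policy-change \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-waf/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/waf-policy-prod" \
  --condition category=Administrative

az monitor activity-log alert action-group add \
  --resource-group rg-waf \
  --name alert-waf-policy-change \
  --action-group "/subscriptions/<subscription-id>/resourceGroups/rg-waf/providers/microsoft.insights/actionGroups/ag-network-ops"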

That’s it for this post, as always, if you have any questions, please get in touch!

AZ-500 Microsoft Azure Security Technologies – Study Guide

Updated February 2021

Azure has a single security-focused exam, AZ-500 Microsoft Azure Security Technologies. Passing this one exam will allow you to earn the Microsoft Certified: Azure Security Engineer Associate certification.

So, you’re interested and wondering if you should take this exam? Here is what Microsoft have to say:

Candidates for this exam are Microsoft Azure security engineers who implement security controls, maintain the security posture, manages identity and access, and protects data, applications, and networks. Candidates identify and remediate vulnerabilities by using a variety of security tools, implements threat protection, and responds to security incident escalations. As a Microsoft Azure security engineer, candidates often serve as part of a larger team dedicated to cloud-based management and security and may also secure hybrid environments as part of an end-to-end infrastructure.

Candidates for this exam should have strong skills in scripting and automation, a deep understanding of networking, virtualization, and cloud N-tier architecture, and a strong familiarity with cloud capabilities, Microsoft Azure products and services, and other Microsoft products and services.

Below, I’ve put together a collection of links relevant to the sections highlighted as being part of the skills measured for this exam. As always, these are only guide links, sometimes you need to explore a topic much more deeply if you are not familiar with it.

If you spot something, or have a better link for a topic, get in touch! I will update this post as regularly as possible and always appreciate any feedback.

A good place to start is the Azure Security Documentation page. This site includes most of the key concepts and services covered in this exam, as well as several best practice approaches you should consider.

Manage Identity and Access (30-35%)

Manage Azure Active Directory identities
Configure secure access by using Azure AD
Manage application access
Manage access control

Implement Platform Protection (15-20%)

Implement advanced network security
Configure advanced security for compute

Manage Security Operations (25-30%)

Monitor security by using Azure Monitor
Monitor security by using Azure Security Center
Monitor security by using Azure Sentinel
Configure Security Policies

Secure Data and Applications (20-25%)

Configure security for storage
Configure security for databases
Configure and manage Key Vault

AZ-303: Microsoft Azure Architect Technologies (beta) – Study Guide

Microsoft continues to update its role-based exams, and next on the list is AZ-303. This is the updated version of AZ-300 and should launch at the end of June 2020. Currently, this is arguably the most difficult certification path, as it is one of only two Expert-level certifications for Azure. This post will cover AZ-303, and here is what Microsoft have to say about it:

Candidates for this exam are Azure Solutions Architects who advise stakeholders and translate business requirements into secure, scalable, and reliable solutions. Candidates should have advanced experience and knowledge of IT operations, including networking, virtualization, identity, security, business continuity, disaster recovery, data platform, budgeting, and governance. This role requires managing how decisions in each area affects an overall solution. Candidates must have expert-level skills in Azure administration and have experience with Azure development processes and DevOps processes.

As the exam is still in beta, if you take it, it will not be scored immediately. So bear that in mind before you sit it! Of course, if you pass the exam, once scored, it will count towards the certification.

If you do want to sit the exam and register before August 10th 2020, there are 300 discounted exams on offer if you use the code – AZ303DonSuperStar

As always, a great place to start is Microsoft Learn. There are several free, interactive learning paths that you can work through at your own pace. I find this a great way to study and gain greater understanding of the services by actually using them, and you will need to be very familiar with Azure to pass this exam.

Below I’ve put together a collection of links relevant to the sections Microsoft have highlighted as being part of the skills measured for this exam. These are only guide links, sometimes you need to explore a topic much more deeply if you are not familiar with it. Hopefully these study materials will help guide you to successfully passing AZ-303!

Implement and Monitor an Azure Infrastructure (50-55%)

Implement cloud infrastructure monitoring

Implement storage accounts

Implement VMs for Windows and Linux

Automate deployment and configuration of resources

Implement virtual networking

Implement Azure Active Directory

Implement and manage hybrid identities

Implement Management and Security Solutions (25-30%)

Manage workloads in Azure

Implement load balancing and network security

Implement and manage Azure governance solutions

Manage security for applications

Implement Solutions for Apps (10-15%)

Implement an application infrastructure

Implement container-based applications

Implement and Manage Data Platforms (10-15%)

Implement NoSQL databases

Implement Azure SQL databases