Azure Networking Updates at Ignite

To say there have been a lot of Azure announcements at Ignite is an understatement. Several important services have hit GA and several exciting new ones have been announced. Rather than re-list them all (I’ll link through at the end), I am going to highlight those I am most excited about and think will be most relevant to clients.

Below is a graphical overview of the new approach to Azure networking, which is built on four pillars: Connect, Protect, Deliver and Monitor.

[Image: overview of the Azure networking fall update – the Connect, Protect, Deliver and Monitor pillars]

Starting with Connect, two ExpressRoute-related announcements immediately grabbed my attention. Microsoft now offer 100 Gbps connectivity via ExpressRoute Direct, the fastest dedicated cloud connectivity available.

Helping customers with geo-distributed offices, ExpressRoute Global Reach was announced, allowing offices to be connected to each other using their existing ExpressRoute circuits. Alongside this, Azure Virtual WAN is now GA, and Microsoft have added new preview features including P2S VPN and ExpressRoute connectivity, shown below:

[Image: Azure Virtual WAN preview features, including P2S VPN and ExpressRoute connectivity]

The final announcement in Connect that will really make a difference to clients is Public IP Prefix. You can now request a contiguous range of static public IP addresses for your resources that you “own”, which will make white-listing and administration of your public presence a lot simpler.
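To give a rough idea of how this looks in practice, here is an Azure CLI sketch (the resource group, names and region are made up for illustration) that reserves a /28 prefix and then allocates a Standard public IP from it:

```bash
# Reserve a contiguous block of 16 public IPs (a /28 prefix) - names are illustrative
az network public-ip prefix create \
  --resource-group my-rg \
  --name my-ip-prefix \
  --location westeurope \
  --length 28

# Allocate a Standard SKU public IP from that prefix for a resource
az network public-ip create \
  --resource-group my-rg \
  --name app-gateway-ip \
  --sku Standard \
  --public-ip-prefix my-ip-prefix
```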

Moving to the Protect pillar, Azure Firewall has now moved to GA. It is a great service for those looking to control their breakout point in Azure without deploying and maintaining an NVA.

Rounding out Protect are Service Endpoint Policies, which allow more granular control over which Azure resources can be accessed over your vnet service endpoints. See my previous post on endpoints for more.

From the Deliver pillar, the announcement of Azure Front Door in preview is the big one. AFD is a scalable, secure global entry point for your applications. Its ability to scale and its level of integration with existing services will make it widely used by customers with a large public-facing presence.

There are also updates to Application Gateway which improve performance, introduce redundancy and increase the level of integration with other services.

From the final pillar, Monitor, the main announcement is Virtual Network TAP in preview. This allows continuous mirroring of VM traffic to a collector, without any agents. It is the first service of its kind in a public cloud and will allow for greater analytics and granularity when examining environment traffic.

For the entire round-up from Microsoft, head here.

Virtual Network Service Endpoints

When designing an application deployment to be hosted in Azure, a commonly enticing design consideration is to transform a layer of the application from traditional infrastructure to something more modern. Microsoft offer several Platform-as-a-service (PAAS) options that allow this to be achieved, for example transforming SQL Server installed on a VM into Azure SQL.

While this transformation might be straightforward from a database point of view, and most likely beneficial when considering the cost of running your deployment, a concern that often arises is security. As Azure SQL is PAAS, it offers a public endpoint for SQL authentication and connectivity. This is by design, and there are limited options for putting a security layer in front of it. If your application runs somewhere outside Azure, this makes sense and might be an acceptable, noted weak spot. However, if the rest of your application layers are hosted within Azure, having to route out to a public endpoint is less secure than it could be and is simply bad design.

Thankfully, Microsoft have been making updates to virtual network functionality that allow you to route directly from your virtual network resources to several PAAS offerings. To do this, you must make use of virtual network Service Endpoints.

Endpoints extend your virtual network’s private address space and identity to specific Azure services over a direct connection. They allow you to secure critical Azure service resources, such as Azure SQL, to only your virtual networks. Traffic from your vnet to the Azure service always remains on the Microsoft Azure backbone network and never takes a public route. Endpoints are currently available for three PAAS offerings:

  1. Azure SQL
  2. Azure Storage
  3. Azure SQL Data Warehouse (Preview)

Introducing an Endpoint for Azure SQL (to stick with the initial example) improves security as it fully removes public Internet access and allows traffic only from the virtual network.
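As a rough sketch of how this looks with the Azure CLI (all resource names here are illustrative, not from a real deployment), you enable the Microsoft.Sql endpoint on the subnet and then add a vnet rule on the SQL logical server:

```bash
# Enable the Microsoft.Sql service endpoint on an existing subnet (illustrative names)
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name app-subnet \
  --service-endpoints Microsoft.Sql

# Allow only that subnet to reach the Azure SQL logical server
az sql server vnet-rule create \
  --resource-group my-rg \
  --server my-sql-server \
  --name allow-app-subnet \
  --vnet-name my-vnet \
  --subnet app-subnet
```

Remember to also review the server’s public firewall rules once the vnet rule is in place, otherwise the public endpoint remains reachable.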

It also optimises routing, as Endpoints always take service traffic directly from your virtual network to the service itself on the Microsoft Azure backbone network. This means that if your environment uses forced-tunnelling, this traffic is no longer treated as outbound but as intra-Azure, and flows directly.

There are some considerations to be aware of:

  • Location – the virtual network and the PAAS offering must be located in the same region.
  • Outbound Network Flow – if you control outbound network flow via NSG, you can make use of service tags (such as ‘Sql’) to allow this traffic via the Endpoint; see the sketch after this list.
  • Connections – if you enable a service endpoint, any open TCP connections from your virtual network to the service will drop. This is to allow the switch from public to private IP addresses for that traffic.
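To illustrate the NSG point above, here is a hedged Azure CLI sketch (the NSG name, priority and region are purely illustrative) of an outbound rule that uses the regional Sql service tag as its destination:

```bash
# Allow outbound SQL traffic from the subnet using the regional 'Sql' service tag
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name app-subnet-nsg \
  --name allow-sql-outbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes Sql.WestEurope \
  --destination-port-ranges 1433
```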

Personally, I think Endpoints should be used as widely as possible. From a security and design perspective they allow greater ease of adoption when PAAS offerings are being considered and perhaps best of all, they are free!

User Defined Routing

How traffic is routed within and between Azure networks isn’t something that is immediately apparent to the average admin. Therefore, trying to increase the complexity of this for security reasons, or by adding a virtual appliance, can present several challenges. As more companies begin to adopt a cloud strategy, these scenarios will become more common. I’m here to tell you that, with a little patience, taking full control of traffic routing in your Azure environment only takes a couple of well-planned steps.

This is something you should see repeated often when reading about Azure: everything starts with the Virtual Network. When you create a Virtual Network (vnet) you must define some simple parameters; we’ll focus on two of these for this blog post, Address Space and Subnets.

If you are planning on a cloud-only environment, you can choose whatever address space you like (well, almost… any range defined in RFC 1918), but I would always recommend you plan vnets with the possibility that they will be linked to your LAN. It is much easier to plan for this now than to try to resolve it later, trust me.

Choosing how to divide your address space into subnets also requires careful consideration. To start with, not all of the addresses in a subnet are usable: Azure reserves the first, the last and three other addresses in each subnet for Azure services, so a /24 subnet gives you 251 usable addresses rather than 256. You need to ensure there are enough usable addresses in each subnet for its intended use.
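Putting that together, below is an illustrative Azure CLI sketch (the names and address ranges are made up) that creates a vnet with a 10.0.0.0/16 address space and carves out two /24 subnets:

```bash
# Create a vnet with a /16 address space and an initial /24 subnet - illustrative names
az network vnet create \
  --resource-group my-rg \
  --name my-vnet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name app-subnet \
  --subnet-prefixes 10.0.1.0/24

# Add a second /24 subnet to the same vnet
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name data-subnet \
  --address-prefixes 10.0.2.0/24
```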

Once the above steps are completed, Azure creates a system routing table. This defines how traffic routes both within and out of the vnet, and is a combination of your vnet address space and Azure-defined address spaces. Below are the effective routes for a demo network interface sitting in a vnet with the address space 10.0.0.0/16.

[Image: effective routes for the demo network interface, showing the default system routes]

As you can see, all of the source definitions are ‘Default’, indicating they are system generated. The entry with a ‘next hop type’ of ‘Virtual Network’ is our vnet address space, which defines that traffic as local. The rest are added by default to cover system and Internet routing; more on those here.

The above table applies to all subnets within the vnet, so if you had a requirement for an incredibly secure vnet that could not route to the Internet, only to local traffic, you could introduce a user defined route table to override the system route.

Routing in Azure is done by longest prefix match (LPM). This means that if the system table defines a /16 route and you define a /24 route covering the destination, traffic bound for that address space will follow the /24 route. If more than one route has the same prefix length, the following order of precedence applies:

  1. User defined route
  2. BGP route (when ExpressRoute is used)
  3. System route

So, using our example above, we would create a UDR and define a route for traffic bound for 0.0.0.0/0 with a ‘next hop’ of None. This prevents any outbound traffic, with the exception of traffic within your vnet (remember, LPM), from routing at all.
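As a sketch of those steps with the Azure CLI (re-using the illustrative vnet and subnet names from earlier), you create a route table, add the 0.0.0.0/0 route with a next hop of None, and associate the table with a subnet:

```bash
# Create an empty route table
az network route-table create \
  --resource-group my-rg \
  --name secure-rt

# Blackhole all traffic not covered by a more specific (longer prefix) route
az network route-table route create \
  --resource-group my-rg \
  --route-table-name secure-rt \
  --name deny-internet \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type None

# Associate the route table with the subnet it should apply to
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name app-subnet \
  --route-table secure-rt
```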

The below grab of my demo VM’s effective routes shows how the system route is now marked as invalid and my user defined route takes precedence.

[Image: effective routes showing the system route marked invalid and the user defined route active]

There are many other scenarios where this can be useful. For example, if you have a site-to-site VPN and would like all Internet traffic to tunnel through on-premises appliances, you can force tunnel that traffic to the gateway.

Similarly, if you have an appliance as part of your vnet within Azure, you can route traffic to and from it as you see fit. Remember to always view the effective routes being applied to a network interface before pushing changes to a live production environment!
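For the appliance scenario, a hedged sketch (the route table name and the NVA address 10.0.3.4 are made up) simply swaps the next hop type and points it at the appliance’s private IP:

```bash
# Send all outbound traffic through an NVA at 10.0.3.4 (illustrative address)
az network route-table route create \
  --resource-group my-rg \
  --route-table-name secure-rt \
  --name via-nva \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.3.4
```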

My final recommendation, if you are using UDRs in your environment, is to add a Network Watcher instance; it greatly simplifies analysing and troubleshooting routing. There are also multiple PowerShell and CLI commands you can make use of for troubleshooting.
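As a starting point (the resource, NIC and VM names below are illustrative), these are the sort of Azure CLI commands I have in mind:

```bash
# Enable Network Watcher in the region your vnet lives in
az network watcher configure \
  --resource-group NetworkWatcherRG \
  --locations westeurope \
  --enabled true

# Inspect the effective routes applied to a network interface
az network nic show-effective-route-table \
  --resource-group my-rg \
  --name my-vm-nic \
  --output table

# Ask Network Watcher which next hop a given flow would take
az network watcher show-next-hop \
  --resource-group my-rg \
  --vm my-vm \
  --source-ip 10.0.1.4 \
  --dest-ip 8.8.8.8
```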

If you’ve any questions etc, get in touch!