Azure Sentinel – Where to start?

First announced back in late February, Azure Sentinel is the first cloud-native SIEM service from a major provider. SIEM (security information and event management) is a primary component in any security service. Sentinel aims to leverage cloud-specific benefits like elastic scale and AI to help customers detect and respond to security incidents as quickly and efficiently as possible.

The workflow of Azure Sentinel can be broken into four steps:

Azure Sentinel core capabilities

1. Collect

Sentinel allows you to collect data at scale from multiple users, devices, applications and infrastructure, hosted in Azure, on-premises, and even in multiple clouds. This means you can aggregate all security data using industry-standard log formatting. With built-in integration, you can enable collection for services such as Office 365 or Azure AD within seconds.

2. Detect

Having all of your data collected in Sentinel allows for simpler analysis and detection at scale than was previously possible. More efficient triage, combined with the capability to leverage Microsoft machine learning, lets you be more productive, minimise false positives, and react to high-accuracy alerts as early as possible.

3. Investigate

Sentinel allows you to visualise and resolve alerts using the same dashboards. Proactively hunting for incidents can be automated or scripted into a set of queries, and Microsoft provides some to get you started, based on the work of their own analysis and response teams.

4. Respond

Continuing the efficiency seen in the previous steps, Sentinel allows you to orchestrate and automate responses to incidents, letting you handle repeat and/or known incidents automatically.

So now that you know what it is, the next step is to put it into action and see if it can be of use to you and your client/business. Currently still in preview, Sentinel is free to use, which allows you to assess the service without any significant financial impact. Bear in mind, you will pay for the Log Analytics workspace which stores the data!

First, you’ll need to enable Sentinel and a workspace; this can be done via the portal, and a walkthrough is here. Then, you need to connect some services to start streaming data to Sentinel. There are multiple connector options, and you can choose which logs/data are sent to Sentinel too.
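If you prefer to script that first step rather than click through the portal, a minimal sketch of creating the underlying Log Analytics workspace with the Azure Python SDK might look like the following. The subscription ID, resource group, workspace name and region are all placeholders, I'm assuming the azure-identity and azure-mgmt-loganalytics packages, and depending on your SDK version the operation may be named create_or_update instead. Sentinel itself is then enabled on the workspace per the walkthrough above.

```python
# pip install azure-identity azure-mgmt-loganalytics
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = LogAnalyticsManagementClient(DefaultAzureCredential(), subscription_id)

# Create (or update) the Log Analytics workspace that Sentinel sits on top of.
# Remember: this workspace is the part you pay for.
workspace = client.workspaces.begin_create_or_update(
    "sentinel-rg",         # resource group, assumed to already exist
    "sentinel-workspace",  # workspace name, placeholder
    {"location": "westeurope", "sku": {"name": "PerGB2018"}},
).result()
print(workspace.id)
```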

Once your data connector is active, you can make use of the built-in dashboards to visualise your data. One example is the Azure AD sign-in log dashboard, which is available immediately via Sentinel. You can also create your own custom dashboards; there is a guide with samples here.

Now that data is being collected and you have visuals, your next step is analysis. The first thing you will need to do is create Detection Rules. These are essentially Log Analytics queries with alerting parameters wrapped around them. Microsoft offers sample queries on GitHub which are updated regularly. Alternatively, you can simply write your own to meet your needs.
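Since a detection rule is just a query plus alert parameters, you can prototype the query part outside the portal before committing to a rule. Below is a hedged sketch using the azure-monitor-query package; the workspace ID is a placeholder, the “more than 5 failed sign-ins” threshold is purely illustrative, and the SigninLogs table assumes the Azure AD connector is enabled.

```python
# pip install azure-identity azure-monitor-query
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Candidate detection query: accounts with more than 5 failed sign-ins
# in the last 24 hours. Tune the threshold before wrapping it in a rule.
query = """
SigninLogs
| where ResultType != "0"
| summarize FailedCount = count() by UserPrincipalName
| where FailedCount > 5
"""

response = client.query_workspace(
    "<workspace-id>",  # placeholder Log Analytics workspace ID
    query,
    timespan=timedelta(hours=24),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```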

The results of your Detection Rules are then fed into the Cases section of Sentinel. Here you can triage, investigate and remediate incidents. The cases are created dynamically from the parameters you set for Detection Rules, such as severity and entity mapping. As such, be prepared to tweak those thresholds and alert patterns a bit. I have Sentinel running within several customer tenants and am still not 100% happy with my detection rules. Always remember to update the status of your cases too (in progress, resolved, etc.).

Finally, you should set up some Playbooks to respond to your alerts. A Playbook is simply a set of procedures that you can run from Azure Sentinel. Playbooks help automate and orchestrate your responses to alerts, and you can run them manually or, ideally, set them up to run automatically in response to certain alerts. They are based on Logic Apps, which means all of the same actions are available via Sentinel. One quick note: there is a charge for Logic Apps and therefore Playbooks, so ensure you understand your costs first.

When creating a Playbook, regardless of whether it is going to be run automatically by Sentinel or manually by you, you should first define your scenario. My preferred approach here is to come up with “If-This-Then-That” loops and apply them as needed. This is another section that will take some tweaking over time. In my experience, it's best to only run Playbooks manually initially, then start to add automated triggering once you're happy with the alert and response. Docs have a nice sample alert-response playbook with messaging and actions which is a great place to start.
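Sentinel normally fires playbooks for you via its alert trigger, but while testing it can be handy to poke the underlying Logic App directly. If your playbook starts with a plain “When a HTTP request is received” trigger, a sketch like the following works; the URL and the alert payload shape are entirely hypothetical and should match whatever your playbook actually expects.

```python
# pip install requests
import requests

# Callback URL from the Logic App's HTTP trigger; treat it as a secret.
playbook_url = "<logic-app-http-trigger-url>"  # placeholder

# Hypothetical alert payload for testing the playbook's response steps.
alert = {
    "alertName": "Excessive failed sign-ins",
    "severity": "Medium",
    "entity": "user@contoso.com",
}

resp = requests.post(playbook_url, json=alert, timeout=30)
resp.raise_for_status()
print("Playbook triggered:", resp.status_code)
```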

Another function which I haven't covered here is Hunting. I haven't spent enough time with this feature yet to give a detailed opinion, but you can read more on it over on Docs.

So, if you haven’t given Sentinel a try yet, I’d recommend you review the quickstarts and deploy it in your tenant for one or two of the data sources like Azure AD. While it’s in Preview, it is a great chance to assess its relevance to your tenant and hopefully gain some greater insight and response capability too.

As always, if there are any questions or if you have any problems with your Sentinel, get in touch!

How to – Secure an Application Service with Application Gateway v2

Application Gateway conceptual

Application Gateway v2 brings several welcome additions to the service since its initial v1 release. For those who have spent time configuring an Application Gateway, you’ll be glad to hear that update/modification times have been drastically reduced. Better performance and added functionality are some of the other main reasons to use v2 over v1. The entire list can be found here.

Recently, I had to secure an Application Service with an Application Gateway v2 on the WAF (web application firewall) tier. This is something I have done several times with v1 without any significant issue. In this instance, the Application Service runs on a custom domain, as does the Application Gateway. The requirements were to run SSL end to end and have WAF run in prevention mode.

If you’ve ever done this before, you know there are some basics to be completed within your Application Service. For this post and my requirements, they were: map a custom domain, run HTTPS only, and prep rules to allow connections only from your Application Gateway. All of this is covered in the official App Service documentation.
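As an example of scripting one of those basics, here is a hedged sketch of enforcing HTTPS only with the azure-mgmt-web package. The subscription, resource group and app names are placeholders; the custom domain mapping and access restrictions are still best done per the documentation.

```python
# pip install azure-identity azure-mgmt-web
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Flip the HTTPS Only switch on the Application Service.
client.web_apps.update(
    "appgw-demo-rg",    # resource group, placeholder
    "contoso-web-app",  # app name, placeholder
    {"https_only": True},
)
```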

Once your Application Service is ready to go, you move on to configuring your Application Gateway. This is a relatively simple process and can even be completed within the Portal; there is a published guide here. However, once it was configured, I noticed that certain redirect functionality within the application was returning the default host name of the Application Service. This can also happen if you use Azure AD authentication. With WAF in prevention mode, this returns a 403, as a default rule picks up the change in address.

The reason for this is how both Application Gateway and Application Service handle their host headers. To fix this issue, there are two changes you can make, one of which is only possible on Application Gateway v2.

The v2-only fix is to rewrite the location in the host header using rewrite rules, new functionality included only in v2. A guide on exactly what you need to do is here. Make sure the text is exactly as in the guide or it will not work.

The second option, and the more common one, is to change how your Custom Probe and HTTP settings are configured. The reason for this is that the default guide does not take into account the use of a custom domain on your Application Service. For both settings, modify and remove the “PickHostNameFromBackendAddress” setting. Now the Application Gateway will forward the original hostname, and redirection will use it too. Full guide here.
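For those scripting their gateways, the change amounts to clearing that flag on the HTTP settings and giving the custom probe an explicit host instead. A hedged sketch with azure-mgmt-network follows; names and the domain are placeholders, and depending on SDK version the final call may be create_or_update rather than begin_create_or_update.

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the existing gateway, adjust the settings, and push it back.
appgw = client.application_gateways.get("appgw-demo-rg", "contoso-appgw")

for settings in appgw.backend_http_settings_collection:
    # Stop overriding the host header with the backend's default
    # *.azurewebsites.net name; the incoming custom-domain host is kept.
    settings.pick_host_name_from_backend_address = False

for probe in appgw.probes:
    # The custom probe now needs an explicit host to test against.
    probe.pick_host_name_from_backend_http_settings = False
    probe.host = "www.contoso.com"  # placeholder custom domain

client.application_gateways.begin_create_or_update(
    "appgw-demo-rg", "contoso-appgw", appgw
).result()
```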

As always, if there are any questions on the above, get in touch!

What are Azure Blueprints?

I’m sure most of you have seen the recent announcements about Blueprints, as well as multiple Microsoft posts about the service and what it can do to improve your environments. However, what if you’re not sure what they are and whether they are usable in your environment? Hopefully, that’s where this post comes in. I’m going to explain exactly what they are and why you might use an Azure Blueprint. This should allow you to decide whether you need them or not.

Following on from that, I think that’s the first basic point about Azure Blueprints. Similar to several other new services in Azure, the functionality is great and could help progress a lot of environments, but that doesn’t mean it helps, or is even useful, in every environment. Never feel guilted into using a new service because there is a “buzz” about it at launch. Assess the service, understand it, assess its usability versus your requirements, then TEST TEST TEST! Don’t forget, Blueprints are still in preview, so no production workloads yet.

So, what is an Azure Blueprint? To explain it plainly, it is a collection of governance and resource services, defined in such a way as to allow you to repeat deployments to a set standard.

Azure Blueprints overview

The governance and resource services collected within a Blueprint are referred to as Artifacts. Within each Blueprint, you can make use of any combination of the following (the hierarchy levels each artifact can target are shown in parentheses):

  • Resource Groups (Subscription): Create a new resource group for use by other artifacts within the blueprint. This enables you to organize resources and provides a scope for other artifacts.
  • Azure Resource Manager template (Subscription, Resource Group): Templates are used to create resources. This could range from individual deployments to entire environments.
  • Policy Assignment (Subscription, Resource Group): Allows assignment of a policy or initiative to the subscription and/or resource group the blueprint is assigned to. Any parameters are assigned at creation of the blueprint or during blueprint assignment.
  • Role Assignment (Subscription, Resource Group): Role assignments can be defined for the entire subscription or nested to a specific resource group included in the blueprint.

As you can see above, artifacts can be deployed/assigned at different levels. However, the Blueprint itself must be located in either a subscription you have at least Contributor access to, or a Management Group. If located within a Management Group, the Blueprint is available to any of the child subscriptions of that group.

When defining your Blueprint, several artifact options allow you to choose parameters that are passed from Blueprint to artifact. For example, when defining a Resource Group, you can choose to specify the name and location. You don’t have to specify these parameters within the Blueprint; you can instead allow them to be passed when the Blueprint is assigned.
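To make that concrete, here is roughly what a minimal definition with one parameterised Resource Group artifact looks like, expressed as a Python dict mirroring the Blueprints preview REST payload. This shape follows api-version 2018-11-01-preview as I understand it and may well change before GA; all names are placeholders.

```python
# Illustrative only: mirrors the Blueprints preview REST body, not an SDK call.
blueprint_body = {
    "properties": {
        "targetScope": "subscription",
        # Parameters can be fixed here or left to be supplied at assignment.
        "parameters": {
            "rgLocation": {
                "type": "string",
                "metadata": {"displayName": "Resource group location"},
            }
        },
        # A Resource Group artifact whose location comes from the parameter.
        "resourceGroups": {
            "networkRG": {
                "name": "blueprint-network-rg",  # placeholder
                "location": "[parameters('rgLocation')]",
            }
        },
    }
}
```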

Once you have your Blueprint defined, your next step is to publish it. When publishing, you must indicate a version. I found it odd that this isn’t restricted in some way; you can literally name one version “1.0” and the next “B”, so I’d recommend adding notes with each version and trying to stick to a pattern. However, it makes sense if you’re going to use different versions for different assignments (I’ll explain that next), so choose relative to your requirements.

When your Blueprint is published, you can then assign it. A nice feature is the ability to assign different versions of a Blueprint to different subscriptions. For example, you could have two versions of a Blueprint with different artifact definitions (think test version and production version) assigned to different subscriptions. They can be independently updated too.

At assignment, there are some options to choose, as well as the target subscription: Resource Locking and Managed Identity.

For Managed Identity, it’s recommended you simply choose System Assigned, as the Blueprints service will then manage the security lifecycle. More reading on Managed Identities is available to help you understand and choose what’s right for your environment.

The Resource Locking feature really allows you to maintain control of your governed deployment. If you’re not familiar with Resource Locks, check out this post. The familiar statuses apply to resources deployed by a Blueprint assignment:

  • Not locked
  • Read Only
  • Cannot delete

However, once a status is applied, not even a user/object with the Owner role can modify the resource. This is due to how these statuses are applied: an RBAC deny assignment is applied to artifact resources during assignment of a blueprint if the assignment selected the Read Only or Do Not Delete option. The deny action is added by the managed identity of the blueprint assignment and can only be removed from the artifact resources by the same managed identity.

So, how do you edit or delete your resources? Update your Blueprint to “Not locked” and push the update to the relevant assignment. This method prevents unwanted and unexpected changes occurring outside of the scope of the Blueprint.

I think there is quite a learning curve for Blueprints, as they combine several other services you must already be familiar with, so for me, that’s where you have to start. Understand each of the artifacts fully so you can see how they might work well if defined in your environment.

Recently, sample Blueprints have been released to allow you to deploy governed, pre-designed environments with a couple of clicks. One sample is the ISO 27001 Shared Services blueprint, which I think is a good way to understand the service, even if it might be slightly complex for your first test.

Again, Blueprints are still in Preview, so be as cautious as always with your production environments. I look forward to seeing what changes come with GA, which shouldn’t be much longer considering Blueprints were announced back at Ignite. I will update this post when GA happens.

As always, if you have questions, leave a comment, mail me, or ping me on Twitter!

First Impressions – Azure Firewall Preview

Recently Microsoft announced that a new Azure Firewall service was entering a managed public preview. Azure Firewall is a managed, network security service that protects your Azure Virtual Network resources. It is a fully stateful firewall as a service with built-in high availability and scalability.

Azure Firewall overview

The service uses a static public IP, meaning that your outbound traffic can be identified by third-party services as/if required. Worth noting: only outbound rules are active within this preview. Inbound filtering will hopefully be available by GA.

The following capabilities are all available as part of the preview:

  • Stateful firewall as a Service
  • Built-in high availability with unrestricted cloud scalability
  • FQDN filtering
  • Network traffic filtering rules
  • Outbound SNAT support
  • Centrally create, enforce, and log application and network connectivity policies across Azure subscriptions and VNETs
  • Fully integrated with Azure Monitor for logging and analytics

As with all previews, it should not be used for production environments, but for testing purposes, here is how to register your tenant for deployment.

To enable the Azure Firewall public preview follow the guide here: Enabling the preview

Once enabled, follow this tutorial for a sample implementation: Deployment Tutorial

Now that you’re familiar with the deployment, you should apply it to your specific test scenarios. Be wary of some operations that could be limited by applying a default route to your VM. There is an updated FAQ for the service here: Azure Firewall FAQ
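As a concrete test scenario, here is a hedged sketch of appending an application rule collection (FQDN filtering) to a deployed firewall using azure-mgmt-network. All names and address ranges are placeholders, and the model/operation names may differ slightly between SDK versions.

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    AzureFirewallApplicationRule,
    AzureFirewallApplicationRuleCollection,
    AzureFirewallApplicationRuleProtocol,
    AzureFirewallRCAction,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the firewall deployed in the tutorial (placeholder names).
fw = client.azure_firewalls.get("fw-test-rg", "test-firewall")

# Allow HTTPS to one FQDN from the workload subnet; all else stays denied.
fw.application_rule_collections.append(
    AzureFirewallApplicationRuleCollection(
        name="allow-microsoft",
        priority=200,
        action=AzureFirewallRCAction(type="Allow"),
        rules=[
            AzureFirewallApplicationRule(
                name="allow-msft-fqdn",
                source_addresses=["10.0.2.0/24"],
                protocols=[
                    AzureFirewallApplicationRuleProtocol(
                        protocol_type="Https", port=443
                    )
                ],
                target_fqdns=["www.microsoft.com"],
            )
        ],
    )
)

client.azure_firewalls.begin_create_or_update(
    "fw-test-rg", "test-firewall", fw
).result()
```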

Overall, this is a welcome addition to Azure networking. As the preview progresses and more service options are added, especially inbound options, I see this becoming as common as deploying an NSG in your environment. Combining it with peering and the right set of rule collections allows for an easily managed, scalable and, most importantly, secure environment within Azure with minimal cost and infrastructure footprint.

Securing Azure PaaS

When considering Azure as a platform, part of the conversation should revolve around transformation. That is, how do we transform our approach from what is viewed as traditional to something more modern. Often this could lead to redesigning how your application/service is deployed, but with some workflows, a simple change from IaaS to PaaS is viewed as a quick win.

This change isn’t suitable in all scenarios, but depending on your specific requirement it could allow for greater resiliency, a reduction in costs, and a simpler administration requirement. One service that is often considered is SQL. Azure has its own PaaS SQL offering which removes the need for you to manage the underlying infrastructure. That alone makes the transformation a worthy consideration.

However, what isn’t often immediately apparent to some administrators is that PaaS offerings are, by their nature, public facing. For Azure SQL to be as resilient as possible and scale responsively, it sits behind a public FQDN. Therefore, how this FQDN is secured must be taken into consideration as a priority to ensure your data is protected appropriately.

Thankfully, Azure SQL comes with a built-in firewall service. Initially, all Transact-SQL access to your Azure SQL server is blocked by the firewall. To allow traffic, you must specify one or more server-level firewall rules that enable access. The firewall rules specify which IP address ranges from the internet are allowed. There is also the ability to choose whether Azure applications can connect to your Azure SQL server.
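A hedged sketch of creating a server-level rule with the azure-mgmt-sql package follows; the subscription, resource group, server name and IP range are placeholders.

```python
# pip install azure-identity azure-mgmt-sql
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Server-level rule: allow one office IP range to reach the logical server.
client.firewall_rules.create_or_update(
    "sql-demo-rg",         # resource group, placeholder
    "contoso-sql-server",  # logical server name, placeholder
    "allow-office-range",
    {"start_ip_address": "203.0.113.10", "end_ip_address": "203.0.113.20"},
)
# Note: a rule of 0.0.0.0 to 0.0.0.0 is the special "allow Azure services"
# rule discussed further down; avoid it where you can.
```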

You can also grant access to just one of the databases within your Azure SQL server by creating a database-level rule for the required database. However, while this limits the traffic to specific IP ranges, the traffic still flows via the internet.

To communicate with Azure SQL privately, you will first need an Azure VNet. Once it is in place, you must enable the service endpoint for Azure SQL, see here. This will allow communication directly between listed subnets within your VNet and Azure SQL via the Azure backbone. This traffic is more secure and possibly faster than via the internet.

Once your endpoint is enabled, you can then create a VNet firewall rule on Azure SQL for the subnet which had a service endpoint enabled. All endpoints within the subnet will have access to all databases. You can repeat these steps to add additional subnets. If adding your VNet rule makes the previous IP rules redundant, remember to remove them from your Azure SQL firewall rules.
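Scripting the VNet rule is similar; a hedged sketch, again with azure-mgmt-sql. The subnet resource ID must point at the subnet where you enabled the Microsoft.Sql service endpoint, all names are placeholders, and operation names may vary slightly with SDK version.

```python
# pip install azure-identity azure-mgmt-sql
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Full resource ID of the subnet with the Microsoft.Sql service endpoint.
subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/sql-demo-rg"
    "/providers/Microsoft.Network/virtualNetworks/demo-vnet/subnets/app-subnet"
)

client.virtual_network_rules.begin_create_or_update(
    "sql-demo-rg",         # resource group, placeholder
    "contoso-sql-server",  # logical server name, placeholder
    "allow-app-subnet",
    {"virtual_network_subnet_id": subnet_id},
).result()
```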

Also worth noting is the option to “Allow all Azure services”. The presumption here is that this would somehow only allow access from Azure services within your own subscription, but this is not the case. It means every single Azure service in all subscriptions, even mine! My recommendation is to avoid this whenever possible; however, there are some cases where it is required, and this access should be noted as a risk.

More on Azure SQL Firewall – https://docs.microsoft.com/en-us/azure/sql-database/sql-database-firewall-configure

More on Azure SQL with VNets – https://docs.microsoft.com/en-us/azure/sql-database/sql-database-vnet-service-endpoint-rule-overview