Azure Sentinel – Where to start?

First announced back in late February, Azure Sentinel is the first cloud-native SIEM service from a major provider. SIEM (security information and event management) is a primary component of any security service. Sentinel aims to leverage cloud-specific benefits like elastic scale and AI to allow customers to detect and respond to security incidents as quickly and efficiently as possible.

The workflow of Azure Sentinel can be broken into four steps:

Azure Sentinel core capabilities

1. Collect

Sentinel allows you to collect data at scale from multiple users, devices, applications, and infrastructure, whether hosted in Azure, on-premises, or even in multiple clouds. This means you can aggregate all security data using industry-standard log formats. With built-in integration, you can enable collection for services such as Office 365 or Azure AD within seconds.

2. Detect

Having all of your data collected in Sentinel allows for simpler analysis and detection at scale than was previously possible. This more efficient triage, plus the capability to leverage Microsoft machine learning, allows you to be more productive, minimise false positives, and react to high-accuracy alerts as early as possible.

3. Investigate

Sentinel allows you to visualise and resolve alerts using the same dashboards. Proactive hunting for incidents can be automated or scripted into a set of queries, and Microsoft provides some to get you started, based on the work of its analysis and response teams.

4. Respond

Continuing the efficiency seen in previous steps, Sentinel allows you to orchestrate and automate responses to incidents, meaning repeat and/or known incidents can be handled automatically.

So now that you know what it is, the next step is to put it into action and see if it can be of use to you and your client/business. Currently still in preview, Sentinel is free to use, which is always good and allows you to assess the service without any significant financial impact. Bear in mind, though, that you will pay for the Log Analytics workspace that stores the data!

First, you’ll need to enable Sentinel and a workspace; this can be done via the portal, and a walkthrough is here. Then, you need to connect some services to start streaming data into Sentinel. As you can see below, there are multiple options, and you can choose which logs/data are sent to Sentinel too.

Once your data connector is active, you can make use of the built-in dashboards to visualise your data. Below is a subsection of the Azure AD sign-in log dashboard, which is available immediately via Sentinel. You can also create your own custom dashboards; there is a guide with samples here.

Now that data is being collected and you have visuals, your next step is analysis. The first thing you will need to do is create Detection Rules. These are essentially Log Analytics queries with alerting parameters wrapped around them. Microsoft offers sample queries on GitHub which are updated regularly. Alternatively, you can simply write your own to meet your needs.
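To see what a rule’s query returns before you wrap alerting parameters around it, you can run it against your workspace first. Below is a minimal sketch using the azure-monitor-query Python SDK; the workspace ID is a placeholder and the failed sign-in threshold is purely illustrative, so treat it as a starting point rather than a production rule.

```python
# A minimal sketch: prototype a detection query against your Log Analytics
# workspace before wrapping it in a Sentinel Detection Rule.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Illustrative rule: accounts with 10+ failed Azure AD sign-ins in the
# last hour (ResultType 50126 = invalid username or password).
QUERY = """
SigninLogs
| where ResultType == "50126"
| summarize FailedCount = count() by UserPrincipalName
| where FailedCount >= 10
"""

response = client.query_workspace(
    workspace_id="<your-workspace-id>",  # placeholder
    query=QUERY,
    timespan=timedelta(hours=1),
)

for table in response.tables:
    for user, failed_count in table.rows:
        print(f"Possible brute force against {user}: {failed_count} failures")
```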

The results of your Detection Rules are then fed into the Cases section of Sentinel. Here you can triage, investigate, and remediate incidents. Cases are created dynamically from the parameters you set for Detection Rules, such as severity and entity mapping. As such, be prepared to tweak those thresholds and alert patterns a bit. I have Sentinel running within several customer tenants and am still not 100% happy with my detection rules. Always remember to update the status of your cases too: in progress, resolved, etc.

Finally, you should set up some Playbooks to respond to your alerts. A Playbook is simply a set of procedures that you can run from Azure Sentinel. They help automate and orchestrate your responses to alerts, and you can run them manually or ideally set them up to run automatically in response to certain alerts. They are based on Logic Apps, which means all of the same actions are available via Sentinel. One quick note, there is a charge for Logic Apps and therefore Playbooks, so ensure you understand your costs first.

When creating a Playbook, regardless of whether it is going to be run automatically by Sentinel or manually by you, you should first define your scenario. My preferred approach here is to come up with “If-This-Then-That” rules and apply them as needed. This is another section that will take some tweaking over time. In my experience, I only run Playbooks manually initially, then start to add automated triggering once I’m happy with the alert and response; a sketch of a manual run is below. Docs has a nice sample alert-response playbook with messaging and actions, which is a great place to start.
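If you want to exercise a playbook-style automation from a script, one option is a Logic App that starts with a “When a HTTP request is received” trigger. The sketch below assumes that setup; the callback URL and alert fields are hypothetical, and a real Sentinel-triggered playbook would receive the alert automatically instead.

```python
# A sketch of manually invoking a playbook-style Logic App that begins with
# a "When a HTTP request is received" trigger. The URL and alert fields
# below are hypothetical.
import requests

# The callback URL Logic Apps generates for the HTTP trigger (placeholder).
CALLBACK_URL = "https://<region>.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke?<sig>"

alert_payload = {
    "alertName": "Suspicious sign-in activity",  # hypothetical fields
    "severity": "High",
    "userPrincipalName": "user@example.com",
}

resp = requests.post(CALLBACK_URL, json=alert_payload, timeout=30)
resp.raise_for_status()
print(f"Playbook accepted the alert: HTTP {resp.status_code}")
```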

Another function which I haven’t covered here is Hunting. I haven’t spent enough time with this feature yet to give a detailed opinion, but you can read more on it over on Docs.

So, if you haven’t given Sentinel a try yet, I’d recommend you review the quickstarts and deploy it in your tenant for one or two data sources like Azure AD. While it’s in Preview, it’s a great chance to assess its relevance to your tenant and hopefully gain some greater insight and response capability too.

As always, if there are any questions or if you have any problems with your Sentinel, get in touch!

What is Azure Advisor?

Built into every Azure Subscription and offered as a free service is Azure Advisor. Helpful without being too intrusive, it can be beneficial to many who use Azure.

So, what is it? It’s a personalised recommendation service, giving you quick and simple insights into your deployments and where they could be optimised. Four main areas of analysis are covered:

  1. High Availability
  2. Performance
  3. Security
  4. Cost

Azure Advisor gives you a central point to analyse all of your deployments for best-practice optimisation. It pulls in recommendations from other services like Security Center to offer a comprehensive view.

Azure Advisor dashboard

You can tweak some of the recommendations that are presented. You can also tighten the scope of recommendations for more granular data. Docs has a handy guide explaining how to do that in detail; I will walk through an example below.

To scope Azure Advisor to only display recommendations for specific resources, it’s a simple change to the configuration settings.

  1. Open Azure Advisor
  2. Click Configuration under the Settings section
  3. You can now choose your scope relative to Azure Subscriptions and which Resource Groups you want to include.
  4. To remove resources from the scope, simply untick the relative box and click Apply
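If you prefer to consume recommendations outside the portal, they are also available programmatically. Below is a minimal sketch using the azure-mgmt-advisor Python SDK; the subscription ID is a placeholder.

```python
# A minimal sketch: list Advisor recommendations per category with the
# azure-mgmt-advisor SDK. The subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.advisor import AdvisorManagementClient

client = AdvisorManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<your-subscription-id>",  # placeholder
)

# Each recommendation carries one of the four categories discussed above.
for rec in client.recommendations.list():
    print(f"[{rec.category}] {rec.impact}: {rec.short_description.problem}")
```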

While the service itself is free, always be aware that some recommended optimisations could require a higher spend, such as a performance increase.

As always, if you have any questions, please get in touch!

What are Azure Blueprints?

I’m sure most of you have seen recent announcements relative to Blueprints as well as multiple Microsoft posts about the service and what it can do to improve your environments. However, what if you’re not sure about what they are and if they are usable for your environment? Hopefully, that’s where this post comes in. I’m going to explain exactly what they are and why you might use an Azure Blueprint. This should allow you to make a decision on whether you need them or not.

Following on from that, I think that’s the first basic point about Azure Blueprints. Similar to several other new services in Azure, the functionality is great and could help progress a lot of environments, but that doesn’t mean it helps, or is even useful, in a lot of other environments. Never feel guilted into using a new service because there is a “buzz” about it at launch. Assess the service, understand it, assess its usability versus your requirements, then TEST TEST TEST! Don’t forget, Blueprints are still in preview, so no production workloads yet.

So, what is an Azure Blueprint? To put it plainly, it is a collection of governance and resource services, defined in such a way as to allow you to repeat deployments to a set standard.

Azure Blueprints overview

The governance and resource services within a Blueprint are referred to as Artifacts. Within each Blueprint, you can make use of any combination of the following (the hierarchy levels each artifact can target are shown in brackets):

  • Resource Groups (Subscription) – Creates a new resource group for use by other artifacts within the blueprint. This enables you to organise resources and provides a scope for other artifacts.
  • Azure Resource Manager template (Subscription, Resource Group) – Templates are used to create resources. These could range from individual deployments to entire environments.
  • Policy Assignment (Subscription, Resource Group) – Allows assignment of a policy or initiative to the subscription and/or resource group the blueprint is assigned to. Any parameters are set at creation of the blueprint or during blueprint assignment.
  • Role Assignment (Subscription, Resource Group) – Role assignments can be defined for the entire subscription or nested to a specific resource group included in the blueprint.

As you can see above, artifacts can be deployed/assigned at different levels. However, the Blueprint itself must be located either in a subscription to which you have at least Contributor access, or in a Management Group. If located within a Management Group, the Blueprint is available to any of the child subscriptions of that group.

When defining your Blueprint, several artifact options allow you to choose parameters that are passed from Blueprint to artifact. For example, when defining a Resource Group, you can choose to specify the name and location. You don’t have to specify these parameters within the Blueprint; you can instead allow them to be passed when the Blueprint is assigned.
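To make that parameter flow concrete, below is a rough sketch of defining a blueprint with a single resource group artifact whose name and location are deliberately left unset, so they must be supplied at assignment time. It assumes the Blueprints REST API at api-version 2018-11-01-preview; the scope, names, and exact body shape are assumptions you should verify against Docs.

```python
# A rough sketch of creating a blueprint definition over REST
# (api-version 2018-11-01-preview). The resource group artifact's name and
# location are omitted, so they become parameters at assignment time.
# Scope, names, and body shape are assumptions; verify against Docs.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default"
).token

scope = "subscriptions/<subscription-id>"  # or a management group scope
url = (
    f"https://management.azure.com/{scope}/providers/"
    "Microsoft.Blueprint/blueprints/demo-blueprint"
    "?api-version=2018-11-01-preview"
)

blueprint = {
    "properties": {
        "targetScope": "subscription",
        "description": "Demo blueprint with one resource group artifact.",
        # No name/location here, so both are requested at assignment.
        "resourceGroups": {"workloadRG": {}},
    }
}

resp = requests.put(url, json=blueprint,
                    headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["id"])
```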

Once you have your Blueprint defined, your next step is to publish it. When publishing, you must indicate a version. I found it odd that this isn’t restricted in some way; you can literally name one version “1.0” and the next “B”, so I’d recommend adding notes with each version and trying to stick to a pattern. However, it makes sense if you’re going to use different versions for different assignments (I’ll explain that next), so choose relative to your requirements.

When your Blueprint is published, you can then assign it. A nice feature is the ability to assign different versions of a Blueprint to different subscriptions. For example, you could have two versions of a Blueprint with different artifact definitions (think test version and production version) assigned to different subscriptions. They can be independently updated too.

At assignment, there are some options to choose as well as the subscription. They are Resource Locking and Managed Identity.

For Managed Identity, it’s recommended you simply choose System Assigned, as the Blueprints service will then manage the security lifecycle. You can read more on Managed Identities to help you understand and choose what’s right for your environment.

The Resource Locking feature really allows you to maintain control of your governed deployment. If you’re not familiar with Resource Locks, check out this post. The following familiar statuses apply to resources deployed by a Blueprint assignment:

  • Not locked
  • Read Only
  • Do not delete

However, once a status is applied, not even a user/object with the Owner role can modify the resource. This is due to how these statuses are applied: an RBAC deny assignment is applied to artifact resources during assignment of a blueprint if the assignment selected the Read Only or Do Not Delete option. The deny assignment is added by the managed identity of the blueprint assignment and can only be removed from the artifact resources by that same managed identity.

So, how do you edit or delete your resources? Update your Blueprint to “Not locked” and push the update to the relevant assignment. This method prevents unwanted and unexpected changes occurring outside of the scope of the Blueprint.

There is quite a learning curve for Blueprints, I think, as they combine several other services you must be familiar with, so for me, that’s where you have to start. Understand each of the artifacts fully so you can see how they might work well if defined in your environment.

Recently, sample Blueprints have been released that allow you to deploy governed, pre-designed environments with a couple of clicks. One sample is the ISO 27001 Shared Services blueprint, which I think is a good way to get to know the service, even if it might be slightly complex for your first test.

Again, Blueprints are still in Preview, so be as cautious as always with your production environments. I look forward to seeing what changes come with GA, which shouldn’t be much longer considering Blueprints were announced back at Ignite. I will update this post when GA happens.

As always, if you have questions, leave a comment, mail me, or ping me on Twitter!

Azure Monitor – Show me all the data!

A significant but possibly less exciting update to Azure over the past few months has been the revamp and update of Azure Monitor (AzMon). In my opinion, this is a resource that should be included in your design for all deployments. It is now a comprehensive solution for collecting, analysing, and acting on telemetry from Azure as well as on-premises environments.

The below graphic gives an overview of AzMon and its three key areas. Left to right, you start with the six possible ingestion sources for telemetry data. As you can see, everything from application to infrastructure is supported. Next come the two key forms of data used: Metrics and Logs. The final area highlights the processing functions available.

Azure Monitor overview

AzMon collects the following data for processing:

  • Application monitoring data: Data about the performance and functionality of the code you have written, regardless of its platform.
  • Guest OS monitoring data: Data about the operating system on which your application is running. This could be running in Azure, another cloud, or on-premises.
  • Azure resource monitoring data: Data about the operation of an Azure resource.
  • Azure subscription monitoring data: Data about the operation and management of an Azure subscription, as well as data about the health and operation of Azure itself.
  • Azure tenant monitoring data: Data about the operation of tenant-level Azure services, such as Azure Active Directory.

As mentioned previously, the data slots neatly into two categories:

  1. Metrics – You can use Metrics Explorer to view resource performance, among other things, over periods of time. These can be presented immediately within the portal, or additional dashboards can be created for visualisation.
  2. Logs – Using Log Analytics helps you quickly retrieve, consolidate, and analyse the collected log data. This can then be re-used in visualisations as well as alerting. A short sketch of querying both forms follows below.
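As a quick illustration of both forms, the sketch below pulls a platform metric for a VM and runs a Log Analytics query, using the azure-monitor-query Python SDK; the resource ID and workspace ID are placeholders.

```python
# A minimal sketch: read one platform metric and run one log query using
# the azure-monitor-query SDK. Resource and workspace IDs are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, MetricsQueryClient

credential = DefaultAzureCredential()

# 1. Metrics: average CPU for a VM over the last day.
metrics = MetricsQueryClient(credential).query_resource(
    "<vm-resource-id>",  # placeholder
    metric_names=["Percentage CPU"],
    timespan=timedelta(days=1),
    aggregations=["Average"],
)
for metric in metrics.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)

# 2. Logs: agent heartbeats per computer over the last hour.
logs = LogsQueryClient(credential).query_workspace(
    workspace_id="<workspace-id>",  # placeholder
    query="Heartbeat | summarize count() by Computer",
    timespan=timedelta(hours=1),
)
for table in logs.tables:
    for row in table.rows:
        print(row)
```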

The Docs site for AzMon is fantastic. It is filled with guidance on setup as well as tutorials relating to common scenarios. Rather than regurgitate them, below are my personal picks:

Visualise your log data

Respond to events using alerts

Application health alerts

Also hugely important is the level of integration with other services that AzMon offers; take a look here for the currently supported list.

Understanding Azure Reserved Virtual Machine Instances

One of the main benefits of Azure’s billing model is that it offers per-minute billing. This means that if you have an application/service/environment that isn’t required 24/7, you can reduce your costs by using automation so that you only pay for what you consume.
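As a simple example of that automation, the sketch below deallocates a VM using the azure-mgmt-compute Python SDK; run from a scheduled job in the evening (with a matching start job in the morning), it stops compute charges outside business hours. The resource names are placeholders.

```python
# A minimal sketch: deallocate a VM with the azure-mgmt-compute SDK so
# compute charges stop accruing. Names are placeholders; schedule this
# (and a matching begin_start job) via your automation tool of choice.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)

# Deallocate (not just power off) so the compute billing meter stops.
poller = client.virtual_machines.begin_deallocate(
    resource_group_name="demo-rg",  # placeholder
    vm_name="demo-vm",              # placeholder
)
poller.result()
print("VM deallocated; compute billing stopped.")
```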

However, if your environment requires you run a VM constantly, the cost can start to mount up. To help alleviate this, Microsoft offer a solution in the form of long-term fixed price Virtual Machine instances.

These Reserved Instances (RIs) help save money by allowing you to pre-pay for a VM size for a one-year or three-year term. Paying up front allows you to make significant savings on Pay-As-You-Go pricing.

Reserved Instance pricing example

The most common subscription offers include the ability to purchase RIs, but there are some restrictions in terms of how it is approached. The options are as follows:

  • Enterprise agreement subscriptions. To purchase reservations in an enterprise enrollment, the enterprise administrator must enable reservation purchases in the EA portal.
  • Pay-As-You-Go subscriptions, but you must have the “Owner” role on the subscription to buy a reservation.
  • Cloud Solution Provider subscriptions. However, the providing partner must make the purchase on behalf of the customer.

Once purchased, the discount is applied to resource usage that matches the RI capacity purchased. For example, if you purchase a one-year RI for a DS4v3-size VM and you are running a DS4v3, the discount will apply against that usage.

A good strategy is to determine your sizing before purchasing the RI, so my advice would be to run your VMs without an RI for a few months to ensure your sizing is suitable. However, if that is proving difficult, there is a range of flexibility offered within your RI scope.

With instance size flexibility, you don’t have to deploy the exact same VM size to get the benefit of your purchased Azure Reserved Instances (RIs); other VM sizes within the same VM group also get the RI discount. As a rough example, see the below table from the Microsoft announcement.

| VM name          | VM group    | Ratio |
|------------------|-------------|-------|
| Standard_D2s_v3  | DSv3 Series | 1     |
| Standard_D4s_v3  | DSv3 Series | 2     |
| Standard_D8s_v3  | DSv3 Series | 4     |
| Standard_D16s_v3 | DSv3 Series | 8     |
| Standard_D32s_v3 | DSv3 Series | 16    |
| Standard_D64s_v3 | DSv3 Series | 32    |

This means that if you buy an RI for a D2sV3, it would cover half of a D4sV3 instance, and so on. There is more on how this can be applied and the options available to you here.
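To make the ratio arithmetic concrete, here is a toy calculation (not an official billing formula) showing how a reservation’s ratio units spread across sizes in the group:

```python
# A toy illustration (not an official billing calculation) of instance
# size flexibility, using the DSv3 ratios from the table above.
RATIOS = {
    "Standard_D2s_v3": 1,
    "Standard_D4s_v3": 2,
    "Standard_D8s_v3": 4,
    "Standard_D16s_v3": 8,
    "Standard_D32s_v3": 16,
    "Standard_D64s_v3": 32,
}

def coverage(purchased_size: str, purchased_qty: int, running_size: str) -> float:
    """Fraction of one running VM's usage covered by the reservation."""
    purchased_units = RATIOS[purchased_size] * purchased_qty
    return min(purchased_units / RATIOS[running_size], 1.0)

print(coverage("Standard_D2s_v3", 1, "Standard_D4s_v3"))  # 0.5 – half covered
print(coverage("Standard_D2s_v3", 2, "Standard_D4s_v3"))  # 1.0 – fully covered
```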

In general, I think an RI purchase is something that most deployments should be taking advantage of. Once sized correctly and with the ability to leverage flexibility, there are huge savings to be made with relatively low amounts of administrative effort.

More on how to buy an RI here

More on how the discount is applied here