From Ops to C-Level – AI Adoption in Practice – Festive Tech Calendar 2025

Introduction

It’s great to be back for another year of the Festive Tech Calendar. Always an excellent event across the month of December, and looking through the schedule, it is an amazing year of content. Kudos to the team for all the work they put in, and thanks for having me again!

This year, I’m continuing my AI adoption theme, but expanding it slightly beyond the scope of Azure and taking a bit of a ‘business via tech’ approach. This is for a couple of reasons: first, I think AI has changed in the last 12 months, and second, I think the way everyone approaches it has too. This might seem simple, but in reality, or in practice for an IT decision maker, it really isn’t. So, I thought I would share some learnings from where I have seen things work well, and not so well!

AI Opinion

This of course all starts with AI. If you don’t know what that is, I would be slightly worried as to how you found yourself reading this… but AI isn’t just one thing anymore, and to be honest, it never was to begin with.

For everyday users, the GPT models have become what Google became for search – a colloquialism: ‘just ask ChatGPT’, etc. This is great for awareness, but no doubt also frustrating for the teams creating Copilot, Gemini and the rest. It’s even more frustrating for the teams responsible for the original Azure AI Services (Cognitive Services – remember that!?). That brings me to my next point, and one of the key challenges with AI adoption I have seen: AI isn’t just generative, and a perception gap opens up when people assume it is.

AI ‘Types’ I Think are Relevant

For most users, working with an AI assistant of some sort is the majority of the AI interaction they will have: general and/or company-based knowledge helping them find or create content that assists with their work. I genuinely haven’t seen any work setting where stupid images etc. are being created, so don’t believe the hype or prevalence of that based on social media.

Next, a subset of your user base may use more tuned AI assistance – think GitHub Copilot as the example here. Chat-based, powerful, and specific AI assistance. This is often seen as a more impactful ‘upgrade’ to a user’s or team’s skillset, but it is much less clear in terms of adoption requirements.

Then we move into the one-to-many AI – agents. There is a huge range of options here and, with the capabilities in Azure AI Foundry, a near endless set of use cases. From experience, I’ve seen these used to take on one information source, or one task. Those work well, roll out rapidly and require little to no real guidance. I have also seen attempts at multi-agent frameworks/workflows with less success, and finally very few agents that take action without explicit supervision. Take an over-simplistic example – “Write a reminder email for the upcoming meeting and send it to all attendees, cc my boss” – you need serious confidence in that agent to handle information exactly as you need it to.
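To make that supervision point concrete, here is a minimal sketch of a single-task ‘agent’ that drafts the reminder email but never sends anything without an explicit human yes. It uses the Azure OpenAI chat API via the openai Python package rather than the Foundry agent service, and the endpoint, deployment name and send_email helper are all placeholders and assumptions for illustration, not a real setup.

```python
# Minimal sketch of a single-task, human-supervised "agent".
# Assumptions: the openai Python package (v1.x), an Azure OpenAI resource,
# and a chat deployment name - all placeholders here.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def draft_reminder(meeting: str, attendees: list[str], cc: str) -> str:
    """Ask the model to draft the email - drafting only, no sending."""
    response = client.chat.completions.create(
        model="gpt-4o",  # your deployment name - an assumption in this sketch
        messages=[
            {"role": "system", "content": "You draft short, professional reminder emails."},
            {"role": "user", "content": f"Write a reminder email for '{meeting}' "
                                        f"to {', '.join(attendees)}, cc {cc}."},
        ],
    )
    return response.choices[0].message.content

def send_email(to: list[str], cc: str, body: str) -> None:
    """Hypothetical helper - a real one would call Graph or SMTP."""
    print(f"(sending to {', '.join(to)}, cc {cc})\n{body}")

attendees = ["ana@contoso.com", "ben@contoso.com"]
draft = draft_reminder("Q3 planning review", attendees, "boss@contoso.com")
print(draft)

# The explicit supervision step: nothing is sent unless a human says so.
if input("Send this email? [y/N] ").strip().lower() == "y":
    send_email(attendees, "boss@contoso.com", draft)
else:
    print("Draft discarded - nothing sent.")
```

The useful bit is the last few lines: the agent does the heavy lifting, but the send is gated on a person.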

Finally, there has been large-scale adoption of AI within systems or existing applications. However, don’t mistake large-scale adoption for actual success of AI. This is easily where I have seen the most ‘AI washing’ – reinventing some existing feature by calling it AI. This part really bugs me, as I believe it is driving up the costs of already expensive systems, while also interrupting roadmaps of upgrades, fixes and new features that could have more impact.

OK – let’s get into some points around adoption in practice. Ultimately, I’ve seen it boil down to a balance of use case vs blockers. If the use case is valid and outweighs the list of blockers – success. Now, I have drastically simplified that, so let’s start with the blockers to get to the meat of the issue.

Blockers

When the C-level look at AI adoption, they think of productivity return on investment, people management, time to market, and competition. This is no different from any other type of adoption, but I think AI carries a much tougher perception problem. In a presentation back in the summer, and in every AI discussion since, I have called this the ‘surely gap’.

Without exception, this is the number one issue derailing AI adoption. If you cannot address the perception issue early, you are doomed to have a project that will be viewed as not meeting expectations, disappointing, or even a failure – even though AI might be delivering 50-80% increases in productivity, output, or accuracy. The second you hear “Surely AI can solve this, surely AI can create that, surely AI can remove that team” – you are, surely, in trouble.

Flip the approach to the Ops team, or the IT people, and I see two equal-priority issues:

  • Bad data projects – “I’d love to use AI, but our data isn’t ready” – this is either a security issue, a compliance issue, or both. Often it can be as simple as the permissions and structure of data, commonly an issue in any SharePoint Online environment that has been around a while. Plenty of simple work can address this, but the fear of sensitive data being exposed will put the brakes on any AI work (see the sketch after this list for a first pass at finding over-shared data). A positive swing here is that you can perhaps now get that data project budget on the back of AI, but it’s still a significant challenge.
  • Sprawling use cases – this is causing more of a compliance and regulatory issue, with no real resolution I have seen, only mitigation via policy. Take a system that can do five things, but your users only need two of them. So you risk-assess and secure those two. However, if you can’t disable the other three features, users can simply use them if they wish. And it might not be as simple as features; it becomes more complex with generative AI. I expect changes in monitoring, analytics and feature decoupling to come as the EU AI Act takes hold.
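On the data-readiness point above, a useful first step is simply seeing what is already shared more widely than people think. Below is a minimal sketch, assuming Microsoft Graph with a pre-acquired access token and a known site ID (both placeholders, acquisition not shown), that flags top-level items in a SharePoint document library carrying organisation-wide or anonymous sharing links. A real audit would recurse through folders and check direct grants too; this only shows how little code the first pass takes.

```python
# Minimal sketch: flag broadly-shared items in one SharePoint document library
# before pointing an AI assistant at it. Assumes a Graph token with suitable
# read permissions is already available, and a known site ID - both placeholders.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = os.environ["GRAPH_ACCESS_TOKEN"]    # placeholder - acquire via MSAL etc.
SITE_ID = os.environ["SHAREPOINT_SITE_ID"]  # placeholder site ID
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def top_level_items(site_id: str) -> list[dict]:
    """List items in the root of the site's default document library."""
    url = f"{GRAPH}/sites/{site_id}/drive/root/children"
    items = []
    while url:
        page = requests.get(url, headers=HEADERS, timeout=30).json()
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")   # follow paging if present
    return items

def broad_link_scopes(drive_id: str, item_id: str) -> list[str]:
    """Return sharing-link scopes wider than 'specific people'."""
    url = f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions"
    perms = requests.get(url, headers=HEADERS, timeout=30).json().get("value", [])
    return [
        p["link"]["scope"]
        for p in perms
        if p.get("link", {}).get("scope") in ("organization", "anonymous")
    ]

for item in top_level_items(SITE_ID):
    drive_id = item["parentReference"]["driveId"]
    scopes = broad_link_scopes(drive_id, item["id"])
    if scopes:
        print(f"{item['name']}: shared via {', '.join(scopes)} link(s)")
```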

Lessons Learned

The first challenge with any blocker is knowing about it. Once you know about a problem, you can start to formulate a plan to solve it. And with the blockers I’ve outlined, you can probably already guess some solutions.

First, and most important in my experience, is dealing with the perception issue. I view AI as the least ‘IT’ project a company can take on at present. Something like Copilot has little to no real IT or Ops work to configure it. License the user – milestone complete. But if an Ops team is given this as a project, it can miss the real beginning and end of the project – perception and adoption.

Address the perception up front – why do we need AI, what will it do, what does good look like? Work backwards, and pick use cases that have simple, measurable outcomes.

Plan the adoption the second you have defined use cases – targeted users, timeframe, and a cadence for revisiting. Most AI usage requires habit forming of some sort, so adoption plans need to promote and push that.

In terms of Ops challenges, the most important lesson I have learned is to get the team on board, and then get them out of the way. AI has the worst risk of shadow IT and data leakage I have ever seen. Users will want to try and use it. Give them something early, give them something decent, and give them guidance. Then, and only then – block everything else you can.

My Wishlist

This is tricky, but maybe not too complex. Instead of a list, I think I would push for one change and one change only – greater awareness or understanding of how to align a use case and capability. Nobody thinks email is the entire answer to their comms requirement, and they aren’t disappointed when it can’t do instant messaging or video calling. I know that isn’t necessarily a fair comparison, but if we can get AI understanding closer to that, I think it will greatly improve not only the adoption rates, but the success of AI projects.

I have another point on cost, and probably several points on environmental impact, but they are at least a blog post each. Perhaps something on that in Azure Spring Clean 2026…

In Closing

To close, I will condense my opinion into a simple list:

  • Speak to your stakeholders, gather opinions and use cases. Identify enablers and detractors, plan for both.
  • Pick your use cases, always starting with the simple ones that have measurable, confirmed outcomes.
  • Address Ops concerns, get the team onboard for rollout. Create your plan for enablement and adoption.
  • Meet your stakeholders again, and get the use case and outcome crystal clear. Leave no room for the ‘surely gap’.
  • Roll out and continuously adopt. Revisit usage and the use case. Evolve the plan as the use case does.

Exploring: Microsoft Copilot for Azure

Recently, I was lucky enough to gain access to Microsoft Copilot for Azure as part of a limited preview. For anyone who missed the announcement at Ignite, here is what Microsoft describe it as:

Microsoft Copilot for Azure (preview) is an AI-powered tool to help you do more with Azure. With Microsoft Copilot for Azure (preview), you can gain new insights, discover more benefits of the cloud, and orchestrate across both cloud and edge. Copilot leverages Large Language Models (LLMs), the Azure control plane, and insights about your Azure environment to help you work more efficiently.

So – what does that mean in practice? For me, this means reading the docs, then getting stuck into actually trying elements of this out. To be transparent, I had low expectations for this service. I am not 100% sure whether it is aimed at me, or someone with less Azure experience. I was also conscious that this is the limited preview I am working with, so there will be some oddities.

First up, the integration into the Portal UX – I like it. It’s simple, and consistent. As it is a tenant-level service, it stays in place as you jump around the Portal from a Subscription to a Resource to Entra ID, for example.

Next, what can I use this for that is quicker than me doing this myself? I will be honest, I struggled a bit here. This is for two reasons. One, this is enabled in my MVP tenant, so I have very little Production or day-to-day work to be done. Two, I was looking for something interesting rather than ‘tell me how to build a VM’.

So, I started with a question I know the answer to, but anyone who follows #AzNet knows we are all dying for progress on…

Imagine my surprise with how confident that response is! OH MY GOD I FOUND A THING. Well no, it doesn’t work. And I have no idea what it means in Step 3. If you find out – please let me, Aidan and Karl know, thanks 🙂 But I do like that it attempts to back up its answer with links to documentation.

As you make requests, it dynamically updates the text to tell you what it is ‘thinking’, which I really like.

And that ability to write queries is a real winner for me. It saves a lot of time, though you need to be quite specific with the ask and the detail – but that’s no real surprise at this stage.
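To give a flavour of what ‘write queries’ means here: Copilot will draft Azure Resource Graph (KQL) queries from a plain-language ask. Below is a minimal sketch of running a similar query yourself from Python, so you can see the shape of what it produces – the subscription ID and the KQL text are placeholders for illustration, not output copied from Copilot.

```python
# Minimal sketch: run an Azure Resource Graph query like the ones
# Copilot for Azure can draft. Assumes azure-identity and
# azure-mgmt-resourcegraph are installed and DefaultAzureCredential
# can sign in (az login, managed identity, etc.).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

credential = DefaultAzureCredential()
client = ResourceGraphClient(credential)

# KQL similar to what a "show me all my virtual networks and their
# address spaces" prompt might produce - the query text is illustrative.
query = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder subscription ID
    query=(
        "Resources"
        " | where type =~ 'microsoft.network/virtualnetworks'"
        " | project name, location, addressSpace = properties.addressSpace.addressPrefixes"
    ),
)

result = client.resources(query)
print(f"{result.total_records} virtual network(s) found")
for row in result.data:  # each row is the set of projected columns
    print(row)
```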

I do like its ability to take quite a non-specific question and offer a decent and useful output in response.

However, I am finding myself trying to find things for it to do. This is OK during the preview, where there is no additional cost. However, it’s not clear what pricing will actually be just yet, and the vague language on the landing site makes me think this will be charged for.

Overall, I think it’s a welcome addition to the AI assistant space from Microsoft. I think those of us working with Azure would feel quite left behind otherwise. But I do think that as the platform is so vast and as each environment is unique, the core use case for different people will vary and that could significantly impact whether this is used widely or not. Having said that, I am looking forward to how this progresses, and more people having access can only mean improvements.

Opinion – I’m sold on AI

For the first time in a while, I’m writing just about my thoughts on a topic, rather than specifics. As a result, this will be categorised differently, and maybe I will continue it as a series…

It is hard to get through a single day at the moment without AI popping up. Social media is alight with opinions, dos and don’ts, hype and Skynet fear. I’m going to start by saying – I think that’s important.

If you spend some time looking at the capabilities that modern AI can offer, you quickly realise that this isn’t a fad. This is something that is coming at us like a train, and is here to stay. That certainty is why I think a mix of opinions is important. Experts on the subject are openly saying they are unsure where this can, will, or should go. That is both exciting and terrifying.

Rather than focus on the terrifying (I personally don’t think it will get there FWIW), I want to focus on the exciting. Recently I’ve seen some use cases, and some demos that have convinced me that this will make my work life better. While I haven’t seen something yet that helps my personal life directly, perhaps AI helping work can indirectly give me a better balance of time.

Also, I currently have zero interest in the creative side. Art, music, even blog posts – I have no desire to see AI involved with any of them.

Part of my job requires me to explain complex technology to people in simple language. I have spent some time trying to think about how I could explain AI in this way. I think simply stating it’s a new assistant is more confusing than helpful – “I don’t want or need another Siri”. I also think getting too deep on LLMs etc. helps no one. I’ve simply settled on: AI is an enabler for a new generation of your productivity tools.

In the same vein that an abacus helped you count, and a calculator was a leap forward, AI will be the tech that facilitates a leap forward for your tool.

Let’s take Excel as an example. It’s not a core tool of mine, but I use it a lot – mostly to read data, rarely to work with complex sets. However, when I do, finding out how to do something is difficult. Excel has been around a long time and there is a huge amount of content, so searching for and finding what you want always takes longer than a single Google search. Enter AI to boost productivity. This is what I want from it.

Working with VS Code and GitHub Copilot to create Bicep templates, I’ve seen some of this already: write a comment, get some code. It’s not perfect, and sometimes it’s not even correct – and that is important to me, and why I am calling AI a tech to boost your tool.

I don’t think AI can replace someone. I might be wrong, absolutely. But right now, I see it making people more productive. You still need people to validate and confirm the value of what AI has returned. Regardless of capability, I don’t want AI to find data, edit it, and send it to a client, without me involved. I do want it to do the heavy lifting for me, hopefully helping me hit send quicker and with less mental effort. But that is far enough for now please.

As the title says, I’m sold. As long as modern AI can save me time, and do so with a degree of accuracy that means I spend my time validating and tweaking rather than correcting its work, I think this will change how we work forever. It is a very exciting time to be working with this technology.