AI is transforming work

But it’s also creating a new kind of data risk

Artificial intelligence tools such as ChatGPT, Copilot, Gemini and Claude are now embedded in everyday work.

They are being used to draft emails, summarise documents, analyse information and speed up decision-making. In many organisations, however, adoption has moved faster than governance. That gap creates a very specific kind of risk: sensitive information can be shared into AI prompts in ways that feel harmless in the moment, but that can introduce compliance, legal and reputational exposure.

This risk rarely comes from malicious intent. It is much more likely to come from ordinary teams trying to work quickly. A finance colleague may upload a spreadsheet to generate a summary without realising it contains personal data or commercially sensitive figures. A salesperson may paste a customer list into an AI tool to clean up formatting.

An HR team member may use AI to draft communications and inadvertently include information that should never be shared beyond approved systems. In parallel, “shadow AI” grows quietly when employees use unapproved tools, browser plug-ins, or embedded AI features that have never been assessed by security or privacy teams. The problem is that prompts do not look like traditional threats, so many established security controls are not designed to catch mistakes at the point they happen.

AI and governance

The regulatory context makes this more than a productivity discussion. If an organisation processes personal data, then AI use is immediately tied to the same obligations that already exist under UK GDPR and the Data Protection Act 2018. Using an AI tool does not reduce responsibility for handling personal information appropriately. UK GDPR is grounded in principles such as lawfulness, fairness and transparency, purpose limitation, data minimisation, and integrity and confidentiality. These principles still apply when personal data is typed, pasted, or uploaded into an AI prompt. The Information Commissioner’s Office provides guidance on these principles and what organisations are expected to demonstrate in practice.

AI use is immediately tied to the same obligations that already exist under UK GDPR and the Data Protection Act 2018

The simplest way to understand the compliance challenge is to consider what a regulator, auditor, or data protection officer would reasonably ask. They will want to know which AI tools are being used across the organisation, whether usage is approved or informal, where the highest volumes of use are occurring, and what kinds of sensitive information are being entered. They will also expect an organisation to show that it has taken proportionate steps to reduce risk, especially where personal data might be exposed. The UK Government’s overview of data protection makes clear that the legal framework exists to govern how organisations use personal information, and the Data Protection Act 2018 remains a key part of that framework alongside UK GDPR.

It’s not just about privacy

AI governance is also expanding beyond privacy alone. For organisations that operate in the EU, serve EU customers, or have future plans that involve those markets, the EU AI Act introduces a risk-based legal framework for how certain AI systems are developed and used. Even where it does not apply directly, it signals the direction of travel and reinforces the expectation that organisations should be able to evidence responsible oversight of AI-related risk. At the same time, many organisations want AI controls that align with established security management approaches. ISO/IEC 27001, for example, provides a structure for managing information security risk through an information security management system, and that structure is often helpful because it turns AI concerns into policies, controls, evidence and continuous improvement.

A practical AI governance programme usually starts with visibility, because an organisation cannot govern what it cannot see. Teams need a baseline view of which AI services are being used, which departments are using them most heavily, and whether sensitive data is being entered into prompts. This baseline helps identify where risk is concentrated, and it allows organisations to take a proportionate approach rather than applying blanket restrictions.

Once visibility exists, the next step is governance that people can realistically follow. Clear acceptable use policies written in plain language are essential, but they must be grounded in real workflows. In practice, that means being explicit about what must never be shared into AI tools, what may be acceptable when data is anonymised or low risk, and which approved tools and methods are preferred for day-to-day work.

The importance of real-time controls

Even the best policies will not prevent every mistake, because the moment of risk is often immediate. Sensitive information is typically exposed at the point someone pastes text, uploads a file, or submits an AI prompt. For that reason, real-time controls are increasingly important. The most effective controls do not just block everything. They provide guidance and guardrails that reflect organisational policy and data sensitivity. They can warn users when content looks risky, prevent actions that clearly violate policy, and support redaction or masking of sensitive fields before information leaves the organisation’s control. They can also record activity for oversight and tuning, so governance improves over time rather than becoming a one-off exercise. This approach supports the aim of enabling AI safely while keeping workflows moving.
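To make the redaction idea concrete, here is a minimal sketch of masking sensitive fields in a prompt before it leaves the organisation’s control. The patterns, labels, and `redact_prompt` function are illustrative assumptions for this article, not Harmonic’s actual detection logic, which uses far richer context-aware classification than simple pattern matching.

```python
import re

# Illustrative patterns only; real DLP engines use much richer,
# context-aware detection than these simple regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(text: str) -> tuple[str, list[str]]:
    """Mask sensitive fields in a prompt and report what was found,
    so the event can be logged for oversight and policy tuning."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return text, findings

redacted, found = redact_prompt(
    "Summarise this note for jane.doe@example.com, NI AB123456C."
)
```

In a real control, the list of findings would drive the policy decision: warn the user, block the submission, or allow it with the masked version, while recording the event for later review.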

Even the best policies will not prevent every mistake, because the moment of risk is often immediate.

Harmonic Security is built to solve a simple, very modern problem: people are using AI through the browser every day, and sensitive information can slip into prompts without anyone noticing. Harmonic sits in that browser journey, so you can see what tools are being used, what data is being shared, and you can put sensible controls in place before something becomes an issue.
Core to Cloud comes in for the part most organisations actually struggle with. Buying a tool is the easy bit. Making it work properly across the business, keeping it aligned to policy and regulation, and proving it to auditors is what takes time and experience. We don’t treat Harmonic as a box you install and hope for the best. We deliver it as a joined-up programme that gives you visibility, sets clear rules, and then keeps those rules working in the real world.

A clear picture of what’s happening

That starts with getting a clear picture of what’s already happening. In almost every organisation, AI use is spread across teams and tools, and a lot of it sits outside official approval. We help you map that properly, so you can make decisions based on facts rather than assumptions. Once you have that baseline, we help you turn your governance requirements into something practical. That means defining what “acceptable use” looks like in plain language, making it role-aware, and matching controls to the type of data being handled, not just the name of the website someone visits.

We also stay close after deployment, because this is not a one-off project. Policies need tuning. Teams change how they work. New AI features appear. We help you keep control without turning AI into a constant battle between security and the business. You get reporting that leadership can understand, evidence that stands up in audits, and a clear way to show that you are improving over time rather than reacting to problems after the fact.

There is a cost angle to this as well, and it is usually where the value becomes obvious. When you can see what is being used and stop risky sharing early, you spend less time on investigations, reduce the chance of expensive incidents, and avoid paying for tools and licences you do not need. In short, you get safer AI use, with less effort spent cleaning up mistakes.

If you want to see how this works in practice, fill in this form to arrange a demo. We will show you how Harmonic protects AI use in the browser, and how Core to Cloud helps you roll it out in a way that fits your organisation, supports compliance, and delivers real savings by reducing risk and the time spent managing it.

The challenge organisations face

AI tools are being adopted rapidly across business teams, often outside of formal security or governance frameworks. In many organisations, employees are using public or embedded AI tools as part of everyday workflows, without clear visibility into what data is being shared or whether that usage aligns with regulatory and contractual obligations.

As a result, leadership teams often lack a clear view of AI-driven data risk across the organisation.

About Core to Cloud

This series is featured in our community because it reflects conversations increasingly happening among senior security and risk leaders.

Much of the industry focuses on tools and threats, with far less attention given to how confidence is formed, tested, and sustained under scrutiny. The perspective explored here addresses that gap without promoting solutions or prescribing action.

Core to Cloud is referenced because its work centres on operational reality rather than maturity claims. Their focus on decision-making, evidence, and validation aligns with the purpose of this publication: helping leaders ask better questions before pressure forces answers.
