Sentinelle PII
Critical risk for businesses

Shadow AI: when your employees feed ChatGPT with your company data

More than half of employees who use generative AI tools do so without authorization. With every prompt, client emails, phone numbers and IBANs leak to third-party servers — with no traceability.

What is Shadow AI?

Shadow AI refers to the use of generative artificial intelligence tools (ChatGPT, Claude, Gemini, Copilot) by company employees without validation or oversight from the IT department or DPO.

It is the modern equivalent of "Shadow IT": instead of installing unauthorized software, employees copy-paste sensitive data into AI chatbots accessible from their browser. No installation needed, no visible trace — and therefore no control.

The real problem: it's not the AI itself — it's the lack of protection for the data passing through it.

The scary numbers

55%+ of generative AI users use unapproved tools at work (Salesforce, 2024)

4% of global annual revenue: the maximum GDPR fine (GDPR, Art. 83)

€3.6M: average cost of a data breach in France (IBM, Cost of a Data Breach 2025)

According to a Salesforce study (2024), more than half of employees who use generative AI at work do so without their company's approval. And nearly 7 in 10 have never been trained on the risks of sharing sensitive data.

Concrete risks for your business

Legal risk

GDPR and regulatory fines

Sending personal data (emails, phone numbers, IBANs) to an AI service without a legal basis constitutes a GDPR violation. If that data is then breached, Article 33 requires notifying the supervisory authority within 72 hours.

Fines: up to €20M or 4% of global annual revenue, whichever is higher (Article 83)

Data leak

Transmission to third-party servers

Data sent to ChatGPT or Claude is transmitted to OpenAI or Anthropic servers. Simply transmitting PII constitutes processing under GDPR.

Requires a legal basis under Article 6, such as consent or a contract

Reputational risk

Client and partner trust

A client data leak via an AI tool makes headlines. Beyond the fine, it is the trust of your clients and partners that is at stake.

Brand image can be durably impacted

Real leaks

Shadow AI is not a theoretical risk. Here are documented incidents and common scenarios observed in companies.

Documented cases

Samsung (2023)

Engineers pasted confidential source code into ChatGPT three times in less than a month. Samsung banned the tool internally.

Source: TechCrunch

Mata v. Avianca (2023)

A New York lawyer used ChatGPT to draft a legal brief. The AI invented six fictitious legal precedents, and the court sanctioned the lawyers and their firm.

Source: CNN

Common typical scenarios

Customer support

Agents copy-paste full conversations (emails, phone numbers, addresses) to generate responses faster.

HR and recruitment

Recruiters submit full CVs (name, address, social security number) to get summaries or evaluations.

How to protect yourself from Shadow AI

Insufficient

Ban AI

This is what Samsung, Apple and several banks did. Problem: employees work around the ban. Shadow AI exists precisely because tools are banned without an alternative.

Zero visibility, zero control.

Necessary

Train employees

Essential to create risk awareness, but insufficient on its own. An employee under deadline pressure will still paste a client email into ChatGPT.

Human error remains the #1 cause of leaks.

Recommended

Protect at the source

The Privacy by Design approach: intercept sensitive data before it is sent to the AI. The employee continues working normally, PII is masked.

Automatic and transparent protection.

Sentinelle PII: frictionless protection

A Chrome extension that automatically detects and pseudonymizes personal data before sending it to ChatGPT, Claude or Gemini.

Step 1

Detection

The extension detects emails, phone numbers, IBANs, card numbers and other PII in the input area in real time.
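The detection step can be sketched with simple regular expressions. This is only an illustrative sketch in Python; the patterns below are assumptions for the example, not Sentinelle PII's actual detection rules.

```python
import re

# Illustrative patterns only -- production PII detection needs stricter,
# locale-aware rules (these simplified regexes are assumptions).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\+?\d[\d .-]{8,}\d"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def detect_pii(text):
    """Scan a prompt and return (kind, value) pairs for each match."""
    hits = []
    for kind, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((kind, match.group()))
    return hits
```

In the extension this scan would run as the user types in the input area; here it is a plain function for clarity.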

Step 2

Pseudonymization

Before sending, sensitive data is replaced with placeholder tokens ([EMAIL_1], [IBAN_1]).
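A minimal sketch of this token substitution, in Python and restricted to emails for brevity (the function name and pattern are assumptions for the example):

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.\w+")  # simplified pattern

def pseudonymize(text):
    """Replace each distinct email with a stable [EMAIL_n] token.

    Returns the masked text plus the token -> original mapping,
    which the re-injection step needs to restore the response.
    """
    mapping = {}

    def to_token(match):
        value = match.group()
        # Reuse the same token if the value repeats in the prompt.
        for token, original in mapping.items():
            if original == value:
                return token
        token = f"[EMAIL_{len(mapping) + 1}]"
        mapping[token] = value
        return token

    return EMAIL_RE.sub(to_token, text), mapping
```

Only the masked text goes to the AI service; the mapping stays on the user's machine.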

Step 3

Re-injection

The AI responds normally — then the tokens are replaced with the real data in the displayed response.
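Re-injection is then a straightforward reverse lookup on the token mapping built during pseudonymization (again an illustrative Python sketch, not the extension's actual code):

```python
def reinject(response, mapping):
    """Restore the real values in the AI's response.

    `mapping` is the token -> original dict produced when the
    prompt was pseudonymized, e.g. {"[EMAIL_1]": "bob@corp.fr"}.
    """
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response
```

Because the substitution happens only in the displayed response, the AI provider never sees the real values at any point in the round trip.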

100% local: detection and pseudonymization happen in your browser, so the original PII never reaches the AI provider's servers.

The result: your teams use AI freely, your company is protected, and your DPO can prove that protection measures are in place (Article 32 GDPR).

Protect your data right now

Installs in 30 seconds. Free up to 10 detections per day. No IT configuration required.