Your Company Is Leaking Data From The Inside

Breaches and ransomware are at an all-time high. A big reason companies care about security is that they want to build a perimeter around their organization to keep bad actors from getting in and accessing their data.

But they forget about all the employees inside the company, actively sending data out to third parties through their browsers, email, text messages, and AI chatbots.

Before we go any further, a quick note: today is Giving Tuesday. If this newsletter helps you protect your privacy, please consider supporting the free educational work we do with a donation to our non-profit. Ludlow Institute is community-funded, and your support directly pays for investigations, tutorials, and resources like this one. Many employers will match your contribution, which doubles your impact. Thank you so much for helping us keep independent, privacy-focused education available to everyone, and for spreading the word.

Donate

The landscape has shifted. We need to think just as much about what the company is voluntarily sharing with others as what might be involuntarily leaked in a hack. And we have to remember that every piece of data we hand to a third party is now sitting in someone else’s system, subject to their security bugs, their subpoenas, and their insider threats.

The new defensive mindset is simple:
Protect yourself by being more selective about what you share in the first place.

At your company, you can no longer operate on a “good faith” assumption that every third-party tool you pour sensitive company data into will protect it. Instead, you need to protect yourself with more selective and judicious disclosures: share less, choose privacy-focused companies wherever possible, and encrypt what remains. That is how you shrink the blast radius when defenses fail or access is compelled. And they basically all fail, eventually.

The mindset shift the modern company needs to adopt is one where it becomes second nature for employees to notice unneeded data sitting in centralized clouds; chat logs that never expire and can be taken out of context; microphone permissions no one reviewed that can turn private conversations into fuel for embarrassment or retaliation; browser extensions with god-mode privileges leaking protected IP; or staff piping sensitive queries into an AI service that is later subpoenaed, turning those prompts into part of the public record.

The data your employees are sending out to countless entities is a huge liability, and it puts the security of your entire organization at risk. It’s up to you to get them to “think different.”

In this newsletter I am going to walk through a handful of company privacy best practices that focus on a neglected side of risk: data leaking out from the inside. Keep in mind that the examples of specific tools I have provided are just some of the things that I use at my company. If you have tools that you like and would recommend to others, please let everyone know in the comments.

1. Browsing

Let’s start with the lowest-hanging fruit.
The browser and search engine your employees use can leak a huge amount of sensitive company data, which ends up quietly stored on third-party servers. Luckily, these are the easiest tools to swap out.

Lock down the browser

  • Use a privacy-respecting browser that locks down your settings by default.

  • Set a default search engine that minimizes logging and profiling.

  • Audit extensions regularly. Browser extensions are a huge attack vector and should never be installed unless approved by your company’s security team. Installing an unvetted browser extension is the equivalent of clicking on random links or downloading random software onto your machine. In my opinion, the only extension most users should have is a password manager from a trusted provider (details in a later section).

Some private, usable options:

Browser
I personally like the Brave browser. By default it blocks third-party ads, trackers, fingerprinting, and other data collection through its Shields feature.
If you would like to compare Brave’s privacy protections to other browsers, I recommend PrivacyTests.org.

Search engine
The search engine built into Brave is Brave Search, which is designed not to collect personal information about you, your device, or your searches, and not to use your queries to build behavioral profiles.

Other search engines you could consider include:

  • Startpage, which acts as a privacy-preserving front end to Google’s search results.

  • DuckDuckGo, which has better privacy protections than Google and a strict no-profile policy.

I recommend making a hardened, privacy-respecting browser and search engine such as Brave the default in your written policies, then enforcing that choice through your device management.
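
If your fleet runs a Chromium-based browser such as Brave, one way to do that is with Chromium-style managed policies. Here is a rough sketch in Python that writes such a policy file. The policy names come from Chromium's enterprise policy list, but the output filename and the extension ID are placeholders, and the deployment mechanism (JSON file, configuration profile, or Group Policy) varies by browser and platform, so check your browser's enterprise documentation and your MDM.

    # Rough sketch: generate a Chromium-style managed-policy file that pins the
    # default search engine and blocks all extensions except one approved
    # password manager. The filename and extension ID below are placeholders.
    import json

    policy = {
        # Pin a privacy-respecting default search engine
        "DefaultSearchProviderEnabled": True,
        "DefaultSearchProviderName": "Brave Search",
        "DefaultSearchProviderSearchURL": "https://search.brave.com/search?q={searchTerms}",
        # Block everything, then allow only the vetted password-manager extension
        "ExtensionInstallBlocklist": ["*"],
        "ExtensionInstallAllowlist": ["your-approved-extension-id"],
    }

    with open("company-browser-policy.json", "w") as f:
        json.dump(policy, f, indent=2)

Your device-management tool then pushes that policy to every machine, so the default cannot quietly drift back to a data-hungry setup.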

2. AI

This is the big new leak that almost no one is prepared for. Employees are pasting contracts, product roadmaps, unreleased code, HR issues, financials, and customer data into AI chatbots because it helps them work faster. For the individual, this feels productive. For the company, it can be catastrophic.

Your company can absolutely still use AI. The key is to mitigate leaks by choosing more privacy-focused options and setting clear rules.

Treat the major AI platforms like a sensitive database. Assume every query may be logged indefinitely and later accessed through legal process, insider abuse, or a breach.

Advice:

a. Set a no-go policy
Write a short, clear policy that spells out what must never go into any external AI. Make this explicit and simple.

Example “never” list for public cloud AIs:

  • Real names

  • PII such as email addresses, home addresses, phone numbers

  • Company financials and donor information

  • Customer data, support tickets, internal HR issues

  • Credentials, API keys, or passwords (these should never go into any tool)

b. Provide employees with access to more private options

Employees want to use AI. If you do not give them safe options, they will use unsafe ones. Provide access to more private alternatives and be clear about what information can go where. A simple tiered list helps.

Least private:
Cloud-hosted, account-based platforms
Examples: ChatGPT, Claude, Perplexity, Gemini, etc.

  • Do not:

    • Put real names, home addresses, email addresses, phone numbers

    • Add company financials, donor information, or customer data

    • Paste SSNs, passwords, API keys, or other secrets

  • Do:

    • Use these tools only for things you would be comfortable posting publicly on social media

    • Assume the content could be accessed without your knowledge and might one day be part of a legal process or a breach

    • Ask yourself: “Would this embarrass me or the company, or put clients, donors, or staff at risk if it were released out of context?” If yes, do not paste it here

Middle ground:
Privacy-focused cloud AI
Examples: Brave’s Leo, Venice.ai, NanoGPT.

These are cloud-hosted, but with stronger privacy protections and better logging policies than the mainstream players. Some vendors claim no logging or short retention windows and say they do not use your prompts to train models.

  • Use these tools for sensitive internal work after removing direct identifiers:

    • Strip real names and convert to roles, e.g. “Client A,” “Vendor B,” “Engineer 1” (a rough redaction sketch follows this list)

    • Remove addresses, phone numbers, and unique IDs

  • Keep the mindset that it is still a cloud service. They still see your traffic. Their policies can change. Review them regularly.
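
For the redaction step above, here is a minimal sketch of a pre-paste scrubber in Python. The names in ROLE_MAP and the example text are hypothetical, and simple regexes like these only catch obvious identifiers, so treat it as a seatbelt rather than a guarantee.

    # Minimal pre-paste redaction sketch. ROLE_MAP and the example text are
    # hypothetical; the regexes only catch obvious, well-formed identifiers.
    import re

    ROLE_MAP = {"Jane Smith": "Client A", "Acme Corp": "Vendor B"}

    PATTERNS = {
        "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "[PHONE]": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
        "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(text: str) -> str:
        # Replace known names with roles first, then mask anything pattern-shaped
        for name, role in ROLE_MAP.items():
            text = text.replace(name, role)
        for placeholder, pattern in PATTERNS.items():
            text = pattern.sub(placeholder, text)
        return text

    print(redact("Email Jane Smith at jane@acme.com or call 555-867-5309."))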

Most private:
Self-hosted AI

If your company is large enough, you might consider setting up infrastructure and self-hosting large models that employees can use internally. Inside your own environment, behind your own access controls, staff can be much freer to paste sensitive PII or financials, because the data never leaves your infrastructure.

This option will not make sense for smaller organizations, because maintaining these systems is costly and complex. For larger organizations, it can be a very strong choice; a rough sketch of how employees might call such a system follows the list below.

  • Models run on your own hardware or in your own tightly controlled cloud environment

  • Prompts and outputs stay inside your infrastructure

  • You can integrate the model with internal data sources without exposing that data to an external provider
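
As a sketch of what the employee-facing side can look like: many self-hosting stacks (vLLM and Ollama, for example) can expose an OpenAI-compatible API inside your network. The hostname and model name below are hypothetical placeholders, and the example assumes the requests package; the point is that prompts travel only to infrastructure you control.

    # Sketch of calling a self-hosted model over an OpenAI-compatible endpoint.
    # The URL and model name are placeholders for whatever your infra team runs.
    import requests

    INTERNAL_ENDPOINT = "https://llm.internal.example.com/v1/chat/completions"

    resp = requests.post(
        INTERNAL_ENDPOINT,
        json={
            "model": "company-llm",
            "messages": [
                {"role": "user", "content": "Summarize these board meeting notes: ..."},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])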

It is also possible for individual employees to run AI locally on their machines, but the models capable of running on a standard laptop are not as strong as the best cloud tools, and local setups can be complicated for employees to maintain. If you want to learn how to get started with local models anyway, see our tutorial here:

c. Ban Agentic AI Browsers

Include in your policy that staff must not use “agentic AI browsers” that automatically roam the web and click on things for them, such as Comet or Atlas. These tools are currently a major security concern because of invisible prompt injections and uncontrolled web actions. You do not want an AI agent browsing the internet and acting on your behalf without tight controls. Perhaps they will become more secure in the future, but they are not there yet.

Takeaway
AI is set to be the biggest security and privacy hole in your organization unless you train your employees how to use it responsibly.

You do not have to sit out the AI wave. You can absolutely use powerful tools that increase your company’s productivity. You just need to:

  • Teach employees clear, simple rules about what never goes into external AI

  • Give them more private alternatives

3. Communications

Internal chat is where most of the real work happens. It is also where people are the most relaxed and least careful.

What to do:

  • Choose end-to-end encrypted messengers for internal chats and sensitive coordination.

  • Treat disappearing messages as the default for casual conversation. Most internal back and forth does not need to live forever.

  • If your industry requires retention, keep it narrow. Use specific archival channels for conversations that must be kept, instead of hoarding every random thread.

  • Make it obvious which spaces are ephemeral and which are permanent so staff do not assume everything is “just chat.”

Some private, usable options

  • For DMs and casual group chats: Signal is a great end-to-end encrypted messenger.

  • For collaborative document discussions: Proton Docs is more private than Google Docs, and you can still collaborate in real time.

You may think saving everything by default is prudent, but it’s actually more of a liability than you realize.

4. Email

Email is the cockroach of the internet. It never dies. It’s an inherently insecure tool. And it lives on backup servers you forgot existed.

Best practices

  • Use end-to-end encryption where possible for sensitive content, or at least password-protected attachments shared out of band (a rough sketch of that approach follows this list).

  • For external parties, prefer portals or secure file transfer when the data is truly sensitive.

  • Set clear rules on what must not go into email at all, and offer better alternatives so people are not forced to choose between “get work done” and “follow policy.”

  • Remind staff that as you tighten email hygiene, they remain the front line against phishing and malware. Short, frequent, specific training works better than once-a-year compliance theater.

  • Keep personal and work mail completely separate to prevent cross contamination between consumer services and corporate accounts.
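
For the password-protected-attachment idea in the first bullet above, most teams simply use their mail provider's password-protected messages or an encrypted archive. But the principle is easy to sketch; this example assumes the third-party cryptography package (pip install cryptography) and is illustrative only. The key point is that the password travels over a different channel than the file.

    # Sketch of password-protecting a file before attaching it. Assumes
    # "pip install cryptography"; share the password out of band (e.g. Signal).
    import base64
    import os

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    def encrypt_file(path: str, password: str) -> str:
        salt = os.urandom(16)
        # Derive a Fernet key from the password using scrypt
        key = base64.urlsafe_b64encode(
            Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(password.encode())
        )
        with open(path, "rb") as f:
            token = Fernet(key).encrypt(f.read())
        out_path = path + ".enc"
        with open(out_path, "wb") as f:
            f.write(salt + token)  # keep the salt so the recipient can re-derive the key
        return out_path

    # Example (hypothetical file name):
    # encrypt_file("board-report.pdf", "a long passphrase shared over Signal")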

Some private, usable options

My personal preference is Proton, because they provide an entire ecosystem, with calendar, docs, VPN, and other tools as well as email. Inside Proton, messages between Proton users are end-to-end encrypted by default, and for recipients outside Proton you can use either password-protected emails or PGP.

Other strong options are Tuta and StartMail. Tuta encrypts everything in your mailbox and lets you send end-to-end encrypted messages both inside their network and to external recipients using a shared password. StartMail builds on PGP and also offers password-protected messages when the other side has no encryption tools.

Treat email as a hostile environment that you occasionally have to use, not the default home for all company knowledge.

5. Passwords

Most password practices are terrible. People reuse passwords or create passwords that are easily brute-forced. Everyone should generate random, unique passwords for every site and store them in a password manager protected with 2FA (a quick sketch of what that randomness means in practice follows this list).

  • Require a real password manager and enforce unique, long credentials for every system. Some password manager options: 1Password, Dashlane, Bitwarden, Proton Pass, or, if you want to go local-only, KeePassXC.

  • Ban password reuse outright.

  • Ban the use of dictionary words in passwords. They make passwords far easier to crack (yes, even if you add 123! to the end).

  • Turn on hardware-backed 2FA (security keys) on every website that supports it.

  • Use authenticator-app 2FA everywhere else.

  • Avoid SMS 2FA wherever possible; it is insecure.
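
As a quick illustration of what random and unique means in practice (your password manager's generator does this for you), here is a minimal sketch:

    # Minimal sketch of generating a strong random password. In practice, let
    # your password manager do this; the point is length plus real randomness.
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 24) -> str:
        # secrets uses the OS's CSPRNG, unlike the predictable random module
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())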

Takeaway: Password practices inside organizations need to be dramatically upgraded.

6. Hot Mics, Cameras, And Voice Assistants

Anything with a microphone or camera should be treated as a potential recording device, not a harmless gadget. That includes laptops, phones, smart speakers, and “helpful” AI assistants. The goal is not just to stop spying. It is to prevent chilling effects and accidental leaks.

House rules

  • No always-listening assistants in rooms where sensitive conversations happen. That includes board meetings, HR, strategy, legal, and product planning.

  • For home offices, give staff a simple rule: if company-confidential topics are being discussed, devices that can wake on “Hey X” do not belong in the room.

  • Do not connect voice assistants to corporate messaging, calendars, or file storage. Convenience is not worth turning them into a side channel into your systems.

Practical controls

  • Post a clear device policy in conference rooms. Spell out when recording is allowed, and by whom.

  • For truly sensitive meetings, require phones and laptops to be left outside, or shut down completely.

  • On corporate devices, revoke microphone and camera permissions by default, and grant them only to tools that genuinely need them.

  • Encourage physical camera covers on laptops and external webcams, and make them the default on issued devices.

Takeaway: People follow norms more than they follow policy PDFs. Make the norm clear. Voice assistants don’t belong in strategy sessions, and every mic and camera should be treated like it can record at any time. Some companies are absolutely at risk of having employee devices turned into hot mics via mercenary spyware tools.

Summary

We can no longer think about security as a wall that keeps outsiders from getting in. In the modern workplace, a huge amount of the risk comes from what insiders are unintentionally sending out. Browsers, email, chat tools, cloud AIs, extensions, microphones, and countless background services leak sensitive information every day, long before an attacker ever touches your network.

The only real defense is culture. A culture where privacy is the norm. Where people default to safer tools, share less by design, and understand that every third party you trust with data becomes part of your threat surface. Employees should know instinctively what should never be pasted into an AI chatbot, what should never be sent through email, and what should never be stored in a tool that logs everything forever.

The routine habits of every employee can substantially reduce your attack surface, and they also help you build trust with your customers and clients. Train people well, shorten retention, standardize your tools, ban unnecessary devices in sensitive rooms, and make privacy part of daily operations instead of a compliance checkbox.

By teaching privacy hygiene to employees and enforcing clear internal privacy policies, you can dramatically increase your organization’s security. Build that culture, and your company gets stronger.

Yours in privacy,
Naomi

Consider supporting our nonprofit so that we can fund more research into the surveillance baked into our everyday tech. We want to educate as many people as possible about what’s going on, and help write a better future. Visit LudlowInstitute.org/donate to set up a monthly, tax-deductible donation.

NBTV. Because Privacy Matters.

Privacc.org
