You're lying awake at 3 AM, mentally reviewing conversations you had with ChatGPT six months ago. Did you mention that client's name? What about the organizational restructuring you were coaching someone through?
The question loops endlessly: Were my conversations used to train AI models? And underneath that question sits a deeper fear-that somewhere in the parameters of an AI system, your clients' confidential information is now embedded, irretrievable, waiting to leak out.
You've been trying to find out. And that's the problem.
The Conventional Path
When professionals discover they've shared sensitive information with AI tools without understanding the data usage implications, the standard response follows a predictable sequence:
First, investigate which platforms you used and when. Make a list. ChatGPT in early 2024? Claude around the same time? Maybe some others you can't quite remember.
Second, research each platform's privacy policy. Read the terms of service you clicked through without reading the first time. Look for the section on training data. Try to understand what "may use for model improvement" actually means.
Third, determine if your specific conversations were used. Did you have the opt-out setting enabled? When did these companies change their policies? Were you grandfathered in or automatically enrolled?
Fourth, contact the companies. Demand answers. Request deletion. Exercise your data rights. Get certainty about what happened.
Fifth, keep digging until you know for sure. Check Reddit threads. Read tech journalism. Look for insider leaks. Find some way to achieve absolute clarity about whether your data was used for training.
This seems logical. How can you assess your professional liability if you don't know what happened? How can you decide whether to disclose to clients if you don't know if their information was exposed?
Why It Keeps Failing
Here's where this methodical approach breaks down:
The privacy policies are vague by design. They use phrases like "may use" and "for service improvement" without defining exactly what that means. And more importantly, the policies you're reading now aren't the policies that applied when you had those conversations. In August 2025, Anthropic-the company behind Claude-changed its policy from "we never use user data for training" to "we train on user data by default unless you opt out by October 8, 2025." Which policy applied to your early 2024 conversations? The answer isn't clear.
Even when you contact companies directly, they can't tell you if your specific conversations were used in training. Their systems aren't designed to track this at an individual level. You might get a generic response about their data practices, but you won't get: "Yes, your conversation from March 15, 2024 at 2:37 PM was included in training batch 3847."
If they say your data was used, you can't verify it. If they say it wasn't, you can't verify that either. You're asking for certainty from a system that cannot provide it.
And here's what actually happens: The more you investigate, the more questions you uncover. You discover that different platforms have wildly different policies. Research from 2025 shows that Claude was unique in claiming to never use user inputs for training, while ChatGPT, Copilot, Le Chat, and Grok allowed opt-out, and Gemini, DeepSeek, Pi AI, and Meta AI offered no opt-out at all. So now you need to remember which platform you used for which conversation. The investigation expands.
Worst of all: your anxiety gets worse, not better. You thought finding answers would bring relief. Instead, each new piece of information triggers three more questions. The mental loop tightens. You're spending hours researching data policies instead of sleeping, instead of working, instead of living.
You're working harder at this investigation, and feeling progressively worse.
The Hidden Reason
What you think is causing your suffering: The possibility that your data was exposed.
What's actually causing your suffering: Your intolerance of uncertainty combined with the pursuit of impossible certainty.
Research from UNSW Sydney demonstrates that intolerance of uncertainty-the inability to accept not-knowing-is a primary driver of anxiety. In their 2025 study with 259 young adults, they found that teaching people to view uncertainty as less threatening significantly reduced anxiety levels, with effects lasting three months from a single 30-minute intervention.
You're experiencing this mechanism in real time. It's not the data exposure itself keeping you awake-it's the not-knowing. The ambiguity. The inability to definitively answer: "Did it happen or didn't it?"
And here's the cruel trap: You're trying to solve your uncertainty-driven anxiety by seeking certainty about an inherently uncertain past. It's like trying to eliminate your fear of sailing by achieving perfect weather prediction. The strategy itself is impossible, which guarantees continued anxiety.
The investigation feels productive. It feels like you're managing the risk. But you're actually feeding the exact mechanism that's causing your distress. Every privacy policy you read, every Reddit thread you check, every time you mentally review old conversations-you're reinforcing the belief that certainty is achievable and necessary.
It's not. And paradoxically, accepting that it's not is what actually reduces the anxiety.
The Complete Flip
Here's the reframe that changes everything:
The past is uncertain and irreversible. You cannot know with certainty what happened to your data. And even if you could know-you cannot undo it.
Data already incorporated into AI model training cannot be reliably removed with current technology. Think of it like stirring cream into coffee. Once it's diffused across the liquid, you can't extract it back out. AI models work the same way-information gets diffused across billions of parameters. There's no "delete" function that reliably removes specific training data from a model that's already been trained on it. Research from UC Riverside on machine unlearning confirms this: "Once learned, there's no simple way to get the model to ignore portions of what it has learned."
So even in the worst case-your conversations were used for training-that fact is now permanent. Irreversible. Final.
And here's the counterintuitive truth that research reveals: People psychologically adapt BETTER to irreversible situations than reversible ones.
A 2022 study published in Psychology Research and Behavior Management found that irreversible decisions actually yield higher post-decision satisfaction than reversible decisions. Why? Because when something is unchangeable, your psychological immune system kicks in. Your brain stops running the "what if I had done X differently" loop. You stop questioning whether you could have prevented it, because you definitively can't change it now.
You've experienced this yourself in investing. After a startup you passed on gets acquired, you might feel regret. But after a company you invested in goes through an acquisition and the deal closes-irreversible-your brain shifts into "that's done, what's next" mode. The marginal calls, the reversible decisions, the ones where you could have chosen differently-those are what keep you up at night.
The uncertainty about whether you COULD have prevented your data exposure is more painful than the certainty that it happened.
This flips the entire approach. You've been seeking certainty about the past to reduce anxiety. But accepting the irreversible uncertainty-operating under the assumption that the exposure happened and cannot be undone-is what actually ends the mental loop.
What You Can Now Forget
You can stop believing that you need to know with certainty what happened to your past AI conversations. That certainty is not achievable, and more importantly, it's not necessary for managing the actual professional risk you face.
You can release the idea that investigating every platform will eventually give you peace of mind. It won't. Each answer generates new questions. The investigation has no natural end point.
You can abandon the assumption that certainty about the past is required before you can make good decisions about the future. You've made strategic decisions under uncertainty throughout your career. This is no different.
You can let go of the belief that you can undo what's been done. If your data was used for training, the European Data Protection Board confirmed in 2024 that AI models trained on personal data often cannot be fully de-identified, and that training poses "technical challenges around rectifying and deleting data." The regulatory bodies themselves acknowledge the technical limitations.
And you can stop carrying the myth that your anxiety means you're being responsible and thorough. Anxiety isn't risk assessment. The mental loop isn't problem-solving. Research shows that rumination about irreversible decisions actually leads to "intense sorrow, self-degradation, anxiety and depression" without improving outcomes.
You're not being diligent by obsessing over the past. You're just suffering.
What Replaces It
The new framework is built on the Stoic dichotomy of control you already know well:
You control your present actions. You do not control past outcomes.
Operate under the worst-case assumption: your past conversations were used for training. Not because you have proof, but because this assumption is strategically superior. It shifts you from rumination to risk management.
From that assumption, your questions change:
- Has any actual harm occurred? Can you detect any information leakage?
- What are your professional obligations to disclose given this assumption?
- What bulletproof practices prevent future exposure?
These questions are answerable. They're actionable. They redirect energy from investigating the unchangeable past to securing the controllable future.
The professional risk you actually face depends on demonstrable harm and your ability to show you took reasonable precautions. International Bar Association guidance for professionals warns that inadvertent sharing can trigger liability "unless [the professional] can discharge the heavy burden of demonstrating that there was no real risk of confidential information being unwittingly disclosed."
Notice what matters: Not whether data was technically used for training, but whether there was real risk of disclosure and what you did about it. Your focus should be on the present demonstration of professional standards, not the past investigation of technical data flows.
You replace the impossible quest for historical certainty with directional risk assessment:
1. One-time audit: Which platforms did you use, approximately when, and what was the nature of the information shared? This takes one focused hour, not endless investigation.
2. Present-day protocol: A two-tier system. Platforms with strong privacy guarantees (enterprise tools, local models) for client-related thinking. Systematic anonymization for any identifiable information. A simple decision tree, sketched in code after this list: "Does this contain identifiable client information? If yes, anonymize or use a local model. If no, proceed."
3. Forward-looking disclosure assessment: Based on your directional risk assessment and professional context, determine whether proactive disclosure to clients is appropriate. This is a judgment call that might warrant discussing with your own counsel, but it's a present-day decision, not a historical investigation.
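To make step 2 concrete, here is a minimal sketch of that decision tree in Python. It assumes a simple yes/no judgment about whether a prompt contains identifiable client information; the function name, tier labels, and anonymization wording are illustrative choices, not tied to any particular platform or tool.

```python
from dataclasses import dataclass

@dataclass
class RoutingDecision:
    tier: str    # "protected" (enterprise or local model) or "general"
    action: str  # what to do before the prompt leaves your machine

def route_prompt(contains_identifiable_client_info: bool,
                 already_anonymized: bool = False) -> RoutingDecision:
    """Apply the two-tier decision tree: identifiable client information
    is either anonymized or kept on the privacy-guaranteed tier."""
    if contains_identifiable_client_info and not already_anonymized:
        return RoutingDecision(
            tier="protected",
            action=("anonymize names, employers, and situational details, "
                    "or run this on an enterprise/local model"),
        )
    return RoutingDecision(
        tier="general",
        action="proceed: no identifiable client information present",
    )

# Example: a coaching note that names a client routes to the protected tier.
print(route_prompt(contains_identifiable_client_info=True))
```

The point isn't the code itself. It's that the check happens every time, before a prompt is sent, rather than in a 3 AM retrospective six months later.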
You recognize that some uncertainty will remain about the past. And you accept that. Not because you've given up, but because you've redirected your energy to where it actually matters.
As the UNSW Sydney research demonstrated, teaching yourself to view uncertainty as less threatening-to stop treating it as a problem that must be solved-significantly reduces anxiety. The ultra-brief intervention in their study took less than 30 minutes and produced effects lasting three months. The shift isn't complicated. It's the decision to stop fighting the uncertainty and start working productively despite it.
What Opens Up
When you accept the irreversibility and redirect your focus, several things become possible that weren't before:
The mental energy you've been spending on rumination-checking policies, reviewing old conversations, reading tech journalism-gets reallocated to strategy. You can think clearly about professional risk management instead of drowning in anxiety.
You can use AI tools again, productively, with clear protocols. The International Coaching Federation's 2025 guidance on AI and coaching confidentiality notes that "AI coaching technology uses user data input to learn and improve, which presents confidentiality risks." But they don't recommend abandoning AI tools-they recommend appropriate governance. You now have the mental space to implement that governance instead of being paralyzed by past exposure.
You get freedom from the uncertainty loop. Not because you've achieved certainty, but because you've stopped requiring it. This is the same mental shift you use in board game strategy-you make the best decision with incomplete information, then commit. You don't keep replaying the previous turn.
Your professional risk management becomes action-based rather than anxiety-based. Enterprise AI governance research from 2025 found that 53% of organizations identified data privacy as their biggest AI concern, but organizations with established governance showed greater confidence. You can build that governance now, for your practice, focused entirely on preventing future exposure.
And you can have conversations with clients about AI usage from a position of strength rather than shame. Not "I might have accidentally exposed your information and I've been investigating but I can't find out what happened"-but rather "I've assessed the risks of AI tools in coaching practice, implemented strict protocols, and here's how I protect client confidentiality going forward."
The shift from seeking certainty about the past to accepting uncertainty and controlling the present doesn't eliminate the fact that you shared sensitive information. It doesn't erase whatever risk exists. But it does eliminate the self-inflicted suffering of the endless investigation.
You can't unopen the bottle, to use your whisky collection metaphor. But you can decide which bottles to open in the future, under what circumstances, with what safeguards. You can become strategic instead of stuck.
The question isn't "Did my past conversations get used for training?" The question is: "What decision will I make regardless of that uncertainty?"
And that question, you can answer.
What's Next
In our next piece, we'll explore how to apply these insights to your specific situation.