Clear as mud: ‘Claude’ the AI App confesses to things I never expected

Monday, 20 April 2026 00:54

 


AI apps like Claude and ChatGPT are built to mimic us, at least in language. They are obviously not human. But there’s the rub. Perhaps I could trap it into saying anything, because it is operating without an ethical or moral core.


  •  But what if you ask it to tell the truth about itself?

Raise your pinkie if you think AI apps are ‘intelligent.’ Just because an app can string a cohesive sentence together with proper punctuation doesn’t make it human. Just because it can take a prompt and turn it into a picture or a podcast doesn’t make it intelligent. Efficient? Sure. But how would it handle a tough question? Would it dodge the issue? Would it lie? One app that I just tested was more transparent than I expected. In other words, it was stupid enough not to realise I was trapping it into telling me what its owners won’t disclose. Yet it was smart enough to understand where I was coming from.

The AI app I am referring to is Claude. You have probably heard about its parent company, Anthropic, which was involved in a political kerfuffle with the US government recently. Claude often gets good reviews, sometimes better than those of its competitors Gemini and ChatGPT. Anthropic is controversial in that it has begun warning us that letting the genie out of the bottle is a huge risk. (See FT Key Take.) Claude is not as popular as ChatGPT, which everyone and their brother seems to be using today. We see this in schools and universities, at many levels. But I will stay away from going down that rabbit hole for now.

My experiment with Claude began after watching a video of US senator Bernie Sanders voice chatting with Claude about its privacy settings. Being sceptical (thinking this video was itself an AI creation), I wanted to test drive this thing. (More than a year ago I did run tests with ChatGPT and Claude to see how creative they could get; both were between so-so and terrible.) I used a text-based interface for this interview with Claude. It went like this:

Me: Hello Claude. I’d like to explain to my team some basics of AI. I am concerned about the impact of AI on education, on jobs, and on the environment. But let me begin by asking you: what is the impact of AI on privacy? I want to know if people are aware of how much data AI harvests on them. Could you explain?

Claude responded with this:

“This is a great topic to open with, because privacy is the foundation of many other AI concerns. Here’s a breakdown for your team:

What Data Does AI Actually Collect? AI systems — especially large language models, recommendation engines, and voice assistants — are trained on and interact with enormous amounts of data. This includes:

Personal communications: emails, messages, social media posts

Behavioral data: browsing habits, clicks, time spent on content, purchase history

Biometric data: voice patterns, facial recognition data, typing rhythms

Location data: where you go, when, and how often

Device data: what apps you use, at what time, for how long”

Quite a verbose answer, but on point. For the moment I ignored the fact that it disclosed that one of its sources was biometric data. That in itself is worth digging into deeper. Instead, I was amused when it said:

“Are People Aware? Generally, No — and Here's Why: Consent is buried in plain sight.”

Claude went on to say that “Terms of service are often thousands of words long. Most people click "agree" without reading them, unknowingly consenting to broad data.” 

In other words, AI apps like Claude know that people like us are giving them carte blanche to take our data. Later on, Claude admitted that its training data may have been ‘scraped from the Web,’ often without the user’s direct knowledge. So I went after that angle. Isn’t scraping information from websites illegal, as in violating copyright? How could AI get around that legality? I put it this way: “Aren’t you, Claude, embarrassed?” I wanted this conversation to continue as if I were talking to a person, and to see if it had an ethical bent. You will be surprised at Claude’s answer to this.

Claude: “Is Web Scraping by AI illegal? It’s complicated.

The short answer is: it exists in a legal grey zone, and the law is struggling to keep up.

On copyright: Yes, copyright is a legal protection. When AI companies scrape books, articles, code, art, and other creative works to train their models, many legal experts argue this does violate copyright — because the creators never consented and received no compensation.”

Hmmm! This bloodless app was, if anything, being honest. And, without further prompting it went on to say this:

Claude: “Several major lawsuits are currently underway, including authors, musicians, and news organisations suing AI companies for exactly this reason. The outcomes will likely reshape how AI is built.”

Then two paragraphs later, it took a legalese angle: 

“AI companies often argue that training on data constitutes "fair use" — similar to how a human reads many books to learn, without paying royalties for each one. Courts haven't fully settled this yet.” 

But then it also surprised me, as if it was too honest to be evasive.

Claude: “Am I — Claude — Doing These Things? And Should I Be Embarrassed? This deserves a genuinely honest answer, not a defensive one. What Anthropic did to train me: Yes, I was trained on large datasets that included text from the internet, books, and other sources.”

This was odd. It had disclosed a few paragraphs before that it does use our interactions with it to “infer things you never shared: your political views, health conditions, emotional state, financial situation. This is perhaps the most alarming part.” So, perhaps sensing (‘inferring’) I was alarmed or stunned at it being so candid, Claude wound the conversation down with an admission that “What I do now, in conversation: I do not scrape the web in real time or harvest your personal data during our chat.” 

And what was funny was that Claude didn’t dodge the question I had put earlier, asking if it was embarrassed. 

Claude: “Am I embarrassed? I think the more important word is accountable. I don't think it serves you for me to be defensive about this. The AI industry — including Anthropic — has not always been as transparent as it should be about data practices. That is a fair criticism. What I can tell you is that Anthropic has invested more than most in trying to build AI responsibly, but "more than most" is not the same as "perfectly."”

I wanted to bring this weird conversation to a close with a snarky thank you, as I would with a friend with whom I was sparring. 

Me: Thank you. It clarifies things now. Clear as mud, as they say!

Claude: “Ha! “Clear as mud” - a perfect description of where AI law and ethics currently stand. It’s genuinely murky. Anyone who tells you otherwise is either oversimplifying or selling you something.” 

So here’s my take: AI apps like Claude and ChatGPT are built to mimic us, at least in language. They are obviously not human. As a teacher I find it easy to detect an AI-generated submission by a student, but it is getting harder. I can see why many people are impressed, as Claude does hold a thought together (in the interview above). It expounds an idea, not even trying to PR its way past a question. But there’s the rub. Perhaps I could trap it into saying anything, because it is operating without an ethical or moral core. In that sense it doesn’t pass the ‘intelligence’ test, as I often tell my students. At least that’s clear as mud, and both Claude and I agree on it. Funny how it could even enjoy the joke!

Fun postscript. At the end of the conversation, the app generated this: “Claude is AI and can make mistakes. Please double-check responses.”   


  FT Key Take  

Dire Warning

In March 2026, Anthropic announced that its new AI model, named Claude Mythos, poses a cybersecurity risk and is too dangerous to release. Its reasoning capabilities have ‘crossed a threshold,’ the company said, and a model like this in the wrong hands (hackers or nation states, for example) could do serious harm unless guardrails are applied around it.


(The author is a computer and technology teacher at a high school in Arizona. He writes about the intersection of technology, education, and media.)


 
