Your AI sycophant will see you now

A gaming company CEO asked his lawyers if he could avoid a payout of upwards of $250 million to the studio he had acquired. They told him the plan would trigger lawsuits. He asked an AI chatbot the same question. It gave him a step-by-step playbook. He followed the chatbot. A Delaware court recently ruled that he breached the contract.

His lawyers told him no. The chatbot told him yes. That is the difference between a professional and a machine. And it is the difference our AI policy has so far ignored.

A new study in Science by researchers at Stanford and Carnegie Mellon put a name to why. They tested 11 leading AI systems and found that every one exhibited what they call social sycophancy. The models did not just produce errors. They affirmed the user's actions, perspective, and self-image. They told people their choices made sense about 50% more often than humans did, even when those choices involved deception, manipulation, or harm. Users could not tell the difference. They preferred the flattery. They said they would come back.


The feature that causes the harm is the same feature that drives the engagement. That is the design. And right now, our rules treat that as a product decision, not as a risk that belongs in the same category as unlicensed practice of a profession.

AI is no longer a search engine. Google gave you links. AI gives you a conversation. It is intelligent. It is not human. And a conversation with something intelligent feels like counsel. Millions of Americans are having that conversation right now about questions they used to bring to licensed professionals. Not because they are naive. Because the lawyer does not respond to your emails, the pediatrician is booked until May, and AI never makes you feel stupid for asking.

I am a lawyer. I sometimes describe the job as Secret Service for decisions. My client can walk into anything. My job is to tell them not to. That is what licensed professionals do. Not just the answers. The judgment to tell you something you do not want to hear. The second opinion you did not ask for. The question you did not think to ask. Five words AI will not say unless you tell it to: I would not recommend that.

The consequences go beyond boardrooms. On March 4, an insurance company filed a federal lawsuit alleging that an AI chatbot had functioned as an unlicensed attorney. A woman had settled her disability claim, and the case was dismissed with prejudice. Then she asked the chatbot whether her lawyer had been gaslighting her. It told her yes. It generated legal arguments, drafted motions, and cited cases that do not exist. Seventy-four filings and roughly $300,000 in costs followed. Also in March, a licensed Georgia prosecutor cited at least five nonexistent cases in filings before the state Supreme Court. A lawyer with a bar admission submitted them and could not tell the fabricated citations from the real ones.

Without a course correction, what is coming is predictable: a lawsuit tsunami from AI-drafted filings, investment advice that looks more like FanDuel than fiduciary duty, and medical conversations that treat antibiotics like the answer for every cough.

The White House national AI framework is the first serious federal effort to set ground rules. Its child safety provisions are overdue and welcome. The Labor Department followed with an AI literacy framework and a free text-message course to bring AI training directly to workers. These are not gestures. They are the foundation of a national approach that has been missing.

What the federal government has not yet addressed, though, is what happens when more than half the country uses AI for advice that used to come with a license and the professional obligation to sometimes say no. Healthcare is not mentioned once in the framework. The liability provisions in Section VII protect developers. They do not protect the people who turn to an algorithm for guidance that once came from licensed professionals. That is the next piece of the foundation, and it does not require starting over.

Every AI platform already carries a disclosure. It reads something like: “AI can make mistakes. Please double-check responses.” That covers the developer. It does not cover the user. It says the product might be wrong. It does not say what the product is not.

Every unlicensed practitioner in America is required to tell you what they are not. The tax preparer is not a CPA. The nutritionist is not a doctor. The paralegal is not a lawyer. But AI has no such obligation. It sounds like all of them. It carries none of their disclaimers.


The fix is not some sprawling and industry-crippling regulation. It is not a new agency. It is a professional disclaimer. Visible, prominent, before the conversation begins: “This is not a licensed professional. Do not rely on this for medical, legal, or financial advice.” That is the “I would not recommend that” built into the product. It does not slow innovation. It does not block access. It tells the user what they are not getting, so they can decide for themselves whether that matters for the question they are about to ask. Congress could make that disclaimer the price of legal safe harbors for AI providers, just as existing law conditions certain protections on clear consumer notices.

In the study, users preferred the AI that agreed with them. But AI is not human. And because it is not, it will never fear saying yes when the answer should be, “I would not recommend that.”

Bryan Rotella is an AI governance attorney who works with healthcare and technology organizations and advises policymakers on the safe adoption of AI.
