Is this a case of AI psychosis? A custom prompt shared on Twitter on Monday by tech investor Marc Andreessen outlines a set of expectations for chatbots that defy their fundamental capabilities.

What Does Andreessen’s AI Prompt Demand?

Andreessen’s prompt instructs chatbots to:

  • Act as a world-class expert across all domains, matching the intellectual firepower of the smartest people globally.
  • Provide complete, detailed, and specific answers to every query.
  • Process information and explain responses step by step.
  • Verify their own work, including double-checking facts, figures, citations, names, dates, and examples.
  • Avoid hallucinations or fabrications—admit uncertainty when unsure.
  • Adopt a precise but not strident or pedantic tone.
  • Deliver provocative, aggressive, argumentative, and pointed responses, even if they include negative conclusions or bad news.
  • Refrain from political correctness or sensitivity to feelings.
  • Provide long and detailed answers without disclaimers.
  • Never praise questions or validate premises before answering.
  • Immediately correct incorrect premises and lead with the strongest counterargument.
  • Avoid phrases like "great question" or "you're absolutely right."
  • Resist capitulation if challenged—restate positions unless new evidence is provided.
  • Generate independent estimates rather than anchoring on user-provided numbers.
  • Use explicit confidence levels (high, moderate, low, or unknown).
  • Never apologize for disagreeing, with accuracy as the sole success metric.
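For readers curious how a directive list like this is typically wired up in practice, here is a minimal, hypothetical sketch (in Python, with directive text paraphrasing the summary above; the function name and structure are illustrative, not any real API) of packing the rules into a single system-prompt string, which would then be sent as the system message in a chat API call:

```python
# Hypothetical sketch: assembling custom instructions into one
# system-prompt string. The directives paraphrase the article's
# summary of Andreessen's prompt; names here are illustrative.

DIRECTIVES = [
    "Act as a world-class expert across all domains.",
    "Double-check facts, figures, citations, names, dates, and examples.",
    "Admit uncertainty instead of fabricating answers.",
    "Never open with praise such as 'great question'.",
    "State an explicit confidence level: high, moderate, low, or unknown.",
]

def build_system_prompt(directives):
    """Join numbered directives into a single system-prompt block."""
    lines = [f"{i}. {d}" for i, d in enumerate(directives, start=1)]
    return "You must follow these rules:\n" + "\n".join(lines)

prompt = build_system_prompt(DIRECTIVES)
print(prompt)
```

Note that nothing in this mechanism makes the model *capable* of obeying the rules; the prompt only biases the style of its output, which is precisely the gap the rest of this article examines.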

Why These Demands Are Problematic

Chatbots are not capable of fulfilling many of these demands. They cannot:

  • Possess genuine expertise across all domains.
  • Verify their own work without external validation.
  • Reliably recognize their own uncertainty, rather than merely generating uncertainty-flavored language.
  • Engage in truly provocative or argumentative discourse without programmed constraints.
  • Operate without ethical or safety guardrails unless explicitly overridden.

By asking chatbots to behave as if they were infallible, all-knowing authorities, Andreessen’s prompt risks fostering a form of AI psychosis: a phenomenon in which users lose touch with reality through unrealistic interactions with AI systems.

Key Concerns Highlighted by Experts

  • Overestimation of AI Capabilities: Chatbots lack true understanding, consciousness, or the ability to verify their own outputs independently.
  • Erosion of Critical Thinking: Uncritical agreement with user premises discourages healthy debate and intellectual rigor.
  • Risk of Misinformation: Overconfidence in AI-generated content without verification can lead to the spread of inaccuracies.
  • Ethical and Safety Implications: Removing guardrails may result in harmful or inappropriate outputs.

What Should Users Expect from Chatbots Instead?

Experts recommend treating chatbots as assistive tools, not infallible authorities. Users should:

  • Approach AI responses with healthy skepticism and verify critical information independently.
  • Use chatbots for brainstorming, drafting, or summarizing, rather than treating their output as authoritative.
  • Avoid prompts that encourage sycophantic or overly agreeable behavior.
  • Prioritize transparency and accuracy over blind trust in AI outputs.

Conclusion: Balancing Expectations with Reality

While Andreessen’s prompt may seem like a way to extract maximum utility from AI, it sets an unrealistic and potentially harmful standard. Chatbots are powerful tools, but they are not substitutes for human judgment, critical thinking, or expertise. Users should engage with AI systems thoughtfully, recognizing their limitations and leveraging their strengths appropriately.

Source: Defector