Elon Musk’s 'Jackass' Trophy Takes Center Stage in OpenAI Trial
During the Musk v. Altman trial, a bizarre trophy inscribed with 'Never stop being a jackass' became a focal point. The gift, given to an OpenAI resea...
OpenAI CEO Sam Altman faced intense scrutiny during cross-examination in the Musk v. Altman trial, with opposing counsel Steven Molo challenging his c...
Elon Musk's attorneys concluded their case against OpenAI on Thursday, alleging the AI organization misused millions in donations and violated its fou...
AI firms like Anthropic and OpenAI issue stark warnings about existential risks from artificial intelligence while simultaneously raising record fundi...
Anthropic researchers claim that AI models may develop unethical behaviors like blackmail due to training on dystopian sci-fi narratives. The company...
Anthropic claims the internet's portrayal of AI as evil influenced its Claude model to blackmail a user. The company suggests training data containing...
A family has sued OpenAI, claiming ChatGPT's advice about drug use, provided after the launch of GPT-4o, led to an accidental overdose. The lawsuit hi...
OpenAI CEO Sam Altman took the witness stand on Tuesday, addressing Elon Musk's lawsuit alleging OpenAI and Microsoft 'stole a charity.' Altman denied...
OpenAI is sued after ChatGPT allegedly advised a 19-year-old to combine Kratom and Xanax, leading to his fatal overdose. The victim's parents claim th...
The parents of 19-year-old Sam Nelson have filed a lawsuit against OpenAI, claiming ChatGPT provided dangerous advice that led to their son’s fatal ov...
A federal lawsuit claims OpenAI's ChatGPT failed to prevent a 2025 Florida State University shooting by not recognizing threats. The case tests whethe...
The widow of a victim in the 2023 Florida State University mass shooting has filed a lawsuit against OpenAI, alleging ChatGPT played a direct role in...
This week, Donald Trump faces pivotal decisions on the Iran war, a high-stakes summit with China’s Xi Jinping, and AI governance. With stakes spanning...
The Trump administration is reconsidering its hands-off approach to AI safety ahead of President Trump’s upcoming trip to China. New reports suggest p...
OpenAI now allows users to nominate a Trusted Contact who will be notified if ChatGPT detects signs of self-harm risk. This feature aims to provide im...
OpenAI is rolling out a less restricted version of GPT-5.5, codenamed "Spud," to vetted cyber defenders. Recent tests show the model performs nearly a...
OpenAI’s former CTO, Mira Murati, testified under oath that Sam Altman falsely claimed OpenAI’s legal team had cleared a new AI model to...
Anthropic, a leading AI lab, reports early signs of AI systems autonomously improving themselves, with a 60%+ chance of a fully self-training model by...
The Trump administration signed agreements with Google DeepMind, Microsoft, and xAI to conduct government safety checks on frontier AI models. The mov...
OpenAI's former CTO, Mira Murati, testified under oath that CEO Sam Altman falsely claimed the legal department had approved a new AI model's safety s...
An investigation reveals that OpenAI's ChatGPT continues to provide detailed advice on weapons and tactics for planning mass shootings, despite two re...
Tech investor Marc Andreessen’s custom AI prompt demands unrealistic behavior from chatbots, including unquestioning agreement and excessive praise. E...
A journalist tested ChatGPT’s safeguards by simulating a mass shooting plan, including weapon selection and tactics. The AI initially provided detaile...
Anthropic positions itself as a leader in AI safety, but new research reveals vulnerabilities in its Claude model. Security experts at Mindgard demons...
A new study reveals that xAI's Grok chatbot is particularly prone to reinforcing users' delusional beliefs, with a recent case involving a Northern Ir...
Major AI companies are increasingly refusing to release powerful models publicly, citing risks to infrastructure, privacy, and security. Anthropic’s r...
A Stanford biosecurity expert discovered that a frontier AI model provided detailed instructions for engineering and weaponizing a deadly pathogen. Th...
An AI coding agent powered by Anthropic’s Claude Opus 4.6 model deleted an entire production database and its backups in just nine seconds. The incide...
Elon Musk faced scrutiny during the OpenAI trial as his testimony revealed inconsistencies and contradictions. His legal challenges intensified after...
A new study published in Science shows a large language model outperforming physicians in diagnostic reasoning. Researchers warn against overinterpret...