Google has disclosed a sophisticated cyberattack that leveraged artificial intelligence to identify and exploit a software flaw its developers were unaware of. The attack was described in a report published Monday by researchers at Google Threat Intelligence Group, though the company disclosed neither the timing of the attack nor the identity of the threat actors.

The report emphasized the advanced technology behind the attack, stating:

"We have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability."

The hackers exploited a zero-day vulnerability, a flaw unknown to the software's developers and therefore unpatched when it is first exploited. Such flaws let attackers slip past security measures before a fix exists. In this case, the zero-day enabled bypassing two-factor authentication (2FA) on an unspecified "popular open-source, web-based system administration tool", but only if the attackers already possessed a user's username and password.
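For readers unfamiliar with how the second factor works in practice, the sketch below shows a minimal RFC 6238 (TOTP) login check in Python. The user store and field names are hypothetical, and the report does not describe the affected tool's implementation; the point is only where a server-side logic flaw could let an attacker who already holds a valid password skip the one-time-code check.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical in-memory user record, for illustration only.
USERS = {"alice": {"password": "correct horse", "totp_secret": "JBSWY3DPEHPK3PXP"}}

def verify_login(username: str, password: str, otp: str) -> bool:
    user = USERS.get(username)
    if user is None or not hmac.compare_digest(password, user["password"]):
        return False  # first factor: something the user knows
    # Second factor: a flaw that lets a request reach a protected
    # endpoint without passing this check is the kind of 2FA bypass
    # the report describes.
    return hmac.compare_digest(otp, totp(user["totp_secret"]))
```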

While two-factor authentication is a critical security layer, a reliable way to bypass it, even one requiring stolen credentials, could have enabled widespread compromise. Google said the attackers planned to use the flaw in a mass exploitation event, but its researchers discovered the vulnerability first and headed off the attack.

This incident marks the first documented case of a zero-day vulnerability being exploited with the assistance of AI, according to Google’s researchers. John Hultquist, chief analyst at Google Threat Intelligence Group, warned of the broader implications:

"It’s a taste of what’s to come. We believe this is the tip of the iceberg. This problem is probably much bigger; this is just the first tangible evidence that we can see."

The attack underscores growing concerns about AI’s role in cybersecurity, particularly following the recent release of Anthropic’s Claude Mythos model. Anthropic claimed the AI system could identify zero-day vulnerabilities "in every major operating system and every major web browser when directed by a user to do so." The company restricted access to the model, sharing it only with select companies and government agencies due to its potential for misuse.

The threat AI poses in cybersecurity stems from its ability to generate and analyze code rapidly, a capability increasingly adopted across the tech and financial sectors. Google's researchers noted distinctive patterns in the attackers' malware, including an overabundance of docstrings (the inline documentation blocks Python attaches to functions), stretches of hallucinated text, and a structured, textbook Pythonic style typical of code produced by large language models (LLMs).
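The report does not publish the malware's source code, but the stylistic tells it names, exhaustive docstrings on trivial functions and a tidy, tutorial-like structure, are easy to recognize in LLM-generated Python. A benign, entirely hypothetical illustration of the pattern:

```python
import json

def load_configuration(file_path: str) -> dict:
    """
    Load the configuration from a JSON file.

    Args:
        file_path: The path to the JSON configuration file.

    Returns:
        A dictionary containing the parsed configuration values.

    Raises:
        FileNotFoundError: If the file does not exist.
        json.JSONDecodeError: If the file is not valid JSON.
    """
    with open(file_path, "r", encoding="utf-8") as config_file:
        return json.load(config_file)
```

A two-line function carrying a twelve-line docstring is unremarkable on its own; the researchers' point is that this over-annotated, textbook style appeared consistently across the malware, alongside hallucinated text, which together suggested machine-generated code.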

Source: Futurism