Meta is expanding its AI training methods to include the keystrokes, mouse movements, and clicks of its own employees, according to a Reuters report confirmed by the company. The move marks a significant shift in how AI models are developed, relying on real-world user interactions rather than external data sources.

Meta Confirms AI Training Using Employee Inputs

In a statement to Engadget, a Meta spokesperson said:

"If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them [...] we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models."
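Meta has not described how the tool works internally, but capturing "these kinds of inputs" typically reduces to logging timestamped, per-application interaction events in a format a training pipeline can consume. Below is a minimal sketch of what such an event log might look like; the schema, field names, and class names are all assumptions for illustration, not Meta's actual tool:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical record for one user interaction. Meta has not published
# its tool's schema, so every field here is an assumption.
@dataclass
class InteractionEvent:
    timestamp: float
    app: str      # application in focus when the event occurred
    kind: str     # "keystroke", "mouse_move", or "click"
    detail: dict  # event-specific payload (key pressed, coordinates, ...)

class InteractionLog:
    """Accumulates interaction events and serializes them as JSON Lines,
    a common interchange format for model-training data."""

    def __init__(self):
        self.events = []

    def record(self, app, kind, **detail):
        self.events.append(InteractionEvent(time.time(), app, kind, detail))

    def to_jsonl(self):
        return "\n".join(json.dumps(asdict(e)) for e in self.events)

# Example: a short session in a spreadsheet application.
log = InteractionLog()
log.record("spreadsheet", "click", x=120, y=48, button="left")
log.record("spreadsheet", "keystroke", key="=")
log.record("spreadsheet", "keystroke", key="SUM")
print(len(log.events))  # prints 3
```

In practice, a production tool would hook OS-level input APIs rather than call `record` by hand, and the privacy questions raised below turn precisely on how much of this payload is retained.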

Workplace Surveillance Raises Ethical and Legal Questions

The revelation highlights the growing tension between AI development and employee privacy. While at-will employment laws in the U.S. allow employers to alter job duties without explanation, the scope and granularity of this monitoring go well beyond typical workplace surveillance. Critics argue that such practices blur the line between legitimate monitoring and exploitation of employees' labor as training data.

Installing keyloggers on personal devices outside of work can constitute a criminal offense under laws like the Computer Fraud and Abuse Act (CFAA). Yet, workplace monitoring of this nature remains legally permissible, despite its invasive nature.

Potential Risks for Employees

There are concerns that the data collected could eventually be used to automate or replace the very roles currently performed by employees. Meta has not clarified whether workers can opt out of this surveillance or if they will receive compensation for their data contributions.

Why Meta Isn’t Using User Data Instead

Meta serves an estimated 3.5 billion users across its combined platforms, a vastly larger pool than its workforce. Critics question why the company would prioritize employee data over user data, which would likely draw less scrutiny. The move aligns with Meta's history of aggressive data collection practices, often criticized as part of its "move fast and break things" ethos.

Large language models rely on vast datasets, and the legality of such data collection has already sparked multiple lawsuits and settlements. If Meta believed it could obtain this data from users without facing backlash, it would likely have done so already.

Market Reactions and Broader Implications

In an economy influenced by the decisions of a small group of wealthy individuals, even discussions about AI's potential to disrupt industries can move stock prices. Meta's prompt confirmation of the Reuters report suggests a deliberate effort to shape public perception of the program.

The company has not responded to inquiries about whether employees can opt out of this surveillance or receive compensation for their data contributions.

What’s Next for Meta’s Workforce?

For employees concerned about this development, questions remain unanswered. If you work at Meta and wish to discuss this issue confidentially, you can reach out to the author via Signal at @amarae.60.

Source: Engadget