AI tools are becoming increasingly common in schools and homes, raising questions about how to regulate their use among young people. Manitoba, Canada, is now considering a ban on AI chatbots for kids and teens, joining a growing list of jurisdictions imposing age-based restrictions on digital platforms.

Manitoba’s Proposed AI Ban: What We Know

Premier Wab Kinew announced the proposed ban during an April fundraiser, criticizing tech platforms for prioritizing engagement and profit over child safety. Kinew did not specify which AI or social media platforms would be included in the ban, nor did he provide a timeline for legislation. However, Manitoba’s education minister indicated that enforcement might begin in schools.

The announcement reflects broader global trends. Over the past few years, lawmakers from Australia to Massachusetts have enacted or proposed social media bans for minors. Yet the effectiveness of such restrictions remains debated, as some teens find ways around age-verification systems.

Do Age Bans Work? Lessons from Social Media

Social media bans lack strong evidence of success. For example, Australian teens have reportedly gotten around their country’s ban by masking their identities to evade age-verification tools. Experts also argue that social media, when used responsibly, can offer benefits alongside risks.

AI regulation presents a newer challenge. Unlike social media, which has existed in some form for decades, AI tools have only been widely accessible to kids and teens for a few years—and their capabilities are evolving rapidly. This rapid development makes regulation particularly complex.

The Risks and Benefits of AI for Kids

Some parents and educators report concerns about AI chatbots encouraging harmful behavior, such as self-harm or violence. Others worry that early reliance on AI in classrooms could hinder the development of critical-thinking skills.

Yet many young people use AI tools productively. According to a Pew Research Center survey conducted in late 2023, 64% of teens reported using chatbots, with about 30% using them daily. The most common uses were searching for information and getting help with schoolwork.

Quinn Bloomfield, an 18-year-old university student, uses Google’s NotebookLM to study chemistry. “It’s extremely helpful for quizzing me on things,” Bloomfield told reporters. His experience illustrates how AI can support learning when used appropriately.

Alternative Guardrails: What Experts Recommend

While age-based bans may seem like a straightforward solution, they are not a panacea. Instead, experts suggest a multi-layered approach to guide kids through the responsible use of AI. These recommendations include:

  • Education and digital literacy: Teaching kids how to critically evaluate AI-generated content and recognize potential risks.
  • Parental involvement: Encouraging parents to monitor and discuss AI use with their children, setting boundaries where necessary.
  • School policies: Implementing clear guidelines for AI use in classrooms, emphasizing its role as a tool rather than a replacement for learning.
  • Industry accountability: Holding tech companies responsible for designing AI tools that prioritize child safety and educational value.

These strategies aim to address concerns without entirely excluding kids from the benefits of AI. As one expert noted, “Locking kids out of technology isn’t the answer—teaching them how to use it responsibly is.”

Looking Ahead: The Future of AI Regulation

The debate over AI regulation is far from settled. While some advocate for strict age-based restrictions, others emphasize the need for adaptive policies that evolve with technological advancements. Manitoba’s proposal is just one example of how governments are grappling with these challenges.

For now, the conversation continues among policymakers, educators, parents, and young people themselves. The goal is to strike a balance between protecting children and preparing them for a future where AI plays an increasingly central role.

Source: Vox