Executive leaders today face mounting pressure to boost productivity and innovation with AI. Employees, however, report low trust in organizational change and limited information about how AI will impact their work—or whether it will replace their jobs.

According to a December 2025 Gartner survey of 110 chief human resources officers (CHROs), 95% reported undertaking AI-related initiatives in their organizations. Yet while many companies are experimenting widely with AI, most struggle to translate AI investment into tangible business improvements.

Why AI Adoption Isn’t Simple

AI adoption is uniquely challenging, and significantly more complex than past transformations, because success depends on reengineering work processes, operating models, and workplace culture. Compare this with Enterprise Resource Planning (ERP) rollouts, which primarily involved deploying technology.

Positioning AI as a “member of the workforce” magnifies this complexity by creating identity confusion and eroding trust among human employees. An April 2025 Gartner survey of 2,889 employees found that 79% already report low trust in organizational change.

How Employees Perceive AI

Employees experience and relate to AI in varied ways, depending on their usage, familiarity, and cultural or personality preferences. Some naturally humanize AI, especially as systems act autonomously or interact in human-like ways. While this reflex isn’t inherently wrong, leaning into it can create two risks:

  • Unrealistic expectations about AI’s capabilities
  • Identity confusion about human roles

Leaders must position AI as a powerful technology and work resource—not as a colleague, teammate, or part of the workforce.

Leading with Clear, Intentional Language

Clear positioning of AI helps leaders stay focused on business imperatives while balancing human impact. This purpose-driven approach moves organizations beyond passive adoption and establishes the foundation for sustainable change across three key areas: language, trust, and consistency.

Language: Set Intentional Boundaries

CEOs face immense pressure to demonstrate AI’s value. Vendors often market AI agents as “hirable” replacements for human roles, framing them as “virtual colleagues” or “teammates” to access staffing budgets. This approach can trigger serious consequences, including:

  • Decreased employee trust and engagement
  • Stalled productivity
  • A projected 15% drop in employee engagement by 2028 if AI agents appear in organizational charts, according to Gartner

Language that acknowledges the instinct to humanize AI—while reinforcing clear boundaries—helps employees understand where AI supports work, where accountability remains human, and how roles will evolve. Framing AI as a tool that amplifies human strengths, rather than a teammate, reduces fear, accelerates adoption, and keeps focus on business outcomes.

Building Trust Through the Manager Layer

Managers play a critical role in building, or breaking, trust during AI adoption. Yet many companies leave managers without guidance, expecting them to field employee questions without shared language or clear principles. This inconsistency fuels uncertainty and undermines change efforts.

C-suite leaders must provide managers with structured frameworks, consistent messaging, and the tools needed to communicate AI’s role transparently. Doing so ensures alignment, reduces resistance, and fosters a culture of trust and adaptability.