Published on: December 3, 2025 at 4:20 pm
Before the end of 2026, AI-powered entities will come to be thought of as human-like employees, according to Academy of Management scholar Christian Tröster of Kühne Logistics University. Leaders will have to navigate unique challenges, including threats to workers’ identity, when integrating AI into their teams and organizations.
Across workplaces, a subtle yet profound shift is underway, Tröster noted: Employees are beginning to treat AI systems as if they were human counterparts, because AI increasingly communicates and appears to reason in ways that can trigger emotional reactions traditionally reserved for people.
“This marks AI’s emergence as a second social presence in organizations,” Tröster said. “My prediction: In 2026, the central leadership challenge around AI will be managing the relationship between people and AI—and the identity threats that come with it.
“AI will be experienced less as a tool and more as an entity whose ‘voice’ matters,” he said.
Evidence of this shift is mounting. A Washington Post analysis of 47,000 shared ChatGPT conversations found that more than 10% focused on emotional or relationship topics, not just information seeking. Research by OpenAI and the MIT Media Lab across more than 4 million conversations identified “power users” whose interactions show clear emotional dependence on the model.
Meanwhile, unease is growing, Tröster noted. A Pew Research Center survey across 25 countries found that 34% of adults feel more concerned than excited about AI’s role in daily life, while only 16% feel the reverse. A global KPMG study of 48,000 people across 47 countries found that more than half of respondents remain wary of trusting AI systems, even as adoption rises.
“These patterns make clear that AI is becoming socially meaningful, emotionally charged, and psychologically consequential,” Tröster said. “People engage with it as a conversational partner—and increasingly experience it as a threat.”
These dynamics generate three intertwined identity threats, he said:
- Distinctiveness threats, where AI’s human-like language blurs boundaries and makes people question what remains uniquely human in their roles.
- Comparative threats, where AI’s speed or apparent accuracy triggers fears of replacement and loss of status.
- Cultural value threats, where automation challenges norms such as meritocracy and the meaning of effort, destabilizing people’s sense of legitimacy and contribution.
In 2026, leaders who treat AI as a technical upgrade will underestimate these tensions, Tröster warned.
“Leaders who recognize AI as a new social presence will respond differently: They will articulate what remains uniquely human, frame AI as a collaborator rather than a competitor, model balanced use instead of blind reliance, and involve employees in governing how AI is deployed,” Tröster said.
“The benefit is clear: When employees feel their identity is protected and their contribution remains meaningful, organizations will see higher engagement, lower resistance, and far more successful AI integration,” he said. “AI doesn’t need to be human to reshape the workplace—it only needs to be human enough that people start comparing themselves to it.
“In 2026, that comparison will become one of leadership’s most defining tests.”