[Student IDEAS] by Dan Xiao - Master in Data Sciences & Business Analytics at ESSEC Business School & CentraleSupélec
As AI rapidly reshapes the global workplace and public perception, a question arises: how can organizations build the trust needed to collaborate effectively with machines? This article introduces a positive feedback loop between Human-AI collaboration and trust in AI—where each boosts the other to unlock higher productivity, employee confidence, and innovation. Drawing on examples from multiple industries and voices from diverse geographies and generations, we conclude that trust in AI is far from universal. Psychological, cultural, and generational divides shape how people engage with intelligent systems. In reframing AI not as a threat but as a partner, this article argues for a “gentle transition”: a deliberate, human-centered approach to maximize AI’s potential while safeguarding what makes us human. Can we design a future where trust and technology grow hand in hand? This article offers both a warning and a way forward.
AI’s narrative swings between extremes: some fear it will displace jobs, while others dismiss it as revolutionary hype. What bridges that gap? Trust. Across industries, from healthcare to manufacturing, implementing AI tools smoothly hinges on two keys: Human-AI collaboration and AI Trust—the cornerstones of sustainable human-AI synergy.
Picture this: a doctor using AI to cross-analyze 10,000 case studies and label all the anomalous areas in seconds, then applying her medical expertise to finalize a diagnosis. That’s Collaborative Intelligence in action—a dance where human creativity meets AI’s brute-force computation. Pioneered by Wilson & Daugherty1 in 2018, Human-AI collaboration means redesigning work so both grow smarter together.
But no one dances with a partner they don’t trust. AI Trust isn’t just believing the tech works—it’s relying on it to ethically enhance your work. In hospitals, trust means doctors relying on AI to shorten standardized screening workflows with high accuracy and to help reconcile differing diagnoses. Unlike one-off “acceptance”, trust is earned through transparency and consistency.
Now, the real magic happens when these two forces feed each other in a loop.
This flywheel has already spun in practice. When Siemens introduced AI visual inspection at its Amberg factory, skeptical workers argued machines couldn’t match their soldering expertise—until the AI spotted significantly more defects, including unanticipated, microscopic flaws. The real win? Workers then taught the AI environmental nuances (like humidity effects), creating a loop where humans refine the algorithms while the AI frees them to innovate. As one manager put it: “It’s not humans vs. machines—it’s humans teaching machines to see better.”
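The human-teaches-machine loop described above can be written down as a minimal human-in-the-loop routine: the AI proposes flags, the inspector confirms or rejects them, and the rejections tune the model. This is an illustrative sketch only—the anomaly scores, thresholds, and update rule below are hypothetical, not Siemens’ actual system:

```python
# Minimal human-in-the-loop sketch: the AI flags likely defects, a human
# reviews the flags, and the reviewer's corrections tune the model.
# All scores, thresholds, and the update rule are hypothetical.

def ai_flag(defect_scores, threshold):
    """Flag every part whose anomaly score exceeds the threshold."""
    return [i for i, s in enumerate(defect_scores) if s > threshold]

def human_review(flags, known_defects):
    """The inspector confirms or rejects each AI flag."""
    confirmed = [i for i in flags if i in known_defects]
    false_alarms = [i for i in flags if i not in known_defects]
    return confirmed, false_alarms

def refine_threshold(threshold, false_alarms, step=0.05):
    """Too many false alarms -> raise the bar: the human 'teaches' the AI."""
    return threshold + step * len(false_alarms)

scores = [0.2, 0.91, 0.55, 0.97, 0.60]   # hypothetical anomaly scores
known = {1, 3}                            # defects the inspector recognizes
threshold = 0.5

flags = ai_flag(scores, threshold)                     # the AI proposes
confirmed, false_alarms = human_review(flags, known)   # the human reviews
threshold = refine_threshold(threshold, false_alarms)  # the loop closes
```

Each pass around this loop makes the AI's flags more trustworthy and the human's review lighter—the flywheel in code form.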
One important point: you don’t need to force yourself to “trust AI.” Start small: let it automate repetitive in-table data processing. When it works, you’ll naturally hand it bigger challenges. And here’s the kicker: as AI handles routine tasks, you’ll rediscover your irreplaceable human edge—empathy, ethics, the ability to ask “What if?” That’s how we win the AI era: not by competing with machines, but by cooperating with them.
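“Start small” can be as concrete as a few lines of scripting. The sketch below uses a hypothetical sales table (the column names and figures are invented for illustration) to automate the kind of repetitive in-table chore—stripping stray whitespace, computing a derived column, totalling—that is a safe first task to delegate, leaving the human to review results instead of retyping them:

```python
import csv
import io

# Hypothetical monthly sales table, including the messy whitespace
# that makes manual re-entry tedious and error-prone.
raw = """region,units,unit_price
North, 120 ,9.5
South,80,9.5
East,  95,9.5
"""

def summarize(csv_text):
    """Clean each row, compute revenue per row, and total it."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    total = 0.0
    for row in rows:
        units = int(row["units"].strip())        # tolerate stray spaces
        price = float(row["unit_price"].strip())
        row["revenue"] = units * price           # derived column
        total += row["revenue"]
    return rows, total

rows, total = summarize(raw)   # total: 295 units * 9.5 = 2802.5
```

Once a chore like this runs reliably, assigning the tool a bigger task feels like a natural next step rather than a leap of faith.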
When Dylan Thomas urged us to “not go gentle into that good night,” he framed resistance as an act of defiance. But today, as AI reshapes our world, “gentle” takes on a radically different meaning. This isn’t about raging against the machine—it’s about upgrading with it, one thoughtful step at a time.
For businesses today, staying afloat isn’t about radical revolution or reckless leaps into automation. It’s about balance: adopting AI’s potential without losing sight of what makes us human. Think of it as a dance—a slow, steady waltz rather than wild rock—where innovation and tradition move gracefully across the floor. The era sets the stage and the music, but humans take the lead and choose their partners. Companies that master this rhythm don’t just survive; they thrive by blending machine precision with human intuition, and they’ll shine on the AI-era stage.
However, don’t mistake this gentle waltz for passivity. It’s about intentionally choosing how to adapt, ensuring AI serves our values rather than overriding them. Think of this period as building a bridge between yesterday’s workforce and tomorrow’s possibilities—one we cross confidently, not blindly, into the future.
In 2024, 47% of tasks in the general workplace (e.g., clerks drafting reports, nurses comforting patients) are led by humans, 22% are machine-driven, and the remaining 30% are hybrid (e.g., marketers using AI to personalize campaigns, doctors leveraging AI to spot patterns and aid diagnoses).
By 2030, these numbers will blur into a ‘gentle’ transition, with a near even split of 33% each.
Nevertheless, the most successful workplaces won’t obsess over the percentages. They’ll focus on practical synergy rather than replacing humans with AI. They’re the ones asking, “How can we make 1 + 1 = 3?”
So let’s rewrite Dylan Thomas’ plea for the AI age:
Do not go gentle into this new frontier—
Rage, rage against replacement.
Dance gently with the partners that amplify
what makes us human.
Let’s look at the human share of work tasks across industries. By 2030, the trends point in the same direction: machines will become vital, innovative partners, although the estimated percentages vary. In highly automated industries such as oil and gas and chemicals, declines in the human share average around 10%, since these sectors start from a relatively low baseline. In less automated industries like healthcare, retail, and professional services, declines range from 10% to 20%, with many close to 20%.
Moreover, we can group industries into two kinds according to their traits and their strategies for implementing new technologies, and consider the two futures:
The first kind comprises industries focused on complementing and augmenting human work, for instance government and hospital services. Here AI will be applied more cautiously in day-to-day interactions and will not be entrusted with final decision-making—the kind of decisions that can determine the fate of human groups or individuals.
The human amplifier group is the safe zone of the human-resource market in the AI revolution. Jobs that require a human touch—like empathy, counseling, or creative problem-solving—are harder to automate. Similarly, roles that involve complex manual tasks, such as surgeons or skilled tradespeople, are less likely to be fully replaced by machines.
However, in some specific areas, AI can bring a breath of fresh air. Studies4 show that AI-generated art offers a feasible way to address the limitations of traditional physical art materials, expanding the accessibility of treatment and improving the effectiveness of art therapy.
For industries oriented toward high efficiency and fueled by advanced technologies—where spreadsheets and speed are the native language—AI can be applied boldly to power automation. You get chatbots resolving claims before the coffee cools, or algorithms sniffing out fraud in milliseconds. But there’s a hidden cost: the soul of customer service risks becoming transactional.
Because these fields sit at the transformation frontier, tasks that involve data processing, standardized writing, or multilingual translation are prime candidates for automation. These are areas where machines excel, often outperforming humans in speed and accuracy. Machines aren’t stealing these jobs—they’re exposing work that was always mechanical.
For all its promise, the path to Human-AI collaboration is littered with pitfalls—some obvious, others invisible. Whether it is a century-old manufacturer or a Silicon Valley disruptor, these four barriers haunt every industry: costs, biases, AI whiplash, and the knowledge abyss.
On the one hand, in traditional firms, strategy managers are often torn between once-successful legacy systems and AI’s attractive siren song. The major issues are:
On the other hand, Silicon Valley companies aren’t immune either. Every new technology brings a hype cycle to navigate and position against. Tech firms face their own challenges:
In 2023, Ipsos dropped a truth bomb5 with a sweeping 31-nation survey that laid bare a stark divide in how the world views AI. Emerging economies—from Asia to Africa—embraced algorithms like long-lost allies, seeing in them a promise of progress and precision. In contrast, the so-called WEIRD countries (Western, Educated, Industrialized, Rich, Democratic) hesitated. For societies already questioning their governments and corporations, AI seemed less a solution than another layer of uncertainty—a love-hate relationship with technology laid bare. By 2024, the story grew more complex6: cracks in trust began to spread, a generational rift deepened, and Canada, a nation synonymous with tech optimism, now leads the world in AI distrust. Just 28% of Canadians trust algorithms with their data—a six-point freefall since 2023. This raises a warning flare: if skepticism can thrive in a society with Canada’s digital literacy and infrastructure, what hope remains for fractured democracies with so many major political challenges to solve?
Fast-forward to 2024, and the data paints a generational thriller. Gen Z and Millennials, digital natives raised on apps and algorithms, have become AI’s ambassadors. A striking 58% of Gen Z now trust machines to act fairly—11 points more than they trust their fellow humans. Millennials, too, lean into this techno-optimism, with 51% believing companies will safeguard their data through AI. But beneath the surface, unease stirs: Gen Z’s trust in AI’s data security dropped by six points in just one year. Meanwhile, Baby Boomers cling to analog instincts. For them, the numbers tell a different story: a razor-thin four-point gap separates those who trust AI’s fairness (45%) from those who still bet on humans (41%). To this generation, machines aren’t saviors—they’re disruptors, upending norms they’ve spent lifetimes navigating.
The message is clear: AI has become humanity’s moral mirror, reflecting our growing disillusionment with human systems. And as algorithms reshape workplaces worldwide, this trust crisis is no longer philosophical—it’s personal.
As a survey7 conducted by KPMG Australia and the University of Queensland shows, nowhere is the global trust divide more visceral than in workplaces. In India’s tech sector and China’s manufacturing zones, 75% of workers welcome AI as a collaborator. Contrast this with Finland’s factories and Japan’s offices, where fewer than 25% have the same attitude.
The divide appears hierarchically as well. Managers globally embrace AI (65% trust), while frontline workers are far more skeptical (36% trust). Across industries, healthcare surprisingly leads, with 44% of workers trusting AI (perhaps reflecting familiarity with diagnostic algorithms). But when HR deploys AI for promotions? Trust plummets to 34%, exposing our deepest fear: that algorithms might codify human prejudices rather than cure them. Education reveals another divide: university graduates, steeped in tech culture, report 46% trust in workplace AI; for those without a university education, that number drops to 32%.
The stakes transcend philosophy. As the data above show, divides run across countries, generations, social status, and industries. Trust in AI isn’t binary; it’s a fragile ecosystem where personal stakes dictate tolerance for risk.
In reality, employees hold differing views of disruptive AI adoption. Whether they experience it as acculturation or alienation, it raises an identity crisis at work. Employees can’t simply “adapt” to AI; the sustainable path is to rebuild professional identities around it. Person-Environment Fit Theory8 argues that workers thrive when they see AI as amplifying their strengths, not replacing them. For example, marketers using AI analytics to predict trends report feeling more creative, not less—they trade grunt paperwork for strategic thinking. But when AI is used to monitor employees or score “productivity”, trust evaporates. The same technology that promised liberation then feels like a digital leash.
Trust first, then thrive. Employees who trust AI are more likely to embrace AI collaboration, especially if they hold a flexible career orientation. These individuals see AI as a partner (Human-AI collaboration) in navigating their career paths, leading to greater career sustainability9. But here’s the catch: trust isn’t automatic. It’s built on the belief that AI will benefit, not undermine, their professional growth. Without this trust, even the most advanced AI tools risk going unused in the hands of skeptical employees.
Skeptics often argue that AI threatens jobs, but the reality is more nuanced. In sectors like healthcare diagnostics or the creative industries, AI isn’t replacing humans—it’s redefining workflows and driving innovation. The balance lies in ensuring that AI integration not only enhances efficiency but also safeguards employee agency and long-term career stability. Strategic, well-judged adoption positions both employees and organizations to lead. Trustworthy AI can bridge the gap between employees and AI, but only if employees are willing to let go of preconceived notions and embrace the potential of collaborative intelligence.
The organization is both the strategy-maker and a beneficiary of this loop, where it acts as a support multiplier. Employees can fit in and benefit from AI integration, but this advantage only scales when organizations actively support it. When employees feel the organization has their back, they turn the flexibility to pivot, learn, and adapt at AI speed into real competitive power. In the AI era, competitive edge isn’t just coded—it’s cultivated. This is where practical, scientific launch approaches come into play.
Human-AI collaboration is reshaping how frontline employees perform. By enhancing their role breadth self-efficacy and sparking positive emotions, this collaboration drives measurable improvements in service performance. Interestingly, while trust in robots amplifies the emotional benefits, it doesn’t influence the self-efficacy pathway. This duality highlights the complexity of AI’s impact, which depends on how employees perceive and interact with it.
For employees, generative AI is more than a productivity tool; it is a gateway to competitive advantage. By enabling boundary-spanning and boosting agility, AI helps employees connect ideas, teams, and opportunities in ways that were once out of reach. With organizational support, the benefits of AI are magnified, turning potential into tangible results. This interplay between technology and trust is what separates thriving employees from those merely keeping up.
Beyond the workplace, AI is sparking a deeper conversation about what it means to be human. Human-AI collaboration is pushing us to rethink resource use, cost-effectiveness, and even our values. But this transformation doesn’t happen overnight. It requires investment, a solid foundation, and a willingness to explore the unknown. As we unlock the potential of AI, we’re shaping the future of work, society, and humanity itself.
[1] Collaborative Intelligence: Humans and AI Are Joining Forces, Harvard Business Review, 2018. Available at: https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces
[2] The effect of human-robot collaboration on frontline employees’ service performance: A resource perspective, International Journal of Hospitality Management, April 2025. Available at: https://www.sciencedirect.com/science/article/pii/S0278431924003451
[3] Ibid.
[4] Study on AI-generated art in art therapy. Available at: https://www.sciencedirect.com/org/science/article/pii/S2561326X24006930
[7] Trust in AI: Global Insights 2023, KPMG Australia and The University of Queensland, 2023. Available at: https://assets.kpmg.com/content/dam/kpmg/au/pdf/2023/trust-in-ai-global-insights-2023.pdf
[8] Available at: https://www.sciencedirect.com/science/article/pii/S000187912300088X#bb0100
[9] Available at: https://www.sciencedirect.com/science/article/abs/pii/S000187912300088X
[10] Employee Acceptance for AI Based Knowledge Transfer: Conception, Realization and Results of an ELSI+UX Workshop, Procedia Computer Science, March 2024, Available at: https://www.sciencedirect.com/science/article/pii/S187705092400022X