The Fragility of Trust in Hybrid Teams — and the Promises Leaders Break
Leading and building trust in a hybrid team is a bit like trying to toast marshmallows over a candle while someone keeps opening a window. You’re expected to conjure warmth and cohesion while nobody’s actually huddled round the flames — they’re scattered in kitchens and spare rooms, squinting at their own screens, trying to picture the marshmallow moment while the dog barks, children burst in, and the washing machine beeps accusingly in the background. Hardly the setting for golden goo and campfire camaraderie.
Trust, already elusive at the best of times, simply doesn’t travel well. In person, it moves in the margins — a nod, a sigh, a conspiratorial smirk. Online, those subtleties evaporate, and all you’re left with is a Teams message that could mean “Yes, absolutely” or “Over my dead body.” And just when leaders are finally learning how delicate digital trust can be, we’ve decided to invite AI agents into the team.
Which brings me to Rachel.
Trust in person vs. trust online
Let me introduce you to an impossibly elegant woman: poised, self-assured, cigarette smoke curling as she enters the room. Rachel. With grace and calm she sits opposite Deckard, Harrison Ford’s weary blade runner, and within minutes her very existence as a human being rests on the flicker of her irises under interrogation.
That’s hybrid trust in a nutshell. In the office, people read you through a hundred tiny cues: the arch of an eyebrow, the pause before you answer. Online? Those cues shrink to pixels, and leaders are left squinting like Deckard at Rachel — trying to decide whether what they’re seeing is authentic or an elaborate projection.
Microsoft’s 2024 Work Trend Index puts numbers to this: employees are 2.5 times more likely to mistrust leadership decisions when they arrive digitally. No wonder. Emails arrive stripped of warmth. Video calls flatten irony. “Fine by me” in chat can mean fine by me, fine by me?, or FINE BY ME (translation: prepare for a passive-aggressive deluge).
Why hybrid trust cracks so easily
Trust isn’t a great boulder you heave up a hill. It’s more like a tray of fragile teacups — every kept promise, every signal of attentiveness another cup balanced precariously. In hybrid settings:
- Latency breeds doubt. A three-hour silence looks like neglect, not lunch.
- Visibility evaporates. You miss the sighs, the slumps, the half-muttered asides.
- Myth-making thrives. Humans abhor a vacuum, so they invent stories to fill it. And you, dear leader, rarely feature as the hero.
This is why hybrid trust is precarious: the signals are faint, the gaps fertile for suspicion, and the stories merciless in their invention.
Do hybrid teams dream of real trust?
Philip K. Dick’s novel Do Androids Dream of Electric Sheep? — the book that inspired Blade Runner — asked what makes humanity authentic when machines can mimic it. Swap androids for AI agents and you’ve got today’s leadership dilemma. Can hybrid teams still dream of real trust when part of the “team” is synthetic?
When AI joins your team
Into this fragile balancing act we now drop a bowling ball called AI. These agents take notes, schedule meetings, draft emails, even dispense “empathetic” nudges like confetti.
Is this salvation or sabotage? Irritatingly, both.
The promise
- Consistency. AI doesn’t forget, doesn’t play politics, doesn’t sigh heavily in meetings.
- Levelling. Sometimes it elevates quieter voices, surfacing their contributions more evenly.
And yes, people notice the upside: early surveys show teams appreciate the sheer efficiency. Meetings run with greater precision, administrative drudgery diminishes, and the quiet person’s idea sometimes makes it into the deck where before it would have been overlooked. AI, in its best light, resembles the assistant every leader wishes they had.
The peril
- Synthetic empathy. A bot murmuring “I understand your concern” is Rachel-like: eerily persuasive until you realise the memory was manufactured.
- Surveillance creep. If the machine feels like it’s informing on you, trust doesn’t merely wobble — it implodes.
- The shifting anchor. Who do you trust now — your manager, your colleague, or the algorithm with its probability scores?
And people are already reporting the shadow side. Workers describe feeling scrutinised rather than supported, wondering if their “collaborator” is actually a note-taker for head office. Others say the feedback from AI feels ersatz — like someone purchased empathy wholesale and handed out a hologram version. Instead of cultivating trust, it leaves them second-guessing whether the humans around them mean what they say or are simply parroting what the tool has generated.
MIT Sloan research (2024) confirms the suspicion: employees recoil when leaders outsource relational work to AI. It’s Blade Runner déjà vu — it looks human, it sounds human, but beneath the veneer it’s circuitry.
What leaders can do
You can’t model your way out of this. You need acuity sharp enough to catch whispers, and principles robust enough to define AI’s role. Think of yourself less as a manager of tasks and more as Deckard with the Voight-Kampff test — constantly probing what’s real and what’s manufactured.
- Be explicit about promises — and test them.
Rachel believed in her implanted memories until someone tested them. Your team doesn’t need grand speeches; they need proof. Say exactly what you will deliver and by when, then invite others to hold you accountable. Let them “test your iris flicker” by checking whether your promises match your actions.
- Over-signal your humanity.
Replicants looked human, but their responses lacked nuance. Don't leave your warmth to chance. When working hybrid, heighten the signals: pick up the phone, add tone to your words, show your face more often. A leader's humanity must be unmistakable, so no one confuses your efficiency with a replicant's coldness.
- Name the machine for what it is.
Deckard didn’t pretend Rachel was human once the truth was clear. You shouldn’t pretend AI is you. If a report, note, or draft came from an algorithm, acknowledge it. Transparency preserves credibility; concealment makes you look like you’re hiding circuitry under your skin.
- Keep the “human work” human.
Replicants could mimic affection, but it rang hollow once revealed. Don’t outsource empathy, feedback, or recognition to AI. Deliver it yourself, with your own fingerprints on it. People trust what feels lived, not what sounds generated.
- Protect psychological safety — let people challenge the illusion.
Deckard doubted what he saw, and Rachel eventually doubted herself. Your people must be allowed to doubt too. Give them permission to question AI outputs and even your own assumptions without fear of reprisal. Trust is strengthened when the façade can be challenged without consequence.
The Rachel problem
Rachel believed she was human because her memories told her so. She looked the part, acted the part, even loved like a human — but her foundation was fabricated.
Hybrid trust can fall into the same snare. It appears solid, sounds convincing, but if people sense the foundation is synthetic — promises unkept, empathy feigned, humanity outsourced — the illusion disintegrates. And rebuilding it? Harder than lighting that campfire in the rain.
The moral of the story
There’s a moment in Blade Runner when Rachel, having just discovered that her memories were fabricated and her humanity an illusion, turns to Deckard and asks quietly: “Have you ever taken that test yourself?” It’s not just a line — it’s an existential dart. For a second, the hunter is hunted. The blade runner who has been interrogating her identity is forced to question his own.
Leaders should feel that same sting. It’s easy to sit in judgement of others, critiquing their self-awareness, their brittle trust, their supposed failings. But the sharper question is: have you tested yourself? Are your promises genuine, or are they illusions you’ve planted to protect your own self-image?
In hybrid teams, as in Blade Runner, the leaders worth following aren’t the ones interrogating everyone else. They’re the ones who dare to take the test themselves first — and prove, moment by moment, that their humanity is real.
To discuss practical ways to build real trust, and teams that can thrive, chat to us.
