Admin to Agentic
— Mohit Rajhans
The Trust Paradox: Where Billions Meet Doubt
Here’s the contradiction I keep seeing across media, education, and enterprise teams: capital is flowing, but coherence is missing. Organizations are buying AI, while employees are quietly routing around policy and tooling to get work done.
Microsoft's Work Trend Index puts it bluntly: roughly three out of four knowledge workers already use AI at work, and many do it without clear guardrails. The result is predictable—risk teams panic, executives ask for ROI, and frontline teams keep using whatever works. That gap is not a model problem. It's a leadership design problem.
What this page is (and isn’t)
This is a living “authority hub”: frameworks, evidence, case patterns, and practical moves you can steal for your next exec meeting. It’s not a product rollout pitch—and it’s not a breathless AI cheerleading session.
If you’re here, you’re likely dealing with:
- Shadow AI and data leakage anxiety
- Tool sprawl (Copilots, chatbots, agents—pick your flavor)
- Training gaps and “AI theatre”
- Unclear ownership: IT vs HR vs Ops vs Comms
- Pressure to prove ROI while trust is fragile
The 3-Stage Evolution
AI adoption isn’t a binary switch. It’s a progression—from task relief to outcome delivery—where the trust requirement increases at every stage.
Stage 1: Admin Work
Repetitive, rules-based tasks with minimal autonomy.
- Data entry, scheduling, routine reporting
- Invoice processing, ticket triage
AI role: automation + pattern recognition (efficiency, error reduction).
Stage 2: Augmented Work
Human-AI collaboration that improves decisions and throughput.
- Research synthesis, draft writing, analytics assistance
- Recommendations, forecasting, ideation
AI role: decision support + insight generation (human stays accountable).
Stage 3: Agentic Work
Goal-driven systems that act, monitor, and iterate under oversight.
- Autonomous optimization (campaigns, workflows, service ops)
- Multi-step execution with audit trails
AI role: outcome execution + continuous learning (governance is non-negotiable).
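To make the Stage 3 pattern concrete, here is a minimal sketch in plain Python of what "acts, monitors, and iterates under oversight" can look like: the agent proposes, a human gate approves, only approved actions execute, and every step lands in an audit trail. The names (propose_next_action, human_approves, execute) are placeholders for illustration, not any vendor's API.

```python
# Sketch of an agentic loop with a human approval gate and an audit trail.
# All callables passed in are placeholders supplied by the caller.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditTrail:
    entries: list = field(default_factory=list)

    def log(self, event: str, detail: str) -> None:
        # Every decision is timestamped so governance can review it later.
        self.entries.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "detail": detail,
        })


def run_agentic_task(goal, propose_next_action, human_approves, execute, max_steps=5):
    """Goal-driven loop: propose -> approve -> execute -> observe, with oversight each step."""
    trail = AuditTrail()
    trail.log("goal", goal)
    for step in range(max_steps):
        action = propose_next_action(goal, trail.entries)  # agent suggests the next move
        if action is None:                                  # agent believes the goal is met
            trail.log("done", f"stopped after {step} steps")
            break
        if not human_approves(action):                      # the human sign-off gate
            trail.log("rejected", str(action))
            continue
        outcome = execute(action)                           # side effects happen only here
        trail.log("executed", f"{action} -> {outcome}")
    return trail
```

The design choice worth stealing is that side effects happen in exactly one place, after the approval gate, and every proposal—approved or rejected—is logged. That is what "governance is non-negotiable" means in practice.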
Key question: Where does your organization sit on this spectrum—and where do you need to be to stay competitive without breaking trust?
Real-World Case Snapshots
Patterns I’m seeing (and how they evolve from Admin → Augmented → Agentic):
Media & Broadcasting: from scheduling to autonomous multi-platform optimization
Admin: manual scheduling and publishing → Augmented: audience insights + content packaging → Agentic: systems that adjust distribution decisions based on performance signals (with human sign-off).
Education: from LMS administration to adaptive learning pathways
Admin: course setup and support tickets → Augmented: tutoring support + feedback assistance → Agentic: adaptive pathways that adjust difficulty and sequencing based on progress (with educator oversight).
SMB Marketing: from templates to autonomous campaign loops
Admin: templated posts → Augmented: AI-assisted copy/design/testing → Agentic: budget allocation + creative iteration + reporting loops tied to outcomes (with constraints).
Performance Management: the “fairness” pressure-test
Employees are increasingly open to algorithmic support in evaluation—especially where human bias is suspected. The lesson isn’t “let AI judge people.” The lesson is: your current system has a trust problem, and AI is exposing it.
The Trust & Transformation Playbook
Bridging the divide between investment and trust isn’t a technical challenge; it’s a leadership challenge. Here are five moves that don’t require a moonshot budget—just operational courage.
- Start with transparency. Make AI usage visible. Define disclosure norms. Reduce the “black box” effect.
- Reskill from the ground up. Train on real workflows, not abstract modules. Make learning safe, messy, and supported.
- Design feedback loops. "Human in the loop" isn't a slogan—it's an operating model with capture, review, and iteration.
- Ethical guardrails first. Define what’s allowed / expected / forbidden before scale (and bake it into tooling + process).
- Measure trust metrics. Track confidence and adoption alongside productivity—trust is a leading indicator (a minimal scorecard sketch follows this list).
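Here is one way to make the trust-metric idea tangible: a simple scorecard rolled up on the same cadence as your productivity numbers. The three indicators (confidence pulse, adoption rate, disclosure rate) and the example figures are illustrative assumptions, not a standard.

```python
# Sketch of a trust scorecard tracked alongside productivity metrics.
# Indicator names and thresholds are illustrative, not a standard.

from statistics import mean


def trust_scorecard(survey_confidence, active_users, licensed_users,
                    disclosed_ai_outputs, total_ai_outputs):
    """Roll up three leading indicators: confidence, adoption, and disclosure."""
    return {
        "confidence": round(mean(survey_confidence), 2),            # e.g. 1-5 pulse survey
        "adoption_rate": round(active_users / licensed_users, 2),   # are people actually using it?
        "disclosure_rate": round(                                   # rough proxy for shadow AI
            disclosed_ai_outputs / max(total_ai_outputs, 1), 2),
    }


# Example: a team of 40 with 25 weekly-active users and mostly disclosed usage.
print(trust_scorecard([3.8, 4.1, 3.5], active_users=25, licensed_users=40,
                      disclosed_ai_outputs=120, total_ai_outputs=150))
```

If adoption climbs while confidence or disclosure falls, you are watching shadow AI form in real time—long before it shows up as a productivity or risk incident.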
What I’m building publicly
This hub grows over time: frameworks, scorecards, briefs, and “what changed this month” notes. The more evidence I publish, the more leverage you gain—and the more valuable the advisory becomes.
What you can expect here
- Board/ELT-ready one-pagers (no fluff, all signal)
- Producer-friendly angles (human stakes + real stats)
- Operating models for AI CoEs (ownership + governance)
- Automation patterns that don’t torch trust
- Canadian lens when data, policy, and culture matter
Signal Library (Domain Authority Engine)
This is the "two-sided market" layer: decision-makers get a credible signal feed; operators get practical artifacts. (Add links as you publish. Start small—ship weekly.)
Field Notes
Frameworks & Tools
Tip: keep each entry short, source-linked, and tagged by audience (HR, IT, Ops, Comms, Board).
Invite / Brief / Collaborate
If you want a rollout vendor, I’m not your guy. If you want a strategist who helps leaders redesign work, build trust, and align people + policy + systems—this is my lane.
Speaking
Keynotes + panels with evidence-led narratives and practical takeaways.
Workshops
Interactive sessions: scorecards, governance scenarios, role-based playbooks.
Advisory + CoE Design
Operating models, guardrails, measurement, partner strategy—built for real constraints.
Replace /contact, /speaker, /workshops with your actual Squarespace URLs.
Sources & Further Reading
- Goldman Sachs Research: AI investment forecast to approach $200B globally by 2025
- McKinsey: “Superagency in the workplace” (AI maturity and adoption barriers)
- Microsoft Work Trend Index 2024: “AI at Work Is Here. Now Comes the Hard Part.”
- Microsoft + LinkedIn summary: AI use at work, training gap, hiring signals
- Gartner: Workplace predictions for CHROs (includes algorithm feedback perception)
- University of New Hampshire: research on AI vs human evaluation trust
© 2025 Think Start Inc. • Update cadence: monthly (minimum). Weekly if you want authority faster.

