Agents Aren't Human — Stop Managing Them Like They Are

The Human Side of AI Transformation · 9 min read

You wouldn't do long division by hand. So why are you processing 200 Slack messages, 47 emails, and 12 status reports with your bare brain?

The biggest mistake in the AI discourse right now is anthropomorphizing agents. We talk about them like junior employees. We worry they'll "take our jobs" or "go rogue." We assign them personalities. We debate whether they "understand" things.

Stop. They're calculators. Really good calculators — but for information instead of numbers.

I've spent the last six months working with AI agents every single day. Not experimenting. Not "exploring use cases." Building production systems. And the single biggest unlock wasn't a better model or a smarter prompt. It was to stop treating agents like people and start treating them like infrastructure.

That reframe has a name. I call it The Cognitive Infrastructure Thesis. And it changes everything about how you should deploy AI in your business.

You Already Solved This Problem Once

In the 1970s, engineers resisted pocket calculators. Hewlett-Packard launched the HP-35 in 1972 — the first scientific pocket calculator — and engineers were torn. Real engineers did math by hand. Slide rules were the craft. Using a calculator was cheating.

By the 1980s, nobody cared. The calculator won. Not because it was smarter than humans. Because long division was never the point. The engineering was the point.

Then spreadsheets. Accountants who'd spent decades mastering ledger books watched VisiCalc do in seconds what took them hours. Same resistance. Same fear. Same outcome.

Math by hand became calculator. Data analysis by hand became spreadsheet. Information processing by hand is becoming agent.

I lived a version of this progression myself. In 2022, I was using GPT-3.5 through an uncomfortable web interface. The code it produced was maybe 50% correct. I'd spend hours rewriting everything it generated. The AI gave direction — I did the actual work. It felt like using a calculator that was wrong half the time.

Today I run multiple agents in parallel from my terminal, rebuilding enterprise systems in days that used to take months. The capability curve from "50% correct code in a chat window" to "production software in 72 hours" happened in under three years.

Each time the pattern is the same: resisted, normalized, obvious. We're in the "resisted" phase right now with agents. Give it 18 months.

The Volume Problem Nobody Talks About Honestly

Here's a number that should make you uncomfortable: knowledge workers face an interruption every 2 minutes during core work hours. That's from a Microsoft study analyzing over 31,000 knowledge workers. The average worker receives 117 emails and 153 Teams messages daily. It takes 23 minutes to fully regain deep focus after each interruption.

Do the math. There's almost no deep thinking left.
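Here's the back-of-envelope version, using the study's figures. The 8-hour workday is my own illustrative assumption:

```python
# Back-of-envelope: how much deep focus survives a day of interruptions?
# Interruption cadence and refocus time come from the Microsoft study
# cited above; the 8-hour core workday is an illustrative assumption.

WORKDAY_MIN = 8 * 60        # 480 minutes of core work hours (assumed)
INTERRUPTION_EVERY = 2      # minutes between interruptions (study figure)
FULL_REFOCUS = 23           # minutes to fully regain deep focus (study figure)

interruptions = WORKDAY_MIN // INTERRUPTION_EVERY   # interruptions per day
refocus_demand = interruptions * FULL_REFOCUS       # minutes of refocus "owed"

print(f"{interruptions} interruptions per day")
print(f"{refocus_demand} refocus minutes demanded vs {WORKDAY_MIN} available")
```

The refocus time demanded is more than eleven times the minutes in the workday. The brain never gets out of debt.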

Workers spend 60% of their time on "work about work" — chasing status updates, attending unnecessary meetings, switching between tools, coordinating across channels. Not thinking. Not creating. Not deciding. Processing.

I felt this before I understood it. When I was managing operations for a company with over 100 employees, the bottleneck was never talent. We had smart, capable people. The bottleneck was information volume. Every day was a flood of questions, updates, exceptions, approvals. I went from being a "computer person" to a manager drowning in signals my brain was never designed to process at that scale.

Your ancestors handled maybe a few dozen important pieces of information per day — weather, food, social dynamics in a group of 150. Now we process thousands. Daily.

Willpower won't fix this. Another "productivity system" won't fix this. Getting up at 5 AM won't fix this. These are band-aids on an arterial bleed.

The Cognitive Infrastructure Thesis

Here's the framework that changed how I think about all of this.

Physical infrastructure freed humans from physical carrying. Roads, bridges, railways, plumbing, electrical grids — each one removed a physical burden. Before plumbing, someone carried water. Before roads, someone carried goods on their back. Before the electrical grid, someone maintained a fire.

Nobody argues we should go back to carrying water from the river. Nobody calls plumbing "lazy."

Cognitive infrastructure frees humans from information carrying. Agents, automated workflows, intelligent routing, summarization systems — each one removes an information burden. Before agents, someone read every email. Someone triaged every support ticket. Someone compiled every status report.

The pattern is identical:

  • Physical burden → physical infrastructure → humans freed for higher-order physical work (sports, art, exploration)
  • Cognitive burden → cognitive infrastructure → humans freed for higher-order cognitive work (judgment, strategy, empathy, creativity)

Agents aren't replacements for human thinking. They're infrastructure that makes human thinking possible again — by handling the volume that was never meant for human brains.

This isn't theory for me. At the company where I managed 100+ people, I built targeted automation — document processing through OpenAI APIs, automated reminders, salary calculation systems, procurement workflows. Nothing flashy. Basic cognitive plumbing. Within weeks, the staff couldn't imagine working without it. Not because the AI was brilliant. Because it handled the information volume that was crushing their capacity to do actual work.

What Humans Are Actually Good At

When I talk to business leaders, they're afraid agents will replace judgment. The opposite is true. Agents are what finally give you time to exercise judgment.

Here's what humans do that no agent can:

Judgment in ambiguity. When the data is incomplete, contradictory, or unprecedented — humans navigate this. Agents freeze or hallucinate. A CEO deciding whether to enter a new market with imperfect information. A manager sensing that a team member is burning out before any metric shows it. I make these calls daily when directing agents — which approach to take, which trade-off to accept, when to scrap something and start over. The agent executes. The human decides.

Empathy and relationship. Closing a deal, managing a difficult conversation, building trust with a client over years. Agents can draft the email. Humans deliver the meaning. I learned this managing people from wildly different backgrounds at a dental operation — no amount of automation replaces the ability to read a room.

Creative direction. Agents can generate options. Thousands of options. But choosing which option matters — that's taste, vision, experience. That's human. When I rebuilt enterprise software in 3 days with agents, the speed came from the agents. The architecture decisions — what to build, what to skip, what matters for this specific business — came from 15 years of building systems.

Ethical reasoning. Should we do this? Not can we, not will it be profitable — should we? This question requires values, context, and moral weight that agents don't carry. Good actors must engage with AI precisely because bad actors already have. The technology amplifies whoever wields it.

Long-horizon strategy. Where should this company be in 5 years? What's the second-order effect of this decision? Agents optimize within constraints. Humans set the constraints.

These skills are currently being crushed under the weight of information processing. Your best strategic thinker is spending 60% of their day on "work about work." That's not a productivity issue. That's a waste of the most valuable resource in your organization.

The Anthropomorphizing Trap

The reason people resist this reframe is emotional, not logical.

We anthropomorphize agents because language models use words. Calculators use numbers — nobody thinks their calculator is alive. Spreadsheets use cells — nobody worries their spreadsheet will rebel. But agents use natural language, and language feels human. So we project human qualities onto them.

This creates two equally destructive errors:

False expectations. "My agent should understand the company culture." "My agent should know what I really mean." No. Your calculator shouldn't understand poetry either. Use the tool for what it does. Use yourself for what you do.

False fears. "Agents will replace all jobs." "Agents will make decisions they shouldn't." Calculators didn't replace mathematicians. Spreadsheets didn't replace accountants. Agents won't replace thinkers. They'll replace the parts of your job that aren't actually your job.

I had to learn this the hard way. When I started working with agents full-time six months ago, I kept trying to make them think like me. I'd write elaborate prompts trying to transfer my judgment. It failed every time. The breakthrough came when I stopped. When I accepted that the agent handles volume and I handle direction. That's when everything accelerated.

Gartner predicts 40% of enterprise applications will embed task-specific AI agents by end of 2026, up from less than 5% in 2025. The businesses getting this right aren't anthropomorphizing — they're building infrastructure. The ones stuck in debates about agent "trust" and "alignment" are watching their competitors ship.

What This Means For Your Business

If you run a company with 10-200 people, here's the practical takeaway:

Audit where your team spends time on information processing vs. actual work. Research says 60%. Your number might be higher. Every hour your people spend reading, sorting, summarizing, and routing information is an hour they're not doing the work you hired them for.

Identify the 3 biggest information bottlenecks. Where does information pile up? Where do people wait for summaries? Where do status updates require meetings that could be automated? Start there.

Deploy agents as infrastructure, not as employees. Don't hire an "AI assistant." Build cognitive plumbing. Route information automatically. Summarize automatically. Flag exceptions for human judgment. Let the routine flow through the pipes so your people can focus on the exceptions that actually need a human brain.
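What "cognitive plumbing" looks like in practice can be sketched in a few lines. Everything here is an illustrative assumption — the message shape, the tag names, the routing rules — not a prescription for any particular tool. The point is the pattern: routine items flow through automatically, and exceptions surface for a human.

```python
# A minimal sketch of cognitive plumbing: route the routine, flag the
# exceptions. All names and rules are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    subject: str
    body: str
    tags: set[str] = field(default_factory=set)

ROUTINE_TAGS = {"status-update", "receipt", "newsletter"}
EXCEPTION_TAGS = {"escalation", "legal", "customer-complaint"}

def triage(msg: Message) -> str:
    """Return 'digest' for routine items or 'human' for exceptions."""
    if msg.tags & EXCEPTION_TAGS:
        return "human"      # ambiguity and risk need human judgment
    if msg.tags & ROUTINE_TAGS:
        return "digest"     # summarize in a daily batch, no interruption
    return "human"          # unknown defaults to a person, not the void

inbox = [
    Message("ops", "Daily standup notes", "...", {"status-update"}),
    Message("client", "This invoice is wrong", "...", {"customer-complaint"}),
]
print([triage(m) for m in inbox])   # ['digest', 'human']
```

Note the default: anything the system can't classify goes to a human. Infrastructure should fail toward judgment, not toward silence.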

Measure what matters. Not "how many tasks did the agent complete" but "how much time did my team reclaim for judgment, strategy, and relationship work?" That's the metric that compounds.

The Identity Shift Nobody Warns You About

Here's what I didn't expect. Building cognitive infrastructure doesn't just change your workflow. It changes your identity.

I spent 15 years as someone who builds systems. Who writes code. Who solves problems by sitting down and grinding through them. Agents made me let go of that identity. The value isn't in writing the code anymore. It's in knowing what to ask for. It's in understanding the business problem deeply enough to direct agents toward the right solution.

That shift is uncomfortable. It feels like losing something. It took me six months of daily work with agents before it stopped feeling like loss and started feeling like leverage.

The same thing happened to engineers who picked up calculators. They didn't stop being engineers. They became better engineers — because they spent their time on engineering instead of arithmetic.

You won't stop being a leader when you build cognitive infrastructure. You'll start being one — because you'll finally have the cognitive space to lead.

The Infrastructure Will Win

The calculator won. The spreadsheet won. The agent will win. Not because any of them are smarter than humans. Because carrying — whether physical or cognitive — was never the point.

The point was always the thinking, the creating, the deciding, the leading. The carrying was just the tax we paid because we didn't have better infrastructure.

Now we do.

Build cognitive infrastructure. Free your people to do what humans actually do best. The organizations that don't will keep paying a tax their competitors stopped paying.


If you're running a business and want to figure out where cognitive infrastructure fits, let's talk. I help companies build systems that handle information volume so their people can focus on actual work.

References: Microsoft knowledge worker study via Speakwise 2026 Knowledge Worker Productivity Report. Enterprise AI adoption data from Gartner. HP-35 history from Smithsonian National Museum of American History.

The Pragmatic Builder

Weekly frameworks and lessons from building with AI agents. No hype, just what works.
