Building in the Age of AI: What Humans Should Stop Doing

The fear is real. But most of it is pointing in the wrong direction.


Last week, a developer friend messaged me in a panic. He'd spent three hours on a feature, then watched an AI write essentially the same code in two minutes. He was genuinely scared. "What's the point of learning this stuff if machines just do it better?"

I've been building with AI tools daily for the past two years. I've shipped features with AI-generated code. I've made mistakes by trusting it too much. And I've learned where it helps and where it doesn't.

Here's the uncomfortable truth: if your job is pure execution—typing what someone else decided—AI is a real threat. But if your job involves judgment, context, and decisions? You're not competing with AI. You're amplified by it.

What We Used to Do (That Made Sense Before)

Think about how we built software five years ago.

We spent hours on first drafts. Writing was grinding out paragraphs one by one, revising as we went.

We memorized syntax and boilerplate. Good developers could write a complete API endpoint from memory. Knowing the exact imports, the right error handling patterns—that was expertise.

We researched everything manually. Want to compare marketing tools? Open twenty browser tabs. Spend half a day reading. Compile notes.

All of this made sense when creation was slow. You optimize for speed at the mechanical level.

But AI is now dramatically faster at all of these mechanical tasks. Which means something has shifted.

Why Judgment Now Matters More Than Execution

If AI can produce the first draft, write the boilerplate, and compile the research, what's left for humans?

Everything that matters.

Knowing what to build. AI can generate code for any feature you describe. But it can't tell you which feature your users need. Last month I had AI generate authentication flows for three different approaches—all technically correct. Choosing which one fit our architecture? That was my call.

Recognizing quality. AI produces output fast. But is it good? Two weeks ago, AI confidently generated a database query that looked perfect. It would have worked—but it would have been O(n²) on a table with 500k rows. Catching that requires understanding what you're looking at.
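The actual query from that story isn't reproduced here, but the pattern behind it is easy to sketch. This hypothetical Python example (illustrative names and data, not the production code) shows how a naive nested-loop match is O(n·m), while building an index first brings it down to O(n + m) — the same distinction a missing database index creates:

```python
# Hypothetical example: matching orders to users.
# All names and data are illustrative, not from a real codebase.

def match_naive(users, orders):
    """O(n*m): for every order, scan the entire user list."""
    result = []
    for order in orders:
        for user in users:  # full scan per order -> quadratic growth
            if user["id"] == order["user_id"]:
                result.append((user["name"], order["total"]))
    return result

def match_indexed(users, orders):
    """O(n + m): build a hash index once, then do O(1) lookups."""
    by_id = {u["id"]: u for u in users}  # one pass to build the index
    return [
        (by_id[o["user_id"]]["name"], o["total"])
        for o in orders
        if o["user_id"] in by_id
    ]

users = [{"id": i, "name": f"user{i}"} for i in range(3)]
orders = [{"user_id": 1, "total": 10}, {"user_id": 2, "total": 5}]

# Both return the same pairs; only the cost differs at scale.
assert match_naive(users, orders) == match_indexed(users, orders)
```

On 500k rows the difference between these two shapes is the difference between milliseconds and minutes — and it's exactly the kind of thing a plausible-looking AI answer won't flag for you.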

Asking precise questions. The difference between useful AI output and useless output is usually the question. "Write me a blog post" produces generic content. "Write 500 words explaining X concept to Y audience, using these specific examples" produces something useful.
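One way I keep that habit is to make the constraints explicit rather than implied. A minimal sketch (the helper and its field names are my own invention, not any real API):

```python
# Hypothetical helper: the structure matters more than the exact wording.

def build_prompt(task, audience, length_words, examples):
    """Turn an implicit request into an explicit, constrained one."""
    lines = [
        f"Task: {task}",
        f"Audience: {audience}",
        f"Length: about {length_words} words",
        "Use these concrete examples:",
    ]
    lines += [f"- {e}" for e in examples]
    return "\n".join(lines)

# The vague version leaves every decision to the model.
vague = "Write me a blog post"

# The specific version pins down audience, length, and examples.
specific = build_prompt(
    task="Explain database indexing",
    audience="junior developers",
    length_words=500,
    examples=["a phone book lookup", "a full table scan"],
)
```

Whether or not you template it in code, the principle holds: every decision you leave out of the prompt is a decision the model makes for you, usually generically.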

Here's my position: AI rewards thinkers and exposes task-followers. If all you do is execute what others define, you're replaceable. If you define what needs doing, AI makes you more capable than ever.

Five Habits Humans Should Stop

Here's what I've deliberately changed—things that made sense before but now waste time.

1. Stop Perfecting First Drafts

My biggest time sink was polishing as I wrote. Every sentence revised before moving to the next.

Now I let AI generate rough drafts. My job becomes editor, not typist. Same quality, fraction of the time.

2. Stop Memorizing Syntax

I used to pride myself on knowing obscure CSS properties from memory. That knowledge is now worth little.

What's worth a lot: understanding why patterns exist and when to use them. Concepts over commands. If you're learning to code in 2026, spend less time on syntax and more on architecture.

3. Stop Manual Research Compilation

Opening fifty tabs, reading each one, synthesizing—still happens, but differently.

Now I have AI compile an initial summary. I spot-check sources, dig into what's interesting, add the judgment layer. The research happens faster. The analysis is still mine.

4. Stop Linear Problem-Solving

Before AI, you planned carefully because execution was slow. With AI, you can explore multiple approaches simultaneously. Generate three solutions. See which makes sense.

Less planning, more experimenting. Faster cycles of "try it and see."

5. Stop Treating Speed as the Goal

Counterintuitive: if AI makes everything faster, shouldn't we produce more?

Not necessarily. AI gives you time back—use it for quality, not quantity. Instead of five mediocre blog posts, two good ones. Instead of ten rushed features, three solid ones with proper testing.

The safest skill in 2026 is judgment. AI removes the constraint that forced us to rush. Use that freedom for thinking, not more output.

Understanding AI Anxiety and FOMO

Two kinds of AI FOMO:

Fear of being left behind. "Everyone's using AI. If I don't learn, I'll be unemployable." Partially valid—but the tools are getting easier, not harder. The learning curve is going down.

Fear of being replaced. "AI will take my job." Usually misdirected. AI takes over tasks, not jobs. Jobs change—less routine work, more complex judgment.

Here's what matters: Can you do things AI can't?

If your value is "I type fast" or "I memorize information" or "I follow processes reliably"—yes, you should be concerned. Those are exactly what AI does well.

If your value is "I understand what customers need" or "I make good decisions with incomplete information" or "I manage competing tradeoffs"—you're fine. AI enhances those skills. It doesn't replace them.

What AI Is Actually Good At

After two years of daily use, here's what I trust AI for:

  • Generating first drafts. Code, content, documentation—anything where starting is the hard part.
  • Boilerplate and repetition. The fifteenth CRUD endpoint. Form validation. Tedious but not complex.
  • Research synthesis. Gathering information from multiple sources. Not perfect, but faster than manual.
  • Explaining concepts. Patient, customized explanations of new frameworks or codebases.
  • Translation and format conversion. Between languages, data formats, writing styles.

Human vs AI: Where Humans Still Win

And what AI struggles with:

  • Knowing what's important. AI summarizes documents but can't tell you which parts matter for your situation.
  • Understanding context. AI doesn't know your team dynamics, company history, customer relationships, or organizational politics.
  • Creative leaps. AI recombines existing patterns. Original ideas—truly new approaches—still come from humans.
  • Ethical judgment. Is this the right thing to build? Will this decision hurt anyone? AI gives capabilities. Ethics is up to you.
  • Long-term accountability. AI optimizes for the current prompt. Strategic thinking that balances short and long-term requires human judgment.
  • Relationship building. Trust, rapport, genuine connection—fundamentally human. AI mimics social interaction but can't create authentic relationships.

My Current Workflow With AI

Writing: I start by talking through the topic or typing rough notes. Then I might ask AI for an outline or to expand sections. First draft is collaborative. Editing and final polish are mine.

Coding: I describe what I want at a high level, AI generates initial code. I read carefully, fix what's wrong, refactor what's awkward. For complex logic, I work it out myself first, use AI for mechanical parts.

Research: AI compiles initial information. I verify anything important. Synthesis is AI-assisted; judgment is mine.

Email: Routine messages—AI drafts, I edit. Anything sensitive or relationship-dependent—I write myself.

The pattern: AI handles mechanical work. I handle judgment.

The Real Risk (And It's Not About Jobs)

What worries me isn't job replacement. It's subtler.

When tools do work for us, we lose the ability to do that work ourselves. GPS made us worse at navigation. Calculators made us worse at mental math. What does AI make us worse at?

I've noticed it in myself. I used to be able to hold a complex problem in my head for hours, slowly building connections until a solution emerged. Now I catch myself reaching for AI after five minutes. The patience feels harder to summon.

Maybe it's synthesis—holding complex information and making connections. Maybe it's the deep skill that comes from repetition and struggle. Maybe it's the willingness to sit with discomfort until understanding arrives.

I've started deliberately doing some work without AI—writing first drafts by hand, solving problems before asking for help—just to maintain the muscle. I'm not sure it's the right balance. But I'm aware something is at stake.

The question I keep asking: am I using AI as a tool, or am I becoming dependent on it?

What To Actually Do

If you're anxious about AI, here's practical advice:

Start using the tools. The fear is worse than the reality. Once you work with AI daily, you see its limitations clearly.

Focus on judgment skills. Reading code critically. Understanding user needs. Making trade-off decisions. These are what matter.

Stay curious, not panicked. Things are changing gradually. You have time to adapt.

Don't skip fundamentals. AI writes code, but you still need to understand what good code looks like. Use AI to accelerate learning, not to skip it.

Build things. The best way to understand AI's role is using it in real projects. Building teaches you what discussions can't.

What I Believe

Here's where I stand after two years of building with AI daily:

AI doesn't make humans obsolete. It exposes what was always human and what was always mechanical. The mechanical work that padded our days is disappearing. What's left is the thinking.

If your identity was "I type fast" or "I know the syntax"—that's uncomfortable. Those things felt like skill, but they were always closer to labor.

If your identity is "I understand problems" or "I make good decisions" or "I build things that matter"—AI makes you more capable than ever.

I'm not reacting to AI. I'm building with it. There's a difference.

The people who thrive won't be the ones who fear AI or the ones who over-rely on it. They'll be the ones who understand what it is—a powerful tool for mechanical work—and stay sharp at the things it can't do.

Judgment. Context. Responsibility. These are not features to be automated. They're what makes the work human.

That's the principle I'm operating on. So far, it's working.


This is what I'm figuring out as I go. If you're navigating the same questions, I'd love to hear how you're thinking about it. Find me at dharmikjagodana.com or on Twitter.
