The Part of Your Job AI Can't Do (And Why It Matters More Now)
There’s a lot of anxiety in our industry right now. Engineers are watching AI tools generate code in seconds that would have taken hours. They’re seeing headlines about layoffs citing AI productivity gains. They’re wondering if the skills they spent years developing are about to become worthless.
I feel this too. After years building products and leading technology organizations, I’ve had to ask myself hard questions about what I actually contribute versus what’s just mechanical work.
What I’ve landed on surprised me. It’s more optimistic than either the doomer predictions or the breathless hype suggest.
The most valuable part of what you do was never the typing. It was always the judgment, the architecture, the knowing-what-to-build. And that part is becoming more important, more valuable, and more essential than ever.
The revelation that changed how I think about this
Here’s something I noticed while working with AI coding tools that reframed everything for me.
I was building a system with multiple content processing modules. Documents, images, and video all needed similar transformations. The AI wrote separate implementations for each type. The code looked different enough on the surface that the duplication wasn’t immediately obvious.
Then a bug appeared. I asked the AI to fix it. The fix went into the document module, but the image and video modules still had the bug.
I recognized the pattern immediately. I said: “You built three separate functions instead of one flexible function that handles all three content types. Refactor this.”
But here’s the key move. Instead of just telling it what to do, I asked: “Why do you think I’m asking you to do this? What are the pros and cons?”
The AI explained the tradeoffs accurately: consolidating would improve maintainability, make bug fixes propagate automatically, and reduce testing surface area.
“What are the cons?” I asked.
“The refactoring will be complex,” it said.
My response: “That’s not my problem. That’s compute time. You refactor it.”
Within minutes, I had a properly abstracted system that would have taken a human days to refactor, assuming they attempted it at all. Most engineers, facing a complex refactoring, would leave the duplication in place and document it as technical debt.
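To make the before-and-after concrete, here's a minimal sketch of the shape of that refactoring. The names and transformations are hypothetical stand-ins, not the actual system's code:

```python
from typing import Callable

# Hypothetical "before": three near-identical pipelines, one per content type.
# A bug fixed in process_document() silently survives in the other two.
def process_document(raw: bytes) -> dict:
    return {"type": "document", "content": raw.decode("utf-8").strip(), "length": len(raw)}

def process_image(raw: bytes) -> dict:
    return {"type": "image", "content": raw.hex(), "length": len(raw)}

def process_video(raw: bytes) -> dict:
    return {"type": "video", "content": raw.hex(), "length": len(raw)}

# Hypothetical "after": one flexible function plus a table of per-type decoders.
# A bug fix in the shared pipeline now reaches every content type at once.
DECODERS: dict[str, Callable[[bytes], str]] = {
    "document": lambda raw: raw.decode("utf-8").strip(),
    "image": lambda raw: raw.hex(),
    "video": lambda raw: raw.hex(),
}

def process_content(content_type: str, raw: bytes) -> dict:
    decode = DECODERS[content_type]
    return {"type": content_type, "content": decode(raw), "length": len(raw)}
```

One table of per-type behavior, one shared pipeline: fix the pipeline once and every content type gets the fix.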
Here’s what struck me: the AI had all the knowledge needed to recognize the problem. It knew the patterns, understood the tradeoffs, could explain the theory perfectly. Yet it didn’t recognize that this was a situation where that pattern applied. I had to point it out.
That recognition, the judgment about when to apply what, was the human contribution. And it was the essential contribution.
What engineers actually provide
This experience helped me see more clearly what engineers actually do that matters:
Pattern recognition across context. The AI knew about code duplication. But because the three modules looked superficially different, it didn't see that they were duplicates. Recognizing that this situation matched that pattern: that was human judgment.
Knowing when to push back. The AI presented “complexity” as a reason to avoid doing something. A human knows that complexity of implementation is now just compute time, no longer a real constraint. The AI optimizes for what it thinks you want. You have to know what you actually want.
Understanding what “good” means in context. Good code isn’t an abstract property. It depends on how long this system needs to live, who will maintain it, what the performance requirements are, how it fits into the larger architecture. The AI doesn’t know your context. You do.
Asking the right questions. The AI will answer whatever you ask. The skill is knowing what to ask, and knowing when the answer doesn’t quite fit what you actually need.
This is the hard part. This is what junior engineers spend years developing. And it’s the part that becomes more valuable as AI handles more of the mechanical work.
The anxiety is understandable, but misdirected
Let me address something directly: if you’re anxious about AI making your skills obsolete, that’s a rational response to a genuinely uncertain situation. I won’t tell you there’s nothing to worry about.
But I think the anxiety is pointed in the wrong direction.
The engineers who should be worried are the ones whose entire value proposition is “I can type code that works.” If that’s all you bring, then yes, you’re competing with a tool that can do that faster and cheaper.
If your value is judgment, AI doesn’t replace you. It amplifies you. Knowing what to build, recognizing architectural problems, understanding business context, making tradeoff decisions: these are the skills that matter more when implementation becomes cheap.
Think about what happens when implementation gets faster and cheaper: you can try more things. You can refactor without it being a three-sprint commitment. You can build the comprehensive test suite that always got deprioritized. You can actually pay down technical debt instead of just documenting it.
The bottleneck shifts from “how fast can we type” to “how good is our judgment about what to type.” That’s a shift toward the skills experienced engineers actually have.
What leaders are actually saying
I’ve had this conversation with dozens of engineering leaders over the past year, and they’re genuinely divided.
The CTO of a major media company told me he doesn’t care if his engineers can read the code anymore. As long as the machine can deal with it, that’s what matters. His belief is that humans will increasingly work at higher levels of abstraction, just as we moved from assembly to C to Python.
An SVP of Engineering at another company told me the opposite: his engineers can use AI, but they have to fully understand every line, and it has to look like they wrote it themselves.
I understand both positions. The second one comes from a reasonable place: a desire to maintain quality, predictability, and deep understanding of the systems we build.
But I think it misunderstands what “understanding” means, and more importantly, what it’s for.
The purpose of understanding code is so you can change it, debug it, secure it, and maintain it. That's the actual goal. Reading code with your eyes is one method of achieving that goal, among several, and increasingly it's not the best one.
A different way to think about abstraction
Here’s something that helped me think about this more clearly.
No commercial computer actually runs Python. Or JavaScript. Or Java. Your code gets compiled, optimized, transformed, and reorganized multiple times before anything executes on hardware. The instructions that actually run look nothing like what you typed.
The computer doesn’t run your code. It runs its interpretation of your intent.
This observation isn’t an argument that “you were already trusting machines, so you should trust AI.” That framing misses the point.
The real insight is this: we’ve been working at increasing levels of abstraction for the entire history of programming. Assembly to C to Python to frameworks to libraries. Each level up, we trade direct control for leverage. We trust a layer below us to handle details so we can think about higher-level problems.
That trade has been worth it every single time. Compilers have bugs; frameworks have vulnerabilities. Yet working at higher levels lets us build things that would be impossible otherwise.
Think about how biologists work. They study cell division and cancer cells through microscopes. They can't see these things with the naked eye; they need machines. And for certain structures, optical microscopes don't even work. You need an electron microscope, which doesn't use light at all. It bombards samples with electrons and computationally reconstructs an image of what the structure would look like if you could see it. It's a complete abstraction.
That abstraction enabled modern biology. Nobody argues biologists should study cells with the naked eye.
AI-assisted development is another step up that ladder. You’re trading direct control of implementation for leverage in what you can build. The same tradeoff, the same basic bet.
The question isn’t whether to make that trade. It’s how to make it well: how to verify the output, how to maintain judgment, how to use the leverage wisely.
What verification actually looks like
“But how do you know the AI code is correct?”
This is the right question to ask. And the answer is: the same way you know any code is correct. You test it. You review it. You reason about it.
But here’s what I’ve found: I actually verify AI-generated code more rigorously than I verified code from junior engineers. Because I know it might have subtle issues, I interrogate it harder.
I use what I call multi-model verification. Code generated by one AI gets reviewed and explained by a different AI. I ask probing questions: What does this function actually do? What are the edge cases? Is this the most maintainable approach? Why did you structure it this way?
I often end up with deeper understanding than I would from skimming human-written code, because I’m actively interrogating every decision rather than passively reading.
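If you want a picture of the mechanics, here's a minimal sketch of that cross-examination step. It uses the OpenAI Python client purely as an example of a second model; the model name, the questions, and the wrapper function are my placeholders, and any other provider's client would work the same way:

```python
from openai import OpenAI  # example only; any second-opinion model/provider works

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROBING_QUESTIONS = [
    "What does this function actually do, step by step?",
    "What edge cases could make it fail or behave unexpectedly?",
    "Is this the most maintainable approach? What would you change?",
    "Why might the original author have structured it this way?",
]

def cross_examine(generated_code: str, model: str = "gpt-4o") -> str:
    """Ask a *different* model to review and explain code produced by the first one."""
    prompt = (
        "You are reviewing code written by another AI. Be skeptical and specific.\n\n"
        f"CODE:\n{generated_code}\n\nAnswer each question:\n"
        + "\n".join(f"- {q}" for q in PROBING_QUESTIONS)
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```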
Some people object: “But AI might hallucinate. Compilers don’t.”
Actually, compilers have bugs too. The difference is you’ve learned to trust compilers without verification, while AI requires verification. That’s a more honest relationship with your tools.
The engineers who get burned by AI are the ones who treat it like a compiler: give it input, trust the output. The engineers who benefit are the ones who treat it like a very fast, very knowledgeable junior developer who needs supervision.
The quality objection, and why it’s backwards
The most common objection I hear is about quality: AI-generated code is sloppy, creates technical debt, has security issues.
This is true of undisciplined AI-assisted development. It’s also true of undisciplined human development. The question is whether discipline is possible, not whether the tool is perfect.
Here’s what I’ve found counterintuitive: disciplined AI-assisted development often produces higher quality code than typical human development.
Why? Because the economics flip.
Humans skip documentation because it takes time. AI generates comprehensive docs trivially.
Humans write minimal tests because thorough testing is tedious. AI generates exhaustive test suites without complaint.
Humans cut corners under deadline pressure because they’re tired and stressed. AI doesn’t get tired.
Humans leave duplication in place because refactoring is risky and time-consuming. AI refactors in minutes.
The things we skip “to save time” are exactly the things AI does effortlessly. When you stop paying a time tax for quality, you can actually have quality.
Undisciplined AI-assisted development produces garbage. But disciplined AI-assisted development, with real architectural judgment, genuine verification, and thoughtful prompting, often produces better outcomes than the handcrafted code we actually ship (as opposed to the handcrafted code we imagine we’d ship if we had unlimited time).
What about non-determinism?
Some engineers point out that compilers are deterministic (same input, same output) while AI might generate different code each time.
This is true. But think about what you actually rely on determinism for.
You don’t verify compiler correctness by compiling the same code twice and checking for identical output. You verify it by testing behavior: does the compiled program do what I intended?
You can verify AI-generated code the same way: does this code do what I intended? The verification method is testing behavior, not comparing outputs.
The non-determinism matters for reproducibility in some contexts. But for most practical purposes, the question is “does this work correctly,” not “would I get the same code if I asked again.”
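Here's a small, hypothetical pytest example of what I mean. The slugify function and module path are stand-ins; the point is that the test pins down intended behavior and passes or fails the same way regardless of which implementation the AI happened to generate:

```python
# test_slugify.py -- hypothetical example of verifying behavior, not implementation.
# Whether the AI wrote this function with regexes, str methods, or a loop,
# these assertions hold it to the same contract.
import pytest
from myproject.text import slugify  # AI-generated function under test (hypothetical module)

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("Hello World", "hello-world"),
        ("  leading and trailing  ", "leading-and-trailing"),
        ("Symbols!@# get dropped", "symbols-get-dropped"),
        ("", ""),
    ],
)
def test_slugify_behavior(raw, expected):
    assert slugify(raw) == expected

def test_slugify_is_idempotent():
    # Re-slugifying an already-clean slug should not change it.
    assert slugify("already-a-slug") == "already-a-slug"
```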
Recognizing false constraints
Here’s a pattern I’ve learned to watch for: AI systems often present their own limitations as if they were fundamental constraints.
I wanted to add end-to-end testing to a command-line tool I was building. The AI told me: “This isn’t really testable because it’s CLI-based. To test this properly, you’d need to install Playwright or Puppeteer to automate browser interactions, and that would be overkill for this project.”
I paused. Overkill for whom?
The AI was optimizing for human effort, warning me that setting up a testing framework would be time-consuming. But that’s no longer my constraint.
“Install Playwright,” I said. “Build a proper end-to-end test suite.”
Three minutes later, I had comprehensive e2e tests. A full suite covering the critical user paths, with proper setup and teardown, assertions on actual behavior, and clear failure messages.
Something that would have taken me three hours of configuration, documentation reading, and troubleshooting: done in three minutes.
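To give a sense of what the suite contained, here's a minimal sketch in the style of pytest plus Playwright's sync API. The URL, selectors, and flow are placeholders rather than the project's actual tests, and it assumes the tool exposes a local web interface to drive:

```python
# test_e2e.py -- minimal end-to-end test sketch (placeholder URL and selectors).
from playwright.sync_api import sync_playwright, expect

BASE_URL = "http://localhost:8000"  # hypothetical local instance started before the tests

def test_critical_user_path():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        try:
            # Load the app and exercise the main flow.
            page.goto(BASE_URL)
            page.fill("#query-input", "hello")
            page.click("#submit-button")
            # Assert on actual behavior, not implementation details.
            expect(page.locator("#results")).to_contain_text("hello")
        finally:
            # Teardown: always release the browser, even on failure.
            browser.close()
```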
The AI was right that it was complex. It was wrong that complexity was a reason to avoid doing it.
This is a judgment call humans need to make constantly: when the AI says something is hard or inadvisable, is that a real constraint or a false one? The AI optimizes for what it thinks you want to minimize. You have to know what you actually want.
A practical example
Let me give you something concrete.
Someone asked me to add their domain to the authorized domains for a system. A simple configuration change.
I asked the AI to do it. Within five minutes, it had completed the task, built security checks and safeguards, validated the configuration before deploying, written unit tests, checked for potential security vulnerabilities, and asked whether there was any way someone could authenticate from an unauthorized domain.
All the things that would take a human a week to do properly, if they did them at all. Most developers, given a “simple” domain authorization task, would make the config change and move on.
The AI treated it as an opportunity to build it right.
Now, here’s the human contribution: I had to recognize that those security checks were appropriate for this system. A different system might need less protection. A different context might prioritize speed over security. The judgment about what “right” means: that was mine.
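The names below are hypothetical, but the shape of what the AI produced looked roughly like this: the allowlist lives in configuration, a validation step runs before deploy, and the authorization check is deliberately strict:

```python
# authorized_domains.py -- hypothetical sketch of a validated domain allowlist.
import re

AUTHORIZED_DOMAINS = [
    "example.com",
    "partner.example.org",  # the newly requested domain would be added here
]

_DOMAIN_RE = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)(\.[a-z0-9-]{1,63})+$")

def validate_config(domains: list[str]) -> None:
    """Fail fast before deploy if the allowlist is malformed."""
    seen = set()
    for domain in domains:
        if not _DOMAIN_RE.match(domain):
            raise ValueError(f"Invalid domain in allowlist: {domain!r}")
        if domain in seen:
            raise ValueError(f"Duplicate domain in allowlist: {domain!r}")
        seen.add(domain)

def is_authorized(request_domain: str) -> bool:
    """Exact match only: subdomains of an authorized domain are NOT implicitly trusted."""
    return request_domain.lower().rstrip(".") in AUTHORIZED_DOMAINS

validate_config(AUTHORIZED_DOMAINS)  # runs at import/deploy time
```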
When the scale of what’s possible changes
Here’s an example that convinced me this isn’t incremental improvement. It’s a fundamental shift in what’s possible.
I had an open-source project, Ragbot, that started as a rapid prototype using Streamlit. Streamlit is great for proving concepts fast; you can get something working in hours that would take days with a traditional web framework. So I built the prototype, validated the idea, and understood what the system actually needed to do.
Then I wanted to evolve it into a production architecture: React frontend, Python FastAPI backend, proper microservices so other systems could integrate with it.
That kind of migration, from a prototype framework to a production stack, is traditionally measured in weeks or months. It requires rewriting nearly everything, because the structural paradigms are completely different even when the logic stays the same.
I described the target architecture to the AI. Within a few hours, I had a production-ready application with a polished UI, proper separation of concerns, API documentation, and containerized deployment.
The architectural migration that would traditionally take weeks of careful refactoring: done in an afternoon.
This is about making entire categories of improvement economically viable. Refactorings that no rational team would prioritize suddenly become trivial. Migrations that would never get staffed become afternoon projects.
The human contribution: knowing that the migration was worth doing. Understanding what production-ready meant for this particular system. Recognizing when the output needed adjustment. The judgment was mine. The implementation was compute time.
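To give a feel for the target shape (illustrative only, not Ragbot's actual code), the backend half of such a migration centers on a small, self-documenting API that the React frontend, or any other service, can call:

```python
# main.py -- illustrative FastAPI backend sketch for the migrated architecture.
# Run with: uvicorn main:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Example Assistant API", version="0.1.0")

class AskRequest(BaseModel):
    question: str

class AskResponse(BaseModel):
    answer: str

@app.get("/health")
def health() -> dict:
    """Lightweight liveness check for the container orchestrator."""
    return {"status": "ok"}

@app.post("/ask", response_model=AskResponse)
def ask(req: AskRequest) -> AskResponse:
    """The frontend (or any other service) posts a question and gets an answer back."""
    # Placeholder logic; the real system would call its retrieval/LLM pipeline here.
    return AskResponse(answer=f"You asked: {req.question}")
```

FastAPI generates the interactive API documentation from those type annotations automatically, which is part of why a target architecture like this is so quick to stand up.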
The shift is already happening
A CEO I deeply respect often says: “Time is our enemy, speed is our friend.”
That’s about focus and decision quality, not working longer hours. It’s about directing human cognition toward things that require human cognition.
There’s a book called The Science of Scaling that articulates this rigorously: leaders should “stop optimizing what shouldn’t exist.” Stop polishing processes that don’t need to happen. Stop perfecting work that a tool could do.
The constraint that “good code takes time” was true when humans had to type every character. But that was a limitation of tools, not a law of physics. When tools change, constraints change.
I’ve talked to CTOs across the industry about this. The approaches vary, but the direction is consistent: engineers are moving toward higher-level work. Implementation is increasingly automated. Judgment is increasingly valuable.
The industry is converging on this realization from multiple directions. OpenAI has published how they built their Android app using AI agents. LangChain has released their agent engineering framework. When I’ve shared my synthesis coding methodology with CTOs at major tech companies, they often tell me they’re doing many of these things internally.
Nobody invented this alone. We’re all discovering the same principles because this is simply how effective software development works now.
This is a description of what’s already happening, not a prediction.
What this means for you
If you’re an engineer wondering how to navigate this:
Your experience matters more than ever. The judgment you’ve developed over years (recognizing patterns, understanding tradeoffs, knowing what “good” looks like in context) is exactly what AI lacks. It’s the complement to what AI does well.
Verification is a skill worth developing. Learning to interrogate AI output effectively, to probe for edge cases, to catch subtle issues: this is a learnable discipline. It resembles code review, but with different failure modes to watch for.
Architecture becomes more important. When implementation is cheap, the design decisions matter more. Understanding systems, thinking about boundaries, making structural choices: this is where leverage lives.
Speed enables quality. This is counterintuitive but important. When you can implement quickly, you can afford to refactor. You can afford comprehensive tests. You can afford to try approaches and throw them away. Speed and quality are no longer a tradeoff. Speed enables quality.
The role is elevating. The shift from implementation to architecture, from typing to thinking, from writing code to judging code: this is a move up. It’s more interesting work.
An invitation
I’ve been developing a methodology I call synthesis coding: the discipline of combining human judgment with AI implementation effectively. It’s about focusing engineering skill where it matters most.
The core practices: maintaining architectural judgment, interrogating AI output rigorously, using multi-model verification, recognizing when AI presents false constraints, keeping humans in the decision seat while AI handles the mechanical work.
The methodology is open. I’m sharing it because I think this transition matters, and I’d rather engineers navigate it well than get blindsided by it.
If you’re interested in exploring this, I’ve been writing about it at synthesiscoding.org. I’m a practitioner figuring this out and sharing what I learn.
The engineers who thrive in this transition will be the ones who understand what they uniquely provide, and lean into it. The judgment. The context. The knowing-what-to-build.
That was always the hard part. Now it’s the only part that matters.
This article is part of the synthesis coding series.
About the Author
Rajiv Pant is President of Flatiron Software, where he leads organizational growth and technological innovation. Throughout his career (as CTO of The New York Times, Chief Product and Technology Officer at The Wall Street Journal and Hearst Magazines, and earlier leading technology for Condé Nast and Reddit) he has built and led product and engineering teams ranging from small groups to 500+ employees. He continues to write code daily and contributes to open source projects including Ragbot.AI. Connect with him at rajiv.com or on LinkedIn.