Agentic Code Through a Photographer's Lens: Part 2
The Evolution Continues
Last December I wrote about my experience with agentic coding after almost a year of experimentation, trial, and error. In that post I compared the transition to AI-assisted coding to photographers' move from film to digital. Just as photographers initially resisted digital cameras despite their potential, many developers approached AI coding with similar skepticism.
Less than six months later, not only has the technology improved dramatically, but the market has validated what many of us early adopters suspected: this represents a fundamental shift in how software gets built. Forever.
From Agentic SDLC to "Vibe Coding"
The phenomenon I described last year has been given a catchier name by AI pioneer Andrej Karpathy: "Vibe Coding." This term perfectly captures the intuitive, collaborative flow that emerges when working with AI coding assistants. It is a less rigid approach to programming that emphasizes conveying intent, receiving suggestions, and refining iteratively.
Beyond the Code
While "vibe coding" excels at code creation, enterprise software development encompasses far more than writing code. A mature development practice considers requirements gathering, automated testing, and deployment strategies.
The full promise of agentic software development will only be realized when AI assists throughout the entire development lifecycle. We're already seeing early examples of this expansion, with tools beginning to help with test generation, documentation, and even requirements analysis.
Market Validation: The Cursor Phenomenon
The market has delivered a clear verdict on the value of these tools. Cursor, which I mentioned in my original post, is now valued at an astounding $9 billion, having increased its Annual Recurring Revenue by $200 million in just four months. This extraordinary growth confirms that enterprises recognize the transformative potential of these platforms.
OpenAI's Strategic Play with Windsurf
In another significant development, OpenAI acquired Windsurf, the second-largest player in the space behind Cursor. I've been using both platforms extensively, and Windsurf has advanced remarkably since last year, to the point where I now prefer it to Cursor for many tasks.
That said, OpenAI's acquisition signals something profound about coding's role in the AI ecosystem. Beyond the immediate business advantages, I suspect there's a strategic element related to reinforcement learning. Code provides unambiguous feedback. It either works or it doesn't, passes tests or fails them. This clarity offers a powerful reinforcement signal for AI models, potentially driving improvements that extend well beyond coding applications.
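To make the reinforcement-signal point concrete, here is a minimal illustrative sketch (not any lab's actual training code) of why code gives such clean feedback: a candidate program either passes its tests or it doesn't, yielding an unambiguous binary reward. The function names and test cases below are hypothetical.

```python
# Illustrative sketch: tests as an unambiguous 0/1 reward signal.
# The function name "square" and the reward scheme are hypothetical examples.

def binary_reward(candidate_src: str, test_cases: list[tuple[int, int]]) -> float:
    """Return 1.0 if the generated function passes every test, else 0.0."""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)          # run the model-generated code
        fn = namespace["square"]                # the function we asked the model for
        passed = all(fn(x) == expected for x, expected in test_cases)
        return 1.0 if passed else 0.0
    except Exception:
        return 0.0                              # crashes count as failure, too

tests = [(2, 4), (3, 9), (-1, 1)]
good = "def square(x):\n    return x * x"
bad  = "def square(x):\n    return x + x"
print(binary_reward(good, tests))  # 1.0
print(binary_reward(bad, tests))   # 0.0
```

There is no partial credit and no ambiguity to argue about, which is exactly the property that makes code such an attractive domain for reinforcement learning.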
The Enterprise Arms Race
In my original post, I predicted that "software development teams will inevitably get smaller and more productive" and that "every aspect of the software development lifecycle will see tremendous gains." These predictions are materializing even faster than I anticipated. However, it is also entirely possible that development teams will get larger as companies elect to build even more tools while the cost of development continues to drop.
Everything we do in enterprises today is driven by software. The ability to build and maintain high-quality software efficiently is no longer just an IT concern. It is at the heart of running a successful enterprise. This is why a mature software development practice, powered by agents, has become an arms race. Organizations that master these capabilities will operate at velocities their competitors simply cannot match.
The Legacy System Dream Coming True?
I concluded my original post with a "long shot" prediction: that agentic coding might finally help us move away from legacy systems that persist mainly because the original developers are long gone. A year later, this dream seems increasingly attainable. The speed at which these tools can help understand, refactor, and test legacy codebases is transformative.
Setting aside IDEs like Cursor and Windsurf, the introduction of Anthropic's Claude Code and OpenAI's Codex CLI - two command-line agents - has been game-changing. I've used both to reverse engineer legacy applications, generate user stories, and even produce detailed documentation. This is yet another reminder of how much focus frontier model builders are putting on coding with agents.
The ability to quickly parse, analyze, and explain thousands of lines of legacy code has traditionally been one of the most significant barriers to modernization efforts.
Tying It All Together: Model Context Protocol
While the progress we've seen with LLMs and agents has been remarkable, in my mind one of the most valuable additions to our toolbox was the introduction of Anthropic's Model Context Protocol (MCP). This innovation is now serving as the crucial glue between command line agents, Cursor, Windsurf, and external systems in my workflow.
MCP fundamentally transforms what LLMs can do by providing them with a standardized interface to interact with external tools, systems, and data sources. Rather than being limited to their training data, models can now request access to capabilities like file systems, databases, APIs, and computational resources. This protocol enables models to read codebases, execute code, retrieve documentation, and interact with version control systems - all while maintaining a coherent conversation with the developer.
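The shape of that standardized interface is worth seeing. MCP is built on JSON-RPC 2.0: a model lists the tools a server exposes, then calls one by name with structured arguments. The toy dispatcher below is a simplified sketch of that pattern, not the real MCP SDK; it omits transports, schemas, and capability negotiation, and the `read_file` tool is a hypothetical example.

```python
# Simplified sketch of the MCP request/response pattern (JSON-RPC 2.0 style).
# Not the actual SDK; the "read_file" tool is a hypothetical example.
import json

TOOLS = {
    "read_file": lambda args: {"content": f"<contents of {args['path']}>"},
}

def handle(request_json: str) -> str:
    """Dispatch a JSON-RPC-style request from a model and return the response."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        # The model discovers which tools this server exposes.
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif req["method"] == "tools/call":
        # The model invokes one tool by name with structured arguments.
        tool = TOOLS[req["params"]["name"]]
        result = tool(req["params"]["arguments"])
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A model first asks which tools exist, then calls one:
print(handle(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})))
print(handle(json.dumps({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                         "params": {"name": "read_file",
                                    "arguments": {"path": "main.py"}}})))
```

Because every server speaks this same discover-then-call protocol, any MCP-aware client - a CLI agent, Cursor, or Windsurf - can use any server's tools without bespoke integration code.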
In practical terms, this means I can now build seamless workflows where a command line agent reverse-engineers a legacy codebase, passes that understanding to an IDE for modernization work, which then triggers automated tests through external CI/CD pipelines - all orchestrated through natural language. The model becomes the central coordinator of a complex development ecosystem, rather than just an isolated assistant.
The improvements we've seen in just one year only reinforce my conviction that agentic coding is becoming the norm.
Join the discussion on LinkedIn.