
Tech: Solving problems in the post-GenAI world


In my earlier post comparing a chainsaw to a harvester, titled “Charting a course in the post-GenAI world,” I tried to explain how and why GenAI is changing the way we work. While my thoughts were somewhat philosophical, others have written good explorations of how the change might manifest for a particular discipline, such as Lisha Dai’s excellent Medium post on how GenAI changes the design space (https://lishadai.medium.com/alone-in-a-ui-ux-project-spar-with-chatgpt-to-get-some-ideas-bb1fd9a4197c).

However, those explorations sidestep the second question from my earlier post, the one I deferred for later:

Why do developers (designers, product people, and so forth) suddenly need to learn to create GenAI solutions?

I postponed this question because it is much harder to answer than the first one. As we are only seeing the first GenAI-enhanced user experiences, it is hard to grasp, let alone explain, why the technology matters. And why it matters so much that almost every developer should have a basic understanding of when, how, and why to use it.

It was not until I watched Dave Farley’s provocatively titled YouTube video “GitHub Copilot Is Making Elite Developers EVEN BETTER” (https://www.youtube.com/watch?v=GRcpEpVNhRc) that I understood how to explain this. Or at least how to try.

Farley discussed how GenAI tools can make you a lot more, or a lot less, effective. In my words: yes, GitHub Copilot can make a person 15 times faster, but trying to use ChatGPT for the same purpose is likely to make you 15 times slower. And I think it’s fair to say that Copilot and ChatGPT are the same thing under the hood, at least to a degree that matters here.

What, then, creates that difference, if not the AI?

The thing is, Copilot is integrated into a developer’s natural workflow in an unobtrusive manner. It leverages the power of large language models to offer options, ideas, and inline searching for code patterns and syntax. It is, at the same time, easy to use and easy to dismiss. It offers auto-completions, suggestions, and solutions in a context where a GenAI excels.

In contrast, the standard way to use ChatGPT is through the web app or an API. These methods require you to provide the context, the question, and the sandbox where the answer needs to fit. All of that takes work and introduces sources of error. Garbage in, garbage out. This lack of context is likely why many developers have been disappointed with what the AI can offer beyond elementary code samples.
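The contrast is easy to see if you sketch what a raw chat-style API call demands of you. The snippet below is a minimal sketch in Python; the file names and the `build_prompt` helper are hypothetical, but the shape of the payload follows the common chat-completion pattern of a list of role-tagged messages. The point is how much context you must gather and paste in by hand before the model even sees your question.

```python
# A sketch of the manual work behind a chat-style GenAI request.
# File names and the helper are illustrative, not a real library API.

def build_prompt(question: str, code_snippets: dict[str, str]) -> list[dict]:
    """Assemble a chat payload: a system role, hand-picked code, and the question."""
    context = "\n\n".join(
        f"# File: {name}\n{source}" for name, source in code_snippets.items()
    )
    return [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": f"Given this code:\n{context}\n\n{question}"},
    ]

# The developer must decide which files matter, trim them to fit the
# model's context window, and describe the sandbox the answer must fit.
messages = build_prompt(
    "Why does parse_order() return None for empty carts?",
    {"orders.py": "def parse_order(cart): ..."},  # context chosen by hand
)
```

Every step here, selecting the relevant files, trimming them, and framing the question, is work that an inline assistant like Copilot does for you implicitly by reading the open editor buffer.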

A developer is a person who creates solutions and services for the rest of us. Suppose the service could benefit from a tool that increases users’ productivity multiple times when applied correctly or does the inverse when misused. In that case, we, as developers, need to understand the limits and possibilities of such a tool.

No, this does not mean every developer has to become an AI expert. But, as with data structures and persistence, you need to know the ropes to create a solution of any significant size. Should you require complex data handling or lots of GenAI magic, you call in an expert. But you need to learn the basics to know when to call them. And when, figuratively, to “just forget the philosophical debates and save the payload to a flat file instead of a DB.”