Every few years our industry rediscovers something that is apparently about to change everything. Sometimes it does, but more often than not the story is slightly more complicated. Artificial Intelligence is currently enjoying one of those moments. Depending on which article, conference talk, or social media thread you happen to read, AI is either:
- About to replace most human labour, or
- A dangerously overhyped statistical parrot that will collapse under its own limitations.
Possibly both, and before lunchtime!
I’ve lived through enough technology cycles to have a healthy suspicion of narratives that are too tidy! This isn’t the first time a technology has been presented as the inevitable future of software development. In the 1980s and 1990s we had the rise of fourth-generation programming language (4GL) tools. The pitch sounded compelling: programmers would no longer write complicated procedural code; instead, they would describe what they wanted at a higher level and the system would generate the rest.
If that sounds faintly familiar, it should. Before that there were expert systems, neural networks, knowledge engineering environments and a variety of other approaches broadly grouped under Artificial Intelligence. Each wave arrived with impressive demonstrations, confident predictions and, inevitably, disappointment when the reality turned out to be more nuanced. However, that doesn’t mean the ideas were wrong. Many of them eventually became useful once the surrounding technology caught up; it just took longer than the hype suggested.
Because of that history, I’ve tended to approach new developments with a mindset borrowed from the scientific method. It’s not particularly dramatic, but it does have the advantage of working reasonably well over long periods of time. When the recent wave of generative AI systems began appearing, particularly systems based on Large Language Model architectures, I was curious but unconvinced.
Early experiments produced results that were occasionally impressive, occasionally wrong and sometimes spectacularly confident about things that were entirely made up. This meant my initial conclusion was fairly simple: interesting technology, but not yet something I’d rely on. Fast forward a couple of years and things look somewhat different. Tools such as ChatGPT and GitHub Copilot have improved considerably. More importantly, I’ve had enough time to experiment with them in real work.
My conclusion now is slightly more positive, though still not quite aligned with either extreme of the current debate. AI systems can be genuinely useful, and these days I probably use them every day for one thing or another. They can summarise information, help draft text, suggest code and occasionally point out angles you hadn’t considered. Used well, they can accelerate certain kinds of work. However, and this is the important part, they work best when treated as tools rather than substitutes for thinking. If you outsource the thinking part entirely, you’re likely to get exactly what you asked for: something plausible-looking that may or may not actually be correct.
The approach I’ve settled on at the moment is roughly this: think of modern AI systems as unusually capable assistants. They are fast. They have read an enormous amount of material. They can generate surprisingly coherent responses. What they lack is understanding in the human sense, and they have no intrinsic mechanism for determining whether a statement is true or merely statistically likely. Which means the human still has an important role in the loop, preferably the thinking part!
For anyone curious about experimenting with AI without surrendering their critical faculties entirely, a few habits have worked well for me:
- Start with low-risk tasks. Use AI for activities where errors are easy to detect, such as drafts of papers/emails, summaries, brainstorming, or exploratory coding. If it gets something wrong, the cost is low and the lesson is useful.
- Treat outputs as hypotheses. Instead of assuming the result is correct, treat it the same way you would treat any other unverified claim: something that might be right but should be checked. This aligns nicely with the scientific mindset.
- Keep the human review step. Particularly for technical work, always read the output critically. AI-generated code can look convincing while containing subtle mistakes.
- Use it to expand, not replace, thinking. The most useful interactions I’ve had with AI involve asking it to generate alternative approaches, identify trade-offs, or summarise unfamiliar areas. In other words, it’s acting as a catalyst for thought rather than a replacement for it.
- Experiment continuously. The technology is evolving quickly. Something that worked poorly last year may work much better today. Periodic re-evaluation is worthwhile.
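The "treat outputs as hypotheses" and "keep the human review step" habits are easiest to show with a small, entirely hypothetical example. Imagine an assistant suggests the leap-year check below; it looks perfectly plausible, and the habit is simply to test it against known cases before trusting it:

```python
# Hypothetical AI-suggested code: looks convincing, compiles, and is
# right most of the time, but the Gregorian calendar rules are subtler.
def is_leap_year_suggested(year: int) -> bool:
    return year % 4 == 0  # misses the century exceptions

# Treating the suggestion as a hypothesis: write down the full rule
# yourself and check both versions against cases you know the answer to.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

for year, expected in [(2024, True), (1900, False), (2000, True)]:
    assert is_leap_year(year) == expected

# The suggested version fails on 1900, a subtle, convincing mistake
# that a quick read of the code would probably not catch.
assert is_leap_year_suggested(1900) != is_leap_year(1900)
```

The point isn’t this particular bug; it’s that a few deliberate test cases cost almost nothing and catch exactly the kind of error that reads as correct.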
If all of this sounds slightly cautious, that’s the intent. Technologies that genuinely matter tend to follow a pattern: early excitement, inflated expectations, inevitable disappointment and eventually steady integration into everyday practice. We’ve seen it with distributed systems, cloud computing, containers, and many other developments. My suspicion is that AI will follow the same trajectory. The interesting question isn’t whether it will replace human thinking, but rather how effectively we can learn to use it alongside our own. And on that front, keeping your brain engaged still seems like a reasonably good strategy.