Saturday, March 28, 2026

Dark Star and AI Morality

There’s a persistent assumption in enterprise AI conversations that autonomy is simply a function of capability: make the model smarter, give it more data, reduce latency, and eventually you can take the human out of the loop. But that framing misses something fundamental. Autonomy without judgment isn’t progress; it’s risk at scale.

If we’re serious about “no human in the loop” systems, then we need to move beyond accuracy metrics and start thinking in terms of moral reasoning under uncertainty. Not in the abstract philosophical sense, but in a very practical, system-design sense: how does an AI decide not to act? This is where an unlikely source of inspiration comes in: the 1974 film Dark Star. In one of its most memorable scenes, Pinback attempts to disarm an intelligent bomb by teaching it phenomenology, essentially guiding it through a process of self-awareness and doubt. The goal isn’t to override the bomb’s logic, but to expand it, to introduce the idea that its perception of reality, and therefore its decision to detonate, might be flawed. (Yes, I know, it doesn't quite work out the way Pinback expected!)

That’s a useful metaphor for where we are with AI today. Most AI systems are optimised for decisiveness. Given an input, produce an output. Given ambiguity, resolve it probabilistically. Given uncertainty, infer. This works well in bounded domains, but it breaks down in open systems where the cost of a wrong decision is asymmetric or irreversible. In those cases, the correct behaviour is often deferral, or even deliberate inaction. But inaction is not a natural outcome of most AI architectures. It has to be designed in.

In distributed systems, we’ve long understood the value of back-pressure, circuit breakers and fail-safe modes. When the system is under stress or operating outside known parameters, the right answer is to slow down, degrade gracefully, or stop. We don’t treat this as failure; we treat it as resilience.
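
For anyone less familiar with those patterns, here is a minimal sketch of a circuit breaker in Python (the class and parameter names are my own illustration, not taken from any particular library): after repeated failures, the breaker "opens" and the system fails fast for a cool-off period instead of continuing to push load downstream.

    import time

    class CircuitBreaker:
        """Minimal circuit breaker: after too many consecutive failures,
        stop calling the downstream dependency for a cool-off period."""

        def __init__(self, failure_threshold=5, reset_timeout=30.0):
            self.failure_threshold = failure_threshold
            self.reset_timeout = reset_timeout
            self.failures = 0
            self.opened_at = None  # None means closed, i.e. normal operation

        def call(self, fn, *args, **kwargs):
            # While open, refuse immediately until the cool-off has elapsed.
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_timeout:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None  # half-open: allow one trial call
                self.failures = 0
            try:
                result = fn(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.failure_threshold:
                    self.opened_at = time.monotonic()  # trip the breaker
                raise
            self.failures = 0  # a success resets the failure count
            return result

The specific thresholds are beside the point; what matters is that "stop calling the thing" is an explicit, designed-for state rather than an accident.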

AI systems need an equivalent. Teaching an AI “morals” doesn’t mean encoding a fixed set of ethical rules. That approach doesn’t scale and it doesn’t generalise. Instead, it means equipping systems with mechanisms to recognise the limits of their own understanding. Confidence thresholds, uncertainty quantification and contextual awareness are part of this, but they’re not enough on their own.

What’s missing is a first-class concept of epistemic humility. An AI system should be able to reason along the lines of: “Given what I know and given the potential impact of being wrong, the optimal action is to abstain.” That abstention might manifest as escalation, a request for additional data, or simply a refusal to act. Critically, it must be treated as a valid and expected outcome, not an edge case.
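
To make that slightly more concrete, here is an illustrative sketch in Python (the outcome names, thresholds and cost figures are invented purely for the example) of a decision function in which abstention and escalation are first-class outcomes: the system acts only when the expected cost of acting is lower than the cost of deferring.

    from enum import Enum

    class Outcome(Enum):
        ACT = "act"
        ESCALATE = "escalate_to_human"
        ABSTAIN = "abstain"

    def decide(confidence: float, cost_if_wrong: float,
               cost_of_delay: float = 1.0) -> Outcome:
        """Choose an action by comparing expected costs, not just confidence.

        confidence    -- estimated probability of being correct (0..1)
        cost_if_wrong -- impact of acting incorrectly (same units as cost_of_delay)
        cost_of_delay -- cost of not acting now (escalating or waiting)
        """
        expected_cost_of_acting = (1.0 - confidence) * cost_if_wrong

        # Acting is only justified when its expected cost beats simply waiting.
        if expected_cost_of_acting < cost_of_delay:
            return Outcome.ACT
        # High-impact, uncertain decisions go to a human rather than being dropped.
        if cost_if_wrong >= 10 * cost_of_delay:
            return Outcome.ESCALATE
        return Outcome.ABSTAIN

    # A low-stakes decision proceeds; a high-stakes, uncertain one is escalated.
    print(decide(confidence=0.95, cost_if_wrong=5))    # Outcome.ACT
    print(decide(confidence=0.80, cost_if_wrong=100))  # Outcome.ESCALATE

The exact policy matters far less than the shape of the interface: callers have to handle ESCALATE and ABSTAIN as normal results, not as exceptions.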

This has architectural implications. It means designing workflows where “no decision” is explicitly modelled. It means integrating AI components into systems that can absorb and respond to uncertainty, rather than forcing premature resolution. Furthermore, it means aligning incentives, both technical and organisational, so that correctness is valued over throughput. There’s also a cultural dimension. In many enterprises, decisiveness is rewarded. Systems that hesitate are seen as inefficient. However, as AI takes on more consequential roles, we need to recalibrate that instinct: the cost of a wrong automated decision can far exceed the cost of a delayed one.

The Dark Star analogy is instructive because it highlights a shift from control to understanding. Pinback doesn’t try to out-logic the bomb; he tries to expand its frame of reference. In doing so, he introduces doubt, not as a weakness, but as a safeguard. (And Doolittle manages to surf!)

That’s the direction we should be heading. If we want AI systems that can operate safely without constant human oversight, we need to teach them not just how to decide, but when not to. In a world of increasing autonomy, restraint isn’t a limitation, it’s a capability. And in many cases, it may be the most important one we build.

Leadership at the Edge of the Unknown

There are many ways to run a team. Management textbooks will happily offer frameworks, acronyms, and laminated flowcharts that promise efficiency and alignment. After forty years in the technology industry, long enough to see mainframes give way to cloud clusters and paper tape give way to neural networks, I have learned that leadership rarely fits neatly into diagrams.

Oddly enough, some of the most enduring lessons about leadership come not from boardrooms, but from the bridge of a fictional starship. For many of us who grew up reading Arthur C. Clarke, Robert Heinlein, and Isaac Asimov, science fiction was never simply escapism. It was a laboratory for ideas about civilisation. If you know me at all, and if you're in my organisations at work, you'll hopefully know that I'm a big Star Trek fan. Among the most compelling of those laboratories was Star Trek, particularly the original series. Beneath the coloured lights and improbable alien prosthetics, it presented a remarkably thoughtful model of leadership. At the centre of that model stood Captain James T. Kirk. (The Shatner version, not any other!)

The defining quality of Kirk’s leadership was not command authority. Plenty of captains possess that. His defining quality was responsibility. Kirk understood that leadership meant standing at the boundary between uncertainty and decision. He listened carefully to Spock’s logic, McCoy’s humanity and Scotty’s engineering realism, but ultimately the decision was his and, crucially, he owned the consequences.

In modern technical teams, especially in software and research environments, this quality is often undervalued. Teams can fall into endless consensus loops or hide behind process. Yet progress, whether building distributed systems or probing the physics of the early universe, often requires someone willing to say: this is the direction we take. Kirk rarely acted without input. But he never abdicated judgment and that distinction matters.

A team leader who behaves like Kirk does not suppress expertise; he amplifies it. Spock’s analysis is sharper because Kirk values it. McCoy’s objections are heard because Kirk knows emotional intelligence is not weakness but a complementary form of insight. What emerges is not hierarchy for its own sake, but a kind of dynamic equilibrium. Logic. Emotion. Experience. Judgment. The Enterprise bridge worked because all four were present. (Don't get me started on any of the more recent Star Trek series!)

One of the quiet revolutions of Star Trek was its assumption that diversity was normal. The Enterprise crew included people from different nations, cultures, and even species. What was striking at the time was that the show rarely treated this as remarkable. It simply worked. The mission mattered more than the differences. In a modern technical organisation, the same principle applies. A strong team is rarely homogeneous: it includes the analytical mathematician, the pragmatic engineer, the imaginative designer, the cautious tester. Each sees the system from a different vantage point. Remove any one of them and blind spots appear.

Kirk’s genius was not that he personally possessed every skill. It was that he trusted people who did. When Scotty said the engines could not take any more, Kirk knew it was probably true, even if Scotty would eventually find a way to bend physics. When Spock delivered an uncomfortable logical conclusion, Kirk listened carefully, even if his instincts told him otherwise. Trust is not a sentimental quality in engineering teams, it is a structural necessity.

Large software systems resemble starships more than factories. They are intricate, interdependent, and constantly evolving. A change in one subsystem can cascade unpredictably through the rest. Anyone who has watched a microservices deployment spiral into chaos will understand the analogy: in such environments, indecision can be more damaging than a wrong decision. Kirk understood this instinctively. When facing unknown phenomena, such as time distortions, hostile intelligences, collapsing stars, he gathered information quickly and then acted. He was not reckless: he simply understood that paralysis is the enemy of exploration.

In real-world teams, especially those working with emerging technologies such as artificial intelligence or large-scale distributed infrastructure, the same lesson applies. Perfect knowledge is unattainable and leaders must decide with incomplete information and then adapt.

The original Star Trek series appeared decades before modern machine learning, yet it raised questions about intelligent machines with surprising foresight. The Enterprise computer was powerful, but it never replaced the crew. That distinction is becoming increasingly relevant. Today we work alongside sophisticated AI systems that can analyse code, detect patterns in astronomical data, generate documentation and even assist with design. Used properly, they are extraordinary tools. They extend human capability much as the Enterprise sensors extended the crew’s awareness but they are still just tools.

Kirk’s leadership offers a subtle lesson here: throughout the series, the Enterprise frequently encountered computer-controlled societies, systems where decisions had been delegated entirely to machines. Inevitably, something had gone wrong. The machines followed rules perfectly but failed to understand context. (Yes, OK, Kirk's approach often involved persuading the AI systems to self-destruct!)

For modern teams, the same principle applies. AI can help us explore vast search spaces, optimise architectures and process loads of data. But the final decision, the moral and strategic direction, must remain human. Leadership requires responsibility and responsibility cannot be outsourced to an algorithm.

Perhaps the most important aspect of Kirk’s leadership was his recognition that people are not components: they are explorers. The best teams are not assembled merely for efficiency; they are bound together by shared curiosity. They want to discover something new, whether that is a more elegant algorithm, a better distributed system, or a deeper understanding of the cosmos. Kirk inspired that curiosity. He did not command the Enterprise crew merely through rank, he commanded them because they believed in the mission.

There is a moment repeated many times in the series when the ship approaches an unknown region of space. Sensors show anomalies. The risk is obvious. Kirk pauses for a moment, then he gives the order to proceed. In many ways, that is the essence of leadership in science and technology: someone must be willing to move forward into uncertainty, not blindly, but with courage informed by expertise. The leader does not eliminate risk, he acknowledges it, weighs it, and then says: Engage.

In the coming decades, our teams will increasingly work alongside artificial intelligence, autonomous systems and technologies that today sound almost like science fiction. The complexity of these systems will only grow, yet the essential qualities of leadership will remain remarkably unchanged. Listen to your experts. Encourage diversity of thought. Decide when necessary. Use machines as tools, not masters. And above all, remember that exploration, whether across galaxies or across the landscape of human knowledge, is ultimately a human endeavour. As someone once said: "The human adventure is just beginning."

AI, Hype Cycles and the Importance of Keeping Your Brain Engaged

Every few years our industry rediscovers something that is apparently about to change everything. Sometimes it does, but more often than not the story is slightly more complicated. Artificial Intelligence is currently enjoying one of those moments. Depending on which article, conference talk, or social media thread you happen to read, AI is either:

  • About to replace most human labour, or
  • A dangerously overhyped statistical parrot that will collapse under its own limitations.

Possibly both, and before lunchtime!

I’ve lived through enough technology cycles to have a healthy suspicion of narratives that are too tidy! This isn’t the first time a technology has been presented as the inevitable future of software development. In the 1980s and 1990s we had the rise of fourth-generation programming language (4GL) tools. The pitch sounded compelling: programmers would no longer write complicated procedural code; instead, they would describe what they wanted at a higher level and the system would generate the rest.

If that sounds faintly familiar, it should. Before that there were expert systems, neural networks, knowledge engineering environments and a variety of other approaches broadly grouped under Artificial Intelligence. Each wave arrived with impressive demonstrations, confident predictions and, inevitably, disappointment when the reality turned out to be more nuanced. However, that doesn’t mean the ideas were wrong. Many of them eventually became useful once the surrounding technology caught up; it just took longer than the hype suggested.

Because of that history, I’ve tended to approach new developments with a mindset borrowed from the scientific method. It’s not particularly dramatic, but it does have the advantage of working reasonably well over long periods of time. When the recent wave of generative AI systems began appearing, particularly systems based on Large Language Model (LLM) architectures, I was curious but unconvinced.

Early experiments produced results that were occasionally impressive, occasionally wrong and sometimes spectacularly confident about things that were entirely made up. This meant my initial conclusion was fairly simple: interesting technology, but not yet something I’d rely on. Fast forward a couple of years and things look somewhat different. Tools such as ChatGPT and GitHub Copilot have improved considerably. More importantly, I’ve had enough time to experiment with them in real work.

My conclusion now is slightly more positive, though still not quite aligned with either extreme of the current debate. AI systems can be genuinely useful and I'm probably using them every day now, for one thing or another. They can summarise information, help draft text, suggest code and occasionally point out angles you hadn’t considered. Used well, they can accelerate certain kinds of work. However, and this is the important part, they work best when treated as tools rather than substitutes for thinking. If you outsource the thinking part entirely, you’re likely to get exactly what you asked for: something plausible-looking that may or may not actually be correct.

The approach I’ve settled on at the moment is roughly this: think of modern AI systems as unusually capable assistants. They are fast. They have read an enormous amount of material. They can generate surprisingly coherent responses. What they lack is understanding in the human sense, and they have no intrinsic mechanism for determining whether a statement is true or merely statistically likely. Which means the human still has an important role in the loop, and preferably the thinking part!

For anyone curious about experimenting with AI without surrendering their critical faculties entirely, a few habits have worked well for me:

  1. Start with low-risk tasks. Use AI for activities where errors are easy to detect, such as drafts of papers/emails, summaries, brainstorming, or exploratory coding. If it gets something wrong, the cost is low and the lesson is useful.
  2. Treat outputs as hypotheses. Instead of assuming the result is correct, treat it the same way you would treat any other unverified claim: something that might be right but should be checked (there's a small sketch of this after the list). This aligns nicely with the scientific mindset.
  3. Keep the human review step. Particularly for technical work, always read the output critically. AI-generated code can look convincing while containing subtle mistakes.
  4. Use it to expand, not replace, thinking. The most useful interactions I’ve had with AI involve asking it to generate alternative approaches, identify trade-offs, or summarise unfamiliar areas. In other words, it’s acting as a catalyst for thought rather than a replacement for it.
  5. Experiment continuously. The technology is evolving quickly. Something that worked poorly last year may work much better today. Periodic re-evaluation is worthwhile.
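
To make the "hypothesis" habit a little more concrete, here is a toy sketch (the helper function and the dates are invented purely for the example): before an assistant-suggested routine goes anywhere near real code, pin it down against a few answers you already know.

    from datetime import date, timedelta

    def business_days_between(start: date, end: date) -> int:
        """An assistant-suggested helper, treated here as an unverified hypothesis."""
        # Count weekdays (Mon-Fri) in the half-open range [start, end).
        return sum(
            1
            for offset in range((end - start).days)
            if (start + timedelta(days=offset)).weekday() < 5
        )

    # Pin the hypothesis down against cases whose answers are known independently.
    assert business_days_between(date(2026, 3, 16), date(2026, 3, 21)) == 5  # Mon..Fri
    assert business_days_between(date(2026, 3, 21), date(2026, 3, 23)) == 0  # weekend only
    assert business_days_between(date(2026, 3, 16), date(2026, 3, 16)) == 0  # empty range
    print("Hypothesis survives these checks; human review still continues.")

The asserts don't prove the function is correct, of course; they simply move it from "looks plausible" to "hasn't been caught out yet", which is exactly where the human review step takes over.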

If all of this sounds slightly cautious, that’s the intent. Technologies that genuinely matter tend to follow a pattern: early excitement, inflated expectations, inevitable disappointment and eventually steady integration into everyday practice. We’ve seen it with distributed systems, cloud computing, containers, and many other developments. My suspicion is that AI will follow the same trajectory. The interesting question isn’t whether it will replace human thinking, but how effectively we can learn to use it alongside our own. And on that front, keeping your brain engaged still seems like a reasonably good strategy.

Sunday, March 15, 2026

Hedley, Hedy and the Frequency of Being Forgotten

I'm going to try to blog a bit more than I have in recent years. Let's see how long that lasts :)

With that said, every so often I’m reminded that history has a curious way of simplifying people. It tends to compress them into a single label that’s easy to remember and easier to repeat. Unfortunately, that compression often throws away the more interesting parts. A recent example that crossed my mind again is Hedy Lamarr.

But before getting there, I can’t resist a short (related) diversion. Anyone who has seen Blazing Saddles will remember the villainous politician Hedley Lamarr, played by Harvey Korman. The running joke throughout the film is his exasperation whenever someone calls him “Hedy”, to which he always replies: “That’s Hedley!”

The joke works because everyone immediately thinks of the Hollywood actress, Hedy Lamarr. The film plays the misunderstanding purely for laughs. However, the irony is that the real Hedy Lamarr is misunderstood in a far more interesting way, something I missed entirely when I first watched Blazing Saddles in the early '80s because I assumed she was "just" an actress. Most people remember her only as a film star from Hollywood’s golden era. She appeared in many films and for a time she was widely described as one of the most beautiful women in the world. That’s the version of the story that tends to survive, but it’s not the whole story. When I finally learned her full story in the late '90s, I was genuinely impressed.

Away from the film sets, Lamarr had a strong interest in engineering and invention. During World War II, she collaborated with composer George Antheil on a system intended to prevent radio-controlled torpedoes from being jammed. The basic idea was what we now call Frequency-Hopping Spread Spectrum (FHSS). In simple terms, instead of transmitting a signal on a single radio frequency, where an enemy could easily disrupt it, the signal rapidly switches among many frequencies according to a shared pattern. If you know the pattern, you can follow the signal. If you don’t, you mostly hear noise.
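
As a rough illustration of the principle, here's a toy sketch in Python (nothing like how a real radio implements it, and the channel count and seeds are invented for the example): if transmitter and receiver share a secret seed, each can derive the same pseudo-random hop schedule independently, while anyone without the seed sees the signal wander apparently at random across the band.

    import random

    CHANNELS = list(range(80))  # say, 80 notional channels across the band

    def hop_sequence(shared_seed, hops):
        """Derive a pseudo-random channel schedule from a shared secret seed."""
        rng = random.Random(shared_seed)
        return [rng.choice(CHANNELS) for _ in range(hops)]

    # Transmitter and receiver derive the same schedule independently...
    tx_schedule = hop_sequence(shared_seed=42, hops=10)
    rx_schedule = hop_sequence(shared_seed=42, hops=10)
    assert tx_schedule == rx_schedule  # ...so the receiver always knows where to listen.

    # Someone without the seed lands on the wrong channel almost every hop,
    # so the transmission just sounds like scattered noise.
    eavesdropper = hop_sequence(shared_seed=7, hops=10)
    matched = sum(a == b for a, b in zip(tx_schedule, eavesdropper))
    print(f"hop schedule: {tx_schedule}")
    print(f"eavesdropper guessed the right channel on {matched} of 10 hops")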

In 1942 they were granted a patent for the idea. At the time, the technology needed to implement it practically wasn’t quite there. The design even proposed using synchronised mechanisms inspired by player pianos to coordinate the frequency changes. Ingenious, but perhaps a little ahead of the available hardware. The US Navy politely filed the patent away and everyone moved on to other things. Decades later, however, the principle became foundational for modern wireless communications. Technologies such as Spread Spectrum Communication underpin things we now take entirely for granted, including Wi-Fi, Bluetooth and parts of CDMA cellular systems.

So why is Lamarr’s contribution often overlooked? Part of it is timing. The patent wasn’t widely used until decades later, long after the moment when the story could have been told differently. Part of it is categorisation. Once someone has been placed firmly in one box (“Hollywood actress” in her case) it’s surprisingly difficult for the historical narrative to accommodate a second identity. And part of it is that innovation rarely happens in isolation. Ideas evolve, get refined and are rediscovered by later engineers. Along the way attribution becomes diffuse. But occasionally it’s worth pausing and remembering the earlier step in that chain.

Since I first learned about it, I’ve found Lamarr’s story interesting not only because of the invention itself, but because it highlights something about how we think about expertise: we tend to assume that people operate in neat domains, where engineers design systems, artists create art and, of course, actors act. Unsurprisingly, reality is a bit messier.

Lamarr happened to be all three things: an actress, an inventor, and someone with a genuine curiosity about how things worked. The world often remembers the first label and quietly drops the others, which is a pity, because the second one might be her more interesting legacy.

Before I sign off, let's take a quick trip back to Hedley. The next time you watch Blazing Saddles (thoroughly recommended if you've not seen it yet!) and hear the immortal protest: “That’s Hedley, not Hedy!” …it might be worth remembering that the real Hedy Lamarr deserves to be remembered for considerably more than the joke. Not bad for someone history insists on filing under “actress”.