Saturday, March 28, 2026

AI, Hype Cycles and the Importance of Keeping Your Brain Engaged

Every few years our industry rediscovers something that is apparently about to change everything. Sometimes it does, but more often than not the story is slightly more complicated. Artificial Intelligence is currently enjoying one of those moments. Depending on which article, conference talk, or social media thread you happen to read, AI is either:

  • About to replace most human labour, or
  • A dangerously overhyped statistical parrot that will collapse under its own limitations.

Possibly both, and before lunchtime!

I’ve lived through enough technology cycles to have a healthy suspicion of narratives that are too tidy! This isn’t the first time a technology has been presented as the inevitable future of software development. In the 1980s and 1990s we had the rise of fourth-generation programming language (4GL) tools. The pitch sounded compelling: programmers would no longer write complicated procedural code; instead, they would describe what they wanted at a higher level and the system would generate the rest.

If that sounds faintly familiar, it should. Before that there were expert systems, neural networks, knowledge engineering environments and a variety of other approaches broadly grouped under Artificial Intelligence. Each wave arrived with impressive demonstrations, confident predictions and, inevitably, disappointment when the reality turned out to be more nuanced. However, that doesn’t mean the ideas were wrong. Many of them eventually became useful once the surrounding technology caught up; it just took longer than the hype suggested.

Because of that history, I’ve tended to approach new developments with a mindset borrowed from the scientific method. It’s not particularly dramatic, but it does have the advantage of working reasonably well over long periods of time. When the recent wave of generative AI systems began appearing, particularly systems based on Large Language Model architectures, I was curious but unconvinced.

Early experiments produced results that were occasionally impressive, occasionally wrong and sometimes spectacularly confident about things that were entirely made up. This meant my initial conclusion was fairly simple: interesting technology, but not yet something I’d rely on. Fast forward a couple of years and things look somewhat different. Tools such as ChatGPT and GitHub Copilot have improved considerably. More importantly, I’ve had enough time to experiment with them in real work.

My conclusion now is slightly more positive, though still not quite aligned with either extreme of the current debate. AI systems can be genuinely useful and I'm probably using them every day now, for one thing or another. They can summarise information, help draft text, suggest code and occasionally point out angles you hadn’t considered. Used well, they can accelerate certain kinds of work. However, and this is the important part, they work best when treated as tools rather than substitutes for thinking. If you outsource the thinking part entirely, you’re likely to get exactly what you asked for: something plausible-looking that may or may not actually be correct.

The approach I’ve settled on at the moment is roughly this: think of modern AI systems as unusually capable assistants. They are fast. They have read an enormous amount of material. They can generate surprisingly coherent responses. What they lack is understanding in the human sense, and they have no intrinsic mechanism for determining whether a statement is true or merely statistically likely. Which means the human still has an important role in the loop, preferably the thinking part!

For anyone curious about experimenting with AI without surrendering their critical faculties entirely, a few habits have worked well for me:

  1. Start with low-risk tasks. Use AI for activities where errors are easy to detect, such as drafts of papers/emails, summaries, brainstorming, or exploratory coding. If it gets something wrong, the cost is low and the lesson is useful.
  2. Treat outputs as hypotheses. Instead of assuming the result is correct, treat it the same way you would treat any other unverified claim: something that might be right but should be checked. This aligns nicely with the scientific mindset.
  3. Keep the human review step. Particularly for technical work, always read the output critically. AI-generated code can look convincing while containing subtle mistakes.
  4. Use it to expand, not replace, thinking. The most useful interactions I’ve had with AI involve asking it to generate alternative approaches, identify trade-offs, or summarise unfamiliar areas. In other words, it’s acting as a catalyst for thought rather than a replacement for it.
  5. Experiment continuously. The technology is evolving quickly. Something that worked poorly last year may work much better today. Periodic re-evaluation is worthwhile.

If all of this sounds slightly cautious, that’s the intent. Technologies that genuinely matter tend to follow a pattern: early excitement, inflated expectations, inevitable disappointment and eventually steady integration into everyday practice. We’ve seen it with distributed systems, cloud computing, containers, and many other developments. My suspicion is that AI will follow the same trajectory. The interesting question isn’t whether it will replace human thinking, but how effectively we can learn to use it alongside our own. And on that front, keeping your brain engaged still seems like a reasonably good strategy.

Sunday, March 15, 2026

Hedley, Hedy and the Frequency of Being Forgotten

I'm going to try to blog a bit more than I have in recent years. Let's see how long that lasts :)

With that said, every so often I’m reminded that history has a curious way of simplifying people. It tends to compress them into a single label that’s easy to remember and easier to repeat. Unfortunately, that compression often throws away the more interesting parts. A recent example that crossed my mind again is Hedy Lamarr.

But before getting there, I can’t resist a short (related) diversion. Anyone who has seen Blazing Saddles will remember the villainous politician Hedley Lamarr, played by Harvey Korman. The running joke throughout the film is his exasperation whenever someone calls him “Hedy”, to which he always replies: “That’s Hedley!”

The joke works because everyone immediately thinks of the Hollywood actress, Hedy Lamarr. The film plays the misunderstanding purely for laughs. The irony, however, is that the real Hedy Lamarr is misunderstood in a far more interesting way, one which I missed when I first watched Blazing Saddles in the early 1980s because I assumed she was "just" an actress. Most people remember her only as a film star from Hollywood’s golden era. She appeared in many films and for a time she was widely described as one of the most beautiful women in the world. That’s the version of the story that tends to survive, but it’s not the whole story. When I finally learned her full story in the late 1990s, I was truly impressed.

Away from the film sets, Lamarr had a strong interest in engineering and invention. During World War II, she collaborated with composer George Antheil on a system intended to prevent radio-controlled torpedoes from being jammed. The basic idea was what we now call Frequency-Hopping Spread Spectrum (FHSS). In simple terms, instead of transmitting a signal on a single radio frequency, where an enemy could easily disrupt it, the signal rapidly switches among many frequencies according to a shared pattern. If you know the pattern, you can follow the signal. If you don’t, you mostly hear noise.
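The shared-pattern idea is small enough to sketch in a few lines of Python. To be clear, this is a toy illustration of the principle, not the 1942 design: the seed here stands in for the hop pattern that the original patent encoded mechanically, and the 88 channels are a nod to the 88 frequencies (one per piano key) the patent proposed.

```python
import random

def hop_sequence(seed, n_channels, length):
    """Derive a pseudo-random hop pattern from a shared seed.
    Transmitter and receiver that share the seed derive the
    same channel for every time slot."""
    rng = random.Random(seed)
    return [rng.randrange(n_channels) for _ in range(length)]

# Transmitter and receiver share the secret seed, so they stay
# synchronised and can follow the signal slot by slot.
tx_hops = hop_sequence(seed=42, n_channels=88, length=10)
rx_hops = hop_sequence(seed=42, n_channels=88, length=10)
assert tx_hops == rx_hops  # perfectly aligned

# A jammer without the seed guesses a different pattern and only
# lands on the right channel by chance (roughly 1 slot in 88).
jam_hops = hop_sequence(seed=7, n_channels=88, length=10)
hits = sum(t == j for t, j in zip(tx_hops, jam_hops))
print(f"jammer hit {hits} of {len(tx_hops)} slots")
```

Knowing the seed is everything: the receiver hears a clean signal, while anyone else sees what is effectively noise smeared across all the channels.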

In 1942 they were granted a patent for the idea. At the time, the technology needed to implement it practically wasn’t quite there. The design even proposed using synchronised mechanisms inspired by player pianos to coordinate the frequency changes. Ingenious, but perhaps a little ahead of the available hardware. The US Navy politely filed the patent away and everyone moved on to other things. Decades later, however, the principle became foundational for modern wireless communications. Spread-spectrum techniques underpin things we now take entirely for granted, including Wi-Fi, Bluetooth and parts of CDMA cellular systems.

So why is Lamarr’s contribution often overlooked? Part of it is timing. The patent wasn’t widely used until decades later, long after the moment when the story could have been told differently. Part of it is categorisation. Once someone has been placed firmly in one box (“Hollywood actress” in her case) it’s surprisingly difficult for the historical narrative to accommodate a second identity. And part of it is that innovation rarely happens in isolation. Ideas evolve, get refined and are rediscovered by later engineers. Along the way attribution becomes diffuse. But occasionally it’s worth pausing and remembering the earlier step in that chain.

Since I first learned about it, I’ve found Lamarr’s story interesting not only because of the invention itself, but because it highlights something about how we think about expertise: we tend to assume that people operate in neat domains, where engineers design systems, artists create art and, of course, actors act. Unsurprisingly, reality is a bit messier.

Lamarr happened to be all three things: an actress, an inventor, and someone with a genuine curiosity about how things worked. The world often remembers the first label and quietly drops the others, which is a pity, because the second one might be her more interesting legacy.

Before I sign off, let's take a quick trip back to Hedley. The next time you watch Blazing Saddles (thoroughly recommended if you've not seen it so far!) and hear the immortal protest: “That’s Hedley, not Hedy!” …it might be worth remembering that the real Hedy Lamarr deserves to be remembered for considerably more than the joke. Not bad for someone history insists on filing under “actress”.

Friday, December 05, 2025

Good Neighbours UK

I've not done this before, but I'm posting this on behalf of one of my sons. It's a worthy cause.

"I'm working with Good Neighbours UK where they have just started a small matched-fundraising effort and I wanted to share it here in case you'd like to help.

 

We’re working with our Zambia team to build proper classrooms for a rural school where 335 children are currently learning in makeshift shelters. For this campaign, every pound donated will be doubled, so even small amounts go a really long way.

 

Here’s the link if you’d like to support (or even just share it):

https://donate.biggive.org/campaign/a05WS000006nxgFYAQ


Any support is greatly appreciated so thank you."




Friday, March 21, 2025

Red Hat Middleware moving to IBM

It's been a while since I wrote anything on my blog. I'm not even sure if blogging is a thing anymore! Maybe I should be recording a short TikTok video or overlaying some commentary on a cute kitten YouTube video. However, until I figure that out, I'll stick with text and blogging. Fortunately, I also have something important to write about, which hopefully makes the long gap between my previous entry and this one worth the wait: the Red Hat Middleware move to IBM.


If you're not already aware, then you can read about the general idea in this article. In summary, after almost two decades of a successful enterprise Middleware strategy and portfolio, it has been decided that our future is best served by moving to IBM and merging with their Java middleware teams. On the one hand, you might think that this was inevitable given the IBM acquisition of Red Hat, but on the other you would be forgiven for asking, didn't IBM do this in the opposite direction in 2020? In this article, I want to try to address these thoughts as well as what I actually think about this and the future of my team. It's important that I point out at this stage that everything I'm writing here is my personal opinion and not any official statement on behalf of Red Hat.


On the subject of whether or not this merger is inevitable, I agree. I've been acquired a number of times during my career and the fact that Red Hat has managed to retain so much independence for so long is a surprise to me. I remember talking with a few other Red Hat friends and colleagues in 2019 after the acquisition was announced and we all agreed that IBM would fold Red Hat into itself within a couple of years. Five years later and we were wrong, though clearly we've seen some fluidity between what products are the responsibility of Red Hat versus IBM, e.g., PAM/DM and Storage. Therefore, I do think it was inevitable that, at some stage, someone at Red Hat or IBM would turn their attention towards the two middleware businesses.


This brings me to the next point: didn't IBM do this already? Kind of. In 2020 IBM decided to move some of their Java middleware expertise to Red Hat, such as AdoptOpenJDK, Kruize Autotune, Node.js and developer productivity, as well as making the business decision that the Red Hat Build of OpenJDK would be the preferred OpenJDK/Hotspot distribution for IBM customers. However, they retained a significant business with technologies like WebSphere, Liberty and J9. Along with these, IBM retained its own strategy for enterprise Java which was distinct from that which Red Hat had created.


Alright, let's get to the important questions: why now and what do I think about it?


On the first question, I think it's a combination of things. I can't talk about them all, but I can mention some of the more important things that drove the decision. There's the fact that Red Hat wants to focus more on hybrid cloud and open source AI, which has meant reduced investment in middleware, as someone reported in El Reg last year. I need to be clear here: focus is not a bad thing and the open source enterprise software landscape is just getting larger and larger; even a company with pockets as deep as IBM cannot be the master of everything.


It's also precisely because of this need to focus that merging the two Java middleware efforts makes sense: better together! As Red Hat works more and more in the AI space, the emphasis is around Python, which to enterprise Java developers is often difficult to grapple with when they have such large Java-based applications already deployed. IBM recognises the need to bridge AI and Java but cannot do this by relying solely on its own middleware offerings. When you look at what Red Hat Middleware has to offer, the answer is pretty obvious: combine and innovate together!


And I suppose that then leads us to the second question: what do I think about it? Recently I hosted my annual Engineering Kickoff meeting, where I gathered about 50 senior managers and individual contributors from my organisation in the same place. We also hosted an All Hands meeting which was attended remotely by many hundreds of people! Of course, that question came up, more or less. Therefore, I'll try to answer it in the way I did in those meetings.


For a start, I do believe that, on paper, combining the teams makes perfect sense. IBM is one of the key companies that originally helped to popularise Java. This is in no way meant to ignore the great work Sun Microsystems did in creating and pushing Java. Still, I think if it hadn't been for companies such as IBM, HP, DEC and yes, even Microsoft at the time, Java wouldn't be the dominant language for developers we see today. It may be hard for many to cast their minds back to the late 1990s, but back then it was these companies that dominated the enterprise market prior to the advent of the Web, and even for the first few years after it came to life.


Whatever you might think about the pros and cons of things like messaging systems, databases, transactions, DCE, CORBA, J2EE and Web Services, IBM was a dominant force in them all at the time and into the early 2000s. It's had its ups and downs over the years and I'll leave it to more involved people to talk about them, but overall I'd say that IBM has a rich history in the Java and Middleware space. Furthermore, many of IBM's products today are built on Java, so they definitely understand its relevance.


Now I also like to think that Red Hat and all those companies it acquired over the years, such as JBoss, FuseSource, 3scale and FeedHenry, have a lot of history in many of these same areas. I'd also say that, without question, our open source influence has been at least equally prolific and influential. Prior to the IBM acquisition in 2019, we also had a growing business taking on IBM, so there's always been some good rivalry there.


Looking at things objectively and, as I said earlier, perhaps as a paper exercise, combining two related businesses and teams into one makes sense. I know many of these groups and individuals have worked together for years in upstream communities. On a personal note, I count a number of IBMers as friends I've known for decades. None of us are “evil” and everyone at Red Hat and IBM wants to do the best for our communities and partners.


However, many "on paper" exercises don't turn into good reality and in this case, there remain a number of potential stumbling blocks. Let's quickly address the overlaps because I don't really believe these are that significant in the grand scheme of things. Over the last 5 years, both IBM and Red Hat have collaborated more and more on sharing efforts on innovation and products. Several of the IBM products embed Red Hat Middleware products or components. IBM uses the Red Hat Build of OpenJDK where J9 isn't an option. IBM uses a number of our Eclipse MicroProfile offerings in its own products.


I'm not going to suggest that there aren't going to be discussions around future product strategy. There are obvious questions to be answered, such as where do EAP and Liberty come together, or what about OpenJDK/Hotspot and J9? What about AMQ and MQ Series or various Apache Kafka efforts? What about Visual Studio Code Java tooling and Eclipse Java tooling? As mentioned in Matt’s announcement, Red Hat customers will continue to buy our existing products from Red Hat and partners, including getting new features, fixes, etc. Everything will remain open source, in the same communities and with the same passion the teams have brought over the years. The future planning around cloud-native Java, including Quarkus, will need to take all of this into account.


This leads us very neatly to where I do see problems and we all have to work seriously to alleviate them. What I'm about to outline may well be just perception on the part of my team, but it will directly impact the success of any such endeavour.


Our two cultures are very different, even after 5 years of being part of IBM. As some of my extended team have pointed out, even the culture within my organisation is different from the wider Red Hat culture. That difference is seen by the associates in my organisation as a positive and important thing to preserve. I'm not just talking about the different ways in which Red Hat and IBM approach open source. Rightly or wrongly, there is a strong belief that IBM has a very top-down approach to strategy and business, whereas in Red Hat, it has largely been driven the other way. Some of this is definitely a result of Red Hat's roots and a community-first mentality. Still, our engineering and business leaders stay in place for a long time compared to their IBM counterparts and that helps to build strong trust relationships internally and externally. Amongst other things, this leads to a very autonomous approach in my organisation/business for driving strategy and delivery.


As I told my entire organisation, our culture doesn't come as a right because we are Red Hat: we all shape it, nurture it and allow it to evolve… or decay. Therefore, if our culture is important to us, it's something we can take with us and fight for wherever we end up. I've discussed this within Red Hat and IBM and they all agree that this move is not meant to change our culture and they recognise that it is an important part of how we function and deliver value to our customers. Therefore, changing it will be detrimental to our continued success. I have made the commitment to my entire team that our culture is something I want to protect, or I won't be involved.


This is one of my "red lines". These are commitments or promises that I've obtained from Red Hat or IBM, which I believe are necessary to maximise the chance of success for this move. There aren't many of them and in my view, they're all reasonable. However, I've been clear that these things are not negotiable, or we risk losing key associates, communities, customers, etc. If this move is truly going to bring the best result for everyone, then Red Hat and IBM need to deliver on these commitments and I’ll continue to help them do that. Failure to do so risks attrition of myself and many others.


So far, I am pleased to say that both companies agree and are giving my team and me the support to bring our culture and processes with us. I know many of you reading this will already be asking yourselves, "For how long?" and you aren't alone, as it's something many in my team are also asking. I don't know the future, but I do know that if their response changes then they cross that red line. These things aren't just important for the initial transition, they are important for the duration. It's important to know that this isn't about me; it never was. I've had the privilege to work with a growing team of committed and passionate people who are responsible for the success of the business and it's them I'm doing this for now. They're my extended family!


Some may read these ‘red lines’ as threats. Nothing could be further from the truth. In the opinion of myself and others, these items are key elements needed to ensure our success regardless of the name on the letterhead. They can't be taken piecemeal. If there's no commitment then what's the point in all of this? I'm at a stage in my career where I want to invest my time and energy in things that I truly believe are worth it, or I may as well stop and take up lion taming!


As we move forward with the transition, I’m convinced that the IBM people we are working with are authentic and want to make this a success too. For example, I’m just back from a few days in Raleigh meeting with the Red Hat and IBM leadership teams and I thought that was a pretty positive experience. In general, Red Hat has been good at treating my team fairly over the years, which has been a part of our corporate success. If IBM can continue that tradition, then the transition could be as simple as moving from one good home to another. Speaking with a number of IBMers, they also seem to feel treated well by IBM, so that’s a good start. While the positivity and goodwill continue, I’m more than happy to continue to lead the organisation and help define the future of enterprise Java for many years to come.


Conversely, if the support we need for our culture, upstream-first approach etc., falters, it will affect how the team works and how our communities and partners trust us. That would be a bad thing no matter which company we were working for, and I think the results would be obvious. However, if I was really worried that might happen, then I could put a pin in it right now and stop. I’m not going to, because this team and this business are worth the effort and so far, IBM and Red Hat agree.


 

Saturday, July 11, 2020

Farewell Rob ...

This year has definitely been shitty for many people globally for a number of reasons. To add to that, I just heard that a long time friend of mine died the other day from pancreatic cancer. I could write about him but another friend of ours, Mike, already did and very eloquently. Therefore, I'll just link to his entry and send Rob's family my condolences. I'll also toast Rob tonight along with all the good memories we share!

Monday, April 06, 2020

Community driven innovation and freedom

Picture this ... a world where we have an industry with open source innovation at its heart. Communities come together to tackle problems. Majority decisions are made in the open through mutual respect and often lots of discussion. Differences of opinion are as much a reality inside these open source communities as outside them.

In this world if you don't like the way a community is heading then you get involved and try to persuade them to change and understand your point. If there are many more people who believe the same then they get involved too and maybe that community changes direction. But in this reality you don't complain when a decision goes against you ... you respect the choices the community makes and you have the freedom to go and try to create your own community elsewhere if that's the best option.

You also don't complain about a community when you've had minimal or even zero involvement. You don't complain about dominance of communities by vendors or individuals if you fail to get involved, or fail to get a majority of like-minded people active enough for them to gain voting rights.

Even without voting rights, or until they are obtained, influence is possible because everything is discussed in the open. In this ideal world you get involved and help. You spend the time to earn the same rights as others. You expend the time and energy and do the work that is required to gain those voting rights, because they are a privilege and not an automatic right. However, if something is important to you then you spend that time to earn it.

Sometimes I think we live in this world but then I wake up!

Don't default fork MP into Jakarta EE

There should be no default rule for the relationship between Jakarta EE and MicroProfile, because each MicroProfile specification is often driven by a mix of different developers, vendors and communities who may not want their efforts forked. To ignore them is tantamount to a hostile take-over. The Jakarta EE communities should work with them and try to persuade them to see their point of view. However, if the MP spec community cannot be persuaded then I caution against continuing with a fork. Embed and try to work together, because the MP spec should still be usable within Jakarta EE. And working with the original community is a far better path to future success than trying to split efforts - anyone remember Hudson these days?

If no way to collaborate can be found, including simply embedding that spec into Jakarta EE, then I'd suggest that there is something fundamentally wrong with that specific MP community or those in Jakarta EE. I really can't see this ever happening though so it's not worth further consideration.

Then there's the notion of changing the namespace of a MP specification if it is "imported". Well I also don't think that should be a hard and fast rule either. It should be something that is worked out between the MP specification in question and the Jakarta EE community. It should also not be a reason to reject importing and collaborating with the MP community and defaulting to a hostile-like fork.

And that brings me to the final question: where does work on the MP specification continue, assuming it does need to continue? Well guess what? I think that's up to the MP spec community, since they are the founders of their work and their future destiny. If they feel that innovation should shift entirely to Jakarta EE then go for it, with all of my support. But likewise, if they feel innovation should continue in MP, then perhaps, as members of the corresponding Jakarta EE community, they work to pull updates across when it makes sense. A great collaboration.

Wednesday, September 11, 2019

That time again ... September 11th

Yes, it comes around again. This year I decided to link something more for Ed. My thoughts are with his family as usual. And mine.