A blunt, British take on what actually mattered in AI this week — and what was just noise dressed as a breakthrough.

Full Episode Transcript

Good afternoon. It's sunny in London, though I'm not sure how much longer that will last. Today we're going to explore a theme that resonates with a quote from Alan Turing: "A very large part of space-time must be investigated if reliable results are to be obtained." This is particularly pertinent in the realm of artificial intelligence, where the challenge often lies in separating the signal from the noise. We're inundated with claims of breakthroughs and innovations, yet it's essential to maintain a critical eye. How do we discern what is truly significant amidst the clamour of the latest trends and fads? This is Turing's Torch: Artificial Intelligence Weekly — the bits that matter, minus the hype.

The integration of artificial intelligence into business operations is accelerating, but questions linger about whether the underlying data infrastructure is ready to support it. Firms are rapidly adopting artificial intelligence tools across departments, from finance to HR, with predictions suggesting widespread use within a couple of years. The problem is that these artificial intelligence systems rely on a robust "data fabric", essentially a unified and well-managed data infrastructure. Without it, companies risk encountering issues with data quality, accessibility, and the ability of different systems to work together. All of which, naturally, undermines the potential benefits of the artificial intelligence itself. Businesses are investing heavily in artificial intelligence, and if the data foundations are weak, they may not see the return they expect. Poor data management can lead to inaccurate insights, flawed predictions, and ultimately, wasted resources. The speed of artificial intelligence deployment risks outstripping the ability to manage the data it needs. It's another example of the technology outpacing the practicalities. We've seen similar patterns in other areas, like cybersecurity, where investment often lags behind the adoption of new technologies. One wonders if some of these firms have put the cart somewhat before the horse and are now scrambling to build the road while the vehicle is already moving. Still, it's a situation that will provide plenty of work for consultants in the years to come.

The uptake of so-called 'agentic artificial intelligence' appears to be slower than many predicted. It seems that only a small fraction of these systems are actually ready for real-world deployment. By 'agentic artificial intelligence', we're generally talking about systems that can perform tasks autonomously, without constant human oversight. Think of a customer service chatbot that can not only answer simple questions, but also troubleshoot problems, schedule appointments, and even process refunds, all on its own. The idea is that these systems act as agents, taking initiative and making decisions within certain parameters. The problem, according to recent reports, isn't necessarily the technology itself, but rather the difficulty of integrating these new artificial intelligence agents into existing business structures. Many companies are finding that their current IT infrastructure, their established workflows, and even their internal company culture are simply not ready for this level of automation. It's a bit like trying to install a high-performance engine into a car with a rusty chassis. You might have a powerful engine, but the rest of the vehicle can't handle it.
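For those following along in writing, here's a toy sketch of what "acting within certain parameters" means in practice: a model proposes an action, and ordinary code executes it only within pre-set limits. Everything here — the function names, the refund cap, the stubbed model — is invented for illustration, not drawn from any real product.

```python
# A minimal, illustrative agent loop: the model proposes, plain code
# disposes, and a hard limit keeps the agent inside its parameters.

REFUND_LIMIT = 50.00  # the "certain parameters" the agent must stay within

def model_propose_action(request: str) -> dict:
    # Stand-in for a real model call that returns a structured decision.
    if "refund" in request.lower():
        return {"action": "refund", "amount": 120.00}
    return {"action": "answer", "text": "Your order ships Tuesday."}

def handle(request: str) -> str:
    decision = model_propose_action(request)
    if decision["action"] == "refund":
        if decision["amount"] <= REFUND_LIMIT:
            return f"Refund of £{decision['amount']:.2f} issued."
        return "Refund exceeds limit; escalating to a human."  # oversight kicks in
    return decision["text"]

print(handle("I want a refund for my broken kettle"))
```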
The promise of agentic artificial intelligence is significant. It's not just about cutting costs through automation, but also about creating entirely new business models and services. If companies can't overcome these integration hurdles, they risk falling behind competitors who can. And it raises questions about the real-world impact of many artificial intelligence systems: are they genuinely transformative, or just expensive window dressing? We've seen this pattern before, of course. The history of technology is littered with examples of groundbreaking innovations that failed to live up to the hype, simply because the surrounding infrastructure and processes weren't ready. Perhaps the excitement about artificial intelligence has distracted from the more mundane, but essential, work of modernising legacy systems. It appears that the real bottleneck isn't in the algorithms, but in the plumbing. Until that changes, the true potential of these artificial intelligence agents will remain largely untapped.

The performance of artificial intelligence in real-world business scenarios is under scrutiny, with delays in processing times emerging as a significant obstacle. We're seeing examples of artificial intelligence systems, designed to expedite tasks, actually taking months to complete processes that should ideally take hours. The term being used is "latency", and it refers to the lag time between a request being made of the artificial intelligence and the artificial intelligence delivering a response. While the artificial intelligence models themselves might be capable of high-speed calculations, the overall systems they operate within can introduce bottlenecks. These can arise from slow data transfer, inefficient workflows, or outdated infrastructure. Businesses are increasingly reliant on artificial intelligence for critical operations. If an artificial intelligence system is slow, it doesn't matter how clever the underlying algorithms are. The inefficiency can lead to frustrated customers, lost revenue, and ultimately, a loss of confidence in the technology itself. Insurance claim processing is one such example, but think also of loan applications, customer service queries, or even supply chain management. We have discussed the importance of clear regulatory standards for artificial intelligence development, and the question of latency brings another dimension to that discussion. The focus shouldn't just be on the raw processing speed of the artificial intelligence, but on the end-to-end efficiency of the entire system. This requires a holistic approach, identifying and addressing bottlenecks throughout the operational workflow. It's another reminder that the true value of artificial intelligence is only realised when it's integrated seamlessly into the real world. One wonders if some of these companies paused to consider their existing infrastructure before rushing to implement the latest artificial intelligence solutions. It seems many are discovering that a shiny new artificial intelligence model is only as good as the clunky old systems it has to work with. That's a reminder that technological progress is rarely a straightforward upward curve.
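A rough illustration of that holistic approach: time each stage of the workflow separately, and the bottleneck usually announces itself. The stage functions below are hypothetical stand-ins, with sleeps playing the part of real-world delays.

```python
# Timing each stage separately shows where the time actually goes,
# rather than blaming "the AI" as a whole. All stages are placeholders.
import time

def timed(label, fn, *args):
    start = time.perf_counter()
    result = fn(*args)
    print(f"{label}: {time.perf_counter() - start:.3f}s")
    return result

def fetch_documents(claim_id):
    time.sleep(0.8); return ["policy.pdf"]   # slow data transfer lives here

def run_model(docs):
    time.sleep(0.1); return "approve"        # the model is often the fast part

def write_back(decision):
    time.sleep(1.2); return True             # legacy system round-trip

docs = timed("fetch", fetch_documents, "CLAIM-42")
decision = timed("model", run_model, docs)
timed("write-back", write_back, decision)
```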
There's been some discussion this week about the architectural choices companies are making as they implement artificial intelligence. It seems many are opting to build their artificial intelligence systems within the ecosystems of the big cloud providers, firms like Amazon, Google, and Microsoft. The attraction is obvious: a single vendor offering a complete suite of tools and services. The problem, however, is that these systems aren't really designed to play nicely with anything outside their own walls. Think of it as choosing to build your house entirely from Ikea components: everything fits together, but good luck incorporating that antique dresser you inherited. For artificial intelligence, this "walled garden" approach could limit flexibility. As artificial intelligence agents become more sophisticated and are expected to operate across diverse environments, a rigid infrastructure might become a real impediment. Imagine trying to deploy a customer service artificial intelligence chatbot that needs to access data from both your cloud-based CRM and your legacy on-premise database. If your artificial intelligence architecture is too tightly coupled to a single provider, that integration could become needlessly difficult, expensive, or even impossible. This has significant implications for businesses. It's a question of control, and potentially, competitive advantage. Are you willing to cede control over your artificial intelligence infrastructure to a single vendor, even if it means sacrificing some flexibility? Or do you invest in a more open architecture that allows you to mix and match artificial intelligence components from different providers and integrate with your existing systems more easily? The decision made now will likely have a lasting impact on the effectiveness of artificial intelligence deployments for years to come. It's reminiscent of the early days of the internet, when companies were wrestling with whether to build their own websites or rely on proprietary platforms like AOL. Those who embraced the open web ultimately thrived. Perhaps the key is to remember that shiny all-in-one solutions rarely stay shiny forever.

There's been a study suggesting many organisations are surprisingly ill-equipped to deal with things going wrong with their artificial intelligence systems. Apparently, a significant number are unsure how quickly they could actually stop an artificial intelligence if it started misbehaving, or even how to report the problem accurately. What this really highlights is that while everyone's keen to implement artificial intelligence, there's a distinct lack of planning for when things inevitably go wrong. We're not just talking about minor glitches here; imagine an artificial intelligence used in a critical system, like healthcare or finance, making a serious error. The ability to quickly shut it down and understand what happened is paramount. The implications are considerable. If companies can't effectively manage artificial intelligence failures, they risk financial losses, reputational damage, and potentially even legal repercussions. And it's not just about having a plan on paper; it's about understanding the artificial intelligence systems themselves, and having the technical expertise to intervene when necessary. This all comes at a time when regulators are increasingly scrutinising artificial intelligence deployments, so any perceived lack of control could attract unwanted attention. It does seem a little like buying a powerful sports car without learning how to use the brakes. Plenty of acceleration, but a somewhat limited capacity to avoid a crash. That said, some organisations are likely relying on the hope that if something goes wrong, they can just blame the algorithm. And perhaps they're right. All this really suggests is that a bit more attention needs to be paid to the downside of these systems.
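To make that concrete: the most basic of the missing controls is a kill switch checked before every model call, so an operator can halt the system without redeploying anything. A minimal sketch follows; the file path and function names are invented for illustration, not drawn from any real framework.

```python
# A kill switch: operations creates the flag file, and every model call
# refuses to run until it's removed. Crude, but it answers the question
# "how quickly could you actually stop it?" with "immediately".
import os

KILL_SWITCH = "/etc/ai/halt"  # hypothetical path an operator can touch

def model_call(prompt: str) -> str:
    if os.path.exists(KILL_SWITCH):
        raise RuntimeError("AI halted by operator; routing to human review.")
    return "model output"  # placeholder for the real inference call

try:
    print(model_call("assess this claim"))
except RuntimeError as err:
    print(err)  # fall back to the manual process and log the incident
```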
This week, a software company called Bobyard launched version 2.0 of its platform, aimed at professionals who estimate costs for construction and landscaping projects. The headline feature is apparently faster "takeoffs" and a unified artificial intelligence for those estimators. Now, "takeoff" in this context refers to the process of quantifying the materials and labour needed for a project. Think of it as meticulously counting bricks, measuring lumber, calculating person-hours – all that tedious, but crucial, groundwork before any building actually begins. The promise is that the artificial intelligence can help speed up these calculations and provide more accurate project budgets. The practical implications are fairly clear. If estimators can indeed generate quotes more quickly and accurately, then construction companies can bid more competitively, potentially win more contracts, and, of course, increase their profit margins. It's a sector under constant pressure to become more efficient, so any tool offering an edge will naturally garner attention. This push towards artificial intelligence-assisted estimation reflects a broader trend across industries: the automation of tasks that are seen as repetitive or time-consuming. We've seen similar developments in legal research, financial analysis, and even content creation. The promise is always greater efficiency, but the reality often involves navigating a new set of challenges, like ensuring the artificial intelligence is actually reliable and doesn't simply amplify existing biases. Of course, the critical question is whether this new version of Bobyard actually delivers on its promises. Faster workflows and unified artificial intelligence sound good in a press release, but the real test will be in the field, where estimators grapple with the messy realities of construction sites and ever-changing material costs. One can't help but wonder if this is yet another example of technology offering a superficial solution to a much deeper problem. Perhaps what the industry really needs is fewer unexpected delays and more reliable supply chains, rather than just a faster way to count the costs. One hopes someone will put it to the test, thoroughly.
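For anyone who has never met the term, here is roughly what a takeoff boils down to once the counting is done — a toy example with invented quantities and unit prices, nothing to do with Bobyard's actual product:

```python
# A takeoff, reduced to its arithmetic: quantify each item, price it,
# and total it up with a contingency for the inevitable surprises.

line_items = [
    # (description, quantity, unit, unit_cost_gbp) — all figures invented
    ("bricks", 4200, "each",   0.65),
    ("lumber",  180, "metre",  4.20),
    ("labour",   96, "hour",  32.00),
]

subtotal = sum(qty * cost for _, qty, _, cost in line_items)
contingency = subtotal * 0.10  # the messy realities of construction sites

for desc, qty, unit, cost in line_items:
    print(f"{desc:>8}: {qty:>6} {unit:<6} @ £{cost:.2f} = £{qty * cost:,.2f}")
print(f"estimate: £{subtotal + contingency:,.2f} (incl. 10% contingency)")
```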
There's been a noticeable uptick in concern among economists regarding the potential for widespread job displacement due to artificial intelligence. What was previously considered something of an alarmist position is now gaining more mainstream acceptance. Essentially, economic forecasters are starting to factor in the possibility that artificial intelligence could significantly reduce the number of jobs available across various sectors. We're not just talking about repetitive manual tasks, but also white-collar roles involving data analysis, customer service, and even some aspects of creative work. The immediate impact is likely to be felt by those in roles easily automated or augmented by artificial intelligence tools. Companies are already using artificial intelligence to streamline operations, reduce overhead, and potentially increase profits. This, of course, leads to redundancies, as we saw recently with Jack Dorsey's announcement of substantial layoffs at Block, a decision that was, somewhat grimly, rewarded by investors. The broader implications are that we may need to rethink our social safety nets, retraining programmes, and even the very definition of work. This shift comes amid ongoing discussions about the regulation of artificial intelligence and its potential impact on various aspects of society. If artificial intelligence genuinely delivers significant productivity gains, the benefits could accrue to a small number of individuals and corporations, exacerbating existing inequalities. This also ties into the ongoing debate about transparency in artificial intelligence development. Without a clear understanding of how these systems are making decisions, it becomes difficult to assess their true impact on employment and other societal factors. It's all very well to talk about the wonders of artificial intelligence, but perhaps we should also consider whether the relentless pursuit of efficiency is actually efficient for society as a whole. Just because something can be automated doesn't necessarily mean that it should be, particularly if the consequences include widespread unemployment and social unrest. That's the economic picture, at least for this week.

European financial authorities are concerned about the potential for artificial intelligence to be used in cyberattacks against banks. This isn't simply theoretical; the European Central Bank is apparently engaging with banks to assess their readiness for such threats. What we're talking about here is the prospect of an artificial intelligence model designed to identify weaknesses in financial systems. Such a model could then, theoretically, exploit those vulnerabilities. Now, cybersecurity in finance is nothing new. Banks have been dealing with hackers for decades. The difference here is the speed and scale an artificial intelligence could bring to the process. A human hacker might take weeks or months to find a flaw. An artificial intelligence could potentially do it in hours, or even minutes. The real-world impact is fairly obvious. A successful artificial intelligence-driven attack could lead to significant financial losses, erode public trust in the banking system, and even trigger a wider economic crisis. It's a question of systemic risk, in other words. And given the interconnectedness of the global financial system, a problem in one bank could quickly spread to others. This development also ties into the broader discussion around the responsible use of artificial intelligence. We've heard a lot about artificial intelligence being used for good, to improve customer service or detect fraud. But this serves as a stark reminder that the same technology can be used for malicious purposes. It is, to use a cliché, a double-edged sword. Of course, one might ask whether this is all just a bit of scaremongering. Banks already invest heavily in cybersecurity. Is an artificial intelligence-driven attack really such a game-changer? Or is this simply the latest excuse to sell more security software? Perhaps. But the potential consequences are serious enough that it's probably wise to take the threat seriously, even if the actual risk is overblown. The financial sector, it seems, has yet another challenge to add to its already considerable list.

It seems a set of suggested shortcuts has been published for the Claude large language model, aimed at helping users get more out of it. Essentially, these are suggestions for more sophisticated prompts. The idea is that instead of just asking for a summary, you might, for example, ask Claude to adopt a specific persona, to focus on particular aspects of a document, or to structure its response in a certain way. It's about moving beyond the most basic uses and trying to leverage the system's capabilities more fully.
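As a sketch of what that richer prompting looks like in practice, here's the idea expressed with Anthropic's Python SDK. The model name, the auditing persona, and the document are all illustrative choices of mine, not anything from the published shortcuts.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

report_text = open("q3_report.txt").read()  # hypothetical document to review

# The "shortcut" is simply a richer prompt: a persona, a narrow focus,
# and a required structure, rather than a bare "summarise this".
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative; use whichever you have
    max_tokens=500,
    system="You are a sceptical financial auditor. Be terse and concrete.",
    messages=[{
        "role": "user",
        "content": "Review the report below. Focus only on cash-flow "
                   "anomalies, and answer as three bullet points: finding, "
                   "evidence, suggested follow-up.\n\n" + report_text,
    }],
)
print(message.content[0].text)
```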
The immediate impact is likely to be on productivity, at least for those who adopt these techniques. If these shortcuts genuinely improve the quality or speed of Claude's output, then professionals who rely on it for tasks like writing, research, or analysis could see a tangible benefit. This also plays into the broader discussion around how we interact with these systems. Are we using them to their full potential, or are we stuck in a rut of simple queries that barely scratch the surface? We have seen similar efforts across many different artificial intelligence platforms, and it highlights a recurring theme: the skill lies not just in building these models but in learning how to use them effectively. It's a bit like discovering that your expensive camera has features you never knew existed. Of course, the claim that these shortcuts will "boost your efficiency" tenfold should be taken with a pinch of salt. Marketing departments are prone to hyperbole. But even if the gains are more modest, the underlying principle remains sound: a little investment in learning how to prompt these systems can pay dividends. That's the theory, anyway.

Google's latest iteration of its Gemma model now includes a feature called function calling. Essentially, the artificial intelligence can now access real-time data directly, rather than relying solely on its training data. What this means in practice is that the artificial intelligence can, in theory, invoke functions – typically Python code – to retrieve information. So, instead of just guessing what the weather is like, it could call a weather API and give you the actual, current conditions. The stated aim is to reduce the well-documented problem of AIs simply making things up – or, as some call it, hallucinating. This development matters because it represents a push towards artificial intelligence that is grounded in verifiable facts, rather than just repeating patterns it has learned. If it works as advertised, it could improve the reliability of artificial intelligence systems in applications where accuracy is paramount. This also ties into the broader trend of open-weight artificial intelligence models, where the model weights themselves are openly available. While this openness has benefits, it also raises concerns about security and data integrity, especially when dealing with real-time information. The idea of an artificial intelligence seamlessly calling functions and delivering accurate information is certainly appealing, but the reality of such integrations is often more complex than it appears. We need to see how well these function calls perform in real-world scenarios. Are we genuinely moving closer to artificial intelligence that can provide accurate, context-aware responses, or are we simply layering complexity over existing issues? It seems to me that before we start celebrating the end of artificial intelligence fabrication, we should probably wait to see if these models can reliably tell the difference between a weather forecast and a particularly florid poem about clouds.
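The mechanics, stripped to a sketch: the model emits a structured request rather than a guess, your code runs the function, and the result goes back to the model for the final answer. This shows the general pattern, not Gemma's exact wire format, and the weather lookup is a stub standing in for a real API call.

```python
# Generic function-calling loop: parse the model's structured request,
# dispatch to a whitelisted function, and hand back grounded data.
import json

def get_weather(city: str) -> dict:
    # Stub for a real weather API call.
    return {"city": city, "conditions": "light rain", "temp_c": 14}

TOOLS = {"get_weather": get_weather}

# What the model might emit when asked "What's the weather in London?"
model_output = '{"function": "get_weather", "arguments": {"city": "London"}}'

call = json.loads(model_output)
result = TOOLS[call["function"]](**call["arguments"])

# In a full loop this result is fed back to the model; here we just show
# the verifiable data it would use instead of a guess.
print(f"{result['city']}: {result['conditions']}, {result['temp_c']}°C")
```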
That's the data for now.

The Pentagon is currently embroiled in a legal disagreement with Anthropic, one of the leading artificial intelligence firms, which raises some uncomfortable questions about the role of artificial intelligence in modern military operations. Specifically, it calls into question the popular notion that keeping "humans in the loop" will somehow keep things safe when artificial intelligence is used in warfare. Now, "humans in the loop" is a rather comforting phrase, suggesting that while machines might be doing the heavy lifting, there's always a person there to step in and make the final decision, preventing unintended consequences. The reality, it seems, may be rather different. As artificial intelligence systems become more autonomous, and are asked to make decisions faster than any human possibly could, the idea of effective human oversight becomes increasingly questionable. The speed and scale of modern conflict mean that artificial intelligence systems are likely to be making decisions in real time, with little or no opportunity for human intervention. This is not about artificial intelligence assisting with logistics or intelligence gathering; it's about artificial intelligence potentially choosing targets, deciding on appropriate responses, and generally acting on its own initiative. The implications are, to put it mildly, rather alarming. We are entrusting decisions about life and death to algorithms, with the comforting idea of human oversight proving to be something of a convenient fiction. This is particularly concerning given the speed at which artificial intelligence technology is advancing, outpacing the development of any meaningful ethical or legal frameworks to govern its use. There's a lot of talk about ethical artificial intelligence, of course, but the reality on the battlefield may be far more unpredictable, and more chaotic, than anyone is willing to admit. It's another example of the technology running far ahead of our ability to understand, or control, its potential impact. We should remember that the phrase "artificial intelligence safety" sounds comforting, but it can also be a convenient label for decisions made far away, by people we don't know, using systems we cannot audit. Perhaps the comforting image of the human in the loop is simply a way to soothe our consciences as we sleepwalk into a future where machines are making increasingly critical decisions on the battlefield. And that, one suspects, is a dangerous illusion indeed.

The conversation around artificial intelligence in business seems mostly concerned with the models themselves, the headline-grabbing capabilities and benchmark performance. But a more fundamental issue is emerging: control of the operational infrastructure. What does that mean? Well, consider it like this. Everyone's focused on the engine of a car, but someone still needs to build the roads, maintain the traffic signals, and ensure the petrol stations are stocked. In the artificial intelligence world, the operating layer is the collection of systems and processes needed to actually deploy, manage, and improve the models. It's about governance, adaptability, and scalability. Without a solid foundation, even the most sophisticated artificial intelligence will struggle to deliver real value. This matters because whoever controls the operating layer wields significant power. They dictate how artificial intelligence is used, who has access, and how it evolves. Companies would do well to pay as much attention to that unglamorous plumbing as they do to the models themselves, before someone else ends up holding the keys.

Well, another week digested, another mountain of pronouncements scaled. Clarity, I think we can agree, remains a rare and precious commodity. If you'd like a daily distillation of the important bits, without the froth, you can find it at jonathan-harris dot online. And for those wanting to venture further down the rabbit hole I've put together a rather comprehensive guide called "Artificial Intelligence in Banking: Revolutionizing Finance and Data Security" available at jonathan-harris dot online slash ebooks slash artificial dash intelligence dash in dash banking dash revolutionizing dash finance dash and dash data dash security slash buy dash now. It's for people who prefer understanding to slogans. That's your lot for this week's Turing's Torch. If you want the daily brief, head to jonathan-harris dot online. Same time next week — try not to believe the press releases.