A number of AI (artificial intelligence) experts have published the second International AI Safety Report.
The AI experts are affiliated with, or alumni of, various prestigious universities such as Oxford, Stanford, Princeton and Cambridge, as well as research institutes such as The Alan Turing Institute, Carnegie Mellon and many more well-known and recognised organisations. The panel comprises representatives nominated by over 30 countries.
Report Focus
The report's scope is 'general-purpose AI'. The report illustrates the term with an example: a general-purpose AI model trained on enough examples of 19th-century romantic English poetry can recognise new poems in that style and produce new material in a similar style.
With such capabilities, it is no surprise that general-purpose AI is being adopted globally, by individuals and organisations alike, from the silent generation to generation alpha, and from single-person companies to very large enterprises. Some very large organisations have, in recent times, come to realise both the benefits and the perils of general-purpose AI.
Report Conclusion
The report comes with a warning that the overall trajectory of general-purpose AI systems remains ‘deeply uncertain’ despite rapid advances in capability and growing deployment. AI capabilities are advancing faster than expected, and risks are becoming more real.
The experts cannot reliably predict how powerful AI will become, how it acquires new capabilities, or whether existing safety measures will hold up, making long-term outcomes highly uncertain.
While the report does not prescribe policy, it establishes a common scientific foundation to help governments and institutions make informed choices as AI continues to evolve.
The report outlines a broad spectrum of risks associated with AI systems. These include potential impacts on jobs, environmental pressures, and malicious uses such as cyber-attacks and AI-generated misinformation.
The authors note that while AI capabilities have improved over the past year, progress remains “jagged”, with systems excelling at some complex tasks while failing at simpler ones. They also highlight an “evaluation gap”, warning that benchmark results cannot reliably predict real-world utility or risk.
Future Discussions and Research
The report calls for more research into risk measurement, mitigation effectiveness and governance frameworks, noting that policymakers currently have limited visibility into how AI developers test and manage emerging risks.
The findings are expected to inform diplomatic discussions at the upcoming India AI Impact Summit, as governments consider how to balance innovation with safety, accountability and inclusive access to AI’s benefits.
Industry Comment from Zoho UK
Sachin Agrawal, Managing Director for Zoho UK, has commented:
“AI is already delivering meaningful gains for UK businesses, from accelerating decision-making to strengthening fraud detection, and its potential will only grow as frontier systems mature. However, this very evolution, which the latest international safety report describes as ‘deeply uncertain’, shows why we need clear, enforceable regulation that gives confidence to innovators and the public.”
“AI requires strong transparency and governance practices, especially as cloud-based AI systems are increasingly developed in one country and deployed in another, making consistent oversight and responsible data handling even more important. With AI capabilities advancing far faster than traditional governance cycles, and many sectors relying on a small number of general-purpose models, measures such as clear documentation, monitoring and standardisation are becoming increasingly important.”
“Robust AI governance must go hand in hand with a firm commitment to data privacy and ethical management. As organisations adopt more advanced systems, they have a responsibility to ensure that the data powering these models is secured, transparent and protected. This approach won’t just help mitigate the risks noted in the report, such as unpredictable behaviour, deepfakes and misuse, it’s vital to earning long-term trust and delivering lasting economic and societal value.”
Global constraints
Whilst the report's commentary centres on the pace of AI adoption and the risks it poses, its conclusion also draws attention to unforeseen technical limits that could slow progress, despite the investment commitments currently being made. Such forebodings are not new in the IT industry.
As of early 2026, the worldwide semiconductor shortage has not fully ended but has shifted, focusing heavily on high-performance AI chips and specialised memory rather than a general shortage of all components. Extreme demand for AI infrastructure is causing supply bottlenecks in high-end memory and packaging that are expected to last for a number of years.
Industry Comment from Intel, at Cisco AI Summit held 3 February 2026
Lip-Bu Tan, CEO at Intel, recently commented in response to a pointed question about global constraints:
“In terms of the AI, the biggest challenge, I think, for a lot of my customers is memory. Memory, actually, there’s no relief as far as I know. I talk to three key players, two of them I talk to very frequently, and they told me there’s no relief until 2028 because I think AI sucks up a lot of memory… I think it’s clearly from the compute side. And I was very happy to hear that customers are all crying for more products, and I didn’t prepare the production enough to meet the requirement. I think people started to find out that in application, CPU actually is more useful in terms of performance for all the compute requirements. [The need for] compute is increasing so much, and right now, my biggest challenge is focused on our production of supply chain to make sure we can meet the requirement.”
Industry Comment from March and April 2025 – DE-CIX
Almost a year ago, after MWC (Mobile World Congress) 2025, the then Head of DE-CIX’s Global Business Partner Program, Mareike Jacobshagen, voiced a number of issues, primarily focused on the theme “Scaling AI must start with Infrastructure”.
Mareike Jacobshagen, now Manager for Strategic Marketing Programs at DE-CIX, commented in UC Advanced magazine:
“The challenge lies in AI’s growing need for distributed processing. AI models are no longer confined to centralised data centers; they are deployed across cloud platforms, edge devices, and enterprise environments, each with unique latency and bandwidth requirements. AI inference – the real-time application of trained models and the most common use of AI – depends on ultra-fast data transfers between these locations. Traditional cloud architectures, which rely on unpredictable public Internet routing, introduce performance limitations that become unacceptable when milliseconds matter. Whether an AI model is making real-time decisions in a self-driving vehicle or processing predictive analytics at a financial firm, the network must be able to handle large-scale, high-speed data movement without congestion or extensive packet loss.”
The Real Challenge
Jacobshagen went on to discuss how that can be achieved, and her conclusion was:
“The real challenge is building the digital infrastructure necessary to support AI at scale, ensuring that latency, security, and bandwidth constraints do not hold back its potential. As organisations push AI deployments beyond isolated use cases and into widespread real-world applications, our focus must expand from developing new AI capabilities to ensuring the underlying infrastructure is ready to sustain them.”
Words spoken almost a year ago, but just as relevant today.