Friday, August 30, 2024

ASPI (Australian Strategic Policy Institute) The Strategist - 30 August 2024 - Nishank Motwani - The danger of AI in war: it doesn't care about self-preservation


The danger of AI in war: it doesn’t care about self-preservation

Recent wargames using artificial-intelligence models from OpenAI, Meta and Anthropic revealed a troubling trend: AI models are more likely than humans to escalate conflicts to kinetic, even nuclear, war.

This outcome highlights a fundamental difference in the nature of war between humans and AI. For humans, war is a means to impose will for survival; for AI, the calculus of risk and reward is entirely different, because, as the pioneering scientist Geoffrey Hinton noted, ‘we’re biological systems, and these are digital systems.’

Regardless of how much control humans exercise over AI systems, we cannot stop the widening divergence between their behaviour and ours, because AI neural networks are moving towards autonomy and are increasingly hard to explain.

To put it bluntly, whereas human wargames and war itself entail the deliberate use of force to compel an enemy to do our will, AI is not bound by the most basic of human instincts: self-preservation. The human desire for survival opens the door to diplomacy and conflict resolution, but whether and to what extent AI models can be trusted to handle the nuances of negotiation in ways that align with human values is unknown.

The potential for catastrophic harm from advanced AI is real, as underscored by the Bletchley Declaration on AI, signed by nearly 30 countries, including Australia, China, the US and Britain. The declaration emphasises the need for responsible AI development and control over the tools of war we create.

Similarly, ongoing UN discussions on lethal autonomous weapons stress that algorithms should not have full control over decisions involving life and death. This concern mirrors past efforts to regulate or ban certain weapons. However, what sets AI-enabled autonomous weapons apart is the extent to which they remove human oversight from the use of force.

A major issue with AI is what’s called the explainability paradox: even its developers often cannot explain why AI systems make certain decisions. This lack of transparency is a significant problem in high-stakes areas, including military and diplomatic decision-making, where it could exacerbate existing geopolitical tensions. As Mustafa Suleyman, co-founder of DeepMind, pointed out, AI’s opaque nature means we cannot decode its decisions to explain precisely why an algorithm produced a particular result.

Rather than seeing AI as a mere tool, it’s more accurate to view it as an agent capable of making independent judgments and decisions. This capability is unprecedented, as AI can generate new ideas and interact with other AI agents autonomously, beyond direct human control. The potential for AI agents to make decisions without human input raises significant concerns about the control of these powerful technologies—a problem that even the developers of the first nuclear weapons grappled with.

While some want to impose regulation on AI somewhat like the nuclear non-proliferation regime, which has so far limited nuclear weapons to nine states, AI poses unique challenges. Unlike nuclear technology, its development and deployment are decentralised and driven by private entities and individuals, so it’s inherently hard to regulate. The technology is spreading universally and rapidly with little government oversight, and it’s open to malicious use by state and non-state actors.

As AI systems grow more advanced, they introduce new risks, including elevating misinformation and disinformation to unprecedented levels.

AI’s application to biotech opens new avenues for terrorist groups and individuals to develop advanced biological weapons. That could encourage malign actors, lowering the threshold for conflict and making attacks more likely.

Keeping a human in the loop is vital as AI systems increasingly influence critical decisions. Even when humans are involved, their role in oversight may diminish as trust in AI output grows, despite AI’s known issues with hallucinations and errors. The reliance on AI could lead to a dangerous overconfidence in its decisions, especially in military contexts where speed and efficiency often trump caution.

As AI becomes ubiquitous, human involvement in decision-making processes may dwindle due to the costs and inefficiencies associated with human oversight. In military scenarios, speed is a critical factor, and AI’s ability to perform complex tasks rapidly can provide a decisive edge. However, this speed advantage may come at the cost of surrendering human control, raising ethical and strategic dilemmas about the extent to which we allow machines to dictate the course of human conflict.

The accelerating pace at which AI operates could ultimately erode the role of humans in decision-making loops, as the demand for faster responses leads to sidelining human judgment. This dynamic could create a precarious situation in which the quest for speed and efficiency undermines the very human oversight needed to ensure that the use of AI aligns with our values and safety standards.
