Proliferation of AI weapons among non-state actors ‘could be impossible to stop’

Lindsay Clark

The proliferation of AI in weapon systems among non-state actors such as terrorist groups or mercenaries would be virtually impossible to stop, according to a hearing before the UK Parliament.

The House of Lords’ AI in Weapon Systems Committee yesterday heard how the software-based nature of AI models that may be used in a military context makes them difficult to contain and keep out of nefarious hands.

When we talk about non-state actors, that conjures images of violent extremist organizations, but it should include large multinational corporations, which are very much at the forefront of developing this technology.

Speaking to the committee, James Black, assistant director of defense and security research group RAND Europe, said: “A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware: it’s missiles, it’s engines, it’s nuclear materials.”

An added uncertainty was that there is no established “war game” theory of how hostile non-state actors might behave using AI-based weapons. A further wrinkle, we’d note, is that today’s artificial intelligence isn’t particularly reliable, a point we hope isn’t lost on anyone.
