Four people involved in the creation of an industry partnership say its intent will be clear: to ensure that A.I. research is focused on things that will benefit people, not hurt them.
“I’m a thirty-second bomb! I’m a thirty-second bomb! Twenty-nine! … twenty-eight! .. twenty-seven!” – The bomb, Starship Troopers, Robert A. Heinlein (1959)
“Serve the public trust, protect the innocent, uphold the law.” – Robocop, Robocop (1987)
Holy shit! We might be the ones “growing” them NOW. But how long until they grow themselves?!?
THE future of modern warfare just got terrifying, thanks to a breakthrough in the development of military drones. Researchers from BAE systems — the world’s second-largest defence contractor — are currently exploring a new technology that will allow the military to “grow” small scale unmanned aircraft …
From Digital Trends:
Faception’s theory of facial personality profiling is supported by two genetic research observations: DNA affects personality, and DNA determines facial features. Linking these two observations lead the company to infer that personality can be identified in facial features since both are products of genetic expression.
The accuracy of this inference is yet to be determined but … AI technology itself is idiosyncratic. Machine learning algorithms learn from the data they’re given – and this data can can cause them to learn things in error.
From The Guardian:
Given that magnetic media has a finite shelf life, and that disks and the drives needed to read and write to them are older than some of the operators of the machinery, the floppy revelation makes you wonder whether the US could even launch a nuclear attack if required. An “error, data corrupted” message could be literally life or death.
RoBattle … is equipped with a modular “robotic kit” comprised of vehicle control, navigation, RT mapping and autonomy, sensors and mission payloads. The system can be operated autonomously in several levels and configured with wheels or tracks, to address the relevant operational needs.
The watchdog said it had “concerns regarding both the effectiveness of the technology” and the “protection of privacy and individual civil liberties.”
From the BBC:
They say future AIs are unlikely to “behave optimally all the time”.
“Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions …”
But, sometimes, these “agents” learn to over-ride this, they say, giving an example of a 2013 AI taught to play Tetris that learnt to pause a game forever to avoid losing.
Related Posts: Google’s ‘big red button’ could save the world
From The Guardian:
Stephen Hawking, Elon Musk, Steve Wozniak and artificial intelligence researchers published a letter calling for a ban on autonomous weapons. This is an easy first step. A ban that works in practice will be much harder …
“Any AI research could be co-opted into the service of war, from autonomous cars to smarter chat-bots… It’s a short hop from innocent research to weaponization.”
The tension between dual uses of technology – for hazard and for good – is particularly difficult to manage when the exact same technology can be used in a wide and unpredictable range of ways.
From NBC News:
An example of the sort of conversation that takes place on … dark web forums involved a cleaner in Berlin who worked the overnight shift and wanted to know how they could help … Others chimed in, explaining how the janitor could load malware onto a USB device and plug it into a computer to allow them to remotely hack into the network.
“That is the kind of insider threat that we are going to be facing …”