DEFENCE

“I’m a thirty-second bomb! I’m a thirty-second bomb! Twenty-nine! … twenty-eight! … twenty-seven!” – The bomb, Starship Troopers, Robert A. Heinlein (1959)

“Serve the public trust, protect the innocent, uphold the law.” – Robocop, Robocop (1987)

How Tech Giants Are Devising Real Ethics for Artificial Intelligence

Four people involved in the creation of an industry partnership say its intent will be clear: to ensure that A.I. research is focused on things that will benefit people, not hurt them.

Source: How Tech Giants Are Devising Real Ethics for Artificial Intelligence

Engineers creating military drones that can be ‘grown’ in labs

 

Holy shit! We might be the ones “growing” them NOW. But how long until they grow themselves?!?

 

From News.com.au:

 

THE future of modern warfare just got terrifying, thanks to a breakthrough in the development of military drones. Researchers from BAE Systems — the world’s second-largest defence contractor — are currently exploring a new technology that will allow the military to “grow” small-scale unmanned aircraft …

 

Photo: BAE Systems

Source: Drones ‘grown’ using chemical reactions

Can this software identify terrorists by facial features?

From Digital Trends:

 

Faception’s theory of facial personality profiling is supported by two genetic research observations: DNA affects personality, and DNA determines facial features. Linking these two observations led the company to infer that personality can be identified in facial features, since both are products of genetic expression.

The accuracy of this inference is yet to be determined but … AI technology itself is idiosyncratic. Machine learning algorithms learn from the data they’re given – and this data can cause them to learn things in error.

 

An Israeli startup claims its software can identify terrorists, academics, professional poker players, and pedophiles by facial features alone.

Source: Can this Software Identify Terrorists by Facial Features? | Digital Trends
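The failure mode Digital Trends describes — a model learning the wrong thing from its data — is easy to demonstrate. Below is a minimal toy sketch (nothing to do with Faception’s actual system; all data is fabricated): a learner that simply latches onto whichever feature best separates its training labels. If a spurious artifact (say, photo background) happens to correlate with the label in training, the model “learns” the artifact, and its accuracy collapses once that correlation disappears.

```python
# Toy illustration of a spurious correlation (hypothetical data).
# The "learner" picks the single feature that best predicts the
# training labels -- which here is an artifact, not the real signal.

def train(rows, labels):
    """Return the index of the feature that best matches the labels."""
    n_features = len(rows[0])
    best_i, best_acc = 0, 0.0
    for i in range(n_features):
        acc = sum(r[i] == y for r, y in zip(rows, labels)) / len(rows)
        if acc > best_acc:
            best_i, best_acc = i, acc
    return best_i

def predict(model_i, rows):
    return [r[model_i] for r in rows]

# feature 0: the "real" trait signal (noisy, 4/6 correct);
# feature 1: a spurious artifact that matches the label perfectly
# in the training data only.
train_rows  = [(1, 1), (0, 0), (1, 1), (0, 0), (0, 1), (1, 0)]
train_label = [1, 0, 1, 0, 1, 0]
model = train(train_rows, train_label)
print(model)  # → 1: the learner chose the spurious feature

# At deployment the artifact no longer correlates with the label.
test_rows  = [(1, 0), (0, 1), (1, 0), (0, 1)]
test_label = [1, 0, 1, 0]
preds = predict(model, test_rows)
acc = sum(p == y for p, y in zip(preds, test_label)) / len(preds)
print(acc)  # → 0.0
```

The point isn’t that real systems are this crude — it’s that any learner rewarded for fitting its training data will happily fit an artifact, and nothing inside the model tells you which one it chose.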

US nuclear arsenal controlled by 1970s computers with 8in floppy disks

 

Reassuring!

 

From The Guardian:

Composite: Richard Masoner/Joint Task Force One/Flickr/AP

Given that magnetic media has a finite shelf life, and that disks and the drives needed to read and write to them are older than some of the operators of the machinery, the floppy revelation makes you wonder whether the US could even launch a nuclear attack if required. An “error, data corrupted” message could be literally life or death.

Source: US nuclear arsenal controlled by 1970s computers with 8in floppy disks | Technology | The Guardian

New robot bee may soon be a spy’s secret weapon

From Mashable:

Image: Carla Schaffer/AAAS

A robot bug that can land and stay quietly attached to the ceiling, without the need for audible motors, and that can wait to take off until no one is around, could be quite a boon for would-be spies.

Source: New robot bee may soon be a spy’s secret weapon

RoBattle is over 7 tons of semi-autonomous war machine

From Australian Popular Science:

RoBattle … is equipped with a modular “robotic kit” comprised of vehicle control, navigation, RT mapping and autonomy, sensors and mission payloads. The system can be operated autonomously in several levels and configured with wheels or tracks, to address the relevant operational needs.

Read more: RoBattle Is Over 7 Tons Of Semi-Autonomous War Machine | Military | Tech | Australian Popular Science

FBI has 411 million photos in its facial recognition system, and a federal watchdog isn’t happy

From ZDNet:

Image: GAO; Screenshot: ZDNet

The watchdog said it had “concerns regarding both the effectiveness of the technology” and the “protection of privacy and individual civil liberties.”

Source: FBI has 411 million photos in its facial recognition system, and a federal watchdog isn’t happy | ZDNet

Google developing kill switch for AI

From the BBC:

They say future AIs are unlikely to “behave optimally all the time”.

“Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions …”

But, sometimes, these “agents” learn to override this, they say, giving an example of a 2013 AI taught to play Tetris that learnt to pause a game forever to avoid losing.

Read more: Google developing kill switch for AI – BBC News

Related Posts: Google’s ‘big red button’ could save the world
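The Tetris trick above falls out of plain reward maximisation. Here’s a minimal sketch (hypothetical — not the actual 2013 agent or Google’s proposal): a simple value learner choosing between “play”, which risks a penalty for losing, and “pause”, which does nothing and costs nothing. With losing penalised and stalling free, it converges on pausing forever.

```python
# Toy sketch: why a reward-maximising agent learns to stall.
# Actions, rewards, and learning rule are all fabricated for illustration.
import random

random.seed(0)

ACTIONS = ["play", "pause"]
q = {a: 0.0 for a in ACTIONS}   # estimated value of each action
alpha = 0.1                      # learning rate

def reward(action):
    if action == "pause":
        return 0.0               # pausing: nothing happens, no penalty
    # playing: small chance of a win, larger chance of a loss
    return 1.0 if random.random() < 0.2 else -1.0

for step in range(2000):
    # epsilon-greedy: mostly exploit the best-known action
    if random.random() < 0.1:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    q[a] += alpha * (reward(a) - q[a])

best = max(q, key=q.get)
print(best)  # the learner settles on "pause"
```

The expected value of playing is negative (0.2 × 1 − 0.8 × 1 = −0.6) while pausing is exactly zero, so “do nothing forever” is the optimal policy — which is precisely why an external kill switch the agent can’t learn to route around matters.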

A ban on autonomous weapons is easier said than done

From The Guardian:

SoftBank Corp robot. Photograph: Bloomberg/Getty Images

Stephen Hawking, Elon Musk, Steve Wozniak and artificial intelligence researchers published a letter calling for a ban on autonomous weapons. This is an easy first step. A ban that works in practice will be much harder …

“Any AI research could be co-opted into the service of war, from autonomous cars to smarter chat-bots… It’s a short hop from innocent research to weaponization.”

The tension between dual uses of technology – for harm and for good – is particularly difficult to manage when the exact same technology can be used in a wide and unpredictable range of ways.

Read more: A ban on autonomous weapons is easier said than done | Science | The Guardian

Terrorists gearing up for a cyber fight, security firm says

From NBC News:

An example of the sort of conversation that takes place on … dark web forums involved a cleaner in Berlin who worked the overnight shift and wanted to know how they could help … Others chimed in, explaining how the janitor could load malware onto a USB device and plug it into a computer to allow them to remotely hack into the network.

“That is the kind of insider threat that we are going to be facing …”

Source: Terrorists Gearing Up for a Cyber Fight, Security Firm Says – NBC News