An arms race of a different kind has officials in the United States worried. Handout.

The image is forever burned into the minds of moviegoers … Arnold Schwarzenegger as the Terminator, a cyborg assassin sent back in time from 2029 to 1984 to kill Sarah Connor (Linda Hamilton), whose son will one day become a savior against killer machines in a post-apocalyptic future of artificial intelligence gone awry.

Pure fiction, you say? Not if you ask the US Navy.

According to a Navy official, the service is working to prevent Skynet — the fictional artificial-intelligence network that attempts to destroy humanity in the Terminator series — from becoming a reality as it continues efforts to field ever more capable robots, Defense News reported.

As the service works to build autonomous capabilities, trust is an ever-present concern for those charged with testing and evaluating the safety of the systems they are developing, said Steve Olsen, deputy branch head of the Navy’s mine warfare office.

But the developers are keenly aware that placing too much trust in weapons systems can present serious dangers, the report said.

“Trust is something that is difficult to come by with a computer, especially as we start working with our test and evaluation community,” Olsen said. “I’ve worked with our test and evaluation director, and a lot of times it’s: ‘Hey, what’s that thing going to do?’ And I say: ‘I don’t know, it’s going to pick the best path.’

“And they don’t like that at all because autonomy makes a lot of people nervous. But the flip side of this is that there is one thing that we have to be very careful of, and that’s that we don’t over-trust. Everybody has seen on the news [when people] over-trusted their Tesla car. That is something that we can’t do when we talk about weapons systems,” he added.

“The last thing we want to see is the whole ‘Terminator going crazy’ [scenario], so we’re working very hard to take the salient steps to protect ourselves and others.”

According to Popular Science, in August 2010, US Navy operators on the ground lost all contact with an unarmed Fire Scout helicopter flying over Maryland. They had programmed the unmanned aerial vehicle to return to its launch point if ground communications failed, but instead the machine took off on a north-by-northwest route toward the nation’s capital.

Over the next 30 minutes, anxious military officials alerted the Federal Aviation Administration and North American Aerospace Defense Command and readied F-16 fighters to intercept the pilotless craft.

Finally, with the Fire Scout just miles shy of the White House, the Navy regained control and commanded it to come home. “Renegade Unmanned Drone Wandered Skies Near Nation’s Capital,” warned one news headline in the following days. “UAV Resists Its Human Oppressors, Joyrides over Washington, D.C.,” declared another.

The Fire Scout was hardly a machine with the degree of intelligence or autonomy necessary to wise up and rise up, as science fiction tells us robots inevitably will, but it was still a wake-up call.
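For a sense of what such a lost-link failsafe amounts to in software, here is a minimal, purely illustrative sketch in Python. Every name, the timeout value and the simplified logic are hypothetical; this is a sketch of the general technique, not the Fire Scout’s actual flight software.

```python
from dataclasses import dataclass
import time

LINK_TIMEOUT_S = 10.0  # hypothetical: declare lost link after 10 s of silence


@dataclass
class Waypoint:
    lat: float
    lon: float


class LostLinkFailsafe:
    """Illustrative lost-link handler: if ground communications go quiet,
    command a return to the launch point instead of continuing the mission."""

    def __init__(self, launch_point: Waypoint):
        self.launch_point = launch_point
        self.last_heartbeat = time.monotonic()

    def on_ground_heartbeat(self) -> None:
        # Called whenever a valid ground-control packet arrives.
        self.last_heartbeat = time.monotonic()

    def next_waypoint(self, mission_waypoint: Waypoint) -> Waypoint:
        # If the link has been silent too long, abandon the mission leg
        # and head back to the launch point.
        link_silent = time.monotonic() - self.last_heartbeat > LINK_TIMEOUT_S
        if link_silent:
            return self.launch_point
        return mission_waypoint
```

As the 2010 incident suggests, the concept is simple; the hard part is making sure the vehicle actually takes the failsafe branch when the link drops.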

No surprise that the Pentagon is actively looking for the right person to help it navigate the morally murky waters of artificial intelligence and the battlefield of the 21st century.

“One of the positions we are going to fill will be someone who’s not just looking at technical standards, but who’s an ethicist,” Lt.-Gen. Jack Shanahan, director of the Joint Artificial Intelligence Center (JAIC), told The Guardian last week.

The US Navy is experimenting with a 135-ton ship named the Sea Hunter that could patrol the oceans without a crew, looking for submarines it could one day attack directly. Handout.

“We are thinking deeply about the ethical, safe and lawful use of AI,” he said. “At its core, we are in a contest for the character of the international order in the digital age. Along with our allies and partners, we want to lead and ensure that that character reflects the values and interests of free and democratic societies. I do not see China or Russia placing the same kind of emphasis in these areas.”

Olsen said the Navy is making great strides in getting autonomous systems to work together to perform complicated tasks, pointing to the service’s recent demonstration of single-sortie mine hunting, the report said.

And central to that is the creation of a common control system, or CCS, from which sailors can operate multiple systems. The problem, however, is the fragmentation of unmanned systems acquisition; there are several different systems in the fleet doing different things, Olsen said.

“One of the biggest challenges we have with autonomy,” Olsen said, “is that all of these unmanned systems go out into acquisition, and the requirements folks have always just said: ‘We need it to be an open-systems architecture.’ So industry brings us their version, and the problem is I’ve got 10 different open-systems architectures and none of them work together.

“As we move forward, you bring in more information where these systems start working together until ultimately you get to a level where you have more of a mission commander role, where the system has a type of knowledge,” he said. “That’s where trusting, but not over-trusting, comes into play,” he added.
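To illustrate what a common control system buys the fleet, here is a hypothetical Python sketch of the adapter pattern a CCS implies: one interface for the operator, with per-vendor translation hidden underneath. The vendor APIs and message formats shown are invented for illustration and are not the Navy’s actual CCS.

```python
from abc import ABC, abstractmethod


class ControlAdapter(ABC):
    """Common interface a (hypothetical) control station talks to,
    regardless of which vendor's 'open' architecture sits underneath."""

    @abstractmethod
    def task(self, mission: str) -> None: ...

    @abstractmethod
    def status(self) -> str: ...


class VendorAAdapter(ControlAdapter):
    # Vendor A exposes, say, an XML tasking message (hypothetical).
    def task(self, mission: str) -> None:
        xml = f"<task><mission>{mission}</mission></task>"
        print(f"[A] sending {xml}")

    def status(self) -> str:
        return "A: nominal"


class VendorBAdapter(ControlAdapter):
    # Vendor B expects a JSON command set instead (hypothetical).
    def task(self, mission: str) -> None:
        print(f'[B] sending {{"cmd": "task", "mission": "{mission}"}}')

    def status(self) -> str:
        return "B: nominal"


# One console tasking many systems: the point of a common control system.
fleet: list[ControlAdapter] = [VendorAAdapter(), VendorBAdapter()]
for vehicle in fleet:
    vehicle.task("single-sortie mine hunt")
    print(vehicle.status())
```

The adapter layer does the integration work that, in Olsen’s telling, ten incompatible “open” architectures currently leave undone.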

According to a report in The Atlantic, the Navy is experimenting with a 135-ton ship named the Sea Hunter that could patrol the oceans without a crew, looking for submarines it could one day attack directly. In a test, the ship has already sailed the 2,500 miles from Hawaii to California on its own, although without any weapons.

The Army, meanwhile, is developing a new system for its tanks that can smartly pick out targets and point a gun at them, as well as a missile system, called the Joint Air-to-Ground Missile (JAGM), that can pick out vehicles to attack without human say-so.

And the Air Force is working on a pilotless version of its storied F-16 fighter jet as part of its provocatively named “Skyborg” program, which could one day carry substantial armaments into a computer-managed battle.
