United States launches Artificial Intelligence programme to detect hidden nuclear missiles


Google says it will continue to work with the United States military on cybersecurity, search and rescue, and other non-offensive projects. "These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions", the company said of its new principles. Google had likely planned to bid on JEDI, a cloud computing contract with the Defense Department that could be worth as much as $10 billion. The company is also joining other labs in saying it may hold back certain research if it believes others will misuse it.

A company executive told employees this week that the Project Maven contract would not be renewed after it expires at the end of 2019.

In addition to a ban on autonomous weaponry at Google, the company will also seek to avoid creating AI whose principal goal is to injure or harm human beings, as well as "technologies that gather or use information for surveillance violating internationally accepted norms". The Google official acknowledged that enforcement would be hard because the company cannot track every use of its tools, some of which can be downloaded free of charge and used privately.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases.

One Google employee who signed the petition said the principles do not go far enough, but that it is good the company has finally addressed the issue.

Google is one of the leading technology companies in artificial intelligence, a position that landed it a lucrative government contract last year to work on "Project Maven". Other employees described the internal reception of the principles as lukewarm.

"The global norms surrounding espionage, cyberoperations, mass information surveillance, and even drone surveillance are all contested and debated in the worldwide sphere", he said.

The last point in particular reflects the company's intention to no longer work on AI projects that could be used for surveillance, cause overall harm, or violate international human rights law.

While the principles were released soon after the protests over Project Maven, the guidelines also address a wide range of broader concerns about AI.

"Ultimately, how the company enacts these principles is what will matter more than statements such as this", Asaro said.

Google's Chief Executive Officer, Sundar Pichai, set out seven objectives for the use of artificial intelligence in a blog post on Thursday. "But it's a start", the employee who signed the petition added.

Google expects to have talks with the Pentagon over how it can fulfil its contract obligations without violating the principles outlined Thursday.

"While this is our chosen approach to AI development, we also understand that there is room for many voices in this conversation", Pichai wrote in the blog post. "And we will continue to share what we've learned to improve AI technologies and practices."


