Google may have quietly dropped its Don't Be Evil tagline, but it is still firmly in the 'we're not planning to end the world' camp when it comes to AI.
To prove it, the company has released a comprehensive set of guidelines that it promises to adhere to in everything it does with AI.
The first batch of rules reads like something out of the Scouts' handbook and goes under the banner Objectives for AI Applications. It comprises seven principles, covering everything from privacy to accountability to upholding high standards of scientific excellence.
Making the rulebook
Where it gets really interesting, though, is the section titled AI Applications We Will Not Pursue. This is where Google outlines what it won't do. Choice keywords here include weapons, harm, surveillance, violating... that sort of thing.
"We will not design or deploy AI in technologies that cause or are likely to cause overall harm," explains Google.
"Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints."
Google concludes: "We believe these principles are the right foundation for our company and our future development of AI.
"We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time."
This is all great, but it's also exactly what the T-800 would say if it were trying to blend in with the real world.
Source: http://www.techradar.com/news/google-creates-its-own-ai-rulebook-promises-it-wont-go-all-terminator-on-us