The virtues of AI aren't guaranteed, Google's CEO warns.
Google Chief Executive Sundar Pichai sees the benefits of artificial intelligence but is also calling for regulation of the technology in a new Financial Times editorial.
The leader of the world's largest Internet search company called AI one of the "most promising" new technologies to shape our lives and pointed to work Google has been doing with AI, including helping doctors spot breast cancer, providing real-time hyperlocal rainfall forecasts, and reducing flight delays, among other things.
The good that AI brings isn't guaranteed
Nevertheless, Pichai warned that the virtues of AI aren't guaranteed.
"Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread," wrote the Google CEO in the editorial.
"These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone."
Regulation is a must
Pichai called it a step in the right direction that the EU and U.S. are working to develop regulatory proposals governing AI, noting that international alignment is necessary to create global standards. At the same time, the executive said everyone has to agree on core values.
Companies like Google can't simply build technology and let market forces decide how it is used. They also have to develop technology that's used for good and is accessible to everyone, not just a few, he said.
"Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it," wrote Pichai, noting that Google published its own AI principles in 2018 to help guide ethical development and use of AI and has created tools to put the principles in action. That includes testing AI decisions to ensure they are fair and conducting independent human rights assessments of its new AI-based products.
Government will play a big role in regulating AI
Still, government regulation will play a big role in safeguarding society from the downsides of AI. Pichai pointed to Europe's General Data Protection Regulation as a foundation to build on.
"Good regulatory frameworks will consider safety, explainability, fairness, and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities," he wrote.
SOURCE: Paper.li