VOA Standard | Whether artificial intelligence should learn from human moral and ethical frameworks


From driving cars and fighting diseases to organizing what people see on the Internet, AI is increasingly making decisions for humans, say experts at a Stanford University conference on AI ethics. But as AI gets smarter, they say, programmers need to make sure they are also teaching AI programs to protect human values.

Many argue a good way to do that is to incorporate the UN’s Universal Declaration of Human Rights into AI programming.

We have a framework of principles that we should turn to: the existing international human rights framework.

The 1948 Universal Declaration of Human Rights lays out a series of basic articles supporting, among other things, the right to life and liberty, and denouncing slavery and torture.
That speaks to the legal obligation of governments and the responsibility of the private sector to protect, respect, and remedy human rights violations.

But opponents of regulating AI say the technology will only be as good or as bad as the humans who program it. They say more regulation of AI programming will only stifle innovation.

We're working on that policy. But words are different than action, sir.

Others here disagree.

Arguing that implies that innovation is more important than democracy or the rule of law, the foundations of our quality of life. I believe that some of the most serious challenges to our open societies, and to the open Internet today, stem not from over-regulation but from under-regulation of technologies.

The rapid development of AI by companies like Google, Facebook, Baidu and Tencent has far outpaced regulation, exposing challenges such as fake news, privacy violations, biased AI decisions and the dangers of automated weapons.

Experts say a global governance framework to ensure ethical AI practice is a necessity. Industry leaders, like former Google CEO Eric Schmidt, say more discussion is needed.

We’re here fundamentally because we want to have an open debate. We want to make sure that the systems that we’re building are built on our values, on human values.

But who gets to decide human values? These experts say making sure AI incorporates the UN's understanding of human rights into every decision goes a long way toward ensuring a safe and secure future.

I have feelings like everyone else.

SOURCE: VOA
  • Duration: 2.9 minutes
  • Speed: 144 wpm
  • Source: VOA, 2019-11-19