By Yazmin How, Digital Content Manager - RE•WORK

Should Governments Be Responsible for the Regulation and Safety of AI?


The world has produced a lot of sci-fi movies and books at this point, and many of them have been based on the idea that artificial intelligence (AI) will reach unbelievable progress in its development, up to the point where humans simply cannot control it.

Naturally, this outcome isn’t exactly the best for humanity because intelligent machines could even threaten our existence, so many people warn about the potential negative impacts of intelligent machines on us in the future.

But is the fear of those who think the rise of AI is dangerous to the world justified? Well, many people, including some of the brightest minds in the world, have already expressed concerns regarding AI. For example, Elon Musk called it “an immortal dictator from which we would never escape.” Bill Gates and Stephen Hawking shared this concern as well.

These people certainly know a thing or two about technology, so we should listen to them. But who’s going to be responsible for the regulation and safety of AI when the technology becomes mainstream and has the potential to deliver a profound negative impact?

Surely, the first thing that comes to mind is the government, so let’s talk about whether governments around the world should step up and take action against the challenges associated with AI.

What are the Potential Negative Impacts of AI on People?

While we’re still many years away from a potential sci-fi-like effect of AI on humanity, there are already a number of significant challenges people need to address. These include, but are not limited to, potential job displacement, safety implications, unethical use of data, and even bias unintentionally introduced into AI algorithms by a human developer.

Here are some facts that demonstrate the current scope of the problem and the potential impact of AI.

  • An AI-powered semi-autonomous driving system has been involved in two fatal accidents this year: one killed a driver (the National Highway Traffic Safety Administration concluded that the system was working as designed but was very easy to misuse), and another killed a pedestrian.

  • There have been several reports of people in California attacking self-driving cars, autonomous delivery robots, and security robots.

  • According to a recent study by McKinsey Global Institute, robots and AI-powered agents could eliminate up to 30 percent of the world’s human labor by 2030 (depending on the speed of adoption). This means millions of lost jobs.

  • The “black box” problem. Companies develop complex AI algorithms that can perform various tasks, from determining the risk of hypertension to approving loans. Sometimes, however, the algorithms are so complex that even their developers can’t explain exactly how they work. This is called the “black box” problem – the inability to understand and explain how an algorithm works.
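The “black box” problem can be made concrete with a toy sketch. The hypothetical loan-approval example below (not taken from the article, and with arbitrary made-up weights) contrasts a simple rule, whose reason for a decision is the rule itself, with a tiny neural network that produces a decision no single weight can explain:

```python
import math

# Hypothetical illustration: two loan-approval "models" that make the
# same kind of decision, but differ in how explainable that decision is.

# 1) An interpretable rule: the reason for the decision IS the rule.
def rule_based_approval(income, debt):
    """Approve if debt is under 40% of income -- easy to explain."""
    return debt < 0.4 * income

# 2) A "black box": a tiny neural network with arbitrary fixed weights.
#    It produces an answer, but no individual weight corresponds to a
#    human-readable reason for the decision.
WEIGHTS = [[0.7, -1.3], [-0.9, 0.8]]   # made-up hidden-layer weights
OUT = [1.1, -0.6]                      # made-up output weights

def black_box_approval(income, debt):
    x = [income / 100_000, debt / 100_000]   # crude normalisation
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
              for row in WEIGHTS]
    score = sum(o * h for o, h in zip(OUT, hidden))
    return score > 0                          # why this threshold? hard to say

applicant = (60_000, 30_000)   # income, debt
print(rule_based_approval(*applicant))   # decision comes with a reason
print(black_box_approval(*applicant))    # decision only; the reason is opaque
```

The two models can even disagree on the same applicant, and only the first can justify its answer — which is exactly why regulators worry about algorithms that approve loans or assess health risks without an explanation.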

With rapid advancements in AI technologies, the government clearly needs to address the problem and introduce reasonable regulations to minimize the negative impact on society and maximize safety.

How can the Government Regulate AI?

According to a 2017 Morning Consult report on emerging technology, 71 percent of U.S.-based respondents said the government should impose national regulations on AI, and 67 percent also called for international regulations. These results suggest that the public is aware of the potential problems and safety issues brought by AI, and supports the idea that the government should move fast and adopt appropriate regulations.

But what could be the basic principles that governments around the world should apply to regulate the impact of AI and keep humans safe from it?

Actually, we already have an example.

The British government has made serious moves this year to regulate the use of AI in the country and protect the data of citizens from potential exposure. In fact, the House of Lords Artificial Intelligence Committee has released a report called “AI in the UK: Ready, Willing, and Able?” that proposes a specific strategy for the government and local businesses for the regulation and management of AI.

The report proposes five specific principles for an essential AI code for the UK:

Principle #1: Artificial intelligence should be developed for the common good and benefit of humanity.

Principle #2: Artificial intelligence should operate on principles of intelligibility and fairness.

Principle #3: Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

Principle #4: All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

Principle #5: The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

They present a good basic framework for the regulation of AI and are likely to be adopted by the British government. This year, the country’s government took another step in this direction and adopted a code of conduct on AI in healthcare that imposes ten principles data-driven AI technologies must adhere to, ensuring the protection of patient data used by the technology.

The Answer

With rapid advancements in AI technology, it’s becoming clear that governments across the world need to step up and regulate its development to ensure that it delivers benefits to humanity. They should research, understand, and regulate AI accordingly to minimize the potential negative impacts on citizens.

Only a well-thought-out, smart approach can ensure that AI systems produce ethically sound and safe recommendations and solutions, so it’s clear that governments should not stand aside but be active players, providing strong guiding principles.

Tom Jager is a professional blogger based in London and a content manager at Proessaywriting. He covers topics related to digital marketing, blogging, social media and business in general, and is always seeking to discover new ways for professional and personal growth. He also contributes guides and tips to College-paper and Essayontime.


RE•WORK

Founded in 2013, RE•WORK aims to promote international research, development, application and exchange in artificial intelligence and related fields. To date, RE•WORK has held more than 50 AI summits around the world, including in Singapore, Hong Kong, the US, London and Canada.

Topics: Deep Learning, AI, Government Transformation