
By Yazmin How, Digital Content Manager, RE•WORK

Toward Responsible AI


When you hear the term ‘responsible AI’, it’s easy to interpret it as meaning that the model itself is responsible for the consequences of its actions. However, if AI is indeed artificial, surely that means it cannot assume responsibility? In this context, we refer to the overarching responsibility for AI from the very early stages of creation right up to deployment - so whose responsibility is it to ensure that models act safely and for the greater good of society?

Many options spring to mind: the government, businesses and independent regulators. Or should it be someone else entirely?

Whilst the development of AI is creating countless opportunities to improve the lives of people across the globe in industries such as education, healthcare, accessibility, finance and more, it’s also raising new challenges around the best way to ensure these models are fair and transparent, whilst being private and secure. This calls for regulation to ensure that companies, researchers, and enterprises alike are following the correct steps for developing responsible AI.

Expectation vs. Reality

The phrase ‘artificial intelligence’ carries big expectations. When companies announce that they are using artificial intelligence, this often comes with a promise or intention to solve a problem - internally, for their customers, or for a wider social impact. In the early stages of adoption this is often not the case, as companies are faced with teething problems, buggy algorithms and integration issues. To meet the expectations of society, it’s important for businesses to manage those expectations by creating a realistic, strong and transparent AI strategy, including the governance structures and practices needed for ‘responsible AI’ and company-wide AI adoption.

In a recent article, Charles Radclyffe at Forbes reinforces the importance of AI being employed for the benefit of society by explaining that ‘the first step towards responsible AI needs to be about people and not strategy.’

AI for Good vs. Responsible AI

AI can only be ‘good’ if it is responsible - the two go hand in hand. It’s worth mentioning, however, that AI for good has a different core focus. Whilst responsible AI homes in on the ways in which individuals and companies should be using AI, AI for good zooms out to the bigger picture of how AI can be applied to solve global challenges: reducing poverty, widening access to healthcare, increasing sustainability, tackling climate change, improving education, securing the future of food and much more.

Increasing the general public’s trust in AI is incredibly important. If consumers are confident that the information they hand over will be handled without risk of compromise, they are far more likely to become loyal customers who trust the service. If, for example, a company is known for biased algorithms or for building models too complicated for humans to understand, consumers will struggle to get on board and trust will be limited. As PwC explained in a recent article, ‘AI needs to be explainable so algorithms are transparent or at least auditable. It builds a control framework to catch problems before they start. It deploys teams and processes to spot bias in data and models, and to monitor ways malicious actors could “trick” algorithms. It considers AI’s impact on privacy, human rights, employment, and the environment. Leaders in responsible enterprise AI also participate in public-private partnerships and self-regulatory organisations, to help define standards worldwide.’

It’s all very well putting strategy first, but if people are not at the front of developers’ minds, developers will find themselves in a challenging position, having created black-box, untrustworthy AI.

How can we help?

If you’re keen to learn more about both responsible AI and applying AI for social good, join RE•WORK at the upcoming summits:

AI for Good Summit, San Francisco, 20 - 21 June, confirmed speakers include Carlos Filipe Gaitan Ospina, ClimateAI; Erin Kenneally, U.S. Dept of Homeland Security; Girmaw Abebe Tadesse, University of Oxford; Kay Firth-Butterfield, World Economic Forum. View confirmed speakers here.

Responsible AI Summit, Montreal, 24 - 25 October, previous speakers include Yoshua Bengio, Université de Montréal; Natacha Mainville, Google Brain; Brendan Frey, Deep Genomics; Daphne Koller, Calico. Speakers will be announced soon. Suggest a speaker here.

Get in touch: if you’d like any more information about the summits, email John at john@re-work.co


RE•WORK

RE•WORK was founded in 2013 with the mission of promoting international research, development, application and exchange in artificial intelligence. To date, RE•WORK has organised more than 50 AI summits around the world, including in Singapore, Hong Kong, the United States, London, Canada and beyond.
