The Future of AI and Deep Learning: Societal Impact, Applications, and Research Progress

“AI will take our jobs.” “Killer robots are about to take over the earth.” “Your doctor will be replaced by a machine.” - all phrases from the media over the past few months. Although AI is transforming every industry it touches, the way it is portrayed in the news is often one-sided, without considering the bigger picture. At RE•WORK summits, we showcase the latest breakthroughs in AI and focus on how they affect business and society. For the first time, we are seeing industry, academia, and society working hand in hand to ensure that technological advances can be applied in industry and benefit society as a whole.

Expanding on the ever-popular Deep Learning Summit Series, we have returned to San Francisco today and tomorrow for a triple-track event. The Applied AI Summit focuses on implementing the most cutting-edge AI methods in real-world settings; the AI for Good Summit looks at how we can ensure all departments assume responsibility for leveraging AI to tackle global challenges and benefit society; and the Deep Reinforcement Learning Summit draws on the most current research breakthroughs combining deep learning and reinforcement learning to achieve human-level intelligence. We have also seen experts host Deep Dive Sessions, designed to allow attendees to delve into more detail on some of the key topics explored across the two days. These sessions vary from interactive hands-on workshops to demonstrations and lecture-style presentations.

This morning kicked off with attendees getting to know each other over breakfast before selecting their first session of the day. With the attendee app up and running in advance of the summit, everyone had the chance to personalize their schedules to ensure they didn’t miss the most relevant sessions. There was also plenty of chatter on the app, with people setting up meetings, engaging in polls, and arranging catch-ups for the coffee breaks.

I found myself in the Deep Reinforcement Learning (DRL) Summit this morning, listening to the fantastic Dawn Song, Professor at UC Berkeley, speak about secure DRL. She explained how DRL has emerged as an important family of techniques for training autonomous agents and has led to the achievement of human-level performance on complex games such as Atari, Go, and StarCraft. Dawn also explained that DRL is vulnerable to adversarial examples and can overfit.

DRL has been making great advances, like AlphaStar defeating one of the world’s top players, but as we deploy it, we need to be aware of the presence of attackers. Attackers follow in the footsteps of new technology, and with AI the stakes are higher because the consequences will be more severe. We need to measure the right goals to approach better generalization, integrity and confidentiality. - Dawn Song
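
The adversarial examples Dawn warns about are easy to see concretely. Below is a minimal, illustrative sketch (not her actual attack) of a fast-gradient-sign-style perturbation applied to an agent's observation; the `fgsm_perturb` helper and the stand-in gradient values are hypothetical.

```python
import numpy as np

def fgsm_perturb(observation, gradient, epsilon=0.01):
    """Nudge each observation component by epsilon in the direction
    that increases the policy loss (the sign of the gradient).

    `gradient` stands in for the gradient of the policy's loss with
    respect to the observation, which a real attack would compute by
    backpropagating through the policy network.
    """
    return observation + epsilon * np.sign(gradient)

obs = np.array([0.2, -0.5, 1.0])   # toy observation vector
grad = np.array([3.0, -1.0, 0.0])  # toy loss gradient w.r.t. obs
print(fgsm_perturb(obs, grad))     # [ 0.21 -0.51  1.  ]
```

The perturbation is tiny for realistic epsilon, yet it can be enough to flip the action a trained policy selects, which is why deployed DRL agents need defenses against it.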

I also spoke with Dawn in an interview for the Women in AI podcast where we discussed this in more detail, as well as some of the challenges and successes in using DRL to train agents. You can subscribe to the podcast here and we’ll let you know when Dawn’s episode becomes available.

Women in AI Podcast

With DRL being a relatively new field of study, it was great to have Joshua Achiam, Research Scientist at OpenAI, hosting the Deep Dive session “An Introduction to Deep Reinforcement Learning”, where he took attendees through an introduction to DRL models, algorithms and techniques; examples of DRL systems; and case studies on how DRL can be used in practical applications.

“When do you want to use deep RL? You want to do it when there’s a complex, high-dimensional, sequential situation. For example, when you want to control sophisticated robots, or play video games from raw pixels, or be the best at a strategy game. Deep RL has the potential to succeed in these tasks.” - Joshua Achiam

Building on this session later in the afternoon, SAS hosted a session focusing on ‘What You Didn’t Learn About Machine Learning in School’, where Wayne Thompson, Chief Data Scientist at SAS, filled in some of the blanks that may have been missed in college or online courses, mapping out how to tune and evaluate models, how to actually put these models into practice, and how to monitor them once they’re deployed. C-level attendees, as well as technical experts, joined this session, all engaging in the more interactive Q&A: when asked about generalization, Wayne explained that everything in the machine learning pipeline has to be blueprinted, and “that is the number one reason why models don’t get deployed.”

He explained that once these models are deployed, they immediately begin to degrade. It is important to monitor model drift, retrain champion models and evaluate new challenger models. Model fairness and bias must also be addressed. Wayne suggested that it’s important to iterate on the models regularly, harvesting and building many models with similar features.

Back in the session rooms, Lucas Ives, Engineering Manager at Apple, was speaking on “The Art of Applied AI”, explaining that there is a particular approach to the problem-solving space that he thinks is missing from deploying AI in the real world. “It needs to be driven by the creative person, not the technical person.” He spoke about the importance of companies developing AI that actually serves their consumers, and more often than not this needs to come from a natural and creative perspective, rather than from a technology standpoint. “There’s been a quantum leap forward in the last 5 or 6 years in AI. With Siri, the word error rate sat at about 10% before the introduction of neural networks in 2010; now, if the environment is right, it performs better than humans.” In his presentation, Lucas took some time defining ‘Applied AI’ and explained that ‘some people see it as an incremental step towards AGI, some people see it as a narrow use of AI, some people prefer the term Machine Learning, but really it needs to be a combination of a variety of things that can be presented to a user to help solve their real-world problems.’

Following on from Lucas, Chul Lee from Samsung Electronics spoke about “The Challenges of Implementing” and explained how recent advances in AI have enabled consumer and mobile device companies to greatly automate their existing operations and to deliver more seamless, compelling user experiences around their devices. He explained how recent trends and algorithmic advances in personalization, data analytics, audience science, and human-computer interaction relate to IoT, personal assistance, device interaction/control, media discovery and logistics. Chul explained that they are using on-device AI processing that improves “the picture quality of our TVs, as well as our TVs as an AI assistant, as a universal guide that can make personalized suggestions and target specifically. It’s important for us to understand what kind of content is being served so we can personalize it accordingly.”

Also discussing the applications of AI, Carla Bromberg, Co-Founder & Program Lead of AI for Social Good at Google, gave an overview of the program, discussing examples and techniques Google and others are using to apply AI research and engineering to social, humanitarian, and environmental projects, hoping to inspire others to apply AI for good. Today we’ve heard about people working on poaching prediction for conservation, natural disaster prediction, using AI in education, and much more.

In her presentation, Carla spoke about her work predicting whale migration to help preserve endangered species: “We’re working with NOAA, who have underwater recording equipment - it would take a human 19 years to listen to the recordings, and they may never even hear a whale! Machine learning takes the 100,000 hours of recordings and finds the whale noises. We took the underwater audio, turned it into visualizations and annotated them with the species name. The more annotated examples we can show it, the better it gets. We can now see on a map where there’s a higher chance of finding the whales.”
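
The audio-to-visualization step Carla describes is typically a spectrogram. A rough sketch of that preprocessing, assuming the usual SciPy pipeline rather than Google's actual code (the `audio_to_spectrogram` helper and sample rate are hypothetical):

```python
import numpy as np
from scipy.signal import spectrogram

def audio_to_spectrogram(samples, sample_rate=10_000):
    """Turn a 1-D audio signal into a log-scaled spectrogram 'image'
    that a human annotator (or a classifier) can work with."""
    freqs, times, sxx = spectrogram(samples, fs=sample_rate, nperseg=256)
    return np.log1p(sxx)  # compress the dynamic range

# Synthetic stand-in for a clip of underwater audio.
clip = np.random.default_rng(0).normal(size=10_000)
spec = audio_to_spectrogram(clip)
print(spec.shape)  # (frequency bins, time frames)
```

Each spectrogram labelled with a species name then becomes one training example, which is why showing the model more annotated examples improves it.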

At the Summit, attendees are welcome to attend sessions on all four tracks, and attendees and speakers alike were enjoying the flexibility, finding that they were learning plenty of skills transferable to their current work:

“You have such a great line-up in there and it's not just the organizations, it's the people from those organizations, like those who I was sitting with at lunch.” - Jeff Clune, Uber AI Labs

“It was a great talk from Anna from Intel speaking about using AI to protect wildlife. It’s amazing how AI can do things like this as well as crop prediction and other social endeavours people are working on.” - Lisa Green, Data Science for Social Good

"I’ve not been to many events where you can go from super technical to looking at the bigger picture. We’re working with deep learning but are investing more and more in ethics and responsibility." - Mitchell, Microsoft Azure

We also hosted the increasingly popular Talent & Talk session during the coffee break and heard about vacancies from SAS, Moogsoft, Amazon, Bayer, Dolby Digital and many more. Matt from Numenta shared his vacancy and explained that their mission is to understand how intelligence works in the brain: “You don’t need a neuroscience background, but you need to be interested in it. We livestream all of our research meetings on Twitch and everything’s open-sourced. Only 2% of the neurons in your brain are firing at once, which is sparse - most DL models are very dense, which is the opposite of the brain, so we’re building DL models that only fire at 2%, and they work!”
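
Matt's "only 2% of units fire" idea can be sketched as a k-winners-take-all activation. This toy `k_winners` function is my own illustration, not Numenta's implementation; it keeps only the top 2% of activations in a layer:

```python
import numpy as np

def k_winners(activations, sparsity=0.02):
    """Keep the top k activations (k = sparsity * n) and zero the rest,
    mimicking a layer in which only ~2% of units fire at once."""
    k = max(1, int(activations.size * sparsity))
    # Value of the k-th largest activation; everything below it is silenced.
    threshold = np.partition(activations, -k)[-k]
    return np.where(activations >= threshold, activations, 0.0)

dense = np.random.default_rng(1).normal(size=1000)
sparse = k_winners(dense)
print(np.count_nonzero(sparse))  # 20 of 1000 units remain active
```

The resulting representation is sparse like the brain's, in contrast to the dense activations of a standard deep learning layer.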

Back in the DRL summit room, we heard about “Learning to Act by Learning to Describe” from Jacob Andreas from Microsoft Semantic Machines. He explained that there are a few problems at the intersection of language and RL: using interaction with the world to improve language generation, and using models for language generation to efficiently train reinforcement learners. “When we move into our RL phase, we have no information, just an instruction-following model. So what can we do with it? We know the parameters, but no instructions. So we have to search for instructions to identify what the DRL model wants us to do. We keep plugging them in to find the instruction and fine-tune. We’re using the structure in the language learning data to tell us what’s important when searching for policies. We restrict ourselves to what’s relevant and meaningful.”

During the coffee and lunch breaks, I was fortunate enough to interview several of our speakers, including Douglas Eck from Google, Karl Cobbe from OpenAI, and Dawn Song from UC Berkeley; we had some really interesting discussions that you can watch on our YouTube channel in a couple of weeks. With several members of the press in attendance from various publications, Sonja Reid, CEO of OMGitsfirefoxx, also helped out with several interviews, speaking with Danielle Deibler from Jurispect, Alicia Kavelaars from OffWorld and Jeffrey Shih from Unity Technologies.

Back for this afternoon’s sessions, Junhyuk Oh, Research Scientist at DeepMind, spoke about deep reinforcement learning approaches that have been shown to perform well on domains where tasks and rewards are well-defined. Junhyuk is working on AlphaStar, the first AI to defeat a top professional player in StarCraft, one of the most challenging Real-Time Strategy (RTS) games. He explained that new agents tend to be strictly stronger than all of the previous agents.

One of the common themes of today’s presentations has been personalization, and speaking about how this can be used in business was Ankit Jain, Senior Data Scientist at Uber. He explained that whilst these techniques have been used in areas such as e-commerce and social media, they transfer to Uber by using past ride data to predict future journeys and patterns on a case-by-case basis. He explained how they train LSTMs to predict trips by combining the past engagement data of a particular driver with incentive budgets, using a custom loss function (a zero-inflated Poisson) to come up with accurate trip predictions. Predicting rider/driver-level behaviours can help Uber find cohorts of high-performing drivers, run personalized offers to retain users, and dive deep into understanding deviations from trip forecasts.
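
The zero-inflated Poisson loss Ankit mentioned handles the many riders and drivers who take zero trips on a given day. A minimal sketch of its negative log-likelihood, based on the standard ZIP formula rather than Uber's actual code (`zip_nll` and the parameter values are illustrative):

```python
import numpy as np
from math import lgamma

def zip_nll(y, pi, lam):
    """Negative log-likelihood of a zero-inflated Poisson.

    pi  : probability of a 'structural' zero (e.g. driver not active at all)
    lam : Poisson rate for the trip count otherwise
    y   : observed trip count
    """
    if y == 0:
        # Either a structural zero, or a Poisson draw that happened to be 0.
        likelihood = pi + (1 - pi) * np.exp(-lam)
    else:
        likelihood = (1 - pi) * np.exp(-lam) * lam**y / np.exp(lgamma(y + 1))
    return -np.log(likelihood)

# Zero counts are cheaper to explain when the model allows structural zeros.
print(zip_nll(0, pi=0.3, lam=2.0) < zip_nll(0, pi=0.0, lam=2.0))  # True
```

In training, the LSTM would output `pi` and `lam` per rider or driver per period, and this NLL would be minimized over the historical trip counts.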

What else did we learn this afternoon?

Sherif Goma, IBM: Reinventing your company with AI and becoming a Cognitive Enterprise

Sherif explained how an 'outside-in' digital transformation is now giving way to the 'inside-out' potential of reshaped standard business architectures and intelligent workflows. This has given birth to Cognitive Enterprises, which define and pursue a bold vision to realize new sources of value and restructure their industries, missions and business models.

Ashley Edwards, Uber AI Labs: Learning Values and Policies from State Observations

Ashley used an example in her presentation to illustrate the DRL work she’s doing. “If we’re building an IKEA table, it’s more valuable to have the pieces outside the box than inside the box, but then it’s more valuable to have the table built than to have the pieces on the floor. So we give these states values. We then apply these values in observation and use them in Deep Reinforcement Learning.”
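
Ashley's IKEA example amounts to assigning scalar values to states and acting toward higher-value ones. A toy sketch of that idea (the state names, values, and `pick_next_state` helper are mine, purely illustrative):

```python
# Values increase as states get closer to the goal of a built table.
state_values = {
    "pieces_in_box": 0.0,
    "pieces_on_floor": 0.5,
    "table_built": 1.0,
}

def pick_next_state(reachable):
    """Greedy policy: move to the reachable state with the highest value."""
    return max(reachable, key=state_values.get)

print(pick_next_state(["pieces_in_box", "pieces_on_floor"]))  # pieces_on_floor
```

A learned value function generalizes this lookup table: instead of enumerating states, a network estimates the value of any observed state, and the policy is trained to move toward higher estimates.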

Wrapping up today’s presentations was the panel discussion “What is Responsible AI and Where Do I Start?”. As mentioned previously, ensuring that entire companies have ethical AI at the center of their mission is integral, yet in many areas people feel that research and applications take center focus, and there’s not enough time to focus on the social implications. The panellists spoke about creating transparent frameworks and common standards, as well as the positive impact of economic growth.

  • Anna Bethke, Head of AI for Social Good at Intel - "we have a lot of projects ranging from earth conservation to social impact. We’re working a lot on online harassment at the moment, using NLP algorithms to figure out if we can deter and defuse the conversations with automatic replies."
  • Tulsee Doshi, Product Lead for ML Fairness at Google - "I lead product for ML fairness. I get to work across products and have to learn how this is different across all our Google products. I’m looking at how developers can ask questions about their own products to see how we can ensure everyone is responsible."
  • Devin Krotman, Director at IBM Watson AI XPRIZE - "we’re asking teams and startups around the world to pick global challenges and apply AI or DL to them".
  • Adam Murray, Diplomat at U.S. Department of State - "we’re interested in looking into international frameworks on AI, and we’re looking at things related to the digital economy. It’s really important to foster trust in AI because it will boost our economy and boost innovation, but to do that we need trust. AI should be human-centred and fair, trustworthy, robust and ethical."

Some of the best networking happens outside of the presentations and sessions, so to wrap up today we rounded off with networking drinks, bringing together all four streams.

RE•WORK

RE•WORK was founded in 2013 to promote international research, development, application, and exchange in artificial intelligence and related fields. To date, RE•WORK has organized more than 50 AI summits around the world, including in Singapore, Hong Kong, the United States, London, and Canada.

OpenAI

OpenAI is a non-profit artificial intelligence research company that aims to promote and develop friendly AI in a way that benefits humanity as a whole. Founded in late 2015 and headquartered in San Francisco, it aims to "freely collaborate" with other institutions and researchers by making its patents and research open to the public. The founders were motivated in part by concerns about the risks of artificial general intelligence.

https://www.openai.com/
DeepMind

DeepMind is a British artificial intelligence company. Founded in 2010 as DeepMind Technologies Limited by Demis Hassabis, Shane Legg, and Mustafa Suleyman, it was acquired by Google in 2014. Following AlphaGo, Google DeepMind CEO Demis Hassabis said the company would research AI playing other games against humans, such as the real-time strategy game StarCraft II. If this kind of deep AI can be applied directly to other domains, then beyond playing different games it could in principle use the same neural networks to learn tasks that currently require human intelligence, such as autonomous driving, investment advice, music criticism, and even judicial decisions.

IBM

IBM is an American multinational technology and consulting company headquartered in Armonk, New York. Its main customers are governments and enterprises. IBM produces and sells computer hardware and software, and provides consulting services for systems architecture and web hosting. As of 2013, IBM operated 12 research laboratories worldwide along with a large number of software development sites. Although a commercial company, IBM has also made significant achievements in materials science, chemistry, and physics, and has built many products on that research. Its better-known inventions include the hard disk drive, the automated teller machine, the Universal Product Code, SQL, the relational database management system, DRAM, and Watson.

https://www.ibm.com/us-en/
Anki

Anki was founded in 2010 by three graduates of the Carnegie Mellon Robotics Institute and has raised more than $200 million in venture capital. More importantly, its products have genuinely attracted customers. Anki has sold 1.5 million robots, having found what it considers the easiest path into the home market: toys. Its star product is a feisty little bulldozer robot named Cozmo that can drive around a tabletop, play simple games, and interact with light-up cubes on its back. According to one analysis, Cozmo was the best-selling toy by revenue on Amazon in the US, UK, and France in 2017. That year, Anki claimed nearly $100 million in revenue; it could have turned a profit, but instead put the money into a 10-to-15-year plan: a transition from Roomba to Rosie.

http://anki.com/
AlphaStar

AlphaStar is an AI system for playing StarCraft II, unveiled by DeepMind in January 2019. In its debut that month, DeepMind showed match recordings in which AlphaStar defeated two human professional players, TLO and MaNa, attracting considerable attention in the field. According to the official DeepMind blog, AlphaStar's behavior is generated by a deep neural network that receives input from the raw game interface (a list of units and their properties) and outputs a sequence of instructions that constitute an in-game action. More specifically, the network uses a transformer torso, combined with a deep LSTM core, an autoregressive policy head with a pointer network, and a centralized value baseline.
