By Luke Kenworthy

MIT's Lex Fridman: An Introduction to Reinforcement Learning

At the San Francisco AI Summit, MIT researcher Lex Fridman shared his thoughts on recent developments in AI and its current standing.

We were delighted to be joined by Lex Fridman at the San Francisco AI Summit today. He took part in a ‘Deep Dive’ session, which allowed for a great amount of attendee interaction and collaboration, as well as a fireside chat with OpenAI Co-Founder & Chief Scientist Ilya Sutskever. The MIT researcher shared his thoughts on recent developments in AI and its current standing, highlighting its growth in recent years.

“It’s amazing how much has happened in recent history. The industrial revolution was just 300 years ago; human beings started creating automated processes during this time. Since then, we have seen a gigantic leap toward machines that are learning for themselves!”

Lex then referenced Lee Sedol, the South Korean 9-dan Go player, who remains the only human to have won a game against AlphaGo, a feat that has since become all but impossible. He described the AlphaGo match as a seminal moment, one which changed the course of not only deep learning but also reinforcement learning, and which increased public belief in that subsection of AI. Since then, of course, video games and tactical games, including StarCraft, have become central to the development of AI.

The comparison of reinforcement learning to human learning is something we often come across, and Lex said it needed addressing: humans seem to learn from “very few examples”, as opposed to the heavy datasets needed in AI. Why is that? Lex offered three possible explanations:

  1. Hardware - 230 million years of bipedal movement data
  2. Imitation Learning - Observation of other humans walking
  3. Algorithms - Better back-propagation and stochastic gradient descent
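The third point refers to the optimisation machinery underlying modern deep learning. As a minimal sketch (illustrative only, not code from the talk), the following shows stochastic gradient descent fitting a one-parameter model to data generated with a true weight of 2.0; the learning rate and data values here are arbitrary choices for the example:

```python
import random

# A minimal sketch of stochastic gradient descent: fit y = w * x
# to data generated with a true weight of 2.0.
random.seed(0)
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5]]

w = 0.0    # initial parameter
lr = 0.05  # learning rate

for epoch in range(200):
    random.shuffle(data)           # "stochastic": visit examples in random order
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of squared error (pred - y)^2 w.r.t. w
        w -= lr * grad             # gradient step

print(w)  # converges near the true value 2.0
```

Back-propagation is this same gradient computation applied layer by layer through a deep network; the update rule is unchanged.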

Furthermore, it was suggested, to much laughter, that the most surprising and profound idea in deep learning is that it works at all. It is still not fully understood how such over-parameterised models manage to learn a function so leanly, but for now we should be impressed by the current direction! Lex then moved on to the topic he is best known for commentating on through his podcast and social media presence: autonomous driving. The segue came through a discussion of the rule-based systems used in self-driving cars, which allow for an increasing amount of reliability. That said, Lex also suggested that in its current state, the ability to surprise is not something overly welcomed. Does this really differ from humans, though?

“There hasn’t been a truly intelligent rule-based system which is reliable (in AI), however, human beings are not very reliable or explainable either.”

Lex went on to break this quote down, suggesting that while black-box accuracy is expected from AI, we also place far too much pressure on its development, especially when you consider that everyday human mistakes are forgiven when met with an excuse, yet it takes only one AI-based mishap for news articles, blogs, and media outlets (mostly outside of the industry) to write off years of development. As humans, we don’t want to know the truth; we simply want to understand. When that isn’t possible, ideas and developments are often rejected as futile. His witty remark that deep learning needs to become more like the student who studies social rules, working out what is cool and uncool to do or say, suggested that the openness and honesty of current algorithms make it easy for problems to meet a disgruntled audience.

So what’s next? A naive question perhaps, especially when development of RL systems is ongoing. However, Lex suggested that explainable AI was the next buzzword, something considered ‘sexy’ in the industry right now, even if seemingly overused. What gets one of the top minds in AI excited in 2020?

  • RL robotics / Human Behaviour Development
  • Working with Boston Dynamics and collaborating on RL
  • Self-play, which is not written about enough: one of the most important ideas in AI, and a really powerful way to explore objectives and behaviours
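To make the last point concrete, here is a minimal self-play sketch (illustrative only, not from the talk): a single tabular agent learns the subtraction game "take 1-3 objects from a pile of 10; whoever takes the last object wins" purely by playing against itself, with no external teacher. The game, learning rate, and exploration rate are all arbitrary choices for the example:

```python
import random

random.seed(1)
Q = {}  # (pile, action) -> estimated value for the player about to move

def actions(pile):
    return [a for a in (1, 2, 3) if a <= pile]

def choose(pile, eps):
    if random.random() < eps:  # explore occasionally
        return random.choice(actions(pile))
    return max(actions(pile), key=lambda a: Q.get((pile, a), 0.0))

for episode in range(5000):
    pile = 10
    while pile > 0:
        a = choose(pile, eps=0.2)
        nxt = pile - a
        if nxt == 0:
            target = 1.0  # taking the last object wins
        else:
            # Zero-sum self-play: the opponent is the same agent, so our
            # value is the negation of its best value from the next state.
            target = -max(Q.get((nxt, b), 0.0) for b in actions(nxt))
        old = Q.get((pile, a), 0.0)
        Q[(pile, a)] = old + 0.1 * (target - old)  # temporal-difference update
        pile = nxt

# The learned policy takes 2 from a pile of 10, leaving a multiple of 4,
# which is a losing position for the opponent.
best = max(actions(10), key=lambda a: Q.get((10, a), 0.0))
print(best)
```

The appeal of self-play is visible even at this toy scale: the agent's opponent improves exactly as fast as the agent does, so it discovers the game's objective structure without any hand-labelled examples.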

Keep an eye out for the video of Lex’s talk on our YouTube page!

Speaker Bio

Lex Fridman is a researcher at MIT, working on deep learning approaches in the context of semi-autonomous vehicles, human sensing, personal robotics, and more generally human-centered artificial intelligence systems. He is particularly interested in understanding human behavior in the context of human-robot collaboration, and engineering learning-based methods that enrich that collaboration. Before joining MIT, Lex was at Google working on machine learning for large-scale behavior-based authentication.

RE•WORK

Founded in 2013, RE•WORK is dedicated to promoting international research, development, application, and exchange in artificial intelligence and related fields. To date, RE•WORK has hosted more than 50 AI summits around the world, including in Singapore, Hong Kong, the US, London, and Canada.
