对话Jack Clark:谁掌握了算力,谁就掌控了AI未来发展

导语:Eye on A.I.是由《纽约时报》资深记者Craig S. Smith主持的一档双周播客节目。每一期节目,Craig都将与这一领域有影响力的人物交流,探讨广义上的机器智能新进展,思考技术发展的新意涵。

机器之心为此系列对话的中文合作方。以下为此系列内容的第二篇,Craig Smith与Jack Clark就全球AI发展进行了探讨。

Hi, this is Craig Smith with a new podcast about artificial intelligence. I’ll be talking to Jack Clark, author of the popular Import AI newsletter, about what he has learned during the previous week and why it is significant. Jack is a veteran of the British technology journal, The Register, and of Bloomberg News. He now works on policy and communications for OpenAI, the nonprofit artificial intelligence research company founded by Elon Musk. So, think of this podcast as a review of what’s happening in the world of AI, curated by one of its keenest observers.

大家好,我是 Craig Smith,这是我创建的一个关于人工智能的播客。今天与我对话的是 Jack Clark,他是广受欢迎的 Import AI 新闻通讯的作者。今天我们要谈谈他过去一周了解到了什么,以及为什么这很重要。Jack 曾供职于英国科技媒体 The Register 和 Bloomberg News,是一位资深行业老兵。他现在为 Elon Musk 创立的非营利性人工智能研究公司 OpenAI 负责政策和沟通工作。所以,你可以把今天的播客看作是对当前人工智能世界的一次概览,而且来自其中一位最敏锐的观察者。

The first thing I thought we’d talk about is this guy who wants to use reinforcement learning to trade cryptocurrencies. One question I had is why there hasn’t been more research published in this area. I know academics sort of look down on applying AI to trading, but all the big hedge funds are chasing AI engineers these days. With all the data available on the financial markets, you’d think it would be fertile ground for researchers.

Craig:今天我们要谈的第一件事是这个想使用强化学习来交易加密货币的家伙。我的一个问题是,为什么这方面没有很多研究论文发表出来?我知道学术界多少有些瞧不起将 AI用来交易的做法,但现在所有大型的对冲基金都在争夺 AI工程师。鉴于金融市场有那么多可用的数据,我不禁思考这会成为研究者的一片沃土,你觉得呢?

Jack: I mean hedge funds for many years have hired physicists, they've hired A.I. engineers, they have hired basically quantitative people, statisticians and others.

Jack:多年以来,对冲基金已经在雇佣物理学家了,他们也一直在雇佣 AI工程师,他们基本上雇佣的是做量化分析的人、统计人员等。

[01:34] They have been modeling this stuff. It's just by nature, any information you provide back to the public realm puts you at a disadvantage. I mean let's think about, you know, hedge funds like Renaissance are basically the epitome of these quant aka quantitative trading strategy shops. The difference with this and what interested me about it is that Bitcoin, Ripple, you know, these other large cryptocurrencies are early enough in their cycle that we can start to model these markets. We have lots of data about them. But they aren't so large as markets that they have these giant-like crypto algorithmic sharks in them that are trying to sort of destroy everyone that trades in them. So, it actually seems like a reasonable research platform, whereas today if you don't know much about the financial markets and you try to build an algorithmic trading bot that works in them, you'll most likely be killed almost instantly and so you can't learn much. Whereas crypto presents a kind of new burgeoning market where we can see maybe more of a DIY culture about the sort of financial bot research, which excites me and I think could be quite a big thing for broadening the knowledge about it.

他们一直在对这些东西建模。只是就其本质而言,你向公共领域提供的任何信息都会让你处于不利地位。你想想,Renaissance 这样的对冲基金基本上就是这类量化交易策略公司的缩影。这件事的不同之处、也是让我感兴趣的地方在于:比特币、瑞波币等大型加密货币还处于其周期的早期阶段,我们可以开始对这些市场建模,而且我们有大量相关数据。但这些市场又还没有大到其中已经盘踞着那种试图吞噬所有交易者的巨型加密算法「鲨鱼」。所以,它实际上是一个相当合理的研究平台;而如今,如果你不太了解金融市场,却试图构建一个能在其中运作的算法交易机器人,你很可能几乎立刻就会被消灭,从而学不到什么东西。相比之下,加密货币是一个新兴市场,我们或许能在金融机器人研究上看到更多 DIY 文化,这让我很兴奋,我认为这对扩展相关知识可能是件大事。
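
As a purely illustrative sketch of the kind of DIY financial-bot research discussed above - not Denny Britz's actual work - here is a tabular Q-learning agent trading a synthetic price series. Every number, the state encoding, and the reward scheme below are made up for illustration:

```python
import random

random.seed(0)

# Synthetic random-walk "price" series standing in for real crypto data.
prices = [100.0]
for _ in range(500):
    prices.append(max(1.0, prices[-1] + random.uniform(-1.0, 1.0)))

# State: did the last step go up (1) or down (0)?  Action: buy for one step (1) or hold (0).
q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}   # tabular Q-values
alpha, gamma, epsilon = 0.1, 0.9, 0.1               # learning rate, discount, exploration

for t in range(1, len(prices) - 1):
    state = 1 if prices[t] > prices[t - 1] else 0
    if random.random() < epsilon:                   # epsilon-greedy exploration
        action = random.choice((0, 1))
    else:
        action = max((0, 1), key=lambda a: q[(state, a)])
    # Reward: next-step price change if we bought, zero if we held.
    reward = (prices[t + 1] - prices[t]) if action == 1 else 0.0
    next_state = 1 if prices[t + 1] > prices[t] else 0
    best_next = max(q[(next_state, a)] for a in (0, 1))
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

print({k: round(v, 3) for k, v in q.items()})
```

A real trading agent would need far richer state (order books, volumes, positions) and careful evaluation against transaction costs; the point here is only that the full RL loop fits in a page, which is exactly what makes the hobbyist "DIY culture" Jack describes plausible.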

Craig: Yeah, I thought that people should band together, a bunch of AI engineers, and create an AI hedge fund and use the proceeds of that to fund further research. I mean that's kind of what Renaissance does, right? But the research they're funding is just fed back into their own hedge fund. In any case, yeah, I thought it was interesting that this guy Denny Britz is talking about it. I hadn't seen many papers about it, I hadn't seen anything referencing cryptocurrencies. Maybe I'm just not reading the right things but…

Craig:是啊,我当时就想,一群 AI 工程师应该联合起来创建一个 AI 对冲基金,并用其收益来资助进一步的研究。Renaissance 做的差不多就是这种事,只不过他们资助的研究只是回馈到了他们自己的对冲基金。不管怎样,我觉得 Denny Britz 这个人谈论这件事很有意思。我没看到多少这方面的论文,也没看到任何涉及加密货币的研究。也许只是我没读对地方……

Jack: It was the first thing I saw, also. I think you're reading the right things, and I think it's just that not much is happening yet because we're early in that market. But I'm excited to see what happens when you have a bunch of hobbyist AI researchers turn their attention towards something with a huge amount of information and a huge amount of dynamism like a financial market and get to work. I think that could lead to some really fun things.

Jack:这也是我看到的第一篇。我认为你读的方向没错,只是这方面还没发生太多事情,因为这个市场还处于早期。但我很期待看到,当一群业余 AI 研究爱好者把注意力转向金融市场这种信息量巨大、动态性极强的领域并开始动手时,会发生什么。我认为这会带来一些真正有趣的成果。

Craig: Fun and presumably lucrative.


[04:02] You also talk in your newsletter about neural architecture search, which allows researchers to quickly find the best architecture for their needs. There seems to be a lot of focus right now on using AI to automate a lot of tasks in designing AI systems - and that speeds up the process. As that continues, it just seems that it would feed on itself and more and more of the writing of AI systems would be done by AI systems and researchers will focus more and more on fresh implementations or theory.

你也在你的新闻订阅中谈到了神经架构搜索,这让研究者可以快速找到满足自身需求的最佳架构。现在似乎有很多研究者都在关注使用 AI来自动化设计 AI系统中的很多任务——这能加速设计过程。随着这方面研究的继续,似乎会实现自我增益,使得越来越多的 AI系统开发工作可以交给 AI系统完成,研究者则会越来越多地关注全新的实现或理论。

Jack: Yeah, I think that something that most people don't really realize yet is that the last five years of AI has been dominated by data. You know you always hear about how important data is and that's true. If I'm doing an application today I need data. If I'm doing a specific proprietary sort of vertical AI business, I probably need some kind of proprietary data. But what techniques like neural architecture search tell us is that the person with the largest computer can probably figure out the most efficient AI algorithm to do something with data. And regardless of how strategic data is, in the future, techniques like neural architecture search mean that the person that can wield that most effectively will essentially be the most flexible and most agile at taking whatever data they are presented with and creating some best-in-class algorithm on top. So, I think that, not only is the technique interesting in the sense it uses things like reinforcement learning to get AI to learn how to build better AI, but from an economic standpoint it's quite indicative of the sorts of economies of scale benefits you're going to get as an AI developer in the future and what it means for the larger competitive market around it.

Jack:是的,我认为大多数人还没真正意识到一点:过去五年的 AI 一直由数据主导。你总会听人说数据有多重要,这没错。如果我今天要做一个应用,我需要数据;如果我要做某种专有的垂直领域 AI 业务,我可能就需要某种专有数据。但神经架构搜索这样的技术告诉我们:拥有最大计算机的人,大概就能找出用数据做事的最高效的 AI 算法。而且无论数据多么具有战略意义,在未来,神经架构搜索这类技术意味着,最善于运用它的人在面对手头任何数据时,基本上都会是最灵活、最敏捷的,能在其上构建出同类最佳的算法。所以我认为,这项技术不仅在「用强化学习等方法让 AI 学会构建更好的 AI」这个意义上很有意思;从经济角度看,它也很能说明未来作为 AI 开发者将获得怎样的规模经济收益,以及这对围绕它的更大竞争市场意味着什么。

Craig: They can design state-of-the-art systems using less than a day’s computation on a single GPU, which represents a 1,000-fold reduction in computational cost. So, when you are talking about scale, you mean if you can do this on a single GPU and there are people with access to hundreds of GPUs, that's where the scale comes in. Is that what you mean?

Craig:他们仅用单个 GPU、不到一天的计算时间就能设计出当前最佳的系统,这相当于计算成本下降了 1000 倍。所以,当你谈到规模时,你的意思是:既然单个 GPU 就能做到这件事,而有些人能用上数百个 GPU,规模优势就体现在这里。你是这个意思吗?
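
To make the compute-scaling point concrete, here is a minimal, hypothetical sketch of architecture search by plain random sampling. Real systems such as Google's use reinforcement-learning controllers, and each candidate is actually trained on GPUs - which is where the cost goes; the `evaluate` function below is an invented stand-in for that expensive training step:

```python
import random

random.seed(0)

# Toy search space: depth and width of a feed-forward net (invented).
def sample_architecture():
    return {"layers": random.randint(1, 8),
            "units": random.choice((32, 64, 128, 256))}

# Made-up stand-in for "train the candidate network and measure validation
# accuracy" - in a real NAS system this is the step the GPUs are spent on.
# Here it simply peaks at a fictional sweet spot (4 layers, 128 units).
def evaluate(arch):
    return 1.0 / (1.0 + abs(arch["layers"] - 4) + abs(arch["units"] - 128) / 64.0)

# With N workers (GPUs) you evaluate N candidates per round; more compute
# covers more of the search space - the economies-of-scale point above.
candidates = [sample_architecture() for _ in range(100)]
best = max(candidates, key=evaluate)
print(best)
```

Because every candidate evaluation is independent, whoever owns hundreds of GPUs simply runs hundreds of these evaluations at once, which is exactly the "rich get richer" dynamic being discussed.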

Jack: Yes. What it means is that previously if I was Google or Facebook or Microsoft, I would have a few hundred GPUs lying around and I could ask them to try and figure out AI systems for me.

Jack:是的。这意味着,以前如果我是谷歌、Facebook 或微软,我手头会有几百个 GPU,我可以让它们替我摸索、寻找 AI 系统。

And now if I have one GPU lying around then it will allow me to do the same thing except if I’m Google I can now use my hundreds of GPUs to be hundreds of times more efficient than I was previously and sort of widen the gap further. So, though it's made it much more efficient for the individual developer, it has some pretty significant advantages built in for the large scale operators.

现在,即便我手头只有一个 GPU,也能做成同样的事;但如果我是谷歌,我现在可以用我那几百个 GPU 把效率比以前提升数百倍,从而进一步拉开差距。所以,尽管这对个人开发者来说效率大大提高了,但它也给大规模运营者带来了一些相当显著的内在优势。

Craig: And along the same line, [07:13] this Facebook Tensor Comprehensions that allows people to write in mathematical notation and it will in effect translate that into implementations. There seems to be this trend of automating a lot of these steps or at least streamlining a lot of these steps to allow a lot of it to be taken care of by AI systems and leave the researchers more time to research. Am I reading that right?

Craig:沿着同一思路,Facebook 的 Tensor Comprehensions 能让人用数学符号来写算法,并实际将其转译成具体实现。现在似乎有这样一种趋势:自动化(或至少简化)其中的很多步骤,把大量工作交给 AI 系统处理,让研究者有更多时间投入研究。我的理解对吗?

Jack: You are, and it's a lot like the shoes that you and I wear on our feet, right, where if you ever go to a bespoke shoemaker they will make you probably the best shoe of your life. And that's still the case in AI. If you go to a person they'll be able to make the best possible system today. But if you go to a factory, the same way that if you go to a factory for making shoes, and you give them your parameters, which in your and my case will be our foot size and some other details, and in an AI case will be the type of problem, the type of system you're running it on, and, you know, via Facebook’s Tensor Comprehensions some more specific details about the characteristics of this problem you're trying to solve, they will automate the solution to it. 

Jack:你的理解没错。这很像你我脚上穿的鞋:如果你去找定制鞋匠,他们大概能给你做出你这辈子最好的鞋。AI 领域目前也是如此:如果你去找专人,他们今天能给你做出可能最好的系统。但如果你去工厂(就像去制鞋厂一样),把你的参数给他们——对你我来说是脚的尺寸和其他一些细节,对 AI 来说则是问题的类型、你运行系统的类型,以及(通过 Facebook 的 Tensor Comprehensions)关于你要解决的问题特征的一些更具体的细节——他们就能自动化地给出解决方案。

And the really significant thing about that is that we have a relatively small number of people who are capable of doing smart stuff in AI, but we have a really, really, really large amount of computers available to us. [08:50] And this again suggests to me that we're going to enter this era of the compute monopoly, where though these organizations are making new techniques available - you know Facebook tells us about this; Google tells us about neural architecture search - if you don't have the underlying infrastructure you're going to struggle to implement this and struggle to sort of develop it. 

而真正重大的一点是:能在 AI 领域做聪明事的人相对很少,但我们可用的计算机却非常非常多。这再次让我觉得,我们即将进入一个「计算垄断」的时代——尽管这些组织机构在公开新技术(Facebook 告诉了我们这项技术,谷歌告诉了我们神经架构搜索),但如果你没有底层基础设施,你将很难实现这些技术,更难进一步发展它们。

One point I need to make, to be sure I'm being clear about Facebook, is that they are releasing this general automated algorithm tool, and I think that's great. If you're an individual developer, this means you don't need to design the really, really fine-grained stuff that maximizes your performance in an industrialized way. But I think it is indicative of how, just as in the Industrial Revolution, the large compute companies of this era are automating themselves and are experimenting with new ways of arranging their workloads - except that rather than people being automated, it's now automating discrete software programs into larger automated end-to-end systems. And that to me is very new and very significant.


Craig: Yeah. What struck me about this in particular was moving from a two-step process where you have a researcher who writes something out in mathematical notation and then gives it to an engineer who codes for GPUs or CPUs, you have the ability for a researcher to write in mathematical notation and have that automatically coded and implemented for a GPU or CPU.

Craig:我觉得这尤其值得关注的一点是脱离了原来两步式的过程,即先由研究者用数学符号写出某些东西,然后再交给工程师写成在 GPU或 CPU上运行的代码。现在只需要让研究者写出数学符号,然后就能自动编写用于 GPU或 CPU的实现代码。
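
The idea of collapsing that two-step process can be caricatured in a few lines. Tensor Comprehensions accepts notation close to the math, e.g. `O(i, j) +=! A(i, k) * B(k, j)`, and generates the GPU kernel; the toy below (not TC's actual compiler, and pure Python rather than GPU code) "compiles" an index-notation string like `"ik,kj->ij"` into the implied nested loops:

```python
from itertools import product

# A toy "compiler" for two-matrix index notation like "ik,kj->ij":
# parse the spec, infer each index's range from the matrix shapes,
# then run the implied nested loops. (Illustrative only - Tensor
# Comprehensions generates optimized GPU kernels, not Python loops.)
def einsum2(spec, A, B):
    inputs, out = spec.split("->")
    a_idx, b_idx = inputs.split(",")
    dims = {}
    for letters, mat in ((a_idx, A), (b_idx, B)):
        dims[letters[0]] = len(mat)
        dims[letters[1]] = len(mat[0])
    sum_idx = "".join(l for l in dims if l not in out)
    result = [[0.0] * dims[out[1]] for _ in range(dims[out[0]])]
    for values in product(*(range(dims[l]) for l in out + sum_idx)):
        env = dict(zip(out + sum_idx, values))
        result[env[out[0]]][env[out[1]]] += (
            A[env[a_idx[0]]][env[a_idx[1]]] * B[env[b_idx[0]]][env[b_idx[1]]])
    return result

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
O = einsum2("ik,kj->ij", A, B)   # ordinary matrix multiplication, written as math
print(O)                         # → [[19.0, 22.0], [43.0, 50.0]]
```

The researcher writes only the subscript string - the notation - and the library decides how to execute it, which is the one-step workflow Craig is describing.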

[10:36] Are we moving to the day where you will not even need to write in mathematical notation, that you will be able to write in natural language and that that will be translated into code?


Jack: I wish we were in that day now, but we are some distance from it. Like where we are now is more like I can write a somewhat less alien programming language and I can automate the sort of underlying system to a high degree and that's what you're seeing with Tensor Comprehensions and to some extent with these neural architecture search techniques. I think that language is going to be one of the last things that we get really, really, really good at before we have fundamental breakthroughs in the fields that people call artificial general intelligence. So as hopeful as you are, Craig, I really wouldn't hold your breath here. I actually think it's going to sadly take a while before you or me can specify natural language stuff that it's able to do. And the reason why that is that it needs to map your language to a very specific subset of technical commands which it then needs to map to a broad set of sort of technical implementation terms. And language is so unrestricted and so broad, even relative to programming languages that we use for computers, that I think it's going to be one of the last things we do.

Jack:我也希望我们现在就到了那一天,但还有些距离。我们现在的水平更像是:我可以用一种没那么「外星」的编程语言来写,并在很大程度上自动化底层系统——这就是 Tensor Comprehensions,以及某种程度上神经架构搜索技术正在做的事。我认为,在人们所说的通用人工智能领域出现根本性突破之前,语言将是我们最后才能真正做好的事情之一。所以,Craig,虽然你满怀希望,但我真的劝你别抱太大期待。很遗憾,我认为还需要相当长的时间,你我才能用自然语言指定想让计算机做的事。原因在于,系统需要把你的语言映射到一个非常特定的技术指令子集,再把后者映射到一大批技术实现术语上。而即便相对于我们给计算机用的编程语言,自然语言也实在太不受限、太宽泛了,所以我认为这会是我们最后实现的事情之一。

Craig: [12:16] We’ve spoken before about the rise of Evolution Strategies as an alternative to reinforcement learning. You highlight Google’s research that shows the results of the two techniques converging when they are extended out far enough. Can you talk a little bit about that?

Craig:我们之前谈到过进化策略(Evolution Strategies)的兴起,它正成为强化学习的一种替代方法。你重点谈到了谷歌的一项研究,该研究表明,当把这两种技术推进得足够远时,它们的结果会收敛到一起。你能谈谈这方面的研究吗?

Jack: Evolution is all around us. Evolution definitely works in the sense that you and I are talking to each other via technology that we invented and we both evolved from slugs. So, we had some pretty compelling evidence that over a long enough time scale evolution produces significant things. Reinforcement learning has weirdly more of a theoretical basis, more of a reason to believe in it and more of a reason to be able to easily develop it. And what this Google research shows is that, if you have a very, very, very large number of computers, a large enough amount to leverage the distributed intelligent nature of evolution, then evolution will converge to a higher accuracy amount faster than reinforcement learning. So, if you run the computers for a long enough amount of time reinforcement learning and evolution strategies attain something corresponding to parity. And that's pretty interesting.


And that again comes back to a point that you and I were talking about earlier, which is about compute. And evolution strategies, you can view as a big sort of dumb tool that gets its power from compute and reinforcement learning as a slightly more specific tool that gets its power from certain priors baked into the algorithm that help the algorithm efficiently learn over the information that is provided. And what you find is that, if I'm in a regime where I have an infinite number of computers, then evolution strategies does just as well if not marginally better than reinforcement learning. Now that doesn't mean that reinforcement learning researchers are about to become unemployed but it does mean that they're going to need to think quite carefully about how they can surpass this and in what regimes you need to have to the sort of priors that an RL algorithm has, whereas in other regimes they may be so economically valuable or so critical that you're just happy to throw a shitload of computers at them and run evolution over them instead.
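
For readers unfamiliar with evolution strategies, the core loop really is the "big dumb tool" Jack describes - and it is embarrassingly parallel, which is why it rewards whoever has the most computers. A minimal sketch on a toy one-parameter objective (heavily simplified from work like OpenAI's ES research; all numbers here are invented):

```python
import random

random.seed(0)

# Toy objective: maximize -(theta - 3)^2, so the optimum is theta = 3.
def reward(theta):
    return -(theta - 3.0) ** 2

theta = 0.0                       # a single parameter, for clarity
sigma, lr, pop = 0.5, 0.03, 50    # noise scale, step size, population per iteration

for _ in range(300):
    noise = [random.gauss(0.0, 1.0) for _ in range(pop)]
    rewards = [reward(theta + sigma * n) for n in noise]
    mean_r = sum(rewards) / pop
    # Gradient estimate: perturbations weighted by their (centered) reward.
    grad = sum((r - mean_r) * n for r, n in zip(rewards, noise)) / (pop * sigma)
    theta += lr * grad            # each of the `pop` evaluations could run on its own machine

print(round(theta, 2))            # converges near 3.0
```

No backpropagation, no priors about the problem - just "perturb, score, and move toward what scored well." Each of the `pop` reward evaluations is independent, so with more machines you simply use a larger population, which is the compute-buys-progress regime being discussed.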


Craig: And what did you mean when you said that larger computer operators will be able to explore potentially dangerous use cases earlier, giving them an advantage?


Jack: What I mean is that we're entering the era where we can design AI agents that can take actions in the world, as well as just sort of passively observe it. The whole endpoint of reinforcement learning is to design AI agents that can be installed on, you know, robots or cars or drones or what have you, that can do things. They know how to do things, they know how to generalize their actions, and it tells us that people with the biggest computers can probably train the most advanced AI agents to take actions in the world. And so they are going to become aware quicker than other people about whether these things are dangerous or not. They're going to be able to learn how capable a drone is that can, say, follow a person running around a forest and target them. You know, that's something that you're going to learn if you have a bigger computer. And that doesn't necessarily mean you have to use evolution strategies or you have to use reinforcement learning. But one of the messages of this research is that if you have larger computers you can explore this strange frontier more.

Jack:我的意思是,我们正在进入这样一个时代:我们可以设计出既能被动观察世界、也能在世界中采取行动的 AI 智能体。强化学习的最终目标就是设计出可以装到机器人、汽车、无人机等各种东西上、能够做事的 AI 智能体。它们知道如何做事,知道如何泛化自己的行动。这告诉我们,拥有最大计算机的人,大概就能训练出最先进的、能在世界中行动的 AI 智能体。因此,他们会比其他人更快地了解这些东西是否危险。比如说,他们能了解到一台可以跟踪在森林里奔跑的人并瞄准他的无人机到底有多强。你知道,这是你拥有更大的计算机才能了解到的东西。这并不一定意味着你必须使用进化策略或强化学习。但这项研究传递的信息之一是:如果你有更大的计算机,你就能更多地探索这片陌生的前沿。

Craig: But the point that experiments like this suggest - that large compute operators will be able to explore potentially dangerous use cases earlier - doesn't necessarily have anything to do with evolution strategies versus reinforcement learning. It's just that these guys showed that with 450 GPUs they can see down the road that these two things converge. And so, with that kind of compute, they will be able to see other things down the road faster than someone that doesn't have that much computing power.

Craig:但是,「这类实验表明大型计算运营者能更早探索潜在危险用例」这一点,并不一定与进化策略或强化学习之争有关。只是这些人表明,用 450 个 GPU,他们能预先看到这两种方法最终会收敛。因此,拥有那种规模的算力,他们就能比算力不足的人更快地看清前方的其他东西。

Jack: It’s sort of a chicken and egg. Like if I had a very large number of computers, then it's very reasonable for me to test evolution strategies versus reinforcement learning. If I have a small number of computers, I'm going to use the area that has the greatest theoretical justification, which is, to some extent, RL versus evolution - which I know requires sort of larger quantities of computers to get the initial performance but may converge faster than RL. And it tells you that you're in this kind of rich-get-richer world where the people with the largest computer can probably do the more impactful experiments about the underlying theoretical constraints of the scientific discipline they're working in.


Craig: Yeah. [17:41] And I thought it was interesting what you said about AI research diverging into low compute and high compute domains; people who can run these massive experiments and then there are others that are stuck with less computing power who can still do things with reinforcement learning, for example, but can't project as far forward because they don't have the computing power.  

Craig:是啊,你说 AI 研究正在分化成低算力和高算力两个领域,我觉得这很有意思。有些人能运行这些大规模实验;另一些人则受限于较少的算力,虽然仍能用强化学习等方法做些事情,但因为算力不够,无法把研究推演得那么远。

Jack: It's that and it's also a way to explore fundamental rules that are completely not obvious until you break them. And a really good example is, until you break the sound barrier, it’s not obvious what the sound barrier is. But when you break it, it's very obvious because there's a giant sound and there’s a visual appearance. And we know that in most things in a world there are scaling laws and there are what’s called phase changes, right, where you go through a transition in the system. And these transitions are usually a consequence of some kind of energy that you're inputting into the system. And so all this tells us is that the larger my computer, the higher my chance of finding these phase change boundaries, which will usually tell me about, or at least provide helpful pointers to, understanding the underlying sort of theoretical constructs from which I'm deriving information. Because if I know the kinks in it, or the transition points, it’s easier for me to work out the underlying theory that justifies it all.


Craig: Let’s move on to some interesting applications, particularly in the world of healthcare. You talk about one study that frankly amazed me: [19:24] This study that took data from wearable devices like Fitbit and Apple Watch and showed a high accuracy at detecting things like diabetes and high cholesterol, high blood pressure, sleep apnea.

Craig:让我们来谈谈某些有趣的应用,尤其是医疗领域的应用。你谈到了一个让我很惊奇的研究。这个研究从 Fitbit 和 Apple Watch 等可穿戴设备获取数据,在检测糖尿病、高胆固醇、高血压、睡眠呼吸暂停等疾病时能达到很高的准确度。

That really surprised me because it doesn't seem to me that something as simple as an Apple Watch or Fitbit - simple in terms of the kind of data it's collecting - would be enough to make those kinds of diagnoses. But the probabilities were very high, 80 percent, 75 percent. Was this groundbreaking? Have there been other studies showing that such skimpy data can provide that kind of diagnosis? And if so, how long before we see commercialization of this kind of thing? Because it sounds fantastic.

这确实让我惊讶,因为在我看来,像 Apple Watch 或 Fitbit 这样简单的设备(就其收集的数据类型而言很简单)似乎不足以做出那样的诊断。但准确率非常高,80%、75%。这算是突破吗?是否还有其他研究表明,这么简略的数据也能支持这样的诊断?如果有,这种听起来非常棒的东西还要多久才能商业化?
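
To show the shape of the wearables task - and only the shape; the actual study used rich sensor sequences and deep networks - here is a hypothetical logistic-regression sketch on synthetic heart-rate features. The class offsets (elevated resting heart rate, reduced variability) are invented for illustration:

```python
import math
import random

random.seed(0)

# Synthetic "wearable" features: resting heart rate and heart-rate variability.
def make_person(has_condition):
    hr = random.gauss(75, 5) + (8 if has_condition else 0)
    hrv = random.gauss(50, 8) - (12 if has_condition else 0)
    return (hr, hrv), 1 if has_condition else 0

data = [make_person(i % 2 == 0) for i in range(400)]

MEANS, SCALES = (75.0, 50.0), (5.0, 8.0)            # crude feature normalization

def score(x, w, b):
    return sum(wj * (xj - m) / s for wj, xj, m, s in zip(w, x, MEANS, SCALES)) + b

# Plain logistic regression trained by stochastic gradient ascent.
w, b, lr = [0.0, 0.0], 0.0, 0.01
for _ in range(200):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-score(x, w, b)))
        for j in range(2):
            w[j] += lr * (y - p) * (x[j] - MEANS[j]) / SCALES[j]
        b += lr * (y - p)

correct = sum(int((score(x, w, b) > 0) == (y == 1)) for x, y in data)
print(correct / len(data))   # training accuracy, well above chance on this toy data
```

Even with two crude features a linear model separates the classes fairly well here; the study's point is that real wrist data plus a deep sequence model pushes this much further, to conditions like diabetes and sleep apnea.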

Jack: So, a rule that we keep on finding with deep learning systems is that they are able to discover things that were not obvious to professionals before. I can give you a very tangible example. Just this week, Google published a new paper that showed that by looking at scans of people's eyeballs, it was better able to predict certain cardiological problems than doctors. 


And that was because its model had learned an interesting correspondence present in the dataset of people's eyeballs it was shown, which somehow correlated to the underlying health of their hearts. And that's obviously a very strange thing to understand, right? You know, it's not obvious how they connect.


However, I think that was pretty compelling evidence that they do connect, and that system found a connection. And to me this study was very similar, where you're using somewhat crude data to learn about people's sleep apnea. You're reading enough information from their wrists, somewhat similar to learning about people's hearts by looking at their eyeballs. And the aggregate data - this sort of unprecedentedly large group, in the sense that getting very large medical study groups together is a massive pain currently, and they were able to sort of trivially get thousands of people to donate data - shows these interesting correspondences. So, I think it is significant, and it tells us about a new sort of scientific medicine practice that's going to emerge as a consequence of this convergence of AI and big data and deep learning.

但是,我认为已有相当有说服力的证据表明它们确实相关,而且该系统找到了这种联系。在我看来,这项研究与之非常相似:用相当粗糙的数据来了解人们的睡眠呼吸暂停等问题。从手腕上读取足够多的信息,有点像通过观察眼球来了解人的心脏。而且考虑到聚合起来的数据——这是一个前所未有的大型研究群体,要知道目前把大规模医学研究群体组织到一起是件非常麻烦的事,而他们几乎不费力就让数以千计的人捐赠了数据——就能展现出这些有趣的关联。所以,我认为这意义重大,它预示着随着 AI、大数据和深度学习的融合,将出现一种新的科学医疗实践。

Now how soon does it get productized? I am an Englishman living in America and I don't understand your healthcare system. I don't understand any aspect of how it works and I don't see great incentives for it to save my life cheaply. So, I don't know how quickly it gets productized. I wish it was quick, but it's not obvious to me how that happens quickly, because the meta system that it's happening within doesn't seem to necessarily have the right incentives to make me live longer.


Craig: That's interesting. This sounds so promising; there have been so many of these health-related AI papers, and a lot of them have been implemented in computer vision - scanning for malignant lesions and things like that. But this one, because of the simplicity or crudeness of the data it's collecting and because of how widespread these wearable devices are, seems to really have the possibility of revolutionizing the diagnosis of certain kinds of illnesses - I mean things like high cholesterol and diabetes.

Craig:很有意思。这听起来非常有前景。已经有很多与医疗相关的 AI 论文,其中很多已应用在计算机视觉扫描上,比如扫描恶性病变之类。但这一项研究,由于它所用数据的简单和粗糙,再加上这些可穿戴设备如此普及,看起来确实有可能为某些疾病(比如高胆固醇和糖尿病)的诊断带来变革。

The crudeness of the data that it's working off of - or, as you said, looking at the eyeballs and detecting heart trouble - reminds me that in traditional Chinese medicine, there are two principal ways a doctor diagnoses a patient: one is measuring the patient's pulse, which is analogous to a Fitbit, and the other is looking at the patient's tongue. And I love this other paper that you highlighted about [23:45] scientists mapping herbal prescriptions to tongue images. They had 10,000 pictures of tongues and they mapped them to the herbal prescriptions that had been given to those patients.

这种数据的粗糙程度,或者像你说的通过检查眼球来发现心脏问题,让我想到:在中国传统医学里,医生诊断病人主要有两种方式,一是号脉(这和 Fitbit 有些类似),二是看病人的舌头。我还很喜欢你提到的另一篇论文:科学家把中药处方映射到了舌头图像上。他们有 10000 张舌头照片,并把这些照片与开给这些病人的中药处方进行了映射。

I mean it sounds funny but just as with the eyeball thing, the presumption is that in Chinese traditional medicine there is something to this tongue diagnosis. With AI and with enough data you can start seeing whether there are correlations. In this case what bothered me was that they were correlating the tongues to different herbals prescriptions, not to different illnesses.

这听起来很有趣,但就像眼球那项研究一样,前提假设是中医的舌诊确实有些道理。借助 AI 和足够多的数据,你就可以开始检验其中是否真的存在关联。这个案例让我困扰的是,他们是把舌头关联到不同的中药处方,而不是不同的疾病。
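
The tongue-to-prescription mapping being described can be caricatured as "retrieve the prescription given to the most similar patient." Below is a hypothetical nearest-neighbor sketch; the real paper learned a model over 10,000 images, and every feature, dataset, and prescription id here is invented for illustration:

```python
import random

random.seed(0)

# Invented image features: stand-ins for color/texture statistics of a tongue photo.
def tongue_features(redness, coating):
    return (redness, coating)

# Toy "training set": (features, prescription-id) pairs; ids 0-4 are fictional formulas.
dataset = [(tongue_features(random.random(), random.random()), random.randrange(5))
           for _ in range(100)]

def predict(features):
    # Retrieve the prescription of the most similar patient (squared Euclidean distance).
    nearest = min(dataset,
                  key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], features)))
    return nearest[1]

rx = predict(tongue_features(0.8, 0.2))
print(rx)   # one of the five fictional prescription ids
```

Note that this setup makes Craig's objection visible: the target variable is the prescription historically given, not a verified diagnosis, so the model can only reproduce prescribing habits, not validate them.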

So just because certain colorations or textures of the tongue tend to get the same herbal prescription doesn't necessarily mean that there is an underlying pathology there that that is being accurately diagnosed.


Jack: Correct. I wouldn't take a prescription that the system gave me at all, I'd run in the opposite direction. However, the fact that you have scientists who are mostly specialists in any sort of medicine looking at using these systems for better diagnosis - the main point to me is that these things have reached such an obvious point of, not only utility, but usability, that you're bringing in really, really fringe stuff at an increasing rate. That to me is the interesting thing. Like their methodology may be somewhat bogus. I suspect that it is. But their ability to access large amounts of data is quite good and their intuition which is, we should sort of gather this data and see if we can learn sort of mappings that we can then fortify, is also good.


And it suggests to me that for a lot of medicine that has less of a scientific basis, at least in Western scientific medical traditions, we may see weird deep learning systems emerge which show correlations that run counter to the beliefs of the scientific establishment. And that to me is why papers like this are interesting, because I see them as the first signs of people starting to experiment with this. And the reason why they're able to experiment with these sorts of technologies is that they have become available enough and simple enough to use that you can use them in these domains that are sort of somewhat non-standard.


Craig: Yeah. This stuff is moving so quickly that it feels literally months, certainly not many years, away before you take a picture of your tongue, or you send your Fitbit feed, or you take a picture of the boil on your leg, and get back a diagnosis very quickly.

Craig:是啊,这方面发展得太快了,感觉真的只需要几个月、肯定用不了很多年,你就可以拍一张舌头照片,或发送你的 Fitbit 数据流,或者拍一张腿上疖子的照片,很快就能得到诊断结果。

Jack: I mean bring it on. You know, I am so ready for that moment, aren’t you?


Craig: Yeah absolutely. Given health care costs in the United States.


Well, that’s it for this week’s podcast. Thanks again, Jack, for making the time. For those of you who want to go into greater depth about the things we talked about today, you can find a transcript of this show in the program notes along with a link to Jack’s newsletter. I encourage you to subscribe. Let us know whether you find the podcast interesting or useful and whether you have any suggestions about how we can improve.

好,这就是本周的播客了。Jack,再次感谢您抽出时间。如果你想要更深度地了解我们今天所谈的内容,你可以在 https://www.eye-on.ai/找到本节目的转录文本。希望你也能订阅 Jack的新闻源:https://jack-clark.net/。你觉得本期播客有哪些你感兴趣或觉得有用的内容,你是否有帮助我们改进节目的建议,请与我们分享。

The singularity may not be near, but AI is about to change your world. Pay attention.

