Can Academic AI Research Be Quantified?

MAURIZIO DI PAOLO EMILIO: Ciao! This is Maurizio Di Paolo Emilio, editor of Power Electronics News, coming to you directly from Italy, and this is EE Times on Air…

BRIAN SANTO: …and I’m Brian Santo, EE Times Editor in Chief, and you're listening to your Briefing for the week ending December 6th.

And the role that universities play in new technology development is pretty well established. Or at least it was, until artificial intelligence came along. AI is going to be a challenge for academic researchers for several reasons. Furthermore, when AI is set to do some of the actual development work, that threatens to have some profound ramifications.

EE Times and our sister publications publish occasional Special Projects. These are in-depth reports that examine a particular aspect of the electronics industry from multiple angles. Our most recent special report is on artificial intelligence and engineering education. Two of the major contributors to this set of stories are EE Times editor George Leopold and Loring Wirbel, who was on the staff for a very long time and who still contributes to the magazine. They reported that AI and engineering education can be a fitful match. To explain why, I asked Loring what the typical relationship between industry and academia had been.

So, Loring, first of all what I want to do is have you explain what the typical process is, the typical loop and relationship between industry and academia, when a new technology has been developed.

LORING WIRBEL: Well, you know, I think for at least six or seven decades since the end of World War II, you had both a compute industry itself and then internet development-- organized through federal agencies like DARPA or just the Pentagon in general earlier on-- always looking for proving grounds through academia. And from there, you had the National Science Foundation and others on the civilian side of the street always looking to steer engineering and computer science development in certain directions. And I think there was a mutual benefit to be established throughout that period of time, because the universities could kind of take that research in the direction they thought most applicable. And the federal government usually found that the universities could determine what was necessary even better than the feds could.

What started to happen over time is that there became a process inherent in both the universities and the nonprofit corporations to not just throw money at a problem, but to have quantifiable deliverables on a quarterly basis, so that if people were going to get more money, they had to show exactly what they were doing. And I think that became the norm, say, in DARPA (the Defense Advanced Research Projects Agency), IARPA, the Homeland Security ARPA, and all these others to say, No, this isn't just free bucks that are shoveled into your face. You have to prove your quantifiables and show your deliverables.

BRIAN SANTO: Right. So we have the feds and to some extent industry working together, saying, here are some contracts for the development that you do, and there's also an element of once you have these research and development operations going on on campus, you've got college students learning, becoming some expert in these disciplines, and moving into positions at companies and in the government and those agencies that oversee that technology. So it's not just the research and development, you're also developing expertise, right?

LORING WIRBEL: Right. And I think that there was a certain amount of propaganda that started really pumping up around the turn of the millennium to say that the United States was falling behind in science and engineering. A certain amount of that was true, as compared to both China and some of the European nations. But that fueled the cheerleader squad for STEM-- Science, Technology, Engineering, Mathematics-- kinds of education that applied to both high school and college levels, saying that everything had to be oriented to STEM. Well, you know and I know it's like testing at the high school level. Once something becomes a propaganda element, it's overemphasized, overdefined, and a lot of times it causes more trouble than it's worth.

BRIAN SANTO: So now we come to artificial intelligence and the special project that you and George and several of our other colleagues were involved with. And the issue there is that AI is so different that even the imperfect rules that we had before with perhaps an overemphasis on STEM may not apply. That's what our premise is here, right?

LORING WIRBEL: I think that one of the issues with AI is that the rules of what is definable or quantifiable are so different than anything the world has seen before in two ways. I think I mentioned at the start of one of the articles that there is no model for moving from machine language to higher level languages in software development as there used to be. There isn't an object-oriented type of modular language that exists above a higher level compiled language. We're moving to an area where there is no underlying language anymore, and so software development may just kind of go the way of all saints.

Then you have the other problem, that the goals for defining what is an advancement in high level deep learning are getting higher and... harder and harder to find. When neural network simulation was taking place just on a single chip basis, then it became just a product of semiconductor research. You saw a lot of work at Cal Poly and everything, both processors and analog. But after that, it became an issue of how we define trained and untrained learning for neural networks on a system level. And that's where all this problem of an undefinable result from a set of problems emerged, and that was what drove all the bean counters at universities crazy. Because really, most neural networks researchers couldn't explain what the hell they were doing.

BRIAN SANTO: And as a practical matter, that makes it difficult for industry and the government agencies that are looking for this R&D to define what the deliverables are, and it makes it harder for the university research operations to actually deliver on what supposedly the demand might be. Is that correct?

LORING WIRBEL: Well, there's a little bit of a difference in that I think that mathematicians and scientists have made very clear within DOD and the National Science Foundation and others that there is no such thing as an entirely definable set of human knowledge. There were a few engineers that responded to the original article, saying, Hey, are neural network researchers just lazy? Because I think that.... Seriously! Engineers want to believe that if you have defined all inputs and outputs and know the architecture fairly well, that you should be able to describe the target of what you're looking at in a way that can be understood even by the neophyte.

But in the case of deep learning, it's kind of like mathematicians dealing with Kurt Gödel's Incompleteness Theorem. People are used to saying that we don't know what this problem... there are a certain set of problems where we don't know if we can ever know the answer. It was like that old Bush Administration thing about, we don't know what we don't know or whatever. Scientists and mathematicians are used to that. They're used to the idea that-- you know, from quantum mechanics and everything-- that you don't know the answer to all the problems. That drives engineers crazy, and it also drives academic bean counters crazy.

GEORGE LEOPOLD: Do they know what they don't know?

LORING WIRBEL: I would say they're getting closer and closer to understanding how to define a vertical application for deep learning and what is going on inside the multi-layered neural network. The problem is, they can't really explain it to the neophyte. And they can't even explain it to a lot of other neural network researchers who have deep knowledge of what those hidden layers are. Because no one knows what the neural network is trying to do within those hidden layers, it becomes impossible to explain it to others.
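For readers who want to see what that opacity looks like in practice, here is a minimal sketch in plain Python, with made-up random weights standing in for a trained model (it is an illustration, not anything from the researchers quoted here): the hidden activations are trivial to compute and print, but the raw numbers say nothing a human could use to explain why a given input maps to a given output.

```python
import math
import random

# Minimal sketch of hidden-layer opacity: a tiny two-layer network with
# random stand-in weights. The hidden activations are easy to compute and
# inspect, yet the raw numbers explain nothing about why the network maps
# a given input to a given output.

random.seed(0)

N_IN, N_HIDDEN, N_OUT = 4, 5, 2
W1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HIDDEN)]
W2 = [[random.uniform(-1, 1) for _ in range(N_HIDDEN)] for _ in range(N_OUT)]

def layer(weights, inputs):
    """One fully connected layer with a tanh nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in weights]

x = [0.2, -0.7, 1.0, 0.05]   # an arbitrary input vector
hidden = layer(W1, x)        # the "hidden layer" no one can explain
output = layer(W2, hidden)

print("hidden activations:", [round(h, 3) for h in hidden])
print("output:", [round(o, 3) for o in output])
```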

BRIAN SANTO: So what are the practical implications? Does this make it harder for academic organizations to create AI labs and do AI research and expect to get contracts for what they can deliver? Does this make it more difficult to develop talent? Are there other practical ramifications for this new stuff that we haven't seen before?

LORING WIRBEL: Sure. I think there's kind of a set of steps you have to follow. Number one, I really like what the Allen Institute did with their Green AI promotion. Not so much just for the environmental aspect of it-- although that can be important-- but they said universities should stop trying to get involved with what you might call "brute force AI." Because you've always got the Amazons and the Facebooks and everybody that can just throw city-wide power supplies-- we're talking about the power supply of a small city-- and compute resources toward solving a brute force problem. Instead, a university should look at how to best optimize an AI problem so that they can do so with low floating point operations and low uses of utility resources and compute resources.

And number two, you should look at a problem where you're not trying to use a training set that uses every photograph ever taken of a human face on the planet, or something with an astonishingly large training set, which could get you into privacy problems and scaling problems.

And number three, try to define a problem where you minimize the area of deep learning where you say, This is all a black box, and mysterious, magical stuff happens inside the black box that we can't describe. There will always be a black box of a certain size that cannot be described, but your financial bean counters within the university will appreciate it if you make the black box area as small as possible.
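To put rough numbers on Loring's first and third points, here is a minimal back-of-the-envelope sketch in Python. The layer sizes are hypothetical and chosen only for illustration (they are not drawn from the Allen Institute's Green AI work): it compares the per-inference floating-point cost of a brute-force end-to-end network against a pipeline that does most of its work in hand-written, explainable features and keeps the learned black box deliberately small.

```python
# Back-of-the-envelope FLOP budget, in the spirit of the "Green AI" advice:
# prefer a small, well-bounded learned component over brute-force scale.
# All layer sizes below are hypothetical, chosen only for illustration.

def dense_flops(layer_sizes):
    """Approximate FLOPs for one forward pass through fully connected layers
    (one multiply plus one add per weight)."""
    return sum(2 * n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Option A: brute-force end-to-end model over raw input.
brute_force = dense_flops([4096, 2048, 2048, 10])

# Option B: 32 hand-written, fully explainable features feeding a tiny
# learned classifier -- the "black box" is just this last stage.
small_black_box = dense_flops([32, 16, 10])

print(f"brute force:     {brute_force:,} FLOPs per inference")
print(f"small black box: {small_black_box:,} FLOPs per inference")
print(f"ratio:           {brute_force / small_black_box:,.0f}x")
```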

BRIAN SANTO: What we started hearing about is artificial intelligence, neural networks, being able to program themselves. I think we've heard folks like Mark Cuban and a bunch of other people thinking, Well, you know, AI is going to undercut the need for programmers. Is that going to affect how many people we need to educate and how we educate them?

LORING WIRBEL: Well, first of all, there will always be humans necessary to set the parameters of what a deep learning platform should do and not do. Because we've all heard about instances where a Microsoft AI program went off and wrote poetry without being asked. And then another instance where that same Microsoft platform started making very racist comments. And as politically correct as universities are these days, you want to be sure... make sure you set the parameters so your AI platform doesn't turn into a screaming racist or a surrealist poet whom no one can understand. And both of those possibilities are very real. I mean, this is not just a joke! This is a real thing!

Then it sounds like the Sorcerer's Apprentice, because you got a bunch of humans trying to tell a deep learning platform what it can do, and the deep learning platform is going to say, Ha ha, silly humans! I can get beyond that.

BRIAN SANTO: So George, we haven't given you a chance to chime in here.

GEORGE LEOPOLD: Yeah. Well, I can sum up my take on what's happening at the community college level with the following quote, which I think plays into what Loring was talking about: "Who will repair the robots when they break down?" So that's what's happening at the community college level, whether it's technical training or certification. If you get certified, then you get promoted. And you get a job. And maybe you can get into-- in the case of our story-- the California State system. Although the people we talked to said they get 100,000 applications for 10,000 slots in the California university system. And that's not even counting Berkeley.

So it's very different. But they're in the process of trying to meet industry halfway in terms of, What are the skills you need? That's what we'll teach. You've got to meet us halfway. You've got to provide mentors and so forth. And so they're wrestling with these problems at sort of a lower level, but I think just as important.

Loring, what about intellectual property? If you're getting federal funding and you come up with something at the university level, any ideas as to who gets to keep the intellectual property? Because that's a big deal at universities now.

LORING WIRBEL: What if no one can identify where this IP originally came from, and you only find out after the fact that you're screwed? You know, I think that that is something that has not been talked about a lot, and I get this feeling that within the next couple of years, we are going to run into major, major IP problems that... it's probably going to blow the Digital Millennium Copyright Act out of the water. The DMCA will be seen as totally useless for defining intellectual property rights on any kind of trademark or copyright basis. And that has yet to happen, and I think it's going to.

That’s your Weekly Briefing for the week ending December 6th. This podcast is produced by AspenCore Studio. It was engineered by Taylor Marvin and Greg McRae at Coupe Studios. The segment producer was Kaitie Huss.

The transcript of this podcast can be found on EETimes.com, complete with links to the articles we refer to, along with photos and video. You can find a new episode every Friday on our web site, or via your favorite app for podcasts. I’m Brian Santo. See you next week.

 

Thank you for listening to this episode. EE Times On Air is now also available on Ximalaya and Qingting FM. You're welcome to subscribe and listen!