BRIAN SANTO: I’m Brian Santo, EE Times Editor in Chief, and you're listening to EE Times on Air. This is your Briefing for the week ending August 2nd.
We want the Internet of things to be smart, but being smart requires processing power – which will be lacking in millions of IoT devices. It’s what we call in the business “a conundrum.” But – there may be an answer! You’ll hear what that is.
As we reported last week, the biggest companies in the world are beginning to compete with their own chip suppliers. The latest example is Alibaba, which just released a high-performance processor of its own design. Alibaba’s move is significant for technological, financial, and political reasons. We’ll look into that.
Also, you’d think that the people building autonomous vehicles are using sound design principles.
JUNKO YOSHIDA: Those with an IT background who have grown up in the "move fast and break things" culture don't necessarily do that. They tend to go for alternate approaches.
BRIAN SANTO: “Alternate approaches.” You are going to want to hear the rest of this. We’ll get to that in a minute.
First – the Internet of things is going to lead to a world that is smarter. We’ll be installing sensors farther and farther away from data centers – along highways to make driving safer, into farm fields to monitor how our food grows, into remote areas to track weather patterns, and much, much more. Adding intelligence has always meant adding more processing, which also means drawing more power – but the vast majority of the devices we install in these remote areas – at the farthest edges of the network – will, by necessity, lack sophisticated processing capabilities and will be very low-powered. How to reconcile that?
Sally Ward-Foxton is one of our correspondents in London. She keeps on top of trends in artificial intelligence for EE Times. In a recent story, Sally wrote about a group of researchers looking into ways to distribute AI at the network edge. They call their approach to machine learning “TinyML.” International correspondent Junko Yoshida caught up with Sally.
JUNKO YOSHIDA: Let's go back to the basics here. I want you to explain what's TinyML, and what is it for?
SALLY WARD-FOXTON: So TinyML stands for Tiny Machine Learning. Not just for Edge devices, but for devices at the Very Edge. So machine learning's already in Edge devices. If you use the Facebook app on your phone, you're already running machine learning inference on your phone. So what we're talking about here is machine learning at the Very Edge: things like ultra-low power sensor nodes, gadgets that use energy harvesting, or situations where there's barely any power available at all.
As far as defining TinyML, at a recent meeting of the TinyML Group, one of the speakers, Evgeni Gousev from Qualcomm, defined TinyML as "machine learning approaches that consume less than a milliwatt." "In Qualcomm's experience," he said, "the milliwatt really is the magic number for applications in a smartphone that are always on." So under a milliwatt is what we're aiming for. And there will be a whole ecosystem springing up around this application, but it's really still emerging right now.
JUNKO YOSHIDA: Right. So we are here talking about how best to enable ultra-low power machine learning, not just on smartphones, but all the way down to the sensor node. So I just want you to break it down. Is this a matter of streamlined framework for training to make this TinyML possible? Is this a matter of framework or some sort of a new technique we're talking about here? Or simply a new low-power hardware that we need?
SALLY WARD-FOXTON: So there are techniques that we use today in machine learning for reducing power. We can do things like quantization, where we reduce the precision of the numbers that we use in the model to make the model more efficient.
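The quantization Sally describes can be sketched in a few lines. This is a generic illustration of the simplest scheme (symmetric, per-tensor int8 quantization), not the specific pipeline any company mentioned here uses; the function names are my own:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 with a single scale factor
    (symmetric, per-tensor quantization -- the simplest scheme)."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values to check the error introduced."""
    return q.astype(np.float32) * scale

weights = np.array([0.52, -1.3, 0.07, 0.9], dtype=np.float32)
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# int8 storage is 4x smaller than float32, and integer arithmetic is
# cheaper; the rounding error per weight is bounded by scale / 2.
```

In practice, frameworks do this per layer (or per channel) and may also quantize activations, but the energy win comes from the same idea: narrower numbers mean less memory traffic and cheaper arithmetic.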
In the TinyML meeting, one of the Google engineers, Nat Jeffries, spoke about cascading models. So instead of running one large model, he broke it into three smaller models. So say for speech recognition, the first model might just be deciding whether there's any sound happening. And if there is, it activates a second model, which decides whether that sound is human speech. And then that triggers the rest of the model, which is more power-hungry and so on.
So only a small low-energy part of the model is running, unless it's needed. And that can save lots of power.
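The cascade Nat Jeffries described can be sketched as a chain of detectors ordered from cheapest to most expensive, where a later stage only runs if every earlier stage fires. The detectors below are toy stand-ins for real models, and the names and thresholds are invented for illustration:

```python
def run_cascade(samples, stages):
    """Run detectors in order of cost; stop at the first stage that
    reports 'nothing here', so the expensive stages rarely execute."""
    for stage in stages:
        if not stage(samples):
            return False  # early exit: later (costly) stages never run
    return True

def sound_present(samples):       # stage 1: cheap energy threshold
    return sum(s * s for s in samples) / len(samples) > 0.01

def looks_like_speech(samples):   # stage 2: stand-in for a small model
    return max(samples) - min(samples) > 0.5

def full_recognizer(samples):     # stage 3: stand-in for the big model
    return True

silence = [0.001] * 160
loud = [(-1) ** i * 0.6 for i in range(160)]
stages = [sound_present, looks_like_speech, full_recognizer]
run_cascade(silence, stages)  # stops at stage 1, almost no work done
run_cascade(loud, stages)     # all three stages run
```

Most of the time the input is silence, so most of the time only the cheap first stage ever executes; that is where the power saving comes from.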
JUNKO YOSHIDA: So rather than doing everything in one shot, you are breaking the AI process into several different parts. Is that it?
SALLY WARD-FOXTON: Exactly, yeah. Kind of like when we used to talk about ultra-low power microcontrollers and only waking up certain parts of the device as they're needed to save power.
JUNKO YOSHIDA: Yup. What about software and hardware? What sort of inventions or new developments or improvements are needed to make this ultra-low power machine learning possible?
SALLY WARD-FOXTON: So yeah. In terms of hardware and software solutions, these are definitely still emerging. Google's working on building a version of TensorFlow for microcontrollers. There's already a version called TensorFlow Lite, which is primarily for mobile phones. They're adapting it for microcontrollers.
On the hardware side, there are several specialist companies working on ultra-low power accelerator chips. At the TinyML Group meeting, there was a presentation from GreenWaves Technologies, based in France. They've developed an eight-core accelerator that uses RISC-V. They reduced the clock speed and the core voltage to get it to run on barely any power.
JUNKO YOSHIDA: Wow! That's interesting. So in your story, you wrote that an industry discussion of how to proceed with ultra-low power machine learning was overdue. And I couldn't agree with you more on that one. But Sally, give me your take: Where do we stand, and who in the hardware and software space is leading this effort toward ultra-low power machine learning?
SALLY WARD-FOXTON: I think there's certainly a feeling that new applications are being held back because the hardware isn't there yet and the software frameworks are not there yet. Google's really taking the lead on this one. They've clearly identified this as something important that they want to address with TensorFlow Lite. And on the hardware side, I think microcontrollers definitely have the edge at the moment. They are just totally ubiquitous.
All these ultra-low power sensor nodes we're talking about probably have a microcontroller in them already. It's a mature technology, relatively cheap, and everybody knows how to use them. And Google is backing microcontrollers as well. So microcontrollers just have a massive advantage, really.
That's not to say that'll always be the case. Specialized hardware might make some inroads, but overall I think the microcontroller will be very difficult to unseat from its prime position.
The GreenWaves speaker, Martin Croome, said that things are moving so fast that for specialized chip companies, the danger is they end up being really good at accelerating what everyone was doing last year, which is obviously not good. So retaining a bit of flexibility for future machine learning algorithms might be the key there.
BRIAN SANTO: Last week EE Times launched a series of articles – what we call a Special Project. The series explored the various ways the biggest companies in the world are remaking the semiconductor industry. They include Amazon, Baidu, Google, Facebook, and Microsoft.
Correspondent Nitin Dahad’s contribution to our Hyperscaler Special Project was about how the big internet companies are beginning to compete with their own IC vendors.
The day we published the package, as if on cue, one of the hyperscalers – Apple – bought Intel’s modem business, an acquisition that will have far-reaching repercussions through the semiconductor industry. Apple used to be a big modem customer of Qualcomm’s; that’s now likely to change.
That same day, another hyperscaler, Alibaba, announced it had designed its own new processor. Here are Nitin and Junko Yoshida again to discuss the very many ways the new processor is significant.
JUNKO YOSHIDA: Nitin, you were part of the new Special Report that we at EE Times just launched last week, focused on hyperscalers' impact on the semiconductor industry. Given that internet platform giants are getting into a host of vertical business segments – which, by the way, includes their own chip designs – how significant do you think Alibaba's move is? You know, Alibaba's move to design its own chips. Tell me your take.
NITIN DAHAD: Okay, yes, Junko. So just to recap, as we highlighted in the Special Report and in EETimes On Air last week, many of the large internet platform companies-- and these include Facebook, Amazon, Apple, Alibaba and Google-- they're increasingly getting impatient with existing roadmaps and timelines from the semiconductor industry. And going the do-it-yourself route. For all kinds of reasons.
Alibaba's move to design its own chip is part of this trend. And I think you'll probably talk about this a bit later. It's also strategic. It's also significant for China, since it addresses the country's ambition to be more self-sufficient in semiconductors as part of the Made in China 2025 initiative. So in effect, this is symbolic both for China and for RISC-V.
JUNKO YOSHIDA: Got it. Actually, as I briefly mentioned before, over this past weekend, I had an opportunity to quickly catch up with Xiaoning Qi. He's the Vice President of Alibaba Group. He was previously the CEO of C-Sky, who developed their own homegrown 32-bit microprocessor for the embedded market.
So Xiaoning's team has the chops to do various chips, but what they're doing now under the umbrella of Alibaba is quite interesting to me. When I talked to him, he said, you know, Alibaba's chip group doesn't plan to sell the newly designed RISC-V chip. Rather, it intends to offer what he called "chip templates" to other companies.
So my question to you is, what is the performance of this RISC-V chip, and what are the target markets for this?
NITIN DAHAD: What he says is absolutely right. What they're going to do is offer their own chip platform and release parts of the code as open source on GitHub to stimulate related development. So really, this is an enabler of RISC-V development in China. As you say, they're not trying to sell their own chips.
And as regards performance, Alibaba claims a major breakthrough with what it calls the Xuantie 910 chip, which it released last Thursday. It's said to be 40% more powerful than any other RISC-V processor to date.
JUNKO YOSHIDA: Wow.
NITIN DAHAD: Just one stat, and you can read the rest in the article: it achieves 7.1 CoreMark/MHz at a frequency of 2.5 GHz on a 12-nanometer process node. What they're doing is targeting really high-performance applications, both in infrastructure and at the Edge: artificial intelligence, internet of things, 5G and autonomous vehicles. And they said this specifically in the announcement. They're saying that, for example, the artificial intelligence IoT market is fragmented and there's no universal chip solution. What they're trying to do is enable development through RISC-V and open source, but also get that high performance.
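As a quick sanity check of those quoted figures – CoreMark/MHz is a per-megahertz score, so the per-core CoreMark score implied by the announcement scales with the clock. This is just arithmetic on the numbers above, not an additional Alibaba claim:

```python
# Implied per-core CoreMark score from the announcement's two figures.
coremark_per_mhz = 7.1    # quoted CoreMark/MHz
clock_mhz = 2500          # quoted clock: 2.5 GHz

per_core_score = coremark_per_mhz * clock_mhz
print(per_core_score)     # roughly 17,750 CoreMark per core at full clock
```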
JUNKO YOSHIDA: Wow. That is very ambitious. But you know, I'm not surprised when I think about how strategic Alibaba's decision was a couple of years ago, when it decided to pick up C-Sky as part of the Group. I think from the get-go they did have the intention to get into the semiconductor business. But I think throwing RISC-V in kind of changed the game a little bit here. Especially in the context of China that you briefly mentioned before.
So do you think this illustrates China making deeper inroads in RISC-V, Nitin?
NITIN DAHAD: Yes, Junko. It's definitely that. But it's beyond this as well. And to look at it in more context, China's semiconductor and consumer electronics industries really needed some kind of boost following the ongoing trade war with the US. First we had the sanctions against ZTE, and then the current ongoing saga with banning Huawei... well, more correctly, putting it on the entity list.
So more specifically related to RISC-V, this is huge news. A major player in China has developed a homegrown, high-performance, 16-core RISC-V chip that is 40% more powerful than any other to date. This hints at both a leadership position in RISC-V and – this is the important bit – less reliance on access to chips and other architectures developed by, say, US and European companies.
As one analyst said last week: With this chip, Chinese companies don't have to rely on a supplier like ARM or Intel; there's now no threat of them ever losing access to a key part of their design.
BRIAN SANTO: For months, Junko has been investigating autonomous vehicles and vehicle safety. This week she did an article that looked at the fundamentals – the methodologies that various auto makers have developed to design safe vehicles, and to validate those designs.
It turns out that some of the new electric car startups are running thousands of miles of what they call "road testing," without ever being on the road – which is a valid approach in combination with real road testing. The problem is that nobody knows if they designed their software models correctly. There are no standards. There aren’t even any common metrics.
Worse, it’s all a big secret. Car companies don’t share their safety design data with anyone. That means no one can check their work, and there’s no way to tell if it’s valid or garbage. Furthermore, car companies don’t share their test data either, which means they can’t benefit from each others’ safety research. When did safety become a proprietary issue?
It’s all shockingly disorganized, and it’s no wonder that car companies keep pushing back the date when autonomous vehicles will be ready for the road. Cruise had promised to introduce its robotaxi service this year. Last week it finally acknowledged it isn’t going to make that deadline. The real wonder is why ANY car company EVER promised autonomous driving in 2019. What are they thinking?
I asked Junko about that.
BRIAN SANTO: So there are the traditional car companies – the Fords, the BMWs – and then there are a bunch of new startups – the Bytons and the Teslas. And what we're discovering is that the two groups operate completely differently, or very differently. The startups all come out of the electronics industry, or most of them do. So they're fast and they're nimble, and they're smarter than the traditional car companies because they're fast and nimble in their electronics. And then there are the traditional car companies that are big dinosaurs, dragging their heels on electric vehicles. They're slow and they're DOOMED because they just can't keep up with the startups, right?
JUNKO YOSHIDA: Well, that's the simplistic view of the autonomous vehicle industry, Brian. But sure, there are tech companies which do nothing but develop AV software stacks, right? And there are car OEMs who design both traditional vehicles and autonomous vehicles. And I think what we need to be aware of is that there is a lot of intermingling going on among them, through partnerships and acquisitions. Ford and Volkswagen, for example, just became equal-share investors earlier this month in a startup called Argo AI, the autonomous vehicle startup based in Pittsburgh. So there! There are a lot of partnerships going on right now.
BRIAN SANTO: It's a lot of technology, it's a lot of new technology. There's electric vehicles, there's AI, there's just self-driving, there's the... It's a big, big technological set of problems and challenges that have to be settled. And it doesn't look like any one company can really take them all on. Not Tesla, not Ford.
JUNKO YOSHIDA: No, it's true. And I think we should be cognizant of this... You know, there are certain cultural differences, or a lack of institutional memory, on the part of the tech startups. They often lack the discipline of rigorous design and engineering, I think.
You know, for example, traditional aircraft, train, automotive designers first build rigorous mathematical models and apply formal verification to validate that a system design matches the original specs.
On the other hand, those with IT backgrounds who have grown up in the famous "move fast and break things" culture don't necessarily do that. They tend to go for alternate approaches.
So listen to what Jack Weast, Intel's Vice President of Autonomous Vehicle Standards, told us in our recent interview.
JACK WEAST: The alternate approach is, "Hey, I just start writing code immediately. I didn't do any formal design, didn't do any design verification. I've got a pile of code, and I'm just going to test it and iterate, test, iterate, test, iterate. And then try to gather statistical evidence to convince me that the thing is safe." And that's the "I've driven 10 million miles, I've driven a hundred million miles without an accident. Okay, that means it's safe, right?" Well, I don't know. Because you haven't actually verified that the design is safe. What you've done is gather statistical evidence that this pile of code you've got actually seems to work. So it tries to give you more confidence, but it's not a sound approach.
JUNKO YOSHIDA: So such an alternative approach is a stark departure from a traditional design process, under which you formally define the vehicle architecture and design algorithms on paper first. The important thing here is that you must formally verify them. As Intel's Weast told us, take an airplane, for example. When you design an airplane, you know it's going to fly – from a physics standpoint – because you've proven that it will fly. They don't just put the airplane out there and see, does it fly? Right? You can prove that on paper.
BRIAN SANTO: When you're talking about trains, planes, cars, you're talking about things that there's a life-critical element to it, right? Versus like designing a FitBit or a PC. You can reboot a PC. You can't reboot an airplane, right?
JUNKO YOSHIDA: Yeah, exactly.
BRIAN SANTO: That's kind of like the fundamental thing going on here. So we've had cars for a hundred years, but when you add autonomy, it's a different thing all of a sudden. And adding autonomy to a vehicle kind of makes it LESS safe, at least at first, right?
Can I get you to explain why that is?
JUNKO YOSHIDA: Yeah. I guess I have to break this down into two parts here. Because on one hand, ADAS, as you mentioned – the Advanced Driver Assistance System – is great. A feature like automatic emergency braking can help you avoid a forward crash with another vehicle, for example. I mean, that's what AEB does. On the other hand, when autonomy becomes MORE advanced, like Level 3 cars, in which the driver can take his eyes off the road, that's when things get complicated. Although the Level 3 car is designed to do MOST of the driving, the driver still needs to be prepared to intervene when called upon by the vehicle to do so.
BRIAN SANTO: Because the vehicle is going to get involved with things that it hasn't seen before. It's going to need human judgment.
JUNKO YOSHIDA: Right. If it gets confused, it asks the driver: Hey, take over now, right? But this is a real human-machine interface issue. It's huge! Because you might have been texting until that moment, or you might have been emailing somebody. And then you're suddenly told, Hey, take over! That's really unrealistic, in my view. As you add more autonomy, human drivers get used to it, you know? They get bored and they can't stay alert all the time. It's human nature. And that makes driving highly automated vehicles less safe, I think.
BRIAN SANTO: Right, right. So you've got to design for that in the beginning. So we've been discussing how this all starts with design and test and verification, but that process isn't really all the way through the automotive industry when it comes to autonomy. What do you think the basic problem is?
JUNKO YOSHIDA: I believe the biggest issue of the autonomous vehicle industry – the AV industry – today comes down to one thing: lack of transparency. I'm sure Waymo is learning a ton of stuff while racking up miles and miles by testing their robocars on public roads, right? So are other AV companies like Uber. They must all be individually looking for extreme cases that will make the automated vehicles unsafe or ineffective. If that is the case, shouldn't they be pooling that data for design and test validation?
As one of the astute EE Times readers actually pointed out in our Comments section today: "What we don't hear from the AV crowd," he said, "is what would typically be called a requirements document." You need that requirements document to identify as many use cases and failure points as possible as requirements. Then you research and design a feature that mitigates the risk of each failure mode, right? There is no such document at this point in time.
But first things first. As Phil Koopman, the CTO of Edge Case Research, told me: at minimum, at MINIMUM, AV companies should be publishing safety metrics to demonstrate that they are operating safely before test cars hit the road.
BRIAN SANTO: Are there any basic requirements, basic metrics, that the auto industry has agreed upon?
JUNKO YOSHIDA: Not yet. None. Isn't that shocking? It's a shocker. It's a real shocker.
BRIAN SANTO: That's not encouraging.
JUNKO YOSHIDA: I know. People say that they're working on it, but not at this point in time.
BRIAN SANTO: So it kind of makes sense then, to me, after hearing that, that the introduction of robotaxis and autonomy in vehicles is getting pushed back. These guys need time to deal with all this stuff.
Now on the other hand-- and I've had this argument over and over with other people-- if autonomous vehicles are going to be safer than humans-- even if it's only like 10% safer at the beginning-- shouldn't we just force everyone to get those autonomous vehicles out on the road, force everybody into autonomous as soon as possible, and okay maybe the traffic death toll goes up maybe like 10% at first. But eventually traffic deaths will get cut maybe in half or even more. Just roll with it! Just get it going already!
JUNKO YOSHIDA: Well, that's the crux of the issue, isn't it? Especially in the United States, I think asking drivers to give up driving is like asking people to give up their guns! You can't force everybody to switch to autonomous vehicles. Actually, I really hate the argument of "take the human out of the equation." You can never take the human out of the equation, right?
BRIAN SANTO: No.
JUNKO YOSHIDA: Even driverless cars need to deal with cars driven, regular cars driven by human drivers on the road where the human pedestrians cross the streets, right? So you can never take the humans out of the equation.
BRIAN SANTO: Life is unpredictable. Technology is really good at predictable stuff, but the default situation of reality is, it's unpredictable.
JUNKO YOSHIDA: Exactly. So as long as there are human drivers mixed in on the roads with automated vehicles, there are going to be accidents. Period. So I don't want to sound too old-fashioned, but if our cities are really serious about reducing fatalities, what we really need, in my opinion, is public transportation, not autonomous vehicles.
BRIAN SANTO: Onward into the past, a rundown of important events in tech history that took place on dates from the past week.
On July 29th, in 1958, President Dwight D. Eisenhower signed the National Aeronautics and Space Act. It officially created NASA.
On July 30th in 1898, the Winton Motor Carriage Company of Cleveland, Ohio, placed an advertisement in Scientific American advising readers to “Dispense with a Horse.” It appears to have been the first car ad ever. The vehicle was priced at $1,000, but running it cost only about a quarter of a penny per mile – presumably cheaper than the horse.
On August 1st, in 2016, NHK started regular TV satellite broadcasts of 8K television. No one was selling 8K TV sets at the time; viewers had to congregate in front of public viewing stations.
Also on August 1st, this time in 1981, MTV signed on the air, the first 24-hour stereo video channel. The first song ever played on MTV was this one by The Buggles.
MUSIC CLIP: I heard you on the wireless back in '52 / Lying awake intent at tuning in on you / Oh-a-oh! / I met your children / Oh-a-ah! / What did you tell them? / Video killed the radio star / Video killed the radio star
BRIAN SANTO: That’s your Weekly Briefing for the week ending August 2nd. This podcast is Produced by AspenCore Studio. It was Engineered by Taylor Marvin and Greg McRae at Coupe Studios. The Segment Producer was Kaitie Huss. The transcript of this podcast can be found on EETimes.com, complete with links to the articles we refer to. Be sure to join us next week for your August 9th Weekly Briefing on EE Times On Air. I’m Brian Santo.