I’m Brian Santo, EE Times Editor in Chief, and you're listening to EETimes on Air. This is your Briefing for the week ending June 28th.
The French research institute, LETI, held a conference on artificial intelligence at the edge. What does putting AI on the edge of the network mean, and what’s the advantage? EETimes editors were in Grenoble, and filed the report.
A few weeks ago, the PCI Special Interest Group introduced a new ultra-fast interconnect specification that will make data centers perform even better, and that will make the internet faster and more capable. A few days ago, the group unexpectedly doubled the speed again with yet another spec.
EE Times editors were at the annual Sensors Expo, which has become an important conference for the Internet of Things. We’ll have a report.
Also, the prevailing wisdom is that self-driving vehicles will be safer than human drivers. But what if there’s a third option – one that’s just as safe as self-driving cars are supposed to be?
…We’ll hear about that in a moment.
Data centers and other computing clusters simply can’t go fast enough. In the last few weeks, the PCI Special Interest Group introduced not one but two generations of the PCI Express spec. PCIe will accelerate the data centers that are now central to the modern internet, but adopting the new specs will force some significant changes in how data centers get configured.
Here’s Rick Merritt on the new spec and what it means.
The PCI Special Interest Group surprised many of us at their annual event last week when they announced plans for Gen 6, a 64-gigatransfer-per-second version of PCI Express. They had just finished Gen 5, at 32 gigatransfers per second, last month, so doubling the interconnect speed within two years is really pretty impressive.
Now, they're really just leveraging the PAM4 technology, with forward error correction, that's been sorted out and developed for high-end SerDes. But nevertheless, to implement that in a mainstream specification that has to go into high-volume servers and even PCs is going to be quite a challenge. And I'm sure it's going to take them the whole two years.
One of the interesting aspects here is that we understand, from talking to people on the show floor there, that copper really still has a long life ahead of it. There are specs in the Optical Internetworking Forum, plus an IEEE backplane spec, already running 112-gig interconnect speeds on copper. And I'm told by one startup that they certainly see a way forward to doubling that to 224 gigs in the future. They even have ideas for roughly doubling that again, to 500 gigs or so. So, a lot of room left for copper interconnects.
The trouble is, the faster they go, the shorter the reach gets. And it's just the laws of physics, folks. So system designers are already starting to look at ways to put smaller boards inside the servers and networking gear, and then run cable links between them. It's a little more expensive, but it's a lot less expensive than moving from FR4 to higher-end board materials or using retimer chips.
So faster interconnects are coming, but they're going to force a redesign of board-level products. And that's a pretty interesting and pretty broad impact.
This is Rick Merritt reporting from Santa Clara, California for EETimes on Air.
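The generation-over-generation doubling Rick describes is easy to sanity-check with some quick arithmetic. The sketch below is a rough estimate, not anything from the PCI-SIG spec: the Gen 6 encoding efficiency is an assumed ballpark figure for its FLIT-mode overhead, while Gens 3 through 5 use the well-known 128b/130b line code.

```python
# Rough per-lane and x16 throughput for recent PCIe generations.
# The Gen 6 efficiency figure is an illustrative assumption, not
# the spec's exact FLIT-mode overhead math.

GENS = {
    # generation: (GT/s per lane, encoding efficiency)
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
    6: (64.0, 0.98),  # PAM4 plus forward error correction, FLIT encoding
}

def lane_gbytes_per_s(gen: int) -> float:
    """Usable bandwidth of one lane in GB/s (1 bit per transfer per lane)."""
    gts, eff = GENS[gen]
    return gts * eff / 8

for gen in sorted(GENS):
    per_lane = lane_gbytes_per_s(gen)
    print(f"Gen {gen}: {per_lane:.2f} GB/s per lane, "
          f"{per_lane * 16:.1f} GB/s for a x16 slot")
```

Under these assumptions a Gen 6 x16 slot lands at roughly 125 GB/s, about twice Gen 5, which is what makes the two-year doubling cadence so striking.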
CEA Leti is a French research institute that specializes in microelectronics and nanotechnology. The institute hosts an annual conference called “Innovation Days.” The theme this year was “Deep Tech for Edge AI.”
Where you physically host AI resources in the network makes a difference. Emmanuel Sabonnadière is the chief executive of CEA Leti. We asked him to explain the two basic approaches for architecting AI in the network.
To develop an AI model, you need to have a lot of data and a lot of computational power. For that, you develop the model in the Cloud. After that, you have to use the model in real life. You can either keep it in the Cloud or you can have it more local. That means having Edge AI.
So what I can see today is that the Chinese or the Americans are thinking more of having the model running in the Cloud, because they have easy access to the data, to real-life data. Whereas in Europe, due to data privacy, you need to have it more local, more personal. And for that, we in Europe are trying to develop chips that will support Edge AI as a technology.
EE Times editors Nitin Dahad and Junko Yoshida were both at CEA Leti in Grenoble. They filed this report.
So Leti was actually...We've been talking about embedded intelligence for a while.
And I think what Leti was looking to demonstrate is the real deep tech behind it, for Edge AI. And it comes from a lot of what Emmanuel, the CEO of Leti, said earlier, which was basically addressing some of the issues around data privacy in Europe.
Right. That's right. He actually made it very clear that, unlike the Chinese and US AI models, Europe's hands are tied because of strict data-privacy regulations, so they have to solve this problem at the Edge. It's a hard problem to solve. And they see Europe's mission in AI as Edge AI. But you know, you and I have talked about how "Edge AI" is almost over-hyped, overused in our coverage.
So tell me about the building blocks of Edge AI that CEA Leti is talking about here.
So actually that's quite important, because they're talking really about nonvolatile memory, in-memory computing, spiking neural networks, and the process technology, FD-SOI. And I think what they're trying to demonstrate is that you can't really have Edge AI without the hardware. That's actually a very important building block. That's where the research is going on. For example, the SPIRIT chip we saw today, the project Leti has been working on with the analog synapses, is quite an interesting application. And then we're also looking at, for example, stacked memory and processing, the thing that came out of Stanford that we wrote about last year.
And I think it's about how putting all of that architecture together basically reduces the data bottleneck. And I think that's key.
Yeah. And not just the bottleneck. I think one of the researchers said, "Don't move data!" Because every time you move data from external memory blocks to the processor, it uses a lot of energy. So energy efficiency is one. There's another thing they talk about: reducing the number of operations. And that's where spiking neural networks and the event-driven stuff come in. So those are the two things.
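The "Don't move data!" point can be made concrete with a back-of-envelope energy model. The figures below are rough, commonly cited order-of-magnitude numbers for arithmetic versus memory-access energy in an older process node; they are illustrative assumptions, not measurements of any Leti chip.

```python
# Back-of-envelope: why "don't move data" matters at the edge.
# Energy figures are ballpark, order-of-magnitude assumptions
# for illustration only.

PJ_MAC_32BIT  = 4.6     # one 32-bit multiply-accumulate, in picojoules
PJ_SRAM_32BIT = 5.0     # 32-bit read from small on-chip SRAM
PJ_DRAM_32BIT = 640.0   # 32-bit read from external DRAM

def layer_energy_nj(n_macs: int, weights_from_dram: bool) -> float:
    """Energy for n MACs, fetching one weight per MAC, in nanojoules."""
    fetch = PJ_DRAM_32BIT if weights_from_dram else PJ_SRAM_32BIT
    return n_macs * (PJ_MAC_32BIT + fetch) / 1000.0  # pJ -> nJ

n = 1_000_000  # a million MACs, e.g. one small neural-network layer
print(f"weights in external DRAM: {layer_energy_nj(n, True):,.0f} nJ")
print(f"weights kept on-chip:     {layer_energy_nj(n, False):,.0f} nJ")
```

With these assumed numbers the off-chip version burns well over fifty times more energy per layer, which is exactly why in-memory computing and event-driven spiking networks, which also cut the operation count itself, are the building blocks Leti is pushing.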
One of the things I wanted to ask you, what was the most interesting application of Edge AI that we heard today?
So, you know, we've heard a lot about health care, but we saw something today which really hit that home: an application of Edge AI that respects data privacy by doing a lot of the processing right at the Edge, and then providing a solution. This was a company that was providing what they called the "first autonomous medical device." What that means is, it's an automated treatment device for Type 1 diabetes.
Yeah. It's the first autonomous medical device that makes a decision on its own, locally. And it's already been approved by regulators, both in Germany and in France. I was kind of surprised.
So, yeah. So I think we heard from them... They only started last week and they've already got 30 people using it. I think they still have to go a bit of a way, because they need to go through the system to get reimbursed through the medical system...
...But you said that the CEO of... He said that the first reimbursement may happen actually pretty quickly.
And they've also started clinical trials with children. So that's quite interesting. Because we talk about the technology, but one of the things we don't realize is what this means for people with Type 1 diabetes, which is quite severe, because they have to check their measurements and watch their food intake 40 or 50 times a day. Having that automated, with autonomous decision-making, changes their lifestyle totally.
Yeah. I think we have already seen several startups come out with wearable devices for continuous monitoring, right? But this one is different. Tell us how it is different from just a monitoring system.
So yes. There's a wearable cannula, which basically is used for injecting insulin automatically, but for that they have a sensor. And that sensor sits just under the skin. They replace it regularly, every three days.
And then they have a phone... It's actually...
...It is a phone, but...
...But. But they've actually put their own secure OS on it, and they're using secure Bluetooth. And what they said was, you know, the standard phones, they don't want to use those...
It's not connected to the internet, right? You don't... It would be lethal if somebody hacks into the device like this.
Yeah. I mean, just imagine if somebody wants to get rid of somebody. They could just over-inject insulin or something like that.
When I asked what kind of hardware is inside this smartphone-like device, he said it's a mid-range, Snapdragon-type application processor. But he did say that he would like to have more processing power.
Yes. I mean, one of the challenges in many of the smartphones is there's not enough processing power. And this is the whole idea of what Leti has been doing today in terms of, you know, telling us about how you can increase that compute power at the Edge. And I think what he's saying is, you know, they would like to have more processing power to sort of do a lot more data processing. But at the moment what they want to do is get something out to market, test it, and then get people using it, and then maybe in the future they'll look at some iteration...
...The important thing is that this device does look like a smartphone, but the entire processing power is used for the personalization of machine learning at the Edge, right? They're not going to run games or other applications.
That's absolutely right. It's totally... the whole phone and computing power is dedicated to running the machine learning for that patient. So that's... That gives them a lot of power, but they still need a lot more.
Yeah. Right. So I think, as Emmanuel, the CEO of CEA Leti, said, data privacy is a cornerstone, especially for health care devices like this one.
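The device the editors describe is a closed loop: a sensor reads glucose, the onboard machine learning decides, and the cannula injects. Purely as an illustration of that sense-decide-dose cycle, here is a toy proportional controller. Every name, threshold, and gain is hypothetical; it bears no relation to the company's actual, safety-critical dosing algorithm.

```python
# Toy sketch of a closed-loop "sense, decide, dose" cycle.
# NOT a real dosing algorithm; all thresholds and gains below
# are hypothetical, chosen only to make the control idea concrete.

TARGET_MG_DL  = 110    # hypothetical target glucose level (mg/dL)
GAIN_U_PER_MG = 0.01   # hypothetical proportional gain (units per mg/dL)
MAX_BOLUS_U   = 2.0    # hypothetical per-cycle safety cap (units)

def dose_for(glucose_mg_dl: float) -> float:
    """Proportional dose for glucose above target, clamped to a cap."""
    error = glucose_mg_dl - TARGET_MG_DL
    if error <= 0:
        return 0.0     # never dose at or below target
    return min(error * GAIN_U_PER_MG, MAX_BOLUS_U)

# One pass over simulated sensor readings (mg/dL):
for reading in (95, 140, 300):
    print(f"glucose {reading}: dose {dose_for(reading):.2f} U")
```

Even in this toy form, the design choices mirror the interview: the decision runs entirely on the local device, and hard safety clamps matter precisely because, as the editors note, an over-injection would be lethal.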
The Internet of things is all about sensors. Rick Merritt and Dylan McGrath attended the Sensors Expo in Silicon Valley this week. This is what they found.
Well, it turns out there are a lot of sensors out there, Rick, and they're going into everything. I think we're all aware of this, but it's just kind of breathtaking to see the number of companies on the show floor, the number of small companies that I’ve never heard of before. And quite frankly, I had a little trouble understanding the differences between the things that they're all doing. But I think the bottom line is pretty clear: there are plenty of sensors out there, and there are going to be plenty more.
Yeah, I looked at just a distributor’s list, and there was like 70 companies, sensor manufacturers that they work with. There's the ones that we know, the big ones STM, TDK, Rohm… but then a bazillion tiny companies here.
All kinds of tiny companies. It's great to see. It’s very interesting to see. Like I said, I had a little trouble trying to differentiate between the various technologies.
Because there's so many... I mean, they're sensing movement and they're sensing position and they're sensing gas, they use different technologies for different markets. It’s a hyper-fragmented area. Any stories from the show?
Yeah. I was just going to say. So one of the more interesting things I had a conversation with a gentleman from Voler Systems who was talking about using sensors from China, obviously a lot of parts and sensors are made in China at low cost. And they were trying to use these sensors in one particular application. They were having some trouble. They realized after a couple days of testing and kind of looking under the hood to see what was wrong, that these sensors that they had were not operating up to spec.
And he basically said, you know, it's pretty clear to me what's going on. If we hadn’t said anything, we would have continued to receive these sensors that didn't operate up to spec. Because it is so difficult to get the required accuracy from these sensors at the cost they need to be made at, a lot of the parts aren’t yielding correctly. And those parts find a home with people who aren’t doing the testing, really beating on the thing to make sure it works. So: test. Definitely test. That was his message, and it’s a scary thought.
How about you? What did you see, Rick?
I heard a couple of companies talking about trying to move up the food chain. STMicro, and to a lesser extent Omron, which is just getting started, are trying to move from selling sensor components to selling modules, and then the software on those modules. It's a natural move up the food chain, to where the margin is.
Interestingly, a Google manager for their Assistant service also gave a talk here and was telling people, Boy, come sign up and use our APIs on your home sensors. Because they're at the very top of the food chain, and they just want to suck up all that good data. So we're seeing everybody move in that direction.
Around the world, well over a million people are killed in auto accidents every year. By some estimates, self-driving cars should cut the total number of deaths by 75 percent – and that’s why some people can’t wait for self-driving cars to take over the roads. Before that happens, however, cars will be equipped with a variety of features that will help make humans much better drivers.
Colin Barnden wrote an article for EE Times in which he posits that driver-assist technology might get so good, there might be little justification to ever switch entirely to fully self-driving passenger vehicles for safety’s sake.
Junko Yoshida recently had this conversation with him:
So, what prompted you to write this piece? This was the most provocative piece I’ve read in a long time.
You know, I was actually on holiday two weeks ago, and I was sitting on the sun lounger, and sometimes you just have to space when you step away from everything to see some clarity about life. You know what it's like, that the world is so busy and there's just so much information and there's so many data points that we're bombarded with daily.
There I was, just sat in the sun, and I got out my notepad and I just started writing the definitions of each of the different autonomy levels. And that gave me the idea for that piece.
As for so-called misconceptions, you raised a couple of points in your piece. One of them is: “Take the human out of the equation in driving. So roads will be a lot safer.” This is what you and I hear all the time from the guys in Silicon Valley. But isn’t it true that human drivers tend to be unreliable? No?
So that's a great question. Essentially what we've got here is a misrepresentation of an NHTSA report. The NHTSA report states 94% driver error, and that's been misrepresented, with people then saying it's 94% HUMAN error. And really my argument is that if we take the humans out of the loop and we replace human drivers with machine drivers, we will still have traffic fatalities, as indeed we've seen from the accidents with Boeing's MCAS and the 737 MAX. Automation still makes mistakes. Code still makes mistakes; machines will still make mistakes, be that sensor errors or whatever other types of errors. Traffic fatalities will still happen.
That's true. Let’s cut to the chase. You wrote, “Forget Level 3, Level 4, Level 5 cars for mass market.” Take me through your reasoning from the top. I sense that Level 5 is kind of an intellectual exercise, rather than a commercial reality. Please walk me through.
Let's take something else out of science fiction. Do people believe in teleportation? Do they believe in time travel? So I kind of look at Level 5 in those terms. You could call it "anytime, anyplace, anywhere": the machine driver will handle all situations, all extremes, all unknowns, all of the time. And to quote Phil Koopman, "Did you think of that?"
And there would just be so many of those situations in which we find that the world is a complex system. And automation is just not capable yet to handle that.
So Level 5 is too far out. It's too difficult, is my take. Level 4 is too expensive. And what I'm seeing from the OEMs is, they are agonizing over adding ADAS and driver-monitoring-system features for a couple of hundred bucks. These guys are taking their time and really sweating every cent out of that system. So $5,000 for unproven technology is not even close.
And Level 3, we've got the handover problem.
I just want to make one clarification here. You make a distinction between so-called robotaxi and consumer mass-market cars, right?
Correct. Yeah. So what's happening really with the Waymos and the Ubers and the Cruises, those guys, they have a completely different business model. They may very well be able to survive with tens of thousands of dollars of sensors and neural-net processors in a vehicle. But the mass-market companies, the traditional mass-market OEMs, they are simply not going to put in thousands of dollars. And this technology is unproven. Lidar is unproven. GPUs are unproven in mass-market vehicles.
And the liability issues. Everybody looks at this from a technical perspective, but what's the liability? What's the legal issues? What are the political implications? And really when I look at all of that in the round, the traditional OEMs, they are such conservative companies that they will not go near this.
Which level of mass-market cars will we have in 2025?
Yeah, so, what I'm looking at really is, 90% of cars in use today are Level 0. That's the base position, and that argument doesn't really get made. So what's happening now is, the OEMs are introducing Level 1 and Level 2 technologies. So we've got autonomous emergency braking and lane-keeping systems. AEB and LKAS are the longitudinal and lateral control systems that will be used, along with what I call an infrared driver-monitoring system to permanently monitor the driver's attention state and engagement level.
And between those three systems, that's exactly where I see the OEMs going: really around Level 1, Level 2, and then this new thing you could call Level 2+ or Level 3-minus. That's where I see the volumes.
I actually liked the way you ended your story, saying that “As for the 2030s, I’ll get back to you in ten years!”
It's coming. If we listen to experts like Missy Cummings, she talks about the fact that this technology is going to happen. It truly is going to happen. And I don't doubt that. But exactly which year it becomes commercially viable and we start to see it-- or even which decade-- really nobody knows. But as far as I can see for the 2020s, we're really looking at Levels 1 through 3-, and Levels 3, 4 and 5 are just not realistic for the mass market OEMs.
Thanks so much, Colin. It’s always a pleasure talking to you.
Thank you very much, Junko. Nice to talk to you again, too.
And this week’s bit of technology history:
Alexander Graham Bell publicly demonstrated his telephone on June 25th, 1876, at the Centennial Exposition in Philadelphia. Thomas Edison was also at the Expo; he showed a machine that increased the speed of producing screws and bolts from 8,000 a day to 100,000.
June 23, 1912, was the birthday of Alan Turing, whose expertise in cryptography contributed to the success of the Allies in WWII. He later proposed the Turing test, an approach to evaluating the performance of artificial intelligence.
On this very date in 1965, Intelsat 1 was activated for service. The world's first commercial communications satellite, it could support 240 voice channels or one TV channel. It remained in service for three and a half years.
Here’s President Lyndon Johnson announcing the satellite was open for business.
PRESIDENT LYNDON JOHNSON:
This moment marks a milestone in the history of communications between peoples and nations. For the first time, a manmade satellite of Earth is being put into commercial service as a major communications link between continents.
And that’s your Weekly Briefing for the week ending June 28th.
This podcast is produced by AspenCore Studio. It was engineered by Taylor Marvin and Greg McRae at Coupe Studios. The segment producer was Kaitie Huss.
You can find previous episodes at EETimes.com and on major podcast platforms. Be sure to join us next week. I’m Brian Santo, and this is your Weekly Briefing on EE Times On Air.