Edge processing for smart voice, reliability in aircraft and hoverboards, and how safe does autonomous driving really need to be?

BRIAN SANTO: I’m Brian Santo, EE Times Editor in Chief, and you're listening to EE Times on Air. This is your Briefing for the week ending September 27th.

In this episode we discuss…

● …voice activation technology. Most voice processing is done in the cloud. Now, there are a lot of good reasons to NOT send all of our conversations off to the cloud, but we do it anyway because it's significantly cheaper to do it there. But what if there were an unexpected, inexpensive alternative for doing voice processing at the edge? Now there is.

● Also this week, we’ll examine reliability in complex systems, and for that we’re going to revisit the Boeing 737 Max, two of which crashed earlier this year. Recently the New York Times reported the bigger problem might have been the lack of experience of the pilots involved. We'll discuss that view in light of what we know about the engineering and the oversight of the Boeing jet.

● …And we’re also going to revisit autonomous vehicles and driving safety. There’s a shift in emphasis from full autonomy to driver-assist technologies. Why? Because when it comes to autonomy, there’s still an open question: How safe is safe?

PHIL KOOPMAN: You switch over to complete self-driving, fully autonomous. And if you were to use that same ADAS system, and nine times out of ten it stops, then one time out of ten it's going to hit, and you haven't saved nine lives, you've killed one person.

BRIAN SANTO: That was Phil Koopman, CTO of Edge Case Research. We’ll get back to his comments in a moment.

Topping the show this week, we’re going to take a look at smart devices and voice input. Voice input is becoming the preferred human-machine interface, with Siri and Alexa and Google Assistant and the like getting built into more and more devices: smartphones, cable TV remotes, and more.

Voice input is being sent to data centers, and that represents a lot of traffic on the net and a lot of volume in data centers. For voice-enabled products, having the capability means never fully powering down; instead, they idle in power-consuming anticipation of your next command. Voice input also creates the temptation to violate consumers' privacy.

EE Times ran two stories this week. One, by Anne-Francoise Pele – the newest addition to our editorial staff – is about a company that has developed a MEMS sensor that detects voice cues and wakes a sleeping device almost instantaneously, with the goal of minimizing power consumption. The other, by Sally Ward-Foxton, is about a company called PicoVoice that has devised a way to perform voice processing at the edge easily and inexpensively. Here's international editor Junko Yoshida with Sally.

JUNKO YOSHIDA: This is something I have always wondered about. You know, when are we moving from this natural-language-processing-in-the-cloud model to voice inference on edge devices? So I guess this is exactly that case. Is it?

SALLY WARD-FOXTON: Right. Yeah. So if we had an appliance or something, maybe a coffee maker or a fridge, and we used it 10 times a day, processing that voice in the cloud would apparently cost around $15 a year per device at today's rates. That's quite a lot over the lifetime of the device, and it adds up across all the appliances an appliance manufacturer ships. So that would have to be balanced against how many of these expensive coffee capsules you can sell for your coffee maker or whatever.

The point is, if it’s some kind of smart appliance where you've already got a small CPU in there, now you might not need to use the cloud at all. You can use the compute power you've already got in the device.

These cloud companies, Amazon and Google – the cost to them might not be $15 a year, since it's their own cloud service that they are using. And Amazon's business model helps them sell things, so they make that money back, right?
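[Transcript note: A quick back-of-the-envelope on Sally's figure. The $15 per year comes from the conversation; the per-query price below is derived arithmetic, not a quoted cloud rate.]

```python
# Back-of-envelope: what ~$15/year per device implies per query.
# The $15 and 10-queries-per-day figures are from the conversation;
# everything else is arithmetic.
queries_per_day = 10
cost_per_year_usd = 15.00

queries_per_year = queries_per_day * 365                  # 3,650 queries
cost_per_query = cost_per_year_usd / queries_per_year

print(f"${cost_per_query:.4f} per query")                 # ~$0.0041, i.e. ~0.4 cents
```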

JUNKO YOSHIDA: So cost is a big driver, I can see that. But what other motivation do people have to move from cloud to edge?

SALLY WARD-FOXTON: So privacy is a really, really big one. There's been a bit of a scandal recently with the Amazon Echo, where it turned out there were human reviewers listening in, eavesdropping on people's conversations through Alexa. And it's not just them; other manufacturers of smart assistants have been at it as well. They use humans to transcribe some of the conversations. Basically, they label the data so they can use it to train the models. There was a backlash against this from unhappy consumers. Their doctor's appointments had been recorded and stuff. Obviously, it was not good.

So user privacy is a big reason NOT to connect devices to the cloud.

There are other things like security, data security. If you're doing something more like transcription, where your own device is doing the full transcription – maybe it's in a meeting room, recording meetings and transcribing the minutes – if that's company information, you might not want to send it to the cloud. You might want to do that on the device.

There are other cases where you might need really strict latency, or a certain level of reliability that you can control. But yes, cost and privacy really are the big reasons.

JUNKO YOSHIDA: So let's talk a little bit about PicoVoice. It's a startup based in Canada, and it claims to have developed a new machine learning model for speech-to-text transcription, as you say, that runs on a small CPU. Can you give me some specifics? How much compute power or memory does the PicoVoice model require?

SALLY WARD-FOXTON: Right. So there are three different models. There’s a wake word engine, which is detecting a specific phrase that wakes up the rest of the system. There’s a speech-to-intent engine, which operates kind of in this limited domain that’s relevant to the application. And there’s a third model which does full speech-to-text transcription.

So for the speech-to-intent, which is where it understands spoken commands in a particular domain, maybe it’s a smart lighting system; it understands things to do with lighting. Maybe you want to turn the lights on and off. It understands those commands – changing the colors. But if you ask it about politics or economics, it doesn't understand that. You don’t have to have specific phrases, other than the wake-word, but it only understands things to do with lighting. That model was less than half a megabyte. So that’s what you’d be doing on your sub-$1 microcontroller.

For the full speech-to-text transcription, where it understands absolutely everything – 200,000 words, essentially the entirety of the English vocabulary – the demo they had running was on a Raspberry Pi Zero, without an internet connection. That's more like a $5 kind of system. The CPU on that is an ARM11, a kind of classic ARM core from years back, so it's nothing fancy. I don't know the exact size of the model, but yeah, it's still a very resource-constrained environment.
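[Transcript note: For a feel for the division of labor Sally describes, here is a minimal, self-contained sketch of a tiny always-on wake-word detector gating a domain-limited intent engine. The class names and methods are hypothetical stand-ins, not Picovoice's actual SDK, and real engines consume audio frames rather than strings.]

```python
# Hypothetical sketch of a wake-word -> speech-to-intent pipeline.
# NOT Picovoice's actual API; real engines process PCM audio, not text.

class WakeWordEngine:
    """Always-on detector for one fixed phrase (a tiny model in practice)."""
    def __init__(self, keyword: str):
        self.keyword = keyword

    def detected(self, utterance: str) -> bool:
        return self.keyword in utterance.lower()


class IntentEngine:
    """Domain-limited understanding: lighting commands only, nothing else."""
    COMMANDS = {
        "turn on the lights": ("set_power", "on"),
        "turn off the lights": ("set_power", "off"),
        "make the lights blue": ("set_color", "blue"),
    }

    def infer(self, utterance: str):
        for phrase, (intent, slot) in self.COMMANDS.items():
            if phrase in utterance.lower():
                return intent, slot
        return None  # off-domain (politics, economics...) -> not understood


wake = WakeWordEngine("hey lights")
intent = IntentEngine()
for heard in ["hey lights, turn on the lights", "hey lights, explain economics"]:
    if wake.detected(heard):
        print(heard, "->", intent.infer(heard))
# -> ('set_power', 'on') for the first; None for the off-domain request
```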

JUNKO YOSHIDA: I was just reading your colleague Anne-Francoise Pele's story this week. She wrote about a piezoelectric MEMS microphone company called Vesper. She interviewed the CEO, and the CEO was alluding to a future when artificial intelligence will be embedded in the sensor itself. So what other devices – whether MCUs or sensors – have you heard of that are heading in a similar direction?

SALLY WARD-FOXTON: Yeah. Definitely. I mean, artificial intelligence is coming to microcontrollers, and that’s a fact. Google is making a version of TensorFlow called TensorFlow Lite that is specifically for microcontrollers, very small devices, so no doubt about that.
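[Transcript note: For concreteness, this is roughly the deployment path with TensorFlow Lite: build a small model, convert it to a quantized flatbuffer, and hand that file to the on-device runtime. The tiny model below is a placeholder at keyword-spotting scale, not a real network.]

```python
import tensorflow as tf

# Placeholder model at keyword-spotting scale; a real one would be trained
# on audio features (e.g., MFCCs) first. Layer sizes are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # e.g., 4 command classes
])

# Convert to a TensorFlow Lite flatbuffer with default optimizations
# (weight quantization), shrinking it toward MCU-class deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# This file is what the TFLite runtime consumes; on a microcontroller
# it is typically compiled in as a C byte array instead.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```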

In terms of sensor nodes, there is a company in Seattle called XNOR that is similar to PicoVoice. Like PicoVoice, they exploit the instruction set of the CPU; XNOR, as you might imagine, use the exclusive-NOR instruction. But their models are built for image processing: object detection, face recognition. Whereas PicoVoice is for speech. So it's not just natural language processing that is coming to microcontrollers; image processing is coming, too. XNOR had some good demos showing image recognition, person detection say, on tiny little boards, something like a sensor node, where there were no batteries. They were using energy harvesting. So I could easily imagine something like that in a sensor node somewhere, yes. Definitely.
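[Transcript note: The trick behind XNOR's name: if weights and activations are binarized to ±1 and packed into machine words, a dot product reduces to an XNOR followed by a population count. A minimal illustration of the math, not the company's production kernels:]

```python
# Binarized dot product via XNOR + popcount.
# Encode +1 as bit 1 and -1 as bit 0; then for n-bit vectors:
#   dot(a, b) = (#positions that agree) - (#that disagree)
#             = 2 * popcount(XNOR(a, b)) - n
def binary_dot(a: int, b: int, n: int) -> int:
    mask = (1 << n) - 1
    agree = bin(~(a ^ b) & mask).count("1")   # XNOR, then popcount
    return 2 * agree - n

# Example: a = (+1,+1,-1,+1) -> 0b1101, b = (+1,-1,-1,+1) -> 0b1001
# True dot product: 1 - 1 + 1 + 1 = 2
print(binary_dot(0b1101, 0b1001, 4))          # -> 2
```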

JUNKO YOSHIDA: I remember... I'm old enough to remember when fridges and elevators started to talk, probably more than a decade ago. They weren't listening to us, but somehow, out of the blue, they blurted out warnings when the fridge door was ajar or the elevator door was about to close. I found it incredibly annoying. Do you think that a coffee maker or a washing machine suddenly listening to your commands is a good thing?

SALLY WARD-FOXTON: Yeah, with all these new technologies, you've got to use them judiciously. You've got to think carefully about what consumers will accept, or even enjoy using, and what is going to quickly become annoying. Especially if it's every appliance in your kitchen suddenly piping up, all talking at once; that's going to be annoying. It may be less annoying if they only speak when spoken to. Perhaps their speech could be minimized: maybe they understand you but just make a beep or some other kind of response. It doesn't have to be a vocal response. I guess it's up to device manufacturers, really, to find the right balance there between useful and annoying.

JUNKO YOSHIDA: Your example is really interesting, Sally, because you said multiple devices, right? Think about that. You go into the kitchen. It's not just your coffee maker; your toaster, your fridge, everything is listening to you. What if they all understood what you're saying and started to respond?

SALLY WARD-FOXTON: Yeah, so, your fridge is trying to make you a coffee, and the washing machine's trying to make toast. No, they all have to operate in their own little domains, I guess. It's a crazy thought to imagine a future where you're just speaking out loud to all these devices. Absolutely crazy.

JUNKO YOSHIDA: I have to think, how many consumers really want this? Just because technology can do this doesn't necessarily mean it's a good idea to add it. I'm not trying to pour cold water on the whole thing, but I think it might not be a bad idea to step back a little and think about whether there is really such a need. What I think will happen, though, is that companies will start building these things anyway and shove them down our throats.

SALLY WARD-FOXTON: Absolutely. So something like a lighting system, where you might be sitting down and the light switch is on the other side of the room, and you want to turn it on and off with voice – that makes sense. But for a washing machine or a toaster, where you have to physically be at the device anyway to control it, how useful is that? I don't know. But I think device manufacturers maybe will go a bit crazy and add it to try to differentiate their products at first. And maybe we'll see some pushback. Who knows?

JUNKO YOSHIDA: Yeah. Or the first thing I’m going to look for is the disable button.

SALLY WARD-FOXTON: Right? Hopefully you can turn it off.

JUNKO YOSHIDA: Yeah. All right. Well thank you so much. It’s always fun to talk to you, Sally.

SALLY WARD-FOXTON: Thanks, Junko. Have a great day.

BRIAN SANTO: The greatest innovation since sliced bread is the integration of zip-lock technology in food packages. I love that! After that, it's the little bell that dings at you when you get out of your car but have forgotten to turn off your headlights. So I am totally on board with the idea of our devices communicating with us. But I would like to go on record saying that the headlight indicator bing-bing-binging at me is as much communication as I want from my inanimate objects. How about you?

EE Times recently published a package of stories, edited by George Leopold, that looks at how reliability can be affected by pressure to get to market. In that package, we've got one story on how NASA learned to maintain integrity in engineering operations despite intense deadline pressure. We've got a story on how the entire hoverboard industry rushed too quickly to market with unsafe products, but ended up with safe products after all. And we revisited the fatal crashes of the two Boeing 737 Max jets.

Junko, George and I got together recently to discuss the issue of reliability versus time to market.

When you need to get something to market by a specific date, you've got deadlines. Deadlines are hard to meet. That's not... there's nothing nefarious about that, right?

JUNKO YOSHIDA: Right. But it depends on what you're rolling out. If it is safety critical, what corners do you cut? And is that allowed? I mean, in the end you'll pay a big price. The impetus for this special project was that not just us reporters but the whole industry was really taken aback by what Boeing did, or did not do. I think it was a huge wakeup call.

BRIAN SANTO: Right. And there was a clear, compelling competitive reason to move as fast as possible. There are essentially two major commercial aircraft makers – Boeing and Airbus – and Airbus seemed like it was ahead, right?

GEORGE LEOPOLD: Yeah. Right. Boeing felt pressure to compete with Airbus on... I think it's the A320 something [the A320neo]. And they had to get this thing out. They were facing the loss of business from key customers, so they got into a big hurry. And as we've reported, and others have reported, they put bigger engines on the aircraft to extend its range. That changed the airframe. That was corrected with MCAS, this software system that they didn't tell anybody about. And the FAA didn't make them tell anybody about it. And that's how we got to where we are now. We thought clearly this was an example of reliability and safety versus time to market, and of the problems that causes as systems become more complex.

And another point we made: One of our sources pointed out there's an equivalence argument going on here. That is, are the 737 Max 8s and 9s the same as the previous 737? Well, it turns out they weren't, because the airframe was fundamentally changed, with different flight characteristics. And it had this thing called MCAS that they didn't tell anybody about.

JUNKO YOSHIDA: I think it's important, this equivalence, as you call it. Because there's a huge cost involved if your new aircraft is not equivalent: if a new aircraft is introduced, you have to go through the whole certification process, right?

GEORGE LEOPOLD: Yeah. That's right. And the point that Missy Cummings from Duke University makes about the equivalence argument is: besides the 737 Max, how many other aircraft might have been certified as airworthy by the FAA using that same equivalence model, the claim that this aircraft is fundamentally the same as the previous one? Clearly the 737 Max wasn't. And she makes this point not just about aircraft but about other consumer products, so many of them. How many of them get approved – or certified, in the case of aircraft – based on equivalence? It's a dangerous trend.

BRIAN SANTO: So you've got the notion of equivalence out there. Then you get down to the engineering level. Once you've accepted that you're going to alter an aircraft rather than build a new one from scratch, you have very specific goals: make sure this thing flies. There's nothing nefarious about it. The problem started with a lack of regulation, and then moved into a lack of internal controls at Boeing that would have ensured everybody knew what was going on and could adjust accordingly. So when they started adding bigger engines, when they started adding new sensors, part of the problem was that they weren't talking to each other and stopping to figure out: what does this mean for the aircraft? Do you agree with that?

GEORGE LEOPOLD: Yeah. I mean, it's the old integration problem. How are you going to make all this stuff work? And obviously, in order to do that, you're going to have to communicate. I think it has been documented, for example, that Boeing outsourced the software that went into MCAS, that went into the autopilot. And you're outsourcing it to software developers who don't know much about flight controls on aircraft. So there's one fundamental error right there.

The other thing that comes to mind: I think the FAA's Inspector General came out with a report yesterday, basically saying that the FAA inspectors did not do their job. So that confirms a lot of what's been reported by us and by others.

BRIAN SANTO: That brings up The New York Times report from last week. It was a really nicely researched and well-written piece that agreed: Boeing's controls were off; the FAA should have been on top of it. But the author put the onus on the airlines that hire inexperienced pilots, and on the inexperienced pilots themselves – their lack of experience pretty much due to the way their employers handled their careers. The argument is that in both crashes of the 737 Max, the pilots' inexperience contributed directly to the tragedies. It was well researched; the piece talked about airlines getting spare parts from what were essentially junkyards. Are there counter-arguments to that?

GEORGE LEOPOLD: Well, as our source Greg Travis points out, the 737 was originally marketed to that part of the world, to the Lion Airs of the world, which were interested in low-cost airfares – and, in the case of Indonesia, in going from island to island on a 737 instead of a ferry. The Times article makes an important, fundamental point about lack of training and absolutely no regulation. But I think the discussion we had earlier still applies here. The Lion Air thing is a special case: the lax operations at Lion Air contributed directly to the accident.

So yeah. It's another piece of this story, that's for sure.

BRIAN SANTO: Be that as it may, we had Sully Sullenberger testify that he had actually gotten into the simulators and flown both flights, the Lion Air flight and the other one. He testified that in the simulators he could not compensate; he could not get out of those situations himself.

GEORGE LEOPOLD: Right. And he knew they were coming, and he knew of the existence of MCAS. And the Lion Air pilots did not know about the existence of MCAS because Boeing didn't tell anybody.

BRIAN SANTO: So lack of experience is definitely a serious charge, but it may not have come into play here.

GEORGE LEOPOLD: Yeah. The point we made is experienced pilots use autopilots grudgingly. And they train and they train and they train to be able to override them because they want those flight control surfaces. That's how you stay in the air and stay alive. But you can't do that unless you know about it. And Boeing didn't tell anybody.

JUNKO YOSHIDA: Yeah. That's the biggest crime, right? But also, if you don't have experience, you don't know the right questions to ask. I'm talking about the pilots. The story I heard from one of the safety experts is that airlines with a lot of pilots from military aviation backgrounds – Delta, for example, and Delta is not the only one – these guys ask a lot of questions. They actually demanded that Delta pay extra to add another angle-of-attack sensor. So they knew what to ask, and they pressured the airline to buy additional equipment for the sake of safety. But if you don't have that experience, you wouldn't ask that.

BRIAN SANTO: Well, that's Boeing. You've got airline safety, you've got passenger safety. Anytime something goes wrong with an airliner, it's a huge thing. For Boeing, losing business meant billions of dollars.

Junko, you wrote a story about hoverboards. This is not necessarily... I mean, you don't want hoverboards blowing up, but when something goes wrong with a hoverboard, it's usually one person on it. It's not good. And the competitive pressure wasn't billions, it was just getting the product out for the Christmas buying season.

JUNKO YOSHIDA: Exactly. But some of the fundamental elements involved in the hoverboard crisis are actually pretty similar. First of all, in the case of aircraft, you actually do have safety standards. If companies were making aircraft and lying about certain parts of the specification, that's another story. But in the case of the hoverboard, they didn't have a specification to start with. It was a new industry. They didn't have industry standards. That's number one.

Number two is the supply chain. You were talking about junkyard parts, right? Well, hoverboards were a new market. Suddenly a host of new players came into it, mostly from Asia, through a global supply chain. Many of them did not have any experience designing and building transportation equipment with batteries in it. They had no idea...

BRIAN SANTO: And those batteries are tricky to handle. They need to be handled well.

JUNKO YOSHIDA: Exactly. So actually, the hoverboard story has a nice ending, because UL, which used to be known as...

GEORGE LEOPOLD: …Underwriters Laboratories…

JUNKO YOSHIDA: …Underwriters Laboratories. Yeah, thank you. It's just called UL now. But anyway, UL stepped in and actually wrote a standard for hoverboards within a few weeks, over the Christmas and New Year's period. That sort of saved the whole industry, at a time when a lot of technologists and engineers in Silicon Valley say, Oh, forget about standards! Standards take forever to come to agreement. But UL came in from the sidelines and took the whole thing over; within several weeks they were able to come up with the standard and kind of save the day. It's a good story.

BRIAN SANTO: It's interesting, because UL can offer safety parameters. The Silicon Valley disdain for the standards process has its genesis in having so many different companies trying to get together, all contributing intellectual property, with endless arguments about which way is the best way to do it; that does legitimately drag on. But by the same token, after it's done dragging on, you typically end up with a bullet-proof system.

JUNKO YOSHIDA: But then, George, we have to come back to the fact that we always believed the FAA was the gold standard. So what happened with the FAA?

GEORGE LEOPOLD: Yeah. They basically let Boeing self-certify. And the Inspector General's report that came out yesterday underscores that point. These systems have to work. You've got to have a regulatory regime in place. And I'm hoping that all these cautionary tales we're telling are going to raise awareness that these things have to be bullet-proof. As you're reporting on autonomous vehicles, there have to be standards in place, and they have to be enforced. So in some ways, maybe this will raise awareness, and consumers will say, Hey, wait a minute! I'm not going to get on this thing until I know it's been properly certified by somebody who knows what they're doing. I know you'll be doing a lot of reporting on the autonomous vehicle market, so it's going to play into that as well.

JUNKO YOSHIDA: It's the third party, right? A third party needs to certify. You can't have somebody who used to work at Boeing sitting on the FAA panel that certifies the thing, right? That was a shock.

BRIAN SANTO: Contributors to our special project on reliability versus time to market included George, Junko and Martin Rowe, whose article appears in our sister publication, EDN. Links to all the articles in the package are on this podcast's web page on eetimes.com.

For months now, International editor Junko Yoshida has been pursuing a key question about autonomous vehicles: How safe is safe? Experts have discussed numbers with her – 98 percent safe, 99 percent, 99.99. Those numbers sound as if they might be high enough to be encouraging, but are they really? What do those numbers actually mean?

In terms of world travel, Junko is like our very own Carmen Sandiego. In recent weeks she has been in Shenzhen, China; Dallas, Texas; and Paris. And when we recorded this segment, she was someplace else. I had to ask her: Where in the world are you?

JUNKO YOSHIDA: I am at the AutoSens conference in Brussels. And I happened to attend a lecture given by Phil Koopman. He's the CTO of Edge Case Research, and he is a professor at Carnegie Mellon University.

It struck me that, at the very beginning of his lecture, he talked about how back in 1995 we already had an autonomous vehicle that worked 98% of the time. The industry has spent the 25 years since then working on the remaining 2%. So my first question to Phil was: What is the greatest obstacle to achieving automotive safety in the era of artificial intelligence?

PHIL KOOPMAN: Hi, Junko. It's great to be here in person with you.

JUNKO YOSHIDA: All right. Finally.

PHIL KOOPMAN: This is the first time we've met in person. So it's great.

I think there are a number of different things, so let me start with a social, societal issue. And that is that people have to trust these vehicles. And that trust has to be built on a solid foundation. If it's built on hype, inevitably there'll be a letdown, and the bubble will deflate, and people will lose confidence. And we already see that kind of dynamic playing out a little bit. So I think it's time for the industry to get very serious about being extremely realistic and very transparent about the pros and the cons and what exactly the benefit is, so that if there's the occasional speed bump, people won't be disillusioned. They'll say, Okay, we were told that would happen. That's what we expected. We're still on track.

JUNKO YOSHIDA: Right. You know, I'm actually going to inject myself here: during this talk, you talked about the difference in the safety approach between ADAS and autonomous cars. You mentioned, for example, AEB [automated emergency braking]. Can you explain that?

PHIL KOOPMAN: Sure. And this of course is somewhat simplified, but it's an attempt to get at the fundamental differences. On a technical basis, the difference is that in ADAS you typically tune for very few false alarms, because if you stop a car on a highway in front of a truck, that's a bad outcome. And you're willing to accept the fact that sometimes there are times you should activate, and you don't. The reason is that it's the driver's fault; the driver shouldn't have been almost hitting something anyway. And if nine times out of ten you can prevent a death, you just saved nine lives.

JUNKO YOSHIDA: Right.

PHIL KOOPMAN: And so ADAS doesn't have to be perfect, because the driver's supposed to be in charge, and it's supposed to kick in when the driver makes a mistake. This is classical ADAS. We're not talking lane-keeping; we're talking stability control, emergency braking, anti-lock braking, things like this.

You switch over to complete self-driving, fully autonomous, and if you were to use that same ADAS system – nine times out of ten it stops – and your autonomy people say (and this is a bad idea), "All right, we're not worried about hitting things, because we have AEB, and that prevents us from ever hitting anything," then one time out of ten it's going to hit – and I made up that number; it's just an example – and you haven't saved nine lives. You've killed one person. And so this is why it's a bad idea. And again, I made up the numbers just to prove the point. But the point is that ADAS systems are not supposed to be perfect. They're not advertised as perfect. And that's for a reason: when you have a human driver in charge, it's a different situation than a fully self-driving car. And so the technology has to change. The tuning has to change. The sensors have to change. You could still use AEB, but that had better be the backup. It can't be the primary defense.
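[Transcript note: Phil's arithmetic, written out with his explicitly made-up numbers. The 9-out-of-10 figure is illustrative, not a measured AEB activation rate.]

```python
# Koopman's illustrative numbers (explicitly made up, not field data).
p_aeb_stops = 0.9          # AEB prevents the crash 9 times out of 10
events = 10                # imminent-collision situations

# ADAS framing: the driver is primary; AEB only fires on driver error.
# Baseline without AEB would be ~10 fatalities, so:
lives_saved = events * p_aeb_stops            # 9 lives saved vs. baseline

# Full-autonomy framing: the same AEB is now the PRIMARY defense.
fatalities = events * (1 - p_aeb_stops)       # 1 person killed by the system

print(f"as backup: ~{lives_saved:.0f} saved; as primary: ~{fatalities:.0f} killed")
```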

JUNKO YOSHIDA: So let's get back to my original question. We were talking about how you achieve automotive safety in the era of artificial intelligence. You talked about the social optics, right? I mean, how important it is not to hype, but to build reasonable expectations so the public understands where AEB is today. What about the scientific research? How do you build trust based on the engineering?

PHIL KOOPMAN: There has to be solid engineering. There has to be more transparency than we're seeing from most of the companies right now. And ultimately, you can promise all you want, but the public has to believe there's something there. And you can't just... The big companies can't just say, “Trust us; we're smart.” They used to be able to get away with that. But that ship has sort of sailed. And so you need... I like to say you need a safety case, a safety argument, saying, Here's why we're safe. Here's why you should believe us. And someone credible and independent has checked the homework to make sure we believe it.

From a technical basis, in my view, the hardest part... there are many hard parts. But the hardest part that really keeps me thinking is prediction. So if you see someone, a pedestrian, on a sidewalk: are they going to step off into the middle of the road or not? And humans are really good at this – not perfect, but really good. But if you just say, "All right, there's an object and it's not moving; we're good to go," that's not what you need. If the person's distracted by their cell phone and they're walking toward... what happens next? Well, they step into the road. And a human knows this. So that's prediction of human behavior.

Now, you could do simple prediction and say, Well, it'll keep doing what it's doing. That's not how humans work. If you see a bunch of kids having a shoving match at the bus stop, what happens next? One of them spills into the road. A human driver won't be perfect, but he'll say, "Oh, that looks dangerous; I'm slowing down." And you would hope that self-driving cars have that degree of understanding of their environment.

But to do that prediction – it's a bunch of kids shoving – well, you have to know they're kids; you have to know they're shoving. That's perception. So perception underlies prediction. And that's right now one of the prime applications for machine learning. And that's hard to validate. They say, All right, I'm 90%, I'm 99%. Okay, 99% is good. But what about the hundredth kid?

Now in fairness, they use sensor fusion, they use radar, they use lidar, sometimes they use vision. And they try and say, “Well one of them will see them.” There's a lot you can bring to bear. But it's not as simple as, “Oh our camera always sees them.” Because it's 99%. And I like to say, that's nice. That's two 9s. But you need like eight 9s. [99.999999 percent]

JUNKO YOSHIDA: Okay, tell me a little bit about this eight-9 thing. Because I was talking to my editor, Brian Santo, and he was telling me last night, Why eight 9s? In the semiconductor world, I think the standard is six 9s.

PHIL KOOPMAN: Right. Okay. And part of it, by the way, is that I omitted the units: per hour versus per mile, whatever. In the semiconductor world – I used to be a chip designer – five or six 9s is what you get. That's just all you're ever going to get. And that's saying, basically, it's probably going to fail at around a hundred thousand hours or something like that. And for life-critical applications, that's not good enough.

So for aircraft, it's nine 9s. Okay, airplane falls out of the sky: it's nine 9s. And they have two engines. Why? The engines are more like five 9s each, for in-flight shutdown. But that's why you have two. You can say they're not both going to shut down on the same flight, and when one goes, you fail over and complete the mission. Self-driving cars will be the same. Now, the nine 9s – I don't know exactly what the right number is. Eight 9s, nine 9s. But here's just the starting point: In the US, it's a hundred million miles between fatal accidents. Okay? A hundred million miles. So count up the number of zeroes. That's eight zeroes, right? So right there, that's eight 9s per mile, and you're no different than a human driver. And that's for electronic failure only. But there are probably other sources of failure, so you have to add another 9 as a budget. In aircraft, that's how they got to nine 9s: there are like a hundred things that could fail, and you do the math. So there you go; you need eight or nine 9s.
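[Transcript note: Phil's "count the zeroes" argument, written out. The 100-million-miles figure is the US fatal-crash rate he cites; the extra 9 is his budgeting rule of thumb.]

```python
import math

# US average Koopman cites: ~100 million miles between fatal crashes.
miles_between_fatal = 1e8
failure_rate_per_mile = 1 / miles_between_fatal    # 1e-8 per mile

# Reliability expressed as "nines": 1 - 1e-8 = 0.99999999 per mile.
nines = -math.log10(failure_rate_per_mile)         # 8.0
print(f"{nines:.0f} nines just to match a human driver")

# If electronics are only one of many failure sources, each gets a slice
# of the total budget, hence roughly one more 9 on top (his rule of thumb).
print(f"~{nines + 1:.0f} nines with a failure-source budget")
```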

JUNKO YOSHIDA: All right, got it. This is my last question: You talked about perception and prediction. And what you mentioned about prediction... Can we ever teach machines to predict? How do you do that? What technology is available to do that?

PHIL KOOPMAN: Right now they're using machine learning for that, too. And statistically, you can get pretty good at prediction, but you're back into the one 9, two 9s.

JUNKO YOSHIDA: It's the same argument.

PHIL KOOPMAN: Now, you can still make arguments. You can say, "As long as my prediction is good enough, and I'm conservative enough that if I mispredict by a little I still have some safety margin." And in my mind, what we're going to need to see is systems that are very good at knowing what they don't know. Instead of always having an answer – because a binary classifier has two bins; it's always one bin or the other – what you need it to be able to say is, "I'm not sure." And if it's not sure, it goes really cautiously. That's sort of the way out of the problem.
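[Transcript note: A sketch of the "know what you don't know" idea: a classifier that abstains when confidence is low rather than being forced into one of two bins. The threshold is illustrative, not a calibrated safety parameter, and a real system would need calibrated probabilities, not raw softmax scores.]

```python
import numpy as np

def classify_or_abstain(logits: np.ndarray, threshold: float = 0.95):
    """Return a class index, or None ("not sure") when confidence is low.

    The 0.95 threshold is illustrative only.
    """
    z = logits - logits.max()                 # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()       # softmax
    if probs.max() < threshold:
        return None                           # abstain -> behave cautiously
    return int(probs.argmax())

# Confident case vs. ambiguous case:
print(classify_or_abstain(np.array([4.0, -2.0])))   # -> 0
print(classify_or_abstain(np.array([0.3, 0.1])))    # -> None (not sure)
```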

JUNKO YOSHIDA: All right. Well very good. Thank you so much for your time, Phil. It was really a pleasure to meet you finally.

PHIL KOOPMAN: Thanks, Junko. It's a real pleasure to have a chat with you.

BRIAN SANTO: EE Times wrote multiple stories about the AutoSens show. You can find them by scrolling through our home page at eetimes.com, but there are also some handy-dandy links directly to those stories embedded in the transcript of the podcast you’re listening to now.

And now – come along with me as we take a walk down memory lane.

On September 23rd in 1889, Nintendo was founded. The company's original product was playing cards. All these years later, the company is still the largest producer of playing cards in Japan. Oh, and somewhere along the line the company got into game consoles and has been having some success with those, too.

There are certain sounds that were once omnipresent that have all but disappeared from the world. Have you ever fed punch cards into a computer? When was the last time you heard that sound? The last mechanical cash register I know of is at the Maple Leaf Diner in Portland, Oregon; I haven't seen another in 25 years or more.

On September 24th in 1979, CompuServe initiated the first major commercial online service, the CompuServe Information Service. I don’t know this for a fact, but I’m confident that if CompuServe was open for business on September 24th, the first online flame war was waged no later than September 25th. CompuServe was followed by AOL, Prodigy, and the lesser-known GEnie. A vestige of CompuServe actually still remains; it exists as a web page.

The sound of a dial-up Internet connection being established is another one of those audio experiences lost forever.

 (AUDIO: DIAL-UP MODEM)

BRIAN SANTO: And I gotta say, good riddance to that noise.

And that’s your Weekly Briefing for the week ending September 27th.

This podcast is Produced by AspenCore Studio. It was Engineered by Taylor Marvin and Greg McRae at Coupe Studios. The Segment Producer was Kaitie Huss.

You can get to this podcast on the EE Times web site, through services such as Blubrry, iTunes, and Spotify. The transcript of the podcast can be found on EETimes.com, complete with links to the articles we refer to, along with photos and video.

We’ll be back next Friday with a new edition of EE Times on Air. I’m Brian Santo.

 

Thank you for listening to this episode. EE Times On Air is now also available on Ximalaya and Qingting FM. Subscribe and listen!