London Tech Week, the supercomputer race, and HD maps for autonomous driving

BRIAN SANTO:

I'm Brian Santo, EE Times Editor in Chief, and you're listening to EE Times On Air. This is your Briefing for the week ending June 21st.

Our lineup this week includes:

A guided tour through London’s Tech Week, an annual extravaganza of new technologies. Unsurprisingly, this year there was an emphasis on artificial intelligence, 

We’ll have a report on the race to build the fastest supercomputers,

And… you know those GPS apps you use for driving? Self-driving vehicles use maps, too, but they need maps that are far more accurate. We'll hear about that in a moment.

First up, EE Times editor Sally Ward-Foxton attended several events during London’s Tech Week. The UK is bidding to become a major hub for AI technology, but the same idea has occurred to other countries as well.

And a quick translation of English to English for you. Glastonbury is a music festival not dissimilar to the New Orleans Jazz Festival, where savvy festival veterans know to show up in knee-high rubber boots because enormous mud puddles are not uncommon.

Here’s Sally. 

SALLY WARD-FOXTON:

Last week I attended several events as part of London Tech Week, a series of conferences and exhibitions all about emerging technology, with a particular focus on AI.

London is positioning itself as an innovation hub for AI technology. The government’s figures say there are at least three times as many AI startups in London as any other city in Europe, and it’s the UK’s fastest growing sector. Prime Minister Theresa May opened the event by pledging millions of pounds of government funding to support the development of AI and related technologies such as quantum computing.

But London isn’t the only city with its eyes on the AI prize.

Alexandra Dublanche, a representative from the Paris Regional Government, introduced the city’s AI 2020 plan, which aims to support SMEs who want to get into AI with a range of programs designed to foster innovation. This includes AI technology challenges set by the Paris Region with several million Euros in prizes.

Aside from London and Paris, there are many other regions battling to be the home of AI technology.

Another presentation at the AI Summit was from Foteini Agrafioti, Chief Science Officer for the Royal Bank of Canada. She's also the head of Borealis AI, the bank's research institute for AI technology. She made a compelling case positioning Canada as the natural home of AI, given the country's academic prowess in the subject; it was of course scientists from the University of Toronto that famously won the ImageNet contest in 2012. Today, Canada has more than 600 researchers and 60 faculty members in universities working on cutting-edge AI research, she said.

The event’s exhibition also hosted big pavilions from countries such as Romania, Ukraine, and South Africa, keen to show off their country’s burgeoning AI offerings.

I also attended the CogX event (short for CognitionX), which is billed as “The Festival of AI and Emerging Technology.” It definitely had shades of Glastonbury on Monday, with several stages in large tents on the lawn, as the rain poured and the area became a quagmire.

While the CogX program covers everything from research to ethics, the presentations on the future of AI hardware were of particular interest.

Analyst James Wang from ARK Invest presented a detailed overview of the AI chip startup landscape. He described it as "The AI Chip Hunger Games," with dozens of startups, plus almost all the processor incumbents, desperately trying to become the next Arm, the next Intel, or the next Nvidia. He did say that not all the contestants will make it through the coming years – especially since half the startups are still only shipping PowerPoints.

James Wang also noted that there are at least half a dozen startups pursuing optical computing for AI – using lasers to do fast matrix multiplication – which sounds interesting, so we’ll have to keep an eye on that technology.

Graphcore CEO Nigel Toon presented a compelling 20-minute monologue on how today's hardware is holding back the development of artificial intelligence. We'll need to improve hardware performance by at least a factor of 100, he said. Graphcore's IPU chip is set to address this. It's the most complex processor ever built, with nearly 24 billion transistors and more than 1,200 processor cores.

Outside of digital computing, there were a couple of interesting presentations about alternative technologies.

Mike Henry, co-founder of American startup Mythic, presented the company’s analog computing technology, which stores 8 bits in a single Flash memory transistor, then uses a Flash memory array as an efficient matrix-multiply engine. In this case, there’s no bottleneck between the processor and the memory, because the processor IS the memory. It’s relatively cheap to build, as the technology uses the standard 40nm Flash process.
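
To make that "the processor IS the memory" idea concrete, here is a minimal numerical sketch of a flash-array matrix multiply. The array shape, the quantization scheme, and all the names are illustrative assumptions, not Mythic's actual design:

```python
import numpy as np

# Toy model of a flash-array matrix multiply (illustrative only).
# Each flash cell stores a weight as a programmable conductance;
# driving the rows with input voltages and summing the current on
# each column computes a dot product in one analog step
# (Ohm's law plus Kirchhoff's current law).

rng = np.random.default_rng(0)

weights = rng.standard_normal((256, 64))              # trained weights
step = np.abs(weights).max() / 127                    # 8-bit quantization step
cells = np.clip(np.round(weights / step), -128, 127)  # what the cells hold

inputs = rng.standard_normal(256)                     # activations as voltages

# The array produces all 64 column currents at once; digitally we
# model that as one matrix-vector product on the quantized weights.
currents = cells.T @ inputs
outputs = currents * step                             # rescale to real units

print("max error from 8-bit weights:",
      np.abs(outputs - weights.T @ inputs).max())
```

Because the multiply happens where the weights already sit, no weight data ever crosses a processor-memory bus, which is where the efficiency claim comes from.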

Mythic is pitching its technology squarely at AI inference chips in edge devices, where Henry said there are plenty of niches for 60+ inference chip start-ups to play in.

We also heard from the CEO of Oxford Quantum Circuits, Ilana Wisby.

Her company is pursuing quantum computing using superconducting metals cooled down to just above absolute zero – 10 millikelvin – and that's the easy part. Building repeatable, reliable qubits is still rather difficult; and to build something useful, you'd need an array of at least 50 high-quality qubits that can be addressed and manipulated on demand, she said. The potential of this technology is massive, but it's still quite a way off. Oxford Quantum Circuits is working on building a quantum device for early-stage applications in 5 to 7 years.
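
For a rough sense of why 50 qubits is the commonly cited threshold: a classical machine simulating n qubits has to track 2^n complex amplitudes, and the memory requirement explodes right around that size. A quick back-of-envelope sketch:

```python
# Simulating n qubits classically requires 2**n complex amplitudes.
# At complex128 (16 bytes each), memory needs grow past anything a
# classical machine can hold somewhere around 50 qubits.
for n in (30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2**30
    print(f"{n} qubits -> {gib:>12,.0f} GiB of state vector")
# 30 qubits: ~16 GiB; 40: ~16,384 GiB; 50: ~16,777,216 GiB (16 PiB).
```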

This is Sally Ward-Foxton reporting from London Tech Week for EE Times.

BRIAN SANTO:

Rick Merritt is based in Silicon Valley, but he was monitoring the International Supercomputing Conference held this week in Frankfurt, Germany. There's a perpetual international competition to build the fastest supercomputer.

So, Rick, any news from Frankfurt on which country gets supercomputer bragging rights?

RICK MERRITT:

There were no major changes in the rankings on the TOP500 list of supercomputers that came out recently. But this is really the quiet before the storm, because Intel, AMD, IBM, Nvidia and Cray are all working in various collaborations on three major exascale projects in the US, and China has three exascale projects of its own in the works. So by 2021 we're going to see a lot of shakeup on the list, and it will be quite dramatic.

BRIAN SANTO:

Okay. There are different ways to build a supercomputer, and different tests to measure supercomputer performance, and now the benchmarking is becoming a little controversial, right? 

RICK MERRITT:

Actually, the goal posts are moving. All the systems on the list today, the petaflops machines and the coming exascale systems alike, are measured on the Linpack benchmark, which most agree is not a really good measure of real-world performance. So there's a newer benchmark on the list, too, called High Performance Conjugate Gradients (or HPCG). Using that, most of the top US, European and Japanese systems still rank quite high. But interestingly, the Chinese systems drop down quite a bit under that benchmark.
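
For a feel for why the two benchmarks rank machines differently, here is a toy conjugate-gradient solve, the algorithm family HPCG is named for. It's a sketch under simplifying assumptions (no preconditioning, no MPI halo exchange), not the benchmark's actual code:

```python
import numpy as np
from scipy.sparse import diags

# Unlike Linpack's dense LU factorization, every CG iteration is
# dominated by a sparse matrix-vector product (SpMV), so the workload
# stresses memory bandwidth rather than raw floating-point throughput.

n = 100_000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Poisson
b = np.ones(n)

x = np.zeros(n)
r = b - A @ x
p = r.copy()
rs = r @ r
for _ in range(500):
    Ap = A @ p                      # the memory-bound SpMV kernel
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-8:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

print("residual norm:", np.linalg.norm(b - A @ x))
```

A machine tuned for dense matrix math can top the Linpack chart and still slide down the HPCG one if its memory system can't keep the SpMV fed.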

BRIAN SANTO:

Uh-hm. So who is currently in the lead?

RICK MERRITT:

Overall, China still really leads in the number of top supercomputers, with 219 systems on the list versus just 116 for the US. But the US does better in terms of the total performance on the list with 38% versus just under 30% for China.
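
A quick back-of-envelope check on what those two figures imply together, treating "just under 30%" as 30% and using the counts as quoted:

```python
# System counts and shares of total list performance, as quoted above.
china_systems, us_systems = 219, 116
china_share, us_share = 0.30, 0.38

ratio = (us_share / us_systems) / (china_share / china_systems)
print(f"The average US system is ~{ratio:.1f}x the average Chinese system")
# -> roughly 2.4x: fewer machines, but much bigger ones.
```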

BRIAN SANTO:

Alright. So that’s the system-level view. Anything new when it comes to ICs for supercomputers?

RICK MERRITT:

One interesting sidelight at the event was that Nvidia announced it's supporting Arm, releasing software that supports Arm processors next to its own accelerators. Nvidia is by far the most popular accelerator provider in supercomputing with its GPUs, which are used in 112 of the 133 systems that have some kind of accelerator. Nvidia already supports Intel and IBM Power processors, so it was no surprise that it would start supporting Arm as well, especially since the European Union has an exascale project that's using Arm processors, and Nvidia would certainly like to supply the accelerators for that.

BRIAN SANTO:

Thanks, Rick.

Next up: high definition mapping for automated vehicles, or AVs. You’ll also hear about ADAS, which is simply a reference to automated systems that are being deployed in vehicles to assist drivers.

For this segment we’re going to turn it over to International editor Junko Yoshida. Junko?

JUNKO YOSHIDA:

I recently had a chance to visit Phil Magney, founder and principal at VSI Labs in Minnesota. You know, while I was there, I started thinking about: “Why the hell would autonomous vehicles need a MAP?”

So I asked Phil to come to our show and give us a few basics on HD mapping for machines.

So what are the differences between the navigation map you and I use while we are driving versus a so-called HD, high definition, map designed for autonomous vehicles? What are the basic differences?

PHIL MAGNEY:

I think the best way to think about that is with regards to the fidelity.

JUNKO YOSHIDA:

Yeah.

PHIL MAGNEY:

A typical navigation map that's used for pedestrians or humans as they're going from Point A to Point B is largely a two-dimensional map. And so it's giving you information, telling you where to turn and so forth. But obviously the human then is controlling the vehicle.

Meanwhile, in an autonomous vehicle, really what that map is to the autonomous vehicle is geo-coded metadata.

JUNKO YOSHIDA:

Oh, wow!

PHIL MAGNEY:

So in other words, it's precision data. Much, much more precise than the standard-definition maps we're used to in our cars. This is a high definition map that contains all the road geometry, very specific information on the lanes and where those lanes are located, even certain markings like where the vehicle should stop at an intersection. This is all part of the data that's in that high definition map for the autonomous vehicle.
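
To picture what "geo-coded metadata" can look like in practice, here is a small hypothetical sketch of an HD map lane model. Every field name here is an illustrative assumption, not any vendor's actual format:

```python
from dataclasses import dataclass, field

# A navigation map stores roads as 2D links for turn-by-turn guidance;
# an HD map stores lanes, boundaries, and stop lines as precisely
# surveyed 3D geometry that the planner can query directly.

@dataclass
class LaneBoundary:
    points: list          # centimeter-level (x, y, z) polyline, world frame
    marking: str          # e.g. "solid_white", "dashed_yellow"

@dataclass
class Lane:
    lane_id: str
    left: LaneBoundary
    right: LaneBoundary
    speed_limit_mps: float
    successor_ids: list = field(default_factory=list)  # lane connectivity

@dataclass
class Intersection:
    stop_line: list       # exactly where the vehicle should stop

lane = Lane(
    lane_id="lane_0042",
    left=LaneBoundary([(0.0, 1.8, 0.0), (50.0, 1.8, 0.1)], "dashed_white"),
    right=LaneBoundary([(0.0, -1.8, 0.0), (50.0, -1.8, 0.1)], "solid_white"),
    speed_limit_mps=29.1,
)
```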

JUNKO YOSHIDA:

So it's almost like a cheat sheet for sensors. I mean, sensors are supposed to pick up a lot of things, but the map will augment them. I use the term "cheat sheet" because it lets them cheat a little, right?

PHIL MAGNEY:

Yeah, well, that's one way to look at it. I typically refer to HD maps as really geo-coded metadata that is essentially intelligence about the real world that that vehicle can utilize in making better decisions, smarter decisions, safer decisions. And it also takes some load off of the processing stack because certain things that it would normally interpret from its sensors, for example, now it's going to know that information already from the map in the vehicle.

JUNKO YOSHIDA:

Ahhh. Okay. Yeah, I think that during your event, somebody was talking about how an HD map could actually help you drive, I mean help the autonomous car drive, even when the street is covered with snow. Is that right?

PHIL MAGNEY:

That is accurate. Yeah, we've actually, in Minnesota, we do a lot of applied research, as you know.

JUNKO YOSHIDA:

Yeah.

PHIL MAGNEY:

And we've been working with high definition maps in these inclement weather conditions to see how much benefit they add to the vehicle. And you're absolutely right: When the lane lines are completely covered, if you have a high definition lane model within that mapping stack, then you basically have virtual lane lines. As long as you can localize yourself with some precision, then you can use those virtual lane lines, even if the cameras can't see them.
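
As a concrete illustration of the idea, here is a minimal 2D sketch: localize the vehicle, then project the map's lane polyline into the vehicle frame. The flat pose model and all names are assumptions for illustration, not VSI Labs' actual stack:

```python
import numpy as np

# "Virtual lane lines": given a good pose estimate, lane geometry from
# the HD map can be transformed into the vehicle frame even when snow
# hides the paint.

def world_to_vehicle(points_xy, vehicle_xy, heading_rad):
    """Transform map points from the world frame into the vehicle frame."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    rot = np.array([[c, s],
                    [-s, c]])                 # rotation by -heading
    return (points_xy - vehicle_xy) @ rot.T

# Lane boundary polyline from the HD map (world coordinates, meters).
lane_line = np.array([[10.0, 5.0], [20.0, 5.2], [30.0, 5.5]])

# Pose from the localization stack (e.g. GNSS plus lidar map matching).
pose_xy = np.array([12.0, 4.0])
heading = np.deg2rad(2.0)

virtual_line = world_to_vehicle(lane_line, pose_xy, heading)
print(virtual_line)   # lane geometry the controller can track, snow or not
```

The catch, as Magney notes, is localization: the virtual lines are only as good as the pose estimate used to place them.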

JUNKO YOSHIDA:

That's good. Now, I hate to bring this up; I think we tend to bring up Tesla every time we talk about the future of autonomous vehicles. But I have to, because a couple of months ago, during the Autonomy Day event the company held in the Bay Area, Elon Musk famously said that GPS maps for self-driving cars are a "really bad idea." Do you agree with that?

PHIL MAGNEY:

Actually, I do not. He's developed a system, or the company has developed a system based on heavy, heavy use of cameras, obviously, and heavy, heavy use of artificial intelligence. If you throw enough brute force computing at the problem, you can solve it that way. But if you could have something that would improve the performance or certainly improve the safety, then why would you not do it? So that's my take on that.

And my other take on Tesla is, it's not entirely true that they don't rely on mapping data at all. There are some mapping assets that are used with autopilot, and specifically there's a feature called Navigate on Autopilot, and that's using lane-level information to help guide that vehicle.

So you have to take his comments a little bit with a grain of salt. Maybe he had tried... I mean, the whole business of high definition maps is relatively new as far as commercial companies offering solutions go. That's why the robotaxi developers are building their own. So maybe he just hasn't had a chance to fully identify a good solution for Tesla just yet.

JUNKO YOSHIDA:

Okay. Got it. All right. Well, thank you very much for taking time to talk to us, Phil. It's always a great pleasure talking to you.

PHIL MAGNEY:

Well, it's my pleasure, Junko. Thank you. This is a topic that's very important to us all, and we'll keep you posted as we continue with our research on these topics.

BRIAN SANTO:

And finally, this week’s bit of tech history:

One hundred and sixteen years ago this week, in 1903, the Ford Motor Company was founded.

In 1949, Jay Forrester recorded an idea for core memory. Core memory would take us to the moon, but it was eventually replaced by semiconductors.

In June of 1976, the Viking 1 probe entered orbit around Mars. It would become the first US craft to land on the red planet.

AUDIO:

Touchdown! We have touchdown! (CHEERING) 

BRIAN SANTO:

And that was your Weekly Briefing for the week ending June 21st.

This podcast is produced by AspenCore Studio. It was engineered by Taylor Marvin and Greg McRae at Coupe Studios. The Segment Producer was Kaitie Huss.

You can find this podcast at EETimes.com and on your favorite sites for podcasts, including Blubrry – which ironically drops the double-E – along with iTunes, Spotify and Stitcher.

We’ll be back with another episode next week.

I’m Brian Santo and this is EE Times On Air.

Thank you for listening to this episode. EE Times On Air is now also available on Ximalaya and Qingting FM. Subscribe and listen!