Racing to Offer Mobileye ‘Killer Chip’

MADISON – With Mobileye acknowledged as king of the hill in the red-hot automotive vision market, it has become incumbent upon competitors to claim that they’ve got “a Mobileye killer solution.”

Aspirations aside, though, no rival has demonstrated a credible solution of its own. At least, not yet.

Potential contenders, however, are saying that they’re ready to take on the challenge. During the Consumer Electronics Show earlier this month, companies ranging from MediaTek and Renesas to NXP and Ambarella told us they are working on “alternatives” to Mobileye’s EyeQ chips. On the opening day of CES, On Semiconductor announced that it has licensed CEVA’s imaging and vision platform for its automotive advanced driver assistance system (ADAS) product lines.

Asked who’s likely to become the next Mobileye and what it would take, industry analysts and players offered opposing views. Some declared the game already over; others believe the market remains wide open.

For example, Pierre Cambou, activity leader for Imaging & Sensors at Yole Développement, is the most pessimistic. He told us, “Image processor companies will hardly catch up with Mobileye. This is already too late, as the ecosystem is rather well defined now.”

Meanwhile, Mike Demler, a senior analyst at The Linley Group, believes, “There are a lot of openings here, and a lot of competitors.” He explained, “We tend to focus on the more far-out Level-3/4/5 systems, but Level 2 is still at an early stage of deployment.”

Gideon Wertheizer, CEO of CEVA, also believes there’s still a lot of room left for vision SoC vendors to compete – especially in ADAS and in-vehicle vision solutions (such as driver monitoring). By adding more smarts and vision SoC functions to their image sensors that are otherwise getting commoditized, “sensor guys have a potential to make it a lucrative business,” he told us.

Automotive Imaging Market Breakdown — External processing share will increase to 30% of total cost (Source: Yole Développement)

But before getting into further analysis of the competitive landscape on vision SoCs, it is useful to segment the imaging market for the automotive sector.

Image sensors, vision SoCs, sensor fusion
As Wertheizer pointed out, there are three segments: image sensors (generating imaging sensory data), vision SoCs capable of offering Autonomous Emergency Braking (AEB), Lane Departure Warning and other functions, and sensor fusion (fusing data from multiple cameras and adding other sensory data from radar, lidar and V2X as redundancy).

Of the three, the sensor fusion market is likely to be the most hotly contested. Mainly designed for autonomous cars at Level 4 and Level 5, the sensor fusion chip is based on a high-performance, multi-core architecture. It will be paired with a high-compute processor – like those from Intel and Nvidia – capable of running Artificial Intelligence (AI)-based “driving policy,” Wertheizer explained.

In his opinion, this is still uncharted territory, with no industry standard and no industry consensus.

What remains unclear is how much image processing should be done on vision SoCs before the data migrates to sensor fusion. Torsten Lehmann, senior vice president and general manager responsible for NXP’s car infotainment & driver assistance, told us, “There is a growing trend among car OEMs and Tier 1’s who say they want rawer imaging data” so that they can fuse it on their own.

For the time being, though, Mobileye’s EyeQ5, due for a 2018 launch, will be the incumbent chip and everyone’s target. While companies like Qualcomm and MediaTek are surely hoping to play in the sensor fusion chip market, Wertheizer observed, “Tier 1’s and car OEMs might be also interested in developing their own in-house [sensor fusion] chips.”

Vision SoCs
What’s considered more immediate, and still a growing segment, is the vision SoC market – separate from sensor fusion – designed for ADAS in Level 2 and 3 cars.

Automotive Imaging System Revenue Forecast 2012 – 2021

Imaging systems plus vision processing units, already a $2B market in 2015, will reach $10B in revenue by 2021 (Source: Yole Développement)

As Yole’s Cambou said, “Clearly the current [Vision SoC] landscape is segregated in two – the ADAS and infotainment.” He put Mobileye in the ADAS segment, while he sees Texas Instruments, Toshiba, Renesas and all others in the “infotainment” category. He sees “Nvidia and Toshiba doing the right move to be upgraded in ADAS, while TI and Renesas less likely to move in this direction.”

Others, however, see the situation a little differently. They believe that many more SoC vendors are poised to gun for vision SoC sockets in autos.

In CEVA’s view, in theory, this is where image-sensor companies like On Semiconductor plan to add more value to their own sensors. They could then potentially clash with video processing chip vendors like Renesas, Texas Instruments, Toshiba or even Rockchip.

Linley Group’s Demler is convinced that the race [for ADAS chips] is hardly over. “Mobileye said they shipped 4.5 million chips last year, and they highlighted Level-2 AEB systems.” He explained, “That’s just 5% of passenger vehicles sold in 2016, so not exactly ‘dominance’ when you put it in perspective of the overall industry.”

Demler pointed out that Mobileye has a valuable relationship with Delphi. But there are many other suppliers, each with their own relationships. “Look at how easily Tesla switched to Nvidia.” Demler sees the ADAS space as “very dynamic and rapidly evolving,” with other processor companies such as NXP, Qualcomm, Toshiba, Renesas and TI all competing. Demler added, “Some Tier 1’s may also build their own chips, using IP from Cadence, CEVA, Synopsys, or Videantis.” He concluded: “In the race to develop self-driving cars, Mobileye may currently have the lead, but it’s way too early to call them dominant.”

Playbook for Mobileye’s rivals
But then, what’s the playbook for competing vision chip companies versus Mobileye?

Alberto Broggi

“Better vision algorithms” and “a more open system” are the two answers given by Alberto Broggi, general manager of VisLab (Parma, Italy). Ambarella, best known for its high-end video compression and image processing chips, is now planning to move into the autonomous car market via its 2015 acquisition of VisLab, an automotive vision firm with expertise in autonomous vehicles – including a 15,000-km test drive from Parma to Shanghai in 2010.

Ambarella, although relatively new to the auto market, sees a solid chance to get a foot in the door. Besides its expertise in high-resolution image processing under high-dynamic-range and low-light conditions on a low-power SoC, Ambarella says it is now integrating traditional computer vision with the deep learning and neural network capabilities it acquired with VisLab.

Phil Magney, founder & principal advisor at Vision Systems Intelligence (VSI), observed, “Mobileye is the best at image recognition because their hardware and software is so tightly integrated (this is the same reason Apple typically works better than others).  Most vision processor alternatives don’t have dedicated vision algorithms – they typically port third party vision algorithms to their instruction sets.”

In Magney’s opinion, “It is possible that Mobileye could lose their grip on image since there is a lot of innovation on the image side applying neural networks.”

And then, there’s a prevailing argument against Mobileye for its “black box” solution.

Magney said, “Some OEMs and Tier 1’s want to go deeper into the value chain to have more control over the applications. Mobileye solutions are still black box in this regard, meaning that Mobileye is providing the whole stack on the perception side.”

Luca De Ambroggi, principal analyst, automotive semiconductors at IHS Markit, agrees.

For Mobileye’s rivals, obviously, the race comes down to “performance, including accurate and reliable software and algorithms where Mobileye is very strong,” said De Ambroggi. If you cannot compete just on performance, flexible solutions are the other card you can play, he added. By flexible solutions, he means “more ‘open’ stack to allow OEM to differentiate and add their own value.”

On the other hand, Yole Développement’s Cambou doesn’t necessarily believe that “open” stack is the answer.

He said, “First, let’s acknowledge that Mobileye has set the standard of video based ADAS and in particular Automatic Emergency Braking (AEB).” Tesla, Volvo, Ford, Mazda, GM, Renault and the world’s leading automaker, Volkswagen, have been the main beneficiaries, explained Cambou.

“There was initially big reluctance from the big tier one companies to partner with Mobileye since its approach was relatively closed (think Apple),” Cambou acknowledged. “However, the robustness of its technology did translate into large success, not just for Mobileye itself but also for companies such as TRW, Autoliv, Magna and more recently Valeo and Delphi.”

In Cambou’s opinion, the automotive ecosystem Mobileye has been able to build has given the Israeli company an immeasurable lead. The vision SoC business for the automotive industry “is now turning into a big boys’ game.”

Cambou said Mobileye is outdistancing other vision SoC companies further by trying to solve problems in the next chapter of autonomous driving. As observed in a press briefing by Mobileye’s co-founder and CTO during CES, “Mobileye is no longer focusing on the sensing side (camera and hardware) which will be the part handled by Tier 1’s. Mobileye wishes to become the platform for the real time mapping part in cooperation with the mapping companies (Here, Zenrin, TomTom, Google, Baidu…), while the driving part itself will be handled by car manufacturers and ECU providers,” Cambou explained.

To-do list for Vision SoC designers
When Mobileye and its rivals square off, there are certain things vision SoC designers must do. In software, said Cambou, you need to be first and foremost “an image analysis specialist fully up to date with the latest Convolutional Neural Network (CNN) approach.” Second, you must “master real time video handling / image data analytics.”
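At the heart of the CNN approach Cambou refers to is the 2-D convolution operation that these chips must accelerate. A minimal NumPy sketch of that building block – illustrative only, not any vendor’s implementation – applied with an edge-detecting kernel:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2-D convolution (valid padding), the basic CNN building block."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A synthetic 8x8 frame whose right half is bright, convolved with a
# vertical-edge (Sobel-x) kernel: the response peaks at the boundary.
frame = np.zeros((8, 8))
frame[:, 4:] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
edges = conv2d(frame, sobel_x)
```

A vision SoC runs millions of such multiply-accumulate windows per frame, which is why dedicated hardware rather than a general-purpose CPU is the norm.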

As far as hardware goes, Cambou added, you must “have access to best-in-class digital technology node.” He added, “I am talking 7nm FinFET.” Then, you must “be able to master all SoC integration levels, and then have access to the best image processing IPs (GPU, CPU, MPCPU…).”

According to CEVA’s Wertheizer, that’s where smartphone apps-processor experience might shine. Although they may not be experts in vision algorithms, “the benefit of having been smartphone guys is that they know how to build complex SoCs in a short design cycle,” he noted.

Hardest of all, for SoC companies challenging Mobileye, is the complex, often intertwined automotive ecosystem. Cambou advised, “Be ready to invest massive amounts of money. This is a big boys’ game.” He added, “Partner with automakers and [enter] multiple technology partnership agreements on all levels – including camera, IP, maps.”

An already complex web of automated driving partnerships has been forged among the largest players (Source: Yole Développement)

Renesas, MediaTek and Ambarella all say that they will sample their own automotive vision SoCs – competing with Mobileye fair and square – in 2017.

Exactly how each of these SoCs might look remains unknown until they’re announced. Most likely, coming out this year will be “vision SoCs” for ADAS, rather than “sensor fusion chips” – in the strict sense – aimed at Level 4 and Level 5 autonomous driving.

But vision-chip companies are all saying that, in image analysis, they will offer both traditional computer vision-based HOG (histogram of oriented gradients) and newer deep learning-based CNNs. Asked why the dual track, VisLab’s Broggi said, “We are worried about corner cases. Deep learning can’t answer all the questions yet.”
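What a HOG feature actually encodes can be shown in a few lines. This is an illustrative, pure-NumPy sketch of a single HOG cell histogram (the helper is hypothetical and deliberately incomplete – classic HOG adds block normalization and bin interpolation on top of this):

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Gradient-orientation histogram for one cell: the core of a HOG descriptor."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as in classic HOG.
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(angle, bins=n_bins, range=(0.0, 180.0),
                           weights=magnitude)
    return hist

# A patch with a vertical edge: all gradient energy lands in one
# orientation bin, which is the hand-crafted cue a HOG detector matches.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
hist = hog_cell_histogram(patch)
```

Unlike a CNN, whose filters are learned from data, these histograms are fixed by design – which is precisely why they behave predictably in the corner cases Broggi worries about, and why vendors hedge by shipping both.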
