From one automaker to the next, there can be significant differences in the capabilities and quality of driver-assist technologies.
Whether it's a vision-based sensor system or a driver monitoring system (DMS), some vendors have reliable products and others simply shouldn't be trusted yet, according to one semiconductor executive, On Semiconductor's Ross Jatou.
The problem has less to do with any given supplier’s capabilities than it does with the lack of relevant benchmarks or standards establishing minimum acceptable performance levels.
On Semiconductor has forged a position as a leading image sensor supplier for the automotive market, and it works with a wide range of auto OEMs, Tier 1 automotive suppliers, and other technology companies catering to the auto industry. EE Times buttonholed Ross Jatou, On Semi's senior vice president and general manager of the Intelligent Sensing Group, during last month's AutoSens conference.
We asked: What is On Semi's growth strategy in the ADAS and autonomous vehicle markets? What ammunition does it have? Is it angling to become the "Mobileye" of driver monitoring systems?
Our conversation wasn't limited to On Semi's own automotive strategy. We asked Jatou about what looks like a drastically changing relationship between car and driver, his assessment of AAA's disappointing test results for active ADAS features, and the auto industry's remaining technology gaps. Here's an excerpt of our conversation.
EE Times: While preparing for this interview, I realized that before joining On Semi in 2015, you were head of automotive hardware at Nvidia, when Nvidia was just cracking the automotive market…
Ross Jatou: That's where I really started my passion for automotive and making cars more intelligent, more fun to be around. Nvidia's focus then was on infotainment, using the power of 3D graphics… along with navigation and audio processing. Just brainstorming how to harness this power to make the car a more pleasant, safe experience naturally led Nvidia to driver assistance and autonomous driving.
EE Times: I imagine the two companies — Nvidia and On — are so different.
Ross Jatou: Absolutely. On Semiconductor has a long heritage. Its roots in automotive go back to the '60s, actually, prior to On becoming On. It's been a privilege because it basically gives me an opportunity to see almost all the car manufacturers and all the Tier Ones. We get to engage with them on their strategies, their architectures for the car, and their goals and visions for the intelligent car of the future.
Car and Driver
EE Times: The changing relationship between car and driver is central to the era of ADAS, and to the autonomous vehicle future. How do you perceive this shift?
Ross Jatou: You kept mentioning “car and driver.” It reminds me of the magazine “Car & Driver,” all about car enthusiasts. And car enthusiasts were all about the engine, and maybe the audio system. It was a small group of enthusiasts.
By adding all this intelligence to vehicles, whether infotainment, digital instruments or clusters, ADAS or autonomous driving, carmakers have really expanded the pool of enthusiasts out there.
So, I see a better relationship between the driver, even passengers, and the car. In the past, that was kind of a utility…a car was an appliance — kind of like a toaster. It’s something you get and you use it.
Now, the enthusiast market for the vehicle has expanded. A lot more people now have a relationship with [things like] over-the-air updates, Android Auto or the Apple solution… So a car is no longer a static purchase, where what you buy at delivery is what you get for the lifetime of the car. It's updating and it's improving.
Another aspect: the [car and driver] relationship in the past was kind of uncompromising. Drivers were told, this is how it works. If there was a failure within 10 years, people were not happy with the experience.
Now, with this new relationship with an intelligent vehicle, it's almost as though the driver has more respect for the intelligence of the vehicle. Drivers are a little bit more accommodating and empathetic about some errors. For example, if you just go to the web and type in "Tesla reboot," you'll see all the reboot instructions, and you see that people are okay with it. If the infotainment system locks up at times, Tesla tells users how to reboot it. This was unheard of in the past.
EE Times: I have a few bones to pick with what you just said. I think you need to set aside Tesla. Tesla has a huge fan base, and it’s an exception. I don’t think every carmaker enjoys that strong relationship between the brand and the driver.
Another thing I want to bring up. In the smartphone era, consumers are getting used to the reality that sometimes things don’t work. They’ve got to turn off their smartphone and reboot.
The same goes for software updates, which are not necessarily always reliable. Updates could come at an inconvenient time and might inadvertently disable some apps’ features. In the end, we condition consumers to think, “Oh, well, we’ve got to put up with this,” which I don’t think is a good thing at all.
Ross Jatou: Absolutely, especially when it comes to safety-critical applications in driver assistance and autonomous driving. We cannot compromise on the safety of these vehicles.
Whether hardware or software integration in the vehicle, the litmus test I use is: Will I put my ten-year-old son in a driverless car and send him to school? That really makes people ask themselves, “Have I looked at all possible failure modes? Have we addressed them?”
On Sensor Modalities
EE Times: How do you rate today’s sensing technologies available to the automotive market? Do you think the industry is still struggling to find better solutions?
Ross Jatou: On Semiconductor and other companies have a breadth of sensor modalities, including image sensing, radar, LiDAR and depth sensors. You could have a very small, very cost-effective sensor or a very large, heavy-duty sensor that can go to Mars, gather data and send it back. You have that whole spectrum, and selecting the right one is important.
Now, about today’s technology in vehicles. How do we rate them?
Yeah, I would say they have their shortcomings, really because of the constraints placed on the vehicle. There are cost and size constraints. There's always a balance: what's the best technology I can put in while maintaining the size and cost of the vehicle? That's especially true in human vision applications.
So if it's a rearview camera, just providing information to the driver about the surroundings, those have a lot more constraints on size and cost. As you get to higher levels of automation, Level 2 and beyond, sensor fusion helps.
So where one sensor falls behind, the other sensor helps out. The analogy I give my team is from the animal kingdom: the rhinoceros. A rhinoceros is very powerful, very intelligent, but at 15 feet it cannot distinguish a tree from a human; it cannot classify those two objects. With its heightened ability to smell, it can compensate for bad eyesight.
So when your image sensors are challenged with poor visibility, whether it's fog or nighttime, a radar in that case can help. Now, I'm really bullish on our LiDARs, but their constraint is cost… But when you talk about higher-level automation and shared or commercial vehicles, LiDAR could make economic sense.
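Jatou's rhinoceros analogy maps naturally onto a confidence-weighted fusion scheme. Below is a minimal sketch of that idea; the Detection structure, function name, thresholds and numbers are all illustrative assumptions, not code from any production ADAS stack.

```python
# Minimal sensor-fusion sketch: weight each modality by an estimated
# confidence that degrades in conditions the sensor handles poorly.
# All names and numbers here are illustrative, not from a real system.
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float   # estimated distance to the object, in meters
    confidence: float   # 0.0 to 1.0, the sensor's self-reported confidence

def fuse_distance(camera: Detection, radar: Detection) -> float:
    """Confidence-weighted average: when fog or darkness drops the
    camera's confidence, the radar's estimate dominates, and vice versa."""
    total = camera.confidence + radar.confidence
    if total == 0:
        raise ValueError("no usable sensor data")
    return (camera.distance_m * camera.confidence
            + radar.distance_m * radar.confidence) / total

# In clear daylight the camera dominates; in fog the radar takes over.
print(fuse_distance(Detection(42.0, 0.9), Detection(45.0, 0.6)))  # ~43.2 m
print(fuse_distance(Detection(42.0, 0.1), Detection(45.0, 0.8)))  # ~44.7 m
```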
EE Times: Are you guys in the LiDAR business?
Ross Jatou: Absolutely. We're in some of the upcoming vehicles that are adding LiDAR. Through the acquisition of a very bright company in Ireland called SensL Technologies, we now have silicon photomultiplier (SiPM) and single-photon avalanche diode (SPAD) products for LiDAR sensing. A SPAD array looks more like a camera: it's basically an array that allows you to take images with depth. Meanwhile, a SiPM gives you a lot more range. It's often used in scanning types of LiDAR.
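The appeal of SPAD and SiPM detectors comes down to direct time-of-flight: distance follows from the round-trip travel time of the sensor's own laser pulse. Here is a short sketch of that basic physical relation; the timing value is purely illustrative.

```python
# Direct time-of-flight: a SPAD/SiPM LiDAR times the round trip of its
# own laser pulse, so distance follows directly from physics rather
# than from image processing. The example timing value is illustrative.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance_m(round_trip_s: float) -> float:
    """Distance = (speed of light x round-trip time) / 2."""
    return SPEED_OF_LIGHT * round_trip_s / 2

# A return pulse arriving 667 nanoseconds after emission
# corresponds to a target roughly 100 m away.
print(tof_distance_m(667e-9))  # ~100.0 m
```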
EE Times: LiDAR is one area where a lot of companies are showing off technologies. Yet it’s an area with no consensus on which technology will win the market. I suspect that different technologies will serve different segments. Right?
Ross Jatou: That's right. I think it's going to be just like image sensors, where global shutter sensors make more sense for monitoring the driver, while a rolling shutter makes sense for image sensors looking outside. For near-field LiDAR, something closer to the car, I think SPAD arrays will be the definite winner. Then it's SiPM for long range.
EE Times: So, do you expect Elon Musk, who famously said that "LiDAR is a fool's errand," to change his mind?
Ross Jatou: You know, with the current cost structure of LiDAR, there's some truth to what Elon was saying. It's going to be very difficult, but a lot of innovative LiDAR systems are now being put together that really bring the cost down. If you just depend on imaging, you're at the mercy of visibility. Imaging is a passive sensor. Perception is tied to the light that's available.
LiDAR is an active sensor. You’re shooting a laser and get reflections. So, you’re creating your own light, right? That is a major win.
Further, you can get depth data instantaneously from LiDAR. Without LiDAR, you must use computation to approximate distance from 2D imaging, using various algorithms or leveraging AI to derive depth data from 2D images. But it's still a secondary way of getting 3D data, rather than measuring it directly.
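One classic example of such computation is stereo triangulation, where depth is inferred from the pixel disparity between two cameras. The sketch below uses invented camera parameters purely for illustration, and shows why small matching errors at long range make computed depth a "secondary" measurement compared with LiDAR's direct one.

```python
# Passive imaging has no direct depth measurement; one classic
# computational approximation is stereo triangulation:
#   depth = focal_length x baseline / disparity
# The parameters below are illustrative, not from a real camera rig.
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("object too far or unmatched; no disparity")
    return focal_px * baseline_m / disparity_px

# 1000-px focal length, 30 cm baseline, 6-px disparity -> 50 m.
# A 1-pixel matching error swings the estimate by 10 m at this range,
# which is why computed depth is less robust than a direct measurement.
print(stereo_depth_m(1000.0, 0.3, 6.0))  # 50.0 m
print(stereo_depth_m(1000.0, 0.3, 5.0))  # 60.0 m from a 1-px error
```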
But really, the big hurdle is cost. If SPAD arrays today were as cost-effective as imagers, I think Musk would have a different opinion.
Driver Monitoring System
EE Times: I want to switch gears and talk about driver monitoring systems (DMS). DMS is becoming more important, especially as cars go from Level 2 to Level 2+, perhaps even Level 3. Where are we with DMS today? Does its quality meet regulators' requirements?
Ross Jatou: There are already some vehicles that have DMS. [Editor's note: Euro NCAP standards and the European Commission (EC) regulation mandate DMS technology from 2022.] And with the expanding Euro NCAP scoring, I think that by 2025, DMS effectively becomes a requirement for almost all vehicles to get five stars.
So the demand for DMS is increasing, which is a great path. I'm encouraged by what Euro NCAP is doing. With or without high-level autonomous features, I think DMS is very important for monitoring driver drowsiness, whether you're driving a truck across the state or you're in the car with your children. And now, with people texting at times, I think it's an absolute must.
I’m very bullish on increasing in-cabin monitoring. Now, back to your question: Is it meeting the regulators’ requirements? I’m surprised because I don’t know what regulators’ expectations are today.
In my opinion, DMS is a glorified checkbox. Do you have driver monitoring? Check. There aren't a lot of requirements and specifications. I do know that Euro NCAP is working with the community to make it a little more sophisticated. Some leaders in driver monitoring, like Seeing Machines, are involved in driving the [DMS] standards and specs and helping guide that process.
But today, all they are asking is, "Do you have driver monitoring or not?" It's basically not much more than that. Then you get the check mark and you proceed. It's not well regulated. We need that, just as we have regulations for electronic braking.
Challenges in DMS
Ross Jatou: There are challenges in driver monitoring. For example, if you have a monitoring system with near infrared, in theory it should see through sunglasses, so that you can see the person's eyes. But not all sunglasses are the same: some block near infrared, some let it fully through, and some are in between.
Another aspect is what's called global shutter efficiency. It matters. How well are you blocking ambient light? Sunlight also contains near-infrared light, and it can confuse the sensor: the system thinks it's the [vehicle's] illuminator shining on the driver's eyes, but the light is really coming from outside. Being able to block ambient light is very important.
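One common approach to ambient rejection, offered here only as an illustrative sketch rather than any vendor's actual method, is to capture paired frames with the NIR illuminator on and off and subtract them; sunlight appears in both frames and largely cancels out.

```python
# Illustrative ambient-light rejection: capture one frame with the NIR
# illuminator on and one with it off, then subtract. Sunlight appears
# in both frames and cancels; the illuminator's contribution remains.
import numpy as np

def ambient_subtract(frame_led_on: np.ndarray, frame_led_off: np.ndarray) -> np.ndarray:
    """Both frames are grayscale NIR captures taken back to back; a
    global shutter keeps the two exposures comparable across the frame."""
    diff = frame_led_on.astype(np.int32) - frame_led_off.astype(np.int32)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Synthetic example: strong sunlight (value 180) in both frames, plus an
# illuminator contribution of 50 only in the LED-on frame.
sun = np.full((4, 4), 180, dtype=np.uint8)
led_on = np.clip(sun.astype(np.int32) + 50, 0, 255).astype(np.uint8)
print(ambient_subtract(led_on, sun))  # ~50 everywhere: ambient removed
```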
Obviously, resolution is important for deciphering small details. It's important to capture that you're blinking.
And of course, the algorithms the DMS uses are important. Does it have redundancy? Does it look just at the eyes, or at the shape of the face as well? Or does it use both? If one eye is blocked because I turned my head, can it still tell that I'm paying attention? So it becomes quite a sophisticated algorithm.
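The redundancy Jatou describes can be sketched as a simple fallback chain. Every threshold and signal name below is an invented placeholder; real DMS algorithms are far more sophisticated.

```python
# Sketch of a redundant attention check (all thresholds invented):
# use eye gaze when both eyes are visible, and fall back to head pose
# when one or both eyes are occluded, e.g. by a head turn or sunglasses.
from typing import Optional

def is_attentive(left_eye_gaze_deg: Optional[float],
                 right_eye_gaze_deg: Optional[float],
                 head_yaw_deg: float) -> bool:
    visible = [g for g in (left_eye_gaze_deg, right_eye_gaze_deg) if g is not None]
    if len(visible) == 2:
        # Primary cue: both eyes tracked, use mean gaze angle.
        return abs(sum(visible) / 2) < 15.0
    if len(visible) == 1:
        # One eye occluded: combine the remaining eye with head pose.
        return abs(visible[0]) < 15.0 and abs(head_yaw_deg) < 30.0
    # No eyes visible: head pose alone, with a stricter limit.
    return abs(head_yaw_deg) < 20.0

print(is_attentive(5.0, 7.0, 0.0))     # True: eyes on the road
print(is_attentive(None, 10.0, 25.0))  # True: head turned, still attentive
print(is_attentive(None, None, 45.0))  # False: looking well away
```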
All those things are important. I've seen the whole spectrum of DMS solutions, and some can actually be quite dangerous [if you totally rely on them]. More testing and more standards are needed.
Can you be the Mobileye of DMS?
EE Times: Let me ask you this. On Semi already does an excellent job offering imagers for ADAS, designed for external vision. Similarly, I think there's a good chance for On Semi to become the Mobileye of the DMS market. But to do that, you must partner with some software algorithm companies. What's your strategy?
Ross Jatou: On Semiconductor is a really ecosystem-friendly company. We’re not trying to take over the job of some of our partners. We work with others quite well, and we have many algorithm partners in the driver monitoring space.
We deliver semiconductor solutions to almost all the leading players. I know from some industry reports that Seeing Machines, based in Australia, is leading in DMS with nearly half the global market. We work with them quite well, along with others in Europe, Israel, the U.S. and, of course, Asia. It's tough to predict who will become the Mobileye of the DMS market.
Reusing Processors for DMS
Ross Jatou: But I do know that if you have more processing capability, it opens the door to putting those [DMS] algorithms on existing processors. For example, you could have the driver monitoring sit on the same Mobileye chip while the chip is doing the ADAS functions. You can use the performance headroom, if it is available.
So, this hasn't been announced, but there is an open system that allows in other algorithms, letting the driver monitoring folks' software reside on that same processor.
EE Times: But On Semi is not in the processor business, right? Or are you interested in getting into some sort of platform processor business for ADAS and other applications?
Ross Jatou: We do make application-specific processors, for example image signal processors. So if we take raw sensor data and you just want to create a pretty picture for human vision, the "selfie" of the car for rearview or surround view, we process that raw data and provide human-vision output.
So [we offer] function-specific and application-specific processors, really based on fixed-function needs. We don't today make a general-purpose processor like an x86.
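As a rough illustration of the fixed-function idea, and not On Semi's actual pipeline, an image signal processor applies a sequence of fixed stages, such as gain and gamma correction, to turn raw sensor values into a human-viewable picture. The stage choices and parameters below are assumptions for the sketch.

```python
# Toy illustration of a fixed-function ISP stage chain (not any vendor's
# pipeline): raw sensor values pass through fixed stages to become a
# human-viewable picture. Gain and gamma values are illustrative.
import numpy as np

def simple_isp(raw: np.ndarray, gain: float = 1.2, gamma: float = 2.2) -> np.ndarray:
    """Apply a white-balance-style gain, then gamma encoding for display."""
    x = np.clip(raw.astype(np.float32) / 255.0 * gain, 0.0, 1.0)
    x = x ** (1.0 / gamma)  # brighten shadows the way displays expect
    return (x * 255.0).astype(np.uint8)

# Feed in synthetic raw pixel data; out comes the "pretty picture" data.
raw_frame = np.random.randint(0, 256, size=(2, 2), dtype=np.uint8)
print(simple_isp(raw_frame))
```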
AAA’s Report on Active ADAS
EE Times: Let's talk about active ADAS features. You've seen AAA's recent study of several vehicle models, done both on test courses and regular roads. AAA concluded that the performance of these vehicles' active ADAS functions is uneven. That was a bit of a surprise for me. What was your take?
Ross Jatou: In all honesty, it wasn’t a surprise. We work with a lot of these vendors and we know that there are different algorithms, architectures and sensors used in these vehicles.
So, their perception and processing capabilities are [all] different, because they have different strategies and theories on how to handle it. Some use traditional computer vision algorithms; some use a little more AI. Different cars perform differently, and their ADAS systems perform differently.
But the kind of bump in the road for ADAS that this report brings up is, "Hey, they don't work consistently all the time," when you expected them to work all the time with no failures. Even on that front, you know, it wasn't a major surprise. Tier Ones, algorithm folks and carmakers never claimed that [active ADAS features] are a replacement for the driver. They're a set of tools to assist the driver.
EE Times: True, but you know, here's my soapbox. I understand that nobody is telling consumers that this is going to replace the driver.
At the same time, once the driver begins to understand what the new car is capable of, he or she will get used to it. It's human nature. As drivers come to understand what the car can do, they will begin to depend on the car to save their lives. What do we do about that?
Ross Jatou: Junko, you're absolutely right. People over-trust technologies. The call to action for the industry is to continue to improve these systems. They have to develop further standards and raise the bar.
On the other hand, although I can't name the vehicles or the sensors, I can tell you that today's vehicles currently on the market carry technologies developed a few years back. What you're going to see in the coming years will come with substantially improved solutions.