Apple has sparked a race to use LiDAR scanners. The company built one into its iPad Pro 11, and now it seems everyone wants one in their products.
Apple’s maneuver and the popular response to it have had repercussions throughout the electronics industry. IC and sensor suppliers are reevaluating their product roadmaps; a few have already altered their business models.
But what exactly is a “LiDAR scanner”? Apple adopted the term to describe a new sensor that measures depth — in other words, it’s a sensor that detects objects in three dimensions.
“LiDAR” in tablets and smartphones is, in general, “just a sub-category from 3D sensing,” explained Pierre Cambou, principal analyst in the photonics and display division at Yole Développement.
Many system designers — whether developing self-driving cars, smartphones or tablets — have been exploring ways to add “depth” information to the pixels and colors captured by 2D image sensors. The automotive industry, for example, is adopting LiDARs to detect objects around highly automated vehicles and map their distances.
Apple’s newly introduced iPad Pro 11 uses a LiDAR scanner to offer more professional-looking augmented reality. The scanner is designed to work with Apple’s ARKit 3.5 development kit.
What makes this LiDAR scanner significant — and why other mobile device vendors, including Huawei and Vivo, appear to be going after it — is a specific technology used inside the unit to sense and measure depth.
Different options for 3D sensing
There are several technology options available to system designers for 3D sensing. They include stereo vision, structured light, and time of flight (ToF). To complicate matters further, ToF now comes in two flavors: indirect time of flight (iToF) and direct time of flight (dToF). iToF infers distance from the phase shift of a modulated light signal, while dToF measures the round-trip travel time of light pulses directly.
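To make the distinction concrete, the toy Python sketch below contrasts the two conversions: dToF turns a measured round-trip time directly into distance, while iToF derives distance from a phase shift at a known modulation frequency. The function names and numbers are illustrative only and are not drawn from any vendor’s implementation.

```python
import math

# Toy comparison of dToF vs. iToF distance estimation.
# Values are illustrative; real sensors add calibration, averaging and noise handling.

C = 299_792_458.0  # speed of light, m/s

def dtof_distance(round_trip_time_s: float) -> float:
    """dToF: distance follows directly from the pulse's measured round-trip time."""
    return C * round_trip_time_s / 2.0

def itof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """iToF: distance is inferred from the phase shift of a modulated signal."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# A 20 ns round trip corresponds to roughly 3 m.
print(dtof_distance(20e-9))
# A pi/2 phase shift at 20 MHz modulation corresponds to roughly 1.9 m.
print(itof_distance(math.pi / 2, 20e6))
```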
Apple’s iPhone X offers Face ID by using structured light. Its depth estimation works by having an IR emitter send out 30,000 dots arranged in a regular pattern. These dots are invisible to people, but not to the IR camera, which reads the distortions in the pattern as it reflects off surfaces at various depths.
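The underlying geometry is triangulation: the apparent shift of each projected dot between its expected and observed position encodes depth, given the baseline between emitter and camera and the camera’s focal length. The minimal sketch below illustrates that relationship with made-up numbers; Apple’s actual calibration and pattern matching are considerably more involved.

```python
def depth_from_dot_shift(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Structured-light triangulation: depth is inversely proportional to the
    observed shift of a projected dot (disparity), scaled by the
    emitter-camera baseline and the camera focal length."""
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: 1,400 px focal length, 25 mm baseline, 50 px dot shift -> 0.7 m
print(depth_from_dot_shift(1400.0, 0.025, 50.0))
```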
With the rollout of the iPad Pro 11, 3D sensing has gotten richer and more granular with the adoption of a direct time-of-flight sensor. To date, Apple’s iPad Pro is the only consumer product leveraging dToF. Many smartphone vendors have already been using iToF to take better pictures (ToF can blur backgrounds in photos), but not dToF.
The structured light method provides high depth accuracy, but its downside is the complex post-processing needed to calculate depth from pattern matching.
In contrast, the advantage of the dToF method is its simple post-processing. Its challenge, though, has been that it requires highly sensitive photodetectors (such as single-photon avalanche diodes) and a large form factor in order to measure the time of flight from the small number of photons that return in a single measurement.
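One common way a SPAD-based dToF sensor copes with so few returning photons is to accumulate photon arrival times over many laser pulses into a per-pixel histogram and take the peak as the round-trip time. The sketch below illustrates that general idea under simplified assumptions; it is not a description of Sony’s or Apple’s implementation.

```python
from collections import Counter

C = 299_792_458.0  # speed of light, m/s

def distance_from_arrival_times(arrival_times_s, bin_width_s=1e-9):
    """Bin photon arrival times into a histogram and take the most-populated
    bin as the round-trip time estimate (a simplified, TCSPC-style approach)."""
    bins = Counter(round(t / bin_width_s) for t in arrival_times_s)
    peak_bin, _ = bins.most_common(1)[0]
    round_trip_s = peak_bin * bin_width_s
    return C * round_trip_s / 2.0

# Illustrative: most photons cluster around a 10 ns round trip (~1.5 m),
# with a few stray detections caused by ambient light.
samples = [10.1e-9, 9.9e-9, 10.0e-9, 10.2e-9, 3.0e-9, 25.0e-9]
print(distance_from_arrival_times(samples))  # ~1.5 m
```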
Thus far, among the 3D imaging methods, iToF has been the most common. It provides high depth accuracy, simple post-processing, and high spatial resolution using the small photodetectors widely used in 2D image sensors.
Nonetheless, for 3D sensing, Apple chose the road less traveled. The company opted for structured light for Face ID. It is now using dToF for AR.
So, here are the questions everyone in the 3D sensing world is asking: What is dToF? What are its building blocks? And who made them?
A teardown by System Plus Consulting, a division of Yole Développement, showed details inside Apple iPad Pro 11’s 3D sensing module.
In an interview with EE Times, Sylvain Hallereau, senior technology and cost analyst at System Plus, explained that the iPad Pro 11’s “LiDAR scanner” consists of an emitter, a vertical-cavity surface-emitting laser (VCSEL) from Lumentum, and a receptor, a near-infrared (NIR) CMOS image sensor developed by Sony that directly measures time of flight.
SPAD array NIR CMOS image sensor from Sony
This teardown’s cross-section of Sony’s CMOS image sensor was revealing to experts who follow the development of photonics. That includes Yole’s Cambou, who in a recent blog wrote that what “looked somewhat similar to an old indirect Time-of-Flight (iToF) design with 10 micron pixels” turned out to be “the first ever consumer CMOS Image Sensor (CIS) product with in-pixel connection — and, yes, it is a single photon avalanche diode (SPAD) array.”
The “in-pixel connection” is an important qualifier. Sony has, for the first time in a ToF sensor, integrated the SPAD array into the NIR CMOS image sensor using 3D stacking. The in-pixel connection makes it possible to bond the image sensor directly to the logic wafer. With the logic die integrated, the image sensor can perform simple calculations of the distance between the iPad and surrounding objects, Hallereau explained.
Sony has elbowed its way into the dToF segment by developing this new-generation SPAD-array NIR CMOS image sensor, featuring 10-µm pixels and a resolution of 30 kilopixels.
However, this isn’t just a Sony technological feat. It’s also about Sony’s business transformation.
The Japanese CMOS image sensor (CIS) behemoth traditionally did more imaging than sensing. But as Cambou tells it, “A year ago, Sony renamed its semiconductor division ‘Imaging & Sensing.’ Then it made two separate moves. The first was the supply of iToF sensors to Huawei and Samsung, generating in the order of $300 million in 2019. The second was this design win of dToF sensors for Apple iPads.”
Cambou suspects that dToF sensors could eventually end up in iPhones. In his analysis, “Sony’s sensing revenues will probably exceed $1 billion in 2020 out of a business just surpassing the $10 billion landmark. This successful transition from imaging to sensing has been instrumental to Sony’s continuous strength in the CIS market. It will be a building block for the prosperous future of the division.”
VCSEL from Lumentum
In addition to the CIS from Sony, the LiDAR is equipped with a VCSEL from Lumentum. The laser is designed with multiple electrodes connected separately to the emitter array.
Taha Ayari, technology and cost analyst at System Plus, focused his attention on a new processing step, called mesa contact, that Lumentum added to its VCSEL. A VCSEL emits light from the wafer surface. Fine-tuning emission requires power management and the application of different controls to the emitter arrays. Ayari believes Lumentum added this processing step to enhance wafer probe testing.
To generate the pulse and drive the VCSEL’s power and beam shape, the emitter uses a driver IC from Texas Instruments. The IC uses wafer-level chip-scale packaging (WLCSP), molded on five sides.
Finally, a new diffractive optical element (DOE) from Himax perches atop the VCSEL to generate the dot pattern, according to System Plus.
In the following pages, we share several slides created by System Plus that illustrate what the teardown disclosed, along with a few slides that put the LiDAR market in perspective.