Saturday, February 24, 2018

Omnivision Paper on 2nd Generation Stacking Technology

MDPI Special Issue on the 2017 International Image Sensor Workshop publishes Omnivision paper "Second Generation Small Pixel Technology Using Hybrid Bond Stacking" by Vincent C. Venezia, Alan Chih-Wei Hsiung, Wu-Zang Yang, Yuying Zhang, Cheng Zhao, Zhiqiang Lin, and Lindsay A. Grant.

"In this work, OmniVision’s second generation (Gen2) of small-pixel BSI stacking technologies is reviewed. The key features of this technology are hybrid-bond stacking, deeper back-side, deep-trench isolation, new back-side composite metal-oxide grid, and improved gate oxide quality. This Gen2 technology achieves state-of-the-art low-light image-sensor performance for 1.1, 1.0, and 0.9 µm pixel products. Additional improvements on this technology include less than 100 ppm white-pixel process and a high near-infrared (NIR) QE technology."

Friday, February 23, 2018

Yole on Automotive Sensing

Yole Developpement releases "Sensors for Robotic Vehicles 2018" report:

"As far as we know, each robotic vehicle will be equipped with a suite of sensors encompassing Lidars, radars, cameras, Inertial Measurement Units (IMUs) and Global Navigation Satellite Systems (GNSS). The technology is ready and the business models associated with autonomous driving (AD) seem to match the average selling prices for those sensors. We therefore expect exponential growth of AD technology within the next 15 years, leading to a total paradigm shift in the transportation ecosystem by 2032. This will have huge consequences for high-end sensor and computing semiconductor players and the associated system-level ecosystems as well.

...in 2022 we expect sensor revenues to reach $1.6B for Lidar, $44M for radar, $0.6B for cameras, $0.9B for IMUs and $0.1B for GNSS. The split between the different sensor modalities may not stay the same for the 15 years to come. Nevertheless the total envelope for sensing hardware should reach $77B in 2032, while, for comparative purposes, computing should be in the range of $52B.
"

TowerJazz Update on its CIS Business

SeekingAlpha: TowerJazz Q4 2017 earnings report has an update on the foundry's image sensor business:

"For CMOS image sensor we use the 300 millimeter 65 nanometer capability to develop unique high dynamic range and extremely high sensitivity pixels with very low dark current for the high-end digital SLR and cinematography and broadcasting markets.

In these developments, we've included our fab 2 stitching technology to enable large full-frame sensors. In addition, we developed a unique family of state-of-the-art global shutter pixels ranging from 3.6 micron down to 2.5 micron, to date the smallest in the world, with extremely high shutter efficiency, using the unique dual light pipe technology already developed at TPSCo for high quantum efficiency and high image uniformity.

And lastly, within the CIS regime, we've pushed the limits of our X-ray die size, developing a one-die-per-wafer X-ray stitched sensor to produce a 21 cm x 21 cm imager on a 300 millimeter wafer. All of the above technologies have been or are being implemented in our CIS customers' next-generation products and are ramping, or are planned to begin ramping, this year, with some additional next year.

Our image sensor end markets, including medical, machine vision, digital SLR cameras, cinematography and security among others, represented about 15% of our corporate revenues, or $210 million, and provided the highest margins in the company. We are offering the most advanced global shutter pixel for the industrial sensor market with a 2.8 micron global shutter pixel on a 110 nanometer platform, the smallest global shutter pixel in the world already in manufacturing. Additionally, as mentioned, we have a 2.5 micron state-of-the-art global shutter pixel in development on the 65 nanometer, 300 millimeter platform with several leading customers, allowing high sensor resolution for any given sensor size and enabling TowerJazz to further grow its market leadership.

We also offer single-photon avalanche diodes, a state-of-the-art technology, and an ultra-fast global shutter pixel for automotive radars based on the time-of-flight principle, answering automotive market needs. We have engaged with several customers in the development of their automotive radar and expect to be a major player in this market in the coming years.

During 2017, we announced a partnership with Yuanchen Microelectronics for backside illumination manufacturing in Changchun, China, which provides us the BSI process segment for CIS 8-inch wafers manufactured by TowerJazz, increasing our service to our worldwide customer base in mass production. We will be ready for this mass production early in the second half of this year, with multiple customers already having started their product designs.

In addition, we developed backside illumination and stacked wafer technology on 12-inch wafers in the Uozu factory, serving as a next-generation platform for the high-end photography and high-end security markets. We now offer both BSI and column-level stacked wafer PDKs to our customers.

We are investing today in three main directions: next-generation global shutter technology for the industrial sensor market, backside illumination stacked wafers for the high-end photography market, and special pixel technology for the automotive market.
"

An earlier presentation shows the company's CIS business in a graphical format:

Thursday, February 22, 2018

Automotive Videos

ULIS publishes a YouTube demo of its thermal sensors' usefulness in ADAS applications. One can see how hot the car tires become on the highway, while staying cool in city driving:



Sensata praises Quanergy LiDAR performance:

Wednesday, February 21, 2018

Denso Vision Sensor for Improved Night Driving Safety

DENSO has developed a new vision sensor that detects pedestrians, cyclists, road signs, driving lanes and other road users at night. Working in conjunction with a millimeter-wave radar sensor, the new vision sensor allows automobiles to automatically activate emergency braking when obstacles are identified, helping reduce accidents and improve overall vehicle safety. It is featured in the 2018 Toyota Alphard and Vellfire, which were released in January this year.

It improves night vision by using a unique lens specifically designed for low-light use, and a solid-state imaging device with higher sensitivity. An improved white-line detection algorithm and road-edge detection algorithm also broaden the operating range of lane-keeping assistance and lane departure alert functions, while a 40% size reduction from previous models reduces costs and makes installation easier.

Recognition by human eyes vs. recognition by the vision sensor

Chronocam Changes Name to Prophesee, Raises More Money

GlobeNewswire: Chronocam, said to be the inventor of the world’s most advanced neuromorphic vision system, is now Prophesee, a branding and identity transformation that reflects the company's expanded vision for revolutionizing how machines see.

Prophesee SA (formerly Chronocam) receives the initial tranche of its Series B financing round, which will total $19M. Led by a new unnamed strategic investor from the electronics industry, the round also includes staged investments from Prophesee’s existing investors: 360 Capital Partners, Supernova Invest, iBionext, Intel Capital, Renault Group, and Robert Bosch Venture Capital. The latest round builds on the $20m Prophesee has raised over the past three years, and will allow it to accelerate the development and industrialization of the company’s image sensor technology.

The roots of Prophesee’s technology run deep into areas of significant achievements in vision, including the breakthrough research on the human brain and eye carried out by the Vision Institute (CNRS, UPMC, INSERM) during the past 20 years, as well as at CERN, where it was instrumental in the 2012 discovery of the Higgs Boson, or “The God Particle,” after more than 30 years of research. Early incarnations of the Prophesee technology helped in the development of the first industry-grade silicon retina, which is currently deployed to restore sight to the blind.

Thanks to its fast vision processing equivalent to up to 100,000 fps, Prophesee’s bio-inspired technology enables machines to capture scene changes not previously possible in machine vision systems for robotics, industrial automation and automotive.

Its HDR of more than 120dB lets systems operate and adapt effectively in a wide range of lighting conditions. It sets a new standard for power efficiency with operating characteristics of less than 10mW, opening new types of applications and use models for mobile, wearable and remote vision-enabled products.
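
For readers curious how an event-based pixel differs from a framed one, here is a toy model: an event fires whenever the log intensity moves more than a contrast threshold away from the level at which the pixel last fired. This is a generic DVS-style sketch driven by frames for convenience, not Prophesee's actual asynchronous circuit; the threshold and test scene are invented.

```python
import numpy as np

def events_from_frames(frames, timestamps, threshold=0.15):
    """Toy DVS model: emit (t, y, x, polarity) whenever the log
    intensity at a pixel drifts more than `threshold` away from the
    level at which that pixel last fired."""
    log_ref = np.log(frames[0].astype(np.float64) + 1e-3)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame.astype(np.float64) + 1e-3)
        diff = log_i - log_ref
        fired = np.abs(diff) >= threshold
        for y, x in zip(*np.nonzero(fired)):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
        log_ref[fired] = log_i[fired]  # reset reference where events fired
    return events

# Example: a bright spot moving across a static 8x8 scene produces a
# handful of +/- events instead of five full frames of pixel data.
frames = np.full((5, 8, 8), 10.0)
for i in range(5):
    frames[i, 4, i + 1] = 100.0  # the moving bright pixel
print(len(events_from_frames(frames, timestamps=range(5))))
```

The static background generates no output at all, which is exactly where the sub-10mW power figure comes from.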

“Our event-based approach to vision sensing and processing has resonated well with our customers in the automotive, industrial and IoT sectors, and the technology continues to achieve impressive results in benchmarking and prototyping exercises. This latest round of financing will help us move rapidly from technology development to market deployment,” said Luca Verre, co-founder and CEO of Prophesee. “Having the backing of our original investors, plus a world leader in electronics and consumer devices, further strengthens our strategy and will help Prophesee win the many market opportunities we are seeing.”

Prophesee AI-based neuromorphic vision sensor

Interview with Nobukazu Teranishi

Nikkei publishes an interview with Nobukazu Teranishi, inventor of the pinned photodiode, who was recently awarded the Queen Elizabeth Prize for Engineering.

"Now... except for Sony, which leads the world in the image sensor sector, Japanese companies have fallen behind, particularly in the semiconductor industry.

Teranishi said that changes are necessary for Japan to continue to compete globally.

He also suggested that engineers and technical experts should be held in higher esteem in Japan.

"Excellent engineers are a significant asset. Companies overseas shouldn't be able to lure them out of Japan just with better salaries. If they are that valuable, their value should to be recognized in Japan as well," he said.

Determining salaries by how long people have been at the company seems like "quite a rigid structure," he said.

He added that engineers get little recognition for the work they do, with individual names rarely mentioned within the company or in the media.

Looking ahead to the future of image sensors, Teranishi feels one peak has been reached, with around 400 million phones produced annually that incorporate his technology. Next, he says, is the era of "images that you don't see."

For facial recognition and gesture input for games, he said, "No one sees the image but the computer is processing information. So there are many cases where a human doesn't see the image.
"

CIS Wafer Testing Presentation

Taiwan Jetek Technology publishes a presentation on CIS wafer-level testing.

Tuesday, February 20, 2018

IR-Enhancing Surface Structures Compared

IEEE Spectrum: TED publishes the UCD and W&WSens Devices invited paper on light-bending microstructures that enhance PD QE and IR sensitivity, "A New Paradigm in High-Speed and High-Efficiency Silicon Photodiodes for Communication—Part I: Enhancing Photon–Material Interactions via Low-Dimensional Structures" by Hilal Cansizoglu, Ekaterina Ponizovskaya Devine, Yang Gao, Soroush Ghandiparsi, Toshishige Yamada, Aly F. Elrefaie, Shih-Yuan Wang, and M. Saif Islam.

"[Saif] Islam and his colleagues came up with a silicon structure that makes photodiodes both fast and efficient by being both thin and good at capturing light. The structure is an array of tapered holes in the silicon that have the effect of steering the light into the plane of the silicon. “So basically, we’re bending light 90 degrees,” he says."


The paper compares the proposed approach with other surface structures for IR sensitivity enhancement:

Monday, February 19, 2018

Corephotonics and Sunny Ship Millions of Dual Camera Modules to Oppo, Xiaomi and Others

Optics.org: Corephotonics has partnered with Sunny Optical to bring to market a variety of solutions based on the company’s dual camera technologies. Under this agreement, Sunny has already shipped millions of dual cameras powered by Corephotonics IP to various smartphone OEMs, including Xiaomi, OPPO and others.

The new partnership combines Sunny’s automatic manufacturing capacity, quality control and optical development capabilities with Corephotonics’ innovation in optics, camera mechanics and computational imaging. This strategic license agreement covers various dual camera products, including typical wide + tele cameras, as well as various folded dual camera offerings, allowing an increased zoom factor, optical stabilization and a reduced module height.

The partnership allows Sunny to act as a one-stop-shop dual camera vendor, providing customized dual camera designs in combination with well-optimized software features. The collaboration leverages Sunny's manufacturing lead and strong presence in the Chinese dual-camera market.

“Sunny Optical has the powerful optical development capability and automatic lean manufacturing capacity. We have experimented with virtually all dual camera innovations introduced in recent years, and have found Corephotonics dual camera technologies to have the greatest contribution in camera performance and user experience. Just as important is the compliance of their dual camera architecture with high volume production and harsh environmental requirements,” said Cerberus Wu, Senior Marketing Director of Sunny Optical.

“We are deeply impressed by Sunny's dual camera manufacturing technologies, clearly setting a new benchmark in the thin camera industry," added Eran Briman, VP of Marketing & Business Development at Corephotonics. “The dual camera modules produced under this collaboration present smartphone manufacturers with the means to distinguish their handsets from those of their rivals through greatly improved imaging capabilities, as well as maximum flexibility and customizability."

EETimes Reviews ISSCC 2018

EETimes' Junko Yoshida publishes a review of the ISSCC 2018 image sensor session, covering the Sony motion-detecting event-driven sensor:


Microsoft 1MP ToF sensor:


Toshiba 200m-range LiDAR:


and much more...

Saturday, February 17, 2018

Materials of 3rd Workshop on Image Sensor and Systems Published

The Image Sensor Society web site publishes most of the papers from the 3rd International Workshop on Image Sensor and Systems (IWISS2016), held at the Tokyo Institute of Technology in November 2016. There were 18 invited papers and 20 posters presented at the Workshop, mostly from Japan and Korea.

Thanks to NT for the pointer!

Friday, February 16, 2018

LIN-LOG Pixel with CDS

MDPI Special Issue on the 2017 International Image Sensor Workshop publishes NIT paper "QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout" by Yang Ni.

"In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout."


"The readout noise was measured at 2.2 LSB, which is 268 µV. Taking into account the source follower gain, the temporal noise on the floating diffusion was estimated at 335 µV. With a floating diffusion node capacitance estimated from design at 4 fF, the noise electron number is 12.3 electrons. The temporal noise in the logarithmic regime was measured at 6 LSB, which represents 34 electrons inside the buried photodiode. From this Johnson noise, the photodiode capacitance can be estimated at 6.2 fF which is quite close to the estimation from the layout."

Thursday, February 15, 2018

DALSA Discusses Facial Recognition

Teledyne DALSA starts publishing a series of articles on facial recognition science. The first part discusses fairly generic issues, such as the resolution that humans use for the facial recognition task. It's all dynamic:

"The ganglion cells in the human retina can produce the equivalent of a 600 megapixel image, but the nerve that connects to the retina can only transmit about one megapixel."

"Analysts predict that the global facial recognition market is expected to grow from USD 4.05 Billion in 2017 to USD 7.76 Billion by 2022. Companies are very interested in the possibilities of facial recognition technologies and global security concerns are driving interest in better biometric systems."

ISSCC Review: Sony, TSMC, NHK, Toshiba, Microsoft, TU Delft, FBK

Albert Theuwissen continues his review of ISSCC 2018 presentations. The second part includes Sony 3.9MP, 1.5um pixel pitch event-driven sensor:

"The overall resolution of 3.9 Mpixels is reduced to on 16×5 macro pixels. In this “macro” pixel mode, the power consumption is drastically reduced as well, and the sensor behaves in a sort of sleeping mode. Once the sensor detects any motion in the image (by means of frame differencing), the device wakes up and switches to the full resolution mode."

TSMC presents its 13.5MP, 1.1um pixel sensor, and NHK unveils an 8K 36MP 480fps sensor for slow-motion sports shooting at the upcoming Tokyo Olympics.

The third part of the review starts with the Toshiba hybrid LiDAR, enhanced by a Smart Accumulation Mode that, basically, tracks subjects in the depth domain. As long as it works, the detection range can reach 200m, but it relies on a lot of intelligence inside what is supposed to be just a dumb sensor delivering the "food for thought" to the main CPU or NPU.

Microsoft presented an evolution of its ToF sensor used in Kinect-2 - higher resolution, smaller pixels, BSI, higher QE, better shutter efficiency, etc. AGC has been added to the pixel, and background light suppression has been removed, if we compare this pixel with the previous Microsoft design.
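
As background, continuous-wave ToF pixels of this general class typically recover depth from four phase-shifted correlation samples per pixel. The sketch below shows that textbook calculation; the modulation frequency and sample values are hypothetical, and the actual Microsoft pixel differs in many details.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_tof_depth(c0, c90, c180, c270, f_mod=100e6):
    """Standard 4-phase CW-ToF depth estimate from the correlation
    samples at 0/90/180/270 degrees of demodulation phase.  Depth is
    ambiguous beyond c / (2 * f_mod), here 1.5 m."""
    phase = math.atan2(c270 - c90, c0 - c180)
    phase %= 2 * math.pi                      # wrap into [0, 2*pi)
    return C * phase / (4 * math.pi * f_mod)

# Hypothetical samples corresponding to ~90 degrees of phase shift:
print(f"{cw_tof_depth(80, 20, 80, 140):.3f} m")  # ~0.375 m at 100 MHz
```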

TU Delft and FBK presented SPAD designs. The FBK one is aimed at entangled-photon microscopy, increasing the resolution by a factor of N, where N is the number of mutually entangled photons.

Albert Theuwissen concludes his review on an optimistic note:

"Take away message : everything goes faster, lower supply voltages, lower power consumption, stacking is becoming the key technology, and apparently, the end of the developments in our field is not yet near ! The future looks bright for the imaging engineers !!!"

Panasonic 8K GS OPF Sensor

Panasonic has developed an 8K (36MP), 60fps, 450Ke- saturation sensor with global shutter and a sensitivity modulation function. The new CMOS sensor uses an organic photoconductive film (OPF).

"By utilizing this OPF CMOS image sensor's unique structure, we have been able to newly develop and incorporate high-speed noise cancellation technology and high saturation technology in the circuit part. And, by using this OPF CMOS image sensor's unique sensitivity control function to vary the voltage applied to the OPF, we realize global shutter function. The technology that simultaneously achieves these performances is the industry's first."

The new technology has the following advantages:
  • 8K resolution, 60fps framerate, 450Ke- saturation and GS function are realized simultaneously.
  • Switching between high sensitivity mode and high saturation mode is possible using gain switching function.
  • The ND filter function can be realized steplessly by controlling the voltage applied to the OPF.

This development is based on the following technologies:
  1. "OPF CMOS image sensor design technology," in which the photoelectric-conversion part and the circuit part can be designed independently;
  2. "In-pixel capacitive-coupled noise cancellation technique," which can suppress pixel reset noise at high speed even at high resolution;
  3. "In-pixel gain switching technology," which can achieve high saturation characteristics;
  4. "Voltage-controlled sensitivity modulation technology," which can adjust the sensitivity by changing the voltage applied to the OPF (see the sketch below).

Panasonic holds 135 Japanese patents and 83 overseas patents (including pending) related to this technology.

Wednesday, February 14, 2018

Analyst: Himax/Qualcomm 3D Sensing Platform Struggles in China

Barron's quotes financial analyst Jun Zhang of Rosenblatt saying:

"As we look across the landscape it appears to us that HIMX continues to struggle to find OEMs to incorporate its solution as our latest industry research suggests OPPO is working with Orbbec, Xiaomi with O-film (002456-SZ:NR) & Mantis Vision for its Mi7 Plus and Huawei on an internal solution. We also be- lieve other tier-2 and 3 OEMs are targeting a 2019 launch for their phones. On the conference call, management commented that its 3D sensing solution will be ready for mass production in Q2 but did not announce any design wins. Based on the long lead times for 3D sensing modules, Himax should have needed to have already secured a design-win if to be part of any solution."

SeekingAlpha earnings call transcript has the company's CEO Jordan Wu predicting: "3D sensing will be our biggest long term growth engine and, for 2018, a major contributor to both revenue and profits, consequently creating a more favorable product mix for Himax starting the second half of 2018."

Teledyne Announces Readiness of Wafer Level Packaged IR Sensors

BusinessWire: Teledyne DALSA completed the qualification of its Wafer-Level-Packaged Vanadium Oxide (VOx) Microbolometer process for LWIR imaging.

Teledyne DALSA’s manufacturing process, located in its MEMS foundry in Bromont, Quebec, bonds two 200 mm wafers precisely and under high vacuum, forming an extremely compact 3D stack. This technology eliminates the need for conventional chip packaging - which can account for 75% or more of the overall device cost.

“This is an important milestone in our journey to bring a credible price/performance VOx solution to market,” said Robert Mehrabian, Chairman, President and CEO of Teledyne. “With the qualification process complete we will now begin ramping up production lines for a 17-micron pixel 320×240 (QVGA) device, closely followed by a 17-micron 640×480 (VGA), with longer-term plans to introduce a highly compact 12-micron detector family.”

ISSCC 2018 Review - Sony, Panasonic, Samsung

Albert Theuwissen publishes a review of ISSCC papers, starting with the Sony BSI-GS CMOS imager with pixel-parallel 14b ADC: "One can make a global shutter in a CMOS sensor in the charge domain, in the voltage domain, but also in the digital domain. The latter requires an ADC per pixel (also known as DPS: digital pixel sensor). And this paper describes such a solution: a stacked image sensor with a single ADC per pixel."

Panasonic organic sensor: "The paper claims that the reset noise is lowered by a factor of 10, while the saturation level is increased by a factor of 10 (but the high saturation mode cannot be combined with the low noise level)."

Samsung 24MP CIS with 0.9um pixel: "All techniques mentioned are not new, but their combination for a 0.9 um is new."

Omnivision HDR Promotional Video

Omnivision publishes HDR marketing video:

Update: Omnivision removed the video and sent me the following update:

"The video was made public by mistake. Once we finalize the video we will re-publish and share with OmniVision’s customers and media contacts."

Tuesday, February 13, 2018

Sony Presents GS Sensor with ADC per Pixel

Sony presents a 1.46MP stacked BSI CMOS sensor with global shutter and a newly developed low-power pixel-parallel ADC that converts the analog signals from all simultaneously exposed pixels to digital in parallel.

The inclusion of nearly 1,000 times as many ADCs as in the traditional column-parallel architecture means an increased demand for current. Sony addressed this issue by developing a compact 14-bit A/D converter which is said to boast the industry's best performance in low-current operation. The FoM of the new ADC, defined as (power consumption × noise) / (number of pixels × frame rate × 2^(ADC resolution)), is 0.24 e-·nJ/step.
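
To make the definition concrete, here is a small Python helper that evaluates this FoM. The operating values below are hypothetical placeholders, not Sony's published figures, so the output will not reproduce the 0.24 e-·nJ/step number.

```python
def adc_fom(power_w, noise_e, n_pixels, fps, adc_bits):
    """FoM from the definition quoted above:
    (power x noise) / (pixels x frame rate x 2**bits), in e-.J/step."""
    return (power_w * noise_e) / (n_pixels * fps * 2 ** adc_bits)

# Hypothetical operating point (NOT Sony's published numbers):
fom = adc_fom(power_w=0.5, noise_e=5.7, n_pixels=1.46e6, fps=30, adc_bits=14)
print(f"{fom * 1e9:.3g} e-.nJ/step")
```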

The connection between each pixel on the top chip and the logic below uses a Cu-Cu connection, which Sony put into mass production as a world first in January 2016.

Main Features:
  • Low-current, compact pixel-parallel A/D converter
    In order to curtail power consumption, the new converter uses comparators that operate with subthreshold currents, resulting in the low current, compact 14-bit ADC. This overcomes the issue of the increased demand for current due to the inclusion of nearly 1,000 times as many ADCs in comparison with the traditional column ADC.
  • Cu-Cu (copper-copper) connection
    To achieve the parallel A/D conversion for all pixels, Sony has developed a technology which makes it possible to include approximately three million Cu-Cu (copper-copper) connections in one sensor. The Cu-Cu connection provides electrical continuity between the pixel and logic substrate, while securing space for implementing as many as 1.46 million A/D converters, the same number as the effective megapixels, as well as the digital memory.
  • High-speed data transfer construction
    Sony has developed a new readout circuit to support the massively parallel digital signal transfer required in the A/D conversion process using 1.46 million A/D converters, making it possible to read and write all the pixel signals at high speed.

More Hamamatsu Videos

Hamamatsu publishes two more videos - "LiDAR, Radar, and Cameras: Measuring distance with light in the automotive industry" and "SiPM: Operation, performance, and possible applications," both by Slawomir Piatek, senior lecturer of physics at New Jersey Institute of Technology.



Sony Automotive Image Sensors Marketing

Sony explains its Safety Cocoon concept:

Monday, February 12, 2018

Hamamatsu SiPM Theory and Comparisons with PMT

Hamamatsu publishes two hour-long educational webcasts "Silicon photomultipliers: theory & practice" and "Low light detection: PMT vs. SiPM."



Jaroslav Hynecek Elevated to IEEE Fellow

Jaroslav Hynecek has been elevated to IEEE Fellow for contributions to solid-state image sensors.

Thanks to NT for the info!

Facial Recognition Glasses for Chinese Police

The Verge reports that Chinese police are testing facial recognition glasses at train stations in the “emerging megacity” of Zhengzhou, where they’ll be used to scan travelers during the upcoming Lunar New Year migration.

The glasses are developed by Beijing-based LLVision Technology Co. The company says they’re able to recognize individuals from a pre-loaded database of 10,000 suspects in just 100ms, but cautions that accuracy levels in real-life usage may vary due to “environmental noise.”

Sunday, February 11, 2018

SPAD Sensor for Entangled Photon Imaging

SPIE publishes FBK paper and presentation video of "SUPERTWIN: towards 100kpixel CMOS quantum image sensors for quantum optics applications" by Leonardo Gasparini; Bänz Bessire; Manuel Unternährer; André Stefanov; Dmitri Boiko; Matteo Perenzoni; David Stoppa.

"Quantum imaging uses entangled photons to overcome the limits of a classical-light apparatus in terms of image quality, beating the standard shot-noise limit, and exceeding the Abbe diffraction limit for resolution. In today experiments, the spatial properties of entangled photons are recorded by means of complex and slow setups that include either the motorized scanning of single-pixel single-photon detectors, such as Photo-Multiplier Tubes (PMT) or Silicon Photo- Multipliers (SiPM), or the use of low frame rate intensified CCD cameras. CMOS arrays of Single Photon Avalanche Diodes (SPAD) represent a relatively recent technology that may lead to simpler setups and faster acquisition. They are spatially- and time-resolved single-photon detectors, i.e. they can provide the position within the array and the time of arrival of every detected photon with less than 100 ps resolution. SUPERTWIN is a European H2020 project aiming at developing the technological building blocks (emitter, detector and system) for a new, all solid-state quantum microscope system exploiting entangled photons to overcome the Rayleigh limit, targeting a resolution of 40nm. This work provides the measurement results of the 2nd order cross-correlation function relative to a flux of entangled photon pairs acquired with a fully digital 8×16 pixel SPAD array in CMOS technology. The limitations for application in quantum optics of the employed architecture and of other solutions in the literature will be analyzed, with emphasis on crosstalk. Then, the specifications for a dedicated detector will be given, paving the way for future implementations of 100kpixel Quantum Image Sensors."

The trend in CMOS SPAD-array design goes towards:

(i) the miniaturization of the pixel (below 10µm) to increase the output image resolution;

(ii) SPAD optimization to improve the photon detection efficiency (PDE) while reducing DCR, after-pulsing and crosstalk;

(iii) 3D stacking of chips, with a top tier optimized for sensing that includes the array of SPADs and a bottom tier optimized for processing (i.e., counting, timestamping and buffering);

(iv) smart mechanisms for timestamping photons, such as TDC sharing and time-gated counting in the analog domain;

(v) the on-chip implementation of pre-processing stages, such as timestamp histogramming, to reduce the sensor output data size and increase the frame rate, thus enabling synchronization with fast sources of photons (from 100kHz up to tens of MHz); a toy version of such histogramming is sketched below.
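
As an illustration of point (v), the sketch below bins simulated photon timestamps into a fixed-size per-pixel histogram, shrinking 10,000 raw timestamps to a few hundred bin counts. The bin width, time window and photon statistics are made-up values, not SUPERTWIN parameters.

```python
import numpy as np

def histogram_timestamps(timestamps_ps, bin_ps=100, window_ps=50_000):
    """Accumulate raw photon timestamps into a fixed-size histogram,
    the on-chip pre-processing idea of point (v)."""
    hist = np.zeros(window_ps // bin_ps, dtype=np.uint32)
    for t in timestamps_ps:
        if 0 <= t < window_ps:
            hist[t // bin_ps] += 1
    return hist

# 10,000 photon arrivals (in ps): a peak near 12.5 ns plus background.
rng = np.random.default_rng(1)
signal = rng.normal(12_500, 150, size=2_000).astype(int)
background = rng.integers(0, 50_000, size=8_000)
hist = histogram_timestamps(np.concatenate([signal, background]))
print(hist.argmax() * 100, "ps")                 # peak bin near 12,500 ps
print(f"10,000 timestamps -> {hist.size} bins")  # output size reduction
```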

Friday, February 09, 2018

Elphel Quad 3D Camera Adds CNN and Tile Processor

The open-source Elphel camera project, which uses a quad camera for 3D vision, now adds a Convolutional Neural Network with the intention of reaching depth ranges of a few hundred to a few thousand meters with cameras spaced just 150mm apart:

"We plan to fuse the methods of high resolution images calibration and processing, already emulated functionality of the Tile Processor (TP), RTL code developed for its implementation and the Convolutional Neural Network (CNN). Compared to the CNN alone this approach promises over a hundred times reduction in the number of input features without sacrificing universality of the end-to-end processing. The TP part of the system is responsible for the high resolution aspects of the image acquisition (such as optical aberrations correction and image rectification), preserves deep sub-pixel super-resolution using efficient implementation of the 2-D linear transforms. Tile processor is free of any training, only a few hyperparameters define its operation, all the application-specific processing and “decision making” is delegated to the CNN."

SMIC CIS Sales Grow 70% YoY

SeekingAlpha publishes SMIC earnings call transcript with update on its CIS business:

"We have already pinpointed a number of key platforms to address and today I'll highlight two of them. Our NOR flash platform and CMOS image sensor platform. These two have revenue to SMIC grow almost 70% in last year, compared with the year before. We continue to build on our platform strategy and seek to expand our customers' business. We are working hard to implement this market adjustment strategy within the company."

More Details from Sony IEDM 2017 Presentation

Fuse publishes a few more slides from the Sony presentation on its 3-layer chip stacking flow at IEDM 2017.

"The final product is an impressive 19.3M pixels of 1.22 x 1.22 μm each and a 1 Gbit DRAM. Sony used TSVs that have a minimum diameter of 2.5 μm and a pitch of 6.3 μm with a line of 2 μm and space of 0.64 μm. In total they have over 35,000 TSVs – about 15,000 connecting the pixel substrate and the DRAM substrate and about 20,000 more connecting the DRAM substrate to the logic substrate.

The chip achieved 120 fps for all 19.3M pixels and can produce 960 fps FHD (1,920 x 1,080) super slow motion video.
"

3-layer stacking process flow
TEM cross-section

Thursday, February 08, 2018

Velodyne Talks about LiDAR Advantages, Tesla Denies the Need

Velodyne publishes a video on LiDAR advantages in automotive applications:




SeekingAlpha publishes Tesla Q4 2017 earnings call transcript with CEO Elon Musk saying:

Q: "Elon, on your autonomous vehicle strategy, why do you believe that your current hardware set of only camera plus radar is going to be able to get you to fully-validated autonomous vehicle system? Most of your competitors noted that they need redundancy from lidar hardware to given the robustness of the 3D point cloud and the data that's generated. What are they missing in their software stack and their algorithms that Tesla is able to obtain from just the camera and plus radar?

Further, what would be your response if the regulatory bodies required that level of redundancy from incremental lidar hardware?
"

Elon Musk: "Well, first of all, I should say there's actually three sensor systems. There are cameras, redundant forward cameras, there's the forward radar, and there are the ultrasonics for near field. So, the third is also – the third set is also important for near-field stuff, just as it is for human.

But I think it's pretty obvious that the road system is geared towards passive optical. We have to solve passive optical image recognition, extremely well in order to be able to drive in any given environment and the changing environment. We must solve passive optical image recognition. We must solve it extremely well.

At the point at which you have solved it extremely well, what is the point in having active optical, meaning lidar, which does not – which cannot read signs; it's just giving you – in my view, it is a crutch that will drive companies to a local maximum that they will find very difficult to get out of.

If you take the hard path of a sophisticated neural net that's capable of advanced image recognition, then I think you achieve the global maximum. And you combine that with increasingly sophisticated radar. And if you're going to pick an active photon generator, doing so in the 400 nanometer to 700 nanometer wavelength range is pretty silly, since you're getting that passively.

You would want to do active photon generation in the radar frequencies of approximately around 4 millimeters because that is occlusion penetrating. And you can essentially see through snow, rain, dust, fog, anything. So, it's just I find it quite puzzling that companies would choose to do an active photon system in the wrong wavelength. They're going to have a whole bunch of expensive equipment, most of which makes the car expensive, ugly and unnecessary. And I think they will find themselves at a competitive disadvantage.

Now perhaps I am wrong. In which case, I'll look like a fool. But I am quite certain that I am not.
"

Wednesday, February 07, 2018

Imec Lens-Free Microscope Relies on Super-Small Pixels

Imec's lens-free microscopy leverages the super-small pixels of modern sensors, currently using a 1.1um pixel pitch. Imec couples specialty laser-based illumination with holographic reconstruction software to produce images with better than 1um resolution over fields of view as large as 20-40 mm², in a very low-cost and compact form factor unmatched by any microscope technology so far.

If any CIS manufacturer is able to bring much smaller pixels (e.g. 0.6um) with large enough resolution to market in the near future, Imec expects it would enable a major breakthrough: characterization of even smaller objects below 500nm, such as proteins, bacteria and fine particulate matter pollution, would finally become possible.
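
Reconstruction in lens-free holographic microscopes of this kind is commonly done with the angular spectrum method, numerically back-propagating the recorded in-line hologram to the sample plane. Below is a generic sketch of that propagation step; the wavelength, propagation distance and random stand-in hologram are placeholders, and Imec's actual software certainly adds more (twin-image suppression, aberration handling, etc.).

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex optical field by distance z (meters) with the
    angular spectrum method; evanescent components are cut off."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)   # spatial frequencies, cycles/m
    fy = np.fft.fftfreq(ny, d=pitch)
    fx2, fy2 = np.meshgrid(fx**2, fy**2)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - fx2 - fy2, 0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Back-propagate a recorded in-line hologram to the sample plane (z < 0).
hologram = np.random.default_rng(3).random((512, 512))   # stand-in data
field = angular_spectrum_propagate(np.sqrt(hologram),
                                   wavelength=532e-9,    # green laser
                                   pitch=1.1e-6,         # 1.1um pixels
                                   z=-1e-3)              # 1 mm back
print((np.abs(field) ** 2).shape)  # reconstructed sample-plane intensity
```

Note how the sensor's pixel pitch directly sets the highest spatial frequency the method can capture, which is why smaller pixels translate into finer resolvable detail.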

Imec demos its lens-free platform in a Vimeo video:

Image Sensors Europe Interviews

Image Sensors Europe, to be held in mid-March 2018 in London, UK, publishes a number of interviews ahead of the conference.

Ian Riches, Global Automotive Practice at Strategy Analytics:

Q: What do you see as the most significant changes coming up in vision systems development and their applications within automotive in the next 12-24 months?

A:
a) Much more use of machine vision in the currently largely “dumb” applications of park assist and surround view.
b) Camera resolutions markedly increasing
c) A lot more in-cabin sensing

Albert Theuwissen, Founder of Harvest Imaging:

There are several reasons why I think (= am convinced) that monolithic CMOS imagers are superior to hybrid imagers:

  • The hybrid imagers are always based on 3T structures, while monolithic imagers can make use of 4T structures. This results in lower noise for the latter;
  • The dark current, dark current non-uniformities, and isolated hot-pixel count are always better for monolithic silicon;
  • Monolithic silicon has improved quite a lot w.r.t. its response to near-IR light, such that it shows even better (QE) performance in the near-IR than most of the hybrid imagers;
  • Monolithic silicon has a better signal-to-noise ratio than the hybrid imagers.

These statements are valid within the wavelength range of monolithic silicon (visible spectrum up to 1.1 um). Outside this wavelength range, the story can be completely different.

Q: What would you say are the 3 biggest game changers that will soon hit the image sensors industry and how can we prepare for it?

A:
  1. The increase of stacked imager technology - more companies are following this trend outside of just image-sensor companies, for example companies that (will) have signal-processing chips available that can be stacked to imagers. On the other hand, the stacking technology is quickly moving to the stacking on pixel level. This will result in ultra-fast devices with a huge amount of parallel processing capabilities on the (stacked) chip.
  2. The use of near-IR information will not only add more features to the cameras, but also introduce new applications, e.g. face recognition in mobile phones, measurement of distances, etc.
  3. Imagers are no longer mainly used for making beautiful images; more and more applications are being created that use image sensors for totally different functions. Examples are the time-of-flight applications, the autofocus pixels, use in autonomous driving cars, etc.

Funding News: Gigajot, Mantis Vision

PrZen: Gigajot has been awarded a National Science Foundation (NSF) Small Business Innovation Research (SBIR) grant for $225,000 to conduct R&D on a high-speed, high-resolution and high-sensitivity camera. This Quanta Image Sensor (QIS) camera will be the first megapixel CMOS camera in the market with photon-counting capability and room temperature operation. This prototype product can be beneficial in scientific and medical imaging, life science, astronomical imaging, and other applications.

"Scientists and medical doctors are not satisfied with the cameras they use in the laboratories and hospitals," said Saleh Masoodian, CEO, Gigajot. "By implementing the Gigajot's QIS devices into the scientific and medical cameras, scientists and researchers will be able to conduct more accurate measurements and researches with the innovative imaging technology."

"The novel technology is based on the mainstream commercial CMOS fabrication processes to realize high-yield and low-cost production. Besides the scientific imaging products, this technology will ultimately improve the performance of consumer imaging devices," said Jiaju Ma, CTO, Gigajot.

Once a small business is awarded a Phase I SBIR/STTR grant (up to $225,000), it becomes eligible to apply for a Phase II grant (up to $750,000). Small businesses with Phase II grants are eligible to receive up to $500,000 in additional matching funds with qualifying third-party investment or sales.


Yahoo: Mantis Vision, a 12-year-old maker of 3D structured-light cameras, is about to get a $36M investment from China-based Luenmei Quantum Ltd.