Wednesday, March 21, 2018

Recent Progress of Visible Light Image Sensors

CERN publishes Nobukazu Teranishi's 58-page presentation "Recent Progresses of Visible Light Image Sensors" from the Detector Seminar at CERN on February 23, 2018. There are many interesting slides, including spares at the end. Here is just a small part of the content:

Tuesday, March 20, 2018

Nikkei Reviews Sony Paper at ISSCC

Nikkei publishes a 4-part review of Sony ISSCC 2018 presentation on event-driven sensor:

ST Talk about Dirty Glass in ToF Imaging

ST video presents the issues caused by a dirty cover glass in ToF devices, followed by a fairly obvious solution:

Monday, March 19, 2018

Up-Conversion Device to Give 1550nm Sensitivity to CMOS Sensors

Nocamels, The Times of Israel: Gabby Sarusi from Ben-Gurion University of the Negev "has developed a stamp-like device of which one side reads 1,500-nanometer infrared wavelengths, and converts them to images that are visible to the human eye on the other side of the stamp. This stamp — basically a film that is half a micron in thickness — is composed of nano-metric layers, nano-columns and metal foil, which transform infrared images into visible images."

An infrared sensor costs around $3,000, Sarusi said. A regular vision sensor used by autonomous cars costs $1-$2. So, by adding the nanotech layers, which cost around $5, Sarusi said, one can get an infrared sensor for about $7-$8.
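The article's cost arithmetic can be checked directly (illustrative figures taken from the quoted numbers; the $5 film cost is Sarusi's own estimate):

```python
# Figures quoted in the article (USD); illustrative only.
ir_sensor_cost = 3000.0       # standalone infrared sensor
visible_sensor_cost = 2.0     # upper end of the $1-$2 CMOS sensor range
upconversion_film_cost = 5.0  # Sarusi's nano-layer "stamp"

combined = visible_sensor_cost + upconversion_film_cost
print(combined)                          # 7.0 -- within the quoted $7-$8 range
print(round(ir_sensor_cost / combined))  # ~429x cheaper than a dedicated IR sensor
```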

Thanks to DS for the pointer!

Omnivision Nyxel Technology Wins Smart Products Leadership Award

Frost & Sullivan’s Manufacturing Leadership Council recognizes OmniVision with its Smart Products and Services Leadership Award for the Nyxel NIR imaging technology.

Sunday, March 18, 2018

SF Current and RTN

Japanese Journal of Applied Physics publishes the Tohoku University paper "Effect of drain current on appearance probability and amplitude of random telegraph noise in low-noise CMOS image sensors" by Shinya Ichino, Takezo Mawaki, Akinobu Teramoto, Rihito Kuroda, Hyeonwoo Park, Shunichi Wakashima, Tetsuya Goto, Tomoyuki Suwa, and Shigetoshi Sugawa. It turns out that a lower SF current can reduce RTN, at least for the 0.18um process used in the test chip:

Saturday, March 17, 2018

ST Announces 4m Range ToF Sensor

The VL53L1X ToF sensor extends the detection range of ST's FlightSense technology to four meters, bringing high-accuracy, low-power distance measurement and proximity detection to an even wider variety of applications. The fully integrated VL53L1X measures only 4.9mm x 2.5mm x 1.56mm, allowing use even where space is very limited. It is also pin-compatible with its predecessor, the VL53L0X, allowing easy upgrading of existing products. The compact package contains the laser driver and emitter as well as the SPAD array light receiver that gives ST’s FlightSense sensors their ranging speed and reliability. Furthermore, the 940nm emitter, operating in the non-visible spectrum, eliminates distracting light emission and can be hidden behind a protective window without impairing measurement performance.

ST publishes quite a detailed datasheet with the performance data:
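As a reminder of the direct time-of-flight principle behind FlightSense, here is a generic sketch of the physics (not ST's actual ranging firmware):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Direct time-of-flight: the photon travels to the target and back,
    so the distance is half the round-trip path length."""
    return C * round_trip_s / 2.0

# A target at the VL53L1X's 4 m limit returns light after ~26.7 ns:
round_trip = 2 * 4.0 / C
print(tof_distance_m(round_trip))  # 4.0
```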

GM 4th Gen Self-Driving Car Roof Module

GM has started production of a roof module for its fourth-generation Cruise AV featuring 5 Velodyne LiDARs and at least 7 cameras:

Friday, March 16, 2018

MEMSDrive OIS Technology Presentation

MEMSDrive kindly sent me a presentation on its OIS technology:

Pictures from Image Sensors Europe 2018

A few assorted pictures from the Image Sensors Europe conference being held these days in London, UK.

From Ron's (Vision Markets) Twitter:

From the Image Sensors Twitter:

From X-Fab presentation:

Thursday, March 15, 2018

Rumor: Mantis Vision 3D Camera to Appear in Samsung Galaxy S10 Phone

Korean newspaper The Investor quotes local media reports that Mantis Vision and camera module maker Namuga are developing a 3-D sensing camera for Samsung's next-generation Galaxy S smartphone, tentatively called the Galaxy S10. Namuga is also providing 3-D sensing modules for Intel’s RealSense AR cameras.

TechInsights: Samsung Galaxy S9+ Cameras Cost 12.7% of BOM

TechInsights' Samsung Galaxy S9+ cost table estimates the cameras' cost at $48 out of a $379 total. The previous-generation S8 camera was estimated at $25.50, or 7.8% of the total BOM.

TechInsights publishes a cost comparison of this year's and last year's flagship phones. The Galaxy S9+ appears to have the largest investment in camera and imaging hardware:

ICFO Graphene Image Sensors

ICFO food analyzer demo at MWC in Barcelona in February 2018:

UV graphene sensors:

Samsung CIS Production Capacity to Beat Sony

ETNews reports that Samsung is to convert its 300mm DRAM 13 line in Hwasung to CMOS sensor production. Since last year, the company has also been working to convert its DRAM 11 line in Hwasung into an image sensor line (named the S4 line). Conversion of the S4 line will be done by the end of this year. Right after that, Samsung is going to convert the 300mm 13 line, which can produce about 100,000 DRAM wafers per month. Because image sensors require more manufacturing steps than DRAM, the production capacity is said to be reduced by about 50% after conversion.

“At the end of last year, the image sensor production capacity of the 300mm plant, based on wafer input, was about 45,000 units,” said an ETNews source. “Because the image sensor capacities to be added from the 11 line and 13 line will exceed 70,000 units per month, Samsung Electronics will have a production capacity of 120,000 units of image sensors after these conversion processes are over.”

Sony CIS capacity is about 100,000 wafers per month. Even with Sony's capacity expansion plans accounted for, Samsung should be able to match or exceed Sony's production capacity.

While increasing production capacity of 300mm CIS lines for 13MP and larger sensors, Samsung is planning to slowly decrease output of 200mm line located in Giheung.

Samsung's capacity expansion demonstrates its market confidence: the company believes that its image sensor capabilities approach Sony's, and it now has more than 10 outside CIS customers.

Wednesday, March 14, 2018

ULIS Video

ULIS publishes a promotional video about its capabilities and products:

Vivo Announces SuperHDR

One of the largest smartphone makers in China, Vivo, announces its AI-powered Super HDR, which follows the same principles as regular multi-frame HDR but merges more frames.

The Super HDR’s DR is said to reach up to 14 EV. With a single press of the shutter, Super HDR captures up to 12 frames, significantly more than former HDR schemes. AI algorithms are used to adapt to different scenarios. The moment the shutter is pressed, the AI will detect the scene to determine the ideal exposure strategy and accordingly select the frames for merging.
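Vivo's actual Super HDR pipeline (including the AI frame selection) is not public, but the general multi-frame merge it builds on can be sketched in a few lines of NumPy. The weighting scheme below is a common textbook choice, not Vivo's:

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Naive multi-frame HDR merge -- an illustrative sketch only.
    Each 8-bit frame is normalized by its exposure time, then averaged
    with weights that favor well-exposed, mid-range pixels."""
    acc = np.zeros_like(np.asarray(frames[0], dtype=np.float64))
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        img = np.asarray(img, dtype=np.float64)
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0  # "hat" weight, peaks at mid-gray
        w = np.maximum(w, 1e-3)                    # avoid zero weight at the clip points
        acc += w * (img / t)                       # scale to a common radiance axis
        wsum += w
    return acc / wsum

# Two frames of the same scene: a short and a long exposure.
short = np.array([[40.0, 250.0]])   # dark, but highlights not clipped
long_ = np.array([[160.0, 255.0]])  # bright, highlight clipped at 255
radiance = merge_hdr([short, long_], [0.25, 1.0])
```

The second pixel shows the point of the exercise: the short exposure recovers highlight detail that is clipped in the long one.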

Alex Feng, SVP at Vivo, says: “Vivo continues to push the boundaries and provide the ultimate camera experience for consumers. This goes beyond just adding powerful functions, to developing innovations that our users can immediately enjoy. Today’s showcase of Super HDR is an example of our continued commitment to mobile photography, to enable our consumers to shoot professional-quality photos at the touch of a button. Using intelligent AI, Super HDR can capture more detail under any conditions, without additional demands on the user.”

Tuesday, March 13, 2018

Prophesee Expands Event Driven Concept to LiDARs

EETimes publishes an article on event-driven image sensors such as Prophesee's (formerly Chronocam) Asynchronous Time-Based Image Sensor (ATIS) chip.

The company CEO Luca Verre "disclosed to us that Prophesee is exploring the possibility that its event-driven approach can apply to other sensors such as lidars and radars." Verre asked: “What if we can steer lidars to capture data focused on only what’s relevant and just the region of interest?” If it can be done, it will not only speed up data acquisition but also reduce the data volume that needs processing.
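The "only what's relevant" idea can be sketched generically for DVS-style event streams (hypothetical types and names below, not Prophesee's actual API or data format):

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Hypothetical DVS-style event; real formats differ per vendor."""
    x: int
    y: int
    t_us: int      # timestamp in microseconds
    polarity: int  # +1 brightness increase, -1 decrease

def roi_filter(events, x0, y0, x1, y1):
    """Keep only events inside a region of interest, discarding the rest
    before any downstream processing -- the 'steer to what's relevant' idea."""
    return [e for e in events if x0 <= e.x < x1 and y0 <= e.y < y1]

events = [Event(10, 10, 0, 1), Event(200, 5, 1, -1), Event(12, 11, 2, 1)]
kept = roi_filter(events, 0, 0, 100, 100)
print(len(kept))  # 2 of 3 events survive the ROI cut
```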

Prophesee is currently “evaluating” the idea, said Verre, cautioning that it will take “some months” before the company can reach a conclusion. But he added, “We’re quite confident that we can pull it off.”

Asked about Prophesee’s new idea — to extend the event-driven approach to other sensors — Yole Développement’s analyst Cambou told us, “Merging the advantages of an event-based camera with a lidar (which offers the “Z” information) is extremely interesting.”

Noting that problems with traditional lidars are tied to limited resolution — “relatively less than typical high-end industrial cameras” — and the speed of analysis, Cambou said that the event-driven approach can help improve lidars, “especially for fast and close-by events, such as a pedestrian appearing in front of an autonomous car.”

Samsung Galaxy S9+ Cameras

TechInsights publishes an article on Galaxy S9+ reverse engineering including its 4 cameras - a dual rear camera, a front camera and an iris recognition sensor:

"We are excited to analyze Samsung's new 3-stack ISOCELL Fast 2L3 and we'll be publishing updates as our labs capture more camera details.

Samsung is not first to market with variable mechanical apertures or 3-layer stacked image sensors, however the integration of both elements in the S-series is a bold move to differentiate from other flagship phones.

The S9 wide-angle camera system, which integrates a 2 Gbit LPDDR4 DRAM, offers similar slo-mo video functionality, with 0.2 s of video expanded to 6 s of slo-mo captured at 960 fps. Samsung promotes the memory buffer as beneficial to still photography mode, where higher-speed readout can reduce motion artifacts and facilitate multi-frame noise reduction."
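The quoted slo-mo figures are easy to verify (assuming a standard 30 fps playback rate, which the article does not state):

```python
capture_fps = 960
capture_seconds = 0.2
playback_fps = 30  # assumed playback rate

buffered_frames = int(capture_fps * capture_seconds)  # frames held in the stacked DRAM
slomo_seconds = buffered_frames / playback_fps
print(buffered_frames, slomo_seconds)  # 192 frames -> 6.4 s, matching the quoted ~6 s
```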

iFixit reverse engineering report publishes nice pictures showing a changing aperture on the wide-angle rear camera:

Monday, March 12, 2018

3DInCites Awards

Phil Garrou's IFTLE 374 reviews 3DInCites Award winners. Two of them are related to image sensors:

Device of the Year: OS05A20 Image Sensor with Nyxel Technology, OmniVision:

"OmniVision’s OS05A20 Image Sensor was nominated for being the first of its image sensors to be built with Nyxel™ Technology. This approach to near-infrared (NIR) imaging combines thick-silicon pixel architectures with careful management of wafer surface texture to improve quantum efficiency (QE), and extended deep trench isolation to help retain modulation transfer function without affecting the sensor’s dark current. As a result, this image sensor sees better and farther under low- and no-light conditions than previous generations."

Engineer of the Year: Gill Fountain, Xperi:

"Known as Xperi’s guru on Ziptronix’ technologies, Gill was nominated for his most recent contribution, expanding the chemical mechanical polishing process window for Cu damascene with relatively fine features. His team developed a process that delivers uniform, smooth Cu/Ta/Oxide surfaces with a controlled Cu recess and very small variance across wafer sizes. He has been an integral part of Xperi’s technical team, and his work allows the electronics industry to apply direct bond interconnect (DBI) for high-volume wafer-to-wafer applications."

Interview with Steven Sasson

IEEE publishes an interview with Steven J. Sasson who invented the first digital camera in 1975 while working at Eastman Kodak, in Rochester, N.Y. A notable Q&A:

Q: What tech advance in recent years has surprised you the most?

A: Cameras are everywhere! I would have never anticipated how ubiquitous the imaging of everything would become. Photos have become the universal form of casual conversation. And cameras are present in almost every type of environment, including in our own homes. I grossly underestimated how quickly we would get here.

Beer Identification with Hamamatsu Micro-spectrometer

Hamamatsu publishes a beer identification article showing it as an application for its micro-spectrometers:

Forza Silicon Applies Machine Learning to Production Yield Improvement

BusinessWire: Forza Silicon CTO, Daniel Van Blerkom, is to present a paper titled “Accelerated Image Sensor Production Using Machine Learning and Data Analytics” at Image Sensors Europe 2018 in London on March 15, 2018.

Machine learning has been applied to sensor data sets to identify and measure critical yield-limiting defects. “Image sensors offer the unique opportunity to image the yield-limiting defect mechanisms in silicon,” said Daniel Van Blerkom. “By applying machine learning to image sensor test procedures, we’re able to quickly and easily classify sensor defects, identify root causes, and feed the results back to improve the process, manufacturing flow, and sensor design for our clients.”

ON Semi Announces X-Class CMOS Image Sensor Platform

BusinessWire: ON Semiconductor announces the X-Class image sensor platform, which allows a single camera design to support multiple sensors across the platform. The first devices in the new platform are the 12MP XGS 12000 and the 4K/UHD-resolution XGS 8000 sensors for machine vision, intelligent transportation systems, and broadcast imaging applications.

The X-Class image sensor platform supports multiple CMOS pixel architectures within the same image sensor frame. This allows a single camera design to support multiple product resolutions and different pixel functionality, such as larger pixels that trade resolution at a given optical format for higher imaging sensitivity, designs optimized for low noise operation to increase DR, and more. By supporting these different pixel architectures through a common high bandwidth, low power interface, camera manufacturers can leverage existing parts inventory and accelerate time to market for new camera designs.

The initial devices in the X-Class family, the XGS 12000 and XGS 8000, are based on the first pixel architecture to be deployed in this platform – a 3.2 µm global shutter CMOS pixel. The XGS 12000 12 MP device is planned to be available in two speed grades – one that fully utilizes 10GigE interfaces by providing full resolution speeds up to 90 fps, and a lower price version providing 27 fps at full resolution that aligns with the bandwidth available from USB 3.0 computer interfaces. The XGS 8000 is also planned to be available in two speed grades (130 and 75 fps) for broadcast applications.
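A back-of-the-envelope bandwidth check shows why the two XGS 12000 speed grades line up with 10GigE and USB 3.0 (10-bit output is an assumption here; the datasheet defines the actual bit depth and overhead):

```python
def bandwidth_gbps(megapixels: float, fps: float, bits_per_pixel: int) -> float:
    """Raw pixel data rate in Gb/s, ignoring protocol overhead."""
    return megapixels * 1e6 * fps * bits_per_pixel / 1e9

print(bandwidth_gbps(12, 90, 10))  # 10.8 -> saturates a 10GigE link
print(bandwidth_gbps(12, 27, 10))  # 3.24 -> fits within USB 3.0's 5 Gb/s
```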

“As the needs of industrial imaging applications such as machine vision inspection and industrial automation continue to advance, the design and performance of the image sensors targeting this growing market must continue to evolve,” said Herb Erhardt, VP and GM, Industrial Solutions Division, Image Sensor Group at ON Semiconductor. “With the X-Class platform and devices based on the new XGS pixel, end users have access to the performance and imaging capabilities they need for these applications, while camera manufacturers have the flexibility they require to develop next-generation camera designs for their customers both today and in the future.”

The XGS 12000 and XGS 8000 will begin sampling in 2Q2018, with production availability scheduled for 3Q2018. Additional devices based on the 3.2 µm XGS pixel, as well as products based on other pixel architectures, are planned for the X-Class family in the future.

ON Semiconductor also announces a fully AEC-Q100 qualified version of its circa-2016 2.1 MP CMOS sensor, the AR0237, for OEM-fitted dash cams and the aftermarket in-car DVR market.

The AR0237AT is a cost-optimized, automotive qualified version of the same sensor that can operate across the full automotive operating temperature range of -40°C to +105°C and deliver the right performance at the right price point. The low-light performance of the AR0237AT is improved when it is coupled to a Clarity+ enabled DVR processor. ON Semiconductor’s Clarity+ technology employs filtering to optimize the SNR of automotive imaging solutions, which can deliver an additional 2X increase in light capture.

Sunday, March 11, 2018

Adafruit Publishes ST FlightSense Performance Data

Adafruit publishes a datasheet for its distance sensor based on the ST SPAD-based ToF chip VL53L0X.

Update: Upon a closer look, the official ST VL53L0X datasheet has all these tables with the performance data.