The difference between stacked CMOS, back-illuminated CMOS and traditional CMOS sensors
August 23, 2021
The photoelectric effect was discovered by Hertz (after whom the unit of frequency is named), but it was correctly explained by Einstein. Simply put, when light or certain other electromagnetic waves strike certain photosensitive materials, electrons are released; this is the photoelectric effect.
This turns light into electricity: a change in the optical signal produces a change in the electrical signal. People used this principle to invent the photosensitive element, i.e. the image sensor.
There are two familiar types of photosensitive elements: CCD and CMOS. Early CMOS sensors were much worse than CCDs, but as the technology developed, CMOS image quality took a qualitative leap, and CMOS is cheap and power-efficient.
Sensor structure technology
Traditional (front-illuminated) CMOS, back-illuminated CMOS, and stacked CMOS
The biggest and most basic difference lies in the structure. The final image is shaped not only by the CMOS sensor itself but also by the lens and the camera's processing algorithms. In fact, a more advanced structure is not automatically better; it depends on the process used (such as 180 nm immersion lithography versus 500 nm dry etching) and the circuit technology (such as Sony's signature "Exmor" noise-reduction readout, with per-column parallel independent analog CDS, analog-to-digital conversion, and digital CDS).
Excellent process and technology can deliver better quantum efficiency, thermal noise, gain, full-well capacity, dynamic range, sensitivity and other key indicators even without a newer sensor structure. Under identical technology and craftsmanship, however, the more advanced structure does have a clear advantage. Human progress is a matter of continually discovering and solving problems; back-illuminated and stacked CMOS emerged precisely to solve the problems of earlier CMOS designs.
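The CDS (correlated double sampling) readout mentioned above can be sketched in a few lines. This is an illustrative model only, not Sony's actual circuit: each pixel is read twice, once at its reset level and once after exposure, and subtracting the two reads cancels the per-pixel reset offset that would otherwise corrupt a single read.

```python
import random

def read_pixel_cds(signal_electrons, reset_offset):
    """Illustrative correlated double sampling: two reads, one subtraction.

    reset_offset models the per-pixel reset noise that would otherwise
    corrupt a single read; it cancels exactly in the subtraction.
    """
    reset_sample = reset_offset                       # read 1: reset level only
    signal_sample = reset_offset + signal_electrons   # read 2: reset + signal
    return signal_sample - reset_sample               # offset cancels

# The same true signal is recovered regardless of the random offset.
offsets = [random.randint(-50, 50) for _ in range(5)]
reads = [read_pixel_cds(1000, off) for off in offsets]
```

The real Exmor readout performs this per column, in parallel, in both the analog and digital domains; the sketch only shows why the subtraction removes offset noise.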
Traditional (front-illuminated) CMOS
Compare the cross-sectional diagrams of the front-illuminated and back-illuminated structures:
The traditional CMOS is the "front-illuminated" structure on the left side of the figure. A typical CMOS pixel consists of the following parts: a microlens, an on-chip color filter, metal wiring (the circuit layer), a photodiode, and the substrate. When light enters the pixel, it passes through the on-chip lens and the color filter, then through the metal wiring layer, and is finally received by the photodiode.
Microlens: a very small convex lens on each physical pixel of the CMOS that converges light.
Color filter: decomposes the incident light into RGB components. The Bayer arrangement we sometimes hear about refers to the layout of these filters, such as the classic RGGB pattern.
Metal wiring: usually several layers, mainly for signal transmission.
Photodiode: the actual photosensitive part of the CMOS sensor, where the photoelectric effect occurs.
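The RGGB Bayer layout mentioned under "color filter" is easy to illustrate: each physical pixel samples only one color channel, chosen by its position in a repeating 2x2 tile. A minimal sketch:

```python
def bayer_color(row, col):
    """Return the filter color over pixel (row, col) in an RGGB Bayer mosaic.

    The pattern repeats every 2x2 pixels:
        R G
        G B
    Each pixel records only one of the three channels; the other two are
    later interpolated (demosaiced) from neighboring pixels.
    """
    tile = [["R", "G"],
            ["G", "B"]]
    return tile[row % 2][col % 2]

# Top-left 4x4 corner of the sensor's color filter array.
cfa = [[bayer_color(r, c) for c in range(4)] for r in range(4)]
```

Note that half of the pixels are green: the human eye is most sensitive to green, which is why RGGB doubles up on that channel.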
Everyone knows that metal is opaque and reflects light. Light entering the metal wiring layer is therefore partially blocked and reflected, and due to process limitations, only about 70% or less of the light reaches the photodiode after passing through that layer. The reflected light can also cross-talk into neighboring pixels, causing color distortion. (The metal currently used in the wiring layer of mid- and low-end CMOS sensors is relatively cheap aluminum (Al), which maintains a reflectivity of roughly 90% across the visible band of 380-780 nm.)
These shortcomings of the front-illuminated design led to the back-illuminated (BSI) CMOS design. It places the circuit layer behind the photodiode, so light shines on the photodiode almost without obstruction or interference. Light utilization is far higher, so a back-illuminated sensor makes better use of the incident light and delivers better image quality in low-illuminance environments.
Back-illuminated CMOS thus has higher light utilization efficiency and higher sensitivity in low light. At the same time, because the circuit no longer blocks the photodiode, the circuit layer can be made thicker, accommodating more processing circuitry and increasing signal processing speed.
Compared with ordinary front-illuminated sensors, devices with back-illuminated sensors gain roughly 30%-50% sensitivity in low light, so they can shoot higher-quality, lower-noise photos and videos in dim environments. The richer processing circuitry can also handle raw image signals with larger data volumes.
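The two figures quoted in this article are consistent with each other. If a front-illuminated pixel delivers only about 70% of the incoming light to the photodiode while a back-illuminated pixel delivers close to 100%, the expected sensitivity gain is about 1/0.7, roughly 43%, squarely inside the quoted 30%-50% range. A quick check, taking the article's numbers as assumptions:

```python
# Fraction of incident light reaching the photodiode (the article's figures).
front_illuminated_transmission = 0.70   # <= 70% survives the metal wiring layer
back_illuminated_transmission = 1.00    # wiring moved behind the photodiode

# Relative sensitivity gain of back-illuminated over front-illuminated
# at the same exposure.
gain = back_illuminated_transmission / front_illuminated_transmission - 1.0
print(f"BSI sensitivity gain: {gain:.0%}")   # prints "BSI sensitivity gain: 43%"
```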
Stacked CMOS first appeared in Sony sensors for mobile devices. The original motivation for stacking was not to shrink the camera module; that is just a side benefit.
The production of CMOS is similar to the production of a CPU: a special photolithography machine patterns the silicon wafer to form a pixel section and a circuit section. The pixel area is where the pixels are "planted", while the processing circuit is the control circuitry that manages that group of pixels.
In the diagram: 1 is the pixel area, 2 is the processing circuit.
Etching raises a problem. Take Sony's small mobile-phone CMOS as an example: the pixel area can be manufactured with a 65 nm process (which can be loosely understood as manufacturing precision), but 65 nm is not fine enough for the processing circuit. If the circuit could be manufactured with a 45 nm process, the number of transistors in it would roughly double, image data could be read from the pixels and processed faster, and picture quality would improve. But because both areas are etched on the same piece of silicon, they cannot be manufactured with two different processes.
The obvious idea, then, is to separate the two areas: put the pixel area on one silicon die manufactured with a 65 nm process, put the processing circuit on another die manufactured with a 45 nm process, and then stack the two together. The contradiction is resolved. This is stacked CMOS.
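The claimed doubling of transistor count can be sanity-checked with the usual first-order scaling rule: transistor density goes roughly as the inverse square of the feature size, so moving from 65 nm to 45 nm gives about (65/45)^2 ≈ 2.1x the transistors in the same area. A rough estimate under that assumption:

```python
def density_scaling(old_node_nm, new_node_nm):
    """First-order estimate: transistor density scales as 1 / (feature size)^2."""
    return (old_node_nm / new_node_nm) ** 2

factor = density_scaling(65, 45)
print(f"65 nm -> 45 nm: about {factor:.1f}x the transistors per unit area")
```

Real process nodes do not scale this cleanly, but the estimate matches the article's "roughly double" claim.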
In the diagram: 1 is the pixel area, 2 is the processing circuit, 3 is the cache.
With a stacked structure, we get more transistors in the processing circuit and faster speeds. Features that were previously hard to achieve, such as HDR, are now common. Readout is also faster, so the rolling-shutter ("jelly") effect is smaller. Moreover, since the pixel area and the processing circuit are stacked, the pixel area can be made larger.
Stacking also enables some special technologies. For example, our common Bayer arrangement is mostly RGGB, and picture brightness is computed from the RGB values through the luminance equation Y = 0.299R + 0.587G + 0.114B. With stacking, a new Bayer-style arrangement, RGBW, was developed: RGB are the usual red, green and blue, while W is white and responds to overall brightness. This greatly improves the sensor's low-light sensitivity.
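The luminance equation quoted above (the BT.601 weighting) can be applied directly. The RGBW remark also follows from it: a white pixel responds to the full R+G+B light rather than only one filtered channel, so it collects more photons per pixel in dim scenes. A small sketch:

```python
def luma(r, g, b):
    """Perceived brightness from RGB using Y = 0.299R + 0.587G + 0.114B."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# The three weights sum to 1, so pure white carries full brightness,
# while each primary alone carries only its own weight.
y_white = luma(255, 255, 255)   # about 255.0
y_green = luma(0, 255, 0)       # about 149.7: green dominates perceived brightness
y_blue = luma(0, 0, 255)        # about 29.1: blue contributes least
```

This is also why Bayer mosaics double up on green: the green channel carries most of the perceived brightness information.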
Stacked, back-illuminated and front-illuminated are three independent structural categories with no subordination between them. We can combine back-illumination with a stacked structure to maximize the advantages.
To improve the efficiency of light collection by the pixels, optical waveguides need to be introduced. The dry etching of the optical waveguide damages the silicon wafer and the pixel area, so a heat-treatment step called annealing is required to let them recover, and the entire CMOS die must be heated. Here comes the problem: after such heating, the processing circuit on the same wafer inevitably suffers. The resistance and capacitance values of components that have already been "built" shift after annealing, and this damage affects the readout of the electrical signals. The processing circuit takes collateral damage, yet the annealing of the pixel area is necessary.
There is another problem. The CMOS process Sony currently uses for mobile sensors is 65 nm dry etching, and 65 nm is entirely sufficient for "planting" the pixel area.
However, 65 nm is not fine enough to "build" the processing circuit. If a finer process (45 nm in practice) could be used for the circuit, the number of transistors in it would nearly double, the handling of data from the pixel area would take a qualitative leap, and picture quality would improve accordingly. But because they are made on the same wafer, the pixel and circuit areas must use the same process. The processing circuit: "Why is it always me who suffers?" If both demands could be met at once, it would be great! So Sony's engineers came up with an idea at the wafer-substrate level. Look at the structure diagram first: originally, the processing circuit was built on the same wafer as the pixel area.
What if the processing circuit were moved onto a separate wafer?
First, heating is used to separate the two layers, exploiting the difference in thermal conductivity between the SOI layer and the substrate. The pixel area is made on a 65 nm line, while the processing circuit is made on a finer 45 nm line. The two are then bonded together, and the stacked CMOS is born. The two problems above, ① the circuit area taking collateral damage when the pixels are annealed, and ② the process limitation of sharing a single wafer, are both solved! The stacked type not only inherits the advantages of the back-illuminated type (the pixel area is still back-illuminated), it also overcomes the back-illuminated type's limitations and defects in production.
With the improved processing circuit, the camera can also offer more functions, such as hardware HDR and slow-motion shooting. With the pixels and processing circuit separated, the camera module can get smaller without losing function or performance; on the contrary, it gets better. The pixel area (the CMOS die size) can be enlarged to hold more or larger pixels, and the processing circuit can be optimized independently (most importantly, it no longer takes collateral damage during annealing).