In-depth understanding of Camera basic knowledge points (1)
November 18, 2023
The hardware layer of the camera sits at the bottom of the entire framework. Through its hardware modules it receives light and shadow from the physical world, converts them into digital signals a computer can understand, and continuously delivers stable, high-quality image data in a defined format. The whole layer is both complex and efficient. It is fair to say that a solid hardware foundation is the bedrock of the entire camera stack: only on good ground can you build a skyscraper. Next, let's introduce each component in this layer.
2. Basic hardware structure
Today's camera hardware systems are complicated, but if you look closely you will find that the core comes down to just three components: the lens, the photosensitive device (sensor), and the image processor. The lens focuses light, the sensor performs photoelectric conversion, and the image processor processes the resulting image data. We will explore the world of camera systems starting from these three components.
1. Lens

Turn the clock back to elementary school. The teacher once assigned us a small project: build a simple pinhole-imaging model. That model is the oldest and simplest imaging system, but it left me with a lingering question: why was the image so blurry? The question was only answered after I came into contact with a real camera. It turns out everything comes down to light.
In pinhole imaging, the light source sits on one side of the hole and the imaging plane on the other; countless rays pass through the hole and land on the plane, forming an image of the source. But there is a problem: light diverges in all directions. A ray leaving one point of the source passes through the hole and reaches some point on the imaging plane, yet that same point also receives rays emitted from other points of the source. The rays overlap and interfere with one another, degrading the final image. The lens was invented precisely to solve this. A camera lens is essentially the convex lens we know from daily life: using refraction, it takes the diverging rays from a single object point and re-converges them to a single point, dramatically improving the image. That point of re-convergence is the image point of the object point behind the lens, and as the object point moves, its image point moves with it. The point where rays arriving from infinity converge after passing through the lens is called the focus, and the distance from the focus to the center of the lens is called the focal length. Once a lens is manufactured, its focal length is fixed.
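The relationship between object distance, image distance, and focal length described above is the thin-lens equation, 1/f = 1/u + 1/v. A minimal sketch in Python (the helper name `image_distance` is my own, for illustration only):

```python
def image_distance(f_mm: float, u_mm: float) -> float:
    """Image distance v for a thin lens: 1/f = 1/u + 1/v  =>  v = 1/(1/f - 1/u)."""
    if u_mm <= f_mm:
        raise ValueError("an object inside the focal length forms no real image")
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

# For a 50 mm lens: rays from (effectively) infinity converge at the focal
# length itself, while a closer subject converges slightly farther back.
v_far = image_distance(50.0, 10_000_000.0)   # subject ~10 km away
v_near = image_distance(50.0, 1000.0)        # subject 1 m away
```

This is why the image plane shifts as the subject approaches, and why the lens must move to keep the sensor on that plane (the focusing discussed below).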
2. Aperture and shutter
For a finished lens, the diameter cannot be changed at will, so a component called the aperture is added. It is generally a set of blades forming a regular-polygonal or circular opening, and by adjusting how far it opens or closes, it controls the instantaneous amount of light entering the lens. The aperture alone, however, cannot control the total amount of light; that requires another component, the shutter, which mainly determines the length of the exposure. The earliest shutters were simply covers adjusted in front of the lens. As technology advanced, several implementations evolved: the mechanical shutter, driven purely by springs or other mechanical structures with no electric drive or speed control; the electromagnetically driven shutter, timed and driven electrically by motors and magnets; and the fully electronic shutter, which has no mechanical structure at all, offering very high shutter speeds and fast burst-capture rates, at the cost of being prone to highlight overflow (blooming).
The aperture controls the instantaneous light intake and the shutter controls the exposure time. Working together, they control the total amount of light entering the camera, faithfully reproduce the scene's light and shadow, and avoid overexposure, greatly improving overall image quality.
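The cooperation of aperture and shutter is commonly summarized by the exposure value, EV = log2(N²/t), where N is the f-number and t the shutter time: different aperture/shutter pairs with the same EV admit the same total light. A small sketch (the function name is hypothetical):

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """Exposure value at base ISO: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# f/2 at 1/100 s and f/4 at 1/25 s are equivalent exposures:
# one stop smaller aperture (x2 f-number = 1/4 the light) is compensated
# by a four-times-longer shutter time.
ev_a = exposure_value(2.0, 1 / 100)
ev_b = exposure_value(4.0, 1 / 25)
```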
3. Focus motor
As mentioned above, incident rays converge along a cone after passing through the lens to a point, called the image point, and then diverge again in a cone. Rays from all object points at the same distance converge onto their respective image points, which together form a plane, generally called the image plane. Because every image point lies on it, the picture is sharp on this plane. The essence of modern focusing is to move the lens so that the image plane coincides with the plane of the sensor, producing a sharp image on the sensor. Focusing can be done by moving the lens by hand, but the more mainstream approach today moves the lens automatically with a device called a focus motor. With the continued development of the technology, autofocus strategies have emerged, including phase-detection autofocus and contrast-detection autofocus. The basic principle is the same: shift the lens back and forth until the image plane coincides with the sensor's light-sensitive plane, yielding a sharp image. In addition, more complex camera systems generally use a group of several lens elements to obtain better image quality: first, the combination can eliminate chromatic aberration; second, the motor can adjust the spacing between elements to dynamically change the focal length of the whole lens group, meeting the imaging needs of more complex scenes.
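Contrast-detection autofocus, mentioned above, can be sketched as a loop that sweeps lens positions and keeps the position whose frame has the highest local contrast. The toy below scores fabricated 1-D "scanlines" with a neighbor-difference metric (my own simplification; a real AF loop scores 2-D sensor crops, e.g. with a Laplacian, while driving the motor):

```python
def contrast_score(scanline):
    """Sum of squared differences between neighbours: higher means sharper."""
    return sum((b - a) ** 2 for a, b in zip(scanline, scanline[1:]))

def autofocus(frames_by_position):
    """Return the lens position whose frame scores highest."""
    return max(frames_by_position, key=lambda p: contrast_score(frames_by_position[p]))

# Hypothetical frames captured at three lens positions: the in-focus frame
# has the hardest edge, hence the largest contrast score.
frames = {
    0: [10, 11, 12, 12, 11, 10],   # badly out of focus: soft edge
    1: [10, 12, 16, 16, 12, 10],   # closer to focus
    2: [10, 10, 20, 20, 10, 10],   # in focus: maximal local contrast
}
best = autofocus(frames)
```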
4. Photoreceptor (Sensor)
As mentioned above, the lens gathers light into an image plane, but how is this image plane converted into image data a computer understands? That is the job of the photosensitive device. It is not a modern invention: the concept existed in Europe as early as the early 19th century. The Frenchman Niépce mixed bitumen with lavender oil, used a lead-tin alloy plate as the base, and photographed the scene outside an upstairs window of his house, a photo known as "Pigeon Nest"; the bitumen-and-lavender-oil mixture was a primitive photosensitive material. From then on, photosensitive technology entered a period of rapid development. In 1888 the American company Kodak produced a new photosensitive material, soft, rewindable film, a qualitative leap for photosensitive materials. Then, in 1969 at Bell Labs, the CCD digital image sensor was invented, pushing the field into the digital era. Subsequent innovation brought CMOS, which is well suited to large-scale mass production and pushed imaging systems a big step toward smaller size. As CMOS technology continued to develop, its clear advantages gradually displaced CCD, and it has become the mainstream photosensitive device in camera systems.
5. IR Filter
Because of the characteristics of photosensitive materials, a sensor also responds to light outside the visible range, for example part of the infrared spectrum. Since infrared light is invisible, it is usually of no practical use to us (not absolutely: some cases, such as night-vision cameras, deliberately collect infrared), and it can interfere with subsequent ISP processing. A filter is therefore commonly used to remove infrared light, avoiding interference and conditioning the incoming light. Such filters generally fall into two types: interference-type IR/AR-cut filters (a coating on a low-pass filter glass, working by destructive interference) and absorption-type glass (working by spectral absorption).
6. Flash

In some special scenarios, such as shooting in dark environments, there is simply not enough light to expose the sensor properly, so external fill light is needed as compensation. Hence the flash. On mobile phones, flashes come mainly in two kinds, xenon and LED; because LED flashes consume less power and are smaller, they are the mainstream choice for phones. In addition, many phones now adopt a dual-color flash: by adjusting the relative intensity of its two LEDs according to the scene, it approximates natural light more closely than a single flash, with more accurate color temperature and a better overall result.
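One simplified way to model how a dual-color flash mixes a warm and a cool LED toward a target color temperature is to blend in mired (micro reciprocal degrees) space. This is an illustrative assumption, not a vendor algorithm:

```python
def kelvin_to_mired(k: float) -> float:
    """Mired = 1,000,000 / colour temperature in kelvin."""
    return 1_000_000.0 / k

def mixed_cct(warm_k: float, cool_k: float, warm_weight: float) -> float:
    """Approximate CCT of the mix; warm_weight in [0, 1] is the warm LED's share."""
    mired = warm_weight * kelvin_to_mired(warm_k) \
        + (1.0 - warm_weight) * kelvin_to_mired(cool_k)
    return 1_000_000.0 / mired

# A 50/50 mix of a 3000 K warm LED and a 6000 K cool LED lands at ~4000 K
# (the mired average), not at the naive kelvin midpoint of 4500 K.
cct = mixed_cct(3000.0, 6000.0, 0.5)
```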
7. Image Processor (ISP)
Once the sensor completes photoelectric conversion, the data is handed to the image processor. The first thing the ISP does is remove dark-current noise. What is dark current? It comes from the sensor itself: on a CCD/CMOS, not all pixels are used for sensing light; some are deliberately shielded and collect only the signal produced when no light falls on them, i.e. the dark current. Subtracting that reference from the active pixels eliminates the noise contributed by dark current.
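The dark-current subtraction described above (often called black-level correction) can be sketched as follows; the values and the per-frame averaging are illustrative, since real ISPs typically work per channel with calibrated black levels:

```python
def black_level_correct(active_pixels, shielded_pixels):
    """Subtract the mean dark-current signal measured by optically shielded pixels."""
    black = sum(shielded_pixels) / len(shielded_pixels)
    # Clamp at zero: a pixel cannot report negative light.
    return [max(0, p - black) for p in active_pixels]

raw = [68, 72, 200, 64]    # active pixels, dark current included
dark = [64, 64, 64, 64]    # shielded pixels see only dark current
corrected = black_level_correct(raw, dark)
```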
Because of the lens's refractive properties, as the field angle increases, fewer oblique rays make it through the lens, so the center of the image ends up brighter than the edges. In an optical system this phenomenon is called vignetting. The difference makes the image look unnatural, so the next thing the ISP does is correct it, using lens shading correction. The principle: taking the uniformly bright central region of the image as the reference, measure how quickly brightness falls off at each point due to the attenuation, compute compensation factors for the R, G and B channels at each point, and correct the image with those factors.
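A toy version of lens shading correction: apply a gain that grows with distance from the image center to counter the radial fall-off. The quadratic gain model and the constant `k` are illustrative assumptions; real ISPs use calibrated per-channel gain grids:

```python
def shading_gain(x, y, cx, cy, k):
    """Gain = 1 + k * r^2, where r is the distance from the optical centre."""
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    return 1.0 + k * r2

def correct_shading(image, k=0.02):
    """Multiply each pixel by a radially increasing gain (single channel)."""
    h, w = len(image), len(image[0])
    cx, cy = (w - 1) / 2, (h - 1) / 2
    return [[image[y][x] * shading_gain(x, y, cx, cy, k) for x in range(w)]
            for y in range(h)]

# A frame that is darker toward the edges: the centre is left untouched
# while corners and edges are boosted.
vignetted = [[80, 100, 80],
             [100, 120, 100],
             [80, 100, 80]]
flat = correct_shading(vignetted)
```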
Subsequently, because the sensor samples light through red, green, and blue color filters, each pixel records only one color, and the data shows a mosaic-like arrangement; this mosaic data is the RAW format, the most original image data. The ISP therefore needs to demosaic it. The basic principle is to use an interpolation algorithm to estimate each pixel's missing color components from the nearby color samples, restoring the true color of every pixel and producing a full-color image.
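The interpolation idea can be shown with the simplest case: estimating the green value at a red photosite of an RGGB Bayer mosaic by averaging its four green neighbors (plain bilinear interpolation; production ISPs use edge-aware variants):

```python
def green_at(bayer, y, x):
    """Average the 4-neighbour green samples around a red/blue photosite."""
    neighbours = [bayer[y - 1][x], bayer[y + 1][x],
                  bayer[y][x - 1], bayer[y][x + 1]]
    return sum(neighbours) / len(neighbours)

# 3x3 crop of an RGGB mosaic centred on a red photosite: its up/down and
# left/right neighbours are green samples, the diagonals are blue.
bayer = [[12, 40, 12],
         [44, 90, 36],
         [12, 40, 12]]
green = green_at(bayer, 1, 1)   # interpolated green at the red site
```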
Every link in the photoelectric conversion chain introduces some deviation, and this deviation ultimately shows up as noise, so the ISP must apply noise reduction to remove this irrelevant information. Currently, nonlinear denoising algorithms are the mainstream choice, the bilateral filter being a typical example: when sampling, it considers not only how close pixels are in space but also how similar their values are, so it smooths flat regions while preserving the edge structure of the original image well.
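A self-contained 1-D bilateral filter illustrates the idea: the weight of each neighbor combines spatial closeness and value similarity, so a step edge survives while the flat parts are smoothed (2-D is the same formula over a window; parameter values here are arbitrary):

```python
import math

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    """Edge-preserving smoothing: weight = spatial Gaussian * range Gaussian."""
    out = []
    for i, centre in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) \
                * math.exp(-((signal[j] - centre) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A noisy step edge: values far across the edge get near-zero range weight,
# so the edge stays sharp while each side is smoothed.
noisy = [10, 12, 9, 11, 100, 102, 99, 101]
smooth = bilateral_1d(noisy)
```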
After denoising, the ISP needs to handle white balance. Because the color temperature of the illumination differs from scene to scene, the R, G and B components must be rescaled by an appropriate ratio so that white objects still appear white in the output. White balance can be set manually, by hand-adjusting the proportions of the three color channels, but more commonly automatic white balance is used: the ISP analyzes the current image, derives the proportional relationship between the color components, and adjusts the image accordingly.
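One classic automatic white balance heuristic, shown here as a sketch, is the gray-world algorithm: assume the scene averages to gray and scale each channel until the channel means match (green as reference). Production AWB is considerably more sophisticated:

```python
def gray_world_gains(r_mean, g_mean, b_mean):
    """Per-channel gains that equalise the channel means, green as reference."""
    return g_mean / r_mean, 1.0, g_mean / b_mean

# A frame with a warm cast: red reads high, blue reads low.
r_gain, g_gain, b_gain = gray_world_gains(r_mean=160.0, g_mean=120.0, b_mean=80.0)

# Applying the gains to the channel means brings them back into balance.
balanced = (160.0 * r_gain, 120.0 * g_gain, 80.0 * b_gain)
```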
After white balance, color errors still need to be corrected. These errors mainly come from color crosstalk between adjacent patches of the color filter array. Generally, during tuning, images captured by the camera module are compared against a standard reference image to derive a correction matrix; the ISP then applies this matrix to captured images, restoring the true colors of the shooting scene.
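The correction step itself is a 3x3 matrix multiply per pixel. The matrix below is a made-up example for illustration; real matrices come from calibration, and their rows typically sum to about 1 so that gray stays gray:

```python
def apply_ccm(rgb, ccm):
    """Multiply one RGB pixel by a 3x3 colour-correction matrix."""
    return tuple(sum(ccm[row][col] * rgb[col] for col in range(3))
                 for row in range(3))

# Illustrative matrix that counteracts crosstalk by boosting each channel
# and subtracting a little of the other two (rows sum to 1.0).
CCM = [
    [ 1.2, -0.1, -0.1],
    [-0.1,  1.2, -0.1],
    [-0.1, -0.1,  1.2],
]

grey = apply_ccm((100.0, 100.0, 100.0), CCM)   # neutral input is preserved
red = apply_ccm((200.0, 100.0, 100.0), CCM)    # a reddish pixel gets more saturated
```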
The above is a brief list of the image processor's basic functions. Although ISPs differ from manufacturer to manufacturer, they basically all include the steps above. As you can see, the image processor exists to improve the imaging quality of the entire camera system.