The fundamental difference between line scan cameras and area scan cameras is the geometry of the sensors they use: a line scan sensor is only one (or a few) pixel lines tall, while an area scan sensor is a full two-dimensional pixel array.
Comparison of the image obtained in a single exposure by an area scan camera and by a line scan camera.
In the picture above, each grid square represents one pixel.
Imaging principle
Note: each exposure obtains one line of image, meaning that one line of the line scan camera’s CMOS sensor is exposed. It does not mean that the final output line is produced by that single sensor line: the final output may be processed from images captured by 1, 2, 3, 4, 6, 8, or 16 lines of the CMOS sensor, which are ultimately combined into a single line of output.
How does a line scan camera capture an “area scan” image?
The image seen on screen:
Each image is stitched together from individual line images.
Each line image is captured by the line scan camera (which may have 1–256 sensor lines) and then produced after image processing.
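As an illustration of the stitching step, here is a minimal sketch in Python/NumPy, assuming each exposure arrives as a one-dimensional array of pixel values; the function name and the 2048-pixel width are purely illustrative.

```python
import numpy as np

def stitch_lines(line_images):
    """Stitch a sequence of 1-D line images (one per exposure) into a 2-D image."""
    return np.vstack([np.asarray(line) for line in line_images])

# Example: 1000 exposures of a 2048-pixel-wide line
lines = [np.random.randint(0, 256, 2048, dtype=np.uint8) for _ in range(1000)]
frame = stitch_lines(lines)
print(frame.shape)  # (1000, 2048): 1000 lines high, 2048 pixels wide
```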
Differences in the way of taking pictures between line scan cameras and area scan cameras
Area scan camera: capturing a wide format requires multiple cameras.
Line scan camera: a single camera can cover a wide format.
Main advantages of line scan cameras vs. area scan cameras
Faster: capable of taking tens of thousands to hundreds of thousands of pictures per second.
Longer: thousands of meters of cloth can be shown in a single image.
Wider: can capture a width of several meters with clear detail.
Clearer: very clear imaging of flat objects moving at a uniform speed.
Shooting direction of line scan cameras
The direction of movement is perpendicular to the long side of the line scan camera’s scanning window.
Line rate of line scan camera and frame rate of area scan camera
Frame rate is the camera’s shooting speed, usually defined as the number of images taken per second. Its unit is FPS, frames per second.
Line rate is really a special kind of frame rate: a CMOS sensor with only a few lines of pixels can shoot very quickly, reaching tens of thousands to hundreds of thousands of exposures per second. However, because each capture is essentially a single line and cannot be used directly as a picture, it is called the line rate rather than the frame rate to avoid confusion.
When frequency is mentioned, most people first think of waves and cycles.
If we imagine the blue line in the picture above (a single linear image taken by a line scan camera) as a wave crest, then the number of blue lines that appear in one second is the frequency of that wave. This is the origin of the term “line rate”.
Line height of line scan camera
Theoretically, a line scan camera can shoot endlessly, and through stitching we could obtain an infinitely long image.
But infinite length means infinite data; no computer could store or process such an image.
Therefore, in practical applications, the camera outputs an image after capturing X lines, with X chosen according to the application’s requirements.
This X is the line height: the number of pixels the output image is “high”.
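To make the numbers concrete, here is a small back-of-the-envelope calculation, assuming an 8-bit mono camera with a 2048-pixel line and a line height of 4096 (all values illustrative):

```python
# Illustrative frame-size calculation for a line scan camera (assumed values)
resolution = 2048       # pixels per line
line_height = 4096      # X: lines accumulated before one frame is output
bytes_per_pixel = 1     # 8-bit mono

frame_pixels = resolution * line_height
frame_bytes = frame_pixels * bytes_per_pixel
print(f"Frame size: {resolution} x {line_height} = {frame_pixels} pixels "
      f"({frame_bytes / 1024**2:.1f} MiB per frame)")
```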
Trigger Modes of Line Scan Cameras: Line Trigger and Frame Trigger
1)Line Trigger
The line scan camera captures one line of image after receiving the line trigger signal.
Line trigger is suitable for situations where the motion speed is not uniform, or where special imaging techniques such as long/short-exposure HDR or time-sharing strobing are required; these all rely on line triggering.
2)Frame Trigger
The line scan camera captures one frame of image after receiving the frame trigger signal. The height of this frame can be customized: after receiving the frame trigger signal, the camera keeps capturing lines at the set line rate until the specified line height is reached.
Frame trigger is more suitable for objects in uniform motion.
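The two trigger modes might be configured along the following lines. This is only a sketch: the `camera` object stands in for whatever SDK handle is actually used, and the feature names follow the GenICam SFNC convention (TriggerSelector, TriggerMode, TriggerSource, AcquisitionLineRate); the exact names and available values depend on the camera model.

```python
# Hedged sketch; `camera` is a hypothetical SDK handle, feature names are SFNC-style.
def configure_line_trigger(camera):
    # Each external pulse (e.g. one encoder tick per 0.1 mm of travel)
    # starts the exposure of exactly one line.
    camera.set("TriggerSelector", "LineStart")
    camera.set("TriggerMode", "On")
    camera.set("TriggerSource", "Line0")

def configure_frame_trigger(camera, line_height, line_rate_hz):
    # One pulse starts a whole frame; the camera then free-runs at the set
    # line rate until the configured height (number of lines) is reached.
    camera.set("TriggerSelector", "FrameStart")
    camera.set("TriggerMode", "On")
    camera.set("Height", line_height)
    camera.set("AcquisitionLineRate", line_rate_hz)
```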
Ideal line rate, motion speed and customer required accuracy
What is ideal line rate?
The capture speed must match the motion speed in order to obtain clear images. The ideal line rate is the line rate that matches the motion speed of the object being captured; each specific application has its own ideal line rate.
Customer required accuracy. When customers say the accuracy needs to reach 0.x mm, they mean that the captured image should be able to resolve a feature whose length or width is 0.x mm. To meet this requirement, it usually takes 2-5 pixels to represent that 0.x mm, so that an image algorithm (or the human eye) can easily find, recognize, and inspect the 0.x mm feature.
Pixel accuracy. As shown in the figure below, pixel accuracy is the physical length on the object that one pixel represents when the object is projected onto the sensor. The length covered by a single pixel determines the accuracy it can express: it can be 0.1 mm, several meters, or even several light-years (as with the Webb Space Telescope).
The pixels are all square, so it also means “X direction accuracy = Y direction accuracy”.
Ideal line rate = motion speed / pixel accuracy. For example, for an object moving at 2000 mm/s with a pixel accuracy of 0.1 mm, the ideal line rate is 2000 mm/s ÷ 0.1 mm = 20000 lines/s, i.e. 20 kHz. (Read another way, the formula counts how many 0.1 mm segments the object moves through in one second.)
Resolution. If the object is 1000 mm wide and the pixel accuracy is 0.1 mm, then allowing some margin on both sides, the resolution is 1050 / 0.1 = 10500. That is, at this accuracy, 10500 pixels per line are needed to image an object of this width clearly.
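The two formulas can be captured directly in code; the sketch below just plugs in the numbers used in the text (2000 mm/s motion, 0.1 mm pixel accuracy, 1050 mm field of view):

```python
def ideal_line_rate(speed_mm_s, pixel_accuracy_mm):
    """Ideal line rate (Hz) = motion speed / pixel accuracy."""
    return speed_mm_s / pixel_accuracy_mm

def required_resolution(width_mm, pixel_accuracy_mm):
    """Pixels per line needed to cover the width at the given pixel accuracy."""
    return width_mm / pixel_accuracy_mm

print(ideal_line_rate(2000, 0.1))      # 20000.0 Hz, i.e. 20 kHz
print(required_resolution(1050, 0.1))  # 10500.0 pixels per line
```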
Relationship between actual line rate, motion speed and ideal line rate
For a given pixel accuracy, the faster the motion speed, the higher the ideal line rate.
Actual line rate = ideal line rate: normal image (Picture 1).
Actual line rate < ideal line rate: the image is compressed (Picture 2). Reason: a diameter of 10 cm corresponds to 1000 segments of 0.1 mm. Because shooting was too slow, only 800 lines were captured, and when those 800 lines are stitched together they span only 8 cm, so the image becomes “shorter”.
Actual line rate > ideal line rate: the image is elongated (Picture 3). Reason: a diameter of 10 cm corresponds to 1000 segments of 0.1 mm. Because shooting was too fast, 1200 lines were captured, and when those 1200 lines are stitched together they span 12 cm, so the image becomes “longer”.
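The compression and elongation can be computed directly. The sketch below reproduces the 10 cm example, assuming 0.1 mm pixel accuracy and a 20 kHz ideal line rate, so that 800 and 1200 captured lines correspond to actual line rates of 16 kHz and 24 kHz:

```python
def imaged_length(true_length_mm, actual_rate_hz, ideal_rate_hz, pixel_accuracy_mm):
    """Length the object appears to have in the stitched image (in mm)."""
    lines_captured = (true_length_mm / pixel_accuracy_mm) * (actual_rate_hz / ideal_rate_hz)
    return lines_captured * pixel_accuracy_mm

print(imaged_length(100, 20000, 20000, 0.1))  # 100.0 mm -> normal
print(imaged_length(100, 16000, 20000, 0.1))  # 80.0 mm  -> compressed ("shorter")
print(imaged_length(100, 24000, 20000, 0.1))  # 120.0 mm -> elongated ("longer")
```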
How does a line scan camera capture color images?
Color images consist of three components: R, G, and B. Therefore, to capture color images, a line scan camera also needs to collect data for the three components.
The figure below shows the common ways in which a line scan camera acquires the R, G, and B components:
Primary colors: red, green, blue. When they are sampled at a 1:1:1 ratio, the most faithful colors are obtained.
Line scan cameras with three lines, one line per color, produce better-quality color images.
Line scan cameras with two lines can also capture color, but their color reproduction is not as good as that of a three-line camera.
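As a rough sketch of how the three color lines might be recombined, the NumPy snippet below assumes a fixed spacing (in scan lines) between the R, G, and B sensor lines; which channel leads depends on the sensor layout and transport direction, so the offsets are illustrative only.

```python
import numpy as np

def combine_trilinear(r_lines, g_lines, b_lines, line_spacing=1):
    """Combine per-channel line images from a three-line color sensor into one RGB image.

    Each input is a 2-D array (lines x width). Assume the R line sees a stripe of
    the object first, G sees it `line_spacing` lines later and B another
    `line_spacing` lines after that, so the channels must be shifted to align.
    """
    d = line_spacing
    n = min(len(r_lines), len(g_lines), len(b_lines)) - 2 * d
    r = r_lines[:n]
    g = g_lines[d:d + n]
    b = b_lines[2 * d:2 * d + n]
    return np.dstack([r, g, b])  # shape: (n, width, 3)
```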
How many lines does a mono line scan camera have?
A mono image has only one variable per pixel, its gray value. Therefore, with sufficient lighting, a line scan camera with a single line can image the target clearly.
So why are there still mono line scan cameras with 2 lines, 4 lines, 8 lines, and even 256 lines?
That is because:
Some applications make it difficult to install lighting, so the condition of sufficient illumination is hard to meet; TDI technology is then used to achieve clear imaging in low light (TDI is discussed later in this article).
Time-sharing strobe technology (described later) requires multi-line line scan cameras.
Long/short-exposure HDR (described later) also requires multi-line line scan cameras.
The multi-line technology of mono line scan cameras: TDI
In a relatively dark environment, the condition for clear imaging is that the CMOS sensor must collect enough light. The ways to collect sufficient light include:
Plan 1: Increase the light intensity per unit time (a stronger light source, a larger-aperture lens to gather more light)
Plan 2: Increase the CMOS exposure duration (extend the exposure time)
Plan 3: Increase the photosensitive area of the CMOS (larger optical format, larger pixel size)
Short story: on July 12, 2022, the first starry-sky photo taken by the Webb telescope, with an exposure time of 12.5 hours, was clearer than a Hubble image that required about 10 days of exposure. The main reason is that the Webb telescope gathers light far more effectively: its light-collecting optics are much larger.
The following limitations make TDI technology useful:
Due to the need for the line scan camera’s shooting frequency (line rate) to match the motion speed, the exposure time cannot be arbitrarily increased, so Plan 2 cannot be used.
In some situations, it is impossible to enhance lighting (in narrow spaces), so Plan 1 cannot be used.
A large-format CMOS sensor is too expensive, or too large to be installed on site, so Plan 3 cannot be used.
The essence of TDI: different lines of the CMOS sensor photograph the same area of the object, and the electrons excited by photons in each line are transferred and added to the next line, so the next line indirectly receives the “light” collected by the previous line. After multiple accumulations, the image read out from the last line is sufficiently “bright” and clear.
Unit: electrons. The electrons from the previous line are added onto the next line.
TDI imaging effect: From the 1st line to the 10th line, the image gradually becomes brighter and clearer.
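A toy NumPy simulation of the TDI idea follows: the same object line is exposed by several stages, and the charge collected by each stage is added to the next, so the final readout is roughly N times brighter (the signal level and the 10-stage count are illustrative).

```python
import numpy as np

def tdi_accumulate(stage_lines):
    """Simulate TDI: sum the exposures of the same object line from all N stages."""
    return np.asarray(stage_lines, dtype=np.float64).sum(axis=0)

rng = np.random.default_rng(0)
stages = rng.poisson(10, size=(10, 2048))    # 10 stages, each a weak (noisy) exposure
accumulated = tdi_accumulate(stages)
print(stages[0].mean(), accumulated.mean())  # ~10 vs ~100: brighter, lower relative noise
```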
TDI imaging effect: photovoltaic industry case
Time-sharing strobe
Time-sharing strobe is a special scanning method for line scan cameras. Unlike the constant illumination used with a traditional line scan camera, a time-sharing strobe controller switches the type or brightness of the light source for each line the camera captures, so that lines lit by different light sources are interleaved in order within the image.
After acquisition, the raw image is split and reassembled, yielding images under multiple lighting conditions from a single scan. This reduces cost, improves compatibility, and gives the best imaging result.
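The split-and-reassemble step can be as simple as de-interleaving the rows. A minimal sketch, assuming three light sources cycled line by line; the light-source names are just illustrative labels.

```python
import numpy as np

def split_strobed_image(raw, num_sources):
    """Split an interleaved time-sharing-strobe image into per-light-source images.

    raw: 2-D array whose rows were captured under light sources cycling
    0, 1, ..., num_sources-1, 0, 1, ... (one source per line).
    """
    return [raw[i::num_sources] for i in range(num_sources)]

raw = np.random.randint(0, 256, (3000, 2048), dtype=np.uint8)
bright_field, dark_field, backlight = split_strobed_image(raw, 3)
print(bright_field.shape)  # (1000, 2048)
```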
Application in Photovoltaic Industry
HDR
HDR:High Dynamic Range
The main function of HDR is to brighten the dark (underexposed) areas of a picture and darken the bright (overexposed) areas.
HDR is more adaptable to the light and darkness of different areas of the subject, allowing all areas to be clearly imaged.
HDR implementation for line scan cameras: long and short exposure. Odd lines: short exposure. Even lines: long exposure. This yields two images, which an algorithm merges into a single HDR image. Note: it does not matter whether the short exposure falls on the odd or the even lines; the point is that long and short exposures alternate from line to line.
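A greatly simplified sketch of the merge step, assuming the short and long exposures alternate row by row in the raw image and the two exposure times are known; real cameras and SDKs use more careful weighting, so this is illustrative only.

```python
import numpy as np

def merge_line_hdr(raw, short_exp, long_exp, saturation=250):
    """Merge alternating short/long exposure lines into one HDR image."""
    short = raw[0::2].astype(np.float64) * (long_exp / short_exp)  # scale shorts up
    long = raw[1::2].astype(np.float64)
    n = min(len(short), len(long))
    short, long = short[:n], long[:n]
    # Where the long exposure is saturated, trust the (rescaled) short exposure.
    return np.where(long >= saturation, short, (short + long) / 2.0)
```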
Application in Photovoltaic Industry
How to select a line scan camera
The basic steps are as follows:
Calculate the resolution: width / minimum detection accuracy = pixels required per line (Figure A: width >> object; Figure B: width > object; Figure C: width ≈ object)
Determine pixel accuracy: width/number of pixels = pixel accuracy
Determine the line rate: Movement speed per second/pixel accuracy = ideal line rate
Select the camera based on resolution and line rate
Calculation example:
For example, suppose the width is 1800 mm, the customer requires an accuracy of 1 mm, and the movement speed is 25000 mm/s.
Camera: 1800 / 1 = 1800 pixels; rounding up to the next standard resolution gives at least 2000 pixels, so select a 2k line scan camera. To improve clarity, use 2-5 pixels to represent 1 mm, i.e. multiply the required pixel count by 2-5.
Pixel accuracy: 1800 / 2048 ≈ 0.88 mm
Ideal line rate: 25000 mm/s ÷ 0.88 mm ≈ 28.4 kHz
Select a 2K line scan camera with an actual line rate greater than about 28.4 kHz.
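The whole selection calculation can be bundled into a few lines; the sketch below simply reproduces the worked example (1800 mm width, 1 mm accuracy, 25000 mm/s motion, a 2048-pixel camera):

```python
def select_line_scan_camera(width_mm, accuracy_mm, speed_mm_s, sensor_pixels):
    """Reproduce the selection steps: required pixels, pixel accuracy, ideal line rate."""
    min_pixels = width_mm / accuracy_mm            # pixels needed per line
    pixel_accuracy = width_mm / sensor_pixels      # mm represented by one pixel
    ideal_rate_hz = speed_mm_s / pixel_accuracy    # required line rate
    return min_pixels, pixel_accuracy, ideal_rate_hz

min_px, acc, rate = select_line_scan_camera(1800, 1, 25000, 2048)
print(min_px)                  # 1800.0 pixels needed, so a 2k (2048-pixel) camera fits
print(round(acc, 2))           # ~0.88 mm per pixel
print(round(rate / 1000, 1))   # ~28.4 kHz required line rate
```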