Image acquisition#

The principal phenomenon at the origin of image acquisition is the electromagnetic spectrum. Images based on radiation from the electromagnetic spectrum are the most familiar, especially those formed from visible light, as in photography. Other images based on the electromagnetic spectrum include radio frequencies (radio astronomy, MRI), microwaves (radar imaging), infrared wavelengths (thermography), X-rays (medical, astronomical or industrial imaging) and even gamma rays (nuclear medicine, astronomical observations).

In addition to electromagnetic imaging, various other modalities are also employed. These modalities include acoustic imaging (using infrasound in geological exploration or ultrasound for echography), electron microscopy, and synthetic (computer-generated) imaging.

Examples of image modalities
../_images/interferometry1.jpg

Fig. 4 Interferometry.#

../_images/x-ray1.jpg

Fig. 5 Radiograph of the right knee.#

../_images/thermography1.jpg

Fig. 6 Thermogram of a passive building, with traditional building in the background.#

../_images/MRI1.jpg

Fig. 7 MRI of the brain (axial section) showing white matter and grey matter folds.#

../_images/seismic1.jpg

Fig. 8 Seismic reflection image.#

../_images/electron-microscopy1.jpg

Fig. 9 Image of pollen grains taken with an electron microscope.#

In what follows, we focus on electromagnetic imaging.

Sensor#

A photodiode is the most common and basic sensor for image acquisition. It is constructed from silicon so that its output voltage is proportional to the incoming light.

../_images/photodiode1.svg

Fig. 10 Electronic symbol of a photodiode.#

To acquire a 2D digital image, the typical system uses a matrix of such single sensors. Two technologies coexist.

  • The prevailing technology for reading the output voltage of each single sensor is CMOS (complementary metal oxide semiconductor): each sensor is coupled with its own analog-to-digital conversion circuit. This parallel readout makes CMOS technology cheap and energy-efficient. However, it may be less sensitive and can produce distortions when objects move rapidly in the field of view.

  • On the other hand, the use of CCD (charge coupled device) technology has declined since the 2010s. The fundamental idea of the CCD is a unique conversion circuit: the charge of each sensor is moved pixel by pixel. The charges are shifted by one column, then those in the last column are read by the unique circuit. CCD is progressively disappearing for several reasons: moving the charges is not instantaneous and consumes energy, and it sometimes creates undesirable side effects, such as blooming.
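The two readout schemes above can be contrasted in a toy simulation. This is a minimal sketch, not a physical model: the sensor size, charge values, and the use of `np.roll` to mimic the column shift are illustrative assumptions.

```python
import numpy as np

# Hypothetical 4x4 sensor with accumulated charges (arbitrary units).
charges = np.arange(16, dtype=float).reshape(4, 4)

# CMOS-style readout: every photosite has its own conversion circuit,
# so the whole matrix is digitized in parallel.
cmos_image = charges.copy()

# CCD-style readout: charges are shifted column by column towards a
# unique conversion circuit that reads the last column at each step.
ccd_image = np.zeros_like(charges)
shifted = charges.copy()
for step in range(charges.shape[1]):
    ccd_image[:, -1 - step] = shifted[:, -1]   # read the last column
    shifted = np.roll(shifted, 1, axis=1)      # shift all columns right

# Both schemes recover the same image; the CCD just needs several
# shift-and-read cycles, which take time and energy.
assert np.array_equal(cmos_image, ccd_image)
```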

A color or multispectral image (\(B>1\)) is made of as many grayscale images as bands. Therefore, to acquire a traditional RGB image, three grayscale images are acquired, each at a different wavelength. The Bayer filter (Fig. 11) is the most widely used technique for generating color images. It is a mosaic of red, green and blue filters on a square grid of photosensors. Note that the filter pattern is half green, one quarter red and one quarter blue. The reason for having twice as many green filters as red or blue ones is to mimic human vision, which is naturally more sensitive to green light.

../_images/bayer-filter1.svg

Fig. 11 The Bayer filter on an image sensor.#
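The mosaicking performed by the Bayer filter can be simulated in a few lines of numpy. This is a sketch under the assumption of an RGGB pattern (red at the top-left photosite); real sensors may use other arrangements, and the demosaicing step that reconstructs full-colour pixels is omitted here.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate a Bayer filter: each photosite keeps a single colour
    channel, following an RGGB pattern (an assumed convention):
    R at (0,0), G at (0,1) and (1,0), B at (1,1)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red sites (one quarter)
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green sites (half in total)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue sites (one quarter)
    return mosaic

rgb = np.zeros((4, 4, 3))
rgb[..., 1] = 1.0                  # a purely green scene
mosaic = bayer_mosaic(rgb)         # only the green sites respond
```

On the purely green scene, exactly half of the 16 photosites record a non-zero value, reflecting the half-green layout of the filter.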

Note

Several websites provide a tool for generating a colour from given proportions of red, green and blue, or, conversely, for giving the proportions of red, green and blue of a specific color. For example, look and play with this one!

Sampling and quantization#

The final step of digital image formation is digitization, which consists of both the sampling and the quantization of the observed scene.

Sampling#

Sampling corresponds to mapping a continuous scene onto a discrete grid. This is naturally done by the matrix of pixels. Sampling a continuous image leads to a loss of information. Intuitively, it is clear that sampling reduces resolution: structures of about the scale of the sampling distance and finer will be lost. Thus the number of pixels on the sensor, directly related to the sampling, is crucial. Besides, significant distortions occur when sampling an image with fine structures, as seen in Fig. 12.

../_images/moire.jpg

Fig. 12 Me with my favourite moiré shirt. Left: image of size 1000×1000, right: image of size 595×595.#

This kind of distortion is called the moiré effect. The same phenomenon exists in signal processing and is called aliasing. To avoid the moiré effect, one has to satisfy the sampling theorem, which imposes a maximal sampling step as a function of the frequencies present in the image. In practice, one prefers to place an optical low-pass filter in front of the sensor to remove the high frequencies responsible for the moiré effect.
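The benefit of low-pass filtering before sampling can be illustrated with a crude sketch: a 2×2 box average plays the role of the optical low-pass filter (a simplifying assumption; real optical filters are not box filters), and the test pattern is a stripe image at the finest possible scale.

```python
import numpy as np

def downsample(image, naive=False):
    """Downsample a grayscale image by a factor of 2.
    If naive, keep every second pixel (prone to aliasing/moiré);
    otherwise average 2x2 blocks first, a crude low-pass filter
    playing the role of the optical anti-aliasing filter."""
    if naive:
        return image[::2, ::2]
    h, w = image.shape
    return image[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

# Vertical stripes with a one-pixel period: the finest structure possible.
stripes = np.tile([0.0, 1.0], (8, 4))        # 8x8, columns 0,1,0,1,...

naive = downsample(stripes, naive=True)      # all zeros: the stripes vanish
filtered = downsample(stripes)               # constant 0.5: mean preserved
```

Naive subsampling hits only the dark columns and the pattern disappears entirely, a severe alias; filtering first loses the fine stripes too (they are beyond the new sampling limit) but at least preserves the average intensity instead of producing an arbitrary artifact.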

Quantization#

Quantization corresponds to mapping the continuous light intensities to a finite set of numbers. Typically, images are quantized into 256 gray values; each pixel then occupies one byte (8 bits). The reason for assigning 256 gray values to each pixel is not only that it is well adapted to the architecture of computers, but also that it is sufficient to give humans the illusion of a continuous change in gray values.

../_images/quantization.png

Fig. 13 Quantization of the same image with (from left to right) 256, 16, 4, and 2 gray levels.#
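Uniform quantization as illustrated in Fig. 13 amounts to rounding each intensity to the nearest of \(N\) equally spaced levels. A minimal sketch, assuming intensities normalized to \([0, 1]\):

```python
import numpy as np

def quantize(image, levels):
    """Uniformly quantize intensities in [0, 1] to `levels` gray values."""
    return np.round(image * (levels - 1)) / (levels - 1)

x = np.linspace(0.0, 1.0, 101)    # a continuous-looking gray ramp
x4 = quantize(x, 4)               # only 4 distinct values remain
x256 = quantize(x, 256)           # visually indistinguishable from x
```

With 4 levels the ramp collapses to visible steps (the "posterized" look of Fig. 13), while with 256 levels the maximal error per pixel is half the level spacing, far below what the eye can perceive.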

However, quantization naturally introduces errors on the intensities. If the quantization levels are equally spaced with a distance \(d\) and all gray values are equally probable, the standard deviation of the quantization error is lower than \(0.3d\) [Jähne 2005, p. 253]. For most common applications, the error is sufficiently low to be acceptable. But some applications, such as medical imaging or astronomy, require a finer resolution and, in consequence, use more than 256 gray levels.

Distortions#

In addition to the moiré effect and quantization noise, other distortions can affect the image acquisition. The two main distortions are noise and blurring.

Noise#

Noise introduces erroneous intensities in the digital image. Sources of noise are multiple, ranging from electronic noise in the imaging system itself to the acquisition conditions (low light levels, for example). The main noise models are described in [denoising:noise-sources]: Gaussian noise, Poisson noise and salt-and-pepper noise. In specific imaging systems, other noise types can be encountered. For example, in radar imaging systems the noise is considered to be multiplicative and is called speckle noise.

../_images/dark-current1.jpg

Fig. 14 Noise on a photograph taken in a dark room (this noise is called "dark current").#
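The three main noise models can be sketched with numpy on a constant gray image. The noise amplitudes (`sigma`, `peak`, `p`) below are arbitrary illustrative values, not parameters prescribed by the models themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.full((64, 64), 0.5)            # a constant gray scene in [0, 1]

# Additive Gaussian noise (a common model for electronic noise).
sigma = 0.05
gaussian = image + rng.normal(0, sigma, image.shape)

# Poisson noise (photon counting); `peak` sets the light level,
# so low-light images are noisier relative to the signal.
peak = 100
poisson = rng.poisson(image * peak) / peak

# Salt-and-pepper noise: a fraction p of pixels forced to 0 or 1.
p = 0.05
salt_pepper = image.copy()
mask = rng.random(image.shape) < p
salt_pepper[mask] = rng.choice([0.0, 1.0], size=mask.sum())
```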

Point spread function#

Despite the high quality of an imaging system, a point in the observed scene is not imaged onto a point in the image space, but onto a more or less extended area with varying intensities. The function that describes the imaging of a point is an essential feature of the imaging system and is called the point spread function or PSF (in French: fonction d’étalement du point). Generally, the PSF is assumed to be independent of the position. The imaging system can then be treated as a linear shift-invariant system, which is mathematically described by a convolution (see the dedicated chapter). Sometimes, an imaging system is characterized not by its PSF but by its Fourier transform, called the optical transfer function, or OTF (in French: fonction de transfert optique).

../_images/budapest1.jpg

Fig. 15 Example of blur (parliament of Budapest, shot with a camera).#
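The "point spread into an extended spot" behaviour can be demonstrated by convolving a point source with a PSF. A minimal numpy sketch, assuming a Gaussian-shaped PSF (a common but by no means universal model) and edge padding at the borders:

```python
import numpy as np

def gaussian_psf(size=7, sigma=1.5):
    """A hypothetical Gaussian point spread function, normalized to sum 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def blur(image, psf):
    """Model the imaging system as the convolution of the scene with its
    PSF (valid under the shift-invariance assumption)."""
    h, w = psf.shape
    padded = np.pad(image, ((h // 2,), (w // 2,)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(h):           # accumulate the PSF-weighted shifted copies
        for j in range(w):
            out += psf[i, j] * padded[i:i + image.shape[0], j:j + image.shape[1]]
    return out

# A point source in the scene is spread into an extended spot in the image.
scene = np.zeros((15, 15))
scene[7, 7] = 1.0
spot = blur(scene, gaussian_psf())
```

The spot keeps the total intensity of the point (the PSF sums to 1) but spreads it over a neighbourhood, which is exactly the blur visible in Fig. 15.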