
Introduction: An Imaging Colorimeter is Not a “Camera that Takes Photos”#
An imaging colorimeter and an industrial camera may look similar—both have lenses, sensors, and data interfaces. However, the essential difference is that a camera outputs an image, while a colorimeter outputs physical quantities.
The Raw Data recorded by a camera sensor is a Digital Number (DN) for each pixel, representing a digitized characterization of the number of photons received by that pixel after photoelectric and analog-to-digital conversion. This gray value itself has no physical units—DN = 2048 represents neither a specific luminance value nor a specific color.
Transforming these dimensionless gray values into physically meaningful luminance values (cd/m²) and CIE color coordinates (x, y) requires a rigorous Calibration Chain. Each link in this calibration chain eliminates a specific type of systematic error, ultimately giving the sensor's raw output genuine metrological validity.
This article will fully elaborate on the data conversion process from RAW gray values to cd/m² and CIE xy color coordinates, covering four core steps: dark current elimination, flat-field correction, linearity correction, and absolute calibration.
Step 1: Dark Current Noise Elimination—Establishing the Zero Benchmark#

Physical Causes of Dark Current#
Even under completely dark conditions, the output of an image sensor (CCD or CMOS) is not zero. This is due to the thermal excitation effect in the semiconductor material at room temperature—the thermal motion of silicon atoms randomly releases electrons, which are captured by the pixel’s potential well and read out along with photo-generated electrons, forming a false signal. In addition, electronic noise from the readout circuit and the inherent bias voltage of the analog-to-digital converter contribute additional non-zero bases.
These non-photo-generated signals are collectively called Dark Current Noise, consisting of the following components:
Fixed Bias/Offset. Inherent bias from the readout circuit and ADC, independent of exposure time. It exists in every readout and has a basically constant amplitude.
Dark Current. Generated by thermally excited electrons, proportional to exposure time and exponentially related to sensor temperature—dark current approximately doubles for every 6-8°C increase in temperature.
Random Read Noise. Random noise introduced by the readout circuit during each readout process, following an approximately Gaussian distribution. It cannot be removed by a single-frame dark field subtraction but can be reduced through multi-frame averaging.
Standard Operation for Dark Field Correction#
Dark Field Correction is the first step in the calibration chain, aiming to establish the sensor’s “true zero.”
Acquiring Dark Frames: Under completely light-shielded conditions (lens cap on or shutter closed), with the same exposure time, gain, and sensor temperature as during actual measurement, capture N images (N is recommended to be at least 16).
Generating a Master Dark Frame: Take the average value of the N dark images pixel by pixel. The purpose of averaging is to suppress random read noise (whose standard deviation scales by $1/\sqrt{N}$), while preserving the fixed spatial pattern of dark current—different pixels may have different dark current rates due to manufacturing process variations, forming so-called Fixed Pattern Noise (FPN).
Subtracting the Dark Frame: For each actual measurement image, subtract the master dark frame pixel by pixel:
$$I_{dark\_corrected}(x, y) = I_{raw}(x, y) - I_{master\_dark}(x, y)$$

Note: Dark frames are strongly correlated with exposure time and temperature. If different exposure times are used during measurement, dark frames must be acquired for each. High-end systems typically establish a dark current model by acquiring dark frames at multiple exposure time points and fitting the dark current rate $k(x,y)$ and bias $b(x,y)$ for each pixel:
$$I_{dark}(x, y, t) = k(x, y) \cdot t + b(x, y)$$

With this model, dark frames can be generated by calculation for any exposure time without actual acquisition—this significantly reduces calibration maintenance workload on production lines.
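The master-dark averaging and the per-pixel dark model described above can be sketched as follows. This is a minimal illustration assuming numpy; the function names (`master_dark`, `fit_dark_model`, `synthesize_dark`) are hypothetical, not from any specific instrument SDK.

```python
import numpy as np

def master_dark(dark_frames):
    """Average N dark frames pixel by pixel to suppress random read noise
    (its standard deviation shrinks by 1/sqrt(N)) while preserving FPN."""
    return np.mean(np.stack(dark_frames), axis=0)

def fit_dark_model(exposure_times, master_darks):
    """Fit the per-pixel model I_dark(x, y, t) = k(x, y) * t + b(x, y)
    from master dark frames acquired at several exposure times."""
    t = np.asarray(exposure_times, dtype=float)          # shape (M,)
    stack = np.stack(master_darks).reshape(len(t), -1)   # shape (M, H*W)
    A = np.column_stack([t, np.ones_like(t)])            # design matrix [t, 1]
    coeffs, *_ = np.linalg.lstsq(A, stack, rcond=None)   # shape (2, H*W)
    h, w = master_darks[0].shape
    return coeffs[0].reshape(h, w), coeffs[1].reshape(h, w)  # k, b

def synthesize_dark(k, b, t_exp):
    """Generate a dark frame for an arbitrary exposure time from the model."""
    return k * t_exp + b
```

Because the fit is linear per pixel, a single `lstsq` call over the flattened pixel axis handles the whole sensor at once.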
Step 2: Flat-Field Correction—Eliminating Spatial Non-Uniformity#

Sources of Non-Uniformity#
After dark current elimination, the sensor’s zero point is established. The next question is: if the sensor is illuminated by a perfectly uniform light field, will the outputs of all pixels be consistent?
The answer is no. Images typically show a “bright center, dark edges” distribution, which mainly stems from three factors:
Lens Vignetting. Optical lenses transmit less light through their edges than through their center, causing image edge illuminance to fall off approximately according to the $\cos^4\theta$ relationship ($\theta$ is the field of view angle). For wide-angle lenses, edge illuminance can drop below 50% of the center value.
Photo Response Non-Uniformity (PRNU). Due to microscopic differences in semiconductor manufacturing processes, the quantum efficiency (the efficiency of converting photons to electrons) of different pixels varies slightly. PRNU is an inherent characteristic of the sensor and is approximately proportional to light intensity.
Stray Factors in the Optical Path. Dust on the lens surface or particles on the sensor protection glass can form local shadows in the image.
Implementation of Flat-Field Correction#
Flat-Field Correction eliminates the spatial non-uniformity mentioned above through these steps:
Uniform Light Source Preparation. Aim the imaging colorimeter at the opening of a large-aperture integrating sphere with spatial uniformity better than 98%. Multiple diffuse reflections inside the integrating sphere form a highly uniform Lambertian radiant surface at the spherical opening.
Per-Channel Flat-Field Acquisition. For tristimulus filter-based imaging colorimeters, flat-field images need to be acquired separately for the X, Y, and Z filter channels. Exposure time should be controlled within 50%-70% of the sensor’s full range to ensure a good signal-to-noise ratio while avoiding saturation. Multiple frames are acquired for each channel and averaged to suppress random noise.
Calculating the Gain Correction Matrix. Calculate the normalized gain coefficient for each pixel of the flat-field image (after dark frame subtraction):
$$G(x, y) = \frac{\overline{DN}_{center}}{DN_{flat}(x, y)}$$

where $\overline{DN}_{center}$ is the average gray value of the center region of the image. For pixels in vignetted areas, $DN_{flat}(x,y)$ is smaller than the center value, so $G(x,y) > 1$—meaning that in subsequent measurements, the readings of edge pixels will be amplified to compensate for vignetting attenuation.
Applying Correction. For each actual measurement image, multiply by the gain coefficient pixel by pixel:
$$I_{flat\_corrected}(x, y) = I_{dark\_corrected}(x, y) \times G(x, y)$$

Key Constraint: Flat-field correction data is strongly correlated with the aperture value (F-stop) and focus distance. Changing the aperture alters the lens's vignetting characteristics and diffraction performance; changing the focus distance alters the internal spacing of the lens elements, thereby changing the optical path characteristics. Therefore, each combination of aperture and focus distance requires independent flat-field calibration data. High-end imaging colorimeters acquire and store flat-field data for each common optical configuration at the factory and automatically invoke the corresponding correction matrix based on lens parameters during use.
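A minimal sketch of the gain-matrix computation and its application, assuming numpy; the `center_size` parameter and function names are illustrative choices, not a standard API.

```python
import numpy as np

def flat_field_gain(flat_dark_corrected, center_size=16):
    """Per-pixel gain G(x, y) = mean(center region) / flat(x, y),
    computed from a dark-subtracted flat-field image."""
    h, w = flat_dark_corrected.shape
    cy, cx = h // 2, w // 2
    half = center_size // 2
    center = flat_dark_corrected[cy - half:cy + half, cx - half:cx + half]
    return center.mean() / flat_dark_corrected

def apply_flat_field(image_dark_corrected, gain):
    """Multiply a dark-corrected measurement image by the gain matrix."""
    return image_dark_corrected * gain
```

Applying the gain matrix to the flat-field image itself should yield a uniform frame, which is a convenient self-check after calibration.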
Step 3: Linearity Correction—Ensuring Proportionality between Gray Values and Light Intensity#

Manifestations and Impact of Non-Linearity#
Ideal photoelectric conversion should satisfy a strict linear relationship: if the incident light intensity doubles, the gray value output by the sensor should also double. However, actual sensors deviate from linearity in the following regions:
Near-Saturation Region. When the accumulated charge in a pixel’s potential well approaches its Full Well Capacity, further increases in light intensity can no longer increase the signal output proportionally, appearing as “compression” of the response curve.
Extremely Low Signal Region. At extremely low light intensities near the dark current level, the signal-to-noise ratio becomes too low, and the effective linear photoelectric relationship is buried in noise.
ADC Non-Linearity. Quantization errors of the analog-to-digital converter may be systematically higher or lower near certain DN values.
Non-linearity directly affects the accuracy of luminance measurement. For example, if the sensor has 3% compression non-linearity in the high DN region, luminance measurements of high-brightness areas will be systematically 3% low. More importantly, non-linearity destroys the consistency of data acquired at different exposure times—in High Dynamic Range (HDR) synthesis applications, this leads to luminance jumps at the stitching points.
Linearity Correction Method#
Opto-Electronic Conversion Function (OECF) Measurement: Using a highly stable integrating sphere light source, keep the light source luminance constant and capture a series of images with systematically varying exposure times. Plot the relationship curve between average gray value and exposure time (DN vs. Exposure Time). Ideally, this curve should be a straight line passing through the origin.
Building a Linearization Lookup Table (LUT): Based on the OECF curve, establish a lookup table that maps actual non-linear DN values to ideal linear DN values. For a 12-bit sensor (4096 levels), this LUT contains 4096 entries, each storing the output value after linearization correction.
Applying Correction:
$$I_{linear}(x, y) = LUT[I_{flat\_corrected}(x, y)]$$

The advantage of the lookup table method is its extremely fast computation speed (requiring only one memory lookup) and its ability to accurately compensate for non-linear features of any shape.
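Building the LUT from an OECF measurement can be sketched as below, assuming numpy. The idea is to interpolate the ideal linear response at every possible DN code; the function names are hypothetical.

```python
import numpy as np

def build_linearization_lut(dn_measured, dn_ideal, bit_depth=12):
    """Build a LUT mapping measured (non-linear) DN codes to ideal linear DN.

    dn_measured: monotonic average DN values observed at each OECF exposure step
    dn_ideal:    DN values a perfectly linear sensor would have produced
    """
    codes = np.arange(2 ** bit_depth)
    # Interpolate the ideal response at every possible DN code
    return np.interp(codes, dn_measured, dn_ideal)

def linearize(image, lut):
    """Apply the LUT: one table lookup per pixel."""
    return lut[image.astype(np.int64)]
```

For a 12-bit sensor the table holds 4096 entries, matching the description above; values beyond the last calibration point are clamped by the interpolation.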
Step 4: Absolute Calibration—From Relative Gray Values to Physical Units#

Why Absolute Calibration is Needed#
After dark current elimination, flat-field correction, and linearity correction, the data output by the sensor is already “clean”—eliminating the noise base, spatial non-uniformity, and non-linearity. However, these data are still dimensionless gray values and lack physical metrological meaning.
The task of Absolute Calibration is to establish a quantitative proportional relationship between gray values and physical luminance units (cd/m²).
Calibration Process#
Preparing a Standard Luminance Source. Use a standard luminance source with traceability certification from a national metrology institute—usually an integrating sphere lamp with precise current control. Its output luminance $L_{std}$ (in cd/m²) is given by the traceability certificate, with an uncertainty typically within 1%-2%.
Acquiring Standard Source Images. The imaging colorimeter captures the standard luminance source under the same optical configuration as for actual measurement. After the aforementioned three correction steps, the average gray value $DN_{Y,std}$ of the Y channel is obtained.
Calculating the Calibration Coefficient. The absolute calibration coefficient $K$ is defined as:
$$K = \frac{L_{std}}{DN_{Y,std} / t_{exp}}$$

where $t_{exp}$ is the exposure time. Including exposure time in the normalization makes the calibration coefficient $K$ independent of exposure settings—when measuring at different exposure times, simply divide the corrected gray value by the current exposure time and multiply by $K$ to get the absolute luminance value.
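The coefficient computation and the exposure-normalized luminance conversion just described can be sketched as two small helpers (hypothetical names, plain Python):

```python
def calibration_coefficient(L_std, dn_y_std, t_exp):
    """K = L_std / (DN_Y,std / t_exp): cd/m^2 per unit of DN-per-second.

    L_std comes from the traceability certificate of the standard source."""
    return L_std / (dn_y_std / t_exp)

def luminance(dn_linear, K, t_exp):
    """Absolute luminance in cd/m^2 for a fully corrected gray value."""
    return K * dn_linear / t_exp
```

Because $K$ is defined against the exposure-normalized signal, a scene measured at a different exposure time (where the DN scales proportionally) yields the same luminance value.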
Luminance Calculation. For any pixel $(x,y)$ in any measurement image:
$$L(x, y) = K \times \frac{I_{linear}(x, y)}{t_{exp}} \quad [cd/m^2]$$

Chromaticity Calibration: From Gray Values to CIE xy Coordinates#
Absolute calibration solves the problem of scaling luminance (Y channel). Complete chromaticity calibration also requires converting gray data from the three channels into CIE XYZ tristimulus values and subsequently calculating color coordinates.
Color Correction Matrix (CCM). As previously discussed, there are deviations between the spectral responses of the imaging colorimeter’s three channels and the CIE curves. The role of the color correction matrix is to compensate for these deviations through linear transformation:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \mathbf{M}_{CCM} \cdot \begin{bmatrix} CH_1 \\ CH_2 \\ CH_3 \end{bmatrix}$$

where $CH_1, CH_2, CH_3$ are the gray values of the three filter channels after dark current elimination, flat-field correction, and linearity correction.
Solving for the CCM matrix is similar to absolute calibration: use a high-precision reference instrument (spectroradiometer) to measure the reference XYZ values of a set of standard color samples, while capturing images of the same samples with the imaging colorimeter to extract gray values for each channel. Then, solve for the matrix $\mathbf{M}_{CCM}$ using least squares or more advanced optimization algorithms (such as non-linear optimization aimed at minimizing CIEDE2000 color difference).
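The least-squares variant of the CCM fit mentioned above can be sketched as follows, assuming numpy (the CIEDE2000-based non-linear optimization is beyond this illustration; function names are hypothetical):

```python
import numpy as np

def solve_ccm(channel_values, xyz_reference):
    """Solve the 3x3 color correction matrix M by least squares.

    channel_values: (N, 3) corrected channel readings for N color samples
    xyz_reference:  (N, 3) XYZ values measured by a reference spectroradiometer
    Solves CH @ M^T = XYZ for M in the least-squares sense.
    """
    M_T, *_ = np.linalg.lstsq(channel_values, xyz_reference, rcond=None)
    return M_T.T

def apply_ccm(M, channels):
    """Transform one corrected channel triplet into XYZ."""
    return M @ channels
```

With more samples than unknowns (N > 3), the fit averages out per-sample noise; well-chosen samples should span the chromaticity gamut of the intended measurements.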
For systems with larger $f_1'$ values (such as calibrated RGB cameras), a simple 3×3 linear matrix may not be accurate enough, and more complex models are needed:
Root-Polynomial Regression: Introduces non-linear terms such as $\sqrt{CH_1 \cdot CH_2}$ to improve fitting accuracy while maintaining exposure invariance (i.e., output scales proportionally when input signals scale). This is a widely recognized advanced algorithm in the imaging colorimeter field.
Polynomial Regression: Introduces high-order terms such as $CH_1^2$ and $CH_1 \cdot CH_2$. Although fitting accuracy might be higher, the high-order terms break exposure invariance (when the input doubles, a $CH_1^2$ term quadruples instead of doubling), so this method is only suitable for scenarios with strictly fixed lighting conditions, not for measuring objects at different brightness levels.
Color Coordinate Calculation. After obtaining the XYZ tristimulus values, the CIE 1931 color coordinates are calculated by:
$$x = \frac{X}{X+Y+Z}, \quad y = \frac{Y}{X+Y+Z}$$

Or in the CIE 1976 UCS chromaticity diagram:
$$u' = \frac{4X}{X+15Y+3Z}, \quad v' = \frac{9Y}{X+15Y+3Z}$$

The CIE 1976 u’v’ chromaticity diagram is superior to the CIE 1931 xy diagram in perceptual uniformity and is therefore more widely used in color difference evaluation in the display industry.
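The two chromaticity projections above are direct to implement (plain Python, hypothetical function names):

```python
def xyz_to_xy(X, Y, Z):
    """CIE 1931 chromaticity coordinates from tristimulus values."""
    s = X + Y + Z
    return X / s, Y / s

def xyz_to_uv(X, Y, Z):
    """CIE 1976 UCS u', v' coordinates from tristimulus values."""
    d = X + 15 * Y + 3 * Z
    return 4 * X / d, 9 * Y / d
```

As a sanity check, the equal-energy point X = Y = Z gives x = y = 1/3 and (u′, v′) = (4/19, 9/19).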
Summary of the Complete Data Flow#
Linking the four calibration steps together, the complete data flow from sensor RAW output to the final physical quantity is as follows:
[Sensor RAW Output] DN_raw(x,y)
│
▼
[Step 1: Dark Current Elimination]
DN_dark_corrected = DN_raw - Master_Dark
│
▼
[Step 2: Flat-Field Correction]
DN_flat_corrected = DN_dark_corrected × G(x,y)
│
▼
[Step 3: Linearity Correction]
DN_linear = LUT[DN_flat_corrected]
│
▼
[Step 4a: Color Correction Matrix]
[X, Y, Z] = M_CCM × [CH1, CH2, CH3]_linear
│
▼
[Step 4b: Absolute Calibration]
Luminance L = K × Y / t_exp [cd/m²]
Color Coordinates x = X/(X+Y+Z), y = Y/(X+Y+Z)

Each calibration step strictly depends on the output of the previous one—performing flat-field correction without dark current elimination introduces bias; performing absolute calibration without linearity correction introduces systematic error. The order of the calibration chain cannot be changed.
Maintenance and Traceability of Calibration#
Necessity of Regular Re-calibration#
Calibration parameters are not permanently valid. The following factors can cause calibration status to drift over time:
Sensor Aging. Quantum efficiency and dark current characteristics of sensors slowly drift with usage time, with aging accelerated under high irradiance or high-temperature environments.
Degradation of Optical Components. Microscopic degradation of filter coatings and aging of optical glue inside lenses can change the spectral response and transmittance of the system.
Mechanical Drift. Positioning accuracy of filter wheels and fit gaps of lens mounts can cause slight shifts in filter positions in the optical path, altering the effective spectral response.
Therefore, imaging colorimeters require a regular Re-calibration system. The re-calibration cycle depends on usage intensity and accuracy requirements, typically once every six months to a year. Dark frame data, being highly sensitive to temperature, may need more frequent updates in scenarios with significant ambient temperature changes.
Metrological Traceability Chain#
For the measurement results of an imaging colorimeter to have legal metrological significance, its calibration must be traceable to national or international metrology standards. The complete traceability chain is as follows:
- International Standards: Photometric/colorimetric standards maintained by the International Bureau of Weights and Measures (BIPM).
- National Standards: Primary standard lamps maintained by national metrology institutes of various countries (e.g., NIM in China, NIST in the US, PTB in Germany).
- Working Standards: Transfer standards calibrated by national metrology institutes, such as standard luminance sources and standard color cards.
- Calibrated Instrument: The imaging colorimeter is calibrated using these working standards, thus indirectly tracing back to international standards.
Each level of transfer in the traceability chain introduces additional measurement uncertainty. Ultimately, the total uncertainty of luminance measurement for an imaging colorimeter is typically within 2%-5%, and the uncertainty of chromaticity measurement (expressed as Δu’v’) is typically within 0.002-0.005.
Stray Light Correction: The Watershed between “Capture Devices” and “Metrology Instruments”#

Beyond the basic calibration chain, high-end imaging colorimeters require a critical advanced correction—Stray Light Correction.
Stray light refers to light that, after multiple reflections between optical interfaces such as internal lens barrel walls, filter surfaces, and sensor micro-lenses, is dispersed into non-imaging areas of the image. In high-contrast scenes (e.g., white glowing characters on a black background), stray light significantly raises the readings of pixels in dark areas, leading to:
- Seriously underestimated Contrast Ratio measurements.
- “Contamination” of color coordinates in dark areas by chromaticity information from high-brightness areas.
Correction of stray light is typically based on measurement and deconvolution of the Point Spread Function (PSF). By imaging an extremely small point light source in a dark room and measuring the halo distribution of that source in the image, the system’s PSF can be obtained. During actual measurement, the stray light component is subtracted from the original image through a deconvolution algorithm, restoring the true contrast of the scene.
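As a simplified illustration of PSF-based deconvolution (not the production algorithm, which typically handles a spatially varying PSF), the sketch below performs a regularized frequency-domain inversion, assuming numpy, a shift-invariant PSF measured at the same size as the image, circular boundary conditions, and an assumed regularization constant `eps`:

```python
import numpy as np

def stray_light_correct(image, psf, eps=1e-6):
    """Wiener-style frequency-domain deconvolution for stray-light removal.

    Assumptions (simplifications of real instrument processing):
    - psf is shift-invariant, same shape as image, peak at the array center
    - boundaries are circular (FFT convolution model)
    """
    psf = psf / psf.sum()                       # normalize total energy to 1
    # Move the PSF peak to index (0, 0) so the FFT model is centered
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)
    psf0 = np.roll(np.roll(psf, -cy, axis=0), -cx, axis=1)
    H = np.fft.rfft2(psf0)
    F = np.fft.rfft2(image.astype(float))
    # Regularized inverse filter: divide out the PSF, damped by eps
    return np.fft.irfft2(F * np.conj(H) / (np.abs(H) ** 2 + eps),
                         s=image.shape)
```

Because the imaging core dominates the PSF, |H| stays well away from zero and the inversion is stable; `eps` guards against noise amplification at frequencies where the PSF response is weak.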
This step is a key watershed distinguishing “cameras for taking photos” from “instruments for metrology.”
Conclusion#
From the raw gray digits output by the sensor to physically meaningful cd/m² and CIE color coordinates, each link in this calibration chain answers the same question: how to extract measurement results that are as close as possible to the physical truth from imperfect hardware.
Dark current elimination establishes the zero benchmark, flat-field correction ensures spatial consistency, linearity correction safeguards the proportionality of values, absolute calibration provides physical units, and the color correction matrix compensates for spectral mismatch. Each step is a targeted elimination of a class of systematic error, each is indispensable, and the order cannot be changed.
Understanding this complete data flow not only helps in correctly using imaging colorimeters to obtain reliable measurement data but also aids in tracing the root of the problem when measurement results appear abnormal—whether it’s a bias deviation caused by insufficient dark current compensation, spatial non-uniformity due to outdated flat-field data, or an absolute value offset from calibration coefficient drift. Systematically understanding the calibration chain is a basic skill in optical metrology engineering practice.
FAQ#
Q1: Can the order of the four steps in the calibration chain be changed?#
No. The order of the calibration chain is strictly fixed: Dark Current Elimination → Flat-Field Correction → Linearity Correction → Absolute Calibration. Each step depends on the output of the previous one—for example, performing flat-field correction without dark current elimination introduces bias (as spatial non-uniformity of dark current would be mistaken for optical vignetting), and performing absolute calibration without linearity correction results in systematic error. Changing the order will lead to incorrect correction results.
Q2: Why is flat-field correction data strongly correlated with aperture value and focus distance?#
Because changing the aperture alters the lens’s vignetting characteristics and diffraction performance—vignetting is more severe at large apertures, causing greater edge luminance attenuation; changing the focus distance alters the internal spacing of lens elements and the optical path characteristics. These changes directly affect the spatial non-uniformity distribution of the image. Therefore, each combination of aperture and focus distance requires independent flat-field calibration data, and high-end imaging colorimeters store correction matrices separately for each common optical configuration.
Q3: Why is stray light correction called the watershed between “capture devices” and “metrology instruments”?#
Because in high-contrast scenes (e.g., white characters on a black background), stray light significantly raises the readings of pixels in dark areas, leading to drastically underestimated contrast measurements and “contamination” of dark-area color coordinates by high-brightness areas. Ordinary cameras do not perform stray light correction, and their measured contrast may be several or even dozens of times lower than the true value. Imaging colorimeters eliminate stray light components by measuring the Point Spread Function (PSF) and executing deconvolution algorithms, restoring the scene’s true contrast—this is one of their key characteristics as metrology-grade instruments.
This article is part of the Imaging Colorimeter Technology Knowledge Base series.
