36 C.F.R. § 1236.41

Current through May 31, 2024
Section 1236.41 - Definitions for this subpart

In addition to the definitions contained in § 1220.18 of this subchapter and § 1236.2, the following definitions apply to this subpart:

Accuracy is the degree to which the information correctly describes the object or process being measured. It can be thought of in terms of how close a reading or average of readings is to a true or target value. Accuracy is a different measure than precision.

Adobe RGB is a red, green, blue color space developed to display on computer monitors most of the colors that CMYK color printers produce. The Adobe RGB color space is significantly larger than the sRGB color space, particularly in the cyan and green regions.

Aimpoint is a specific value assigned to a given metric to assess performance achievement.

Artifact (defect) is a general term to describe a broad range of undesirable flaws or distortions in digital reproductions produced during image capture or data processing. Some common forms of image artifacts include noise, chromatic aberration, blooming, interpolation, and imperfections created by compression.

Batch is a group of files that are created under the same conditions or are related intellectually or physically. During digitization, batches represent groups of records that are digitized and undergo QC inspection processes together.

Bit depth is the number of bits used to represent each pixel in an image. The term sometimes refers to the bits per channel and at other times to the bits per channel multiplied by the total number of channels. For example, a typical color image using 8 bits per channel is often referred to as a 24-bit color image (8 bits x 3 channels). Color scanners and digital cameras typically produce 24-bit (8 bits x 3 channels) or 36-bit (12 bits x 3 channels) images, and high-end devices can produce 48-bit (16 bits x 3 channels) images. Bit depth is also referred to as "color depth."
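The arithmetic in this definition can be stated as a minimal sketch (the function name is illustrative, not part of the regulation):

```python
# Illustrative sketch of the bit-depth arithmetic described above:
# total color depth = bits per channel x number of channels.
def total_bit_depth(bits_per_channel: int, channels: int = 3) -> int:
    return bits_per_channel * channels

print(total_bit_depth(8))    # 24 (24-bit color image)
print(total_bit_depth(12))   # 36 (36-bit capture)
print(total_bit_depth(16))   # 48 (48-bit image)
```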

Clipping is the abrupt truncation of a signal when the signal exceeds a system's ability to differentiate signal values above or below a particular level. In the case of images, the result is that there is no differentiation of light tones when the clipping is at the high end of signal amplitude, and no differentiation of dark tones when clipping occurs at the low end of signal amplitude.
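Clipping can be illustrated with a minimal sketch, assuming an 8-bit signal range of 0 to 255 (the function and values are hypothetical):

```python
# Illustrative sketch: clipping truncates signal values outside the
# representable range, so clipped tones can no longer be differentiated.
def clip(value: float, low: float = 0.0, high: float = 255.0) -> float:
    return max(low, min(high, value))

# Highlights above 255 all collapse to the same value (no light-tone detail):
print([clip(v) for v in [250.0, 260.0, 300.0]])  # [250.0, 255.0, 255.0]
# Shadows below 0 likewise collapse (no dark-tone detail):
print([clip(v) for v in [5.0, -2.0, -40.0]])     # [5.0, 0.0, 0.0]
```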

CMYK is a subtractive color model used in printing that is based on cyan (C), magenta (M), yellow (Y), and black (K). These are typically referred to as "process colors." Cyan absorbs the red component of white light, magenta absorbs green, and yellow absorbs blue. In theory, the mix of the three colors will produce black, but black ink is also used to increase the density of black in a print.

Color accuracy is measured by computing the color difference (ΔE2000) between the digital imaging results of the standard target patches and their premeasured color values. By imaging an appropriate target and evaluating through the software, variances from known values can be determined, which is a good indicator of how accurately the system is recording color. Analytical software measures the average deviation of all color patches measured (the mean).
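The averaging step can be sketched as follows. Note that the regulation specifies the ΔE2000 formula; for brevity this sketch uses the older, much simpler ΔE1976 Euclidean distance in L*a*b* space, and the patch values are hypothetical:

```python
import math

# Simplified sketch of color-accuracy scoring. A real system would use
# the Delta E 2000 formula; this uses the Euclidean Delta E 1976 distance.
def delta_e76(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# (measured, reference) L*a*b* pairs for three hypothetical target patches
patches = [((50.1, 0.4, -0.2), (50.0, 0.0, 0.0)),
           ((70.0, 2.0, 1.0), (71.0, 1.0, 1.5)),
           ((30.5, -1.0, 0.0), (30.0, -1.2, 0.3))]

# The mean deviation across all measured patches is the reported score.
mean_delta_e = sum(delta_e76(m, r) for m, r in patches) / len(patches)
print(round(mean_delta_e, 2))  # 0.86
```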

Color channel misregistration is the measurement of color-to-color spatial dislocation of otherwise spatially coincident color features of a digitized object.

Color management is using software, hardware, and procedures to measure and control color in an imaging system, including capture and display devices.

Color space is a specific organization of colors that supports reproducible representations of color in combination with color profiling supported by various devices. A color space can be a helpful conceptual tool for describing or understanding the color capabilities of a particular device or digital file. Examples of color spaces include Adobe RGB 1998, sRGB, ECIRGB_v2, and ProPhoto RGB.

Compression, lossless is a technique for data compression that will allow the decompressed data to be exactly the same as the original data before compression, bit-for-bit. The compression of data is achieved by coding redundant data in a more efficient manner than in the uncompressed format.
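The bit-for-bit property can be demonstrated with a minimal sketch using Python's standard zlib module (the sample data is made up):

```python
import zlib

# Sketch of lossless compression: decompressed data is bit-for-bit
# identical to the original, and redundant data is coded more efficiently.
original = b"redundant " * 20 + b"data"
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(restored == original)             # True: exact round trip
print(len(compressed) < len(original))  # True: redundancy coded compactly
```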

Compression, visually lossless is a form of lossy compression in which the data lost after the file is compressed and decompressed is not detectable to the human eye; the decompressed data appears identical to the original uncompressed data.

Digital Image Conformance Evaluation (DICE) is the measurement and monitoring component of the Federal Agencies Digital Guidelines Initiative (FADGI) Conformance Program. The program consists of measuring ISO-compliant reference targets and using analysis software such as OpenDICE for testing and monitoring digitization programs to ensure they meet FADGI technical parameters. Agencies can access FADGI-compliant tools and resources online at http://www.digitizationguidelines.gov/guidelines/digitize-OpenDice.html.

Digitization project is any action an agency (including an agent acting on the agency's behalf, such as a contractor) takes to digitize permanent records. For example, a digitization project can range from a one-time digitization effort to a multiyear digitization process; can involve digitizing a single document into a digital records management system or digitizing boxes of records from storage facilities; or can include digitizing active records as part of an ongoing business process or digitizing inactive records for better access.

Digitized record is a digital record created by converting paper or other media formats to a digital form that is of sufficient authenticity, reliability, usability, and integrity to serve in place of the source record.

Dynamic range is the ratio between the smallest and largest possible values of a changeable quantity, frequently encountered in imaging or recorded sound. Dynamic range is another way of stating the maximum signal-to-noise ratio.
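As a minimal sketch, the ratio can also be expressed in decibels, a common convention for signal ratios (the 8-bit example values are illustrative):

```python
import math

# Sketch: dynamic range as the ratio of largest to smallest signal values,
# expressed in decibels (20 * log10 for amplitude ratios).
def dynamic_range_db(max_signal: float, min_signal: float) -> float:
    return 20 * math.log10(max_signal / min_signal)

# An 8-bit channel spanning levels 1..255:
print(round(dynamic_range_db(255, 1), 1))  # 48.1 dB
```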

Federal Agencies Digital Guidelines Initiative (FADGI) is a collaborative effort by Federal agencies to articulate Technical Guidelines that form the basis for many of the technical parameters in this part, which equate to the FADGI three-star level. Agencies can access FADGI online at http://www.digitizationguidelines.gov/guidelines/digitize-technical.html.

Grayscale is an image type lacking any chromatic data, consisting of shades of gray ranging from white to black. Grayscale images most commonly have 8 bits per pixel, allowing for 256 shades or levels of intensity.

Image quality is the degree of perceived or objective measurement of a digital image's overall accuracy in faithfully reproducing an original. A digital image created to a high degree of accuracy meets or exceeds objective performance attributes (such as level of detail, tonal and color fidelity, and correct exposure), and has minimal defects (such as noise, compression artifacts, or distortion).

Lightness uniformity measures how evenly a lens records the lighting of neutral reference targets from center to edge and between points within the image.

Modulation transfer function (MTF)/spatial frequency response (SFR) is the modulation ratio between the output image and the ideal image. SFR measures the imaging system's ability to maintain contrast between progressively smaller image details. Together, these measurements allow an accurate determination of resolution relative to the sampling frequency.

Newton's Rings are interference patterns that appear as a series of concentric, alternating light and dark rings of colored light (when imaged in a color mode). This type of interference is caused when smooth transparent surfaces come into contact with small gaps of air between the surfaces. The light waves reflect from the top and bottom surfaces of the air film formed between the surfaces, causing light rays to constructively or destructively interfere with each other. The areas where there is constructive interference will appear as light bands and the areas where there is destructive interference will appear as dark bands.

Noise is one or more undesirable image artifacts in a digitized record that are not part of the source material.

Pixels per inch (ppi) describes the resolution capabilities of an imaging device, such as a scanner, or the resolution of a digital image. PPI is different from dots per inch (dpi).

Posterization is an effect produced by reducing the number of tones (colors) in an image so that there is a noticeable distinction between one tone and another instead of a gradual shift between them.

Precision is the characteristic of measurement that relates to the consistency between multiple measurements, under uniform conditions, of the same item or process. As opposed to accuracy, precision does not indicate how close a measurement is to a true value.

Quantization is a lossy compression technique that involves compressing a range of values to a single quantum value, usually to reduce file size. This may result in flaws in an image, such as posterization, caused by reducing the data available in an image file to represent aspects like colors.
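The relationship between quantization and posterization (defined above) can be sketched minimally, assuming 8-bit tones reduced to four quanta (the function and parameters are illustrative):

```python
# Sketch of quantization: a range of input values is compressed to a
# single quantum value, producing the visible banding called posterization.
def quantize(level: int, num_tones: int = 4, max_level: int = 255) -> int:
    step = (max_level + 1) // num_tones      # width of each quantum band
    return min((level // step) * step, max_level)

# 256 input tones collapse to 4 quanta: abrupt bands replace smooth ramps.
print(sorted({quantize(v) for v in range(256)}))  # [0, 64, 128, 192]
```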

Raster image is a digitally encoded representation of a subject's tonal and brightness information into a bitmap. Data from digital cameras and scanning devices record light characteristics as numerical values into a grid, or raster, of picture elements (pixels).

Reference target is a chart of test patterns and patches with known standard values used to evaluate the performance of an imaging system.

Reflective digitization is a process in which an imaging system captures reflected light off of scanned objects such as bound volumes, loose pages, cartographic materials, illustrations, posters, photographic prints, or newsprint.

Reproduction scale accuracy measures the relationship between the physical size of the original object and the size in pixels per inch (PPI) of that object in the digital image.

Resolution is the level of spatial detail rendered by an imaging system as measured by MTF/SFR.

Sampling frequency measures the imaging spatial resolution and is computed as the physical pixel count or pixels per unit of measurement, such as pixels per inch (PPI). This parameter provides information about the size of the original and the data needed to determine the level of detail recorded in the file. (See also modulation transfer function (MTF)/spatial frequency response (SFR).)
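The computation is a simple ratio, sketched here with hypothetical values (an 8.5-inch-wide page captured 3,400 pixels across):

```python
# Sketch: sampling frequency (PPI) computed from pixel count and the
# physical dimension of the scanned original.
def pixels_per_inch(pixel_count: int, physical_inches: float) -> float:
    return pixel_count / physical_inches

print(pixels_per_inch(3400, 8.5))  # 400.0 PPI
```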

Sharpening artificially enhances details to create the illusion of greater definition. Image quality testing using the SFR quantifies the level of sharpening introduced by imaging systems or applied by users in post-processing actions.

Source record is the record from which a digitized version or digitized record is created. The source record should be the record copy that was used in the course of agency business.

Spatial resolution determines the amount (for example, quantity, PPI, megapixels) of data in a raster image file in terms of the number of picture elements or pixels per unit of measurement, but it does not define or guarantee the quality of the information. Spatial resolution defines how finely or widely spaced the individual pixels are from each other. The actual rendition of fine detail is more dependent on the SFR of the scanner or digital camera.

sRGB is a standard RGB color space created by HP and Microsoft for use on monitors, printers, and the internet. sRGB uses the ITU-R BT.709-5 primaries that are also used in studio monitors and HDTV, and a transfer function (gamma correction) typical of CRTs (cathode ray tube TVs and computer monitors), all of which permits sRGB to be directly displayed on typical monitors. The sRGB gamma is not represented by a single numerical value. The overall gamma is approximately 2.2, consisting of a linear (gamma 1.0) section near black, and a non-linear section elsewhere involving a 2.4 exponent and a gamma changing from 1.0 through about 2.3.
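The piecewise transfer function described above can be sketched as follows, using the published sRGB encoding constants (input is a linear light value normalized to 0–1):

```python
# Sketch of the sRGB transfer function: a linear (gamma 1.0) section near
# black, and a 2.4-exponent curve elsewhere, giving an overall gamma of
# approximately 2.2.
def srgb_encode(linear: float) -> float:
    if linear <= 0.0031308:
        return 12.92 * linear                   # linear section near black
    return 1.055 * linear ** (1 / 2.4) - 0.055  # non-linear section

print(srgb_encode(0.0))  # 0.0
print(srgb_encode(1.0))  # 1.0
print(round(srgb_encode(0.5), 4))
```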

Tolerance is the allowable deviation from a specified value.

Tone response or optoelectronic conversion function (OECF) is a measure of how accurately the digital imaging system converts light levels into digital pixels.

Transmissive digitization is a process in which the system transmits light through a photographic slide or negative.

White balance error is a measurement of the digital file's color neutrality. The definition of "neutral" is not universal: RGB workflows that use digital count values encode neutral as defined by the International Color Consortium (ICC) color space chosen. L*a*b* workflows define neutral as 0 on the a* axis and b* axis, with the lightness recorded from 0-100 on the L* axis.
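In an L*a*b* workflow, the distance of a gray patch from a* = b* = 0 gives one way to express neutrality error; the following is an illustrative sketch, not a formula prescribed by this part:

```python
import math

# Sketch: neutrality error for a gray patch in an L*a*b* workflow,
# measured as Euclidean distance from the neutral axis (a* = b* = 0).
def neutrality_error(a_star: float, b_star: float) -> float:
    return math.sqrt(a_star ** 2 + b_star ** 2)

print(neutrality_error(0.0, 0.0))            # 0.0: perfectly neutral
print(round(neutrality_error(3.0, 4.0), 1))  # 5.0: a visible color cast
```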


88 FR 28417, 6/5/2023