Chapter 2. Finding a medium for posterity

Sequences of Time Arrested: the Kodachrome Toronto Registry Initiative

Figure 2.1. Group of [women] workers harvesting tea, Chakva, Russia, ca. 1907–1915. [Digitally-superimposed red-green-blue glass plates. Sergei Mikhailovich Prokudin-Gorskii, Library of Congress: LC-DIG-PPMSC-04430].

In this chapter

Before Kodachrome, other colour photography formats were available, although most were experimental and limited by the technical immaturity of colour imaging. None was designed for the non-professional consumer. Further, these processes were often cumbersome, subject to inconsistencies, and fragile in either the medium itself (e.g., glass plates) or the emulsion used (e.g., unstable dyes). They demanded complex chemistries or unusual material combinations which could be inconsistent and far from cost-effective. They often required expensive, specialized cameras which could expose multiple glass plates simultaneously or in succession, each capturing a discrete band of light wavelengths through a coloured filter. These factors necessarily confined colour work to development labs, professional studios, or the rare individual who understood the entire process end to end. All required extensive funding.

Colour photography before Kodachrome

Early colour: the additive approach

The first successful attempt at reproducing colour photographically came in 1861, when an additive colour-mixing approach made it possible to view a projected colour image assembled from three black-and-white images. James Clerk Maxwell’s process used three coloured filters — red, green, and cobalt blue — to produce three discrete images on separate glass plates, each capturing the light which passed through its respective filter (Maxwell 1855). After processing, Maxwell used a lantern to project the separate images onto a screen simultaneously, each through the same coloured filter interposed between glass plate and projection wall. By registering, or aligning, the three images, Maxwell was able to reproduce a coherent colour image that resembled the original subject: a tartan ribbon.

Later, during the early 1900s, Russian photographer Sergei Mikhailovich Prokudin-Gorskii advanced this additive process further by using a single camera equipped with a prism to expose three black-and-white glass negative plates simultaneously, each filtered to admit only red, green, or blue light (Denner 2004). By creating glass positives from the original negatives (or negative, as all three images were aligned in a vertical row on the same plate of glass), much as Maxwell had, Prokudin-Gorskii could use a “magic lantern” — precursor to the modern slide projector — to project the three aligned images through red, green, and blue filters.

What made Prokudin-Gorskii’s work notable was the consistency, portability, and sheer volume of the hundreds of colour images he produced, including commissions for the tsar prior to the Bolshevik revolution (Library of Congress 2010). The relative compactness of his equipment made photo shoots in the field feasible [Figure 2.1], rather than solely in studio, making possible novel colour views of people and places within their local element. His process made colour portraiture a practical extension of commonplace black-and-white photography.

Around the same time as Prokudin-Gorskii, two French brothers, Auguste and Louis Lumière, developed the Autochrome process. Rather than using separate filters, the Lumière brothers produced an in situ additive process: a very fine coating of potato starch, one grain thick — the grains randomly dyed one of three hues (orange, green, and violet) — applied to a glass plate, behind which lay a layer of light-sensitive emulsion. When exposed, light passing through the glass also passed through the translucent grains of dyed starch, each admitting light of its own hue to expose the emulsion behind it (London Upton and Upton 1989, 388). The Autochrome effect was more ethereal and painterly than Prokudin-Gorskii’s, but Autochrome was remarkable for its single-plate capture and its embedding of all three dyes on the same glass plate, eliminating the need for three plates and discrete colour filters. In addition, the Lumière brothers offered Autochrome commercially to the public.

The additive process, however, depended on bright light sources and a means of projection to be appreciated, much like a CRT monitor, and was not practical for printing or lithography. Another approach was necessary for colour imaging technology to evolve into modern colour photography.
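The additive principle behind Maxwell’s and Prokudin-Gorskii’s work — three filtered black-and-white records recombined by superimposed projection — can be sketched numerically. The tiny scene and perfectly selective filters below are illustrative assumptions only, not a model of any actual plate or emulsion:

```python
import numpy as np

# Hypothetical 2x2-pixel scene (rows x cols x RGB), intensities in 0..1.
scene = np.array([
    [[0.9, 0.1, 0.1], [0.1, 0.8, 0.2]],   # a red patch, a green patch
    [[0.2, 0.2, 0.9], [1.0, 1.0, 1.0]],   # a blue patch, a white patch
])

# Step 1: three black-and-white "plates", each exposed through one filter.
# An ideal red filter records only the red component of the light, etc.
red_plate   = scene[..., 0]
green_plate = scene[..., 1]
blue_plate  = scene[..., 2]

# Step 2: project each plate back through its own filter and superimpose.
# Additive mixing is a per-pixel sum of the three filtered projections,
# i.e. the plates simply become the R, G, and B channels again.
reconstruction = np.stack([red_plate, green_plate, blue_plate], axis=-1)

# With ideal filters, registration recovers the original scene exactly.
assert np.allclose(reconstruction, scene)
```

In practice the filters overlapped and the plates were exposed and registered imperfectly, which is why period additive images show fringing — but the arithmetic of the reconstruction is just this channel-wise recombination.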

Figure 2.2. Subtractive colour. As three hexagons (in cyan, magenta, and yellow) overlap, each removes white, producing colour complements of red, green, and blue. All three, overlapped, make black.

Early subtractive colour methods

In 1869, French photographer Louis Ducos du Hauron explored how complementary colours — that is, the colour directly opposite another on the colour wheel — could serve as a basis for printing colour images. By using cyan (for red), magenta (for green), and yellow (for blue), each complementary colour subtracts its respective primary from white light [Figure 2.2]. When the cyan, magenta, and yellow screens are printed in registration, they produce a legible colour image (London Upton and Upton 1989, 388; Rogers 2007, 4–9; Wilhelm and Brower 1993, 49).

This subtractive mixing, by removing only one colour at a time from a white or transparent base, allows the rest of the light to reflect or pass through, respectively, yielding a brighter image, even when a triple-plate photographic process is used. In turn, subtractive imaging requires less light to project or print an image with cyan, magenta, and yellow screens. This allowed du Hauron to begin shooting subtractive colour images during the 1870s (London Upton and Upton 1989, 388; Wilhelm and Brower 1993, 49). Du Hauron’s subtractive methodology was the basis on which colour plate printing (e.g., newsprint, books), lithography, and colour film imaging would become practical; it was later available commercially as the carbro (carbon-bromide) process and used until the 1930s (London Upton and Upton 1989, 389). As with other early colour imaging methods, du Hauron’s technique was cumbersome: it required special equipment and delicate glass plates, keeping colour photography all but beyond reach for non-professional uses.
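The logic of Figure 2.2 — each ink removing exactly one primary from white — can be sketched as a few lines of arithmetic. The idealized, fully absorbing inks here are an assumption for illustration; real pigments absorb imperfectly:

```python
# White light carries all three primaries (R, G, B) at full strength.
WHITE = (1.0, 1.0, 1.0)

# Each subtractive ink blocks its complementary primary's channel index.
COMPLEMENT = {"cyan": 0, "magenta": 1, "yellow": 2}

def subtract(light, ink):
    """Pass light through one ideal ink layer, removing its complement."""
    blocked = COMPLEMENT[ink]
    return tuple(0.0 if i == blocked else c for i, c in enumerate(light))

# Cyan over white removes red, leaving green + blue (i.e., cyan):
assert subtract(WHITE, "cyan") == (0.0, 1.0, 1.0)

# Cyan then magenta removes red and green, leaving only blue:
assert subtract(subtract(WHITE, "cyan"), "magenta") == (0.0, 0.0, 1.0)

# All three inks in registration remove everything: black.
all_three = subtract(subtract(subtract(WHITE, "cyan"), "magenta"), "yellow")
assert all_three == (0.0, 0.0, 0.0)
```

This is why the subtractive approach suits reflective prints: the paper supplies the white, and the inks only ever take light away from it.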

Kodachrome, v0.1: 1914–1934

Use of the name Kodachrome did not begin with the film with which it later became associated. As early as 1922, other experimental colour film processes being tested at Eastman Kodak were provisionally labelled “Kodachrome.” These tests, each using a subtractive approach, were the product of research and development dating to 1914 (Brayer 2006, 222–3). Unlike modern Kodachrome, this experimental emulsion was a two-colour process which used red and bluish-green dyes. Much like modern Kodachrome, the two-colour film started as a multilayered black-and-white film, each layer sensitized to a different light wavelength, with dyes added only after the emulsion on both sides was developed. The process was confined to two colours because each side of the celluloid was stained with its own dye. While not an accurate reproduction of true, trichromatic colour, it was sufficient to create one of the first colour film stocks to be tested as a colour motion picture. Its development was suspended at the start of World War I, as its red dye was sourced from Germany (ibid.). An example of this process as a motion picture was made public by Kodak in 2010.

Figure 2.3. Simplified cross-section of Kodachrome emulsion and its response to light, as seen from film edge [adapted from Rogers].

Kodachrome, v1.0: 1935–2010

The three-colour subtractive process for Kodachrome was significantly more complex than Kodak’s original two-colour test emulsion. The basic concept for the three-colour film was proposed by Leopold Godowsky and Leopold Mannes, both musicians as well as inventors, who worked for Eastman Kodak from 1922 to 1939. The two Leopolds secured a patent in 1922 for a two-colour subtractive process, much like the original inception of Kodachrome, and another in 1927 for one of the essential steps which would be used in the future Kodachrome “K” process (Roulier 2008, 21–2). After proposing their idea to Eastman Kodak, both were brought into the company’s research group to help bring a commercially viable version of the film to fruition.
On 15 April 1935, after nearly five years of product development, Kodak offered Kodachrome to the public as a 16mm movie film, followed in 1936 by an 8mm movie version as well as 35mm and 828 roll films for still photography (Dmitri 1940, 28; Rogers 2007, 184). In 1938, professional large-format sheets, in assorted sizes ranging from 6.5×9cm to 11×14in, were also marketed for sale.

To make three-colour film imaging feasible, Kodak photo engineers pioneered a multi-layered, black-and-white emulsion — all sandwiched on one side of a clear film base [Figure 2.3]. Each light-sensitive layer needed to respond to a different part of the visible light spectrum. The problem was that the emulsion formulas sensitized to each of the three primary colours were also sensitive to shorter (blue) light wavelengths. Beyond the top layer — the designated blue-sensitive layer — this was problematic for the lower two layers and would cause unusably bluish images. To counteract this, Kodak engineers added an intermediary yellow filter beneath the top layer to block blue light from reaching the lower two emulsion layers (Rogers 2007, 187). Below this yellow filter was a blue-green-sensitive layer, followed by a blue-red-sensitive layer, which sat atop the film base. Behind the base was a black “rem-jet” coating to prevent stray light from passing through the clear base and refracting back into the emulsion layers. Without this rem-jet backing, disruptive light haloes around brighter areas, known as halation, would have marred the image.
Processing Kodachrome involved multiple stages of development; re-exposure to red and blue light (for the red-sensitive and blue-sensitive layers, respectively); and the addition of complementary cyan (for the blue-red layer), magenta (for the blue-green layer), and yellow (for the blue layer) dye couplers to reproduce the original image. Each dye coupler was added in a separate development step, subtracting its colour from “white” (or clear) in that layer and leaving dye in place only where needed to render the areas exposed to light in that respective layer. Wherever a dye was not needed in its layer, it would not “couple” with the developed emulsion and would rinse away in the wash after development.

This “K” process — originally requiring 28 steps in 1935, later reduced to 18 by its final iteration, K-14 — demanded precise timing and chemical mixtures to develop each layer properly (Rogers 2007, 184–5; Roulier 2008, 22–3). The move from 28 to 18 steps came from simplifications to processing once chemical bleaching was replaced by re-exposure to coloured light; not only was processing time reduced (originally 3.5 hours), but the move away from chemical bleaching also stopped the severe colour shifting which plagued Kodachrome film between 1935 and 1938 (Rogers 2007, 184–5). Development this complex, while not impossible to carry out by hand, was prohibitively difficult: film had to be sent to labs equipped with dedicated machines designed for the precise processing steps and staffed with chemists to manage quality control. This is why Kodachrome had to be either mailed in for processing or dropped off at a local lab to be forwarded to a large, regional lab. Further, Kodachrome’s processing chemistry, some ingredients of which were prone to spoilage, needed to stay fresh and in constant use. This required a steady influx of film to be processed — often with machines running as close to 24 hours a day as possible.

Kodachrome’s chemistry also changed slightly over the years as some chemicals became harder to procure, were prohibited under stronger environmental regulation, or were superseded by technological improvements. In all, four major chemistries were used to develop Kodachrome: the original K process (1935–1962); the K-11 process (1955–1962); K-12 (1961–1978); and K-14 (1974–2010) (Buzit-Tragni, Dune, Grinde, and Morrison 2005, 10). Each process was engineered specifically for the film of its respective generation — a K-14 roll of Kodachrome could not be processed properly in K-12 chemistry, or vice-versa. Still, the core layering and the addition of dye couplers during development remained constant throughout, assuring the very consistency and stability noted throughout most of Kodachrome’s commercial life.

“The Kodachrome look”

Kodachrome’s unique visual properties were long known to film photographers. Some of these attributes (particularly the colour palette) can be mimicked digitally, while others (like “edge acutance”) cannot. It is these physical qualities for which a post-emulsion imaging era now finds itself impoverished.


Unique to the Kodachrome palette are unusually nuanced tonal variations for reds, blues, and golds. Browns and flesh hues, from dark to pale, are likewise rendered faithfully, making the film optimal for portraiture.

Further, given Kodachrome’s high contrast — a function of its three stacked imaging layers — the nuance of blacks and whites is especially evident relative to other films (or even digital imaging), giving it a tremendous range of possible tones. Other hues, meanwhile, like foliage greens and purples, tend to be more subdued.

While Kodachrome, from a strictly engineering standpoint, was eventually superseded in objective colour accuracy by other film stocks — that is, stocks whose reproduced colours more closely matched the actual light wavelengths captured during exposure — the way it rendered colour was arguably much the way the human eye and brain interpret and remember colour subjectively. To understand why Kodachrome looks the way it does, it helps to first explore how the human eye works.

Human vision

The retina is lined with several types of photoreceptor cells, each dedicated to a specific vision- or non-vision-related task. Vision-related photoreceptors are known as rods and cones, each of which uses discrete photopigments (proteins sensitized to specific light wavelengths) (Stryer 1996, 557). Rods, which operate only under low lighting, register low levels of illumination rather than colour. Even so, the rhodopsin photopigment embedded in rods is most sensitive to bluer wavelengths (e.g., moonlight).

Cones, meanwhile, are what make colour vision possible. Each of the three cone types, embedded with photopsin photopigments, is sensitized to one of three ranges of the visible light spectrum; the photopsin protein in each differs depending on the light wavelengths to which it is most sensitive. The sensitivities of the three cone types overlap — peaking roughly in deep blue, green, and orange-yellow — enabling all colours across the visible spectrum to be seen, resulting in trichromatic (literally, “three-colour”) vision. A congenital absence of one or more of these photopsin proteins, meanwhile, is what causes the different types of colour blindness.

Figure 2.4. Trichromacy sensitivity curves for Kodachrome (CMY) and the human eye (RGB) [parts adapted from Rogers].

Kodachrome’s three emulsion layers, much like the human eye, are sensitized to span the visible light spectrum. Its peak sensitivities, however, do not line up exactly with the peaks of human vision [Figure 2.4]. For those which do not, such as the blue-sensitive layer (yellow dye) or the blue-red-sensitive layer (cyan dye), the sensitivity (density) is higher, which helps compensate for the misalignment with the cones’ photopsins [Figures 2.5–2.7]. For the blue-red-sensitive layer in particular (n.b., recall that blue light is blocked by the yellow filter layer, making the blue-red emulsion an effective red-only emulsion), a higher density of cyan dye is used because the layer’s peak sensitivity to red occurs much deeper into the longer, redder wavelengths (extending into near-infrared) than that of the photopsin for viewing red light (most sensitive at wavelengths nearer to yellow).

Because, by design, less luminosity is needed to sensitize Kodachrome’s blue-red layer, a bright red object can be reproduced more intensely: the emulsion records a slightly exaggerated but nuanced rendering of those reds. To a lesser degree, Kodachrome also reproduces somewhat pronounced blues, since the blue-sensitive layer peaks slightly closer to greenish-blue than the photopsin optimized for blue light, and does so at a relatively higher intensity. Depending on the particular Kodachrome product or K-process generation, very minor differences in these dye-rendering curves do exist, but they all share the same basic spectral peaks — thus reproducing very similar results. Likewise, there are minor photopsin variations from person to person, but statistically most human trichromats share the same basic spectral peaks.


In addition, the Kodachrome process, by adding colour dyes to the emulsion during development, departed from virtually all other colour films, even to the present day — all of which have cyan, magenta, and yellow dyes pre-embedded in the imaging layers. This fundamental difference in processing underscores two additional qualities unique to the Kodachrome process — both considered beneficial.

First, as only the needed dyes were coupled to exposed areas of the emulsion during processing, there was less space between the silver halide crystals which would otherwise be occupied by the embedded dyes used in other colour films. This allowed a tighter grain, which increased Kodachrome’s effective resolution to approximately 100 lines/mm (Holm and Krakau 2000, 22).

Second, because dyes were coupled only where the exposed silver halide in each layer remained, “dark” areas in the emulsion of an exposed Kodachrome frame (for example, an onyx vase in front of a pale chartreuse wall) were visibly raised relative to bright areas, where the unexposed silver halide was washed away. By looking at the film’s emulsion side at an angle, it was possible to see this “etching” or “ridging” effect which, where the transitions between light and dark were greatest, produced an unusual optical illusion of sharpness known as edge acutance. This tactile attribute is what enabled Kodachrome images to mimic a pseudo-three-dimensional effect. Such a perceptible sense of tactility is a cornerstone of the idea of the “urban sensorium” — that is, “using the techniques of cinematography and audio mastering” to effect a realism which taps into multiple senses, not just one (Krieger 2004, 215). Because of these varying emulsion thicknesses, it is possible to view a frame’s contents by examining the emulsion side from an oblique angle, without holding the image in front of a light source.


The same property which endows Kodachrome with edge acutance also helps to explain why its durability and resistance to fading or colour shifting was so uniquely robust. Unlike other colour films, which have the cyan-magenta-yellow dyes embedded in the imaging layers of the film at the time of its manufacture, Kodachrome remained a triple-layered, black-and-white film until it was processed.

What this meant is that other colour films kept these dyes embedded both before and after development, even where they were not photochemically activated. Over time, those inactivated dyes, still embedded in the emulsion, could begin appearing as they became chemically unstable (whether through age, handling, humidity, heat, or other factors). This may be acutely evident in older films which carry a magenta, yellow, or (less commonly) cyan cast. Early non-Kodachrome colour film, like Ektachrome, was notoriously prone to these dye instabilities (especially magenta); modern films, while significantly more stable than their precursors, are not completely immune either (Wilhelm and Brower 1993, 26). The latter, however, are now engineered with far more stable dyes than their forerunners; some are rated to retain accurate colours for at least as long as Kodachrome can.

Likewise, Kodachrome’s dyes are not immune to shifting over time. This was especially evident during the first four years of the film’s commercial life, when Kodak was still trying to stabilize the development process. As a consequence, pre-1939 Kodachrome photos and movies were prone to severe dye fading — particularly pronounced losses of yellow — effecting a desaturated, magenta-bluish colour cast. Once Kodak managed to stabilize the process, the revision dramatically improved long-term stability. In archival reviews of early Kodachrome media, this switchover from the original process is quite evident; in most cases, as each lab was modified, it occurred during the first half of 1939. Film processed afterward became known for its signature stability. The most unstable dye in post-1939 Kodachrome, while still the yellow layer, should not fade appreciably for at least 185 years if stored in an archival-quality environment (Wilhelm and Brower 1993, 203).

Conclusion: why this matters

Comparative research benefits from an index of consistency: it removes one uncertain variable from the equation. Consistency allows researchers to focus on the integrity of the subject matter itself — not on whether the subject’s qualities have been disrupted or have deteriorated over time. Consistency becomes the baseline which enables review of the details recorded by a medium — not a necessity to fixate on problems with the medium itself.

Contents ©2012 Astrid Idlewild. Do not excerpt without written permission. A printed version of this SRP is filed with the Blackader-Lauterman Library of Architecture and Art at McGill University. The online version of this manuscript was edited and serialized in 2013.
