Chapter 5. Discussion

Sequences of Time Arrested: the Kodachrome Toronto Registry Initiative

In this chapter

The aim in launching this Registry initiative was to locate, describe, and aggregate every known collection documenting Toronto on Kodachrome film between 1935 and 2010. The purpose of the exercise was to give Toronto-focussed researchers a new way to pursue comparative analyses of selected themes, locations, or subjects over time, and to do so without the distraction of having to accommodate colour shifting or fading in the source material. The Registry appears, from all accounts, to be the only one of its kind anywhere as a Kodachrome-exclusive finding aid — whether generally so or with respect to a specific subject area or geography.


Creating the Registry was a straightforward task. The logistics of continuing it beyond this preliminary stage, however, raise new questions. From here, how can it be maintained? How will the Kodachrome Toronto Registry evolve once the supervised research project is completed? Should the database be managed centrally or left to a community to maintain collaboratively? If the first stage was to learn where collections with relevant material exist, is the next step to develop a central repository for conserving these collections and/or making them publicly available for review — one that leaves ownership with the owner, but holds collections in trust to assure consistent professional preservation against unpredictable elements?

This chapter attempts to resolve some of these questions.

Maintaining the Registry: making it live

Platform limitations

As explained in Chapter 3, the Registry database was prepared with Filemaker Pro, a standalone client (for Mac OS X and Windows) for creating databases and user interfaces for data entry. Its key advantage for this project’s first stage is its relatively low learning curve for building databases from scratch. Filemaker Pro, however, is not an industry standard for enterprise database development. Its features, best suited to closed environments (and rooted in technology predating the World Wide Web by almost a decade), reveal its limitations when configuring a database for online use. It was not engineered for online extensibility the way enterprise database platforms like MySQL, Postgres, or Oracle are. Fortunately, data content from a single-client Filemaker Pro database is exportable to a more rigorous database format, freeing the data for powerful new applications. Given this, the Registry, even in its present state, can be tailored, configured, and expanded with more features, enabling multiple users with administrative privileges and levels of security to manage archives via purpose-built content management platforms.
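The export path can be sketched in miniature. The snippet below assumes a hypothetical CSV export from Filemaker Pro (the field names are illustrative, not the Registry's actual schema) and loads it into SQLite, standing in here for a more rigorous server-grade database:

```python
import csv
import io
import sqlite3

# Hypothetical Filemaker Pro export; accession fields are illustrative only.
exported = io.StringIO(
    "accession_no,decade,format,description\n"
    "KT2011004,1940s,35mm slide,Queen's Park grounds\n"
)

conn = sqlite3.connect(":memory:")  # a server-grade database would replace this
conn.execute(
    "CREATE TABLE accession (accession_no TEXT PRIMARY KEY, decade TEXT, "
    "format TEXT, description TEXT)"
)

# csv.DictReader yields one dict per exported record; sqlite3 binds the
# named placeholders directly from those dicts.
reader = csv.DictReader(exported)
conn.executemany(
    "INSERT INTO accession VALUES (:accession_no, :decade, :format, :description)",
    reader,
)
conn.commit()

row = conn.execute(
    "SELECT description FROM accession WHERE accession_no = 'KT2011004'"
).fetchone()
```

Once the data lives in a relational store like this, the content management layers discussed below can query it directly.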

Database management

Enterprise database systems are the engines that drive content-rich online services — from social media platforms to file hosting, repositories, online retail, and blogging. Preparing a database for use with web applications requires proficiency, if not command, in back-end database server administration. Database setup can be command line-intensive. It may necessitate setting specific environment variables depending on the operating environment — Linux, Solaris, etc. — and configuring for the hardware on which the database server will run. This steeper learning curve makes enterprise database administration a practical obstacle for someone less familiar with it, at least at this stage of the Kodachrome Toronto Registry.
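To illustrate the kind of environment-dependent configuration involved, a connection helper might read its settings from environment variables, falling back to local defaults. The variable names below are hypothetical, not an established convention:

```python
import os

def db_config(env=os.environ):
    """Read database server settings from environment variables,
    falling back to local defaults. Variable names are hypothetical."""
    return {
        "host": env.get("REGISTRY_DB_HOST", "localhost"),
        "port": int(env.get("REGISTRY_DB_PORT", "5432")),
        "name": env.get("REGISTRY_DB_NAME", "kodachrome_toronto"),
    }

# A deployment would set these in the server's environment; here a dict
# stands in to show the override behaviour.
cfg = db_config({"REGISTRY_DB_HOST": "db.example.org"})
```

Keeping such settings out of the application code is what allows the same Registry software to run unchanged on different hardware and operating environments.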

Once ready, the database delivers the data storage layer. Another layer, a front-end interface, will also be needed.

Content management software

For the Registry to be accessible and searchable online, the database administrator must first configure the database’s back-end. Content management software for managing data — for describing and organizing content, storing multimedia, and delivering other data-rich features such as geotagging — may then be layered on top. Initial set-up is a one-time task, with periodic software updates and plug-ins as part of a regular maintenance routine. Administration may be handled remotely (as is often the case).

Content management software may be specialized for specific tasks. For finding aids, archives, and records management, there are both proprietary (closed) and open source options, any of which could bring the Kodachrome Toronto Registry online. For example, DSpace, an open source platform stewarded by DuraSpace, is under serious consideration for the Registry. DSpace can support not only the Registry’s accession records, but also the addition of scanned photos, videos, manuscripts, and other rich media content should the Registry expand into a visual repository. Archives and collections management software allows for a wealth of data fields customizable for specific needs. The Registry, while a basic finding aid in its current guise, could be much more, growing into a physical repository.

What the Kodachrome Toronto Registry initiative needs next is to secure a dedicated domain, settle on hosting space, and find a database administrator to set up the architecture on which the Registry will reside. At this time, there is no funding available to make this happen. Nevertheless, it may be an extracurricular goal to reach within the next year, perhaps by mid-2013.

The Registry’s evolution

Registry data collected during field research was prepared to provide researchers with a new tool for locating relevant Kodachrome content — by decade, by subject and, most importantly, by the part of the city the media chronicled. For example, the Registry’s “record description” and “tags” elements help to enrich the quality of search results and make known the specific subjects within a collection so they are more accessible and readily visible to the researcher.

While the work of one researcher may be thorough for those records now being added to the Registry, this single-node approach presents logistical and functional limitations which run contrary to the kind of mass aggregation this Registry hopes one day to accomplish — that is, to take on the heady task of registering as many verified collections of Kodachrome featuring the city as possible. Achieving 100 per cent saturation of every source is not only impractical, but probably impossible, and it would be prohibitively costly to try. Data gathering is an organic process, one best pursued patiently as more people voluntarily approach the Registry.

In the meantime, the Registry’s utility as a simple finding aid can link to other archival databases where collections are physically sited or housed online. At this time, only 17 per cent of the Registry — five of 30 records — includes links to collection sources housed elsewhere. Some of those links also host scanned images for online viewing. As more collections are added and the means to host digitized photos on the Registry and elsewhere expands, this ratio should improve.

The Registry should be configured with this foresight in mind — namely, to accommodate the rich media of scanned slides and telecine-digitized movies. The inspiration for this expansion is rooted in three places: the public online galleries discussed in Chapter 4; collaborative social participation on sites like Flickr; and the participatory involvement of citizens, each invited to help enrich the Registry’s quality.

For the Registry truly to flourish, it will need two key ingredients: community crowdsourcing and seed funding. Securing these different, but equally vital, parts will help to continue the field research and in-person interviews, and to lay the foundation for a future repository in both online and physical capacities.

Crowdsourcing and content enrichment

There’s nothing revelatory about saying this: the quality of data in a record improves as more content is added and curated. Achieving this kind of data-rich saturation for an archival repository or finding aid requires substantial human-hours. Traditionally, this meant hiring specialists to manage the receipt of new collections. But exploring some of the new tools at our disposal means being able to forgo a single-node approach to data entry in favour of a multi-nodal, collaborative one. In recent years, as it became feasible, the mass participation of crowdsourcing has helped to enrich data more quickly and thoroughly than ever before; a wiki is a prime example. Crowdsourcing is “derived from outsourcing, where production is transferred to remote and potentially cheaper locations. Analogously, crowdsourcing describes the concept where potentially large user groups carry out work which is expensive and/or difficult to automate” [n.b., emphasis author’s] (Heipke 2010, 550).

For the Registry, several possibilities for tapping into crowdsourcing would enhance the initiative’s utility and rapidly improve the quality of its data through collaborative participation — especially for Registry records describing publicly browsable portions of an accession. Once a record is registered and verified by a qualified party (a supervising curator, for instance), the owner of the contents that record describes may add digitized content from their collection at their leisure.

Crowdsourcing on the scale of that powering Wikipedia or the Google Earth 3D database would enhance the Registry in several ways. In reviewing content enrichment opportunities for the GLAM sector (an acronym for “Galleries-Libraries-Archives-Museums”), Oomen and Aroyo (2011, 140) discern five categories of crowdsourcing: correction and transcription tasks; contextualization; complementing collection; classification; and crowdfunding.

Three of these carry particular significance for the Registry’s future.

Complementing the collections

The Kodachrome Toronto Registry can most immediately benefit from complementing collections — that is, the “[a]ctive pursuit of additional objects to be included in a (Web)exhibit or collection” (ibid., 140). For the Registry, this would mean the inclusion of new collections from to-be-determined origins. The difference from the first phase of the initiative is that, rather than a solitary researcher seeking out new collections, future Registry contributions would originate from the community at large (i.e., institutions, the public, etc.) and be voluntarily congregated in one location — namely, the Registry web site. In part or in whole, these new records would be submitted (or proposed) by the community; they would still require a verification step to assure the quality of the data suits the Registry’s mandate.

This verification step can itself tap into a tiered community of crowdsourcing, as it enables voluntary curators who are already well-acquainted with (and qualified to discern) Kodachrome media to make a judgement call on whether a proposed collection record submitted by a new user is ultimately suitable for public review. This step may also help to quickly eliminate spurious submissions sourced from digital cameras, whose images were run through one of several “Kodachrome” filter effects available commercially for Photoshop or built into a digital camera’s firmware. A verification step is useful for weeding out and discarding dubious entries when the original emulsion cannot be produced on demand for curatorial review. A community tasked with skilled verification could provide a scalability which, for one person or a handful of people, would otherwise take too long to achieve.
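As a sketch of how such a tiered submission-and-review workflow might be modelled in software (the statuses, field names, and curator identifier below are hypothetical, not part of any existing Registry system):

```python
# Hypothetical statuses for a community-submitted Registry record.
PENDING, VERIFIED, REJECTED = "pending", "verified", "rejected"

def submit(record):
    """A community member proposes a collection record; it awaits review."""
    record["status"] = PENDING
    return record

def review(record, curator, genuine_kodachrome):
    """A volunteer curator verifies the emulsion, or rejects the submission
    (e.g. a digital image run through a 'Kodachrome' filter effect)."""
    record["reviewed_by"] = curator
    record["status"] = VERIFIED if genuine_kodachrome else REJECTED
    return record

# A proposed record moves from pending to verified once a curator signs off.
proposal = submit({"title": "CNE midway, c. 1958", "format": "35mm slide"})
accepted = review(proposal, curator="volunteer-01", genuine_kodachrome=True)
```

The point of the two-step shape is that no community submission reaches public view without a curator's judgement recorded against it.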

Classifying Registry records

The task of classification — “gathering descriptive metadata related to objects in a collection” (Oomen and Aroyo 2011, 140) — entails collaborative tagging of objects within an image, or adding other contextual details for elements not yet in the Registry. Again, a benefit of this open model of participatory crowdsourcing is a net enrichment of added content. While there are legitimate risks of questionable data being added by voluntary contributors — in turn hampering the overall integrity of the content — these risks are minor (and correctable by the community at large) relative to the enrichment of metadata provided by the broader community (Schäfer 2011, 164–5).

In a practical sense, this means the Registry could be expanded to feature much more than summary overviews of each recorded collection, as those expansions would be populated by volunteer users and participants. This is, in fact, a long-term plan for the initiative: to help the Registry evolve into a virtual repository for digitized photos (described within each of its records) and into a central archives site to conserve, digitize, and make different collections available in one place. This central archives, should it come to fruition, would preserve the ownership of collections. Its role would be to assure that those collections are housed in a climate-controlled, protected site for continued preservation for generations to follow. It would also standardize and centralize the digitization of media, assuring a consistent quality of transcription, as conservational efforts would be best equipped to work with Kodachrome’s unique (and often vexing) idiosyncrasies during scanning and digital restoration. These quirks differ materially from the scanning of more conventional emulsions: digitization requires both specialized knowledge of how Kodachrome records an image and colour calibration techniques to assure faithful reproduction of the original colours (and adjustment of colours for unstable examples processed before the 1939 revision).

Once Kodachrome media can be added digitally to the Registry, new possibilities open up as participants are invited to supply additional (and relevant) metadata on individual images or telecine clips. Within each collection, these metadata may include GPS geotagging (provided the site is recognizable to a person today who can return to that spot to record its co-ordinates) (Masil 2011, 92–3). In the end, the means to facilitate a complete review of a collection’s items is key to making the Registry a better tool for attracting and mining collective knowledge — particularly the knowledge of users whose participation, if delayed by just a few years, will be lost to ageing and mortality.
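A minimal sketch of what such a contributed geotag might look like as metadata, assuming hypothetical field names and the approximate co-ordinates of Queen's Park:

```python
def geotag(item, lat, lon):
    """Attach GPS co-ordinates (decimal degrees) to an item's metadata.
    Field names are illustrative, not an established Registry standard."""
    if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
        raise ValueError("co-ordinates out of range")
    item["geotag"] = {"lat": round(lat, 6), "lon": round(lon, 6)}
    return item

# A participant returns to Queen's Park, Toronto, and records its
# co-ordinates (approximate) against a slide in accession KT2011004.
slide = geotag({"accession_no": "KT2011004", "frame": 12}, 43.6623, -79.3927)
```

The range check matters in a crowdsourced setting: it is the simplest guard against a contributor transposing latitude and longitude.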


Not surprisingly, a practical barrier to the Registry’s ambitious plans is funding — namely in two areas: the physical infrastructure (discussed earlier in this chapter) and, more importantly, the intensive human labour involved in higher levels of field research, review, and maintenance for the Registry.

One funding tool could offer a new means of keeping the Kodachrome Toronto Registry alive and strong. Crowdfunding, defined as the “collective cooperation of people who pool their money and other resources together to support efforts initiated by others” (Oomen and Aroyo 2011, 140), is a non-institutional way to raise capital — particularly germane to a neoliberal economy in which private entrepreneurship is increasingly venerated while public funding for culture and the arts disappears. Kickstarter, Citizen Effect, ChipIn, and Ulule are a few of these crowdfunding portals. Kickstarter, probably the most widely known of these, was until very recently a U.S.-only service. It is now available in Canada and could be key to raising funds to expand the project for years to come.

Kodachrome Toronto Registry’s future

The Harvey Naylor monograph

The Registry may also be a first step toward spin-off projects. The work of Harvey R. Naylor, for example, could be expanded into a kind of halo project, which in turn could generate long-term recognition for the Registry.

During my review of Harvey Naylor’s collection, the idea of curating and editing a book of detailed excerpts from his Kodachrome slides came a step closer to reality after I learned that Naylor’s work was transferred to the City just before he died, thus making the slides part of the public domain.

A complement to this plan could be a special online exhibit — a joint project between the Kodachrome Toronto Registry and the City of Toronto’s Cultural Services office. This exhibit, whether at the City of Toronto Archives, in the City Hall rotunda, or at a private art gallery, would also open Naylor’s prolific work to a wider audience. A “halo monograph” would inspire new discoveries and help bring forth other large, as-yet-unknown storehouses of Kodachrome-based work. Even as Naylor’s slides remain filed and stored with the City of Toronto Archives, an exhibition could generate greater awareness and appreciation of the invaluable contribution Naylor made to Toronto’s postwar history. Its existence would also help scholars better visualize the Toronto which once was.

Virtual pasts
Google Earth

Google Earth displays 3D objects on maps. These virtual objects can be viewed from every vantage point, even with season-appropriate shadows cast from 3D object data (i.e., height). This makes virtual walkthroughs practical for anyone [Figures 5.1–5.2]. An additional bonus is that anyone equipped with Google’s software can visit a virtual location from anywhere. These objects are continually generated and added to a repository through the crowdsourcing of users who rely on 3D rendering tools such as Google SketchUp. SketchUp makes it possible to “wallpaper” photos of a building onto 3D polygons, producing digital mock-ups that reproduce the building’s dimensions. These “wallpapered” 3D objects are then exported to the Google SketchUp 3D Warehouse, from which they can be imported into the Google Earth database, allowing any user to view them much as they might if standing there in person.

Figure 5.1. 3D polygon VR simulation, ground view of Queen’s Park, 2009. [Google Earth]


This technology can be used to create a virtual Toronto by turning to the Kodachrome media in the Registry — both slides and movies. For the former, the same “wallpapering” of 3D polygons would virtually reconstruct buildings, many of them long lost, for placement on the map; these would be co-ordinated with the date the slide was made [Figure 5.3]. Google Earth’s timeline tool would then adjust the axis of time against the axis of space to repopulate an imperfect, but nevertheless faithful, reproduction of Toronto at that axial intersection.
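One concrete way to co-ordinate a dated object with Google Earth's timeline tool is KML's TimeStamp element, which lets the timeline show or hide a placemark by date. The sketch below generates a minimal placemark; the accession number is borrowed from Figure 5.3, and the co-ordinates are approximate.

```python
def placemark(name, lat, lon, year):
    """Build a minimal KML Placemark whose TimeStamp element lets
    Google Earth's timeline tool filter it by date."""
    return (
        "<Placemark>"
        f"<name>{name}</name>"
        f"<TimeStamp><when>{year}</when></TimeStamp>"
        # KML orders co-ordinates as longitude,latitude,altitude.
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        "</Placemark>"
    )

# Accession KT2011004: Queen's Park, photographed 1945 (co-ordinates approximate).
kml = placemark("KT2011004", 43.6623, -79.3927, 1945)
```

Wrapped in a standard KML document, a collection of such placemarks would let Google Earth's time slider sweep through decades of Kodachrome-dated views of the city.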

Figure 5.2. 3D polygon aerial (bird’s eye) VR simulation, Queen’s Park, 2009. [Google Earth]


This undertaking would require massive support from crowdsourcing. Contributions from multiple photographers across different decades could create a patchwork of the city’s past (as much of Toronto was likely never documented on Kodachrome film). For what does exist, it would bring Toronto’s past to life with a colour palette that retains its consistency throughout.

Figure 5.3. Accession KT2011004: potential “wallpaper” for Queen’s Park — for 1945 3D simulation [F. Ellis Wiley/City of Toronto Archives, s0124_fl0001_id0012].


Other virtual touring platforms

While Google Earth’s 3D virtual reality engine is the best known platform for geomapping photos, other platforms similarly rely on crowdsourcing to populate maps both spatially and temporally. Two of these now in use, History Pin and What Was There, are available for several global cities. Their approaches are confined to two-dimensional co-ordinates, but summaries of a particular photo, hand-tint, or vintage postcard can be added and viewed alongside that photo.


The logistics of working with multiple collections, should a physical repository come to pass, also invite opportunities for developing public exhibitions from collections in the Registry. Not unlike the Naylor proposal, these exhibitions could mix and match from different Registry collections; an exhibit could concentrate on a particular theme rather than a single artist-creator. Curators could then redirect more attention and energy to the curatorial work of proposed exhibits and less to tracking down sources spread across several locations or whose owner-custodians are difficult to reach.

What remains

Looking onward, the Kodachrome Toronto Registry initiative is open to a number of ways of expanding the finding aid. Many of these possibilities involve expanding the database to feature a digitized repository for each of the collections. This repository might mirror the social engagement of Flickr, or it could be a more traditional archives web site, much like those of the Library of Congress or the Smithsonian Institution.

This digitization goal is within reach, but the human-resource cost of realizing this enhanced platform atop the Registry’s base is significant. It will require taking advantage of crowdsourcing tools — both for content enrichment and for securing funding for the endeavour. Curiously, establishing a repository — whether for digitized assets or as a physical storehouse — will probably be less of a logistical challenge, as both are fairly fixed, and their extensibility could be determined with careful planning. The tasks of conserving, archiving, documenting, and indexing, however, are where the variables lack fixity. It is unknown just how much material will be added and, thus, how many human-hours it will take to maintain the Registry. This will require an approach not unlike an ongoing campaign, including the securing of resources to make it happen. Crowdfunding seems the most likely path, but more traditional streams like grant funding might also help.

Whatever the case, the Kodachrome Toronto Registry was conceived to inspire Toronto researchers and the public to see just how vibrant Toronto was during the 20th century. The project is supposed to be exciting. Conveying that excitement to the community is key to its survival and prosperity.

Contents ©2012 Astrid Idlewild. Do not excerpt without written permission. A printed version of this SRP is filed with the Blackader-Lauterman Library of Architecture and Art at McGill University. The online version of this manuscript was edited and serialized in 2013.

