Using case studies of documentary film, Freedom of Information Law document dumps, soundbanks, and a hacker conference, I will demonstrate experiments and results from several years of developing open source tools to reorient the idea of documentary around its documents. This stands in opposition to a tendency toward textual and machine-readable metadata, which unduly constrains our wonder, our perception, and our ability to navigate ambiguous and unknown material.
Snapping a photo captures more than just image data. Information about the camera and its lens, shutter speed and aperture, date and time, &c., has been bundled into the JPEG since the early days of digital photography. By now, that photo is likely to include a GPS trace as well, and as soon as it leaves your camera, computers are hard at work helping you identify and tag people and places, with auto-completing textual clarity and database precision. Meanwhile, NSA spooks try to reassure us that they are only interested in the metadata of our communications: the who and the when, and maybe some keywords.

Without denying the power and efficacy of machine-readable metadata, I argue that for humans to navigate and find meaning in unknown and unsorted material, we will need multi-media tools that immerse us and augment our powers of perception, rather than reduce all navigation to textfields, transcripts, and tags. For temporal media (sound and video), codecs have given us ever greater instantaneous fidelity, but leave us with few techniques to skim, seek, and survey.
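As a minimal sketch of how much non-image data rides along in a JPEG, the following Python snippet dumps the EXIF tags (camera and lens, exposure, timestamps, and any GPS trace) from a photo. It assumes the Pillow library is installed, and "photo.jpg" is a hypothetical stand-in for any file off your camera:

    """Sketch: list the metadata bundled into a JPEG alongside its pixels.
    Assumes Pillow is available; photo.jpg is an illustrative placeholder."""
    from PIL import Image
    from PIL.ExifTags import TAGS, GPSTAGS

    image = Image.open("photo.jpg")
    exif = image.getexif()

    # Top-level tags: camera make and model, date/time, orientation, etc.
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

    # Exposure details (shutter speed, aperture, ISO) live in the Exif IFD.
    for tag_id, value in exif.get_ifd(0x8769).items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

    # The GPS trace, if the camera recorded one, sits in its own IFD.
    for tag_id, value in exif.get_ifd(0x8825).items():
        print(f"{GPSTAGS.get(tag_id, tag_id)}: {value}")

Run against a phone photo, a dump like this typically reveals the who, when, and where of the shot before a single pixel is examined.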
Speakers: Robert M Ochshorn