Mirko Tobias Schäfer / Assistant Professor
Utrecht University, Department of Media and Culture Studies

Trust in Technical Images

Why is it that we trust the photo in a passport? And when do we have doubts? Why do we trust the images our navigation system produces? And when should we have doubts? What kind of trust can we have in the images recorded on mobile phones showing what is going on in Libya? In this article we would like to look at a number of examples from different moments in media history, from the late 19th century until today, in order to address a phenomenon that creates something like a continuous déjà vu in the reception practices of emerging visual media technologies.

One common denominator of these technologies is the fact that the images they produce are automatised, and that this automation is the very basis for our at least relative trust in them, be it the photochemical process through which a photograph is produced, or the functioning of an algorithm. These media technologies tend, on the one hand, to yield promises and maybe even utopian expectations as to the way in which they are able to produce reliable, trustworthy, accurate representations of the real that will make possible a number of seemingly revolutionary new practices. On the other hand, there is not only a corresponding negative reaction of dystopian fears pointing towards the threats such technologies represent, but also, and maybe more importantly, a scepticism that rapidly starts interrogating the utopian claims. The debates that ensue foreground certain functions of a medium and try to negotiate the conditions under which a media dispositif of trust can be established. By this we mean the kind of everyday functioning of a medium that requires a certain amount of trust in the configuration of technological, institutional, and textual practices that we take to be “the medium”, and that allows us to use it without having to continually question whether or not we fall victim to its flaws. These debates involve, furthermore, something like an expert culture of professional practitioners, theorists, users and others who address both the potentials and the limits of the medium, articulating the tensions between ideal conceptualisations and practical limitations that finally constitute the fundamental trust that governs, as a kind of default value, our everyday media practice.

In what follows we would like to look at a number of historical and contemporary examples that will help us to identify some of the central issues of these tensions, the negotiation of which, in the end, is what we might call media literacy in its most basic form: an understanding of the medium not as a black box, but as a process of translation in the course of which “input” is processed in order to produce an “output” – a technical image.

Photographing Fairies

In the summer of 1917, 16-year-old Elsie Wright and her 10-year-old cousin Frances Griffiths borrowed Elsie’s father’s camera, loaded with one plate, and took a photo of Frances observing a fairies’ dance; a few months later they succeeded in getting another picture of Elsie and a gnome. These photographs came to the attention of Sir Arthur Conan Doyle, then very much engaged in spiritualism, who published an article on these quite extraordinary images in the 1920 Christmas issue of the Strand Magazine, the very periodical that had made Conan Doyle famous with his stories about the most logical and rational of detectives, Sherlock Holmes. Naturally, these photos met with enormous scepticism, even though Conan Doyle took great pains to refute all objections, publishing in 1922 a book entitled The Coming of the Fairies, which collected all the evidence for the authenticity of the pictures. (→ Fig. 1)

Photographic images are part of what media theorist Vilém Flusser has called “techno-images”. According to Flusser, since the 19th century science and technology have increasingly delegated the process of picture making to machines because of the superior quality of reproduction that can be attained through them (1997: 23). Techno-images, indeed, seem to be seductively convincing in their promise of rendering an accurate depiction of the world. However, what is often neglected is the fact that there is an apparatus between the world and the recipient; or, to put it more precisely, the inner working mechanisms of the apparatus remain opaque; the machine, in other words, operates as a black box. To take this even further, we might say that the existence of the apparatus is in fact not so much neglected, but rather generates additional trust in the objectivity and accuracy of the images it produces. But how exactly is the role of the apparatus addressed here?

What is striking with regard to the case of the fairy photographs is the degree to which technical knowledge is mobilised in order to prove or disprove the claim that they are genuine. Obviously, by 1920 even the general public was aware of the fact that photos could be manipulated and faked, and so the extraordinary claim about the existence of fairies in Yorkshire could by no means rely on the authority of photographic evidence alone. Or rather, the trust in these records needed to be backed by additional proof. So Conan Doyle’s associate and main source of information in this affair, the Theosophist Edward L. Gardener, called upon experts in order to have the pictures analysed, one of whom was a certain Mr. Snelling.

"He has had a varied connection of over thirty years with the Autotype Company and Illingworth’s large photographic factory, and has himself turned out some beautiful work of every kind of natural and artificial studio studies. He laughs at the idea that any expert in England could deceive him with a faked photography: “These two negatives”, he says, “ are entirely genuine, unfaked photographs of single exposure, open-air work, show movement in the fairy figures, and there is no trace whatever of studio work involving card or paper models, dark backgrounds, painted figures etc. In my opinion they are both straight untouched pictures.” (Conan Doyle 1997: 29)"

Conan Doyle himself went to see two experts working for Kodak: “They examined the plates carefully, and neither of them could find any evidence of superposition or other trick” (1997: 17). The authenticity of the photos, in other words, was to be verified by specialised practitioners, who, however, could only attest to the apparent absence of manipulation, or rather to the fact that they were unable to detect any. And they were, of course, careful not to say anything that could be interpreted as a confirmation of the actual existence of the supernatural beings in the photograph.

On the other side of the spectrum of experts, the spiritualist community was not necessarily very happy with the photographs, nor probably with the discussions they were prone to unleash. Conan Doyle consulted someone he called Mr. Lancaster, a man who “combined considerable psychic powers” and who stated in a letter: “The more I think of it, the less I like it (I mean the one with the Parisian-coiffed fairies).” What makes him doubt in particular is the question of what kind of lens could have been used to get a clear picture of the dancing fairies, while the blurring of the waterfall in the background suggests “a one second’s exposure at least” (Conan Doyle 1997: 14-15).

This scepticism on the part of someone who is said to have seen fairies with his own eyes and thus might be considered a “believer” is probably due to the same kind of phenomena as the suspicions of the non-believers, namely the prior experience with all kinds of so-called spirit photography. Spirit photography is in itself a rather complex phenomenon, as Tom Gunning (1995) demonstrates; still, the continuous and often demonstrably justified suspicion of manipulation and fraud when photographs were presented to prove materialisations of ectoplasm or appearances of the dead had, since the mid-19th century, indeed somewhat discredited photography as a trustworthy medium.

At first sight, this seems to be quite a contrast to François Arago’s enthusiasm in his report on the daguerreotype in the French parliament in 1839, stressing, among other things, the fidelity of photographic records. As an example, Arago mentions that Egyptian hieroglyphs could be reproduced easily and without any errors by daguerreotypes, whereas the handmade copies that were made by draughtsmen during Napoleon’s expedition had many flaws (1995: 38). However, Arago here argues along similar lines as Charles Babbage, who “rhapsodized about the advantages of mechanical labor for tasks that required endless repetition, great force, or exquisite delicacy” (Daston/Galison 2010: 139). Arago, in other words, is less concerned with photography’s trustworthiness as a witness than with its reliability as a mechanical copying device.

As Lorraine Daston and Peter Galison observe with regard to the role of photography as a scientific tool in the second half of the 19th century, scientists were indeed well aware of the fact that photographs were anything but a direct and unfiltered product of “the pencil of nature” (2010: 125-138). But at the same time, the trust in the fundamental objectivity of the indexical photographic image was not questioned in any radical way. The case of the Cottingley fairies actually constitutes a most interesting example of this ambivalence: while on the one hand everyone clearly is aware of the enormous range of possibilities to manipulate photographs, the attempts to prove the absence of any such intervention reveal that if Conan Doyle and Gardener had indeed been able to convince the sceptics beyond any reasonable, and maybe even unreasonable, doubt that the images had not been tampered with, then, as a consequence, the existence of the fairies would have been a fact. So in spite of the general knowledge about the manipulability of photographs, the truth claim that is, as it were, inherent to photography as a medium is not fundamentally questioned. This is what the French film critic André Bazin, in his 1945 essay on “The Ontology of the Photographic Image”, referred to as the “essentially objective character of photography” (1960: 7), a phrase alluding also to the fact that in French the lens of the camera is called objectif (similarly in German: Objektiv).

Cinematographic documents

In 1898, almost a quarter of a century before the affair of the Cottingley fairies, the Polish photographer and cinematographer Boleslas Matuszewski published a little brochure, Une nouvelle source de l’Histoire (Création d’un dépôt de cinématographie historique), followed that same year by a book entitled La Photographie animée, ce qu’elle est, ce qu’elle doit être (Matuszewski 1898a, 1898b). In both publications, Matuszewski promotes animated photography as an important source for the production of historical documents and as a scientific tool. In Une nouvelle source de l’Histoire he in particular stresses the advantage of cinematography vis-à-vis photography, claiming that the sheer number of individual photographic records on a filmstrip protects animated pictures against attempts at manipulation:

"Perhaps the cinematograph does not give history in its entirety, but at least what it does deliver is incontestable and of an absolute truth. Ordinary photography admits of retouching, to the point of transformation. But try to retouch, in an identical way for each figure, these thousand or twelve hundred, almost microscopic negatives...! One could say that animated photography has a character of authenticity, accuracy and precision that belongs to it alone. It is the ocular evidence that is truthful and infallible par excellence" (Matuszewski 1898a: 9, quoted after Matuszewski 1995: 323).

However, as the case of the Cottingley photos also shows, this essential objectivity needs to be checked through a number of protocols in order to exclude any form of manipulation. These protocols are external to the photographs in question in that they try to retrace the process through which they were produced, looking for traces of possible interventions.

Matuszewski here not only uses, as it were, a purely quantitative argument to advocate cinematography’s trustworthiness; by the same token he refers to the technological specificity of animated photography (which more than 60 years later is famously expressed by a character in Jean-Luc Godard’s Le Petit Soldat in the following terms: “Photography is truth, the cinema is truth 24 times per second.”). Matuszewski not only highlights the technological advantage of cinematography over photography, but also the fact that what is recorded by animated pictures cannot be doubted: “It can verify oral tradition, and if human witnesses contradict each other on some matter, it can bring them into accord, shutting the mouth of whoever would dispute it.” (1995: 323) In a footnote, the author refers to a diplomatic incident concerning an alleged misconduct that was said to have occurred during the visit of the French President Félix Faure to St. Petersburg. Matuszewski declares that the projection of one of his own films, recorded on this occasion, “was found indisputably to refute the false assertion from abroad” (1995: 324).

However, in spite of his almost unconditional trust in the documentary powers of cinematography, Matuszewski points out at least one decisive problem, namely that it “does not give history in its entirety”. So whatever appears in the image has to be interpreted with regard to the broader context of the event in question.

This, however, is an issue Matuszewski does not seem to be interested in, contrary to later theorists of documentary film, who debate whether or not a film can indeed function as an “objective” or “trustworthy” document. Obviously, the various ways in which, as John Grierson famously put it, documentary is a “creative treatment of actuality” are an important aspect here, as the filmmaker’s operations (choosing a viewpoint, a camera angle, a particular lens, the time span that is recorded, and of course editing) can be considered manipulations that potentially have an impact on the trustworthiness of the film as a document.

Still, there is an aspect that Matuszewski does apparently consider problematic for the value of cinematographic records as historical documents: the fact that many of them were produced for entertainment purposes. Consequently, before being admitted to the repository, all animated photographies have to be evaluated: “A competent committee will accept or discard the proposed documents after having appraised their historic value.” (1995: 324) In his second publication, La Photographie animée, Matuszewski stresses again the task of such a committee, which first of all consists in eliminating “everything that is pure amusement and does not represent [a] character of utility” (1898b: 57). So, again, experts are needed to assess the specific quality of the record as document. But on the basis of what competences does such a committee function? Matuszewski does not offer any details on this point. But one might conjecture that the experts will have to be able both to judge the adequacy of the representation and to have an understanding of the mode of production of the images, as they will have to assess their scientific and historical value and eliminate pictures made simply to entertain the general audience. This attitude is quite different from the one articulated about a decade later by the producer Charles Urban, who also suggests the creation of an archive, but rather proposes to collect indiscriminately:

"Animated pictures of almost daily happenings, which possess no more than a passing interest now, will rank as matters of national importance to future students, and it behoves our public authorities, and the heads of museums and universities, to see that the institutions under their control become possessed of these important moving records of present events" (Urban 1907, 18-19).

For Urban it is up to future experts to discover, or identify, the value of the pictures preserved in the archive and, one might add, to understand in what way they can “rank as matters of national importance”. Both Matuszewski and Urban, albeit in different ways, reason along lines that one could see as a pragmatics of cinematographic documents (neither one includes staged scenes in these archival projects). Matuszewski quite obviously presumes there is a conflict between the original intentionality of views taken to entertain the general audience and the scientific nature of the documents the repository wants to preserve. Urban, on the contrary, argues that an animated (documentary) picture, for whatever purpose it was made originally, can turn into a valuable document when viewed in an appropriate context or perspective. In both cases, the indexical quality of the cinematographic image is presupposed and constitutes a first guarantee for its status as a record. For Matuszewski, however, this does not seem to be sufficient: an animated photography made for the purpose of entertainment cannot be a trustworthy historical document. Consequently, a competent – one might say: media literate – committee is needed in order to select the appropriate cinematographic views. According to Urban’s approach, on the other hand, it is up to the competent or media literate viewer to read an animated picture in such a way that its documentary value can be revealed. So once more, there are certain protocols that govern the way in which the images have to be treated.

As has been stated over and over again, photochemical indexicality is the central element in the debates about the documentary value of photography and film. This is the reason why the digital is said to pose a threat to these media in so far as they are considered to produce records of the real. (Interestingly, the electronic video image did not trigger the same kind of debates.) However, as Tom Gunning (2004, 2007) has shown, it is less the indexicality that determines this value than the ways in which photographic and filmic images are used. [1]

So here another aspect comes to the fore: techno-images are not only (or not simply) representations, they are also put to a certain use, and in this respect they do in fact function as a kind of tool.

Faultless computation

Historically, the kind of techno-images discussed in this section is rooted in apparatuses designed to calculate, simulate and eventually control complex processes, apparatuses that in turn are based on the mechanical looms and calculating machines of the 18th century (Campbell-Kelly et al. 2007). The objective of introducing and using such machines was to gain more efficiency in work processes in business, (academic) research, security and military institutions, aviation, navigation and transportation. Some concepts for automatic information processing were born out of the necessity to eliminate failure, similar to the above-mentioned remark by Arago with regard to the flawless reproduction of Egyptian hieroglyphs with the aid of a daguerreotype. As legend has it, Charles Babbage exclaimed in 1821, on recognizing many calculation errors in a set of mathematical tables: “I wish to God these calculations had been executed by steam!” (Swade 2001). His calculating machines were an attempt to prevent human errors in calculation, errors that might have appeared at any stage of the production of mathematical tables, from the process of calculation through typesetting to printing. His difference engine consequently was designed as a machine that would not only perform a mechanically automatised calculation but also print the computed result (Campbell-Kelly 2003). To err is human, and it is therefore unsurprising that ingenuity was driven by the aim of delegating tasks to unerring machines. In an environment that becomes increasingly complex, volatile and filled with information to process, these machines support human decision-making processes.
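
What the difference engine mechanised can be made concrete with a minimal sketch (in Python; the example polynomial and the function name are our own choices for illustration, not Babbage's notation): once the initial differences of a polynomial are set, every further table value is produced by additions alone, which is precisely what allowed the calculation, and in Babbage's design even the printing, to be delegated to a machine.

```python
# Illustrative sketch of the method of differences behind Babbage's
# difference engine: the table is produced by repeated addition only.

def difference_table(initial_differences, steps):
    """Tabulate a polynomial from its initial finite differences by addition alone."""
    diffs = list(initial_differences)          # [f(0), delta f(0), delta^2 f(0), ...]
    values = []
    for _ in range(steps):
        values.append(diffs[0])                # current table value
        for i in range(len(diffs) - 1):        # each difference absorbs the next one
            diffs[i] += diffs[i + 1]
    return values

# Example: f(x) = x^2 + x + 41 for x = 0..9; f(0) = 41, first difference 2, second difference 2
print(difference_table([41, 2, 2], 10))
# -> [41, 43, 47, 53, 61, 71, 83, 97, 113, 131]
```

No step in this procedure requires judgement or memory of the kind that produced the errors Babbage complained about; the correctness of the whole table depends only on the initial settings and the faultless repetition of one operation.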

In view of software-generated images and the related media practices, it seems necessary to keep in mind the historical developments that led to the software-based visualisation of data analysis and information processing. The term “programmierte Bilder”, discussed further below, indeed evokes an important aspect concerning the compiling of an image. As Schneider (2008) points out in her work on looms and punch cards, binary information processing allowed the instructions for weaving patterns to be stored on a data carrier and later even to be executed in an automatised process of information processing. The affinity between 'programming' a loom to create a weaving pattern and software-generated images can in fact be traced back to Ada Lovelace's remark with regard to Charles Babbage’s Analytical Engine:

"We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves" (Lovelace 1842).

The 19th-century conceptual thinking about universal machines for problem solving, as proposed by Ada Lovelace and Charles Babbage, became increasingly interwoven into the computers and software applications of the late 20th century, and profoundly shapes the epistemological processes of our information society. The techno-image rooted in the information processing of the 19th-century Jacquard weaving loom transforms into software applications that do not necessarily produce an image but rather a 'computed mapping' of reality, which might be represented in graphs, maps, figures, or computer-generated visualisations.

Computed truth & software-generated techno-images

Contemporary knowledge economies both depend on and thrive on software applications. Through their graphical user interfaces the world appears to be manageable, calculable and predictable. From Google Finance to social network analysis tools or weather simulations, these new techno-images are an attempt to translate the complexity of the world into a comprehensible form. They appear (and are used) as objective and scientifically calculated (or: objective, because scientifically calculated) representations of various aspects of our life-world. Whilst their 'computed intellect' is mostly taken for granted and many everyday activities rely on their 'compiled truths', the fact that under their opaque surfaces lie design patterns, pre-cast assumptions and quantitative models that in fact represent a simplification of our complex reality is generally neglected. Although software provides the means to manage complexity, it also produces another level of complexity: its working logics are hidden under opaque interfaces and remain incomprehensible to most users; important tasks are delegated to software, and the output of computation processes co-constructs knowledge and, what is maybe even more important, influences decisions on many levels in contemporary knowledge economies. Following Bruno Latour and his approach to non-human actors (e.g. 1991; 2008), software cannot be considered neutral, but is an active agent in shaping knowledge and producing facts, thus transforming the very things it is supposed to analyse, represent or process neutrally. This raises questions about the technical design of software applications and the way in which they are embedded in our knowledge processes.

There is one important difference between this kind of software-generated visualisation and photography, cinematography, or video: the fact that here the images function much less as representations than as tools. This concurs with Vilém Flusser’s notion of the image as being in the first instance a means to an end. Thus he remarks on the Lascaux paintings: “They are good images if they lead to a successful hunt” (1998: 112).

As far as visualisations are concerned, a plethora of new images has emerged, from computer-generated calculations and spreadsheets through PowerPoint presentations to computer simulations. The seductive aesthetics, together with the symbolic capital of the calculating machine as accurate and unerring, have contributed much to the role such images play in today’s information society. Although Vilém Flusser primarily thought of technologies such as photography, film, video, and computer-generated images, the visualisations of data analysis and information processing constitute techno-images as well. A Google search result list, a chart of a social network, a visualisation of market data, or a simulation of weather conditions are equally part of this category. These images are produced by software applications that, based on mathematical models, data and algorithms, compute a result that is provided as a visual representation. Art historians Horst Bredekamp, Birgit Schneider and Vera Dünkel use the term “programmierte Bilder” (Bredekamp, Schneider, Dünkel 2008: 182). This might appear confusing in our particular context of Flusser's notion of techno-images, because according to Flusser it is the images that program the recipient. Maybe a term such as compiled or assembled image would be more appropriate. We here would like to refer explicitly to images that are compiled as visualisations of data on the basis of mathematical models, which generate the images from statistical data and information from various databases or archives as well as from real-time data input.

Financial markets, over time, have not only been wired, but have in fact developed into almost completely electronic trading places, pushing the traditional trading floor as seen on the evening news to the fringes. Today, an estimated 3% of the traded volume is carried out on the traditional trading floor. Individual private clients can not only participate in trading by calling a bank’s call centre, where agents sit in front of monitors showing the traded stocks and prices in real time; they can also participate with the help of web applications that allow them to shift the shares in their portfolios and to place buy and sell orders. Translating the market into comprehensible images became an important aspect for both sides of the business, for the corporate dealers as much as for the private shareholders. This development not only connected the trader, who becomes increasingly geographically independent, to the various stock markets, or the stock market with a larger group of participants, but also interconnected the various financial markets, which led to a more and more complex global economy (Sassen 2008: 350).

Images representing the market and tools for monitoring and analysing price movements became vital due to the densely connected market place, and in particular due to its velocity. Technology-based analysis became crucial because it “did something other forms of financial expertise could not do: it provided market actors with an account of the ‘market’ as an orderly, totalizing phenomenon” (Preda 2007: 61).

Figure 2 shows an example of a techno-image of stock markets: the Map of the Market developed by Martin Wattenberg (1999) provides a quick and comprehensive overview of the market and its latest developments. Divided into differently sized tiles coloured from red to green, the map displays the 500 most traded companies and updates every 15 minutes. The size of a tile indicates the market capitalization of a company and the intensity of the colour indicates price movements: green for up, red for down, white for no change. Size and position of the various tiles are calculated from a performance analysis of the past three months. Similarly performing companies are grouped together, the entire range of the market being divided into 11 different areas, such as energy, consumer staples, health care, financials, etc.
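
The mapping itself can be stated quite compactly. The following sketch (in Python, with invented company names and figures; it is not Wattenberg's actual treemap layout algorithm) reproduces only the two encodings described above: tile area derived from market capitalisation, colour derived from the price movement.

```python
# Sketch of the two visual encodings of the market map described above:
# area follows market capitalisation, colour follows price movement.
# Company names and figures are invented; the layout step is omitted.

def tile(company, market_cap, price_change_pct, max_cap):
    """Return a relative tile area and an RGB colour for one company."""
    area = market_cap / max_cap                           # bigger company, bigger tile
    intensity = min(abs(price_change_pct) / 5.0, 1.0)     # saturate at +/- 5 %
    if price_change_pct > 0:
        colour = (1 - intensity, 1.0, 1 - intensity)      # shades of green
    elif price_change_pct < 0:
        colour = (1.0, 1 - intensity, 1 - intensity)      # shades of red
    else:
        colour = (1.0, 1.0, 1.0)                          # white: no change
    return {"company": company, "area": area, "colour": colour}

sample = [("AlphaCorp", 350e9, 1.8), ("BetaInc", 120e9, -3.2), ("GammaLtd", 80e9, 0.0)]
largest = max(cap for _, cap, _ in sample)
for name, cap, change in sample:
    print(tile(name, cap, change, largest))
```

Even in such a reduced form, the choices that shape the resulting image are visible: the threshold at which a colour saturates, the grouping of companies, the update interval. None of these is given by the market itself; they are design decisions built into the apparatus.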

Many tools are now available even for individual end users: the Rabobank provides clients with its TraderMonitor in order to enable them to manage their portfolios. Search engine providers, too, offer services for mapping the stock markets: Google Finance and Yahoo! Finance. However, none of these tools works in real time; often there is a delay of up to 15 minutes. The techno-image provided by these services is therefore a delayed representation of a highly volatile market, and at best it provides users with a notion of close connectedness to the unfolding trading events. Professional traders use different tools, but for them the latency problem of mapping the market in real time remains relevant as well, even more so since they deal on a much larger scale than private end users. The latency of network technology can cause disadvantages when trading becomes a matter of nanoseconds. [2]

Another consequence of networked markets is an increase in participants, which requires the trading platforms, from the market places to the connected banking firms, to invest in their technological infrastructure in order to maintain a stable system of uninterrupted trading. The new software applications and the hardware, such as keyboards, provide additional sources of crucial mistakes. From typing errors, the infamous 'fat finger' trades, to erroneous stock listings or flawed order processes due to software bugs, many additional sources of error are created. The market place providers react with the introduction of 'emergency brakes' that are supposed to halt the entire market when irrational activities and implausible figures appear.

Distrust in techno-images

"Die Bewusstseinsebene, der diese Codes entsprechen, ist noch nicht erreicht worden. Daher sind sie so außerordentlich gefährlich: sie programmieren uns, ohne in ihrem Wesen durchblickt worden zu sein, und bedrohen uns so als undurchsichtige Wände, anstatt uns als sichtbare Brücken mit der Wirklichkeit zu verbinden" (Flusser, 1998:105).

Flusser's concerns regarding techno-images resurface in various critical comments on computer simulations. Recently, MIT professor Sherry Turkle analysed the ambivalent character of computer simulations: “Simulation makes itself easy to love and difficult to doubt. It translates the concrete materials of science, engineering, and design into compelling virtual objects that engage the body as well as the mind” (2009: 7). These tools and the images they produce undoubtedly allow great advances in science, the economy and technology. The more powerful these tools become, however, the more their users depend on them. Turkle reminds us that we must not lose the tension of using technology and simultaneously distrusting it (2009: 10). While on the surface they appear as objective images of analysis and information processing, their working mechanisms generally remain incomprehensible to most users. In their paper “The Controversial Status of Simulations”, Günter Küppers and Johannes Lenhard describe clearly why doubt in computer simulations is necessary: "simulations are numerical imitations [sic] of the unknown solution of differential equations, or the imitation of complex dynamics by a suitable generative mechanism" (2004).
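
What such a 'numerical imitation' amounts to can be shown with a deliberately simple example. The sketch below (in Python; the equation and step sizes are our own toy choices, not taken from Küppers and Lenhard) approximates the solution of a differential equation with the explicit Euler method: the exact solution is never computed, only imitated step by step, and the chosen step size visibly changes the output.

```python
# Toy illustration of a "numerical imitation" of a differential equation:
# dy/dt = -y is solved approximately with the explicit Euler method.
# The exact solution y(t) = exp(-t) is shown only for comparison.

import math

def euler(f, y0, t_end, dt):
    """Approximate y(t_end) for dy/dt = f(t, y) using explicit Euler steps."""
    n_steps = int(round(t_end / dt))
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += dt * f(t, y)        # follow the tangent line for one step
        t += dt
    return y

decay = lambda t, y: -y          # the differential equation dy/dt = -y

for dt in (0.5, 0.1, 0.01):
    print(f"step size {dt:>4}: simulated {euler(decay, 1.0, 2.0, dt):.4f}, "
          f"exact {math.exp(-2.0):.4f}")
```

The discrepancy between the simulated and the exact value is not an accident but a property of the generative mechanism itself; which step size, which solver and which simplifications are acceptable is decided by the modeller, not by the phenomenon.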

It does happen that computer simulations circulate that in fact lack empirical confirmation or employ mathematical models that are highly controversial, as are the theoretical assumptions thus inscribed into the simulation. Recently, Pilkey and Pilkey-Jarvis provided an extensive critique of the use of mathematical models in the environmental sciences, arguing that “ordering complexity” makes it difficult to provide an appropriate predictive model or simulation:

"Perhaps the single most important reason that quantitative predictive mathematical models of natural processes on the earth don’t work and can’t work has to do with ordering complexity. Interactions among the numerous components of a complex system occur in unpredictable and unexpected sequences. In a complex natural process, the various parameters that run it may kick in at various times, intensities, and directions, or they may operate for various time spans" (2007:32).

Reality changes in a simulation, or, as Evelyn Fox-Keller put it, "[in] the new practice of simulation it was an idealized version of the physical system that was to be simulated, the aim of which was to produce equations (or models) that would be both physically realistic and computationally tractable" (Fox-Keller 2003: 205).

In April 2010 the ash cloud of an Icelandic volcano eruption led to a Europe-wide grounding of air traffic. An image (→ Fig. 3) of the ash cloud covering large parts of the European airspace was widely circulated in the media and became the subject of a heated debate between airline operators and public administrators. The allegation was that the image did not represent the actual situation in the airspace, but merely the potential diffusion of the ash, based on computer simulations employing diffusion models rather than actual data on ash concentration in the affected regions. The controversial decision of the EU administration subsequently led to widespread critique of information processing technology, computer simulation and the visual representation of data analysis (Schirrmacher, Gelernter). [3]
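
The distinction at the heart of that controversy, between a measured concentration and a modelled one, can be illustrated with a deliberately crude sketch. The following (Python with NumPy; the grid, rate and source are invented, and this is in no way the dispersion model actually used by the aviation authorities) spreads a single point source across a grid by repeated diffusion steps: the resulting 'ash map' is entirely a product of the model's assumptions, since no measurement enters the computation.

```python
# Toy 2-D diffusion sketch: an "ash map" computed from a model assumption
# (a point source spreading at a fixed rate), not from measured concentrations.

import numpy as np

def diffuse(field, rate=0.2, steps=50):
    """Spread a concentration field by repeated explicit diffusion steps."""
    c = field.astype(float)                   # work on a float copy
    for _ in range(steps):
        neighbours = (np.roll(c, 1, axis=0) + np.roll(c, -1, axis=0) +
                      np.roll(c, 1, axis=1) + np.roll(c, -1, axis=1))
        c = c + rate * (neighbours - 4 * c)   # discrete Laplacian update
    return c

grid = np.zeros((40, 60))
grid[20, 10] = 1000.0                         # the point source ("the volcano")
ash_map = diffuse(grid)
print(round(float(ash_map.max()), 2), round(float(ash_map.sum()), 2))  # peak spreads, total is conserved
```

Whether such a map is read as a prediction or as a depiction of the actual airspace is exactly what the dispute was about.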

Indeed, computer-generated data analysis and simulations of past or future events, often visualized in charts, maps, graphs or animated info-graphics, play a powerful role in our contemporary knowledge society. This has in fact been an issue of academic critique from various perspectives and disciplines. [4] In the case of the ash cloud simulation, the general discontent with the objective appearance of the techno-images at stake was expressed in open debate, fuelled of course by the large financial losses that the decisions based on them caused to airlines. If we proceed to blindly trust the images produced by software, computer scientist David Gelernter warns, we will suffer two dangerous consequences: firstly, we will be covered in an ash cloud of anti-knowledge, and secondly, a moral and intellectual passivity will emerge that no longer doubts or argues against the images. [5]

Dialectics of Trust

Flusser repeatedly emphasises that the objective character of techno-images is problematic. His concern was that, due to their ability to appear objective and accurate, they might terribly mislead their audience. A similar problem is recognizable in the reception of the images we have discussed. At the time of their emergence as products of new and promising media technologies, they appeared superior in their accurate depiction or their immaculate execution of complex computations. The apparatus therefore acquires a specific agency that is constituted in its promise of objectivity:

an inscribed promise: these apparatuses carry a promise of improvement, scientific accuracy and objectivity. Their value for epistemological processes is seen in their unaffected computation process, which provides allegedly unbiased results. Simultaneously, these image technologies confront us with

an ambivalent quality: while the computation processes and the representation of their results as images certainly serve, in an extremely fruitful way, many objectives in science, the economy and technology, these images have an ambivalent quality that is revealed above all in media practices. Used in popular discourse, images can be altered or turned into arguments. Taken out of context, they not only lack vital additional information but turn into something very different from the original image. Here, the reception by either a general audience or learned experts leads to very different decodings of the images. The context in which the image is published is also crucial for its reception and for the level of trust assigned to it.

Since the invention of photography in the mid-19th century, issues of trust in technical images have been discussed in various, yet often similar ways. In the tension between utopian expectations and sceptical interrogations, dispositifs of trust emerge as a result of continuous negotiations regulating the uses these images are put to. Especially when migrating from one domain to another, visual representations may sometimes be used to support excessive claims concerning their status as “proof”, but at the same time both the general public and specialised expert cultures contribute to an ongoing critique accompanying our everyday media practices. Media literacy, in this context, means understanding that the medium is not a black box, but an active instance translating, and thus interpreting and shaping, data input into the image we see. This does not necessarily mean acquiring expert knowledge in order to analyse such translation processes, but at least being aware of the fact that such processes exist. The media dispositifs of trust we all rely on in our everyday media practices can function precisely because of the ongoing critical assessment that is the foundation of every public sphere, the very condition of its possibility.

Notes

[1] Rodowick 2007 takes a view opposite to Gunning’s. See also Kessler 2009 for a discussion of this debate.

[2] The website Latency Stats presents facts on network latency in electronic markets and tries to draw the attention of market providers, cable and information providers to the problem of network jitter. <http://www.latencystats.com>

[3] Most notably, the debate in the Frankfurter Allgemeine Zeitung initiated by editor-in-chief Frank Schirrmacher discussed the role of algorithmic information processing in decision processes. Computer scientist David Gelernter seconded Schirrmacher's critique by emphasizing the dangers of relying too easily on software-based analysis. The debate was taken to an international level through a panel discussion at the Digital Life Design conference.

[4] Most recently there have been publications calling for a critical revision of computer simulations (e.g. Turkle 2009), mathematical models in environmental sciences (Pilkey and Pilkey-Jarvis 2007), mathematical models and tools in financial markets (Callon 2007; MacKenzie 2009; Taleb 2009), software applications (Marino 2006; Manovich 2008), network technologies (Galloway 2004; Chun 2006; Zittrain 2008).

[5] “Firstly, that we will be wrapped in a permanent ash cloud of anti-knowledge if software models make false predictions that are sanctioned by the venerable imprimatur of the scientific priesthood, put into circulation by the press like an ugly rumour, hastily endorsed by the United Nations and made the basis of their actions by politicians all over the world. […] The second, even greater danger is this: just as moral passivity spreads in a litigation-happy state crammed full of lawyers” (Gelernter 2010, our translation).

Bibliography

Arago, Dominique-François (1995) Rapport sur le Daguerréotype [1839]. La Rochelle: Rumeur des Ages.
Bazin, André (1960) The Ontology of the Photographic Image. In: Film Quarterly 13, 4, 4-9.
Bredekamp, Horst / Schneider, Birgit / Dünkel, Vera (eds.) (2008) Das Technische Bild. Kompendium zu einer Stilgeschichte wissenschaftlicher Bilder. Berlin: Akademie Verlag.
Campbell-Kelly, Martin (2003) From Airline Reservations to Sonic the Hedgehog. A History of the Software Industry. Cambridge, MA: MIT Press.
--- et al. (2007) The History of Mathematical Tables. From Sumer to Spreadsheets. Oxford: Oxford University Press.
Conan Doyle, Arthur (1997) The Coming of the Fairies [1922]. London: Pavillion Books.
Daston, Lorraine / Galison, Peter (2010) Objectivity. New York: Zone Books.
Flusser, Vilém (1997) Medienkultur. Frankfurt a. M.: Fischer.
--- (1998) Kommunikologie. Frankfurt a. M.: Fischer.
Fox-Keller, Evelyn (2003) Models, Simulation, and 'Computer Experiments'. In: Hans Radder (ed.), The Philosophy of Scientific Experimentation, Pittsburgh: University of Pittsburgh Press, pp. 198-215.
Gelernter, David (2010) Gefahren der Softwaregläubigkeit. Die Aschewolke aus Antiwissen. In: Frankfurter Allgemeine Zeitung 26 April 2010, online: <http://www.faz.net/s/RubCEB3712D41B64C3094E31BDC1446D18E/Doc~E36DC935956554960A206495346999283~ATpl~Ecommon~Scontent.html>
Grierson, John (19??)
Gunning, Tom (1995) Phantom Images and Modern Manifestations. Spirit Photography, Magic Theater, Trick Films, and Photography’s Uncanny. In: Patrice Petro (ed.), Fugitive Images. From Photography to Video. Bloomington, Indianapolis: Indiana University Press, 42-71.
--- (2004) What’s the Point of an Index? or, Faking Photographs. Nordicom Review 1-2, 39-49.
--- (2007) Moving Away from the Index: Cinema and the Impression of Reality. Differences: A Journal of Feminist Cultural Studies 18, 1, 29-52.
Kessler, Frank (2009) What You Get Is What You See: Digital Images and the Claim on the Real. In: Marianne van den Boomen et al. (ed.) Digital Material. Tracing New Media in Everyday Life and Technology, Amsterdam: Amsterdam University Press, 187-197
Küppers, Günter / Lenhard, Johannes (2004) The Controversial Status of Simulations.
Kubina, Lukas (2010) The ash cloud of anti knowledge, Blogpost at DLD Conference, online: <http://www.dld-conference.com/2010/04/the-ash-cloud-of-antiknowledge.php>
Lovelace, Ada (1842) Sketch of the Analytical Engine invented by Charles Babbage, with notes by translator Ada Lovelace, online: <http://www.fourmilab.ch/babbage/sketch.html>
Matuszewski, Boleslas (1898a) Une nouvelle source de l’Histoire (Création d’un dépôt de cinématographie historique). Paris: Imprimerie Noizette et Cie.
--- (1898b) La Photographie animée, ce qu’elle est, ce qu’elle doit être. Paris: Imprimerie Noizette et Cie.
--- (1995) A New Source of History [1898]. In: Film History 7, 3, 322-324.
Pilkey, Orrin H. and Pilkey-Jarvis, Linda (2007) Useless Arithmetic. Why Environmental Scientists Can't Predict the Future, New York: Columbia University Press.
Preda (2007)
Rodowick, David (2007) The Virtual Life of Film. Cambridge, Mass.; London: Harvard University Press.
Sassen, Saskia (2008) Territory, Authority, Rights. Princeton: Princeton University Press.
Schirrmacher, Frank (2010)
Swade, Doron (2001) The Cogwheel Brain: Charles Babbage and the Quest to Build the First Computer. London: Abacus.
Turkle, Sherry (2009) Simulation and Its Discontents. Cambridge, MA: MIT Press.
Urban, Charles (1907) The Cinematograph in Science, Education and Matters of State. London: Charles Urban Trading Co.
Walter, François (2010) Katastrophen. Eine Kulturgeschichte vom 16. bis ins 21. Jahrhundert. Stuttgart: Reclam.
Wattenberg, Martin (1999) Visualizing the stock market. In: Proceedings CHI 99, online: <http://www.research.ibm.com/visual/papers/marketmap-wattenberg.pdf>


with Frank Kessler: Trust in Technical Images, manuscript 2013.
