Alfred Nordmann, Professor of Philosophy at Darmstadt Technical University and Adjunct Professor in the Department of Philosophy at the University of South Carolina, is a prominent figure in the Philosophy and History of Science and Technoscience. His research interests include the History of Epistemology, Heinrich Hertz, and Ludwig Wittgenstein, as well as the philosophical dimensions of nanotechnoscience, synthetic biology, and climate engineering. In his opening talk at the Dimensions of Measurement Conference, he proposes to shift philosophical and historical attention to measurement away from questions concerning representation, which culminate in debates on scientific realism, and towards examining measurement as a “technology that creates and stabilizes coincidences”.
Measurement – Representation or Technology?
– Opening Talk at Dimensions of Measurement Conference, March 2013
As the philosophy of measurement is coming into its own, we may find that it emancipates itself from the questions of late 20th century philosophy of science, which was preoccupied with the challenge of salvaging some version of “scientific realism.” Accordingly, we might not be asking in quite the same way, if at all, what a measurement represents, what is required for something to be measurable, or how different kinds of scales relate to the order and orderings of things. And with the question of representation, the provocations of realism versus constructivism lose their force.
If this is right, how should we then conceive our philosophical and historical interests in the dimensions of measurement? Here is a suggestion. Representation is not the fact of measurement that we start from; it is not the Grundbegriff or fundamental concept of our investigations. When it comes in, it does so as a derived concept, as a particular technical achievement of measurements to which we assign the specific role of identifying a feature of reality.
By way of illustrating this shift of perspective, let me put it in a wider context and refer to the discussion of photography as a technology of measurement. This discussion owes much to Patrick Maynard’s argument that theories of photography are ill-served if they begin with the photographic image, with the depiction or representation, with the pictures that we put in our photo albums or see in the newspapers. By beginning with these we posit a kind of original relation or paradigm, and view everything else in relation to it, as variations of the original theme. When we start from the familiar photographic image, we treat photography as a kind of seeing, asking how photographic images are similar to or differ from retinal images, straining notions of receptive or passive seeing on the one hand, of interpretive or constructive seeing on the other. We are better served by viewing photography as a technology for measuring and recording the intensity of light. From this perspective, the application of this technology in the range of visible light is a special case, and within that range, the creation of images that appear realistic to the sensory apparatus of humans is an even more special case – not unlike the technical decision to add visualization software on top of numerical simulations to render an output that resolves data in a familiar visual vocabulary. And as these technical decisions are being made, we can talk about causality, about translation or transformation rules, about robustness, such that we finally arrive at questions of veracity and define notions of representational accuracy.
This shift in the way of looking at photography is quite like a shift that some recommend for looking at science, myself included. We are ill-served, I want to argue, if the philosophy of science begins with theories, with true or false propositions that purport to describe the world, and thus with representations, broadly conceived. By beginning with these we again posit a kind of original relation or paradigm, namely that of the knowing subject that needs to make sense of sensory impressions by interpreting them as owing to the external world. This is the paradigm of the mind that is asking how it can grasp a mind-independent reality, the paradigm of how one can achieve an agreement between theory and reality. If we then take the fact of science as our best evidence that such agreement can be achieved, science is reduced to a cipher for our ability to contain skepticism and manage an epistemological predicament. And indeed, this holds even for much of the so-called practice turn of philosophy of science, when we now consider laboratory practice or modeling practice in light of this original relation – as a practice that serves to forge, albeit in a local and piecemeal fashion, an agreement between bits of theory and bits of reality. But what if we were to invert the perspective and view scientific practice as a kind of technological practice? Scientists would now appear to be concerned first and foremost with making things work, making concepts and tools and experimental situations work together – what they create are artefactual realities in their own right, even as concepts go into their making. And as we observe how things behave in these constructed realities, we can find – as a special kind – instances where these behaviors look a lot like the way in which some things behave “on their own”.
And we then take these special cases seriously by asking whether these similarities of behavior are actually due to the same causal dynamics, whether some of our constructed realities can actually stand in for seemingly unconstructed or natural events. And then, by doing some further work, we can actually discover assurances of this interesting special relation between the works we create and happenings in the world. The so-called pure sciences that seek to describe the world by theories would thus be an offspring of the mechanical arts – a claim that can be made for Francis Bacon’s conception of modern science and that is manifested in synthetic biology’s attempt to provide understanding through the act of creation. And as difficult as it may be to maintain this inversion of perspective against the entire history of modern science and its various self-understandings, it opens up, just like Maynard’s take on photography, new avenues of questioning.
I want to conclude this quick survey by pointing to one prominent debate regarding measurement, namely the measurement of intelligence. Many have approached this debate from the point of view that measurements should be representational, asking whether we have reason to believe that there is such a thing as IQ that might actually be measured by an intelligence test, whether there really is a mental trait or personal capacity that is measured by the so-called Scholastic Aptitude Test or SAT. Though nothing in this field is uncontroversial, one difficulty of this debate was that it led into what appeared to be an antinomy of sorts: in the absence of convincing arguments that these tests provided good representations and measured real traits, one still had to account for the fact that they could be used as predictors. Here, it becomes quite evident what is gained by dropping the representational paradigm and viewing measurement in its technological setting. By constructing a test that requires the same kind of performance that students are rehearsing in school and will rehearse again in college, one creates a kind of homogeneous structure in which prediction becomes a trivial affair: what you can do once, you can do again, be it on your math test in school, on the SAT, or in college. Prediction is now a function of the likeness of tests, and one test score is a measure of the other, and there is no need to posit that one is measuring a character or trait. This is a greatly simplified picture, to be sure – but though predictive power becomes trivial on this picture, the technologies of measurement are anything but trivial when they serve to homogenize behaviors, standardize activities, and calibrate performances.
And of course, with this simplified picture in place, representation may well enter in again when we ask whether SAT tests only record overt test behaviors, whether they might provide a measure of the likeness of tasks, whether they represent, as Noam Chomsky suggested many years ago, what a society values or what counts as a social norm.
I will leave it at that, and thus with the rather open-ended suggestion that the philosophy of measurement speaks to a shift of perspective from measurement as representation to measurement as a technology that creates and stabilizes coincidences, which in particular settings afford “representations”.
– kindly provided by Alfred Nordmann