On the naming of parts

by Bernie Cohen

The field of modelling is rich in terminological confusion and misunderstanding, not least because some of its terms have formal definitions that differ radically from their everyday usage. An eminent MIT Professor of Engineering used to introduce his students to the subtle concepts of precision, accuracy and significance with the following (non-PC) example.

You ask a lady her age and she tells you she is 35. This statement has a precision of plus or minus 6 months, could be inaccurate by as much as 10 years and, if she is attractive, has no significance whatsoever.

What follows is an attempt to cast some light on the terminological confusion and misunderstanding.

  • In mathematics, a theory is an abstract algebraic structure, with a signature defining the syntax of its sorts and operations and a set of axioms defining equivalence classes over its syntactically valid expressions. The assertion that certain statements are, or are not, in the same equivalence class is a theorem, which may be proved within the formal system in which the theory is expressed. A theory is said to be consistent if no two of its theorems contradict each other and complete if every valid expression in it is provably true or false. (Gödel’s Incompleteness Theorem raises its ugly head here: no theory powerful enough to define arithmetic can be both consistent and complete.)

This definition may seem completely meaningless to the engineer, or even to the scientist, but theories are, indeed, devoid of any ‘meaning’ in the sense that they do not refer to anything in the world of experience.
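As a toy illustration (mine, not the author’s), take the theory of monoids: its signature has one sort M, one binary operation and one constant, and its axioms are

\[
(x \cdot y) \cdot z \;=\; x \cdot (y \cdot z), \qquad e \cdot x \;=\; x, \qquad x \cdot e \;=\; x .
\]

A theorem of this theory is that the identity is unique: if $e'$ also satisfies the identity axioms, then $e' = e' \cdot e = e$. Nothing here says what $M$, $\cdot$ or $e$ ‘are’; the theory is pure symbol manipulation.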

  • A model, in this context, is a theory morphism that assigns a set to each sort, and a function to each operation, of a theory, in such a way that all the axioms of the theory hold in the model. That such a morphism does or does not constitute a model of the theory is a theorem.

Still not very meaningful until we interpret the sets in our models as referring to the values taken by certain kinds of things in the world and the functions as referring to the behaviours of those things. Now our model becomes a theory (in the scientific sense) of any part of the world in which things of those kinds occur.
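To make that concrete, here is one possible model of the monoid theory sketched above (again my example):

\[
M \mapsto \mathbb{Z}, \qquad {\cdot} \mapsto {+}, \qquad e \mapsto 0 .
\]

All three axioms hold under this assignment, since addition of integers is associative and has 0 as its identity, so the assignment is a model; establishing that fact is itself a theorem. If we go on to interpret the integers as, say, displacements along a line and $+$ as performing one displacement after another, the same structure becomes a theory, in the scientific sense, of that small part of the world.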

Actually, such a model of the world initially has the status of a hypothesis, which must be rigorously tested before graduating to the status of theory. Hypothesis testing is the foundation of scientific method. Unlike mathematics, whose methods deal largely with verification using formal proofs, scientific method works largely with refutation, the demonstration that a hypothesis is false. Sir Karl Popper went so far as to insist that any hypothesis that was not, in principle, refutable could not be deemed to be scientific at all, thereby excluding astrology from the sciences.

Hypothesis refutation involves a combination of mathematical proof and empirical observation. The method is to prove a theorem in the underlying theory whose interpretation predicts certain as yet unobserved behaviour of things in the world. An experiment is designed, using suitable effectors, sensors and instruments, to induce the predicted behaviour. This procedure is effectively an investigation of commutativity. Given any function in the model and any value in its domain, we may execute that function on that value and then interpret the result in the world. Alternatively, we could first interpret the function and its input value and observe how that part of the world behaves. If the theory and its interpretation commute, then these two procedures should always produce the same result.
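In symbols (a sketch in my notation, not the author’s): write $I$ for the interpretation that carries each value and function of the model into the world, and observe for the act of letting the world run $I(f)$ on the input $I(x)$. The commutativity condition is then

\[
I\bigl(f(x)\bigr) \;=\; \mathrm{observe}\bigl(I(f),\, I(x)\bigr) \qquad \text{for every } f \text{ and every } x \text{ in its domain.}
\]

The left-hand side calculates in the model and then interprets the result; the right-hand side interprets first and lets the world do the calculating. Any single experiment checks this equation at one chosen pair $(f, x)$ only.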

If the predicted behaviour is not observed, non-commutativity has been demonstrated and the hypothesis is deemed to be false. If, however, the behaviour is observed, we have not proved commutativity, but merely that the interpretation of that particular function with that particular value gives an accurate account of the observed behaviour. The hypothesis is not deemed to be true, but merely to have survived that test. A hypothesis that has survived many such tests, repeated by different experimentalists in different conditions, especially where the predicted results are, in some cultural sense, surprising, is eventually granted the status of a law, although that terminology has fallen into disuse.

Actually, hypotheses are not cast and tested individually. Rather, collections of them are interrelated and stand or fall together as a body of theory that defines and governs some branch of science. Their interrelationship stems from their shared ontology, the way that they identify and distinguish things in the world. Contrary to popular belief, the world is not ontologically prior, that is, empirical reality does not consist of distinct objects. Rather, each of us impresses upon our experience of empirical reality our own ontology, constructed to suit our purposes and circumstances. For example, the Inuit distinguish many more kinds of snow than do other North Americans because they both experience much more of it and have to make their living through it (literally). As W. V. O. Quine put it, ‘to be is to be the value of a variable’.

It was once believed, by Plato, Porphyry, Leibniz and other eminent philosophers, that there was a universal ontology, that is, that all the things in the world could be uniquely classified under a hierarchical scheme of differentiation. Sadly, those days have gone. Even if such an ontology were proposed, its universality could not be verified. Science has already encountered many instances of ontological change. For example, for many years, chemistry recognised a type of thing called phlogiston which was given off when a combustible substance was burned, and another called calx which was the residue left after the departure of the phlogiston. Now we have no place for phlogiston in our ontology but we can retain calx as referring to the oxidised residue of burning.

So in order to communicate among scientific disciplines, we must find ways of composing our different ontologies, finding mappings among their terms that do not violate any of the theories on which the laws of the various disciplines rest.
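In the formal vocabulary used earlier, such a mapping is essentially a theory morphism (my gloss, not the author’s): a translation $\sigma$ from the vocabulary of one discipline’s theory $T_1$ into that of another’s, $T_2$, such that

\[
T_2 \vdash \sigma(\varphi) \qquad \text{for every axiom } \varphi \text{ of } T_1 ,
\]

that is, every law of the first discipline, once translated, must still be provable in, rather than contradicted by, the second.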

One way of doing this is to construct a GUT (Grand Unified Theory), or TOE (Theory of Everything), which is indeed a major project in physics today. The problem has been to reconcile general relativity and quantum mechanics, hypotheses which are as close to laws as anything in science but which, unfortunately, contradict each other. The approach, as it has been on many other occasions, is to introduce new ontological distinctions ‘underneath’ those of the competing hypotheses, which can then be re-expressed, and subsequently successfully composed, in the new ontological structure. In this case, the elements of the new ontology are strange things indeed. Just as we had got used to fundamental ‘particles’ that were also ‘waves’ and could be observed only for fleeting moments at ridiculously high energies, we were presented with their unobservable components, the quarks, and with the quarks’ inconceivable components, strings vibrating in ten or eleven dimensions.

But although a successful TOE would reduce physics to a single theory, we would not reduce our accounts of our empirical experience to its ontology. The reductionist programme was once a cornerstone of scientific philosophy, but those days are long gone, and engineering was largely responsible for their passing.
