Wednesday, 16 November 2022

Potentiality and probability

As outlined in the previous post, Barbara Vetter (Potentiality: From Dispositions to Modality, Oxford University Press, 2015) developed the concept of potentiality in her theory of dispositional powers. In that theory potentials are dispositions that are responsible for the manifestation of possibilities. The possibilities then tend to become actual events or states of affairs. The concept of 'potential' in philosophy, in a sense close to that discussed here, goes back at least to Aristotle in Metaphysics Book \(\Theta\).

In contrast, probabilities are weightings, summing to one, that describe in what proportion the possibilities tend to appear. I propose that the potential underpins the actual appearance of the possibilities while probability shapes it. This will be discussed further in this post.

Barbara Vetter proposed a formal definition of possibility in terms of potentiality:

POSSIBILITY:  It is possible that \(p =_{def}\) Something has an iterated potentiality for it to be the case that \(p\).

 So, it is further proposed that the probabilities are the weights that can be measured through this iteration using the frequency of appearances of each possibility. Note that this indicates how probabilities can be measured but it is not a definition of probability.

In the field of disposition research there is an unfortunate proliferation of terms meaning roughly the same thing. The concept of 'power' brings out a disposition's causal role but so does 'potential'. As technical terms in the field both are dispositions. Now 'tendency' will also be introduced, and it is often used as yet another flavour of disposition.

Tendencies

Barbara Vetter mentions tendencies in passing in her 2015 book on potentiality and, although she discusses graded dispositions, tendencies are not a major topic in that work. In "What Tends to Be: The Philosophy of Dispositional Modality" (2018), Rani Lill Anjum and Stephen Mumford provide an examination of the relationship between dispositions and probabilities while developing a substantial theory of dispositional tendency. In their treatment powers are understood as disposing towards their manifestations, rather than necessitating them. This is consistent with Vetter's potentials. Tendencies are powers that do not necessitate their manifestations but, under iteration, nonetheless tend to bring the possibility about.

In common usage a contingency is something that might possibly happen in the future. That is, it is a possibility. A more technical but still common view is that a contingency is something that could be either true or false. This captures an aspect of possibility, but not completely, because there is no role for potentiality: something responsible for the possibilities. There is also logical possibility, in which anything that does not imply a contradiction is logically possible. This concept may be fine for logic, but in this discussion it is possibilities that can appear in the world that are under consideration. Here an actual possibility needs a potentiality to tend to produce it.

Example (adapted from Anjum and Mumford)

Struck matches tend to light. Although disposed to light when struck, we all know that there is no guarantee that they will light, as there are many times a struck match fails to light. But there is an iterated potentiality for it to be the case that the match lights. The lighting of a struck match is more than a mere possibility or a logical possibility. There are many mere possibilities towards which the struck match has no disposition - there is no potential in the match towards melting when struck, for instance.

Iterated potentiality provides the tendency for possible outcomes to show some patterns in their manifestation. In very controlled cases the number of successes in iterations of match striking could provide a measure of the strength of the power that is this potentiality. This would require a collection of matches that are essentially the same.

Initial discussion of probability

Anjum and Mumford introduce their discussion of probability through a simple example that builds on a familiar understanding of dispositional tendencies associated with fragility.

"The fragility of a wine glass, for instance, might be understood to be a strong disposition towards breakage with as much as 0.8 probability, whereas the fragility of a car windscreen probabilifies its breaking to the lesser degree 0.3. Furthermore, it is open to a holder of such a theory to state that the probability of breakage can increase or decrease in the circumstances and, indeed, that the manifestation of the tendency occurs when and only when its probability reaches one."

This example is merely an introduction and needs further development but already the claim that "the manifestation of the tendency occurs when and only when its probability reaches one" shows that it is not a model for objective probability. What is needed is a theory of dispositions that explains stable probability distributions. Of course, if the glass is broken then the probability of it being broken is \(1\). However, this has nothing to do with the dispositional tendency to break. What is needed is a systemic understanding of the relationship between the strength of a dispositional tendency and the values or, in the continuous case, shape of a probability distribution.

In the quoted example each power is understood in terms of the probability of the occurrence of a certain effect, which is given a specific value: a strong disposition towards breakage with probability 0.8 for the wine glass, a weaker one of 0.3 for the car windscreen. But given a wine glass or windscreen produced to certain norms and standards, it would be expected that the disposition towards breakage would be quite stable. A glass with a different disposition would be a different glass.

Anjum and Mumford claim that in some understandings the manifestation of a possibility occurs when and only when its probability reaches one (see Popper, "A World of Propensities", 1990: 13, 20). This is a misunderstanding of how probability works. Popper distinguished clearly between the mathematical probability measure and what he called the physical propensity, which is more like a force, but Popper does limit a propensity to have a strength of at most \(1\). As I will attempt to show below, Popper, in proposing the propensity interpretation of objective probabilities, oversimplifies the relationship between dispositions and probabilities. This confusion led Humphreys to write a paper (The Philosophical Review, Vol. 94, No. 4, October 1985) showing that propensities cannot be probabilities. As indeed they are not. They are dispositions. That would leave open the proposition that probabilities are dispositional tendencies, but that too will turn out to be untenable.

The proposal by Anjum and Mumford that powers can over dispose does seem to be sound. Over disposing is where a power has a greater magnitude than is minimally needed to bring about a particular possibility. This indicates that there is a difference between the notion of having a power to some degree and the probability of the power's manifestation occurring. Among other conclusions, this also shows that the dispositional tendency does not reduce to probability, preserving its status as a potential.

Anjum and Mumford continue the discussion using 'propensity' for having a tendency to some degree, where degree is non-probabilistically defined. They use the notions of 'power', 'disposition' and 'tendency' more or less interchangeably; yet whereas an object may have a power to a degree, there are also powers that are simply properties. In what follows I will try to eliminate the use of 'propensity', except when commenting on the usage of others, and use 'tendency' to qualify either 'power', 'potential' or 'disposition' rather than letting it stand on its own.

A probability always takes a value in the closed interval between zero and one. If an event has probability \(1\) then probability theory stipulates only that it is almost certain: it occurs except on a set of cases of measure zero. In contrast to what Anjum and Mumford claim, it is not natural to interpret this as necessity, because there can be exceptions. Where there is only a finite set of possibilities, probability \(1\) does mean that there are no exceptions. But as this is a special case in applied probability theory, there is no justification for equating it with logical or metaphysical necessity.
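A standard measure-theoretic illustration of the gap between probability \(1\) and necessity: take \(X\) uniformly distributed on \([0,1]\).

```latex
% X uniformly distributed on [0,1]:
P(X = x) = 0 \quad \text{for every } x \in [0,1],
\qquad \text{so} \qquad
P\!\left(X \neq \tfrac{1}{2}\right) = 1,
```

yet the outcome \(X = \tfrac{1}{2}\) is not impossible; probability \(1\) here falls short of necessity.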

A power must be strong enough to reach the threshold to make the possibilities actual.  Once the power is strong enough then the probability distribution over the possibilities may be stable or affected by other aspects of the situation. So, instead of understanding powers and their degrees of strength as probabilistic, powers and their tendencies towards certain manifestations are the underpinning grounds of probabilities.  Consider the example of tossing a coin.

A coin when tossed has the potential to fall either heads or tails. This tendency to fall either way can be made symmetric, and then the coin is 'fair'. From this symmetry, probability weightings of \(1/2\) for each outcome (taking account of the tossing mechanism) can be assumed and then confirmed by measuring the proportion of outcomes on iteration. The reason why the head and the tail events are equally probable statistically, when a fair coin is tossed, is that the coin is equally disposed towards those two outcomes due to its physical constitution. The probability weightings derive, in this example, from a symmetry in the potentiality, which in turn derives from the physical composition and detailed geometry of the coin.
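The confirmation-by-iteration step can be sketched in a few lines of Python (the function name and the counts are my own, purely illustrative):

```python
import random

def toss_frequencies(n_tosses, p_heads=0.5, seed=1):
    """Iterate a toss and measure the proportion of each outcome."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(n_tosses))
    return heads / n_tosses, (n_tosses - heads) / n_tosses

f_heads, f_tails = toss_frequencies(100_000)
# For a fair coin both frequencies settle near the weighting 1/2
# as the iteration grows; a biased coin would settle elsewhere.
```

The point of the sketch is only that the weightings of \(1/2\) are measured, not defined, by the iteration.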


https://en.m.wikipedia.org/wiki/Rai_stones

Consider a society that uses very large stone discs as currency. On examination of a disc, it would be possible to conjecture that if it were tossed there would be two possible outcomes and that those outcomes are equally likely. But this disposition is not realised, because of the effort required to construct the tossing mechanism: such a stone may weigh several metric tons. The enabling disposition that would give rise to the iteration of possibilities resides in this missing tossing mechanism; it is not a property of the disc. The manifestation of the dispositional tendency of the disc to come to lie in one of two states needs an external mechanism that is disposed through design to toss the coin in a certain way. If the mechanism is constructed it may be too weak. It may tend to flip the coin only once, giving a sequence such as

... T H T H T H H T H T H T H T H T H T H ...

that would give a frequency of T close to \(1/2\), but the sequence does not exhibit the potential for random outcomes to which the coin is disposed.
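The defect in the weak mechanism can be made quantitative. In a sketch (names mine), compare a strictly alternating sequence with a randomly generated one: both have a frequency of T near \(1/2\), but the alternating sequence flips on every toss, while a random sequence flips only about half the time:

```python
import random

def freq_and_alternation(seq):
    """Return the frequency of 'T' and the proportion of adjacent
    pairs that differ (the alternation rate)."""
    freq_t = seq.count("T") / len(seq)
    flips = sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)
    return freq_t, flips

weak = "TH" * 5_000                                      # weak mechanism: one flip per toss
rng = random.Random(0)
fair = "".join(rng.choice("HT") for _ in range(10_000))  # well-designed mechanism

# Both sequences have freq_t ~ 1/2, but the weak mechanism's
# alternation rate is exactly 1.0 while the fair one's is ~ 0.5.
```

Frequency alone, then, does not capture the randomness that the coin's disposition supports.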

Probabilities and chance

From the above: potential and possibility are more fundamental than (or prior to) probability. Both are needed to construct and explain objective probability. The alternative, subjective probability, is based on beliefs about possibilities but that is not the same thing as what is actually possible and how things will appear independently of anyone's beliefs or judgements.

In this blog I have already appealed to dispositional tendencies to explain objective probabilities in quantum mechanics. The term propensity has been used to describe these probabilities. I now think that was wrong. Propensity should be reserved for the dispositional tendency that is responsible for the probabilities, to avoid this term merging the underpinning dispositional elements and the probability structure. Anjum and Mumford claim that they have made a key contribution to clarifying the relationship between dispositional tendencies and probability through their analysis of over disposition.

Anjum and Mumford claim that "information is lost in the putative conversion of propensities to probabilities", but only if the dispositional grounding of probabilities is forgotten. Their discussion is strongly influenced by their interest in application to medical evidence, where a major goal is the reduction of uncertainty. Anjum and Mumford propose two rules on how dispositions and probability relate.


  1. The more something disposes towards an effect \(e\), the more probable is \(e\), ceteris paribus; and the more something over disposes \(e\), the closer we approach probability \(P(e) = 1\).
  2. There is a nonlinear ‘diminishing return’ in over disposing. E.g., if over disposing \(e\) by a magnitude \(\times 2\) produces a probability \(P(e) = 0.98\), over disposing \(\times 3\) might increase that probability ‘only’ to \(P(e) = 0.99\), and over disposing \(\times 4\) ‘only’ to \(P(e) = 0.995\), and so on.
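Rule 2 can be reproduced by a toy model that is mine, not Anjum and Mumford's: suppose each extra unit of over disposing halves the residual failure probability. With a residual failure of 0.08 at baseline this yields exactly their figures:

```python
def p_manifest(k, q0=0.08):
    """Toy model: each unit k of over disposing halves the residual
    failure probability q0, so P(e) approaches 1 but never reaches it."""
    return 1 - q0 * 0.5 ** k

# k = 2, 3, 4 give P(e) = 0.98, 0.99, 0.995 - the diminishing
# return quoted in rule 2 - while P(e) < 1 for every finite k.
```

On this toy model the manifestation never attains probability \(1\) by over disposing alone, which is exactly the gap between the strength of a disposition and a probability value argued for above.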

While these rules are fine as propositions, they miss the mark in explaining the relationship between dispositions and probability. In the coin tossing example strengthening the mechanism is not about strengthening one outcome. Over disposing does provide support for the distinction between the strength of the disposition and value of the probability but the relationship between underpinning potentials, dispositional mechanisms, and the iterated outcomes needs to be made clear.

Anjum and Mumford also discuss coin tossing and make substantially the same points as I made above. However, having clarified the distinction between propensity and probability, they revert to using the term propensity in a way that risks conflating dispositional tendency with the probabilities of random outcomes. They say "50% propensity" rather than "50% probability". They then introduce the term "chance", which they relate to outcomes in some specified situations. Propensity is then reserved by them for potential probability, while chance is the probability of an outcome in a situation. This is more confusing than helpful.

Anjum and Mumford go on to a discussion of radioactive decay, which is known to be described by quantum theory. They make no mention of quantum theory (this will be corrected by them in Chapter 4) and strangely claim that radioactive decay is not probabilistic. The probability distributions derived from quantum mechanics unambiguously give the probability of decay per unit time. There are, per unit time, two possibilities: "decay" or "no decay". Their error is to claim that there is "only one manifestation type" (decay) and, from this, that there is only one possibility. Ignoring quantum mechanics, they write:

 "The reason it is tempting to think of radioactive decay as probabilistic is that there is certainly a distinct tendency to decay that varies in strength for different kinds of particles, where that strength is specified in terms of a half-life (the time at which there is a 50/50 chance of decay having occurred)."

But no: the reason to think that radioactive decay is probabilistic is that our best theory of nuclear phenomena explains it in terms of probabilities. This misunderstanding leads them to introduce the concept of indeterministic propensities. Having arrived at that concept, it remains open whether there are non-probabilistic indeterminate powers, but radioactive decay is not an example.
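For reference, the standard quantum-mechanical description they pass over fits in one formula: with decay constant \(\lambda = \ln 2 / t_{1/2}\), the probability of decay within time \(t\) is \(1 - e^{-\lambda t}\). A minimal sketch (function name mine):

```python
import math

def decay_probability(t, half_life):
    """Probability that a nucleus has decayed by time t,
    for exponential decay with the given half-life."""
    lam = math.log(2) / half_life   # decay constant
    return 1 - math.exp(-lam * t)

# In each interval of time there are two possibilities, decay or
# no decay; at t equal to one half-life the probability is 1/2.
```

The half-life quoted by Anjum and Mumford is thus not an alternative to probability: it is a parameter of a probability distribution.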

The examples of chance provided by Anjum and Mumford can be derived from a correct application of probability theory. Chance is often used as a term for 'objective probability', and I have done so in previous posts. I will continue to follow that usage and exploit the clarification obtained from the analysis above, which shows that objective probability depends on the possibilities that are properties of an object. The manifestation of these possibilities may require an enabling mechanism. The statistical regularities displayed by these manifestations on iteration are due primarily to the physical properties of the object, unless the enabling mechanism is badly designed.

The term 'propensity' has given rise to much confusion in the literature. Now that we are reaching an explanation of objective probability, the term 'propensity' might better be avoided.

I propose that the model of objective probability is that:

OBJECTIVE PROBABILITY An object has probabilistic properties if it is physically constituted so that it has a potential to manifest possibilities that show statistical regularities.

It is possible to describe statistical regularities without invoking the term 'probability'.

Although some criticism of Anjum and Mumford is implied here, I recognise that their contribution has done much to disentangle considerations about the strength of dispositions that describe tendencies from a direct interpretation as probabilities. However, the value of the three distinctions they have identified is mixed:

  • Chance and probability are not fundamentally distinct and just require a correct application of probability theory
  • Probabilistic dispositional tendencies are distinct from non-probabilistic dispositional tendencies: this is a real and fruitful distinction
  • Deterministic and indeterministic dispositional tendencies also provide a useful distinction but it remains to be seen whether there are fundamental non-probabilistic indeterministic dispositions.

The next post will continue this theme with a discussion of dispositional tendencies in causality and quantum mechanics, engaging once more with the 2018 book by Anjum and Mumford.

 

Monday, 24 October 2022

Concepts of causal powers

The concept of dispositions has played a key role in the formulation of objective quantum chance in this blog. However, there is ambiguity about what these are as powers. Ruth Porter Groff has helpfully addressed this issue and identified four senses of the term. She identifies that dispositional power may be conceptualised as an:

  • Activity
  • Capacity
  • Essence
  • Necessitation.

Groff indicates that there is a further task: to work out which conception is correct in a given context. That is, depending on what is (what the world is like), there could be powers of distinct types. For example, at the various levels of reality (Hartmann) different concepts could play their part. The intention is that clarity on which concept of power applies will strengthen any theory of quantum chance.

These concepts of powers need to be contrasted with what is called the Humean view that there are no necessary connections or causes in nature. That is, there are no powers.

Activity

Consider a film in which each frame is static. Playing the film gives the impression of movement or activity. If activity in the world is like the film, then activity is an illusion or a metaphor. In which case activity is not an aspect of how things are, and the ontology can be called passive. If activity is not just a sequence of static configurations, then activity may not be an illusion. We can follow Groff and use the term anti-passivist to refer to the opinion that activity is a real and irreducible component of the world.

Activity is taken to cover a range of things: movement, deliberation (moving away from the visual film example), inquiry, and chemical reaction (to take an inorganic example). Any instance of causation is an activity.

The view that there is activity in the world has common sense on its side. For example, action as captured by verbs as part of the deep structure of language.

From this perspective, to say that things in the world have causal powers is to say that things engage in activity and are able to do things. Reality is in this sense genuinely, irreducibly, non-metaphorically dynamic. In contrast Esfeld's [1] primitive ontology, which is in favour with some who defend the Bohmian version of Quantum Mechanics, is passive and at best kinematic. It is an example of an extreme (or, to use Esfeld's own term, Super-) Humean ontology.

Real activity contrasts with the Humean view that inanimate matter is essentially passive and never intrinsically active. In real activity the action of things depends on their causal powers. Examples of activity, from Cartwright [2], are:

1. The carburettor feeds gasoline and air to a car’s engine ...

2. The pistons suck air in through the chamber...

3. The low-pressure air sucks gasoline out of a nozzle ...

4. The throttle valve allows air to flow through the nozzle ...

5. Pressing the pedal opens the throttle valve more, speeding the airflow and sucking in more gasoline...

6. …

These examples indicate that mechanisms can undertake activity.

Capacity

A capacity is a way that something presently is, such that it could be a way that it presently is not. The phenomenon of a capacity is thus inherently modal, invoking possibility. That is, a capacity is the potential for a thing to be in a possible state. This type of potentiality is attached to the nature of a thing and is sometimes called real, or metaphysical, possibility, in contrast to logical possibility.

A capacity may be engaged in activity but there is nothing about the concept of a real possibility that requires realism about activity to be either built into it or entailed by it. A power, as a capacity, is a property that need not be fully actualised in the present in order to exist.

Capacities are properties that include, as part of their identity in the present, non-actualized but nevertheless real potential manifestations. By contrast, for the Humean the only properties that exist are categorical properties; properties that are fully occurrent or actual.

Essence

Essentialism holds that things (or some kinds of things) are such that they could not be, in part or in full, otherwise without ceasing to be what they are. Among dispositionalists, Bird [3] defines powers, or potencies, as fundamental properties whose identities are not just dispositional, but fixed. A power, as an essence, is a property whose identity is essential to it.

Necessitation

Necessitation is when, one thing being the case, some other thing must be the case. A necessary connection is a power in this sense. Equating powers with necessary connections is proposed by Armstrong [4].

It is not clear that even those anti-passivists who are most focused on defending the reality of necessary connections (in the name of defending the notion of a law) do believe in metaphysical necessitation.

What is meant by the term ‘metaphysical necessitation’? To answer this, we need to know

  1. whether one who affirms it believes
    1. that things of a given kind necessarily tend to behave in one way or another, or
    2. that they must behave in one way or another; and also
  2. whether or not one who affirms the existence of metaphysical necessitation holds that given behaviours necessarily bring about assigned outcomes.

Accepting the above would tend someone strongly towards metaphysical determinism and intuitively this would be a natural consequence of necessitation. However, it may be that a version of metaphysical necessity that commits only to the existence of necessary tendencies does not translate into a commitment to what may be called ‘causal necessitarianism,’ or even hard determinism.

Conclusion

It is conceivable that all the concepts described above have a role to play in understanding the role of powers in the world. However, from the posts in this blog on quantum physics, and in particular quantum chance, it is capacity that seems to be the best fit. This is because the potential to have a physical property is attached to a physical object, and possibilities are captured by the set of possible values that can be made actual. For example, an electron has the property of spin with the potential to take certain values. The possible values that can be actualised, and with whichever probabilities, depend on this potential and the context the particle finds itself in.

The concept of activity also has causal force but seems better suited to powers of designed mechanism or psychological or social situations. If a theory can be developed of what gives rise to the occurrence of an actual event in Quantum Mechanics, then activity may be a valid concept for the power in question.

Essence plays a role in quantum chance in that the power that governs the tendency for an object, such as an electron, to take particular values of spin is an essential property of an electron. That is, an electron would not be an electron if spin and quantum chance were not aspects of its state.

Necessary connections have a role to play in a physical theory even in the presence of objective chance. The evolution of the wavefunction governed by the Schrödinger equation is deterministic making the state of the object necessarily connected to its state at an earlier time.

However, it is proposed that potentiality and possibility are the key concepts, equivalent to capacity, which play the key role in the developing theory of quantum chance. The most complete treatment to date of powers as potentiality and possibility is by Barbara Vetter [5], whose classification of potentiality as a localised modality and possibility as a non-localised modality looks promising. We may then have the quantum state of the electron representing the chance potential to take certain spin values, while the possible spin values that can be actualised will depend on the non-localised situation that constrains the quantum state.

 

[1] Esfeld, M., & Deckert, D.-A. (2017). A Minimalist Ontology of the Natural World (1st ed.). Routledge.

[2] Cartwright, N. (2007). Hunting causes and using them: Approaches in philosophy and economics. Cambridge University Press.

[3] Bird, A. (2007). Nature’s metaphysics: Laws and properties. Oxford University Press.

[4] Armstrong, D. M. (2005). Four disputes about properties. Synthese, 144(3), 309-320.

[5] Vetter, B. (2015). Potentiality: From dispositions to modality. Oxford University Press.

 

Friday, 30 September 2022

Review: October 2022




This blog is intended to
  • Help organise and develop my thinking on quantum mechanics, the role of probability, and the ontological status of particles and states.
  • Examine and develop ontologies in other areas.
My guiding principle is that entities have properties and interactions that are independent of whatever anyone or anything knows about them. Experiments are for finding out about the physical world, not for instantiating it. That is, the physical world is there even when no one is looking. So far, the discussion has engaged with ontologies that deal with an autonomous physical world, independent of considerations of what is known about the system or who is interfering with it.

There are quantum theories that focus on what can be known about a physical system rather than on the system's behaviour per se. The original Copenhagen interpretation of quantum mechanics falls into this category, but a more recent approach is Quantum Bayesianism. Commonly shortened to QBism, it interprets the quantum state as capturing a degree of belief. There are questions about the ontological assumptions associated with the theory, with options ranging from a form of idealism to "participatory realism". I will return to this in future posts.

This blog has inherited a lot of material from a draft paper I prepared on ontology and quantum mechanics. The substance of that paper has now been captured as a set of blog posts. One reason for putting aside the draft paper was the recognition of the further reading on ontology that I needed to do. At the time of moving from the academic paper format to the blog, I had only started to delve into the New Ways of Ontology by Nicolai Hartmann [1]. What I gained from Hartmann was a structure of ontological strata and spheres of being. This structure is much richer than Karl Popper's three-world ontology [2]. An academic paper is appropriate once all the problems with concepts have been addressed, or at least clarified, whereas a blog is more flexible and informal, and that should help in developing my ideas.

Critical ontology

I have taken the title of the blog from an earlier paper by Hartmann: Wie ist kritische Ontologie überhaupt möglich? ("How is critical ontology even possible?"). Critical ontology implies, in this blog, a constructive but forensic attitude to concepts and theories in science and philosophy, whether I am initially sympathetic to them or not. This is an attitude that I think is consistent with both Hartmann's and Popper's approach to philosophy.

Quantum chance

A major challenge, and focus so far, is how to include objective probabilities in an ontology for quantum mechanics. Popper's proposal for a propensity interpretation introduced a dispositional model for objective chance but its ontological status remains ambiguous with its presentation strongly dependent on artificial experimental arrangements rather than the situations in which physical entities mostly find themselves. Popper's intent seems to be to reduce quantum indeterminacy to classical probability with a dispositional interpretation. I do not think that can be directly achieved. 
In previous posts not enough emphasis was given to explaining the special status of quantum chance. The use of the letter \(\psi\) to represent the quantum state should not be taken to imply that quantum mechanics has been reduced to classical probability. There are major differences, such as interference terms, as shown in the mathematical presentation.
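A minimal numerical illustration of such an interference term (the amplitude values are arbitrary, chosen only for the example): adding the probabilities of two paths differs from squaring the modulus of the sum of their amplitudes.

```python
import cmath

# Complex amplitudes for two alternative paths (illustrative values).
a1 = 0.6 * cmath.exp(0j)
a2 = 0.8 * cmath.exp(1j * cmath.pi / 3)

p_classical = abs(a1) ** 2 + abs(a2) ** 2   # probabilities add: 0.36 + 0.64
p_quantum = abs(a1 + a2) ** 2               # amplitudes add first, then square

interference = p_quantum - p_classical      # cross term 2 Re(a1 * conj(a2))
```

The nonzero cross term is what classical probability has no counterpart for; setting the relative phase to \(\pi/2\) would make it vanish even though both paths remain open.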

Here I want to explain another major difference. Classical probability theory has its origins in the analysis of games of chance and statistics was initially developed to deal with the vast quantities of data associated with entities of interest to the state.  In games of chance the numbers on dice or patterns of playing cards are actual but hidden. Similarly in the use of statistics by the state, members of the population are actual, and while not hidden, the state needs to work with averages and distributions. In quantum chance the probabilities are not merely a means of dealing with hidden or irrelevant variables. It is known from the work of Kochen and Specker [3] that in general the quantum variables are not actual. The electron does not have an actual spin value that is unknown because in general its spin value is only a potential value. This means that the dispositional powers that lead to quantum chance are fundamental.

A consideration that requires further work is the mechanism for property values to become actual that is not tied to a measurement. There are proposals for spontaneous wavefunction reduction but if they only decrease the variance (or some other measure of the spread) of the wavefunction this only reduces the number of possibilities without selecting one to be actualised. This seems to me to indicate a major open problem.

These considerations indicate the need for a theory that recognises dispositions as fundamental properties of physical entities. The concepts developed by Alexander Bird [4] show promise, although it is still open to investigation how widely ranging dispositional properties are in nature.

Ontological status of mathematics and physical theories

The ontological status of mathematical representations in physical theories also needs further development. In Hartmann's ontological structure pure mathematics belongs to the ideal rather than the real sphere. Physical theories seem to fit better the objectivated mode belonging to the spirit stratum of the real sphere, but they apply structures with an origin in pure mathematics and provide a description of aspects of the inorganic stratum of the real sphere. This means it is necessary to understand to what extent mathematical entities belong to the ideal rather than the real sphere, and what the interaction is between the ideal sphere, the spirit stratum, and the inorganic stratum in the real sphere. Whereas Hartmann goes back to Aristotle to develop an ontology that includes ideal and real spheres of existence, an Aristotelian ontology that places mathematics in the real sphere is also a possibility. Franklin [5] has developed a version of this.

Next steps

My next task is to absorb the content of books like those by Franklin and Bird, and to capture what I learn in future posts. I hope that the earlier posts in this blog will then be developed, clarified, and improved. In addition, I will examine and discuss proposals for gravity-induced state reduction [6], event-oriented theories [7], and quantum Bayesian approaches.


[1] Hartmann, N., New Ways of Ontology, Taylor and Francis, 2017 (translated from Hartmann, N., Neue Wege der Ontologie, W Kohlhammer, 1949)

[2] Popper, K. R., Objective Knowledge, Oxford: Clarendon Press, 1972

[3] Kochen, S. & Specker, E. P., The Problem of Hidden Variables in Quantum Mechanics, J. Math. & Mech., 1967, 17, 59

[4] Bird, A., Nature’s Metaphysics, Oxford University Press, 2007

[5] Franklin, J., An Aristotelian Realist Philosophy of Mathematics, Palgrave Macmillan UK, 2014

[6] Penrose, R., Shadows of the Mind, Oxford University Press, 1994

[7] Fröhlich, J. & Pizzo, A., The Time-Evolution of States in Quantum Mechanics according to the ETH-Approach, Communications in Mathematical Physics, 2021


Monday, 12 September 2022

The double slit experiment

Having discussed the issues with quantum measurement in general, and shown that the standard interpretation of quantum mechanics is incomplete, Young's double slit experiment with electrons will now be discussed in two variants (for background see Chapter 1 of Hall [1]):
  • The standard configuration, to be described below, Figure (1).
  • A configuration with a pointer that acts behind the slits to point in the direction of the passing electron, Figure (2). 

Figure (1) The setup for the double slit experiment is shown. An electron source sends one particle at a time toward the screen with the slits. The slits are marked by \(\delta_1\) and \(\delta_2\) and a sample region \(\Delta\) is shown on the detector screen.

Standard configuration

The experiment, Figure (1), is as follows:
  • There is a source of electrons that move towards a screen with two slits.
  • The intensity of the beam is low and only one electron is moving towards the detector at any time.
  • The slits are marked $\delta_1$ and $\delta_2$.
  • \(\Delta\) is some arbitrary region on the electron detector.

Let $Y_1$ and $Y_2$ be the projection operators for position in the regions of the two slits $\delta_1$ and $\delta_2$. Then $Y_1 + Y_2$ is the projection of position for the union $\delta_1 \cup \delta_2.$ Let $X$ be the operator for position in a local region $\Delta$ on the detection screen. Assume the electron is constrained only to pass through the slits, without being constrained as to which. Under those conditions the conditional probability is given by the Law of Alternatives:

\[\begin{eqnarray}
p(X|Y_1 + Y_2) &=& p(X|Y_1)p(Y_1|Y_1 + Y_2) + p(X|Y_2)p(Y_2|Y_1 + Y_2)\nonumber \\
& &+ [\textbf{tr}(Y_1 \rho Y_2 X)+\textbf{tr}(Y_2 \rho Y_1 X)]/ \textbf{tr}(\rho (Y_1+ Y_2)).
\end{eqnarray} \, \, \, \, \, \, \, (1)\]

This can be written more compactly as
\[
p(X|Y_1 + Y_2) = p(X|Y_1)p(Y_1|Y_1 + Y_2) + p(X|Y_2)p(Y_2|Y_1 + Y_2) + p(X| Y_1+Y_2 )_I  \, \, \, \, (2)
\]
where \( p(X| Y_1+Y_2 )_I\) is the interference term. 

Note that if $X$ commutes with either $Y_1$ or $Y_2$, this interference term vanishes, because \(Y_i Y_j = 0\) for \( i \ne j \). This can be observed if the detector is right next to the two-slit screen, because \(X\) then coincides with either \(Y_1\) or \(Y_2\). If the detector is at a distance from the two-slit screen (e.g. \(X\) the projection for the region \(\Delta\)), then the state of the electron evolves unitarily via $\alpha_t$, so $\alpha_t(Y_i)=u_t Y_i u^{-1}_t$ no longer commutes with $X$, giving rise to the non-zero interference term.
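As a rough illustration, the behaviour of the interference term in equations (1) and (2) can be checked numerically. The sketch below uses an assumed toy 4-dimensional "position" space (basis states 0, 1 standing in for the region of \(\delta_1\), states 2, 3 for \(\delta_2\)) and an illustrative pure state; it is not the experiment's actual Hilbert space.

```python
# Toy check of the interference term in equations (1)-(2) on an assumed
# 4-dimensional "position" space: basis states 0,1 lie in slit region delta_1,
# states 2,3 in delta_2. All numbers are illustrative.

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def tr(A):
    return sum(A[i][i] for i in range(len(A)))

def proj(indices, n=4):
    return [[1.0 if i == j and i in indices else 0.0 for j in range(n)] for i in range(n)]

Y1, Y2 = proj({0, 1}), proj({2, 3})

# Pure state spread coherently over both slits: psi = (|0> + |2>)/sqrt(2)
psi = [2 ** -0.5, 0.0, 2 ** -0.5, 0.0]
rho = [[psi[i] * psi[j] for j in range(4)] for i in range(4)]

def interference(X):
    # [tr(Y1 rho Y2 X) + tr(Y2 rho Y1 X)] / tr(rho (Y1 + Y2))
    num = tr(mul(mul(mul(Y1, rho), Y2), X)) + tr(mul(mul(mul(Y2, rho), Y1), X))
    den = tr(mul(rho, [[Y1[i][j] + Y2[i][j] for j in range(4)] for i in range(4)]))
    return num / den

# Detector right at the slits: X coincides with Y1, and the term vanishes.
print(interference(Y1))     # 0.0

# A "distant detector" projection that no longer commutes with Y1 and Y2:
# it projects onto (|0> + |2>)/sqrt(2), mixing the two slit regions.
X_far = [[0.5, 0.0, 0.5, 0.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.5, 0.0, 0.5, 0.0],
         [0.0, 0.0, 0.0, 0.0]]
print(interference(X_far))  # approximately 0.5: a non-zero interference term
```

The point of the sketch is only that commutation with a slit projection kills the cross terms, while a projection mixing the slit regions does not.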
 
The standard explanation of the interference effect is that the state of the particle is, or acts as, a coherent pair of waves emanating from the slits, which exhibit constructive and destructive interference effects. This was, of course, the explanation for Young's original experiment with light. For individual quantum particles, however, there is the unexplained local event observed at the detection screen.  This is a problem for the theory. While the Born interpretation of the wavefunction provides a probability distribution for the particle position it requires the detection screen, operating outside what is described by the mathematical theory, to act as a sampling mechanism for that distribution.

The explanation given in this blog is that the two-slit screen functions as a preparation of the state for the particle, by which the state is conditioned, or reduced, to pass through the region $\delta_1\cup\delta_2 $. This reduction is not a position measurement, since $\delta_1\cup\delta_2$ is not a localised region (as it would be for a single-slit screen). Once the particle reaches a detection screen then, in interaction with the screen, it appears in a random local region $\Delta$ and its position takes a value. Just as in the standard Born interpretation of the wavefunction, it is not explained in the theory how the electron takes the value that the detector detects other than invoking random sampling of the possible values.

So, the interference pattern on the detector is built up over time as more electrons arrive and are sampled by the detector. 
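As an illustrative sketch of this build-up (with assumed slit separation, wavelength, and detector bins, not parameters from any actual experiment), detector positions can be sampled one at a time from the Born weights \(|\psi_1+\psi_2|^2\):

```python
import cmath
import math
import random
from collections import Counter

# Illustrative far-field two-slit sketch: slit separation d and wavelength lam
# are assumed toy parameters.
d, lam = 5.0, 1.0
xs = [i * 0.05 for i in range(-100, 101)]   # detector bin centres (arbitrary units)

def amplitude(x):
    # coherent sum of the two path amplitudes; small-angle phase
    # difference approximately 2*pi*d*x/lam
    phase = 2 * math.pi * d * x / lam
    return 1 + cmath.exp(1j * phase)        # psi_1 + psi_2, up to normalisation

weights = [abs(amplitude(x)) ** 2 for x in xs]   # Born weights (unnormalised)

# Electrons arrive one at a time; each is detected at a single random bin.
random.seed(1)
hits = Counter(random.choices(xs, weights=weights, k=10_000))

# Bins near an interference minimum receive essentially no hits, yet every
# electron lands somewhere: the fringes emerge only in the accumulated counts.
dark = min(xs, key=lambda x: abs(amplitude(x)) ** 2)
print("darkest bin hits:", hits[dark], " brightest bin hits:", max(hits.values()))
```

Each individual electron is a single local detection event; the fringe pattern exists only in the statistics of many such events, which is the point made above.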

The introduction of an interaction with a pointer

This section is adapted from Bricmont [2], Appendix 5.A and Maudlin [3].


The experiment, illustrated in Figure (2), is now as follows:
  • There is again a source of electrons that move towards a screen with two slits. 
  • The intensity of the beam is low and only one electron is moving towards the detector at any time.
  • The slits are marked $\delta_1$ and $\delta_2$.
  • \(\Delta\) is some arbitrary region on the electron detector.
  • A pointer \(P\) is introduced. It is a quantum object with three states: neutral, \(P_0\); pointing to slit \(1\), \(P_1\); and pointing to slit \(2\), \(P_2\). The interaction with the electron causes the pointer to move towards it.
Figure (2) The setup for the double slit experiment is as in Figure (1) but for the addition of a three state pointer that interacts with the electron as it passes through slit \(\delta_1\) or \(\delta_2\).

Again let $Y_1$ and $Y_2$ be the projection operators for position in the regions of the two slits $\delta_1$ and $\delta_2$. Then $Y_1 + Y_2$ is the projection of position for the union $\delta_1 \cup \delta_2.$ Let $X$ be the operator for position in a local region $\Delta$ on the detection screen. The operator representing the pointer has three eigenstates and therefore a three-dimensional Hilbert space \(\mathcal{H}_P\). Without the pointer the Hilbert space is \(\mathcal{H}_0\). The Hilbert space of the total system is \(\mathcal{H}_P \otimes \mathcal{H}_0\). The total system consists of a single electron and a pointer constrained by the screen with the two slits, and the detector.

The possible constituent states are: 
  • \(\phi_1\), the state of the pointer pointing towards slit \(\delta_1\)
  • \(\phi_2\), the state of the pointer pointing towards slit \(\delta_2\)
  • \(\phi_0\), the state of the pointer pointing in the neutral direction \(P_0\)
  • \(\psi_1\), the state of the electron passing through slit \(\delta_1\)
  • \(\psi_2\), the state of the electron passing through slit \(\delta_2\)
  • \(\Psi_0\), the initial state of the total system, with the pointer in the neutral position \(P_0\),
where \(\phi_0\), \(\phi_1\) and \(\phi_2\) are eigenstates and therefore orthogonal. This is not the case for \(\psi_1\) and \(\psi_2\). 

Assuming the pointer starts in its neutral state, the initial wave function is
\[\begin{eqnarray}
\Psi_0 &=& \phi_0 \otimes (\psi_1 +\psi_2) \nonumber\\
&=&\phi_0 \otimes \psi_1 + \phi_0 \otimes \psi_2
\end{eqnarray}\]
Two treatments of the situation will now be discussed. In the first, the electron carries its charge through either \(\delta_1\) or \(\delta_2\) and the pointer reacts; in the second, the charge is not constrained to pass through only one slit at a time. The first treatment would be consistent with the ontology of Bohmian mechanics or stochastic mechanics. The second would be consistent with the electron and its charge passing through both slits, or not physically existing at all at that point in the experiment. This is consistent with the ontology proposed by Bell [4] for the formulation of quantum mechanics proposed by GRW [5]. In that ontology a local event can occur only with extremely low probability in a run of the experiment. 

Treatment I: The pointer reacts to which slit the electron passes through

Here the situation is idealised to assume that the pointer reacts perfectly to the electron going through either slit 1 or slit 2. This is not a measurement because the reaction is neither registered nor signalled. At no point does anyone know which slit the electron has passed through.

Unitary time evolution in quantum mechanics is linear, therefore \(\Psi_0\) evolves to

\[
\Psi = \phi_1 \otimes \psi_1 + \phi_2 \otimes \psi_2.
\]
Inserting this for the state into equation (1), and using the notation for the interference term in equation (2), gives
\[
p(X| Y_1+Y_2 )_I=\frac{\textbf{tr}(Y_1 \rho_\Psi Y_2 X)+\textbf{tr} (Y_2 \rho_\Psi Y_1 X)}{\textbf{tr} (\rho_\Psi (Y_1+ Y_2 ))} 
\]
\[
p(X| Y_1+Y_2 )_I= \frac{\mathfrak{N}}{\mathfrak{D}},
\]
where
\[
\mathfrak{N} =(\phi_2 \otimes \psi_2 ,P \otimes X \phi_1 \otimes \psi_1) +(\phi_1 \otimes \psi_1, P \otimes X \phi_2 \otimes \psi_2 )
\]
\[
\mathfrak{D}=(\phi_1 \otimes \psi_1 , \phi_1 \otimes \psi_1)+(\phi_2 \otimes \psi_2, \phi_1 \otimes \psi_1)
\]
\[
+(\phi_1 \otimes \psi_1, \phi_2 \otimes \psi_2) +
(\phi_2 \otimes \psi_2, \phi_2 \otimes \psi_2)
\]
Using the fact that the pointer states are orthogonal,
\[
p(X| Y_1+Y_2 )_I= 0
\]

The quantum interference term disappears. \(p(X| Y_1+Y_2 )\) is just a combination of the pattern for each slit on its own. So, even though no measurement is registered, the presence of the pointer and its interaction with the electron is enough to eliminate the interference pattern. This is often explained (by Feynman [6], for example) as the electron being watched to determine which slit it passes through. But the pointer reacts to, rather than determines, the outcome. The interference pattern disappears due to what is known as entanglement, not measurement.
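The disappearance of the cross terms can be seen in a toy calculation. The vectors below are illustrative stand-ins: the pointer states are orthogonal, while the electron states deliberately overlap.

```python
# Pointer eigenstates phi_1, phi_2 are orthogonal 3-vectors; electron states
# psi_1, psi_2 deliberately overlap. All vectors are illustrative assumptions.
def kron(u, v):
    return [a * b for a in u for b in v]

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

phi1, phi2 = [0, 1, 0], [0, 0, 1]      # orthogonal pointer states P_1, P_2
psi1, psi2 = [1, 0], [0.6, 0.8]        # non-orthogonal electron states

branch1, branch2 = kron(phi1, psi1), kron(phi2, psi2)

# Without the pointer, the overlap <psi_1|psi_2> drives the interference term:
print(inner(psi1, psi2))       # 0.6, non-zero

# With the entangled pointer, the cross term factorises as
# <phi_1|phi_2><psi_1|psi_2> = 0 * 0.6 = 0:
print(inner(branch1, branch2)) # 0.0
```

The cross term is killed purely by the orthogonality of the pointer factor, with no registration or observation anywhere in the calculation.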

Treatment II: The pointer does not react to which slit the electron passes through

In this treatment the assumption that the total charge is carried through only one of the two slits is not made, or, if it is, the pointer cannot unambiguously react to it. This leads to a more general linear combination of the possibilities. In general, \(\Psi_0\) evolves to
\[\Psi = \sum_{i \in \{0,1,2\}} a_i \phi_i \otimes \psi_1 + \sum_{i \in \{0,1,2\}}b_i \phi_i \otimes \psi_2.
\]
where, by the symmetry of the arrangement,
\[
a_0=b_0, \quad a_1 = b_2, \quad a_2=b_1.
\]
The pattern to be observed on the detection screen in this treatment would now be 
\[
(\Psi, P \otimes X \Psi) = (\sum_{i \in \{0,1,2\}} a_i \phi_i \otimes \psi_1 + \sum_{i \in \{0,1,2\}}b_i \phi_i \otimes \psi_2, \]

\[
 P \otimes X [\sum_{i \in \{0,1,2\}} a_i \phi_i \otimes \psi_1 + \sum_{i \in \{0,1,2\}}b_i \phi_i \otimes \psi_2]).
\]
Using the orthogonality of the pointer states,
\[
(\Psi, P \otimes X \Psi) =\sum_{i \in \{0,1,2\}}|a_i|^2 (\psi_1, X \psi_1) + \sum_{i \in \{0,1,2\}}|b_i|^2 (\psi_2, X \psi_2)+\]
\[\sum_{i \in \{0,1,2\}} a^*_i b_i (\psi_1, X \psi_2) + \sum_{i \in \{0,1,2\}}b^*_i a_i (\psi_2, X \psi_1)
\]
where superscript \(*\) denotes the complex conjugate.
Using \(C= \sum_{i \in \{0,1,2\}}|a_i|^2\) \((= \sum_{i}|b_i|^2)\), this simplifies to 
\[
(\Psi, P \otimes X \Psi) = C( (\psi_1, X \psi_1) + (\psi_2, X \psi_2))+
 \mathfrak{Re}\{2 a^*_1 a_2 (\psi_2, X \psi_1)\}.
\]

So, the interference pattern \(\mathfrak{Re}\{2 a^*_1 a_2 (\psi_2, X \psi_1)\}\) persists. This behaviour is consistent with a physical situation where no charged particle exists in the region of the slits, as in the Bell ontology for the GRW collapse theory.
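By contrast with Treatment I, if the pointer remains in its neutral state for both slits, the branches stay non-orthogonal and the cross term survives. A toy sketch with illustrative vectors:

```python
# If the pointer fails to react, both branches carry the same neutral pointer
# state and the electron overlap survives. All vectors are illustrative.
def kron(u, v):
    return [a * b for a in u for b in v]

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

phi0 = [1, 0, 0]                   # pointer stays in the neutral state P_0
psi1, psi2 = [1, 0], [0.6, 0.8]    # illustrative overlapping electron states

branch1, branch2 = kron(phi0, psi1), kron(phi0, psi2)
print(inner(branch1, branch2))     # 0.6: the cross term, and the fringes, survive
```

The contrast with the orthogonal-pointer case is the whole content of the two treatments: interference lives or dies with the overlap of the pointer factors.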

Experimental tests and ontological comparisons

The setup with the pointer, as described above, is an idealisation. The pointer is a quantum object that reacts reliably to a passing charged particle, but with no registration of the direction pointed. If there is no passing charged particle, then there is nothing to react to. 

It is conceivable that the pointer could be realised by a molecule with an appropriate electrical dipole moment that can be fixed in position immediately behind the screen, between the two slits, but free to rotate. Maudlin [3] discusses the setup with a reacting proton trapped between the slits. Any practical experiment would implement the pointer in a way that would inevitably deviate from the ideal. This could lead to a situation where the interference pattern is weakened but not destroyed.

If quantum theories are constructed to be empirically equivalent but with distinctly different ontological models, then a discussion of how credible these ontological models are within different scenarios can provide a valid critical comparison. The result in Treatment I is consistent with an ontology in which the electron carries its charge on one continuous trajectory, such as in Bohmian mechanics or Nelson's stochastic mechanics. That is, each electron exists in the region of only one of the slits. Then the presence of a pointer reacting to the charge but not measuring it would be sufficient to destroy the interference pattern. This would give support to

  • A Bohm or Nelson type theory in which the electron follows a continuous trajectory through the experimental setup. The trajectory is deterministic in the case of Bohm but stochastic in the case of Nelson.
  • A quantum chance theory, in which the local appearance of the electron is a dispositional property that takes a value locally in the region of only one of the slits due to the interaction of the electron with the pointer. However, the theory does not as it stands describe how this appearance is made actual; it would be an assumption that the pointer acts to sample the distribution.
By contrast, if the pointer shows no reaction, as in Treatment II, then that would undermine the explanatory force of the Bohm and Nelson ontologies and indicate a quantum world in which one or more of the following is the case:
  • A registering measurement is needed to destroy the interference pattern. This could be called the Copenhagen point of view.
  • The charge is spread across possible positions (although it would have to be split equally across the two slits to give no pointer reaction).
  • The charged particle may not actually exist in the region of the pointer. Although the Bell ontology for GRW could be thought of as a mechanism for locally actualising the charge, the mechanism that they propose does not occur frequently enough to produce the effect in this experiment.
  • The proposal for a theory of quantum chance, in which the dispositional property of the electron to appear at a locality does not entail its actual appearance due to the interaction with the pointer.
Treatments I and II each assume a behaviour for the pointer. There is a full mathematical formalism that would, in principle, determine whether the theory predicts that the interference pattern persists or disappears once the electron–pointer interaction is included in the Hamiltonian of the system shown in Figure (2). 

The theory put forward in this blog is open both to the outcome in which the pointer reacts and to that in which it does not. This is because it provides no quasi-classical insight into what the pointer or the electron may do. The quantum chance theory provides transition probabilities that must be calculated from first principles. They correspond to dispositional powers that do not appear as such in any Field of Sense. This is because the dispositional properties, although existing in the real sphere, only produce an effect when an interaction and context afford a Field of Sense, and the form of the appearance depends on the details of the interaction. What is clear is that there is no local appearance of the electron to which the pointer can react. The physical details of the pointer's interaction with the quantum state may give rise to an actual local appearance of the electron, but it may not. A treatment using the total system Hamiltonian will have no mechanism to break the symmetry between the two slits and so cannot be expected to eliminate the interference pattern.

[1] Hall, B. C., Quantum Theory for Mathematicians, Springer, 2013

[2] Bricmont, J., Quantum Sense and Nonsense, Springer Nature, 2017

[3] Maudlin, T., Philosophy of Physics: Quantum Theory, Princeton University Press, 2019

[4] Bell, J. S., Are there quantum jumps?, Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy, Cambridge University Press, 2004, 201-212

[5] Ghirardi, G. C., Rimini, A. & Weber, T., Unified dynamics for microscopic and macroscopic systems, Phys. Rev. D, American Physical Society, 1986, 34, 470-491

[6] Feynman, R., The Feynman Lectures on Physics, Volume III: Quantum Mechanics, California Institute of Technology, 2013






Saturday, 10 September 2022

The so-called measurement problem in quantum mechanics

In the following posts specific experimental situations will be discussed. To prepare for these, it is appropriate to start with a general discussion of measurement in quantum mechanics.

Adapted from Kochen [1].                        

The Measurement Problem refers to a postulate of standard quantum mechanics, which assumes that an isolated system undergoes unitary evolution via Schrödinger's equation and that, in a measurement, an eigenvalue of the operator representing the observable being measured (an observable is a property of the system that the experimental setup is designed to measure) is randomly selected as the result, as presented by Bohm [2], for example. However, if a property $\hat{A}$ of a system $S$ is measured by an apparatus $T$, then the total system $S+T$, if assumed to be isolated, undergoes unitary evolution. The random selection of an eigenvalue is an additional mechanism.

The mathematical formulation of an ideal measurement, in standard quantum mechanics, is as follows for system \(S\) in a pure state \(\phi_k\):

  • Take the spectral decomposition of an operator representing an observable to be $A =\sum_i a_i \pi_{i}$.
    • Each $\pi_{i}$ is a one-dimensional projection with eigenstate $\phi_{i}$ and \(\{a_i\}_i\) is the set of eigenvalues. 
  • The apparatus $T$  is assumed to be sensitive to the different eigenstates of $A$. 
    • Hence, if the initial state of $S$ is $\phi_{k}$ and the apparatus $T$ is in a neutral state $\psi_0$, so that the state of $S+T$ is $\phi_{k}\otimes \psi_0$
  • The system evolves into the state $\phi_{k}\otimes \psi_k$, where the $\{\psi_i\}_i$ are the states of the apparatus operator corresponding to the states $\{\phi_{i}\}_i$ of the system, \(S\). 
  • \(T\) and its interaction with \(S\) will have been chosen to achieve this: 
    • a perfectly designed measurement apparatus is in \(\psi_l\) whenever \(S\) is in \(\phi_{l}\), for all \(l\). 

This all looks reasonable, and the key assumption is that the measuring apparatus does what it is supposed to. But now, for the case of a more general initial state, $\phi=\sum_i c_i \phi_{i}$:

  • By linearity, if $S$ is in the initial state $\phi=\sum_i c_i \phi_{i}$, then 
  • $S+T$ evolves into the state $\Gamma=\sum_i c_i \phi_{i} \otimes\psi_i$.  

A problem with this for standard quantum mechanics is that the completed measurement gives a particular apparatus state $\psi_k$, say,  indicating that the state of $S$ is $\phi_{k}$, so that the state of the total system is $\phi_{k}\otimes \psi_k$, in contradiction to the derived evolved state  $\sum_i c_i \phi_{i} \otimes \psi_i$. This evolution does not describe what happens in an experiment.
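One way to see the contradiction is that \(\Gamma\) is entangled whenever more than one \(c_i\) is non-zero, whereas a completed measurement yields a product state. A minimal sketch, with illustrative amplitudes:

```python
# Coefficient matrix of Gamma in the phi_i (x) psi_j basis is diag(c_0, c_1);
# a product state would have a rank-1 (zero-determinant) coefficient matrix.
# The amplitudes are illustrative assumptions.
c = [0.6, 0.8]                          # |c_0|^2 + |c_1|^2 = 1
M = [[c[0], 0.0], [0.0, c[1]]]          # Gamma = sum_i c_i phi_i (x) psi_i
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(det)   # approximately 0.48: non-zero, so Gamma is entangled,
             # not any single product phi_k (x) psi_k
```

Since the determinant is non-zero, no choice of $k$ makes $\Gamma$ equal to $\phi_k \otimes \psi_k$, which is the contradiction stated above.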

In contrast, the reduction can also be considered from the viewpoint of the conditioning of states. If the state of $S+T$ just prior to measurement is $\rho_\Gamma$, corresponding to $\Gamma=\sum_i c_i \phi_{i} \otimes \psi_i$, then after the measurement it is in the conditioned state given by equation (**) in the post Quantum chance:

\[\begin{eqnarray}
p(\cdot |(\pi_{\phi_k} \otimes I)( I \otimes \pi_{\psi_k}))&\nonumber\\
=&\frac{(\pi_{\phi_k} \otimes I)( I \otimes \pi_{\psi_k}) \rho_\Gamma (\pi_{\phi_k} \otimes I)( I \otimes \pi_{\psi_k})}{ \textbf{tr}((\pi_{\phi_k} \otimes I)( I \otimes \pi_{\psi_k}) \rho_\Gamma )}\nonumber\\
=&\frac{(\pi_{\phi_k} \otimes I)( I \otimes \pi_{\psi_k})\, \pi_{\Gamma}\, (\pi_{\phi_k} \otimes I)( I \otimes \pi_{\psi_k})}{ \textbf{tr}((\pi_{\phi_k} \otimes I)( I \otimes \pi_{\psi_k})\, \pi_{\Gamma} )}\nonumber\\
=&\pi_{\phi_k \otimes \psi_k}, \nonumber
\end{eqnarray}\]
where \(\pi_\Gamma\) denotes the projection onto \(\Gamma=\sum_i c_i \phi_{i} \otimes \psi_i\), so that \(\rho_\Gamma = \pi_\Gamma\).

Hence, the new conditioned state of $S+T$ is the reduced state $\phi_{k} \otimes \psi_k$. This is not surprising, as the state is conditioned on being in just the \(k\)th component of \(\rho_\Gamma\), and so the conditioning projects that component out. This is not a resolution of the measurement problem; it merely uses the probabilistic formulation of the theory to show that conditioning forces the state reduction.
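A small numeric sketch of this conditioning step (the amplitudes below are illustrative, not taken from the post):

```python
# Conditioning Gamma = sum_i c_i phi_i (x) psi_i on the k-th outcome projects
# out the branch phi_k (x) psi_k, with Born weight |c_k|^2.
def kron(u, v):
    return [a * b for a in u for b in v]

c = [0.6, 0.8j]                    # illustrative amplitudes, |c_0|^2 + |c_1|^2 = 1
phi = [[1, 0], [0, 1]]             # eigenstates of S
psi = [[1, 0], [0, 1]]             # matching pointer states of T

branches = [kron(phi[i], psi[i]) for i in range(2)]
Gamma = [sum(c[i] * branches[i][j] for i in range(2)) for j in range(4)]

k = 1
projected = [Gamma[j] if branches[k][j] else 0 for j in range(4)]  # apply pi_{phi_k (x) psi_k}
prob = sum(abs(z) ** 2 for z in projected)                         # equals |c_k|^2 = 0.64
conditioned = [z / prob ** 0.5 for z in projected]                 # renormalised reduced state

print(prob)                            # approximately 0.64
print([abs(z) for z in conditioned])   # phi_k (x) psi_k, up to the phase of c_k
```

As in the display above, projecting and renormalising leaves exactly the $k$th product state, with the outcome weight supplied by $|c_k|^2$.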

 Whereas standard quantum mechanics must add a means to reconcile the unitary evolution of $S+T$ with the measured reduced states of $S$ and $T$, the interpretation argued for in this blog takes the opposite approach to the orthodox interpretation. The point of departure is not the unitary development of an isolated system, but rather the result of an interaction. It is the conditions under which unitary dynamical evolution occurs that must be further investigated, rather than an additional reduced-state mechanism. Therefore, it should not be taken for granted, as assumed in standard quantum mechanics, that an isolated system evolves unitarily. The question to be addressed is whether in a measurement the $\sigma$-complex structure of $S+T$ undergoes a symmetry transformation at separate times of the process. This is formalised as the condition for the existence of a representation $\alpha:\mathbb{R}\to {\mathop{\rm Aut}\nolimits} (Q)$. The outcome of a measurement cannot be given by a unitary process.

A completed measurement or a state preparation has two distinct elements of $Q(\mathcal{H}) (=Q(\mathcal{H}_S \otimes \mathcal{H}_T))$ at initial time 0 which end up being mapped to the same element at a later time $t$. One such element is an initial state \(\phi \otimes \psi_0\), which results in a state \(\phi_k \otimes \psi_k\), for some $k$. However, a second such element, \(\phi_k \otimes \psi_0\), also results in the state \(\phi_k \otimes \psi_k\). If the state $\phi$ is chosen to be distinct from $\phi_{k}$, then the two elements \(\pi_{\phi \otimes\psi_0}\) and \(\pi_{\phi_k \otimes\psi_0}\) of $Q(\mathcal{H})$ both map to the same element \(\pi_{\phi_k \otimes\psi_k}\). However, any automorphism $\alpha_t$ is a one-to-one map on $Q$, so the measurement process cannot be described by a representation $\alpha:\mathbb{R}\to {\mathop{\rm Aut}\nolimits} (Q)$, and hence a unitary evolution cannot explain what is observed.
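The same point can be put in terms of inner products: a unitary map preserves them, so two initial states with overlap less than 1 cannot both evolve to the same final state. A sketch with illustrative vectors:

```python
# A unitary preserves inner products, so two initial states with overlap < 1
# cannot both be mapped to the same final state (which would have overlap 1).
# All vectors are illustrative assumptions.
def kron(u, v):
    return [a * b for a in u for b in v]

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

phi_k = [1.0, 0.0]
phi   = [0.6, 0.8]        # distinct from phi_k
psi0  = [1.0, 0.0]        # neutral apparatus state

s1 = kron(phi, psi0)      # initial state phi (x) psi_0
s2 = kron(phi_k, psi0)    # initial state phi_k (x) psi_0
print(abs(inner(s1, s2))) # 0.6 < 1

# If both evolved to phi_k (x) psi_k their final overlap would be 1, but any
# unitary evolution must keep it at 0.6 -- so measurement cannot be unitary.
```

This is the inner-product form of the injectivity argument: the two initial elements cannot be merged by any automorphism $\alpha_t$.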

The Measurement Problem must be resolved by a theory that includes state reduction in its dynamics in addition to periods of unitary evolution. The GRW theory provides an example of a partial mechanism for this: partial, because it only reduces the wavefunction to one that is more localised, rather than effecting the full transition from possibility to actuality. Bohmian mechanics avoids the problem by proposing a particle trajectory dynamics that requires no more state reduction than classical probability does; in Bohmian mechanics the particle always has an actual position.

For a composite system it should not only be outside forces that can break symmetry, but also internal interactions. In the state \(\Gamma=\sum_i c_i \phi_i \otimes \psi_i\) introduced above, the total, but still isolated, system \(S + T\) has a set of property values indexed by \(i\), associated with the states \(\{\phi_i \otimes \psi_i\}_i \). However, for the interacting object \(T\), as part of the system \(S + T\), the state \(\phi_{k}\) of \(S\) will appear with probability \(| c_k |^2 \). This provides a matrix mechanics interpretation of reduction as a physical transition probability for the system \(S\) in the presence of the apparatus \(T\). State reduction does take place in isolated compound systems with internal interactions; the reduction of the state is due to the combined system's properties, but traceable to the dispositional power of \(S\) to take specific property values.

In an experiment the results are recorded at the time of the experiment. This experimental recording is not part of the formal theory. The theory provides transition probabilities but nothing to time the transition.

[1] Kochen, S., A Reconstruction of Quantum Mechanics, ArXiv e-prints, 2015

[2] Bohm, Arno, Quantum Mechanics, Springer, 2001
