Author Topic: Michael Ruse, the Pope, and naivety about Randomness.
Posts: 65
Post Michael Ruse, the Pope, and naivety about Randomness.
on: June 6, 2011, 16:43

Michael Ruse, a philosopher of science focusing on biology, has been one of the best and most prolific writers on the topic. He also, more than anybody else, has contributed to an understanding of evolution as a social movement, not just a science. So, he should know his stuff.

But here is what he says in a recent blog post on the Huffington Post:

Gould ... was saying that there is no design. Human evolution had no more forethought than, say, the pattern that a pile of sand makes when emptied from a bucket. And while Gould was a bit of a maverick in some ways, there is no modern evolutionary biologist who would disagree with him on this. Evolution depends on mutations that simply don't have direction.

And again:

To put direction into evolution is to be a supporter of the non-scientific theory of Intelligent Design.

This is in response to what the Pope recently said:

If man were merely a random product of evolution in some place on the margins of the universe, then his life would make no sense or might even be a chance of nature. But no, reason is there at the beginning: creative, divine reason.

Now, Michael Ruse's comment is not only bad theology - after all, the possibility of humanity was always there in the laws of nature underlying the universe we live in - but bad science as well.

Think about it. He is saying that there is no direction to evolution - and his reasoning is that it is because there is no direction to random mutations. That is a bit like saying that the big bang couldn't have occurred because quantum fluctuations are random. Yes, quantum fluctuations are random, but that is immaterial to the issue. The point is that every once in a while a big one comes along, creating a universe.

Why does evolution have direction? The proof, obvious to any physicist, is simplicity itself. Evolution involves creation of structures of increasing complexity, each built up from previously built structures of lower complexity. So, first, cells have to come into being. Only after self-sustaining and self-reproducing cells exist is it possible for stable self-reproducing multi-cellular organisms to come into being, and so on and so on.

If, as Ruse thinks, the process didn't have direction, it wouldn't work this way. There could be no self-replicating cells, because random processes would drive them out of existence. But, of course, once cells evolve, they are self-replicating and stable structures, and they are the building blocks for the next layer of biological complexity. So, obviously, evolution has direction. Ultimately, the proof is that we - and other biological systems - exist with a marvelous stability.
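The ratchet argument above - random, undirected mutations going in, stable structures being kept - can be sketched in a few lines of Python. This is a toy illustration under made-up assumptions (bit-string genomes, "stability" scored simply as the count of 1-bits), not anyone's actual model of evolution; the point is only that a random driver plus non-random retention yields a directional outcome:

```python
import random

def evolve(pop_size=50, genome_len=40, generations=200, seed=1):
    """Toy model: undirected mutation plus selection for 'stability'.
    Fitness is the count of 1-bits, a made-up stand-in for how stable
    or self-sustaining a structure is."""
    rng = random.Random(seed)
    pop = [[0] * genome_len for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        # Random, undirected mutation: flip one random bit per offspring.
        offspring = []
        for g in pop:
            child = g[:]
            i = rng.randrange(genome_len)
            child[i] ^= 1
            offspring.append(child)
        # Non-random retention: keep the most 'stable' half of the pool.
        pool = pop + offspring
        pool.sort(key=sum, reverse=True)
        pop = pool[:pop_size]
        history.append(sum(map(sum, pop)) / pop_size)
    return history

hist = evolve()
print(hist[0], hist[-1])  # mean 'stability' at start vs. end
```

Because the parents stay in the selection pool, mean fitness can never decrease: the randomness injects variation, while selection supplies the arrow.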

Jedi Master
Posts: 92
Post Re: Michael Ruse, the Pope, and naivety about Randomness.
on: June 6, 2011, 18:59

As I understand it, what appears as randomness locally may not be random globally. There are degrees of freedom and there are constraints, and in between there are choices made about how to relieve the stress between the two. With respect to certain desired outcomes, some possibilities are more globally optimal than others, and the more reflective local representations are of this globally optimal state of affairs, the more coherent the course of evolution will be, as it will be able to anticipate undesirable outcomes and respond accordingly. Perhaps in the game of life the goal is "win-win" or superrational (mutually self-enforcing, as opposed to the selfish form of "rational" strategies) - basically, to be like a garden with harmony on all levels.

"Composition is of three kinds.
1. Accidental composition.
2. Involuntary composition.
3. Voluntary composition.
There is no fourth kind of composition. Composition is restricted to these three categories."

"As difference in degree of capacity exists among human souls, as difference in capability is found, therefore, individualities will differ one from another. But in reality this is a reason for unity and not for discord and enmity. If the flowers of a garden were all of one color, the effect would be monotonous to the eye; but if the colors are variegated, it is most pleasing and wonderful. The difference in adornment of color and capacity of reflection among the flowers gives the garden its beauty and charm. Therefore, although we are of different individualities, . . . let us strive like flowers of the same divine garden to live together in harmony. Even though each soul has its own individual perfume and color, all are reflecting the same light, all contributing fragrance to the same breeze which blows through the garden, all continuing to grow in complete harmony and accord." -`Abdu'l-Bahá

"Please God, that we avoid the land of denial, and advance into the ocean of acceptance, so that we may perceive, with an eye purged from all conflicting elements, the worlds of unity and diversity, of variation and oneness, of limitation and detachment, and wing our flight unto the highest and innermost sanctuary of the inner meaning of the Word of God." (The Báb)

"This universe is not created through the fortuitous concurrences of atoms; it is created by a great law which decrees that the tree bring forth certain definite fruit."

‎"As to thy question whether the physical world is subject to any limitations, know thou that the comprehension of this matter dependeth upon the observer himself. In one sense, it is limited; in another, it is exalted beyond all limitations. The one true God hath everlastingly existed, and will everlastingly continue to exist. His creation, likewise, hath had no beginning, and will have no end. All that is created, however, is preceded by a cause. This fact, in itself, establisheth, beyond the shadow of a doubt, the unity of the Creator."
(Bahá'u'lláh, Gleanings from the Writings of Bahá'u'lláh, LXXXII, p. 162-163)

"Another big theme has to do with randomness. If one looks at current biology, there are occasional uses of calculus. There's quite a bit of use of the idea of digital information. But there's a lot of use of statistics.

There are a lot of times in biology where one says "there's a certain probability for this or that happening. We're going to make our theory based on those probabilities."

Now, in a sense, whenever you put a probability into your model you're admitting that your model is somehow incomplete. You're saying: I can explain these features of my system, but these parts--well they come from somewhere else, and I'm just going to say there's a certain probability for them to be this way or that. One just models that part of the system by saying it's "random", and one doesn't know what it's going to do.

Well, so we can ask everywhere, in biology, and in physics, where the randomness really comes from in the things we think of as random. And there are really three basic mechanisms. The first is the one that happens with, say, a boat bobbing on an ocean, or with Brownian motion. There's no randomness in the thing one's actually looking at: the boat or the pollen grain. The randomness is coming from the environment, from all those details of a storm that happened on the ocean a thousand miles away, and so on. So that's the first mechanism: that the randomness comes because the system one's looking at is continually being kicked by some kind of randomness from the outside.

Well, there's another mechanism, that's become famous through chaos theory. It's the idea that instead of there being randomness continually injected into a system, there's just randomness at the beginning. And all the randomness that one sees is just a consequence of details of the initial conditions for the system.

Like in tossing a coin.

Where once the coin is tossed there's no randomness in which way it'll end up. But which way it ends up depends in detail on the precise speed it had at the beginning. So if it was started say by hand, one won't be able to control that precise speed, and the final outcome will seem random.

There's randomness because there's sort of an instability. A small perturbation in the initial conditions can lead to continuing long-term consequences for the outcome. And that phenomenon is quite common. Like here it is even in the rule 30 cellular automaton.
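That instability is easy to demonstrate numerically. The logistic map at r = 4 is a standard chaotic example (my choice of illustration, not Wolfram's coin or rule 30): two starting values differing in the tenth decimal place soon follow completely different orbits.

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)  # perturb the 10th decimal place
# Early on the orbits are indistinguishable; later they are unrelated.
for t in (0, 10, 30, 50):
    print(t, abs(a[t] - b[t]))
```

The tiny initial difference roughly doubles each step, so within a few dozen iterations the "random-looking" output is just a transcription of the randomness that went into the initial condition.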

But it can never be a complete explanation for randomness in a system. Because really what it's doing is just saying that the randomness that comes out is some kind of transcription of randomness that went in, in the details of the initial conditions. But, so, can there be any other explanation for randomness? Well, yes there can be. Just look at our friend rule 30.

Here there's no randomness going in. There's just that one black cell. Yet the behavior that comes out looks in many respects random. In fact, say the center column of the pattern is really high quality randomness: the best pseudorandom generator, even good for cryptography.

Yet none of that randomness came from outside the system. It was intrinsically generated inside the system itself. And this is a new and different phenomenon that I think is actually at the core of a lot of systems that seem random."
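Rule 30 is simple enough to reproduce directly. The sketch below runs it from a single black cell and collects the centre-column bits the passage refers to; the update rule (new cell = left XOR (centre OR right)) is the standard definition of rule 30.

```python
def rule30_center_bits(steps=64):
    """Run the rule 30 cellular automaton from a single black cell
    and return the centre-column bits - Wolfram's example of
    intrinsically generated randomness."""
    width = 2 * steps + 1          # wide enough that edges never matter
    row = [0] * width
    row[steps] = 1                 # one black cell in the middle
    bits = [row[steps]]
    for _ in range(steps - 1):
        new = [0] * width
        for i in range(width):
            left = row[i - 1] if i > 0 else 0
            right = row[i + 1] if i < width - 1 else 0
            # Rule 30: new cell = left XOR (centre OR right)
            new[i] = left ^ (row[i] | right)
        row = new
        bits.append(row[steps])
    return bits

bits = rule30_center_bits()
print("".join(map(str, bits)))
```

The input is a single bit with no noise anywhere, yet the centre column (1, 1, 0, 1, 1, 1, 0, ...) passes standard statistical randomness tests.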

Speaking of Cellular Automata and Complexity Theory...

Cellular automata (CA) are a class of spatially and temporally discrete, deterministic mathematical systems characterized by local interaction and an inherently parallel form of evolution. First introduced by von Neumann in the early 1950s to act as simple models of biological self-reproduction, CA are prototypical models for complex systems and processes consisting of a large number of identical, simple, locally interacting components. The study of these systems has generated great interest over the years because of their ability to generate a rich spectrum of very complex patterns of behavior out of sets of relatively simple underlying rules. Moreover, they appear to capture many essential features of complex self-organizing cooperative behavior observed in real systems. Although much of the theoretical work with CA has been confined to mathematics and computer science, there have been numerous applications to physics, biology, chemistry, biochemistry, and geology, among other disciplines. Some specific examples of phenomena that have been modeled by CA include fluid and chemical turbulence, plant growth and the dendritic growth of crystals, ecological theory, DNA evolution, the propagation of infectious diseases, urban social dynamics, forest fires, and patterns of electrical activity in neural networks. CA have also been used as discrete versions of partial differential equations in one or more spatial variables.
A cellular game is a dynamical system in which sites of a discrete lattice play a "game" with neighboring sites. Strategies may be deterministic or stochastic. Success is usually judged according to a universal and fixed criterion. Successful strategies persist and spread throughout the lattice; unsuccessful strategies disappear.
Complexity: An extremely difficult "I know it when I see it" concept to define, largely because it requires a quantification of what is more of a qualitative measure. Intuitively, complexity is usually greatest in systems whose components are arranged in some intricate difficult-to-understand pattern or, in the case of a dynamical system, when the outcome of some process is difficult to predict from its initial state. It is lowest precisely when a system is either highly regular, with many redundant and/or repeating patterns, or completely disordered. While over 30 measures of complexity have been proposed in the research literature, they all fall into two general classes: (1) Static Complexity - which addresses the question of how an object or system is put together (i.e. only purely structural informational aspects of an object), and is independent of the processes by which information is encoded and decoded; (2) Dynamic Complexity - which addresses the question of how much dynamical or computational effort is required to describe the information content of an object or state of a system. Note that while a system's static complexity certainly influences its dynamical complexity, the two measures are not equivalent. A system may be structurally rather simple (i.e. have a low static complexity), but have a complex dynamical behavior.
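A minimal "cellular game" in the sense just defined can be sketched as follows: each site on a ring plays a Prisoner's Dilemma with its two neighbours, and successful strategies persist and spread by imitation of the best-scoring neighbour. The payoff values and the imitation rule are illustrative choices of mine, not from the quoted text.

```python
import random

# Prisoner's Dilemma payoffs for (my_move, their_move):
# the usual textbook values, chosen here for illustration.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def step(strategies):
    """One round of a 1-D cellular game on a ring: every site plays
    the PD with its two neighbours, then adopts the strategy of its
    best-scoring neighbour (itself included)."""
    n = len(strategies)
    scores = [PAYOFF[(strategies[i], strategies[(i - 1) % n])] +
              PAYOFF[(strategies[i], strategies[(i + 1) % n])]
              for i in range(n)]
    new = []
    for i in range(n):
        nbrs = [(i - 1) % n, i, (i + 1) % n]
        best = max(nbrs, key=lambda j: scores[j])
        new.append(strategies[best])
    return new

rng = random.Random(0)
lattice = [rng.choice('CD') for _ in range(40)]
for _ in range(10):
    lattice = step(lattice)
print(''.join(lattice))
```

Even this tiny system shows the advertised behaviour: a lone defector among cooperators scores highest locally and its strategy spreads to the neighbouring sites, while uniform lattices are fixed points.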

Reality as a Cellular Automaton: Spacetime Trades Curves for Computation

At the dawn of the computer era, the scientific mainstream sprouted a timely alternative viewpoint in the form of the Cellular Automaton Model of the Universe, which we hereby abbreviate as the CAMU. First suggested by mathematician John von Neumann and later resurrected by salesman and computer scientist Ed Fredkin, the CAMU represents a conceptual regression of spacetime in which space and time are re-separated and described in the context of a cellular automaton. Concisely, space is represented by (e.g.) a rectilinear array of computational cells, and time by a perfectly distributed state transformation rule uniformly governing cellular behavior. Because automata and computational procedures are inherently quantized, this leads to a natural quantization of space and time. Yet another apparent benefit of the CAMU is that if it can be made equivalent to a universal computer, then by definition it can realistically simulate anything that a consistent and continually evolving physical theory might call for, at least on the scale of its own universality.

But the CAMU, which many complexity theorists and their sympathizers in the physics community have taken quite seriously, places problematic constraints on universality. E.g., it is not universal on all computational scales, does not allow for subjective cognition except as an emergent property of its (assumedly objective) dynamic, and turns out to be an unmitigated failure when it comes to accounting for relativistic phenomena. Moreover, it cannot account for the origin of its own cellular array and is therefore severely handicapped from the standpoint of cosmology, which seeks to explain not only the composition but the origin of the universe. Although the CAMU array can internally accommodate the simulations of many physical observables, thus allowing the CAMU’s proponents to intriguingly describe the universe as a “self-simulation”, its inability to simulate the array itself precludes the adequate representation of higher-order physical predicates with a self-referential dimension.
Before we explore the conspansive SCSPL model in more detail, it is worthwhile to note that the CTMU can be regarded as a generalization of the major computation-theoretic current in physics, the CAMU. Originally called the Computation-Theoretic Model of the Universe, the CTMU was initially defined on a hierarchical nesting of universal computers, the Nested Simulation Tableau or NeST, which tentatively described spacetime as stratified virtual reality in order to resolve a decision-theoretic paradox put forth by Los Alamos physicist William Newcomb (see Noesis 44, etc.). Newcomb’s paradox is essentially a paradox of reverse causality with strong implications for the existence of free will, and thus has deep ramifications regarding the nature of time in self-configuring or self-creating systems of the kind that MAP shows it must be. Concisely, it permits reality to freely create itself from within by using its own structure, without benefit of any outside agency residing in any external domain.

Although the CTMU subjects NeST to metalogical constraints not discussed in connection with Newcomb’s Paradox, NeST-style computational stratification is essential to the structure of conspansive spacetime. The CTMU thus absorbs the greatest strengths of the CAMU – those attending quantized distributed computation – without absorbing its a priori constraints on scale or sacrificing the invaluable legacy of Relativity. That is, because the extended CTMU definition of spacetime incorporates a self-referential, self-distributed, self-scaling universal automaton, the tensors of GR and its many-dimensional offshoots can exist within its computational matrix.

An important detail must be noted regarding the distinction between the CAMU and CTMU. By its nature, the CTMU replaces ordinary mechanical computation with what might better be called protocomputation. Whereas computation is a process defined with respect to a specific machine model, e.g. a Turing machine, protocomputation is logically "pre-mechanical". That is, before computation can occur, there must (in principle) be a physically realizable machine to host it. But in discussing the origins of the physical universe, the prior existence of a physical machine cannot be assumed. Instead, we must consider a process capable of giving rise to physical reality itself...a process capable of not only implementing a computational syntax, but of serving as its own computational syntax by self-filtration from a realm of syntactic potential. When the word "computation" appears in the CTMU, it is usually to protocomputation that reference is being made.

It is at this point that the theory of languages becomes indispensable. In the theory of computation, a "language" is anything fed to and processed by a computer; thus, if we imagine that reality is in certain respects like a computer simulation, it is a language. But where no computer exists (because there is not yet a universe in which it can exist), there is no "hardware" to process the language, or for that matter the metalanguage simulating the creation of hardware and language themselves. So with respect to the origin of the universe, language and hardware must somehow emerge as one; instead of engaging in a chicken-or-egg regress involving their recursive relationship, we must consider a self-contained, dual-aspect entity functioning simultaneously as both. By definition, this entity is a Self-Configuring Self-Processing Language or SCSPL. Whereas ordinary computation involves a language, protocomputation involves SCSPL."

"The self-configuration of reality involves an intrinsic mode of causality, self-determinacy, which is logically distinct from conventional concepts of determinacy and indeterminacy but can appear as either from a localized vantage. Determinacy and indeterminacy can thus be viewed as "limiting cases" associated with at least two distinct levels of systemic self-determinacy, global-distributed and local-nondistributed. The former level appears deterministic while the latter, which accommodates creative input from multiple quasi-independent sources, dynamically adjusts to changing conditions and thus appears to have an element of "randomness".

According to this expanded view of causality, the Darwinian processes of replication and natural selection occur on at least two mutually-facilitative levels associated with the evolution of the universe as a whole and the evolution of organic life. In addition, human technological and sociopolitical modes of evolution may be distinguished, and human intellectual evolution may be seen to occur on collective and individual levels. Because the TE model provides logical grounds on which the universe may be seen to possess a generalized form of intelligence, all levels of evolution are to this extent intelligently directed, catalyzed and integrated."

"The upper diagram illustrates ordinary cybernetic feedback between two information transducers exchanging and acting on information reflecting their internal states. The structure and behavior of each transducer conforms to a syntax, or set of structural and functional rules which determine how it behaves on a given input. To the extent that each transducer is either deterministic or nondeterministic (within the bounds of syntactic constraint), the system is either deterministic or “random up to determinacy”; there is no provision for self-causation below the systemic level. The lower diagram, which applies to coherent self-designing systems, illustrates a situation in which syntax and state are instead determined in tandem according to a generalized utility function assigning differential but intrinsically-scaled values to various possible syntax-state relationships. A combination of these two scenarios is partially illustrated in the upper diagram by the gray shadows within each transducer.

The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement (note that generalized utility is self-descriptive or autologous, intrinsically and retroactively defined within the system, and “pre-informational” in the sense that it assigns no specific property to any specific object). Through telic feedback, a system retroactively self-configures by reflexively applying a “generalized utility function” to its internal existential potential or possible futures. In effect, the system brings itself into existence as a means of atemporal communication between its past and future whereby law and state, syntax and informational content, generate and refine each other across time to maximize total systemic self-utility. This defines a situation in which the true temporal identity of the system is a distributed point of temporal equilibrium that is both between and inclusive of past and future. In this sense, the system is timeless or atemporal.

A system that evolves by means of telic recursion – and ultimately, every system must either be, or be embedded in, such a system as a condition of existence – is not merely computational, but protocomputational. That is, its primary level of processing configures its secondary (computational and informational) level of processing by telic recursion. Telic recursion can be regarded as the self-determinative mechanism of not only cosmogony, but a natural, scientific form of teleology."

"Disputes between evidential decision theory and causal decision theory have continued for decades, with many theorists stating that neither alternative seems satisfactory. I present an extension of decision theory over causal networks, timeless decision theory (TDT). TDT compactly represents uncertainty about the abstract outputs of correlated computational processes, and represents the decision-maker's decision as the output of such a process. I argue that TDT has superior intuitive appeal when presented as axioms, and that the corresponding causal decision networks (which I call timeless decision networks) are more true in the sense of better representing physical reality. I review Newcomb's Problem and Solomon's Problem, two paradoxes which are widely argued as showing the inadequacy of causal decision theory and evidential decision theory respectively. I walk through both paradoxes to show that TDT achieves the appealing consequence in both cases. I argue that TDT implements correct human intuitions about the paradoxes, and that other decision systems act oddly because they lack representative power. I review the Prisoner's Dilemma and show that TDT formalizes Hofstadter's "superrationality": under certain circumstances, TDT can permit agents to achieve "both C" rather than "both D" in the one-shot, non-iterated Prisoner's Dilemma. Finally, I show that an evidential or causal decision-maker capable of self-modifying actions, given a choice between remaining an evidential or causal decision-maker and modifying itself to imitate a timeless decision-maker, will choose to imitate a timeless decision-maker on a large class of problems."
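The "superrationality" result mentioned in that abstract can be illustrated with a small sketch. The payoff values are the usual textbook ones, and the two decision rules are simplified caricatures of causal versus timeless reasoning, not the paper's formalism:

```python
# One-shot Prisoner's Dilemma payoffs for (my_move, their_move):
# standard illustrative values, not taken from the quoted paper.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def causal_choice(opponent_move):
    """A causal decision-maker treats the opponent's move as fixed;
    defecting then dominates whatever the opponent does."""
    return max('CD', key=lambda m: PAYOFF[(m, opponent_move)])

def correlated_choice():
    """A 'timeless'/superrational agent notes that an identical agent
    facing the identical problem outputs the identical move, so the
    only reachable outcomes are (C,C) or (D,D); it picks the better."""
    return max('CD', key=lambda m: PAYOFF[(m, m)])

# Defection dominates for the causal reasoner whatever the opponent does...
assert causal_choice('C') == 'D' and causal_choice('D') == 'D'
# ...so two causal reasoners land on (D,D) = 1 each, while two
# correlated reasoners land on (C,C) = 3 each.
print(correlated_choice())
```

The asymmetry is exactly the one the abstract describes: once an agent's decision is modelled as the output of a computation its twin also runs, the off-diagonal outcomes (C,D) and (D,C) are simply not reachable, and cooperation becomes the maximizing output.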

"Fundamental properties of the world in which all life evolved, such as space, time, force, energy and audio frequencies, are modeled in physics and engineering with differentiable manifolds. A central question of neurophysiology is how information about these quantities is encoded and processed. While the forces of evolution are complex and often contradictory, the argument can be made that if all other factors are equal, an organism with a more accurate mental representation of the world has a better chance of survival. This implies that the representation in the central nervous system (CNS) of a physical phenomenon should have the same intrinsic mathematical structure as the phenomenon itself. The philosophical principle, put forth by Monod (1971) and others, that under certain conditions biological evolution will form designs that are in accordance with the laws of nature is referred to as teleonomy.

All of the diverse sensory input an organism receives must be combined with internal mental state and integrated together to form a coherent understanding of the environment and a single plan of action. For this to happen, all of the manifolds must be in some way unified. A common assumption is that all of the “low-level” manifold representations are converted to a set of “high-level” symbols and that these high-level symbolic representations are the basis for the unification. A central thesis of this article is that this need not be the case; we can leave the sensory input representations in their multi-dimensional form and instead create a unified system of computational manifolds."

"Modeling, a sophisticated form of abstract description, using mathematics and computation, both tied to the concept of number, and their advantages and disadvantages are exquisitely detailed by Robert Rosen in Life Itself, Anticipatory Systems, and Fundamentals of Measurement. One would have hoped that mathematics or computer simulations would reduce the need for word descriptions in scientific models. Unfortunately for scientific modeling, one cannot do as David Hilbert or Alonzo Church proposed: divorce semantics (e.g., symbolic words: referents to objects in reality) from syntax (e.g., symbolic numbers: referents to a part of a formal system of computation or entailment). One cannot do this, even in mathematics, without things becoming trivial (à la Kurt Gödel). It suffices to say that number theory (e.g., calculus), category theory, hypersets, and cellular automata, to mention a few, all have their limited uses. The integration between all of these formalisms will be necessary, plus rigorous attachment of words and numbers to show the depth and shallowness of the formal models. These rigorous attachments of words are ambiguous to a precise degree without the surrounding contexts. Relating precisely with these ambiguous words to these simple models will constitute an integration of a reasonable set of formalisms to help characterize reality."

Posts: 65
Post Re: Michael Ruse, the Pope, and naivety about Randomness.
on: June 8, 2011, 07:23

Here is a reply of mine on the Huffington Post:

Stephen Friberg

“I'm a physicist, not a biologist, so I'm not a big fan of the way that biologists - and Dr. Ruse - import "naive" or "folk" ideas about randomness into biology. Physicists don't usually make that mistake.

The folk idea about randomness is that it's just random - there is no purpose or direction to what is happening. The scientific idea - and it would be nice to see biologists catching on - is that randomness is intrinsic to all natural processes.

For evolution, random processes are important. Before life appeared, stars formed, and stars are structured phenomena - after all, they have to ignite - brought into being by random processes leading to a gravity-driven increase in mass density. In other words, the structure of a star is not itself random.

So, when a scientist says that something so highly structured as the human mind is arrived at by random processes AND that there is no arrow or direction to those random processes, they are not making sense as far as physicists are concerned. Much more likely is that there are stable structures arrived at - like the stars described above - by random processes.

Dr. Ruse is the foremost philosophical expert on evolution and the conflicts surrounding it, so he should be well aware that culturally-derived ideas flavor the interpretation of data. It is highly probable that a blind spot from 19th century culture wars is at work that prevents biologists from seeing”

Posts: 65
Post Re: Michael Ruse, the Pope, and naivety about Randomness.
on: June 8, 2011, 07:24

A comment on my reply and a further reply


Stephen, perhaps you would like to clarify exactly where Ruse imports folklore about randomness. Ruse refers to "random" three times: first in reference to the Pope, second when quoting the Pope, and third in reference to Gould. Perhaps as a physicist you misunderstand evolution and life scientists?

You seem to associate randomness with purpose and direction when you say, "the folk idea about randomness is that it's just random - there is no purpose or direction to what is happening," followed by the strange assertion that biologists do not grasp randomness as intrinsic to nature. Really? What do you believe biologists and geneticists think are the raw materials of evolution if not random genetic variations? You state random processes are important for evolution. Excellent. Do you believe biologists and geneticists do not recognize mutation, recombination, and genetic drift as random?

What does design, purpose, or direction have to do with genetic variability? Design and purpose are first and final causes in theology; they are not concepts amenable to scientific investigation, which studies secondary causes. As for direction, perhaps you do not understand that adaptation and natural selection are NOT random. Reviewing an undergrad text or speaking to a colleague in another department will quickly resolve these errors, banish notions of folklore and culture wars, and hopefully reassure you that life scientists who study living processes are not as clueless as you imagine.
Stephen Friberg

Hi Eukarya. Thanks for the comment.

Please contrast what Ruse says ("to put direction into evolution is to be a supporter of the non-scientific theory of Intelligent Design") with what you say ("As for direction, perhaps you do not understand that adaptation and natural selection are NOT random."). Is there not a contradiction?

Or, consider Ruse's comments about Gould: "he was saying that there is no design. Human evolution had no more forethought than, say, the pattern that a pile of sand makes when emptied from a bucket. And ... there is no modern evolutionary biologist who would disagree with him on this. Evolution depends on mutations that simply don't have direction." Ruse here associates random mutations - what physicists think of as driving mechanisms - with lack of direction in biological systems.

Ruse claims - either correctly or incorrectly (incorrectly, from your view, no?) - that all evolutionary biologists think likewise. Just to be clear: it is a mistake to assume that random fluctuations driving a system cause random outcomes. This is what Ruse is implying, and it is what I call naive folk thinking. And he certainly isn't alone in these claims.

What does this mean for purpose and direction, culture wars, and the like? To me - and I am interested in your perspective - it means that naive claims that evolution is directionless aren't consistent with science.

Jedi Master
Posts: 92
Post Re: Michael Ruse, the Pope, and naivety about Randomness.
on: June 9, 2011, 16:00

The mainstream acknowledges that evolution itself isn't a random process; only the production of variation *may* be. But the production of variation itself involves constraints which filter possibilities, and these constraints may span spacetime itself. However, the current version of causality is based on Markovianism (the CTMU is basically one of a larger class of temporal-feedback, or atemporal, models based on ontogenically transitive causal relations, making it trans-Markovian). I don't think anyone reasonable believes evolution (much less cosmic origins) is totally random, as in a million monkeys using trial and error and eventually coming up with Shakespeare. The point of randomness is to inject variation, but the source of variation itself has constraints (a one-to-many endomorphism, or a hierarchically nested global unitarity manifold in which the boundary of the boundary is zero, with "more is different" symmetry breaking acting as the source of random variation). Sometimes errors can in some way be simulated before being actualized, and avoided, since some future outcome possibilities are more self-consistent overall and tend to be optimal in more than one reference frame of spacetime; this involves higher-order causality.

More is Different:

"[T]here is no doubt that in the beginning the origin was one: the
origin of all numbers is one and not two. Then it is evident that in
the beginning matter was one, and that one matter appeared in
different aspects in each element. Thus various forms were produced,
and these various aspects as they were produced became permanent, and
each element was specialized. But this permanence was not definite,
and did not attain realization and perfect existence until after a
very long time. Then these elements became composed, and organized and
combined in infinite forms; or rather from the composition and
combination of these elements innumerable beings appeared."
(Some Answered Questions 181)

"7. Is evolution a random process?

Evolution is not a random process. The genetic variation on which
natural selection acts may occur randomly, but natural selection
itself is not random at all. The survival and reproductive success of
an individual is directly related to the ways its inherited traits
function in the context of its local environment. Whether or not an
individual survives and reproduces depends on whether it has genes
that produce traits that are well adapted to its environment."

"In quantum mechanics, the principle of superposition of dynamical
states asserts that the possible dynamical states of a quantized
system, like waves in general, can be linearly superposed, and that
each dynamical state can thus be represented by a vector belonging to
an abstract vector space. The superposition principle permits the
definition of so-called “mixed states” consisting of many possible
“pure states”, or definite sets of values of state-parameters. In such
a superposition, state-parameters can simultaneously have many values.

The superposition principle highlights certain problems with quantum
mechanics. One problem is that quantum mechanics lacks a cogent model
in which to interpret things like “mixed states” (waves alone are not
sufficient). Another problem is that according to the uncertainty
principle, the last states of a pair of interacting particles are
generally insufficient to fully determine their next states. This, of
course, raises a question: how are their next states actually
determined? What is the source of the extra tie-breaking measure of
determinacy required to select their next events ("collapse their wave
functions")?

The answer is not, as some might suppose, “randomness”; randomness
amounts to acausality, or alternatively, to informational
incompressibility with respect to any distributed causal template or
ingredient of causal syntax. Thus, it is either no explanation at all,
or it implies the existence of a “cause” exceeding the representative
capacity of distributed laws of causality. But the former is both
absurd and unscientific, and the latter requires that some explicit
allowance be made for higher orders of causation…more of an allowance
than may readily be discerned in a simple, magical invocation of
"randomness".

The superposition principle, like other aspects of quantum mechanics,
is based on the assumption of physical Markovianism. (A Markoff
process is a stochastic process with no memory. That is, it is a
process meeting two criteria: (1) state transitions are constrained or
influenced by the present state, but not by the particular sequence of
steps leading to the present state; (2) state transition contains an
element of chance. Physical processes are generally assumed to meet
these criteria; the laws of physics are defined in accordance with 1,
and because they ultimately function on the quantum level but do not
fully determine quantum state transitions, an element of chance is
superficially present. It is in this sense that the distributed laws
of physics may be referred to as “Markovian”. However, criterion 2
opens the possibility that hidden influences may be active.) It refers
to mixed states between adjacent events, ignoring the possibility of
nonrandom temporally-extensive relationships not wholly attributable
to distributed laws. By putting temporally remote events in extended
descriptive contact with each other, the Extended Superposition
Principle enables coherent cross-temporal telic feedback and thus
plays a necessary role in cosmic self-configuration. Among the
higher-order determinant relationships in which events and objects can
thus be implicated are utile state-syntax relationships called telons,
telic attractors capable of guiding cosmic and biological evolution.

Given that quantum theory does not seem irrevocably attached to
Markovianism, why has the possibility of higher-order causal
relationships not been seriously entertained? One reason is spacetime
geometry, which appears to confine objects to one-dimensional
“worldlines” in which their state-transition events are separated by
intervening segments that prevent them from “mixing” in any globally
meaningful way. It is for this reason that superposition is usually
applied only to individual state transitions, at least by those
subscribing to conservative interpretations of quantum mechanics.

Conspansive duality, which incorporates TD and CF components, removes
this restriction by placing state transition events in direct
descriptive contact. Because the geometric intervals between events
are generated and selected by descriptive processing, they no longer
have separative force. Yet, since worldlines accurately reflect the
distributed laws in terms of which state transitions are expressed,
they are not reduced to the status of interpolated artifacts with no
dynamical reality; their separative qualities are merely overridden by
the state-syntax dynamic of their conspansive dual representation.

In extending the superposition concept to include nontrivial
higher-order relationships, the Extended Superposition Principle opens
the door to meaning and design. Because it also supports distribution
relationships among states, events and syntactic strata, it makes
cosmogony a distributed, coherent, ongoing event rather than a spent
and discarded moment from the ancient history of the cosmos. Indeed,
the usual justification for observer participation – that an observer
in the present can perceptually collapse the wave functions of ancient
(photon-emission) events – can be regarded as a consequence of this
logical relationship.
Deterministic computational and continuum models of reality are
recursive in the standard sense; they evolve by recurrent operations
on state from a closed set of “rules” or “laws”. Because the laws are
invariant and act deterministically on a static discrete array or
continuum, there exists neither the room nor the means for
optimization, and no room for self-design. The CTMU, on the other
hand, is conspansive and telic-recursive; because new state-potentials
are constantly being created by evacuation and mutual absorption of
coherent objects (syntactic operators) through conspansion, metrical
and nomological uncertainty prevail wherever standard recursion is
impaired by object sparsity. This amounts to self-generative freedom,
hologically providing reality with a “self-simulative scratchpad” on
which to compare the aggregate utility of multiple self-configurations
for self-optimizative purposes.

Standard recursion is “Markovian” in that when a recursive function is
executed, each successive recursion is applied to the result of the
preceding one. Telic recursion is more than Markovian; it
self-actualizatively coordinates events in light of higher-order
relationships or telons that are invariant with respect to overall
identity, but may display some degree of polymorphism on lower orders.
Once one of these relationships is nucleated by an opportunity for
telic recursion, it can become an ingredient of syntax in one or more
telic-recursive (global or agent-level) operators or telors and be
“carried outward” by inner expansion, i.e. sustained within the
operator as it engages in mutual absorption with other operators. Two
features of conspansive spacetime, the atemporal homogeneity of IEDs
(operator strata) and the possibility of extended superposition, then
permit the telon to self-actualize by “intelligently”, i.e.
telic-recursively, coordinating events in such a way as to bring about
its own emergence (subject to various more or less subtle restrictions
involving available freedom, noise and competitive interference from
other telons). In any self-contained, self-determinative system, telic
recursion is integral to the cosmic, teleo-biological and volitional
levels of evolution."
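The Markov-process definition quoted above (state transitions depend only on the present state, and contain an element of chance) can be sketched in a few lines. The states and transition probabilities here are arbitrary placeholders, chosen only to illustrate the two criteria.

```python
import random

# Criterion 1: the next state is looked up from the CURRENT state only;
# no history of earlier states is ever consulted.
# Criterion 2: rng supplies the element of chance in each transition.
TRANSITIONS = {
    "A": [("A", 0.7), ("B", 0.3)],
    "B": [("A", 0.5), ("B", 0.5)],
}

def step(state, rng):
    """One memoryless, stochastic state transition."""
    r, cumulative = rng.random(), 0.0
    for nxt, p in TRANSITIONS[state]:
        cumulative += p
        if r < cumulative:
            return nxt
    return state

rng = random.Random(42)
state, path = "A", []
for _ in range(10):
    state = step(state, rng)
    path.append(state)
print(path)
```

Note that `step` takes only the current `state`; a trans-Markovian model, in the quoted text's terminology, would be one where the transition rule could also consult temporally remote events.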

"Except for Averroes, who had very little influence on other Islamic
philosophers, the philosophers of the East were united in the view
that a divine intelligible order--either the contents of God's mind or
will, or belonging to the subordinate Active Intellect--is the
formative cause of the compositions of biological species when they
first appear on earth. These compositions appear as soon as the
physical environment is suitable to receive them, with simpler
compositions, like minerals and plants, appearing first, and more
complex structures, like animals and human beings, appearing last. The
essential attributes of each of these beings is created in accordance
with the predetermined intelligible order, not because of chance.

Although Avicenna mistakenly identified Plato's Idea-Forms with
logical universals, he was still a Platonist in the sense that he had
the material forms of things result from an incorporeal intellect and
in making God's knowledge the cause of the existence of things. The
main difference between a logical universal and a Platonic Form is
that while the former is abstracted from individuals, the latter is
causative of individuals.

Mullá Sadrá's novel move of incorporating motion and transformation
into the category of substance, and Shaykh Ahmad's extension of this
principle to the essences of things themselves, allowed for the real,
continuous, and dynamic transformation and evolution of things in the
temporal dimension. This was a dramatic departure from the eternal
static cosmos of classical biology, a departure which was paralleled
by the ideas of Leibniz among the European philosophers.

The views presented represent mainly a "vertical order of becoming"
from God to physical things and from physical things back to God, not
a "horizontal order of becoming" restricted to the material world, as
is the concept of Darwinian evolution. Things "become" as a result of
their realities, whether this be gradually or at once. According to
Shaykh Ahmad, a thing's "coming-into-existence" is not completely up
to God's will, but is also a voluntary act on the part of the created
to receive existence. The important notion here is that everything
that exists in the universe exists by design and has a purpose.
Movement toward that goal implies the unfoldment of previously
existing potentials, whereas "evolution," in the meaning of Darwin,
implies the transmutation of species without any underlying goal."

Jedi Master
Posts: 92
Post Re: Michael Ruse, the Pope, and naivety about Randomess.
on: June 12, 2011, 03:27

I have difficulty separating disciplines from each other, as there tends to be a lot of mutually relevant information among them. This may lead to sloppy thinking as one transgresses into other fields of expertise, but it seems new ideas are born from it. I'm in pursuit of new windows of opportunity for exchange, particularly concepts related to information itself.

"It turns out that very naturally the referent of quantum physics is not reality per se but, as Niels Bohr said, it is "what can be said about the world", or in modern words, it is information. Thus, if information is the most fundamental notion in quantum physics, a very natural understanding of phenomena like quantum decoherence or quantum teleportation emerges. And quantum entanglement is then nothing else than the property of subsystems of a composed quantum systems to carry information jointly, independent of space and time; and the randomness of individual quantum events is a consequence of the finiteness of information.

The quantum is then a reflection of the fact that all we can do is make statements about the world, expressed in a discrete number of bits. The universe is participatory at least in the sense that the experimentalist by choosing the measurement apparatus, defines out of a set of mutually complementary observables which possible property of a system can manifest itself as reality and the randomness of individual events stems form the finiteness of information."

"Notice that the same list of processes can be used to explain
non-adaptive evolutionary change (e.g. genetic drift). Also notice
that the only source of new phenotypic variations is what I have
called the "engines of variation": all of those processes that produce
heritable phenotypic changes in phylogenetic lines of organisms in
populations. There are at least fifty such processes (you can see a
summary list here). While it is the case that "random mutation" is
included in this list, there are many other processes in this list
that do not involve "mutation" (in the genetic sense) and which also
are not "random" (at least insofar as that term is often used)."
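Genetic drift, the non-adaptive process mentioned in the quote above, is easy to simulate: allele frequencies wander purely by chance sampling, with no selection at all. This is a minimal Wright-Fisher-style sketch; the population size, generation count, and seed are arbitrary.

```python
import random

def wright_fisher(freq=0.5, pop_size=50, generations=200, seed=3):
    """Pure drift: each offspring inherits the allele with probability
    equal to its current frequency; no fitness differences anywhere."""
    rng = random.Random(seed)
    for _ in range(generations):
        count = sum(rng.random() < freq for _ in range(pop_size))
        freq = count / pop_size
        if freq in (0.0, 1.0):   # fixation or loss: drift is absorbing
            break
    return freq

# Evolutionary change (often all the way to fixation or loss of the
# allele) with no adaptation involved at all.
print(wright_fisher())
```

In small populations the allele usually fixes or disappears within a few hundred generations, which is exactly the "non-adaptive evolutionary change" the quoted passage refers to.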

"Creationists and supporters of Intelligent Design Theory ("IDers")
are fond of erecting a strawman in place of evolutionary theory, one
that they can then dismantle and point to as "proof" that their
"theories" are superior. Perhaps the most egregious such strawman is
encapsulated in the phrase "RM & NS". Short for "random mutation and
natural selection", RM & NS is held up by creationists and IDers as
the core of evolutionary biology, and are then attacked as
insufficient to explain the diversity of life and (in the case of some
IDers) its origin and evolution as well.

Evolutionary biologists know that this is a classical "strawman"
argument, because we know that evolution is not simply reducible to
"random mutation and natural selection" alone. Indeed, Darwin himself
proposed that natural selection was the best explanation for the
origin of adaptations, and that natural selection itself was an
outcome that necessarily arises from three prerequisites:"

On “Why” and “What” of randomness

To a mathematical theory of evolution and biological creativity

Emergence, Intelligent Design, and Artificial Intelligence

"We consider the implications of emergent complexity and computational simulations for the Intelligent Design movement. We discuss genetic variation and natural selection as an optimization process, and equate irreducible complexity as defined by William Dembski with the problem of local optima. We disregard the argument that methodological naturalism bars science from investigating design hypotheses in nature a priori, but conclude that systems with low Kolmogorov complexity but a high apparent complexity, such as the Mandelbrot set, require that such hypotheses be approached with a high degree of skepticism. Finally, we suggest that computer simulations of evolutionary processes may be considered a laboratory to test our detailed, mechanistic understandings of evolution."
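The abstract's contrast between low Kolmogorov complexity and high apparent complexity is worth seeing directly: the Mandelbrot set, its own example, is generated by iterating a one-line rule, yet the resulting structure looks endlessly intricate. This ASCII sketch is illustrative; resolution and iteration count are arbitrary.

```python
def mandelbrot(width=60, height=24, max_iter=30):
    """Render the Mandelbrot set as ASCII art from its tiny description."""
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Map the pixel to a point c in the complex plane.
            c = complex(-2.0 + 3.0 * i / width, -1.2 + 2.4 * j / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c          # the entire "rule" of the set
                if abs(z) > 2:
                    row += " "         # escaped: outside the set
                    break
            else:
                row += "*"             # stayed bounded: inside the set
        rows.append(row)
    return "\n".join(rows)

print(mandelbrot())
```

The whole generating description is a few lines (low Kolmogorov complexity), while the output's boundary is famously intricate at every scale (high apparent complexity) - the asymmetry the abstract says should make design inferences from apparent complexity suspect.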

"So we don't even know exactly how this organelle works, but we know that it arose through "billions of years of evolution." Soon thereafter in the conversation, George Church, Professor of Genetics at Harvard Medical School and Director of the Center for Computational Genetics, similarly marveled at the complexity of the ribosome:

The ribosome, both looking at the past and at the future, is a very significant structure -- it's the most complicated thing that is present in all organisms. Craig does comparative genomics, and you find that almost the only thing that's in common across all organisms is the ribosome. And it's recognizable; it's highly conserved. So the question is, how did that thing come to be? And if I were to be an intelligent design defender, that's what I would focus on; how did the ribosome come to be?"

"Biochemistry professor Michael Behe, the originator of the term irreducible complexity, defines an irreducibly complex system as one "composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning"."

Posts: 65
Post Re: Michael Ruse, the Pope, and naivety about Randomess.
on: June 12, 2011, 13:49

What an excellent list of resources. BTW, Anton Zeilinger, whom you reference above, has built a physics empire based on the quantum correlation and entanglement phenomena I worked on as a grad student.
