As I understand it, what appears as randomness locally may not be random globally. There are degrees of freedom and there are constraints, and in between there are choices made about how to relieve the tension between the two. With respect to certain desired outcomes, some possibilities are more globally optimal than others, and the more reflective local representations are of this globally optimal state of affairs, the more coherent the course of evolution will be, as it will be able to anticipate undesirable outcomes and respond accordingly. Perhaps in the game of life the goal is "win-win", or superrational (mutually self-enforcing, as opposed to the selfish form of "rational" strategies): basically, to be like a garden, with harmony on all levels.
"Composition is of three kinds.
1. Accidental composition.
2. Involuntary composition.
3. Voluntary composition.
There is no fourth kind of composition. Composition is restricted to these three categories."
http://bahailibrary.com/compilations/bahai.scriptures/7.html
"As difference in degree of capacity exists among human souls, as difference in capability is found, therefore, individualities will differ one from another. But in reality this is a reason for unity and not for discord and enmity. If the flowers of a garden were all of one color, the effect would be monotonous to the eye; but if the colors are variegated, it is most pleasing and wonderful. The difference in adornment of color and capacity of reflection among the flowers gives the garden its beauty and charm. Therefore, although we are of different individualities, . . . let us strive like flowers of the same divine garden to live together in harmony. Even though each soul has its own individual perfume and color, all are reflecting the same light, all contributing fragrance to the same breeze which blows through the garden, all continuing to grow in complete harmony and accord." `Abdu'lBahá
"Please God, that we avoid the land of denial, and advance into the ocean of acceptance, so that we may perceive, with an eye purged from all conflicting elements, the worlds of unity and diversity, of variation and oneness, of limitation and detachment, and wing our flight unto the highest and innermost sanctuary of the inner meaning of the Word of God." (The Báb)
"This universe is not created through the fortuitous concurrences of atoms; it is created by a great law which decrees that the tree bring forth certain definite fruit."
http://bahailibrary.com/abdulbaha_divine_philosophy.html&chapter=3
"As to thy question whether the physical world is subject to any limitations, know thou that the comprehension of this matter dependeth upon the observer himself. In one sense, it is limited; in another, it is exalted beyond all limitations. The one true God hath everlastingly existed, and will everlastingly continue to exist. His creation, likewise, hath had no beginning, and will have no end. All that is created, however, is preceded by a cause. This fact, in itself, establisheth, beyond the shadow of a doubt, the unity of the Creator."
(Bahá'u'lláh, Gleanings from the Writings of Bahá'u'lláh, LXXXII, pp. 162-163)
http://www.planetbahai.org/cgibin/articles.pl?article=222
"Another big theme has to do with randomness. If one looks at current biology, there are occasional uses of calculus. There's quite a bit of use of the idea of digital information. But there's a lot of use of statistics.
There are a lot of times in biology where one says "there's a certain probability for this or that happening. We're going to make our theory based on those probabilities."
Now, in a sense, whenever you put a probability into your model you're admitting that your model is somehow incomplete. You're saying: I can explain these features of my system, but these parts, well, they come from somewhere else, and I'm just going to say there's a certain probability for them to be this way or that. One just models that part of the system by saying it's "random", and one doesn't know what it's going to do.
Well, so we can ask everywhere, in biology, and in physics, where the randomness really comes from in the things we think of as random. And there are really three basic mechanisms. The first is the one that happens with, say, a boat bobbing on an ocean, or with Brownian motion. There's no randomness in the thing one's actually looking at: the boat or the pollen grain. The randomness is coming from the environment, from all those details of a storm that happened on the ocean a thousand miles away, and so on. So that's the first mechanism: that the randomness comes because the system one's looking at is continually being kicked by some kind of randomness from the outside.
Well, there's another mechanism, that's become famous through chaos theory. It's the idea that instead of there being randomness continually injected into a system, there's just randomness at the beginning. And all the randomness that one sees is just a consequence of details of the initial conditions for the system.
Like in tossing a coin.
Where once the coin is tossed there's no randomness in which way it'll end up. But which way it ends up depends in detail on the precise speed it had at the beginning. So if it was started say by hand, one won't be able to control that precise speed, and the final outcome will seem random.
There's randomness because there's sort of an instability. A small perturbation in the initial conditions can lead to continuing long-term consequences for the outcome. And that phenomenon is quite common. Like here it is even in the rule 30 cellular automaton.
But it can never be a complete explanation for randomness in a system. Because really what it's doing is just saying that the randomness that comes out is some kind of transcription of randomness that went in, in the details of the initial conditions. But, so, can there be any other explanation for randomness? Well, yes there can be. Just look at our friend rule 30.
Here there's no randomness going in. There's just that one black cell. Yet the behavior that comes out looks in many respects random. In fact, say the center column of the pattern is really high quality randomness: the best pseudorandom generator, even good for cryptography.
Yet none of that randomness came from outside the system. It was intrinsically generated inside the system itself. And this is a new and different phenomenon that I think is actually at the core of a lot of systems that seem random."
http://www.stephenwolfram.com/publications/recent/biomedical/
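Wolfram's third mechanism, intrinsic randomness generation, is easy to reproduce. The sketch below is my own illustration (not Wolfram's code): it evolves rule 30 from a single black cell and reads off the center column he describes.

```python
# Rule 30 cellular automaton started from a single black cell.
# The rule and the initial condition are completely fixed, with no
# randomness injected from outside -- yet the center column of the
# evolution looks statistically random.
def rule30_center_column(steps):
    cells = {0: 1}                      # sparse row: position -> 0 or 1
    column = []
    for t in range(steps):
        column.append(cells.get(0, 0))  # record the center cell
        new = {}
        for i in range(-t - 1, t + 2):  # the pattern widens by 1 per side
            left = cells.get(i - 1, 0)
            center = cells.get(i, 0)
            right = cells.get(i + 1, 0)
            # Rule 30: new cell = left XOR (center OR right)
            new[i] = left ^ (center | right)
        cells = new
    return column

bits = rule30_center_column(64)
# The stream begins 1, 1, 0, 1, 1, 1, 0, 0, ... with no obvious period
# and a roughly even balance of zeros and ones.
```

Nothing here is hidden: every bit is fully determined by the rule, which is what makes the apparent randomness of the output striking.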
Speaking of Cellular Automata and Complexity Theory...
"Cellular automata (CA) are a class of spatially and temporally discrete, deterministic mathematical systems characterized by local interaction and an inherently parallel form of evolution. First introduced by von Neumann in the early 1950s to act as simple models of biological self-reproduction, CA are prototypical models for complex systems and processes consisting of a large number of identical, simple, locally interacting components. The study of these systems has generated great interest over the years because of their ability to generate a rich spectrum of very complex patterns of behavior out of sets of relatively simple underlying rules. Moreover, they appear to capture many essential features of complex self-organizing cooperative behavior observed in real systems. Although much of the theoretical work with CA has been confined to mathematics and computer science, there have been numerous applications to physics, biology, chemistry, biochemistry, and geology, among other disciplines. Some specific examples of phenomena that have been modeled by CA include fluid and chemical turbulence, plant growth and the dendritic growth of crystals, ecological theory, DNA evolution, the propagation of infectious diseases, urban social dynamics, forest fires, and patterns of electrical activity in neural networks. CA have also been used as discrete versions of partial differential equations in one or more spatial variables.
...
A cellular game is a dynamical system in which sites of a discrete lattice play a "game" with neighboring sites. Strategies may be deterministic or stochastic. Success is usually judged according to a universal and fixed criterion. Successful strategies persist and spread throughout the lattice; unsuccessful strategies disappear.
...
Complexity: An extremely difficult "I know it when I see it" concept to define, largely because it requires a quantification of what is more of a qualitative measure. Intuitively, complexity is usually greatest in systems whose components are arranged in some intricate, difficult-to-understand pattern or, in the case of a dynamical system, when the outcome of some process is difficult to predict from its initial state. It is at its lowest precisely when a system is either highly regular, with many redundant and/or repeating patterns, or completely disordered. While over 30 measures of complexity have been proposed in the research literature, they all fall into two general classes: (1) Static Complexity, which addresses the question of how an object or system is put together (i.e. only purely structural informational aspects of an object), and is independent of the processes by which information is encoded and decoded; (2) Dynamic Complexity, which addresses the question of how much dynamical or computational effort is required to describe the information content of an object or state of a system. Note that while a system's static complexity certainly influences its dynamical complexity, the two measures are not equivalent. A system may be structurally rather simple (i.e. have a low static complexity), but have a complex dynamical behavior."
http://www.cna.org/isaac/Glossb.htm
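The "cellular game" entry above can be made concrete with a toy sketch of my own (it is not taken from the glossary): sites on a one-dimensional ring play a Prisoner's Dilemma with their two neighbors, then each site imitates the best-scoring strategy in its neighborhood, so successful strategies persist and spread while unsuccessful ones disappear.

```python
# A toy cellular game on a ring of 12 sites. Payoffs are the standard
# Prisoner's Dilemma values: T=5 (defect vs cooperator), R=3 (mutual
# cooperation), P=1 (mutual defection), S=0 (cooperate vs defector).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def step(sites):
    n = len(sites)
    # Each site's score: total payoff from playing both neighbors.
    score = [sum(PAYOFF[(sites[i], sites[(i + d) % n])] for d in (-1, 1))
             for i in range(n)]
    # Each site adopts the strategy of the highest-scoring site in its
    # neighborhood (itself included) -- success is judged by a universal,
    # fixed criterion, and winning strategies spread.
    return [sites[max([(i - 1) % n, i, (i + 1) % n], key=lambda j: score[j])]
            for i in range(n)]

sites = ['C'] * 12
sites[0] = 'D'          # a single defector invades a ring of cooperators
for _ in range(5):
    sites = step(sites)
```

With this payoff table a lone defector outscores its cooperative neighbors, so defection spreads along the ring; with other payoffs, neighborhoods, or update rules, cooperative clusters can instead persist, which is what makes such games interesting as dynamical systems.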
Reality as a Cellular Automaton: Spacetime Trades Curves for Computation
"At the dawn of the computer era, the scientific mainstream sprouted a timely alternative viewpoint in the form of the Cellular Automaton Model of the Universe, which we hereby abbreviate as the CAMU. First suggested by mathematician John von Neumann and later resurrected by salesman and computer scientist Ed Fredkin, the CAMU represents a conceptual regression of spacetime in which space and time are re-separated and described in the context of a cellular automaton. Concisely, space is represented by (e.g.) a rectilinear array of computational cells, and time by a perfectly distributed state transformation rule uniformly governing cellular behavior. Because automata and computational procedures are inherently quantized, this leads to a natural quantization of space and time. Yet another apparent benefit of the CAMU is that if it can be made equivalent to a universal computer, then by definition it can realistically simulate anything that a consistent and continually evolving physical theory might call for, at least on the scale of its own universality.
But the CAMU, which many complexity theorists and their sympathizers in the physics community have taken quite seriously, places problematic constraints on universality. E.g., it is not universal on all computational scales, does not allow for subjective cognition except as an emergent property of its (assumedly objective) dynamic, and turns out to be an unmitigated failure when it comes to accounting for relativistic phenomena. Moreover, it cannot account for the origin of its own cellular array and is therefore severely handicapped from the standpoint of cosmology, which seeks to explain not only the composition but the origin of the universe. Although the CAMU array can internally accommodate the simulations of many physical observables, thus allowing the CAMU’s proponents to intriguingly describe the universe as a “self-simulation”, its inability to simulate the array itself precludes the adequate representation of higher-order physical predicates with a self-referential dimension.
...
Before we explore the conspansive SCSPL model in more detail, it is worthwhile to note that the CTMU can be regarded as a generalization of the major computation-theoretic current in physics, the CAMU. Originally called the Computation-Theoretic Model of the Universe, the CTMU was initially defined on a hierarchical nesting of universal computers, the Nested Simulation Tableau or NeST, which tentatively described spacetime as stratified virtual reality in order to resolve a decision-theoretic paradox put forth by Los Alamos physicist William Newcomb (see Noesis 44, etc.). Newcomb’s paradox is essentially a paradox of reverse causality with strong implications for the existence of free will, and thus has deep ramifications regarding the nature of time in self-configuring or self-creating systems of the kind that MAP shows it must be. Concisely, it permits reality to freely create itself from within by using its own structure, without benefit of any outside agency residing in any external domain.
Although the CTMU subjects NeST to metalogical constraints not discussed in connection with Newcomb’s Paradox, NeST-style computational stratification is essential to the structure of conspansive spacetime. The CTMU thus absorbs the greatest strengths of the CAMU – those attending quantized distributed computation – without absorbing its a priori constraints on scale or sacrificing the invaluable legacy of Relativity. That is, because the extended CTMU definition of spacetime incorporates a self-referential, self-distributed, self-scaling universal automaton, the tensors of GR and its many-dimensional offshoots can exist within its computational matrix.
An important detail must be noted regarding the distinction between the CAMU and CTMU. By its nature, the CTMU replaces ordinary mechanical computation with what might better be called protocomputation. Whereas computation is a process defined with respect to a specific machine model, e.g. a Turing machine, protocomputation is logically "pre-mechanical". That is, before computation can occur, there must (in principle) be a physically realizable machine to host it. But in discussing the origins of the physical universe, the prior existence of a physical machine cannot be assumed. Instead, we must consider a process capable of giving rise to physical reality itself...a process capable of not only implementing a computational syntax, but of serving as its own computational syntax by self-filtration from a realm of syntactic potential. When the word "computation" appears in the CTMU, it is usually to protocomputation that reference is being made.
It is at this point that the theory of languages becomes indispensable. In the theory of computation, a "language" is anything fed to and processed by a computer; thus, if we imagine that reality is in certain respects like a computer simulation, it is a language. But where no computer exists (because there is not yet a universe in which it can exist), there is no "hardware" to process the language, or for that matter the metalanguage simulating the creation of hardware and language themselves. So with respect to the origin of the universe, language and hardware must somehow emerge as one; instead of engaging in a chicken-or-egg regress involving their recursive relationship, we must consider a self-contained, dual-aspect entity functioning simultaneously as both. By definition, this entity is a Self-Configuring Self-Processing Language or SCSPL. Whereas ordinary computation involves a language, protocomputation involves SCSPL."
http://www.megafoundation.org/CTMU/Articles/Supernova.html
"The selfconfiguration of reality involves an intrinsic mode of causality, selfdeterminacy, which is logically distinct from conventional concepts of determinacy and indeterminacy but can appear as either from a localized vantage. Determinacy and indeterminacy can thus be viewed as "limiting cases" associated with at least two distinct levels of systemic selfdeterminacy, globaldistributed and localnondistributed. The former level appears deterministic while the latter, which accommodates creative input from multiple quasiindependent sources, dynamically adjusts to changing conditions and thus appears to have an element of "randomness".
According to this expanded view of causality, the Darwinian processes of replication and natural selection occur on at least two mutuallyfacilitative levels associated with the evolution of the universe as a whole and the evolution of organic life. In addition, human technological and sociopolitical modes of evolution may be distinguished, and human intellectual evolution may be seen to occur on collective and individual levels. Because the TE model provides logical grounds on which the universe may be seen to possess a generalized form of intelligence, all levels of evolution are to this extent intelligently directed, catalyzed and integrated."
http://www.teleologic.org/
"The upper diagram illustrates ordinary cybernetic feedback between two information transducers exchanging and acting on information reflecting their internal states. The structure and behavior of each transducer conforms to a syntax, or set of structural and functional rules which determine how it behaves on a given input. To the extent that each transducer is either deterministic or nondeterministic (within the bounds of syntactic constraint), the system is either deterministic or “random up to determinacy”; there is no provision for selfcausation below the systemic level. The lower diagram, which applies to coherent selfdesigning systems, illustrates a situation in which syntax and state are instead determined in tandem according to a generalized utility function assigning differential but intrinsicallyscaled values to various possible syntaxstate relationships. A combination of these two scenarios is partially illustrated in the upper diagram by the gray shadows within each transducer.
The currency of telic feedback is a quantifiable selfselection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement (note that generalized utility is selfdescriptive or autologous, intrinsically and retroactively defined within the system, and “preinformational” in the sense that it assigns no specific property to any specific object). Through telic feedback, a system retroactively selfconfigures by reflexively applying a “generalized utility function” to its internal existential potential or possible futures. In effect, the system brings itself into existence as a means of atemporal communication between its past and future whereby law and state, syntax and informational content, generate and refine each other across time to maximize total systemic selfutility. This defines a situation in which the true temporal identity of the system is a distributed point of temporal equilibrium that is both between and inclusive of past and future. In this sense, the system is timeless or atemporal.
A system that evolves by means of telic recursion – and ultimately, every system must either be, or be embedded in, such a system as a condition of existence – is not merely computational, but protocomputational. That is, its primary level of processing configures its secondary (computational and informational) level of processing by telic recursion. Telic recursion can be regarded as the selfdeterminative mechanism of not only cosmogony, but a natural, scientific form of teleology."
http://www.megafoundation.org/CTMU/Articles/Langan_CTMU_092902.pdf
"Disputes between evidential decision theory and causal decision theory have continued for decades, with many theorists stating that neither alternative seems satisfactory. I present an extension of decision theory over causal networks, timeless decision theory (TDT). TDT compactly represents uncertainty about the abstract outputs of correlated computational processes, and represents the decisionmaker's decision as the output of such a process. I argue that TDT has superior intuitive appeal when presented as axioms, and that the corresponding causal decision networks (which I call timeless decision networks) are more true in the sense of better representing physical reality. I review Newcomb's Problem and Solomon's Problem, two paradoxes which are widely argued as showing the inadequacy of causal decision theory and evidential decision theory respectively. I walk through both paradoxes to show that TDT achieves the appealing consequence in both cases. I argue that TDT implements correct human intuitions about the paradoxes, and that other decision systems act oddly because they lack representative power. I review the Prisoner's Dilemma and show that TDT formalizes Hofstadter's "superrationality": under certain circumstances, TDT can permit agents to achieve "both C" rather than "both D" in the oneshot, noniterated Prisoner's Dilemma. Finally, I show that an evidential or causal decisionmaker capable of selfmodifying actions, given a choice between remaining an evidential or causal decisionmaker and modifying itself to imitate a timeless decisionmaker, will choose to imitate a timeless decisionmaker on a large class of problems."
http://singinst.org/upload/TDTv01o.pdf
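The "both C" result mentioned in the abstract can be illustrated with a toy sketch of my own (this is an illustration of Hofstadter's superrationality intuition, not Yudkowsky's formalism): when two agents are known to run the same deterministic decision procedure in a one-shot Prisoner's Dilemma, a "timeless" reasoner treats its choice as the output of that shared computation, so the only reachable outcomes are (C, C) and (D, D).

```python
# One-shot Prisoner's Dilemma payoffs: PD[(my_move, their_move)] gives
# (my_payoff, their_payoff).
PD = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def timeless_choice():
    # My opponent's output is the same computation as mine, so only the
    # correlated outcomes (C, C) and (D, D) are on the table.
    return max(['C', 'D'], key=lambda a: PD[(a, a)][0])

def causal_choice(opponent_fixed='C'):
    # A causal reasoner holds the opponent's action fixed; defection
    # then dominates whatever that fixed action is.
    return max(['C', 'D'], key=lambda a: PD[(a, opponent_fixed)][0])

print(timeless_choice())   # 'C' -- cooperate with your own copy
print(causal_choice())     # 'D' -- defect against a fixed opponent
```

The contrast is the whole point: the two reasoners disagree only because one of them represents the correlation between the two decision processes and the other does not.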
"Fundamental properties of the world in which all life evolved, such as space, time, force, energy and audio frequencies, are modeled in physics and engineering with differentiable manifolds. A central question of neurophysiology is how information about these quantities is encoded and processed. While the forces of evolution are complex and often contradictory, the argument can be made that if all other factors are equal, an organism with a more accurate mental representation of the world has a better chance of survival. This implies that the representation in the central nervous system (CNS) of a physical phenomenon should have the same intrinsic mathematical structure as the phenomenon itself. The philosophical principal, put forth by Monad (1971) and others, that under certain conditions, biological evolution will form designs that are in accordance with the laws of nature is referred to as teleonomy.
All of the diverse sensory input an organism receives must be combined with internal mental state and integrated together to form a coherent understanding of the environment and a single plan of action. For this to happen, all of the manifolds must be in some way unified. A common assumption is that all of the “lowlevel” manifold representations are converted to a set of “highlevel” symbols and that these highlevel symbolic representations are the basis for the unification. A central thesis of this article is that this need not be the case; we can leave the sensory input representations in their multidimensional form and instead create a unified system of computational manifolds."
http://www.gmanif.com/pubs/TRCIS060203.pdf
"Modeling, a sophisticated form of abstract description, using mathematics and computation, both tied to the concept of number, and their advantages and disadvantages are exquisitely detailed by Robert Rosen in Life Itself, Anticipatory Systems, and Fundamentals of Measurement. One would have hoped that mathematics or computer simulations would reduce the need for word descriptions in scientific models. Unfortunately for scientific modeling, one cannot do as David Hilbert or Alonzo Church proposed: divorce semantics (e.g., symbolic words: referents to objects in reality) from syntax (e.g., symbolic numbers: referents to a part of a formal system of computation or entailment). One cannot do this, even in mathematics without things becoming trivial (ala Kurt Godel). It suffices to say that number theory (e.g., calculus), category theory, hypersets, and cellular automata, to mention few, all have their limited uses. The integration between all of these formalisms will be necessary plus rigorous attachment of words and numbers to show the depth and shallowness of the formal models. These rigorous attachments of words are ambiguous to a precise degree without the surrounding contexts. Relating precisely with these ambiguous words to these simple models will constitute an integration of a reasonable set of formalisms to help characterize reality."
http://edgeoforder.org/pofdisstruct.html
