May 18, 1994
Paul Estin

A long essay combining:
(1) "Things I've been thinking about lately"
(2) A personalized outline of the book _Complexity_ (by M. Mitchell Waldrop)
(3) Remaining questions, plus, how it all applies back to my research

[Note: Although this essay is nearly 50K long, it is written very sketchily. While I've tried to give examples where possible, the writing gets very abstract and high-falootin' in parts. I will happily re-explain and/or expand upon virtually any phrase or idea contained herein. Indeed, there are places in which a couple of seemingly throwaway words have formed the basis for entire essays I've written some time previously. But this essay was mostly written as an outline for myself, so there are bound to be portions of it which are indecipherable to others.]

(1a) "Things I've been thinking about lately"-- introduction

In recent months, several ideas have been combining in my mind. I seem to have reached a "critical mass" in terms of both experience in the world and knowledge of various academic subjects. A major factor was taking my prelims in cognitive psychology (including some Artificial Intelligence) in August 1993, but I'd also been reading and thinking about philosophy (of mind, of science, of ethics), economics and politics, biology and evolution, computer systems, even a bit of physics and chemistry. I'd been reading Douglas Hofstadter and Daniel Dennett's collection _The Mind's I_, talking to my housemate about economics, TA-ing a "Judgment and Decision Making" class, and researching object recognition with Ed Smith and Doug Medin and "what makes an argument convincing" with Frank Yates. Over January-April 1994 I sat in on an undergraduate seminar run by John Lawler, a professor of linguistics, in which we read Hofstadter's _Gödel, Escher, Bach_, Gregory Bateson's _Mind and Nature_, and Lakoff and Johnson's _Metaphors We Live By_.

What had started happening was this: I was realizing that several "revelations" were really the *same* revelation, in different domains or situations. Let me start with an early example.

In mid-1992, I had a political revelation. I had the notion that it's not *merely* true that "good intentions are not enough", but that, furthermore, good intentions are *secondary* in importance to basic policy principles. I was actually more willing to vote in favor of a candidate "A" whom I didn't like, but who I thought had basic policies that would lead to good results, over a candidate "B" who I thought had wonderful intentions-- a good person whom I'd happily invite to dinner-- but who believed in dysfunctional principles. For example, one principle I believe is that government should NOT try to micro-manage the economy-- it should use tradeable incentives instead of inflexible regulations for controlling pollution, say. I believe that such action leads to better overall results; a better economy helps the poor to not be poor, for example. So, if I didn't particularly like Candidate A but their previous legislative actions had shown that they believe much the same as me about government micromanagement, I'd probably vote for him/her over candidate B who "cares more" but who wants to create a "Federal Bureau to Help the Poor" that (I believe) would increase bureaucracy, hurt the economy, and keep the poor mired in poverty. (I'm exaggerating slightly, but, yes, later that year I held my nose and pulled a lever for Bush instead of Clinton.)
At roughly the same time in 1992, I had a revelation in the realm of interpersonal relationships. I was starting to realize that however much a friend or SO "cared" about my well-being, if he/she didn't agree with some basic "principles of operation" like honesty and reliability, they often hurt me worse than someone I didn't trust in the first place. What good is "not meaning any harm" or "being sorry someone got hurt" if the person goes right ahead and hurts someone anyway (to my mind because he/she follows principles which tend in general to get people hurt)?

Then I realized that the two revelations, political and personal, were really the *same* revelation. More abstractly: with any kind of semi-hierarchical system, there's really a great limit to how well top-down processing can/will operate. In the domain of health, say, while *wanting* to be healthy is clearly important (and has effects on one's behavior), it's one's actual diet, exercise, and lifestyle behavior that are more important in determining whether one stays healthy or suffers an accident or disease. At a high level, there exist "intentions" or high-level commands; at medium levels, there exist "processes" which operate using various "principles" and organization; at the bottom level, there are the actual results. While judging from results *alone* isn't fair (except in the infinite "long run" of all possible hypothetical cases), it also isn't appropriate to judge from high-level commands or intentions alone. Intentions, processes and principles, and results-- they're *all* important. Often (always?) it's the middle levels which are the most "telling."

Later, this "double-revelation" synched up with *other* revelations I'd been having. I started viewing all sorts of different things as "multi-leveled systems", with internal and external feedback, and organization. For example, I started thinking about personal morality in those terms. To a utilitarian, "whatever works" is "right"-- i.e., one judges from results. To an intentionalist, "good intentions" are all that matter. Others have morality based on various principles and processes, be it "do what feels right" or a Catholic religious morality containing complex heuristics grounded in more general principles (some more transparent than others).

Here's the situation: ideally we'd all like to do "what works", but in a complex and changing world, one *has* to form generalizations, or else no learning about "what works" can take place, since *exact* repetitions of situations do not occur. Thus, FEEDBACK is important-- one needs "reality checks"-- but feedback is only "accurate" at the lowest level (a specific situation), and must be subjectively interpreted at higher levels (asking the question, "Exactly what went wrong?") in order to be useful in altering one's personal heuristics, principles, and overall moral structure (though there may exist a "highest level" which is virtually immune to feedback, the "end goals" or "basic assumption principles"). So internal CONSISTENCY is also important. Not having double standards, for example. Or, learning that what works in a familiar situation may also be applicable in an unfamiliar one (top-down application of principles).

(Side note: the notion that consistency *alone* is very powerful comes from results in the field of judgment and decision making which I learned from Frank Yates. It turns out that, in terms of judgment accuracy, having a "pretty good" model and applying it *consistently*-- something computer models often do better than humans-- often produces judgment results superior to those made by a "smarter"/ more discriminating/ more informed judge who is less consistent.)
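To make that side note concrete, here is a minimal sketch (a toy Monte Carlo in Python-- my own illustration, not anything from Yates or the judgment literature): a crude but perfectly consistent three-cue model goes up against an "expert" judge who knows the true cue weights but applies them erratically, and the consistent model ends up with the lower error.

    # Toy sketch of "consistency beats inconsistent brilliance" (my own
    # illustration).  Hypothetical judgment task: predict a criterion from
    # three cues.  The "model" judge uses crude but fixed equal weights; the
    # "expert" judge knows the true weights but applies them erratically.
    import random

    random.seed(1)
    TRUE_W = [0.5, 0.3, 0.2]           # true cue weights (assumed for illustration)

    def criterion(cues):
        noise = random.gauss(0, 0.3)   # irreducible noise in the world
        return sum(w * c for w, c in zip(TRUE_W, cues)) + noise

    def model_judge(cues):
        return sum(cues) / 3.0         # crude equal weights, applied identically every time

    def expert_judge(cues):
        erratic = random.gauss(0, 0.8) # distraction, fatigue, whim
        return sum(w * c for w, c in zip(TRUE_W, cues)) + erratic

    def mean_sq_error(judge, trials=5000):
        err = 0.0
        for _ in range(trials):
            cues = [random.gauss(0, 1) for _ in range(3)]
            err += (judge(cues) - criterion(cues)) ** 2
        return err / trials

    print("consistent crude model:", round(mean_sq_error(model_judge), 3))
    print("inconsistent expert:   ", round(mean_sq_error(expert_judge), 3))

The exact numbers don't matter; the point is that the execution noise swamps the expert's advantage in having better weights.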
As regards moral philosophy, I now find myself being a sort of "high-level utilitarian"-- I do what I believe "works" (feedback), but the beliefs are grounded in principle (consistency). When principles conflict with emotions, I decide which to change based on both experience and consistency. Thus, a principle isn't a principle if it doesn't sometimes make one say "no" when one feels like saying "yes", and vice versa; yet principles must be subject to feedback as well.

Overall, a moral system is a matter of personal adaptation, with processes which are both "bottom-up" (reality provides feedback to the system) and "top-down" (applying principles to form heuristics and heuristics to specific situations), both modified by personal interpretation. So, my principles are semi-stable and at the same time dynamic, a complex mix of "what principles I was taught" with "what my experience in the world teaches me." If I'd had different experiences, especially early on, I'd have been a different person with a different morality. (And this isn't just true of moral philosophy; it's true for my "belief system" more generally-- my mental models of how I believe the world works.)

This lack of inevitability is a slightly scary notion. Earlier in life, I'd sometimes consoled myself that events were more or less inevitable. I did less panicking and fretting that way! It didn't really matter to make a quick decision to quit being a physics major in college, for example, because if physics wasn't right for me, I'd figure it out sooner or later, and wind up in the field I *was* suited for. It didn't really matter what happened to me, in general, because I'd wind up being more or less the same person regardless. So, to now realize, *really* REALIZE deep down, that very small differences in personal decisions could dramatically affect later results (a la chaos theory) as to personal identity... well, that was a sobering thought.

(1b) "Things I've been thinking about lately"-- some concepts, ideas, and models

Well, I could go on and on along the previous line of thought, but let me instead try to summarize some of my other "basic principles" of "how the world/universe works", to give you an idea of some of my main intellectual thoughts just prior to reading the book _Complexity_. I suppose that what I'm really striving for is a "theory of everything", not in the reductionistic sense of the physicists' "Grand Unified Theory", but in the sense of learning abstractions and analogies and metaphors to link together different complex systems.

Lots of different SYSTEMS are "similar" in some deep sense, and they can be understood better by setting up analogies and figuring out commonalities between domains. "There are *fewer* things in Heaven and Earth than are dreamt of in your philosophy;" there are only so many ways internal and external feedback can be put together, only so many basic patterns. So, if you figure out something about a system in one domain, it might help you understand a system in a different domain. And if you figure out some abstractions about a system, they might apply to multiple domains.
So, for example, my abstract visualization of levels and feedback and consistency applies to moral philosophy, economic systems, biological evolution, capitalism, ideology and religion, interactive entertainment, representative democracy, government programs, insurance, emergent properties of the mind, the value of role-playing and simulation to include more feedback in a representation, artificial intelligence, object-oriented programming, educational systems, argumentation, the news media... virtually everything!

ABSOLUTISM vs. RELATIVISM: The world often consists of local (temporary?) equilibria, not universal absolutes. "Truth" is partially subjective and "relative", but note that it isn't *arbitrary*-- it's grounded in one's personal, local experience within a domain or kind of situation; to the extent experiences are universal, "truth" will be universal. The only way to get a sense of how universal something is, is to look for examples "out there". For example, in determining the universals of cognition, cross-cultural research is important.

Still, INDIVIDUAL EXPERIENCE *is* important. "Learning" and "understanding" often come from metaphor and analogy, coming from personal experience and example, not abstraction. "Meaning" can't always (ever?) be laid out completely by giving an exact definition, because it's internal and comes from reference (to other terms and to externals); since two people's experiences are bound to be different, a lot of semantic arguments result. One of the problems with cognitive psychology is that individual differences are brushed aside in the interest of simplicity; while that's necessary at times in order to get data at all, a lot of interesting phenomena are (at best) ignored or (at worst) misunderstood. (And yes, I plead guilty to ignoring individual differences in my own research.)

FUZZY CATEGORIES. Categories don't have precise or predefined boundaries; they aren't objectively "real". People form categories and concepts because such generalizations are *useful*, for the sake of explanation and prediction. *Naming* a generalized category makes the commonalities between members easier to grok. However, people often get into silly arguments when they insist on pushing the use of categories beyond their usefulness, believing, for example, that political terms like "liberal" and "conservative" are *real* and *definable*, and that an individual *must* either belong or not belong to a category. But categories don't have to be binary. Political terms (and other categories) are just approximations, MODELS. Categories don't have exact boundaries, and there may exist more accurate approximations such as "socially liberal but fiscally conservative". However, as in the example, the more accurate approximations may be more complex, having more interrelations and free parameters. Overall, there are probably endless varieties of models, which can be "better" in different ways; some are simpler, have fewer parameters, and are easier to apply, others are more accurate if and when their parameters *can* be fit. Thus, no one category is necessarily "right". It depends on what you're using it for and what information is available; categorization (modeling) doesn't exist in a vacuum. (Which is to say, the term "categorization" is itself a fuzzy category!)

FUZZY SCIENCES: All systems are fuzzy to varying degrees, as are the sciences which study them. Granted, when the goal is "to understand the universe", one has to divide up the subject *somehow*.
However, there are costs and missed opportunities involved in forming a particular categorization, one which stays stuck in place due to tradition and bureaucracy. By focusing on one part (a biological cell, for example), one tends to miss higher levels and interrelations (interactions with other cells, organs, individuals, etc.). In science, multidisciplinary work-- learning across domains-- is tough to do because academic fields (categories) are divided from one another.

Expanding on METAPHOR... often, in trying to understand a complicated "system" (e.g. the U.S. Congress, General Motors, the educational system), people PERSONIFY it; that is, they use a "person" metaphor. This makes some sense; a "social person with desires and intentions" is the only experiential metaphor some people *have* for understanding a complex system. The problem is, it's not always accurate. For example, in *some* ways it's useful to talk about General Motors having "intentions", but in other ways that's an incomplete or misleading metaphor. It's useful to have other models and metaphors available. For example, anyone who's played around with Conway's Game of Life knows that sometimes complex behavior is an EMERGENT PROPERTY of a system; there's no way to reduce a higher-level phenomenon to a lower-level entity. "The whole is greater than the sum of its parts"; the *interrelations* of a system are critical, not just its parts. That's the only way to start explaining "consciousness", for example; there isn't any higher-level "consciousness central control."

I've been talking about hierarchies, but there are also cases which *aren't* clean hierarchies. To use Hofstadter's terms, there exist STRANGE LOOPS and TANGLED HIERARCHIES. In divvying up cognitive psychology, for example, only the *lower* levels are somewhat cleanly defined: perception is "modularized" and is rather separate from higher-level cognitive processes, so one can model the system largely in terms of bottom-up processing *only*. That's a nice situation to study, where it occurs. But higher levels of cognition are progressively "mushier" and more intertwined. For example, categorization depends on similarity, but "similarity" isn't a cleanly defined "primitive"; it itself depends partly on categorization.

(2) A personalized outline of the book _Complexity_ (by M. Mitchell Waldrop)

All those concepts, and more, were bouncing around in my head when I finally got around to reading a book which a cousin had suggested to me last summer, _Complexity_, by M. Mitchell Waldrop. _Complexity_, you see, turned out in many ways to be *exactly* what I needed to read. Having realized what the world is NOT (it isn't deterministic, it doesn't settle towards an optimal equilibrium, it isn't completely objective, etc.), I wanted to start thinking about what it WAS. (Disclaimer: As mentioned above, I believe there is no truly objective "reality", but there *are* comparatively better approximations and models of "reality". I *don't* throw up my hands and say, "It's all relative." Far from it.)

So let me go through the book, chapter by chapter, isolating some main points. Within and after each chapter summary I'll highlight (in brackets) ways in which my own thinking was affected. (I should note that _Complexity_ is not just about the concepts of complex adaptive systems. It's also a fascinating personal history of the founding and early work of the Santa Fe Institute and the people comprising it, and it contains many revealing looks into pragmatic academic politics.
But I'm going to focus just on the concepts outlined in the book.)

Ch.1 "The Irish Idea of a Hero" (mostly about Brian Arthur)

Neo-classical economic theory, which dominated economics for a generation, treats the economy with theories which assume that returns are always decreasing, and that, "left to themselves", systems reach a stable (calculable) equilibrium point. While many aspects of economics *can* be successfully approximated that way, many cannot (e.g. technological change, stock market behavior). Arthur studied "increasing returns", situations in which "them that has, gets." VHS was technically inferior to Betamax, but just a few more people happened to buy VHS machines early on, which led to more VHS movies being available, which led to even more people choosing VHS over Beta, and so forth until VHS became the standard. There were *increasing* returns, or, to use engineering terms, there was POSITIVE feedback, not NEGATIVE feedback. Other examples of a particular standard getting "locked in" over possibly superior competitors: the QWERTY keyboard, the internal combustion engine over the steam engine, the "light water" method of cooling nuclear reactors.

Table from p.37 (made by Arthur in 1979):

OLD ECONOMICS                           NEW ECONOMICS
* Decreasing returns                    * Increasing returns
* Based on 19th-century physics         * Based on biology (structure,
  (equilibrium, stability,                pattern, self-organization,
  deterministic dynamics)                 life cycle)
* People identical                      * Focus on individual life; people
                                          separate and different
* If only there were no externalities   * Externalities and differences
  and all had equal abilities, we'd       become driving force. No Nirvana.
  reach Nirvana.                          System constantly unfolding.
* Elements are quantities and prices.   * Elements are patterns and
                                          possibilities.
* No real dynamics in the sense that    * Economy is constantly on the edge
  everything is at equilibrium.           of time. It rushes forward,
                                          structures constantly coalescing,
                                          decaying, changing.
* Sees subject as structurally simple.  * Sees subject as inherently complex.
* Economics as soft physics.            * Economics as high-complexity science.

Philosophically, Arthur ran into the problem that, while it's well and good to say of a problem, "It's not so simple", people tend not to be very satisfied being told merely that "the problem is inherently indeterminate." One can't just tear up an old paradigm; one has to also provide a viable alternative. Later chapters start getting a better handle on that problem, in part by using "computer experiment" (simulation). Nevertheless, there are limits; those who believe that "science" necessarily involves *prediction* (and not merely *explanation*) are going to be disappointed. But evolutionary biology, geology, and astronomy consist almost entirely of explanations rather than predictions, and *they* are all sciences, presumably. If one only follows the old paradigm of classical physics, a lot of interesting phenomena turn out to be inherently unstudyable.
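The lock-in dynamic is easy to caricature in code. Here's a toy sketch of my own (in Python-- not one of Arthur's actual models): each new buyer tends to follow the current majority, so which standard "wins" is decided by accidents early in the run.

    # Toy sketch of increasing returns / lock-in (my own illustration, not a
    # model from the book): each new buyer follows the current market leader
    # with high probability, so tiny early accidents decide which standard
    # ends up dominating.
    import random

    def adoption_run(buyers=2000, follow_majority=0.8, seed=None):
        rng = random.Random(seed)
        users = {"VHS": 1, "Beta": 1}
        for _ in range(buyers):
            leader, trailer = sorted(users, key=users.get, reverse=True)
            if users[leader] == users[trailer]:       # tied: flip a coin
                choice = rng.choice([leader, trailer])
            elif rng.random() < follow_majority:      # positive feedback
                choice = leader
            else:
                choice = trailer
            users[choice] += 1
        return users

    for seed in range(5):
        u = adoption_run(seed=seed)
        winner = max(u, key=u.get)
        print(f"run {seed}: {winner} ends up with "
              f"{u[winner] / sum(u.values()):.0%} of the buyers")

Different random seeds hand the market to different standards, even though neither is "better" in the model-- which is the path-dependence point.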
[Arthur's whole idea reminds me most of the "punctuated equilibrium" model of biological evolution, as popularized by Stephen Jay Gould.]

[I've long been against the idea of government subsidies-- why should the government be able to pick "winners" better than individual investors working with their own money?-- and previously the only justification I've been able to think of has been to support government subsidies in areas where the potential payoff is just too far in the future, and requires too great an initial investment, e.g. space tech. Arthur's economics have added a new wrinkle-- there are cases of "startup industries" in which big changes in the eventual marketplace dynamic can be made by small changes early on. Of course, the question still remains as to why the government can make picks better than individuals, but at least in this case the payoffs are potentially large enough that the gamble isn't too bad. I remain opposed, however, to government bailouts of mature industries.]

Ch.2 "The Revolt of the Old Turks" (George Cowan, Murray Gell-Mann, Philip Anderson, Ken Arrow, and the start of putting together the Santa Fe Institute)

Computer simulation is starting to become a "third science", halfway between theory and experiment. Computers allow tackling problems that were never tractable before, by allowing the construction of entire self-contained worlds, involving enormous amounts of calculation.

In linear systems, the whole equals the sum of its parts. Such systems are relatively tractable to analyze, reductionism "works" fairly well for understanding them, and prediction is easily possible. A lot of nature *is* linear. But a lot of nature is *not* linear. In nonlinear systems, little changes can produce big (and surprising) changes; systems are *dynamic*. "The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe." - Philip Anderson, 1972.

[Nothing really new here for me, mostly just a restatement of things I knew from chaos theory. The "third science" bit is an interesting way to think about computer simulation, though. Sometimes one forgets just how revolutionary computers can be.]

Ch.3 "Secrets of the Old One" (about Stuart Kauffman's work)

Stuart Kauffman's "vision" is to explain the inherent tendency towards dynamic order, including self-organization-- matter's attempts to organize itself into ever more complex structures, even in the face of the second law of thermodynamics ("In a closed system, entropy increases."). Early on, he studied dynamics of regulatory gene networks (using simple two-gene-connection simulations) as an example of nonlinear dynamics tending towards *some* sort of dynamic order.

Kauffman also wondered about a Big Question: what is the origin of life? Are self-organization and self-replication so tied to the specifics of "DNA and proteins" that life does not exist without them? If so, then the origin of life was a bizarrely improbable random fluke indeed; yet, what is the beginning of life, if not with the first DNA or the first protein set? Kauffman imagined another start. "Imagine that you had a primordial soup containing some molecule A that was busily catalyzing the formation of another molecule B. The first molecule probably wasn't a very effective catalyst, since it essentially formed at random. But then, it didn't need to be very effective. Even a feeble catalyst would have made B-type molecules form faster than they would have otherwise. Now, suppose that molecule B itself had a weak catalytic effect, so that it boosted the production of some molecule C. And suppose that C also acted as a catalyst, and so on... somewhere down the line ... a molecule Z (might) close the loop and catalyze the creation of A... The compounds in the soup could have formed a coherent, self-reinforcing web of reactions... It would have become an 'autocatalytic set.'" (pp. 123-124)
Once a sufficiently complex autocatalytic set is in place, it can grow and "metabolize" and even "reproduce" by sloshing over into a neighboring pond. If the neighboring pool already has an autocatalytic set of its own, you get competition, and, thus, natural selection. Simulations show that beyond a certain point of complexity, as measured by the number of types of molecules in a pond, the formation of an autocatalytic set is almost inevitable. And some autocatalytic sets are open to adaptation-- small changes don't destroy the cycle, but instead may even improve it. In fact, the more complex a set is, the more room there is for adaptation. You get a web of connections with ever-increasing complexity and flexibility. Eventually, you might even get the precursors of cell walls, DNA, and proteins. But no matter what you end up with, "the ball is already rolling."

The *economic* event analogous to an autocatalytic set would be the case when a country's economy grows and diversifies enough to attain a certain level of complexity; then, it undergoes an explosive increase in growth and production which economists refer to as an "economic takeoff".

Complexity is similar to chaos theory-- complex chaotic behavior can emerge from very simple nonlinear interactions-- but whereas chaos theory contains only one set of forces and no sense of adaptive change, complexity theory examines the emergent behavior of complex adaptive systems formed of different agents.

[This was an "Oh, wow!" chapter that helped me clarify ideas about emergence of dynamic order. As a side note, it also would have convinced me, if I hadn't been convinced already, that free trade is very important, because it allows for an easier "economic takeoff" by increasing the effective "critical mass" (size and complexity of an economy).]

Ch.4 "'You Guys Really *Believe* That?'" (Brian Arthur gives his talk on increasing returns; the physicists and the economists begin to find common ground.)

[It's interesting that physicists aren't as mathematically rigorous as economists. Physicists get a "sense" of a system first; economists plug away with fantastic mathematical complexity, oblivious to reality checks. :-)]

Ch.5 "Master of the Game" (about John Holland's work)

John Holland studied "complex adaptive systems", including brains, ecologies, immune systems, cells, developing embryos, ant colonies, economies, political parties, and scientific communities.

(1) All of these consist of a network of many "agents" (neurons, species, etc.) which work in parallel, within an environment produced by an agent's interactions with the other agents in the system. The control of a complex adaptive system tends to be highly dispersed; there is no "master neuron" in the brain. Coherent behavior arises from competition and cooperation among the agents themselves.

(2) A complex adaptive system has many levels of organization, with agents at one level serving as the building blocks for agents at a higher level. Complex adaptive systems are constantly revising and rearranging their building blocks as they gain experience. At some deep, fundamental level, these processes of learning, evolution, and adaptation are the same.

(3) All complex adaptive systems anticipate the future, not necessarily consciously, by making predictions based on *implicit* internal models.

(4) Complex adaptive systems typically have many *niches*, each one of which can be exploited by an agent adapted to fill that niche.
Holland went further than *talking* about all these ideas; he had actual working computer simulations-- he had started formulating them as far back as the early 1960s! The original "genetic algorithm" worked as follows:

(1) Start with a population of "digital chromosomes".

(2) Test each chromosome on the problem at hand by running it as a computer program, and then giving it a score ("fitness") that measures how well it does.

(3) Take those individuals which are fit enough to reproduce (the rest die), and create a new generation of individuals through sexual reproduction (like chromosomal crossover in real biological organisms... intermix chromosomes ABCDEFG and abcdefg to form, perhaps, children ABCDefg and abcdEFG.)

(4) Repeat the cycle, with the children competing against their parents.

The "genetic algorithm" converges to an optimal (or at least very, very good) solution quite rapidly, without ever having to know beforehand what the solution is.
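That recipe is small enough to fit on a page. Here's a toy sketch of my own (in Python, not Holland's code), with bit-string chromosomes, a stand-in fitness function that just counts 1s, survival of the fitter half, one-point crossover, and a little mutation:

    # Toy genetic algorithm in the spirit of the description above (my own
    # sketch, not Holland's implementation).  Chromosomes are bit strings;
    # "fitness" just counts 1s; the fitter half survives; children are made
    # by one-point crossover plus occasional mutation, and then compete
    # against their parents in the next round.
    import random

    random.seed(0)
    LENGTH, POP, GENERATIONS = 40, 30, 60

    def fitness(chrom):
        return sum(chrom)                  # stand-in score for "the problem at hand"

    def crossover(mom, dad):
        cut = random.randrange(1, LENGTH)  # one-point crossover:
        return mom[:cut] + dad[cut:]       # ABCDEFG x abcdefg -> ABCDefg

    def mutate(chrom, rate=0.01):
        return [bit ^ 1 if random.random() < rate else bit for bit in chrom]

    population = [[random.randint(0, 1) for _ in range(LENGTH)]
                  for _ in range(POP)]
    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP // 2]            # the rest "die"
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP - len(survivors))]
        population = survivors + children             # children vs. parents
        if gen % 20 == 0 or gen == GENERATIONS - 1:
            best = max(fitness(c) for c in population)
            print(f"generation {gen:2d}: best fitness = {best}")

Waldrop describes the children as coming in complementary pairs (ABCDefg and abcdEFG); the sketch keeps just one child per mating, and the fitness function is a placeholder for whatever problem the chromosomes actually encode.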
Holland's ideas were quite foreign to the AI community of that time, and to some extent AI still hasn't fully grasped them; in particular, standard AI has little interest in learning. "We can add that on later" is the implicit belief. Also, symbolic processing tends to be very rigid in AI programs. How does b-i-r-d *mean* everything that "bird" means? It doesn't. It can't. To really build understanding into the system, you have to make meaning *emergent*, not build it in top-down and coded to a single symbol.

Later, Holland developed "classifier systems", which modified the top-down, externally-imposed "fitness ratings" he'd used earlier. The "chromosomes" had already been turned into "classifiers" (self-activating if-then rules which posted "messages" on a "bulletin board"). Whereas standard AI implemented top-down conflict-resolution procedures to settle which of two incompatible messages actually gets applied, Holland set up an "auction". The classifiers weren't *rules* so much as competing *hypotheses*, each with a plausibility (strength) which formed a basis for bidding. The plausibility ratings were still somewhat arbitrary and externally-imposed, so Holland later replaced the auction system with something more analogous to a full-fledged free-market economy, with the reinforcement of agents taking place via a profit motive. "Messages on the bulletin board" became goods and services up for sale. Agents are firms which bid for supplies and (indirectly, via the messages from the "last round") each others' services. Rewards are passed directly to the final finishing agent but also indirectly, since the finishing agent had to pay its "supplier" agents. This method of indirect reward is called the "bucket brigade" algorithm.

An early version of the classifier system (1978) learned how to run a maze (ten times faster with the genetic algorithm than without), and it showed "transfer", applying rules used in one maze to run other mazes later on. Holland's graduate students later applied the classifier system to a wide variety of problems, from learning to play poker, to negotiating a simulated "food and poison environment" requiring a mental map, to controlling a simulated gas pipeline.

[John Holland has an office about 25 yards from mine, though he's not around a lot.]

[For prelims, I'd already read a late-'80s paper by Holland et al., so the classifier system wasn't completely new to me. But the first time I'd read about it, my reaction was sort of a "well, gosh, it's nice to see someone working on 'learning in an inconsistent environment'"; while I felt a sense of "wow", I didn't yet have the preparation to *really* understand its importance as a working simulation of true emergence. It's just amazing how much Holland was ahead of his time; the guy's a genius. So I suppose it's fitting that he got a MacArthur "genius grant" a couple years ago.]

[It's interesting to note Holland stressing the *unimportance* of consistency, compared to my view. His point is that in an uncertain and unstable world, consistency is overrated. I think we might be meaning slightly different things by "consistency", though.]

[I'm still working on understanding some details of what the classifier "rules" really look like, and so forth. I don't quite have a good feel for all of Holland's system.]

[Furthermore, I'm trying to decide to what extent the system works with hard-and-fast categories instead of "fuzzy" ones, and hierarchical systems instead of "messy hierarchical" ones. Is the classifier system really so rigid, or is that just the way Holland (via Waldrop) describes it, for simplicity's sake? Because if it *is* that rigid, then there's room for improvement, in my opinion.]

Ch.6 "Life at the Edge of Chaos" (artificial life; complexity as the dynamic semistable "edge" between static order and unstable chaos)

Chris Langton started by playing around in 1971-2 with Conway's Game of Life. It's a simple computer program "universe" consisting of a grid of blocks which are either "alive" or "dead" and some simple rules to determine what the "next generation" would look like. If a "cell" had two or three "neighbors" (out of a possible eight) it would live on to the next generation; too many or too few, and it would die. If a blank spot was surrounded by exactly three neighbors, a new cell would be "born" at that location. Despite the simple "cartoon biology", if one starts with a sufficiently complex starting grid, the rules usually generate behavior that seems intuitively "lifelike"-- interacting patterns of cells, "oscillators", "gliders" that move across the screen reproducing their own pattern every fourth generation, "glider guns", "glider eaters", etc. Emergent behavior, with a vengeance!
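Those rules fit in a few lines. A minimal sketch (mine, not from the book) that steps a small random grid forward and prints the final state; the grid wraps around at the edges:

    # Conway's Game of Life, as described above: a live cell with two or
    # three live neighbors survives; a dead cell with exactly three live
    # neighbors is born; everything else dies.  (My own minimal sketch.)
    import random

    SIZE, STEPS = 20, 30
    random.seed(2)
    grid = {(r, c) for r in range(SIZE) for c in range(SIZE)
            if random.random() < 0.35}               # random starting soup

    def neighbors(r, c):
        return [((r + dr) % SIZE, (c + dc) % SIZE)   # wrap at the edges
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)]

    def step(live):
        new = set()
        for r in range(SIZE):
            for c in range(SIZE):
                n = sum(nb in live for nb in neighbors(r, c))
                if ((r, c) in live and n in (2, 3)) or ((r, c) not in live and n == 3):
                    new.add((r, c))
        return new

    for _ in range(STEPS):
        grid = step(grid)

    for r in range(SIZE):
        print("".join("#" if (r, c) in grid else "." for c in range(SIZE)))

Run it with different seeds and you get the same qualitative zoo described above: most of the soup dies back into blobs and oscillators, with the occasional glider crawling across the grid.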
"Artificial life" began as an attempt to capture evolution (biological or cultural) in the same sense as artificial intelligence attempts to capture neuropsychology. Conway's Game of Life is just a special case of finite "cellular automata", with just two states ("alive" and "dead") and a few rules. John von Neumann had developed a 29-state, large-grid automaton pattern which was self-reproducing. Langton discovered a pattern that was smaller and worked with a mere eight states. (An "arm" grows out, curls back on itself, and makes a loop identical to the starting point.)

Stephen Wolfram contended that all cellular automata rules fell into one of four *universality classes*. Class I are the "doomsday rules"-- no matter what pattern of living and dead cells you start with, they all die fairly rapidly. Class II rules take virtually any pattern and turn it into something only slightly more interesting than doomsday-- a stable set of blobs, perhaps a few periodic oscillators as well. Class III rules went to the opposite extreme-- total chaos, with nothing stable and nothing predictable; structures would break up almost as soon as they formed. Finally, there were Class IV rules, which included the rare, impossible-to-pigeonhole rules that produced coherent structures that propagated, grew, split apart, and recombined in an intuitively "lifelike" way. They essentially never settled down.

Langton tried to find a pattern, some parameter that predicted which class a set of rules would belong to. It turns out to be a very simple parameter "lambda", just the probability that any given cell would be "alive" in the next generation. If lambda is near 0 (or near 1, which is just the mirror image), the rules will fall into Class I or II. Near 0.5, the rules fall into Class III. But for rules which had a lambda near 0.273 (including Conway's Game of Life), you got Class IV.
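One way to see where that 0.273 comes from (my own reading of the loose definition above, not Langton's actual derivation): treat lambda as the fraction of all 2^9 = 512 possible 3x3 neighborhoods whose center cell comes out alive on the next step. For Conway's rule, that fraction is 140/512:

    # Lambda for Conway's rule, under one reading of the definition above
    # (my own calculation): the fraction of all 512 possible 3x3
    # neighborhood configurations whose center cell is alive next step.
    from itertools import product

    alive_next = 0
    for center, *neighbors in product((0, 1), repeat=9):
        n = sum(neighbors)
        if (center and n in (2, 3)) or (not center and n == 3):
            alive_next += 1

    print(alive_next, "of 512 ->", alive_next / 512)   # 140 of 512 -> 0.2734375

which lands right at the "edge of chaos" value quoted above.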
Langton went on to set up some detailed analogies (p. 234):

* Cellular Automata Classes: I & II -> "IV" -> III
* Dynamical Systems (Chaos Theory): Order (single point attractor or set of periodic attractors) -> "Complexity" -> Chaos (strange attractors)
* Matter: Solid -> "Phase Transition" -> Liquid
* Computation: Halting -> "Undecidable" -> Nonhalting
* Life (very hypothetical and unproven compared to the others): Too static -> "Life/intelligence" -> Too noisy

[Langton's analogies fit with the "feedback systems" idea I'd been thinking about but couldn't verbalize fully. On the one hand, many systems "need more feedback" to be more successful. Take people's "belief systems", for example. People's principles and heuristics often seemed to me to be inherently "too stable" and not sufficiently open to feedback. People ignore "reality checks" that should force them to notice holes and flaws in their belief systems and improve them.]

[But clearly "more feedback" or "more openness to feedback" wasn't the whole story, which is why I tried to work in "consistency"-- though that isn't enough either. In the extreme case, you can imagine bombarding someone with so much information, disproving any conjecture they make the instant they make it, that the person gives up and concludes that the incoming information is completely random and unpredictable.]

[Well, using Langton's analogies, people can indeed have belief structures which are "too orderly" and need more feedback to be forced to change to a more optimal state, but they can also be "too chaotic" and fail to coalesce at all. Real people, in fact, have belief systems which are both insufficiently sensitive to feedback and insufficiently consistent. And yet-- that fits too; to be like Langton's analogies, belief systems *should* be a mixture of stability and dynamism.]

[Come to think of it, Langton's 0.00 -> 0.273 -> 0.50 reminds me vaguely of Frank Yates' probability judgment analyzer.]

Ch.7 "Peasants Under Glass"

Craig Reynolds had a simulation in which behavior very much like birds' flocking was an emergent property of simple rules, applied at the level of individual "boids". Brian Arthur had a vaguely defined dream-- if flocking behavior could be simulated with such simple rules, wouldn't it be nice to have an "economy under glass" in which little agents, preprogrammed to get smart and interact with each other, could produce realistic economic behavior? ("Oh, look! This morning they've developed central banking!") It would be a politically acceptable way of pushing the "new approach to economics"-- by tackling one old chestnut of a problem at a time.

Holland faced up to a major philosophical flaw in classifier systems, which was that the "payoff" came via the deus ex machina hand of the programmer. That's "cheating." How can "winning" and "losing" be *internally* defined? "Organisms in an ecosystem don't just evolve, they coevolve. Organisms don't change by climbing uphill to the highest point of some abstract fitness landscape... (The fitness-maximizing organisms of classical population genetics actually look a lot like the utility-maximizing agents of neoclassical economics.) Real organisms constantly circle and chase one another in an infinitely complex dance of coevolution." (p. 259)

What Holland came up with was "Echo", a highly simplified biological community in which digital organisms roam the digital environment in search of the resources they need to stay alive and reproduce: the digital analogues of the water, grass, nuts, berries, etc. When the creatures meet, of course, they also try to make resources out of each other. Echo models an elemental form of interaction; namely, combat. (Flashback to Robert Axelrod's "iterated Prisoner's Dilemma" competition in the late 1970s, in which the "tit for tat" strategy won.) It turns out that in a population of organisms coevolving via the genetic algorithm, either "tit for tat" or a strategy very much like it appears and spreads through the population very quickly.
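The flavor of that result is easy to sketch. Here's a toy of my own (in Python; it uses simple share-proportional reproduction rather than Holland's genetic algorithm, and the standard Prisoner's Dilemma payoffs: 3 for mutual cooperation, 5 for exploiting a cooperator, 1 for mutual defection, 0 for being exploited). Start with equal shares of "tit for tat", "always defect", and "always cooperate", and watch tit for tat take over:

    # Toy illustration (mine, not Axelrod's tournament or Holland's code) of
    # "tit for tat spreads": strategies play 200-round Prisoner's Dilemmas,
    # and each generation a strategy's population share grows in proportion
    # to its average score against the current mix.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(mine, theirs):
        return theirs[-1] if theirs else "C"   # cooperate first, then copy them

    def always_defect(mine, theirs):
        return "D"

    def always_cooperate(mine, theirs):
        return "C"

    def match_score(s1, s2, rounds=200):
        h1, h2, total = [], [], 0
        for _ in range(rounds):
            m1, m2 = s1(h1, h2), s2(h2, h1)
            total += PAYOFF[(m1, m2)][0]       # score from player 1's side
            h1.append(m1)
            h2.append(m2)
        return total

    strategies = [tit_for_tat, always_defect, always_cooperate]
    score = {(a, b): match_score(a, b) for a in strategies for b in strategies}
    shares = {s: 1 / 3 for s in strategies}

    for gen in range(80):
        fitness = {s: sum(shares[o] * score[(s, o)] for o in strategies)
                   for s in strategies}
        mean = sum(shares[s] * fitness[s] for s in strategies)
        shares = {s: shares[s] * fitness[s] / mean for s in strategies}
        if gen % 20 == 0 or gen == 79:
            mix = ", ".join(f"{s.__name__} {shares[s]:.2f}" for s in strategies)
            print(f"generation {gen:2d}: {mix}")

Always-defect holds its own at first by feeding on the unconditional cooperators, but once they're gone it has nothing left to exploit, and tit for tat spreads-- much the same story as the coevolutionary one above.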
Different versions of Echo model stock markets, immune systems, and trading, with minor changes. Eventually Echo-like simulations should allow people to get a "feel" for different policy options, without either having to know all the details, or causing real-world policy disasters. SimCity done large. (So far Washington types are unimpressed, alas.) Meanwhile, stock markets have been modeled which include such real-world behaviors (incomprehensible to neoclassical economics) as "bubbles" of speculation, self-fulfilling prophecies, etc.

[Such "computer experiments" might be the only way to "explain" other sorts of decision behavior, ones which are interdependent like stock market trading.]

[The "landscape" is a pretty good image for belief systems in general, though I'm undecided how well "a person's set of beliefs" should be modeled by an evolutionary vs. a coevolutionary model. In any case, even for a "simple" evolutionary model in which reward is external, there is the idea that one settles into a semi-stable dynamic system of principles. It's not an "optimal" set of beliefs-- there's no such *thing* as an "optimal" model of the way the world works and no way to compute it even if there were-- but it might be a semistable "locally optimal point".]

Ch.8 "Waiting for Carnot"

This chapter goes back to Kauffman's notion that dynamic order is somehow *inevitable*, given a fairly broad set of conditions. If a system is *near* the complexity phase ("edge of chaos"), but is "too orderly" or "too chaotic", mutation and natural selection (or processes analogous to them) will push it further in towards the edge.

[Sure, on planet Mercury "order is forever" and no life occurs, and on Jupiter "chaos is forever" and no life arises. But on an Earthlike system, with the building blocks of carbon compounds, water as a solvent, and lots of other variety of molecules to "play" with... might *some* sort of life be virtually inevitable?]

The Santa Fe group is searching for a putative "new second law". Just as there was a long time between scientists having *notions* like "cold doesn't flow towards hot" and Sadi Carnot coming along in 1824 and giving the first statement of what would come to be known as the Second Law of Thermodynamics (and another 70 years until a rigorous statistical explanation was given), it might be a while before the vague idea that "dynamic order and emergence are sort-of inevitable" can be precisely formulated. Right now, no one even knows what *form* the "new second law" will take.

Compare Santa Fe computer experiments to connectionism. Similarities: both have nodes/agents linked together into a network, and emergent properties arise from the *pattern* of connections. Even Holland's classifier system is like a connectionist model-- the set of nodes = the set of all possible internal messages; connections = classifier rules. Differences: connectionist networks usually have a more limited space of possibilities, with less internal feedback, using only "exploitation" (modifying the strengths of the given connections) and not "exploration" (ripping out old connections and putting in new ones, a la Holland's genetic algorithm). But none of those limitations are *inherently* true of connectionist networks.

A new second law might incorporate Langton's number "0.273" in some way. Another possible piece of the "new second law" (though so far no one knows how to fit it together with Langton's) is given by physicist Per Bak. Imagine a pile of sand on a tabletop, with a steady drizzle of new sand grains raining down from above. Let the pile reach a state of equilibrium (by building the pile too big and then letting the excess grains fall off the table, for instance). The resulting sand pile is self-organized, and in a state of criticality. Add a new grain; maybe nothing happens, maybe there's a tiny shift in a few grains, or maybe there's a dramatic catastrophic landslide which takes off a whole face of the sand pile. The average frequency of a given size of avalanche is inversely proportional to some power of its size. This kind of power law shows up in lots of natural systems, such as the activity of the sun or the flow of water in a river. The sand pile metaphor suggests an explanation: these are all masses of intricately interlocking subsystems just barely on the edge of criticality-- with breakdowns of all sizes ripping through and rearranging things just often enough to keep them poised on the edge. A power law fits for the large-scale behavior of earthquakes, fluctuations in stock prices, and stop-and-go city traffic.
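The sandpile, too, is easy to caricature in code. A minimal sketch (mine, loosely after the Bak-Tang-Wiesenfeld model rather than anything in the book): drop grains at random, topple any site that reaches four grains onto its neighbors, count how many topplings each dropped grain sets off, and tally the avalanche sizes. Small avalanches vastly outnumber large ones, falling off roughly as a power law:

    # Minimal Bak-Tang-Wiesenfeld-style sandpile (my own sketch, not from
    # the book).  Drop grains at random sites; any site holding 4+ grains
    # topples, sending one grain to each neighbor (grains fall off the
    # edges).  The number of topplings triggered by one dropped grain is
    # that avalanche's size.
    import random
    from collections import Counter

    SIZE, DROPS = 30, 20000
    random.seed(3)
    pile = [[0] * SIZE for _ in range(SIZE)]

    def drop_and_relax():
        r, c = random.randrange(SIZE), random.randrange(SIZE)
        pile[r][c] += 1
        topplings = 0
        unstable = [(r, c)]
        while unstable:
            i, j = unstable.pop()
            if pile[i][j] < 4:
                continue
            pile[i][j] -= 4
            topplings += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < SIZE and 0 <= nj < SIZE:
                    pile[ni][nj] += 1
                    if pile[ni][nj] >= 4:
                        unstable.append((ni, nj))
        return topplings

    sizes = Counter(drop_and_relax() for _ in range(DROPS))
    for bucket in (1, 2, 4, 8, 16, 32, 64, 128):
        count = sum(n for size, n in sizes.items()
                    if bucket <= size < bucket * 2)
        print(f"avalanches of size {bucket:3d}-{bucket * 2 - 1:3d}: {count}")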
Is the economy on the edge of chaos? Are ecosystems? Is the immune system? Is the global community of nations? Intuitively you'd like to believe they all are. Bak's power law provides a way to measure if a system is on the "edge of chaos", so one doesn't have to rely on just an intuitive sense of "at a balance between stability and fluidity". Do extinction patterns fit a power law? There's not enough fossil data to tell-- though the results are suggestive, at least. Further computer simulations of coevolution also fit with the idea.

A further part of the "new second law" would explain why, as evolution occurs, complexity seems to increase. (Even bacteria today are much more complex than the earliest forms of bacteria.)

Another example of metastable equilibria: social-cultural-political evolution. Witness the collapse of communism in the former Soviet Union and its Eastern European satellites; the whole situation seems all too reminiscent of the power-law distribution of stability and upheaval at the edge of chaos. We're past the Cold War metastability, into a chaotic period where a lot of change happens, and the overall situation is very unstable and very sensitive to small changes in initial conditions. And it's not necessarily a step for the better-- the new metastable equilibrium that emerges may have "species" (nations) which are "less fit" than the ones during the Cold War.

[Once again, I keep thinking of belief systems in terms of metastable equilibria. A "big avalanche" is analogous to a "crisis of faith" or a "paradigm shift". Is there any way of measuring the size of "belief avalanches"?]

Ch.9 "Work in Progress" (Not done reading it yet.)

(3) Remaining questions, plus, how it all applies back to my research

* Relations to work with Ed

For the most part, my object recognition work is an examination of a process that *is* fairly simple and "bottom-up." Still, I have a hunch there are applications somewhere down the line; I just don't see them clearly yet. Possibly some of Matt's modeling of curvature and recognition could incorporate some of the elements of the complexity modeling.

* Relations to work with Frank

Above, I've mentioned several potential relations between complexity theory and cognitive "belief systems". Alas, the whole idea of applying the ideas and simulation methods of complexity theory to studying cognitive belief systems and paradigms, though tempting (what makes an argument convincing in such a scenario?), probably isn't a tractable mode of study, at least not as a mere "project" or even a dissertation. Maybe as a *career*!... In the meantime, I'll probably go use the usual simplifying assumptions of cognitive research, ignore most of the individual differences between my subjects, and concentrate on tractable problems like the effects of "order of presentation of examples vs. logical frameworks" and "types of example". Ah, well.