Wednesday, March 25, 2015

Dr. Alexander D. Wissner-Gross, AI, alexwg.org. Cognitive niche theory: how intelligence can become an ecological niche and thereby influence natural selection.

http://www.alexwg.org/companies


http://www.insidescience.org/content/physicist-proposes-new-way-think-about-intelligence/987

A radical concept could revise theories addressing cognitive behavior.

Originally published: Apr 19, 2013 - 4:30pm
By: Chris Gorski, ISNS

(ISNS) -- A single equation grounded in basic physics principles could describe intelligence and stimulate new insights in fields as diverse as finance and robotics, according to new research.
Alexander Wissner-Gross, a physicist at Harvard University and the Massachusetts Institute of Technology, and Cameron Freer, a mathematician at the University of Hawaii at Manoa, developed an equation that they say describes many intelligent or cognitive behaviors, such as upright walking and tool use. 
 
The researchers suggest that intelligent behavior stems from the impulse to seize control of future events in the environment. This is the exact opposite of the classic science-fiction scenario in which computers or robots become intelligent, then set their sights on taking over the world. 
 
The findings describe a mathematical relationship that can "spontaneously induce remarkably sophisticated behaviors associated with the human 'cognitive niche,' including tool use and social cooperation, in simple physical systems," the researchers wrote in a paper published today in the journal Physical Review Letters.  
 
"It's a provocative paper," said Simon DeDeo, a research fellow at the Santa Fe Institute, who studies biological and social systems. "It's not science as usual."
 
Wissner-Gross, a physicist, said the research was "very ambitious" and cited developments in multiple fields as the major inspirations. 
 
The mathematics behind the research comes from the theory of how heat energy can do work and diffuse over time, called thermodynamics. One of the core concepts in physics is called entropy, which refers to the tendency of systems to evolve toward larger amounts of disorder. The second law of thermodynamics explains how in any isolated system, the amount of entropy tends to increase. A mirror can shatter into many pieces, but a collection of broken pieces will not reassemble into a mirror.
 
The new research proposes that entropy is directly connected to intelligent behavior.
 
"[The paper] is basically an attempt to describe intelligence as a fundamentally thermodynamic process," said Wissner-Gross.
 
The researchers developed a software engine, called Entropica, and gave it models of a number of situations in which it could demonstrate behaviors that greatly resemble intelligence. They patterned many of these exercises after classic animal intelligence tests.  
 
In one test, the researchers presented Entropica with a situation where it could use one item as a tool to remove another item from a bin, and in another, it could move a cart to balance a rod standing straight up in the air. Governed by simple principles of thermodynamics, the software responded by displaying behavior similar to what people or animals might do, all without being given a specific goal for any scenario.
 
"It actually self-determines what its own objective is," said Wissner-Gross. "This [artificial intelligence] does not require the explicit specification of a goal, unlike essentially any other [artificial intelligence]."
 
Entropica's intelligent behavior emerges from the "physical process of trying to capture as many future histories as possible," said Wissner-Gross. Future histories represent the complete set of possible future outcomes available to a system at any given moment.
 
Wissner-Gross calls the concept at the center of the research "causal entropic forces." These forces are the motivation for intelligent behavior. They encourage a system to preserve as many future histories as possible. For example, in the cart-and-rod exercise, Entropica controls the cart to keep the rod upright. Allowing the rod to fall would drastically reduce the number of remaining future histories, or, in other words, lower the entropy of the cart-and-rod system. Keeping the rod upright maximizes the entropy. It maintains all future histories that can begin from that state, including those that require the cart to let the rod fall.
 
"The universe exists in the present state that it has right now. It can go off in lots of different directions. My proposal is that intelligence is a process that attempts to capture future histories," said Wissner-Gross.
 
The research may have applications beyond what is typically considered artificial intelligence, including language structure and social cooperation.
 
DeDeo said it would be interesting to use this new framework to examine Wikipedia, and research whether it, as a system, exhibited the same behaviors described in the paper.
 
"To me [the research] seems like a really authentic and honest attempt to wrestle with really big questions," said DeDeo.
 
One potential application of the research is in developing autonomous robots, which can react to changing environments and choose their own objectives.
 
"I would be very interested to learn more and better understand the mechanism by which they're achieving some impressive results, because it could potentially help our quest for artificial intelligence," said Jeff Clune, a computer scientist at the University of Wyoming.
 
Clune, who creates simulations of evolution and uses natural selection to evolve artificial intelligence and robots, expressed some reservations about the new research, which he suggested could be due to a difference in jargon used in different fields.
 
Wissner-Gross indicated that he expected to work closely with people in many fields in the future in order to help them understand how their fields informed the new research, and how the insights might be useful in those fields.
 
The new research was inspired by cutting-edge developments in many other disciplines.  Some cosmologists have suggested that certain fundamental constants in nature have the values they do because otherwise humans would not be able to observe the universe. Advanced computer software can now compete with the best human players in chess and the strategy-based game called Go. The researchers even drew from what is known as the cognitive niche theory, which explains how intelligence can become an ecological niche and thereby influence natural selection.
 
The proposal requires that a system be able to process information and predict future histories very quickly in order for it to exhibit intelligent behavior. Wissner-Gross suggested that the new findings fit well within an argument linking the origin of intelligence to natural selection and Darwinian evolution -- that nothing besides the laws of nature are needed to explain intelligence.
 
Although Wissner-Gross suggested that he is confident in the results, he allowed that there is room for improvement, such as incorporating principles of quantum physics into the framework. Additionally, a company he founded is exploring commercial applications of the research in areas such as robotics, economics and defense.
 
"We basically view this as a grand unified theory of intelligence," said Wissner-Gross. "And I know that sounds perhaps impossibly ambitious, but it really does unify so many threads across a variety of fields, ranging from cosmology to computer science, animal behavior, and ties them all together in a beautiful thermodynamic picture."




          I don't buy it at all. Either we've got a major semantic problem in what we mean by "intelligence", or these guys are all wet. Why is keeping a stick upright in a cart more "intelligent" than having it fall over in a specific way? Doesn't intelligence in that case depend on what outcome you're trying to achieve? ie. Imagination? Isn't "intelligence" the ability to achieve a desired outcome?
          If these computer models were given no special goals, then why is using a tool to extract an object from another any more intelligent than just leaving the object as it is?
          It seems to me that the better part of human intelligence is dedicated to minimizing future entropy, not maximizing it. One of the most basic signs of life is entropy-reduction, the imposition of order on chaos.



              A desired outcome is a short term view. Useful, but not always intelligent in the long term.



                  Why are you assuming these "desired outcomes" are somehow not collectivist ones that occur naturally through inevitable interactions and thus as a whole work towards maximizing the entropy for the entire system.



                      But efforts at reducing entropy at the individual level only increase the entropy at the environmental level. Intelligent entities perhaps take that into account.



I disagree with the entropy-reduction. We think we want entropy reduction, but entropy in any aspect of life brings about better things. If you put animals in cages in a very orderly fashion and do not allow them to move naturally, they become sick. But if you take the same animals and allow them to freely roam, they become stronger and healthier. Same goes for humans and freedom (which is seemingly chaotic). You allow them to do what they want and they learn to fly and explore outer space and increasingly try to explore the unknown. I don't necessarily agree with the whole "future histories" thing, but I think they are on the right track by looking at the basics as opposed to some nonsensical complex equations.



So, we maximize entropy, because intelligence needs to maintain the many possible histories to exist. What are the histories for? In my opinion they are what keeps God alive: Knowledge.
The Universe is a knowledge “factory” for God.
Intelligence grows with entropy following the maximum power principle of energetics and Howard Odum's theories, and can you imagine the wisdom of God knowing everything that is happening in the Universe. In His interest, evolution (intelligence) grows exponentially: as more experiences (histories) happen in the universe, the more God learns.
                            In my opinion the Universe is a ball, full with fundamental “transporting information” particles, that God set in movement, to be amassed in matter with
                            one fundamental force (kinetic) that generate experiences (evolution), with information being collected in the borders of the Universe by God.
                            The velocity of such particles is the length radius of the Universe per God´s second, time enough for Him to live, process and store the experience.
                            Probably the motors to keep the contents of the Universe in motion are the black holes, and inflation of Universe is nothing more than the increase of total entropy in an isobaric universe.



                                Expert mathematicians running probability and statistics for self assembling amino acids into all the proteins needed for life in a SINGLE CELLED ORGANISM have stated that there hasn't been enough time if the universe is 14 billion years old or so for that to occur randomly.
                                They claim life is IMPOSSIBLE by random chance mutations in that time frame.
                                Evolution is great science fiction but it lacks any evidence whatsoever. I don't know how all this happened but so-called "evolution" certainly is not the explanation.
                                <quote>... information theorist Hubert Yockey (UC Berkeley) realized this problem:
                                "The origin of life by chance in a primeval soup is impossible in probability in the same way that a perpetual machine is in probability. The extremely small probabilities calculated in this chapter are not discouraging to true believers … [however] A practical person must conclude that life didn’t happen by chance."43
                                Note that in his calculations, Yockey generously granted that the raw materials were available in a primeval soup. But in the previous chapter of his book, Yockey showed that a primeval soup could never have existed, so belief in it is an act of ‘faith’. He later concluded, "the primeval soup paradigm is self-deception based on the ideology of its champions."44
More admissions
                                Note that Yockey is not the only high-profile academic to speak plainly on this issue:
                                "Anyone who tells you that he or she knows how life started on earth some 3.4 billion years ago is a fool or a knave. Nobody knows."—Professor Stuart Kauffman, origin of life researcher, University of Calgary, Canada.45
                                "…we must concede that there are presently no detailed Darwinian accounts of the evolution of any biochemical or cellular system, only a variety of wishful speculations." —Franklin M. Harold, Emeritus Professor of Biochemistry and Molecular Biology Colorado State University.46
                                "Nobody knows how a mixture of lifeless chemicals spontaneously organized themselves into the first living cell."—Professor Paul Davies, then at Macquarie University, Sydney, Australia.47
                                "The novelty and complexity of the cell is so far beyond anything inanimate in the world today that we are left baffled by how it was achieved."— Kirschner, M.W. (professor and chair, department of systems biology, Harvard Medical School, USA.), and Gerhart, J.C. (professor in the Graduate School, University of California, USA).48
                                "Conclusion: The scientific problem of the origin of life can be characterized as the problem of finding the chemical mechanism that led all the way from the inception of the first autocatalytic reproduction cycle to the last common ancestor. All present theories fall far short of this task. While we still do not understand this mechanism, we now have a grasp of the magnitude of the problem."49
                                "The biggest gap in evolutionary theory remains the origin of life itself… the gap between such a collection of molecules [amino acids and RNA] and even the most primitive cell remains enormous."—Chris Wills, professor of biology at the University of California, USA.50
                                Even the doctrinaire materialist Richard Dawkins admitted to Ben Stein (Expelled, the movie documentary) that no one knows how life began:
                                Richard Dawkins: "We know the sort of event that must have happened for the origin of life—it was the origin of the first self-replicating molecule."
                                Ben Stein: "How did that happen?"
                                Richard Dawkins: "I’ve told you, we don’t know."
                                Ben Stein: "So you have no idea how it started?"
                                Richard Dawkins: "No, nor has anybody."51
                                "We will never know how life first appeared. However, the study of the appearance of life is a mature, well-established field of scientific inquiry. As in other areas of evolutionary biology, answers to questions on the origin and nature of the first life forms can only be regarded as inquiring and explanatory rather than definitive and conclusive."52 [emphasis added]</quote>



                                    What these guys ignore is that evolution never happened de novo, out of nothing. It wasn't like shaking up a bunch of purine bases and amino acids and phosphates and voila! DNA! No. His calculations on the improbability of this are correct but irrelevant. Because that's not at all the way it happened.
                                    DNA came about by modifications of simpler structures (possibly RNA), which in turn came about by modifications of even simpler structures (phosphatide bases?), and so on, and so on... till finally you get to simple molecules joining up, possibly on clays, which are known to catalyze these reactions and favor certain structures. (Spooky in light of what Genesis says about how God created man, huh?)



                                        "DNA came about by modifications of simpler structures (possibly RNA),"
                                        Conjecture does not equal certitude.
                                        " which in turn"
                                        How did you get past your first premise without conclusive proof?
                                        " came about by modifications of even simpler structures (phosphatide bases?)"
                                        More conjecture...
                                        " and so on, and so on.."
                                        Right! It's all so obvious and logical that it must be true; never mind the absence of evidence, it makes a great story! :>)
                                        " till finally you get to simple molecules joining up, possibly on clays, which are known to catalyze these reactions and favor certain structures"
                                        Nope. Your "finally" would actually be a crude beginning and scientists today have worked with simple molecules to no avail to create life so your "finally" isn't even a real start.
                                        " (Spooky in light of what Genesis says about how God created man, huh?)"
                                        Nope. Irrational, hypothetical and unscientific magical thinking, yes; "spooky", no.
                                        To show you the monumental task you face even if you get to simple molecules with known reactive affinities but still unknown proof of automatic protein synthesis, I present to you the Chaperones and the marvelous Chaperonin precise protein folding machine. Assuming you could, which no one has ever been able to do yet, manufacture your complex proteins automatically like they are done in the cell but by pure chance, you STILL have to chaperone the (enter the chaperones) UNFOLDED structure PREVENTING folding until it is chaperoned to the Chaperonin folding machine in the cell.
                                        Here's a video. Argue with me at the video link how you can "get there" from here. Believe me, they ARE TRYING! But they have nothing, zip, nada. Believing in Evolution is spooky considering the absence of proof for it. As to the book of Genesis, what does that have to do with anything. Are you a scientist or not? You won't find proof of much of anything in the Bible so bringing it up is silly. I'm not asking you to believe in ANY religious book or doctrine; I'm asking you to NOT believe in evolution unless you have proof.
                                        The Chaperones anti-folding and the Chaperonin Protein folding Machine could not occur randomly in 12 billion years. MULTIPLES of that time period are needed.



                                            I don't have a problem with being skeptical of evolution, I have a problem with people who claim the only alternative is the bible/quran/whatever.



                                                What you really mean is that you have a "problem" with the idea of a Supreme Being that you may owe your existence to. IOW, if Homo SAP isn't number one, it ain't real, right?



                                                    There is no need to approach with such an aggressive stance. Perhaps my mistake was saying 'whatever'. What I meant, and should have said, was: "the ancient organized religions (ancient as of today)". If perhaps ideas were to emerge that have a better view of how this supreme being functions (as in, not one that worries about whether men are circumcised and how many wives they can take, for example), I would be willing to consider accepting them.



I apologize for sounding aggressive. I have just been down that "it's all okay and we can solve our problems if we just get rid of religion" road too often.
                                                        I have written several posts on the issue of Evolution and why it is scientifically untenable.
                                                        I invite you to read and comment on them. I promise to show respect for your opinion even if, after reading them, I disagree with your conclusions.
                                                        If Darwin was alive today, he would be arguing AGAINST the validity of the Theory of Evolution
The New Age movement of the last century tried a world where God and His rules for respecting fellow humans and all life were thrown away. They thought when religion went away, mankind would enter Utopia with mutual respect and admiration. But instead, we entered the current Dystopia where greed, cruelty and conscience-free behavior have increased. God does not need us. We need Him.
                                                        If God exists, He knows who you are, where you are and what you need. Ask Him to prove He is real if He is really out there and not a figment of weak people's imagination.
                                                        The fact that worship of God and religion has been perverted often to excuse cruelty only shows that evil rulers can pervert ANYTHING, not that religion or Faith in God is faulty in itself.



Despite my reservations on the Big Bang theory, I like two quotations of the great Einstein: "…I maintain that the cosmic religious feeling is the strongest and noblest motive for scientific research…" and "I am convinced that God does not play dice."
                                                  Must be an Entity that is taking advantage of all the information generated in the Universe… God, we call it… for fun or for knowledge the information must be used, and probably stored for future use.
                                                  The Free Will allowed in the Universe, suggests that whoever runs the show is interested in innovation, development and consequently Evolution.
One formula to make the seed or multiple seeds in the Universe, we will find it, and in my opinion it will be simultaneous with the encounter with God.
                                                  “Science goes wrong by thinking, we have 'the' answer, proper scientific method is to say, we have 'an' answer,” Roger Y Gouin.
The probability of an unknown agent’s existence is almost certainly 1, as well as “simple hidden variable theories” being a reality within our perception. Around us, we see evolution and growth from fragile to perfection. Mature, reforming and newborns must be the ultimate goal of the Universe.



                                                      Listen: Science and religion do not deal with the same subject, nor do they have the same idea of "truth" nor do they address the same questions. Science deals with HOW things work. It cannot ultimately tell you WHY. Religion deals in the WHY of things. It has no business dealing with the HOW.
The big conflict between science and religion is over the literal truth of Genesis. Well, Genesis is wrong, plain and simple. It was written as an allegory, not a textbook. And any religion that stands or falls on the literal truth of Genesis is not much of a religion. God, Christianity, salvation, grace, and all the rest of it work just as well if Genesis is understood as allegory rather than literal truth, and suddenly all the conflict disappears.



                                                          I agree with much of what you say. You are obviously a deep and serious thinker.
                                                          But I must take issue with your logical train of thought in the following thought:
                                                          "The Free Will allowed in the Universe, suggests that whoever runs the show is interested in innovation, development and consequently Evolution."
                                                          If there is one process in the perpetuation of life by DNA that has been conclusively proven, it is the fact that DNA is, not only self replicating, but scrupulously edits itself to ensure there is no change.
All attempts to buy into the myth of natural selection by searching for transitional life forms have failed.
                                                          Free will certainly leads to innovation within a framework of the package a species has but it certainly does not lead to random positive mutations causing "evolution".
E. Coli bacteria were intelligently designed by us to make insulin, an activity that could certainly be considered an "evolutionary advantage". YET, in their multimillion or billion year history, they never "evolved" the ability to make insulin on their own. That is because they are STILL E. Coli, not something transitioning to an insect or whatever.
                                                          Adaptation within a species DNA package framework is not evolution, it is a predesigned part of the original design to deal with environmental pressures. Our so-called "nonsense" DNA is adaptation potential, not "evolution", simply because it has been there from the start.
When the angiosperms and insects were suddenly found in the fossil record, all sorts of irrational fiction was written by evolutionist true believers to justify this "problem". This was around the Triassic Period. The insects and angiosperms that existed THEN still exist today in almost exactly the same form, except that some of them (i.e. dragonflies) are much smaller. But the mosquitos are STILL mosquitos like the ones that sucked blood from dinosaurs. Why?
                                                          It appears that the rapid simultaneous appearance of new species depending for their existence on multiple symbiotic mechanisms cannot be explained by natural selection, indicating a (still unexplained) process occurred in the Triassic period that resulted in insects filling all available environmental niches of the present biosphere.
                                                          The symbiotic angiosperm/insect relationship is not rapidly adapting to the present level of planetary industrial toxins. Therefore, whatever the unexplained rapid adaptation mechanism that occurred in the Triassic Period was, there is no evidence that it is present today because we are experiencing a high level of species extinctions affecting, but not limited to, insects and angiosperms.
                                                          We are in trouble and belief in "evolution" isn't going to get us out of this biosphere polluting mess we are in.



Well, that resembles a lot of philosophy already touched on by, say, Stanislaw Lem in his works, and partially also Asimov. See "The Master's Voice" for reference. Man's and life's greatest challenge is to challenge entropy. Life is defined by its will to slow down or halt entropy. Really inspiring authors.



                                                            Any process that occurs in this universe is, by definition, a thermodynamic process. Just be sure that you do not confuse a description of its thermodynamics with an automatic deeper understanding of root causes of its emergent properties -lifebiomedguru



                                                                Intelligent Behavior....is that the history of mankind?



I have managed to apply those concepts to an AI that successfully drives a kart on a track, even with several karts on track, so they avoid each other while trying to pass.
                                                                  https://www.youtube.com/playli...
                                                                  Look at the latest ones to see several karts racing together. For any question, please just comment it on the videos.



                                                                    I need to look at the math more, but the motivation and explanation of "causal entropic forces" is almost identical to Klyubin, Polani, and Nehaniv's "empowerment", using information theory to select next agent states so as to maximize future possible actions, which they have been developing and publishing since at least 2004.
                                                                    http://www.prokopenko.net/empo...
                                                                    (There's no confusion about Shannon entropy and thermodynamic entropy here. Edwin T. Jaynes identified the link between them in 1957, and James Avery showed how to quantify thermodynamics in terms of bits in 2003.)
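
For readers who haven't met it, empowerment is usually defined (my paraphrase of those papers) as the channel capacity from an n-step action sequence to the sensor state reached afterwards:

    E(s_t) = \max_{p(a_t^{\,n})} I\big(A_t^{\,n} \,;\, S_{t+n} \,\big|\, s_t\big)

so both approaches score a state by how much influence over its own future an agent keeps from there.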



                                                                      If I understand this correctly, instead of selecting dead end lines which limit future options, weight ought to be given for lines that enhance future options. Instead of pruning the tree, you enhance it. I could be wrong, because I haven't grokked it yet.



                                                                          Well, sort of. The goal is the use of the environment for species perpetuation by anticipating future events. At first there is no assurance that pruning or any other action on a food source will produce an improved condition.
                                                                          So this dynamic basically raises curiosity to possibly BE the new definition of intelligence.
                                                                          I disagree but I think that is where they are going with this gross simplification.
                                                                          Renewable Revolution



                                                                            So, this means that a waiter is pretty clever, right? :)



                                                                              sorry - couldn't resist :P http://xkcd.com/793/
                                                                              Interesting concept, though!



                                                                                  Not sure how they can state that there was no objective given. It seems clear that the objective was to maximize entropy. Maybe a small point, but still...



The theory of gravity as an entropic force led to a correct "prediction" of the cosmological constant; it will be interesting to see what kind of novel insight may be forthcoming from this entropic theory of human behavior: is the apparent chaos of the stock market simply a reflection of the drive, multiplied many times over, to keep one's options open?



                                                                                        This is still nowhere near general intelligence, because it has no purpose. Human intelligence for example is something rich which first must be DEFINED. So this isn't general intelligence, it's slapping an algorithm onto the use-case. Although intelligence in general including our very specific brand of human intelligence is invariably built on such thermodynamic and so on systems. But this isn't more fundamental than the stuff I think about. Just saiyan
AI is still a very stagnated field because of MODERN ECONOMICS AND ITS PROPRIETARISM causing software in general to be horrifically inaccessible to the gamers/users that could be adding to general functionality (i.e. building a central AI definition of human meaning!) every day; not because of some magic god algorithm that will save our lazy asses from the real work -- again, this needs stressing. AI won't just compute itself, and if it does it'll be here in far longer than 50 years, so good luck with ever seeing it.
                                                                                        Other than that, very interesting indeed. Great job.



                                                                                            Visual, audio, and pattern recognition (including logic and verbology) in general is very advanced already in programming. Programming motivation is far less difficult. Fifty years is a very pessimistic prediction - according to Moore's Law we ought to have processors performing at human capacity within ten. Remember, AI is better at memory and copying/transfer, so write once, run universally (rather than humans who must be taught individually every time). Frankly, I think "consciousness" is overrated, and "conscious-like" is the same, so all an AI needs to do is flow smoothly interacting with the environment to demonstrate "self-awareness(-like)." If it gives you kicks to program additional motivations like self-preservation or aggression, then go for it. Frankly, selfishness (putting one's own interests above others) isn't a prerequisite to AGI.



The Singularity movement is banking on Moore's Law, but there is plenty of evidence that indicates that expectation is wrong, and there is no way CPUs are reaching intelligence anytime soon. Computers don't have intelligence; they have speed and make few mistakes, giving them the appearance of being smart, but they aren't capable of strategy development.
I think that consciousness is underrated and poorly understood. You either have self awareness or you don't, and there are no signs that computers are capable of self awareness, nor will they be anytime soon.



Maintaining as many possible future histories sounds like an attempt to resist death. What happens when scientists try to shut down such an intelligent machine? Hope they don't go too far.



Isn't intelligence just the ability to predict possible future histories?



                                                                                                        This begs the question of how we are to understand Wissner-Gross's notion of intelligence.



Very cool. Applied to an AI, the first directive for maximizing entropy would be self-preservation. Catering to humans would be down the list. Some implications may ensue.



                                                                                                                This fits into my research -- GO strategy game may be a useful place to 'prove' its application relatively simply...?
                                                                                                                This definitely has very strong application to evolution of living systems, which I have always interpreted as reversing the physics of entropy.



Interesting; could the future histories be considered a sort of moral conscience? Teaching this morality as human-generated input versus allowing a machine to learn natively brings up classic questions of perception and insight. Juicy.



Not necessarily. I don't think it would be hard to set up an experiment in which an agent could be stuck in a local maximum, and only by forcing a collapse along a certain path would the emergence of greater heights of complexity, not previously predictable, become possible.
What the agent thought it was doing, and what you would like to ascribe a certain moral characteristic to, turned out to be a trap that accomplishes the opposite, in a relativistic sense.



I agree with Nate Angell and Hao. The following three sentences are copied and pasted from the article; I think the third sentence has a typo, where "lower the entropy" should read "increase the entropy":
                                                                                                                          The second law of thermodynamics explains how in any isolated system, the amount of entropy tends to increase.
                                                                                                                          A mirror can shatter into many pieces, but a collection of broken pieces will not reassemble into a mirror.
                                                                                                                          Allowing the rod to fall would drastically reduce the number of remaining future histories, or, in other words, lower the entropy of the cart-and-rod system.


                                                                                                                             

                                                                                                                            Looks similar to empowerment to me. See:
                                                                                                                            http://homepages.feis.herts.ac...
                                                                                                                            max I(A_t; S_{t+n})
Maximizing the number of actions available in the future, which is similar to your proposal.




                                                                                                                               

                                                                                                                              I wish that paper had a clear synopsis to distill its information into that formula.
In that paper there is a limitation imposed on the process of empowerment: agent perception. Prof. Alex Wissner-Gross's formulas (although fundamentally similar) take more of an objective "god's-eye" view.
I think both give true insight. But having an "objective" definition of "intelligence" has a much more profound impact.
Unrelated, I would like to add that maximizing future freedom could be considered to be the very essence of "good". And that by applying that maximization across all highly intelligent beings we can achieve an objective view of morality. Which is earth-shattering. Perhaps the greatest advance of mankind since speech. Also, it provides a firm moral foundation for super-intelligent AI. I can't stress enough how much hope exists for this concept.









Wednesday, February 12, 2014


A new Equation for Intelligence F = T ∇ Sτ - a Force that Maximises the Future Freedom of Action

Intelligence is a Force with the Power to Change the World


Describing intelligence as a physical force that maximises the future freedom of action adds a new aspect to intelligence that is often forgotten: the power to change the world. This, I think, was the biggest revelation for me when I started thinking about the new equation for intelligence. The second revelation was that intelligent systems are survival engines, which increase their chances of survival by maximising a single quantity: the freedom of action. Both insights may sound trivial or obvious, but I don't think they are.

A few days ago I saw the TED talk "A new equation for intelligence" by Alex Wissner-Gross. He presents an equation he published in April 2013 in a physics journal. It may not be the most impressive talk I have ever seen, and I had to watch it twice to fully understand it. But the message excites me so much that I haven't slept well for a few days. I thought everybody must be excited about this equation, but it seems that this is not the case. Either I am not understanding it correctly or others don't get it. Or maybe it resonates with me because I am a physicist, with a strong background in computing, who has done research in computational biology. To find this out, let me explain my understanding of the equation. Please tell me what you think and what's wrong with my excitement (I need sleep)....

So, why did the equation blow me away? Because this very simple physical equation can guide us in our decisions, and it makes intelligent behaviour measurable and observable. It adds a new real physical force to the world, the force of intelligence. From the equation we can deduce algorithms to act intelligently, as individuals, as societies and as mankind. And we can build intelligent machines using the equation. Yes, I know, you may ask: "How can the simple equation F = T ∇ Sτ do all of that?"

Intelligence is a Force that Maximises the Future Freedom of Action

Before we look at the equation in more detail, let me describe its essence in every day terms. Like many physical laws or equations the idea behind it is simple:
• Intelligence is a force that maximises the future freedom of action.
• It is a force that keeps options open.
• Intelligence doesn't like to be trapped.
But what is necessary to keep options open and not be trapped? Intelligence has to predict the future and change the world in a direction that leads to the "best possible future". In order to predict the future, an intelligent system has to observe the world and create a model of the world. Since the future is not deterministic, the prediction has to be based on some heuristics; prediction is a kind of statistical process. In order to change the world, the intelligence has to interact with the world. Just thinking about the world, without acting, is not intelligence, because it produces no measurable force (well, sometimes it is intelligent not to act, because the physical forces drive you already in the right direction, but that is a way of optimising resources). The better it can predict the future and the better it can change the world in the desired direction, the more intelligent the system is.

The new Equation for Intelligence F = T ∇ Sτ

Note: skip this section if you are not interested in understanding the mathematics of the equation!


This is the equation:

   F = T ∇ Sτ

Where F is the force, a directed force (therefore it is bold), T is a system temperature, and Sτ is the entropy field of all states reachable within the time horizon τ (tau). Finally, ∇ is the nabla operator. This is the gradient operator that "points" into the direction of the state with the most freedom of action. If you are not a physicist this might sound like nonsense. Before I try to explain the equation in more detail, let's look at another physical equation of force.

The intelligence equation is very similar to the equation for a force from potential energy, F = −∇ Wpot. Wpot is the potential energy at each point in space. The force F pulls into the direction of lower energy. This is why gravitation pulls us in the direction of the center of the earth. Or think of a landscape. At each point the force points downhill. The direction is the direction a ball would roll starting at that point. The strength of the force is determined by the steepness of the slope. The steeper the slope, the stronger the force. Like the ball is pulled downhill by the gravitational force to reach the state with the lowest energy, an intelligent system is pulled by the force of intelligence into a future with the lowest number of limitations. In physics we use the nabla operator or gradient to turn a "landscape" into a directed force (a force field).
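
To make the role of the gradient concrete, here is a tiny numerical illustration (a toy of my own, not from the paper): the force is just minus the finite-difference gradient of the energy landscape, so it always points downhill.

    # A simple 2D potential-energy "landscape": a bowl centred at the origin.
    def w_pot(x, y):
        return x ** 2 + 0.5 * y ** 2

    def force(x, y, h=1e-5):
        # F = -grad W, estimated with central finite differences.
        fx = -(w_pot(x + h, y) - w_pot(x - h, y)) / (2 * h)
        fy = -(w_pot(x, y + h) - w_pot(x, y - h)) / (2 * h)
        return fx, fy

    print(force(1.0, 2.0))   # roughly (-2.0, -2.0): pointing back toward the minimum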

Back to our equation F = T ∇ Sτ. What it says is that intelligence is a directed force F that pulls into the direction of states with more freedom of action. T is a kind of temperature that defines the overall strength (available resources) the intelligent system has (heat can do work; think of a steam engine: the more heat, the more power). Sτ is the "freedom of action" of each state that can be reached by the intelligence within a time horizon τ (tau). The time horizon is how far into the future the intelligence can predict. Alex Wissner-Gross uses the notion of entropy S to express the freedom of action in the future. The force of intelligence is pointing into that direction. As we have seen, in physics the direction of the force at each state is calculated by a gradient operation ∇ (think of the direction the ball is pulled). The nabla operator ∇ is used to assign a directional vector (the direction of the force of intelligence) to each state (in our case: all possible future states). The more freedom of action a state provides, the stronger the force is pulling in that direction. So, ∇ Sτ points into the direction with the most freedom of action. The multiplication with T means the more power we have to act, the stronger the force can be.

Note: the optimal future state is the optimal state from the viewpoint of the intelligent system. It might not be the optimal state for other systems or for the entire system.

If you want to understand the equation in more detail, read the original paper "Causal Entropic Forces" by A. D. Wissner-Gross and C. E. Freer.
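
For reference, this is roughly how the paper writes it (my transcription, so please check the original): the force is the gradient of a "causal path entropy" taken over all trajectories x(t) the system could follow during the next τ seconds,

    \mathbf{F}(\mathbf{X}_0, \tau) = T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau) \big|_{\mathbf{X}_0},
    \qquad
    S_c(\mathbf{X}, \tau) = -k_B \int_{\mathbf{x}(t)} \Pr\big(\mathbf{x}(t) \mid \mathbf{x}(0)\big) \, \ln \Pr\big(\mathbf{x}(t) \mid \mathbf{x}(0)\big) \, \mathcal{D}\mathbf{x}(t).

In words: weight every possible future history by how likely it is, take the entropy of that distribution of histories, and push the present state in the direction that increases it.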

An Algorithmic Explanation of the Equation

As an intelligent system you want to get the best possible future. The best possible future is a future where you are not trapped. To get there, you try to predict possible futures. You adjust the time horizon τ of your prediction so that your prediction is reliable enough to make a decision. You look at all reachable states and assign a freedom-of-action value Sτ to each state. In that map of states, you choose the state that has the highest potential for future actions, the state that gives you the most freedom and power. This requires that you have a model of how the world works and that you are able to determine the current state of the system. Now you move in that direction.

If the state you are heading for is not reachable effortlessly, you have to impose a force F on the world in that direction. This is the force of intelligence. The temperature T represents the power or "resources" you have to reach the desired state. The more power you have, the more force you can impose. As you move (in time and space), you have to constantly re-validate the desired state and adjust the direction you move in. This adjustment is needed because your model of the future was only a vague prediction and, as you go, your internal models of the world have to be updated with the reality.
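
In code, that loop is the same Monte-Carlo recipe as the cart-and-rod sketch earlier on this page, reduced to a skeleton on a one-dimensional toy world (again an illustration only; every name and number here is made up):

    import math
    import random
    from collections import Counter

    # A toy world: a walker on positions 0..10. Position 0 is a trap (once there,
    # you are stuck), so keeping options open means staying away from it.
    ACTIONS = (-1, +1)

    def predict(pos, action):
        # One step of the (stochastic) world model: the intended move usually works.
        if pos == 0:
            return 0                      # trapped: no freedom of action left
        step = action if random.random() < 0.8 else -action
        return min(10, max(0, pos + step))

    def freedom_of_action(pos, action, horizon=8, samples=200):
        # Estimate S_tau for taking `action` now: entropy of where random futures end up.
        ends = Counter()
        for _ in range(samples):
            s = predict(pos, action)
            for _ in range(horizon - 1):
                s = predict(s, random.choice(ACTIONS))
            ends[s] += 1
        return -sum(n / samples * math.log(n / samples) for n in ends.values())

    def intelligent_step(pos):
        # Predict, score each action by future freedom, act, then re-plan next time.
        return predict(pos, max(ACTIONS, key=lambda a: freedom_of_action(pos, a)))

    pos = 2
    for _ in range(30):
        pos = intelligent_step(pos)
    print("final position:", pos)         # tends to drift away from the trap at 0

The constant re-scoring from whatever state the walker actually lands in is the correction step described above: the prediction is only a vague guess, so the plan is refreshed after every real move.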

The Physical Force of Intelligence

Is it Really a New Force?

This force is a real physical force. It is not explainable with the other physical laws. If you read this, on a computer or on paper, the letters are in an order that would not exist without an intelligence imposing forces on the physical world. Intelligence influences the world. This influence is directed. The direction can be understood by looking at the formula F = T ∇ Sτ. This means some intelligent actor came to the conclusion that this is a better state to be in. Some state changes require little force (whether I type "a" or "s" does not make much of a difference). Other state changes require a lot of directed force, e.g. building a new house.

How to Measure the Force of Intelligence?

If intelligence is a physical force, it has to be observable, right? In the TED talk, Wissner-Gross gives a nice example of how alien observers could detect intelligence on earth with a telescope: suppose aliens measured the number of asteroids hitting the earth for millions and billions of years. Then suddenly a "magic force" appears, and asteroids do not hit the earth anymore. Instead they detonate or get deflected before they hit the earth. This is the point at which humans have learned to prevent the impact of asteroids with some advanced technology. For the aliens, none of the "classical" forces of physics would explain this. But the Equation of Intelligence would explain this otherwise mysterious force as the Force of Intelligence...

Is it a Breakthrough Like Chaos Theory?

I believe this simple equation will change the way we see and understand the world. It may lead to a better future for mankind. I remember when Chaos Theory emerged in physics in the 80s, when I studied physics. It provided a scientific explanation for the behaviour of complex systems. Once you understand that small differences in the initial conditions of nonlinear systems can lead to wildly diverging outcomes (the butterfly effect), chaos theory is simple and obvious. The Equation of Intelligence has similar qualities. Once you understand it, it is so obvious that you wonder why it was not formulated much earlier. As a physicist, I really like the simplicity, beauty and generality of the equation. I also like the idea that we can now describe intelligence as a force that changes the physical world.

                                                                                                                                The Problem of Ambiguity of the Future

The biggest problem with this equation is to determine the entropic field Sτ and the accessible temperature T. One might argue that the equation is useless because it is not well defined. I would agree with this critique, but the equation is a kind of approximation: the intelligence cannot know all possible future states, nor can it know the freedom of action of each of those states. So, does this mean the equation is useless? I don't think so, because it has a set of interesting implications.

                                                                                                                                Implications

                                                                                                                                Let me point out a few implications of the equation.

                                                                                                                                Intelligence Needs a Dynamic Model of the World

In order to estimate the quality (in terms of actability) of future states, the system needs a model of the world. This model is used to simulate future states and to estimate risks, costs and freedom of action for those states. This also means that, as the intelligence moves in a certain direction in option space, it has to re-evaluate the future states and correct its predictions and its direction. Our brain is a prediction engine. It makes predictions by association. For example, we can predict the path of a moving body without much reflection. But sometimes we have to question the automatic prediction (as Daniel Kahneman points out) and use slow analytical thinking. For example, our intuition for probability is often wrong, and depending on how a problem is described we may have a bias in our predictions. Simple intelligent systems, like a single cell following the light, have very simple 'models' of the world. Many systems have no explicit model of the world at all; instead they have "good enough" built-in heuristics that let them find good states to move to.
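Here is an equally small sketch of that predict, act, observe, correct loop (again my own toy construction with made-up names such as model_step and belief, not anything from the talk or the paper). The agent plans with an internal model whose single parameter is initially wrong and corrects it from what it actually observes:

# Toy sketch of a predict-act-correct loop: the agent plans with its internal
# model, acts in the real world, and updates the model from what it observes.

ACTIONS = (-1, 0, 1)
belief = {"drift": 1.0}          # the agent's (initially wrong) model parameter

def model_step(s, a):
    """What the agent believes an action does (clipped to the range 0..10)."""
    return round(min(10.0, max(0.0, s + a * belief["drift"])), 2)

def real_step(s, a):
    """What actually happens: the world only moves half as far as believed."""
    return round(min(10.0, max(0.0, s + a * 0.5)), 2)

def plan(state, horizon=4):
    """Pick the action that keeps the largest number of distinct states
    reachable within the horizon (a crude freedom-of-action proxy)."""
    def reachable(s, depth):
        if depth == 0:
            return {s}
        return {t for a in ACTIONS for t in reachable(model_step(s, a), depth - 1)}
    return max(ACTIONS, key=lambda a: len(reachable(model_step(state, a), horizon)))

state = 1.0
for _ in range(5):
    action = plan(state)
    new_state = real_step(state, action)
    if action != 0:                        # correct the model with the observed move
        belief["drift"] = round((new_state - state) / action, 2)
    state = new_state

print(state, belief)                       # the drift estimate settles at 0.5

The point of the sketch is only the loop structure: predict with the model, act, compare with reality, and adjust the model before the next prediction.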

                                                                                                                                Intelligence Needs Power to Act

Intelligence has to maximise the "temperature" T in the equation to increase the force (the impact it can have to change the state of the world). When humans learned to utilise energy (like fire) and turn it into physical forces (like with a steam engine), they increased their influence on the world. So just observing future states is not enough. The intelligence needs the power to turn the knowledge about the desired future state into a force that moves it there.

The power needed to reach the desired state is a resource that has to be taken into account when we predict the freedom of action in the future. Let's consider an example: a cat is hunting a mouse. Instead of wasting a lot of energy running after the mouse, the cat sits still and observes it. It follows the mouse's movements and predicts when there will be a good moment to catch it. This is a very good use of resources.

                                                                                                                                What is an Intelligent System?

Any system that creates a force in the direction of more freedom to act is an intelligent system. This means a single cell or a single human is an intelligent system. But communities can also be seen as intelligent systems. Animal colonies, nations, armies, religions and mankind can form intelligent systems. These systems are held together by a set of rules that make them interpret the world in a specific way, and they act (consciously or unconsciously) to give their system more (or less) freedom of action. Some of those systems do a kind of random walk in time and space, but others have internal models and rules that guide them in the direction of more freedom of action. True intelligence is revealed when the state of the world changes and the old rules do not work anymore. In that situation, the ability to anticipate the new situation and find new solutions distinguishes really intelligent systems from merely well-adapted systems.

                                                                                                                                Hierarchy of Intelligent Systems

Let me point out that intelligent systems are often hierarchical. Cells form plants, animals and humans. Humans form families, tribes and nations. The entire biotope of the earth is an intelligent system. Think about mankind: each individual person has a model of the world, and the interaction of all humans forms an implicit model of the world held by mankind. Evolution, with survival of the fittest, is a form of intelligent system. Any belief system or meme, like a religion, atheism, capitalism or communism, has a built-in set of models of the world and creates forces in the world. It would be really interesting to analyse how good different belief systems are in terms of maximising the future freedom of the system, both in terms of the force to change the world and in terms of direction. There could be memes that have a lot of force but are misguided and die out.

                                                                                                                                Competition of Intelligent Systems

Another implication of hierarchical intelligent systems is that intelligence at one level (a single human, company, religion, nation) may undermine the intelligence of the higher-level system. Or, vice versa, a higher-level system may have to reduce the freedom of action of its subsystems in order to maximise its own freedom of action. If we see mankind as a more important system than a single human, this might have implications for how we should act as individuals, companies, or nations. If mankind is pulling in too many directions at the same time, the overall force might be zero, and the intelligence is then reduced to the intelligence of a bare physical system.

                                                                                                                                Brute Force Versus the Force of Intelligence

If you know what the best future would be, but you cannot or do not act, brute-force systems may determine your future. It's that simple. Thinking about a better future is not enough. The ability to act is an integral part of intelligence, and that is what intelligence is actually optimising: the ability to act. The greatest idea, if it is not actionable, is not intelligence. Classical IQ tests measure only the future-prediction part, but miss out on the power to act. Sometimes simple but strong systems rule the world, and "intelligent" systems fail because they are not able to act.

                                                                                                                                Why is the Force Directed to Future Freedom of Action?

Not having any freedom of action means you are dead. Future freedom of action essentially means increasing the probability of survival of the intelligent system. Therefore, intelligent systems increase their chance of survival by moving to states with the highest probability of survival. Because it is impossible to predict the future correctly, the best heuristic is to put yourself in a state where you are not trapped. Freedom of action means survival, which is a simple and general goal. From that, any sub-goal is magically derived.

                                                                                                                                Evolution on Steroids

Because intelligent systems actively increase their chance of survival, intelligent systems are systems with an accelerated evolution. "Blind" evolution depends on random variation and the power of selection. Intelligent systems can boost their chances of survival by using force to move to states with a higher chance of survival.

                                                                                                                                How to increase the force of intelligence?

There are two ways to increase the force of intelligence:

1. Make better predictions of the future, which means detecting states of maximum freedom of action.
2. Increase the power to move in the desired direction, by adding more energy or by being clever in finding paths that require fewer resources.

                                                                                                                                Other Implications

The last few days I have not slept well, because my mind was overwhelmed by thinking about the implications of this simple equation. Here are just a few thoughts; each of them needs further exploration and refinement:
• I have read claims that artificial intelligence needs senses and a body to interact with the world. I think this is true, because in order to change the future, an intelligence has to interact with the world (apply forces), and in order to make predictions it has to sense the world. So an AI without a body and senses is no intelligence, because it cannot understand and interact with the world and therefore cannot apply any directed force.
• Intelligence based on this definition might not have a "moral" in the sense we usually use the word. The only goal it has is to maximise its future ability to act. The moral emerges from what has to be done in order to get to the best future. E.g. if you own a company and you treat your employees badly, it might backfire in the future because they may leave or revolt against you.
• Intelligence may flow with the laws of physics but occasionally use force to change the world in a way that opens up new options in the future.
• As Wissner-Gross shows in his video on the Entropica webpage (which is also shown in the TED talk), an AI system that is built to maximise future options automatically chooses "intelligent" goals. For example, a trading robot maximises its portfolio, because that gives the system more freedom of action in the future within that game.
• Politics is often not very good at maximising the future freedom of action of the nation. Instead, politicians are focused on maximising their own power.
• It would be interesting to analyse the way we manage software development, asking how we can maximise the future freedom of action...

                                                                                                                                Conclusions

The simple formula

    F = T ∇ Sτ

explains how intelligence applies forces to the world that maximise its future freedom of action. If we understand and apply it, we might be able to act more intelligently, which means we may put ourselves in a better state for the future. It might provide us with cues on how to act more intelligently as individuals, as families, as communities, as companies, as countries and as mankind.


                                                                                                                                42 comments:

1. Nice, I saw the TED talk ... I just wonder what this has got to do with Eclipse and Java, or planeteclipse.org?

                                                                                                                                  Replies



1. It is only indirectly related to Eclipse. If you look at some of my previous posts, I am very concerned about the future of Eclipse and especially about the problem of the tragedy of the commons. I have a half-finished post that applies the formula of intelligence to the Eclipse ecosystem.

If we assume Eclipse is an "intelligent system", the question for me is: is the sum of many intelligent subsystems (individuals and companies) driving Eclipse in a direction of survival? Or is the sum of forces (each one intelligent for its subsystem) not good for Eclipse as a system?


2. You might want to explore the concepts of intelligence, freedom and commerce. Intelligence requires increased freedom for the system that contains it. An intelligent system can increase its freedom by restricting the freedom of other systems. Commerce increases the freedom of companies by decreasing the freedom of individuals, and individual companies can easily get locked into their own restrictions, ultimately decreasing their freedoms. Laws that restrict commerce ultimately restrict freedoms, and thus restrict intelligence. But companies prefer these laws because they facilitate increased freedom for individual companies in the short term. The goal of companies to produce quarterly and annual results ultimately decreases the freedom and intelligence of the entire system.

                                                                                                                                  Replies



1. This deserves a response :-). I agree with these comments, Tracy. Watching the TED talk excited me too, for several reasons. One part is about what is inside the equation, the other part about what is left out of the equation. The systemic view can easily lead to an orientation towards the "brain". But from a human perspective more is needed. It is noteworthy that nobody comments on this. The comments below are mostly about the inside of the equation (which is also interesting, by the way).


                                                                                                                                3. Thanks Michael. This is a great explanation.
                                                                                                                                4. I found an article that describes the force of intelligence in a really nice way http://physics.aps.org/articles/v6/46
                                                                                                                                5. I am probably missing the point, so help me with this:
>Any system that creates a force in the direction of more freedom to act is an intelligent system.

                                                                                                                                  but once an action is taken, your choices are reduced.
                                                                                                                                  To me this equation says inertia= infinite intelligence since all your possibilities are open.
                                                                                                                                  Isn't observable 'intelligence' about making optimal choices to obtain a 'better' future state?
                                                                                                                                  but... maybe i'm missing the point.


                                                                                                                                  Replies



1. If you choose to open a box, the different futures you can face within a 5-second horizon increase a lot compared with not opening it; so once a decision is taken, another set of options opens in front of you, wider or narrower, who knows... and the goal here is to get the widest possible set of options in the near future.
                                                                                                                                  2. Agreed. Also, Baroquenhorse, I believe your equation is true if you substitute "entropy" for "inertia", and the author states that the ability to achieve entropy is a requirement for developing intelligent systems - though why, I don't understand.
                                                                                                                                  3. This comment has been removed by the author.
                                                                                                                                  4. I think the point is that the future states are vague/approximations, and the system also knows that so it always acts in anticipation of more favorable future options.


                                                                                                                                6. > but once an action is taken, your choices are reduced

True! But not acting is also a choice. But is it the best choice for the future? Sometimes yes, but sometimes no. If you are only floating in the direction you are pulled, you may miss an opportunity to invest a bit (by acting) to get more in the future.

Do you know the famous Marshmallow Experiment (http://en.wikipedia.org/wiki/Stanford_marshmallow_experiment)? The children who were able to resist an immediate attraction (eating a marshmallow) in order to receive a reward in the future (two marshmallows) have been more successful in their adult lives. This is the "act" part.

For us humans it is a balance between enjoying life now and being prepared for the future. If you do not enjoy your life now, you might suffer and get sick. If you use all your resources immediately, you may not have the power to overcome a crisis or to seize an opportunity.

                                                                                                                                  Replies



1. Thank you Michael for responding. I do agree with all you've said. I am not familiar with the marshmallow experiment, but I do know of a similar one with a Hershey bar :)... and I wish I could get my boss to agree that often the best decision is no decision. I think my trouble here is I didn't understand the notion of 'best choice' to be in the equation... action appears to be missing.


7. If you choose not to decide, you still have made a choice. Until you consider 4 things 6 different ways you have not had a thought. If it is a force, it acts upon a field. What does the fact that parity is not conserved say about this new field? Intelligence is a property of entities, or an intelligent field creates entities; more research is needed.

                                                                                                                                  Replies



1. It is not a real force, it is an "entropic force", in the same way that a gas's tendency toward equilibrium can be thought of as the sum of its particles' momenta or as a single "entropic force" applied to the gas as a whole.

The second is not true, and there is no field that corresponds to such a "force", in the same way that no field exists to "force" you to change the channel when a bad comedy starts on your TV.


8. Can this be related to things like 'restricted Boltzmann machines' (RBMs) and auto-encoders in deep-learning neural networks (NNs)?

                                                                                                                                  Replies



1. Yes, but not as part of the idea itself; rather as a general way to "simulate" the future of a system: if you use a neural network to simulate the future of the system, your algorithm will learn and will be usable with systems you can't deterministically simulate.

But once the simulation has been replaced with a neural network, the decision-making algorithm is the same one: use the simulation to calculate the entropy at a given time horizon for each option you are considering, average all the options with their entropy as a weight, and you get your "intelligent force".


                                                                                                                                9. I found your blog entry to be quite interesting and useful. I wonder what your thoughts might be on the following formulation?

From cybernetics (which has been around since the 1940s):
                                                                                                                                  VO = VD + U – (VB + VR)

                                                                                                                                  V represents variety of system states.
                                                                                                                                  VO is the outcome or goal state. Variety is reduced to obtain the desired outcome.
                                                                                                                                  VD is the variety of disturbance (states other than the goal state).
                                                                                                                                  VB is passive regulation.
                                                                                                                                  VR is active regulation – behavior
                                                                                                                                  In active regulation, only variety (VR) can reduce variety (VO)
                                                                                                                                  U is uncertainty.

We can reduce variety (states other than the goal state) in three ways, driving the system to its goal state:
                                                                                                                                  1. Active regulation (VR) – think of a thermostat controlling room temperature
                                                                                                                                  2. Passive regulation (VB) – think of the insulation in the room’s walls
                                                                                                                                  3. Reducing uncertainty - (U).
                                                                                                                                  Through knowledge via learning (model of the world)
                                                                                                                                  By placing the system in approximately the same initial state
                                                                                                                                  By improving awareness of the current state of the system

                                                                                                                                  Relating cybernetics to the equation for intelligence
                                                                                                                                  How is VO = VD + U – (VB + VR) related to F = T ∇Sτ?
                                                                                                                                  ∇Sτ points in the direction of the most flexible state.
                                                                                                                                  • For an intelligent system, ∇Sτ represents the goal state
                                                                                                                                  • From cybernetics, VO is the goal state
                                                                                                                                  • Therefore, VO = ∇Sτ, since both represent the goal state
                                                                                                                                  • Sτ is the entropy field of all the reachable states in the time horizon τ (tau), a freedom of action value for every possible state.
                                                                                                                                  • VD is the variety of disturbance (states other than the goal state).
• Therefore, Sτ = VD, SτB = VB, SτR = VR
                                                                                                                                  • Therefore, ∇Sτ = Sτ + U – (SτB + SτR)
                                                                                                                                  • Factoring in the power to act, T:
                                                                                                                                  F = T [Sτ + U – (SτB + SτR)]

                                                                                                                                  Introducing U from cybernetics explicitly accounts for the intelligent system’s model of the world as well as awareness.
                                                                                                                                  This formulation also makes explicit the value of passive regulation, which conserves T.

Does the above formulation hold water? If so, it seems to support your argument, especially your assertions about the system's model of the world.

                                                                                                                                  Replies



                                                                                                                                  1. Not sure if it holds water, but it makes a lot of sense to me!
2. In this approach there is no "goal state"; instead, you take the decision (in the present) that gives you a wider set of futures, or a bigger entropy gain in a given elapsed time.

If the best possible future starts with a decision that is also taken in a big number of "not so good" futures, it may be better for this algorithm to take a course with less bright futures, but a bigger number of them.

What you really use here is: take now the decision that opens more different futures to you (instead of taking the decision that could drive you to the brightest future).

It is the idea of "entropy" that makes one approach different from the other.


10. Thank you Michael for this explanation. You're not the only one who's really excited about this revelation :-) Whilst my mind is still reeling, I think it may show that morals are a result of intelligence. We give blood not necessarily for any altruistic reason, just because it is a good thing to do that may come back and help us later, increasing our possible future actions for little effort now.

                                                                                                                                  Replies



                                                                                                                                  1. Agreed.

If you think with a 5-second horizon, giving blood is nonsense, but when you think with a 5-year horizon, it makes perfect sense.

                                                                                                                                    The longer you can predict your futures, the smarter you are, or the more "sublime" or "moral" your decisions look.

Also, the better you get at predicting how good or bad a future will be for you, the more "moral" you get: helping others opens more future possibilities to you than any other thing could do, but not all people can "predict" this, so it is very common to hear people say "I helped people in this or that way and I never imagined how good it would make me feel until I tried"... that is not being very good at calculating a future's "goodness".


11. You do realize that this equation inherently makes the universe intelligent, as evolution suggests that the survival instinct is a driving force in the universe.

                                                                                                                                  Replies



1. Yes, according to this definition intelligence is an inherent part of the universe that somehow "unfolds". Higher forms of intelligence are better at predicting the future freedom of action and are better able to move into preferred states.
                                                                                                                                  2. Agreed, but maybe with reservations. What I take away, at least now, is the conclusion that evolution necessarily creates intelligence, because those systems that persist are ones that have created, or inherently possess, freedom of action. I have difficulty with this thought, because one would suppose, as has been said above, that the ability to 'see' the future is required - but I don't think that's part of the original theory. Rather, I think evolution selects for those systems with the greatest freedom of action, simply because freedom of action = variation. So knowing the future isn't required - all that's required is the ability to generate, reliably, enough options that one or many will persist. To put this another way, there seems to be a 'meta' level to evolution.
3. I agree that 'seeing' the future is not required. However, if you look at 'intelligent' systems in retrospect, you will see that they 'predicted' the future better. Instead of predicting the future, one could say they have a better internal model of the world. And this model detects the direction of the gradient of future freedom of action. Here, the time horizon τ (tau) plays an important role. Sometimes following the best short-term direction is not the best in the long term. A classical example from human intelligence is the marshmallow experiment: children that could delay gratification are more successful in life.

                                                                                                                                    If a system evaluates the future within a short time horizon, it may burn all its resources immediately because this will maximize freedom within that time horizon, but it may freeze the system in the long run.

In some way 'survival of the fittest' and 'maximizing future freedom of action' are equivalent. But the equation of intelligence gives a measure of what the 'fittest' is: the fittest systems are the ones that keep their freedom of action.

                                                                                                                                    Note that the equation of intelligence applies to systems at all levels: from abiotic adaptive systems to societies.
4. More intelligence means improved ability to predict future states (Sτ) and power or ability to act (T), which should accelerate exponentially based on the equation until it approaches infinity. To me the presence of intelligence explains the acceleration in the expansion of the universe much better than dark energy does.


                                                                                                                                12. Thanks for the very intelligent (pun intended) explanation, Michael!
                                                                                                                                13. All you did was describe "Systems Analysis"
                                                                                                                                  http://en.wikipedia.org/wiki/Systems_analysis
14. Hi Michael, I read the article just when it came out, and I couldn't really sleep well until some months ago, when (I hope) I understood the implications completely (well, at least the basic implications).

I have a blog about it (entropicai.blogspot.com) with videos, a working exe and source code (for Delphi 7, Object Pascal, but maybe I will port it to Java some day to use it in some Android games with intelligent bad guys).

I LOVED reading your post, as it sounds quite like me talking about it to everyone around me! (btw, I am a mathematician but love physics, the weirder the better).

Most of your thoughts are shared by me, but I find it not appropriate to talk about "freedom of action"; it really uses "entropy at a future horizon", where microstates (in the present) are switched to macrostates but measured at a future time.

I also managed to "implant" goals into the intelligence, make some agents cooperate or compete, and some other goodies, just by manipulating the metric in the state space (entropy gained from state A to B is a metric, so I changed it to test and... oops, it worked nicely).

                                                                                                                                  If you want to play around with the exe and the code, I will be delighted to hear from you, really, so feel free to... anything.

                                                                                                                                  Replies



                                                                                                                                  1. I looked at your site, it is really impressive what you have done! Very interesting that you have successfully used the equation to model intelligence :-)
2. > I find it not appropriate to talk about "freedom of action", it really uses "entropy at a future horizon"

I would agree that this is way more precise, but "freedom of action" is easier to understand in real life. Or do you think that "entropy at a future horizon" is very different from "freedom of action"?


                                                                                                                                15. It definitely is an interesting way to look at intelligent behavior in animals and in humans. In fact, the behavior of all species is driven toward positions of safety and security for survival.

                                                                                                                                  In human societies and in martial traditions you see this same principle of moving toward the most strategic or best tactical position. Within society this usually is measured in terms of positions of greater status, wealth or power that translates generally into greater freedom of action. However, a narrow materialistic view on the individual level does not automatically produce greater collective freedom of action.

                                                                                                                                  Thus, larger values that aren't utopian tend to work out better for everyone.

                                                                                                                                  Another concept that is different and yet similar is the following formulation: Philosophical Assumptions define Theoretical Models determining Practical Applications.

So intelligence is not merely a force seeking greater freedom of action but also a set of reality constructs whereby one relates to the environment. These constructs can be more or less effective in terms of greater freedom of action.

                                                                                                                                  Interesting stuff...
16. ...is attention intelligence a subsystem related to our senses as variables?
17. Fascinating observation and formula, but I think you ignored a point in the "How to increase the force of intelligence?
There are two ways to increase the force of intelligence:

Make better predictions of the future, which means detecting states of maximum freedom of action
Increase the power to move in the desired direction, by adding more energy or by being clever in finding paths that require fewer resources." paragraph;
our world has many imperfections and much unpredictability, and blind evolution comes in here:

                                                                                                                                  3. Making imperfect variations to cope with unpredictability
                                                                                                                                  I enjoyed your post.
                                                                                                                                18. Great stuff! I’m in the process of codifying evolution for practical use. http://wp.me/p4neeB-4Y The current reference point is "increasing ROI" (in general, not just a near term bottom line). This is a past to present measure. Maximizing future freedom within a scope seems like a present to future measure. I am struggling to define future options when choices are like “hire 5 people”, "hire 9 people”.

                                                                                                                                  You ask, "It would be really interesting to analyse how good different believe systems are in terms of maximising the future freedom of the system" If you have a classification that holds each belief system (within an economic context) that tracks costs, that’s easy to display. That's a big part of my project.
                                                                                                                                  ReplyDelete

                                                                                                                                  Replies



1. When maximizing Return on Investment, the question is: what is the return? The answer of the equation for intelligence is that the "return" is to not be trapped.

The difficulty with the equation comes from the unpredictability of the future. We live in a world where at any time unpredictable events can happen (a heart attack, an earthquake, the impact of an asteroid...), and even the "predictable" is not so predictable. But if you make many decisions, and if your decisions are a bit better on average than the predictions of other systems, you may gain more freedom of action in the long run. So "hire 5 people" or "hire 9 people" is just one of the many decisions that add up. Maybe it's not the quantity of people you hire that matters, but the quality.

When it comes to "belief systems", I think the ones that best observe "reality" and draw the best conclusions will win. In some cases, humans are driven by a utopia or by a belief that is not based on reality. On the other hand, some religions seem to be very successful, even if their beliefs are not based on "reality".

As an individual you might come to the conclusion that not following the rules of society (or religion) gives you an advantage. But if you look at society as a whole as an intelligent system, individual freedom might undermine the future freedom of action of the entire system. Take birth rates: for an individual, a child might reduce the future freedom of action. So if all intelligent individuals decide not to have children, it might be bad for the system. Likewise, if everybody believes having many children is good, then the system might starve from overpopulation.

"The tragedy of the commons" is a classical problem where the interests of individual members of the system undermine the interest of the entire system.


19. I just want to add my two cents to this whole topic. From reading the above posts, I see the words force and freedom, and this immediately reminds me of Friedrich Nietzsche's concept of "The Will to Power". Also, a poker game algorithm might be closely related to this whole thing, since acquiring a huge stack allows one more freedom of action in the game.

                                                                                                                                  Replies



                                                                                                                                  1. Good point! It did not occur to me that 'force' and 'freedom' are used together in this context and that freedom and force are are often seen as opponents.

                                                                                                                                    I think, freedom in general requires the power/force to defend itself, else its options may be reduced (by other powers) and at the end the freedom of choice is gone. Using force to defend freedom takes resources. Therefore intelligent systems minimizes the power used to defend its freedom but it does not neglect it.

So I think it may well be related to Nietzsche's "Will to Power" ("Wille zur Macht").


20. I'm the Anonymous who posted replies above on today's date, and I wanted to make some general comments too. First, the equation seems to indicate a direction to evolution, which I had supposed was impossible - but on further thought, evolution already seems directed toward reducing resource demand. Second, there also seems to be a general law, not stated yet AFAIK, that networks tend to concentrate over time - but in some way this tendency must oppose entropy. So third, there seems to me to be a need for a reconciliation between these ideas - evolution, intelligence, concentration and entropy.
21. Interesting - the work done by Colonel John Boyd to form the OODA loop might provide a decision-system framework to test the rigor of this equation within the context of all known tactical flight patterns, as documented to date. Entropy and time distortion are the crux of Boyd's strategic approach to winning ... some parallels here.





                                                                                                                                A new Equation for Intelligence F = T ∇ Sτ - a Force that Maximises the Future Freedom of Action

                                                                                                                                Intelligence is a Force with the Power to Change the World


Describing intelligence as a physical force that maximises the future freedom of action adds a new aspect to intelligence that is often forgotten: the power to change the world. This, I think, was the biggest revelation for me when I started thinking about the new equation for intelligence. The second revelation was that intelligent systems are survival engines that increase their chances of survival by maximising a single quantity: the freedom of action. Both insights may sound trivial or obvious, but I don't think they are.

A few days ago I saw the TED talk "A new equation for intelligence" by Alex Wissner-Gross. He presents an equation he published in April 2013 in a physics journal. It may not be the most impressive talk I have ever seen, and I had to watch it twice to fully understand it. But the message excites me so much that I haven't slept well for days. I thought everybody must be excited about this equation, but it seems that this is not the case. Either I am not understanding it correctly or others don't get it. Or maybe it resonates with me because I am a physicist with a strong background in computing who has done research in computational biology. To find out, let me explain my understanding of the equation. Please tell me what you think and what's wrong with my excitement (I need sleep)...

So, why did the equation blow me away? Because this very simple physical equation can guide us in our decisions, and it makes intelligent behaviour measurable and observable. It adds a new, real physical force to the world: the force of intelligence. From the equation we can deduce algorithms to act intelligently, as individuals, as societies and as mankind. And we can build intelligent machines using the equation. Yes, I know, you may ask: "How can the simple equation F = T ∇ Sτ do all of that?"

                                                                                                                                Intelligence is a Force that Maximises the Future Freedom of Action

                                                                                                                                Before we look at the equation in more detail, let me describe its essence in every day terms. Like many physical laws or equations the idea behind it is simple:
                                                                                                                                • Intelligence is a force that maximises the future freedom of action.
• It is a force that keeps options open.
                                                                                                                                • Intelligence doesn't like to be trapped.
But what is necessary to keep options open and not to be trapped? Intelligence has to predict the future and change the world in a direction that leads to the "best possible future". In order to predict the future, an intelligent system has to observe the world and create a model of the world. Since the future is not deterministic, the prediction has to be based on some heuristics; prediction is a kind of statistical process. In order to change the world, the intelligence has to interact with the world. Just thinking about the world, without acting, is not intelligence, because it produces no measurable force (well, sometimes it is intelligent not to act, because the physical forces already drive you in the right direction, but that is a way of optimising resources). The better it can predict the future and the better it can change the world in the desired direction, the more intelligent the system is.
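As a toy illustration of this observe–predict–act loop (a sketch of my own, not taken from the talk or the paper): an agent on a grid scores each move by how many distinct states it could still reach within a horizon τ, and picks the move that keeps the most options open. The grid, the walls and the helper names are invented for the example.

```python
# Toy sketch (my own, not from the paper): pick the action that keeps
# the largest number of distinct states reachable within a horizon tau.

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left

def reachable_states(state, walls, tau):
    """Breadth-first set of all states reachable in at most tau steps."""
    frontier, seen = {state}, {state}
    for _ in range(tau):
        nxt = set()
        for (x, y) in frontier:
            for dx, dy in ACTIONS:
                s = (x + dx, y + dy)
                if s not in walls:
                    nxt.add(s)
        frontier = nxt - seen
        seen |= nxt
    return seen

def choose_action(state, walls, tau=5):
    """Greedy 'freedom of action' policy: maximise future reachable states."""
    def score(a):
        s = (state[0] + a[0], state[1] + a[1])
        return -1 if s in walls else len(reachable_states(s, walls, tau))
    return max(ACTIONS, key=score)

# Example: standing at the mouth of a dead-end corridor, the agent avoids it.
walls = {(1, y) for y in range(-3, 1)} | {(-1, y) for y in range(-3, 1)} | {(0, -3)}
print(choose_action((0, 0), walls))   # (0, 1): moves away from the dead end
```

This is of course only the "keep options open" part of the story; it counts reachable states instead of measuring entropy and knows the world model exactly, which a real intelligent system does not.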

                                                                                                                                The new Equation for Intelligence F = T ∇ Sτ

                                                                                                                                Note: skip this section, if you are not interested in understanding the mathematics of the equation!


                                                                                                                                This is the equation:

F = T ∇ Sτ

where F is the force, a directed force (therefore it is bold), T is a system temperature, Sτ is the entropy field of all states reachable within the time horizon τ (tau), and ∇ is the nabla operator: the gradient operator that "points" into the direction of the state with the most freedom of action. If you are not a physicist this might sound like nonsense. Before I try to explain the equation in more detail, let's look at another physical equation of force.

The intelligence equation is very similar to the equation for the force derived from a potential energy, F = −∇ Wpot, where Wpot is the potential energy at each point in space. The force F pulls in the direction of lower energy; this is why gravitation pulls us towards the center of the earth. Or think of a landscape: at each point the force points downhill, in the direction a ball would roll if it started at that point. The strength of the force is determined by the steepness of the slope: the steeper the slope, the stronger the force. Just as the ball is pulled downhill by the gravitational force to reach the state with the lowest energy, an intelligent system is pulled by the force of intelligence into a future with the lowest number of limitations. In physics we use the nabla operator, or gradient, to turn such a "landscape" into a directed force (a force field).
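As a quick numerical illustration of this gradient picture (a sketch of my own, with a made-up bowl-shaped potential): the force is the negative gradient of the potential, it points downhill, and its magnitude grows with the steepness of the slope.

```python
# Sketch: the force F = -grad(W_pot) points downhill on a potential landscape.
# The bowl-shaped potential below is a made-up example.
import numpy as np

def w_pot(x, y):
    """A simple bowl: minimum (lowest energy) at the origin."""
    return x**2 + 2 * y**2

def force(x, y, h=1e-5):
    """Numerical negative gradient of the potential at (x, y)."""
    dwdx = (w_pot(x + h, y) - w_pot(x - h, y)) / (2 * h)
    dwdy = (w_pot(x, y + h) - w_pot(x, y - h)) / (2 * h)
    return np.array([-dwdx, -dwdy])

print(force(1.0, 0.0))   # ~[-2, 0]: pulled back toward the minimum at the origin
print(force(0.0, 1.0))   # ~[0, -4]: steeper slope in y, so a stronger pull
```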

Back to our equation F = T ∇ Sτ. What it says is that intelligence is a directed force F that pulls in the direction of states with more freedom of action. T is a kind of temperature that defines the overall strength (the available resources) the intelligent system has (heat can do work; think of a steam engine: the more heat, the more power). Sτ is the "freedom of action" of each state that can be reached by the intelligence within a time horizon τ (tau); the time horizon is how far into the future the intelligence can predict. Alex Wissner-Gross uses the notion of entropy S to express the freedom of action in the future, and the force of intelligence points in that direction. As we have seen, in physics the direction of the force at each state is calculated by a gradient operation ∇ (think of the direction the ball is pulled). The nabla operator ∇ assigns a directional vector (the direction of the force of intelligence) to each state (in our case: all possible future states). The more freedom of action a state provides, the stronger the force pulling in that direction. So ∇Sτ points in the direction with the most freedom of action, and the multiplication with T means that the more power we have to act, the stronger the force can be.
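Here is a toy Monte Carlo sketch of my own (not the authors' Entropica engine, and much cruder than the method in the paper): for a random walker next to a wall, it estimates the entropy of sampled future end states on either side of the current position and takes T times the finite-difference slope as a stand-in for T ∇ Sτ. The wall position, the rollout model and the helper names (rollout, future_entropy, entropic_force) are all made up for illustration.

```python
# Toy sketch (mine, not Entropica): approximate an entropic force on a
# 1-D random walker trapped near a wall by sampling possible futures.
import numpy as np

rng = np.random.default_rng(0)
WALL = 0            # positions below 0 are forbidden

def rollout(x0, tau):
    """One sampled future history of length tau; the wall clips the walk."""
    x = x0
    for _ in range(tau):
        x = max(WALL, x + rng.choice([-1, 0, 1]))
    return x

def future_entropy(x0, tau=20, n=4000):
    """Shannon entropy (in nats) of the sampled distribution of end states."""
    ends = [rollout(x0, tau) for _ in range(n)]
    _, counts = np.unique(ends, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

def entropic_force(x0, T=1.0):
    """F ~ T * dS/dx, estimated by a finite difference of the entropy."""
    return T * (future_entropy(x0 + 1) - future_entropy(x0 - 1)) / 2.0

print(entropic_force(2))   # expected to be positive: pushed away from the wall,
                           # toward positions with more diverse futures
```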

Note: the optimal future state is the optimal state from the viewpoint of the intelligent system. It might not be the optimal state for other systems or for the entire system.

If you want to understand the equation in more detail, read the original paper 'Causal Entropic Forces' by A. D. Wissner-Gross and C. E. Freer.


                                                                                                                                Understanding the Laplace operator conceptually

                                                                                                                                The Laplace operator: those of you who now understand it, how would you explain what it "does" conceptually? How do you wish you had been taught it?
Any good essays (combining both history and conceptual understanding) on the Laplace operator, and its subsequent variations (e.g. Laplace-Beltrami), that you would highly recommend?

5 Answers


The Laplacian Δf(p) is the lowest-order measurement of how f deviates from f(p) "on average" - you can interpret this either probabilistically (the expected change in f as you take a random walk) or geometrically (the change in the average of f over balls centred at p). To make this second interpretation precise, write the Taylor series

$$f(p+x) = f(p) + f_i(p)\,x^i + \tfrac{1}{2} f_{ij}(p)\,x^i x^j + \cdots$$

and integrate:

$$\int_{B_r(p)} f = f(p)\,V(B_r) + f_i(p) \int_{B_r(0)} x^i \,dx + \tfrac{1}{2} f_{ij}(p) \int_{B_r(0)} x^i x^j \,dx + \cdots$$

The integrals ∫ x^i dx vanish because x^i is an odd function under reflection in the x^i direction, and similarly the integrals ∫ x^i x^j dx vanish whenever i ≠ j; so this simplifies to

$$\frac{1}{V(B_r)} \int_{B_r(p)} f = f(p) + C\,\Delta f(p)\,r^2 + \cdots$$

where C is a constant depending only on the dimension.
The Laplace-Beltrami operator is essentially the same thing in the more general Riemannian setting - all the nasty curvy terms will be higher order, so the same formula should hold.
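A quick numerical check of this expansion (a sketch of my own): for f(x,y) = x² + 3y² the Laplacian is 8 everywhere, and in two dimensions the constant works out to C = 1/(2(d+2)) = 1/8, so the average over a small disc should exceed f(p) by roughly Δf·r²/8.

```python
# Numerical check (my own sketch) of the ball-average formula in 2-D, where
# C = 1/(2(d+2)) = 1/8:   mean over B_r(p) of f  ≈  f(p) + (1/8) Δf(p) r².
import numpy as np

rng = np.random.default_rng(1)

def f(x, y):
    return x**2 + 3 * y**2      # Δf = 2 + 6 = 8 everywhere

p = np.array([0.7, -0.2])
r = 0.05
n = 200_000

# Sample points uniformly in the disc of radius r around p.
theta = rng.uniform(0, 2 * np.pi, n)
rho = r * np.sqrt(rng.uniform(0, 1, n))
xs = p[0] + rho * np.cos(theta)
ys = p[1] + rho * np.sin(theta)

ball_avg = f(xs, ys).mean()
predicted = f(*p) + (1 / 8) * 8 * r**2
print(ball_avg, predicted)      # the two agree to within Monte Carlo noise
```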
Dear Anthony, I have accepted your answer, but wonder if you could write a little about why the Laplace operator is sometimes called the "diffusion operator"? Also, do I understand it correctly that in only one "spatial dimension" x, it simply reduces to the second derivative? I.e. Δu = u_xx? –  user89 May 28 '14 at 8:35

@user89: Correct on the second question. Diffusion is a process that smooths out some density function by changing the value at each point to be closer to the values at surrounding points, and is modelled by the equation ∂f/∂t = Δf. –  Anthony Carapetis May 28 '14 at 10:08

+1 for "all the nasty curvy terms will be higher order" –  Neal Jun 12 '14 at 13:41

I think the most important property of the Laplace operator Δ is that it is invariant under rotations. In fact, if a differential operator on Euclidean space is rotation and translation invariant, then it must be a polynomial in Δ. That is why it is of such prominence in physical problems.
                                                                                                                                Some good books on the subject:
                                                                                                                                1. Rosenberg's The Laplacian on a Riemannian Manifold.
                                                                                                                                2. Gurarie's Symmetries and Laplacians.
                                                                                                                                Is the Laplacian invariant under rotations because it takes into account all points in the neighbourhood of the point in question? –  user89 May 28 '14 at 8:32
Yes, because it takes into account all the points in a symmetric way. –  John M May 28 '14 at 12:59

John, if I consider the function f(x) = x² in 2D, rotating my axes can give me different values for the second derivative at a particular point (the second derivative being what the Laplace operator reduces to in one spatial dimension). This does not seem to be invariant -- why is that? –  user89 May 29 '14 at 0:23

Nice thing to try to check! It is invariant: If f(x,y) = x² in ℝ², then Δf = ∂²f/∂x² + ∂²f/∂y² = 2 + 0 = 2. On the other hand, if you rotate your axes 45 degrees clockwise, you get f(x,y) = (x+y)²/2 = x²/2 + xy + y²/2. This also has Δf = 1 + 1 = 2. Rotate your axes another 45 degrees to get f(x,y) = y². Again Δf = 0 + 2 = 2. –  John M May 29 '14 at 1:34

Ah. Hmm. I was simply flipping the graph of f(x) = x², so that in some new axis system it would be f′(x′) = −(x′)² (i.e. rotating it 180 degrees). Then the second derivative of that would be −2. I know I am making an incredibly silly error here, but I can't seem to be able to catch it. –  user89 May 29 '14 at 5:13
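A finite-difference check of John M's computation above (a sketch of my own): the five-point-stencil Laplacian of x², and of the same function rewritten in axes rotated by 45 degrees, both come out as 2.

```python
# Sketch: finite-difference check that the Laplacian of f = x^2 is 2
# in the original axes and in axes rotated by 45 degrees.
import numpy as np

def laplacian(f, x, y, h=1e-4):
    """Five-point-stencil approximation of Δf at (x, y)."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

f_original = lambda x, y: x**2
f_rotated = lambda x, y: (x + y)**2 / 2        # x^2 rewritten in 45-degree-rotated axes

print(laplacian(f_original, 0.3, -0.8))   # ≈ 2
print(laplacian(f_rotated, 0.3, -0.8))    # ≈ 2
```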

To gain some (very rough) intuition for the Laplacian, I think it's helpful to think of the Laplacian on ℝ, which is just the second derivative d²/dx². (This answer may be more elementary than the OP was looking for, but I wish I had kept some of these things in mind when I first learned about the Laplacian.)
Just as Anthony's answer discusses, the second derivative at p ∈ ℝ measures how much f(p) deviates from average values of f on either side of it. If the second derivative is positive, then f(p) is smaller than the average of f(p+h) and f(p−h) for small h. (As I would tell my calculus students, the trapezoid rule for Riemann sums is an overestimate when the second derivative is positive.)
Generally, a function is harmonic if and only if it satisfies the mean value property. In ℝ, harmonic functions are simply linear polynomials, which of course are precisely the functions that satisfy the mean value property.
The maximum principle states roughly that if Δu ≥ 0, then local maxima of u do not occur. This is a generalization of the familiar "second derivative test" from calculus, which says that if the second derivative of u is positive, then local maxima of u do not occur (the graph of u is concave up).
Finally, let me go up one dimension and mention some of my intuition for harmonic functions u(x,y) of two variables, in which case Δu = ∂²u/∂x² + ∂²u/∂y². If u is harmonic, then ∂²u/∂x² = −∂²u/∂y². This says that the graph of u must always look like a saddle: if, say, the graph is concave up in the x-direction (∂²u/∂x² > 0), then it must be concave down in the y-direction (∂²u/∂y² < 0). When I picture a saddle-shaped graph in my head, I think I can also see why the maximum principle has to hold for harmonic functions, since a saddle has no local extrema.
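To tie this to the mean value property mentioned above (a small sketch of my own): for the harmonic saddle u(x,y) = x² − y², the average over any circle equals the value at its centre, while for the non-harmonic bowl x² + y² it does not.

```python
# Sketch: the harmonic saddle u = x^2 - y^2 satisfies the mean value property;
# the non-harmonic bowl x^2 + y^2 does not.
import numpy as np

theta = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
cx, cy, r = 1.0, 2.0, 0.5                      # circle centre and radius

def circle_average(u):
    """Average of u over the circle of radius r centred at (cx, cy)."""
    return u(cx + r * np.cos(theta), cy + r * np.sin(theta)).mean()

saddle = lambda x, y: x**2 - y**2              # Δu = 0 (harmonic)
bowl   = lambda x, y: x**2 + y**2              # Δu = 4 (not harmonic)

print(circle_average(saddle), saddle(cx, cy))  # both ≈ -3.0: mean value property holds
print(circle_average(bowl),   bowl(cx, cy))    # ≈ 5.25 vs 5.0: average exceeds the
                                               # centre value because Δu > 0
```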

Another view along the lines of the answers above:
Suppose you have some region in the plane Ω, and you are given the value of some scalar function f along the boundary ∂Ω. You now want to fill in f on the interior of Ω "as smoothly as possible." (A common physical interpretation is that f is the heat of the region: you are fixing the temperature of the boundary of Ω and want to know what the temperature on the interior will be at steady state.)
What does "as smooth as possible" mean? Well, one measure of the smoothness of f is to look at its gradient ∇f and measure

$$E(f) = \int_\Omega \|\nabla f\|^2 \, dA.$$

Notice that this integral, called the Dirichlet energy of f, achieves its lowest possible value of 0 when f is constant. The less smooth (to first order) that f is, the higher the Dirichlet energy will be. Making f as smooth as possible means finding the f that satisfies the boundary conditions and minimizes E.
How do we minimize E? We "take the derivative and set it to zero":

$$\nabla_f E = 0.$$

It may look a little weird to differentiate a scalar (the Dirichlet energy) with respect to a function, but the idea is the same as when you work with the ordinary gradient. Recall that for an ordinary scalar function g(x,y,z): ℝ³ → ℝ, the gradient ∇g at a point is the unique vector that, when you dot it with any direction v, tells you the directional derivative of g in that direction:

$$\nabla g(x,y,z) \cdot v = \left.\frac{d}{dt}\, g[(x,y,z) + tv]\right|_{t=0}.$$

The gradient of E works the same way: it gives you the unique function over Ω that, when you take the inner product of ∇E(f) with any variation δf of f, gives you the directional derivative of E in that "direction":

$$\int_\Omega \nabla E(f)\, \delta f \, dA = \left.\frac{d}{dt}\, E(f + t\,\delta f)\right|_{t=0}.$$

You can do the multivariable calculus, and after some integration by parts you will see that ∇E(f) = −Δf (up to a constant factor). Several takeaways from this:
• The function f that interpolates the boundary conditions as smoothly as possible (in the sense of minimizing the Dirichlet energy) is the solution to the Laplace equation Δf = 0.
• Given some function f that interpolates the boundary conditions but does not minimize the Dirichlet energy, the gradient of E, −Δf, is the "direction of steepest ascent" of E -- the direction to change f if you want to most quickly increase E. The negative of this, Δf, is the direction that most quickly decreases E: if you are trying to smooth f, this is then the direction that you want to flow f in. This insight leads to the heat equation

  $$\frac{df}{dt} = \Delta f$$

  which, given initial temperatures on Ω, flows in the direction that best decreases the Dirichlet energy until the heat has diffused as smoothly as possible over the surface.
• Nowhere in the above discussion was it essential that Ω was a piece of a plane: as long as you can define functions on Ω and take gradients of f to get the Dirichlet energy, the above works equally well, and is one way of motivating the Laplace-Beltrami operator on arbitrary manifolds in ℝ³. The physical picture here is that you have some conductive plate in empty space, heat up the boundary of the plate, and look at how the heat equalizes over the plate.
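A small numerical sketch of my own of the heat-flow takeaway above: on a 1-D grid with fixed endpoint values, repeatedly stepping in the direction of the discrete Laplacian drives the Dirichlet energy down until f settles to the harmonic interpolant, which in 1-D is just a straight line. Grid size, time step and boundary values are made up for the example.

```python
# Sketch: discrete heat flow df/dt = Δf on a 1-D grid with fixed endpoints.
# The Dirichlet energy decreases and f converges to the harmonic interpolant
# (a straight line in 1-D).
import numpy as np

n, dt = 50, 0.2                      # grid points and a stable explicit time step
f = np.zeros(n)
f[0], f[-1] = 1.0, 3.0               # fixed boundary "temperatures"
f[1:-1] = np.random.default_rng(2).uniform(-5, 5, n - 2)   # rough initial guess

def dirichlet_energy(f):
    return 0.5 * np.sum(np.diff(f) ** 2)

for step in range(20_000):
    lap = f[:-2] - 2 * f[1:-1] + f[2:]           # discrete Laplacian at interior points
    f[1:-1] += dt * lap                          # flow in the direction that lowers E
    if step % 5_000 == 0:
        print(step, dirichlet_energy(f))         # energy keeps decreasing

print(np.allclose(f, np.linspace(1.0, 3.0, n), atol=1e-3))   # True: a straight line
```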
It's interesting that some approaches to image denoising / deblurring / restoration minimize an objective function that contains a discrete version of the Dirichlet energy of the restored image, in order to encourage the restored image to be smooth. More sophisticated approaches use a penalty term E(f) = ∫_Ω ‖∇f‖₁ dA, which allows sharp edges in an image to be preserved. (It seems interesting to work out a differential equation based on this energy, as you did for the Dirichlet energy.) –  littleO Jun 3 '14 at 6:42

Here is some intuition:
I think the most basic thing to know about the Laplacian Δ is that Δ = div ∇, and −div is the adjoint of ∇. Hence, −Δ has the familiar form AᵀA, which recurs throughout linear algebra. We see that −Δ is a self-adjoint positive semidefinite operator, and so we would expect (or hope) that the familiar properties of positive semidefinite operators in linear algebra hold true for it. Namely, we expect that −Δ has real nonnegative eigenvalues, and that there should exist (in some sense) an orthonormal basis of eigenfunctions. This provides some intuition or motivation for the topic of "eigenfunctions of the Laplacian". (By the way, I think the Laplacian should have been defined to be −div ∇.)
Notice that the integration by parts formula can be interpreted as telling us that −d/dx is the adjoint of d/dx (in a setting where boundary terms vanish). Fourier series can be discovered by computing the eigenfunctions of the anti-self-adjoint operator d/dx in an appropriate setting. Moreover, a multivariable integration by parts formula can be interpreted as telling us that −div is the adjoint of ∇. Green's second identity can be interpreted as expressing the self-adjointness of the Laplacian.
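A discrete illustration of my own of this adjointness picture: on a path graph, the forward-difference matrix B plays the role of ∇, −Bᵀ plays the role of div, and L = BᵀB is the discrete analogue of −Δ; numpy confirms it is symmetric with real nonnegative eigenvalues.

```python
# Sketch: on a path graph, B (forward differences) stands in for the gradient,
# -Bᵀ for the divergence, and L = BᵀB for -Δ.  L is symmetric positive
# semidefinite, so its eigenvalues are real and nonnegative.
import numpy as np

n = 6
# B maps n node values to n-1 edge differences: (Bf)_i = f[i+1] - f[i]
B = np.diff(np.eye(n), axis=0)     # shape (n-1, n)
L = B.T @ B                        # discrete analogue of -Laplacian

eigenvalues = np.linalg.eigvalsh(L)
print(np.allclose(L, L.T))         # True: self-adjoint
print(np.round(eigenvalues, 6))    # all >= 0; the smallest is 0, for the constant vector
```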
