http://www.insidescience.org/content/physicist-proposes-new-way-think-about-intelligence/987
A radical concept could revise theories addressing cognitive behavior.
Originally published:
Apr 19 2013 - 4:30pm
(ISNS) -- A single equation grounded in basic physics principles could describe intelligence and stimulate new insights in fields as diverse as finance and robotics, according to new research.
Alexander Wissner-Gross, a physicist at Harvard University and the Massachusetts Institute of Technology, and Cameron Freer, a mathematician at the University of Hawaii at Manoa, developed an equation that they say describes many intelligent or cognitive behaviors, such as upright walking and tool use.
The researchers suggest that intelligent behavior stems from the impulse to seize control of future events in the environment. This is the exact opposite of the classic science-fiction scenario in which computers or robots become intelligent, then set their sights on taking over the world.
The findings describe a mathematical relationship that can "spontaneously induce remarkably sophisticated behaviors associated with the human 'cognitive niche,' including tool use and social cooperation, in simple physical systems," the researchers wrote in a paper published today in the journal Physical Review Letters.
"It's a provocative paper," said Simon DeDeo, a research fellow at the Santa Fe Institute, who studies biological and social systems. "It's not science as usual."
Wissner-Gross said the research was "very ambitious" and cited developments in multiple fields as its major inspirations.
The mathematics behind the research comes from the theory of how heat energy can do work and diffuse over time, called thermodynamics. One of the core concepts in physics is called entropy, which refers to the tendency of systems to evolve toward larger amounts of disorder. The second law of thermodynamics explains how in any isolated system, the amount of entropy tends to increase. A mirror can shatter into many pieces, but a collection of broken pieces will not reassemble into a mirror.
The new research proposes that entropy is directly connected to intelligent behavior.
"[The paper] is basically an attempt to describe intelligence as a fundamentally thermodynamic process," said Wissner-Gross.
The researchers developed a software engine, called Entropica, and gave it models of a number of situations in which it could demonstrate behaviors that greatly resemble intelligence. They patterned many of these exercises after classic animal intelligence tests.
In one test, the researchers presented Entropica with a situation where it could use one item as a tool to remove another item from a bin, and in another, it could move a cart to balance a rod standing straight up in the air. Governed by simple principles of thermodynamics, the software responded by displaying behavior similar to what people or animals might do, all without being given a specific goal for any scenario.
"It actually self-determines what its own objective is," said Wissner-Gross. "This [artificial intelligence] does not require the explicit specification of a goal, unlike essentially any other [artificial intelligence]."
Entropica's intelligent behavior emerges from the "physical process of trying to capture as many future histories as possible," said Wissner-Gross. Future histories represent the complete set of possible future outcomes available to a system at any given moment.
Wissner-Gross calls the concept at the center of the research "causal entropic forces." These forces are the motivation for intelligent behavior. They encourage a system to preserve as many future histories as possible. For example, in the cart-and-rod exercise, Entropica controls the cart to keep the rod upright. Allowing the rod to fall would drastically reduce the number of remaining future histories, or, in other words, lower the entropy of the cart-and-rod system. Keeping the rod upright maximizes the entropy. It maintains all future histories that can begin from that state, including those that require the cart to let the rod fall.
"The universe exists in the present state that it has right now. It can go off in lots of different directions. My proposal is that intelligence is a process that attempts to capture future histories," said Wissner-Gross.
The research may have applications beyond what is typically considered artificial intelligence, including language structure and social cooperation.
DeDeo said it would be interesting to use this new framework to examine Wikipedia, and research whether it, as a system, exhibited the same behaviors described in the paper.
"To me [the research] seems like a really authentic and honest attempt to wrestle with really big questions," said DeDeo.
One potential application of the research is in developing autonomous robots, which can react to changing environments and choose their own objectives.
"I would be very interested to learn more and better understand the mechanism by which they're achieving some impressive results, because it could potentially help our quest for artificial intelligence," said Jeff Clune, a computer scientist at the University of Wyoming.
Clune, who creates simulations of evolution and uses natural selection to evolve artificial intelligence and robots, expressed some reservations about the new research, which he suggested could be due to a difference in jargon used in different fields.
Wissner-Gross indicated that he expected to work closely with people in many fields in the future in order to help them understand how their fields informed the new research, and how the insights might be useful in those fields.
The new research was inspired by cutting-edge developments in many other disciplines. Some cosmologists have suggested that certain fundamental constants in nature have the values they do because otherwise humans would not be able to observe the universe. Advanced computer software can now compete with the best human players in chess and the strategy-based game called Go. The researchers even drew from what is known as the cognitive niche theory, which explains how intelligence can become an ecological niche and thereby influence natural selection.
The proposal requires that a system be able to process information and predict future histories very quickly in order for it to exhibit intelligent behavior. Wissner-Gross suggested that the new findings fit well within an argument linking the origin of intelligence to natural selection and Darwinian evolution -- that nothing besides the laws of nature is needed to explain intelligence.
Although Wissner-Gross suggested that he is confident in the results, he allowed that there is room for improvement, such as incorporating principles of quantum physics into the framework. Additionally, a company he founded is exploring commercial applications of the research in areas such as robotics, economics and defense.
"We basically view this as a grand unified theory of intelligence," said Wissner-Gross. "And I know that sounds perhaps impossibly ambitious, but it really does unify so many threads across a variety of fields, ranging from cosmology to computer science, animal behavior, and ties them all together in a beautiful thermodynamic picture."
A new Equation for Intelligence F = T ∇ Sτ - a Force that Maximises the Future Freedom of Action
Intelligence is a Force with the Power to Change the World
Describing intelligence as a physical force that maximises the future freedom of action adds a new aspect to intelligence that is often forgotten: the power to change the world. This, I think, was the biggest revelation for me when I started thinking about the new equation for intelligence. The second revelation was that intelligent systems are survival engines that increase their chances of survival by maximising a single quantity: the freedom of action. Both insights may sound trivial or obvious, but I don't think they are.
A few days ago I saw the TED talk "A new equation for intelligence" by Alex Wissner-Gross. He presents an equation he published in April 2013 in a physics journal. It may not be the most impressive talk I have ever seen, and I had to watch it twice to fully understand it. But the message excites me so much that I haven't slept well for a few days. I thought everybody must be excited about this equation, but it seems that this is not the case. Either I am not understanding it correctly or others don't get it. Or maybe it resonates with me because I am a physicist with a strong background in computing who has done research in computational biology. To find out, let me explain my understanding of the equation. Please tell me what you think and what's wrong with my excitement (I need sleep)....
So, why did the equation blow me away? Because this very simple physical equation can guide us in our decisions, and it makes intelligent behaviour measurable and observable. It adds a new, real physical force to the world: the force of intelligence. From the equation we can deduce algorithms to act intelligently, as individuals, as societies and as mankind. And we can build intelligent machines using the equation. Yes, I know, you may ask: "How can the simple equation F = T ∇ Sτ do all of that?"
Intelligence is a Force that Maximises the Future Freedom of Action
Before we look at the equation in more detail, let me describe its essence in everyday terms. Like many physical laws or equations, the idea behind it is simple:
- Intelligence is a force that maximises the future freedom of action.
- It is a force that keeps options open.
- Intelligence doesn't like to be trapped.
The new Equation for Intelligence F = T ∇ Sτ
Note: skip this section, if you are not interested in understanding the mathematics of the equation!
This is the equation:
F = T ∇ Sτ
Where F is the force, a directed force (therefore it is bold), T is a system temperature, and Sτ is the entropy field of all states reachable within the time horizon τ (tau). Finally, ∇ is the nabla operator. This is the gradient operator that "points" in the direction of the state with the most freedom of action. If you are not a physicist this might sound like nonsense. Before I try to explain the equation in more detail, let's look at another physical equation of force.
The intelligence equation is very similar to the equation for a force derived from potential energy, F = −∇ Wpot, where Wpot is the potential energy at each point in space. The force F pulls in the direction of lower energy (note the sign: the potential force pulls toward lower energy, while the intelligence force pulls toward higher entropy). This is why gravitation pulls us toward the center of the earth. Or think of a landscape. At each point the force points downhill, in the direction a ball would roll if it started at that point. The strength of the force is determined by the steepness of the slope: the steeper the slope, the stronger the force. Just as the ball is pulled downhill by the gravitational force to reach the state with the lowest energy, an intelligent system is pulled by the force of intelligence toward the future with the fewest limitations. In physics we use the nabla operator ∇, the gradient, to turn a "landscape" into a directed force (a force field).
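For readers who like to see numbers, here is a tiny numerical sketch of the landscape analogy (my own toy example, not from the paper): the force is the negative gradient of the potential and points back toward the bottom of the "bowl".

```python
import numpy as np

def w_pot(p):
    """A toy potential-energy landscape: a bowl with its minimum at the origin."""
    x, y = p
    return x**2 + 2 * y**2

def force(p, h=1e-5):
    """F = -grad(W), estimated with central differences: it points 'downhill'."""
    grad = np.array([
        (w_pot(p + np.array([h, 0.0])) - w_pot(p - np.array([h, 0.0]))) / (2 * h),
        (w_pot(p + np.array([0.0, h])) - w_pot(p - np.array([0.0, h]))) / (2 * h),
    ])
    return -grad

print(force(np.array([1.0, 0.5])))  # roughly [-2, -2]: pulled back toward (0, 0)
```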
Back to our equation F = T ∇ Sτ. What it says is that intelligence is a directed force F that pulls in the direction of states with more freedom of action. T is a kind of temperature that defines the overall strength (available resources) the intelligent system has (heat can do work; think of a steam engine: the more heat, the more power). Sτ is the "freedom of action" of each state that can be reached by the intelligence within a time horizon τ (tau). The time horizon is how far into the future the intelligence can predict. Alex Wissner-Gross uses the notion of entropy S to express the freedom of action in the future. The force of intelligence points in that direction. As we have seen, in physics the direction of the force at each state is calculated by a gradient operation ∇ (think of the direction the ball is pulled). The nabla operator ∇ assigns a directional vector (the direction of the force of intelligence) to each state (in our case: all possible future states). The more freedom of action a state provides, the stronger the force pulling in that direction. So ∇Sτ points in the direction with the most freedom of action. The multiplication with T means the more power we have to act, the stronger the force can be.
Note: the optimal future state is the optimal state from the viewpoint of the intelligent system. It might not be the optimal state for other systems or for the entire system.
If you want to understand the equation in more detail read the original paper 'Causal Entropic Forces - by A. D. Wissner-Gross and C. E. Freer'.
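To get a feel for how this plays out, here is a minimal sketch in Python (my own toy illustration, not the authors' Entropica software, and all names and parameters are assumptions): a walker on a bounded track estimates, for each candidate move, the entropy of where random futures of length τ can end up, and then takes the move whose futures are most diverse.

```python
import random
from collections import Counter
from math import log

# Toy sketch: a walker on a bounded 1-D track. For each candidate move we
# sample random future paths of length TAU and estimate the entropy of where
# those paths end up. The heuristic picks the move whose futures are most
# diverse, which keeps the walker away from the walls, where options collapse.

TRACK_MIN, TRACK_MAX = 0, 20   # positions of the walls
TAU = 8                        # time horizon (steps of look-ahead)
SAMPLES = 500                  # Monte Carlo futures per candidate move

def step(x, dx):
    """Move one step, clamped at the walls."""
    return min(TRACK_MAX, max(TRACK_MIN, x + dx))

def future_entropy(x):
    """Shannon entropy (in nats) of the end-state distribution over random futures."""
    ends = Counter()
    for _ in range(SAMPLES):
        pos = x
        for _ in range(TAU):
            pos = step(pos, random.choice((-1, +1)))
        ends[pos] += 1
    return -sum(n / SAMPLES * log(n / SAMPLES) for n in ends.values())

def entropic_move(x):
    """Choose the move whose sampled futures have the highest entropy."""
    return max((-1, +1), key=lambda dx: future_entropy(step(x, dx)))

x = 1                          # start right next to a wall
for _ in range(10):
    x = step(x, entropic_move(x))
print(x)                       # the walker tends to drift toward the open middle
```

Near a wall, many sampled futures pile up on the boundary, so the end-state distribution is narrower and its entropy lower; the heuristic therefore drifts toward the middle of the track, the position that keeps the most options open.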
Understanding the Laplace operator conceptually
The Laplace operator: those of you who now understand it, how would you explain what it "does" conceptually? How do you wish you had been taught it?
Any good essays (combining both history and conceptual understanding) on the Laplace operator, and its subsequent variations (e.g. Laplace-Beltrami), that you would highly recommend?
The Laplacian: the Laplace-Beltrami operator is essentially the same thing in the more general Riemannian setting; all the nasty curvy terms will be higher order, so the same formula should hold.
Generally, a function is harmonic if and only if it satisfies the mean value property; the maximum principle and the view of harmonic functions as the "smoothest possible" interpolations over a region give further intuition.
If we assume Eclipse is an "intelligent system", the question for me is: is the sum of many intelligent subsystems (individuals and companies) driving Eclipse in a direction of survival, or is the sum of forces (each one intelligent for its own subsystem) not good for Eclipse as a system?
>Any system that creates a force into the direction of more freedom to act, is an intelligent system.
but once an action is taken, your choices are reduced.
To me this equation says inertia = infinite intelligence, since all your possibilities are open.
Isn't observable 'intelligence' about making optimal choices to obtain a 'better' future state?
but... maybe i'm missing the point.
True! But not acting is also a choice. But is it the best choice for the future? Sometimes yes, but sometimes no. If you are only floating in the direction you are pulled, you may miss an opportunity to invest a bit (by acting) to get more in the future.
Do you know the famous Marshmallow Experiment (http://en.wikipedia.org/wiki/Stanford_marshmallow_experiment)? The children who were able to resist an immediate attraction (eating a marshmallow) in order to receive a reward in the future (two marshmallows) were more successful in their adult lives. This is the "act" part.
For us humans it is a balance between enjoying life now and being prepared for the future. If you do not enjoy your life now, you might suffer and get sick. If you use all your resources immediately, you may not have the power to overcome a crisis, or you cannot take an opportunity.
The second is not true, and there is no field that corresponds to such a "force", in the same way that no field exists to "force" you to change the channel when a bad comedy starts on your TV.
But once the simulation has been replaced with a neural network, the "decision-making" algorithm is the same one: use the simulation to calculate the entropy at a given time horizon for each option you are considering, average all the options with their entropies as weights, and you get your "intelligent force".
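One way to read that description as code (a sketch only; `simulate_entropy` is a hypothetical stand-in for the commenter's simulation, not an existing function):

```python
def entropic_force(state, actions, tau, simulate_entropy):
    """Entropy-weighted average of candidate actions, per the comment above.

    `simulate_entropy(state, action, tau)` is a hypothetical callback that runs
    a simulation and returns the estimated entropy S_tau after taking `action`.
    """
    weights = [simulate_entropy(state, a, tau) for a in actions]
    total = sum(weights)
    if total == 0:
        return 0.0
    # Average the options with their entropies as weights: the "intelligent force".
    return sum(a * w for a, w in zip(actions, weights)) / total
```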
From cybernetics (which has been around since the 1940s):
VO = VD + U – (VB + VR)
V represents the variety of system states:
• VO is the outcome or goal state. Variety is reduced to obtain the desired outcome.
• VD is the variety of disturbance (states other than the goal state).
• VB is passive regulation.
• VR is active regulation (behavior). In active regulation, only variety (VR) can reduce variety (VO).
• U is uncertainty.
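A rough worked example with invented numbers, using the notation above: if the disturbance can take VD = 10 states, uncertainty adds U = 2, passive regulation absorbs VB = 3 and active regulation absorbs VR = 7, then VO = 10 + 2 – (3 + 7) = 2 residual states around the goal.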
We can reduce variety (states other than the goal state) in three ways, driving the system to its goal state:
1. Active regulation (VR) – think of a thermostat controlling room temperature
2. Passive regulation (VB) – think of the insulation in the room’s walls
3. Reducing uncertainty (U):
• Through knowledge via learning (model of the world)
• By placing the system in approximately the same initial state
• By improving awareness of the current state of the system
Relating cybernetics to the equation for intelligence
How is VO = VD + U – (VB + VR) related to F = T ∇Sτ?
∇Sτ points in the direction of the most flexible state.
• For an intelligent system, ∇Sτ represents the goal state
• From cybernetics, VO is the goal state
• Therefore, VO = ∇Sτ, since both represent the goal state
• Sτ is the entropy field of all the reachable states in the time horizon τ (tau), a freedom of action value for every possible state.
• VD is the variety of disturbance (states other than the goal state).
• Therefore, Sτ = VD, SτB = VB, SτR = VR
• Therefore, ∇Sτ = Sτ + U – (SτB + SτR)
• Factoring in the power to act, T:
F = T [Sτ + U – (SτB + SτR)]
Introducing U from cybernetics explicitly accounts for the intelligent system’s model of the world as well as awareness.
This formulation also makes explicit the value of passive regulation, which conserves T.
Does the above formulation hold water? If so, it seems to support your argument, especially your assertions about the system's model of the world.
If the best possible future starts with a decision that is also taken in a big number of "not so good" futures, it may be better for this algorithm to take a course with less bright futures, but a bigger number of them.
What you really use here is: take now the decision that opens more different futures to you (instead of taking the decision that could drive you to the brightest future).
It is the idea of "entropy" that makes one approach different from the other.
If you think with a 5-second horizon, giving blood is nonsense, but when you think with a 5-year horizon, it makes perfect sense.
The longer you can predict your futures, the smarter you are, or the more "sublime" or "moral" your decisions look.
Also, the better you get at predicting how good or bad a future will be for you, the more "moral" you get: helping others opens more future possibilities to you than anything else could do, but not all people can "predict" this, so it is very common to hear people say "I helped people in this or that way and I never imagined how good it would make me feel until I tried"... that is not being very good at calculating a future's "goodness".
If a system evaluates the future within a short time horizon, it may burn all its resources immediately because this will maximize freedom within that time horizon, but it may freeze the system in the long run.
In some way 'survival of the fittest' and 'maximizing future freedom of action' are equivalent. But the equation of intelligence gives a measure of what the 'fittest' is: the fittest systems are the ones that keep their freedom of action.
Note that the equation of intelligence applies to systems at all levels: from abiotic adaptive systems to societies.
http://en.wikipedia.org/wiki/Systems_analysis
I have a blog about it (entropicai.blogspot.com) with videos, a working exe and source code (for Delphi 7, Object Pascal, but maybe I will port it to Java some day to use it on some Android games with intelligent bad guys).
I LOVED reading your post, as it sounds quite like me talking about it to everyone around me! (btw, I am a mathematician but love physics; the weirder the better).
Most of your thoughts are shared by me, but I find it not appropriate to talk about "freedom of action"; it really uses "entropy at a future horizon", where microstates (in the present) are switched to macrostates but measured at a future time.
I also managed to "implant" goals into the intelligence, make some agents cooperate or compete, and some other goodies, just by manipulating the metric in the state space (entropy gained from state A to B is a metric, so I changed it to test and... oops, it worked nicely).
If you want to play around with the exe and the code, I will be delighted to hear from you, really, so feel free to... anything.
I would agree that this is way more precise, but "freedom of action" is easier to understand in real life. Or do you think that "entropy at a future horizon" is very different from "freedom of action"?
In human societies and in martial traditions you see this same principle of moving toward the most strategic or best tactical position. Within society this is usually measured in terms of positions of greater status, wealth or power, which generally translate into greater freedom of action. However, a narrow materialistic view on the individual level does not automatically produce greater collective freedom of action.
Thus, larger values that aren't utopian tend to work out better for everyone.
Another concept that is different and yet similar is the following formulation: Philosophical Assumptions define Theoretical Models determining Practical Applications.
So intelligence is not merely a force seeking greater freedom of action but also a set of reality constructs whereby one relates to the environment. These constructs can be more or less effective in terms of greater freedom of action.
Interesting stuff...
There are two ways to increase the force of intelligence:
1. Make better predictions of the future, which means detecting states of maximum freedom of action.
2. Increase the power to move in the desired direction, by adding more energy or by being clever in finding paths that require fewer resources.
Our world has many imperfections and much unpredictability, and blind evolution comes in here:
3. Making imperfect variations to cope with unpredictability.
I enjoyed your post.
You ask, "It would be really interesting to analyse how good different believe systems are in terms of maximising the future freedom of the system" If you have a classification that holds each belief system (within an economic context) that tracks costs, that’s easy to display. That's a big part of my project.
The difficulty with the equation comes from the unpredictability of the future. We live in a world where unpredictable events can happen at any time (heart attack, earthquake, impact of an asteroid...), and even the "predictable" is not so predictable. But if you make many decisions, and if your decisions are on average a bit better than the predictions of other systems, you may gain more freedom of action in the long run. So, "hire 5 people" or "hire 9 people" is just one of the many decisions that add up. Maybe it's not the quantity of people you hire that matters, but the quality.
When it comes to "belief systems", I think the ones that best observe "reality" and draw the best conclusions will win. In some cases, humans are driven by a utopia or by a belief that is not based on reality. On the other hand, some religions seem to be very successful, even if their beliefs are not based on "reality".
As an individual you might come to the conclusion that not following the rules of society (or religion) gives you an advantage. But if you look at the society as a whole as an intelligent system, individual freedom might undermine the future freedom of action of the entire system. Take birth rates: for an individual, a child might reduce the future freedom of action. So, if all intelligent individuals decide not to have children, it might be bad for the system. Likewise, if everybody believes having many children is good, then the system might starve from overpopulation.
"The tragedy of the commons" is a classical problem where the interests of individual members of the system undermine the interest of the entire system.
I think freedom in general requires the power/force to defend itself; otherwise its options may be reduced (by other powers) and in the end the freedom of choice is gone. Using force to defend freedom takes resources. Therefore an intelligent system minimizes the power it uses to defend its freedom, but it does not neglect it.
So I think it may well be related to Nietzsche's "Will to Power" ("Wille zur Macht").