Sunday, June 29, 2014

Here’s a way to make a lot of money. Publish a speculative scientific article with equations nobody understands, put out a press release, throw in a few credentials (say, a degree from Harvard or M.I.T.), and get a few bloggers to spread the word. In the meantime, quietly start a company based on the idea—the grander, the better.
The latest example of the scientific hype machine is a paper that comes from Alexander Wissner-Gross, a research scientist and entrepreneur affiliated with Harvard and M.I.T. who, according to the bio on his Web site, has “authored 15 publications, been granted 19 issued, pending, and provisional patents, and founded, managed, and advised 5 technology companies, 1 of which has been acquired.” According to one report (by a well-respected science journalist), Wissner-Gross and his co-author, Cameron Freer, “have figured out a ‘law’ that enables inanimate objects to behave [in a way that] in effect allow[s] them to glimpse their own future. If they follow this law, they can show behavior reminiscent of some of the things humans do: for example, cooperating or using ‘tools’ to conduct a task.” A start-up called Entropica aims to capitalize on the discovery; the futurist Web site io9 and the BBC have both gushed about it.

The paper’s central notion begins with the claim that there is a physical principle, called “causal entropic forces,” that drives a physical system toward a state that maximizes its options for future change. For example, a particle inside a rectangular box will move to the center rather than to the side, because once it is at the center it has the option of moving in any direction. Moreover, argues the paper, physical systems governed by causal entropic forces exhibit intelligent behavior. Therefore, the principle of causal entropic forces will be a valuable, indeed revolutionary, tool in developing A.I. systems. The paper, and an accompanying video on Entropica’s Web site, then predict that this principle will be useful in building intelligent software for everything from social networking and military deployment to manufacturing and financial investment. “Our hypothesis,” Wissner-Gross told us in an e-mail, “is that causal entropic forces provide a useful—and remarkably simple—new biophysical model for explaining sophisticated intelligent behavior in human and nonhuman animals.”
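To make the claim concrete, here is a minimal sketch (in Python, and emphatically not the authors’ code) of the kind of rule being described: an agent scores each candidate move by how diverse its sampled futures are, then greedily picks the most “open” one. The grid size, horizon, and sample counts are arbitrary illustrative choices, and the end-cell entropy used here is only a crude stand-in for the paper’s causal path entropy.

```python
# A rough sketch of "maximize your future options": from each candidate
# position, sample random future trajectories and score the position by the
# entropy of where those trajectories end up. All parameters are arbitrary.
import random
from collections import Counter
from math import log

SIZE, HORIZON, SAMPLES = 11, 8, 200
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def clamp(v):
    return max(0, min(SIZE - 1, v))

def random_rollout(x, y):
    """Follow a random walk for HORIZON steps and return the end cell."""
    for _ in range(HORIZON):
        dx, dy = random.choice(MOVES)
        x, y = clamp(x + dx), clamp(y + dy)
    return x, y

def future_entropy(x, y):
    """Shannon entropy of the end-cell distribution over sampled futures."""
    counts = Counter(random_rollout(x, y) for _ in range(SAMPLES))
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())

# Start in a corner and repeatedly move to the neighboring cell whose sampled
# futures are most diverse; the walk tends to drift toward the middle of the
# box, where fewer futures are cut off by the walls.
x, y = 0, 0
for step in range(20):
    x, y = max(((clamp(x + dx), clamp(y + dy)) for dx, dy in MOVES),
               key=lambda p: future_entropy(*p))
    print(step, (x, y))
```

Run from a corner, a rule like this tends to wander toward the middle of the box, which is the particle-in-a-box behavior the paper describes, and, as the next paragraphs argue, not what a real particle does.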
Unfortunately, such grand, unified, one-size-fits-all solutions almost never work—because they radically underestimate the complexity of real-world problems. As one of us wrote here just a few weeks ago, an algorithm that is good at chess won’t help with parsing sentences, and one that parses sentences likely won’t be much help playing chess. Serious work in A.I. requires deep analysis of hard problems, precisely because language, stock trading, and chess each rely on different principles. In suggesting that causal entropy might solve such a vast array of problems, Wissner-Gross and Freer are essentially promising a television set that walks your dog.
The first problem is that Wissner-Gross’s physics is make-believe. Inanimate objects simply do not behave in the way that the theory of causal entropic forces asserts. The physics to which the paper refers, mostly obliquely, is speculative, and concerns ideas that have been used to explain processes on cosmological time scales, such as the formation of galaxies, or the emergence of the complex structures needed for life and intelligence from the primordial chemical soup. There is no reason to suppose that these are involved in the biophysics behind individual actions and mental activities by single organisms. There is no evidence yet that causal entropic processes play a role in the dynamics of individual neurons or muscular motions. Nor do they apply to ordinary objects. One of the authors’ computer simulations shows a moving cart balancing a pole above it, but of course that is not what a cart with a pole actually does. No actual physical law “enables” an unaided cart to do this.
In another computer simulation of causal entropic forces that the authors ran, a particle bouncing around in a two-dimensional box found its way to the center. In reality a particle of a gas in a box moves randomly, and over time is equally likely to be anywhere in the box. Certainly, the particles of a gas cannot all converge in the center of the box; and it is not clear what makes the particular particle in this simulation so intelligent or so susceptible to causal entropic forces.
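For readers who want to check this for themselves, here is a small sketch (again with arbitrary grid size and step count) of an unbiased random walk confined to a box: over a long run its position spreads out over the whole box rather than converging on the center, and it spends roughly the same fraction of time in the center cell as in a corner cell.

```python
# A quick numerical check of the point above: a particle that moves at random
# inside a box does not home in on the center.
import random
from collections import Counter

SIZE, STEPS = 11, 1_000_000
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]
x = y = SIZE // 2
visits = Counter()
for _ in range(STEPS):
    dx, dy = random.choice(MOVES)
    x = max(0, min(SIZE - 1, x + dx))
    y = max(0, min(SIZE - 1, y + dy))
    visits[(x, y)] += 1

print(f"fraction of time at center cell: {visits[(SIZE // 2, SIZE // 2)] / STEPS:.4f}")
print(f"fraction of time at corner cell: {visits[(0, 0)] / STEPS:.4f}")
print(f"uniform benchmark (1/cells):     {1 / SIZE**2:.4f}")
```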
Of course, even if the “causal entropic” laws weren’t really drawn from the laws of physics, they could still provide a useful framework for artificial intelligence or for modeling human behavior. But there is very little evidence that they do. A.I. is hard—really hard—fifty years in the making, and very much unsolved. Wissner-Gross’s work promises to single-handedly smite problems that have stymied researchers for decades.
The paper and video consist largely of descriptions of simulations of how toy systems that incorporate causal entropic forces behave, such as the cart balancing the pole and the particle finding its way to the center of the box. The authors observe that the behaviors of these simulations resemble certain kinds of intelligent behavior, and they infer that the intelligent behavior may well be due to causal entropic forces.
For example, an ape can use a stick to scoop out food. Wissner-Gross’s interpretation is that the ape wants to eat in order to maximize its future options; once it is well fed, there are more things it can do. As a parallel, Wissner-Gross set up a computer simulation in which there was a large disk, corresponding to the ape; a small disk caught in a tube, corresponding to the food; and another small disk near the large disk, corresponding to the tool. He further set up the simulation so that the ape disk would have the most options for actions when it was able to get into contact with the food disk. Lo and behold, the causal entropic forces impelled the ape disk to push the tool disk into the tube; the tool disk drove the food disk out of the tube; and then the ape disk could reach the food disk. So the ape disk uses a tool, just like the real ape!
But here, Wissner-Gross relies too much on handpicked cases that happen to work well with the maxim of maximizing your future options. Apes prefer grapes to cucumbers. When given a choice between the two, they grab the grapes; they don’t wait in perpetuity in order to hold their options open. Similarly, when you get married, you are deliberately restricting the options available to you; but that does not mean that it is irrational or unintelligent to get married. The only way to rescue the theory is to tie it into knots, making up a different story about what counts as an option for every distinct situation that is to be modelled. The paper, the video on the Entropica Web site, and the associated publicity have all made much of the fact that the simulations were not given any particular goal; the behavior simply “emerges” from the causal entropic forces. But the reality is that whether or not the right behavior emerges actually depends on how a given simulation is set up; there is nothing like a general theory of how to apply the theory to real-world data, nor a general means by which to make sure that the system’s goals actually accord with those of a programmer.
What Wissner-Gross has supplied is, at best, a set of mathematical tools, with no real results beyond a handful of toy problems. There is no reason to take it seriously as a contribution, let alone a revolution, in artificial intelligence unless and until there is evidence that it is genuinely competitive with the state of the art in A.I. applications. For now, there isn’t.
It would be grand, indeed, to unify all of physics, intelligence, and A.I. in a single set of equations, but such efforts invariably overlook the complexity of biology, the intricacy of intelligence, and the difficulty of the real-world problems that such systems aim to solve; in A.I., as in so many other fields, what looks too good to be true usually is.
Gary Marcus is a professor of psychology at N.Y.U. who often writes for newyorker.com on psychology and artificial intelligence; Ernest Davis, also at N.Y.U., is a professor of computer science. Their last piece together for The New Yorker was on What Nate Silver Got Wrong.
Illustration by Dan Clowes.
