Chaos theory is an area of deterministic dynamics proposing that seemingly random events can result from ordinary deterministic equations because of the complexity of the systems involved. In IT (information technology), chaos theory has applications in many areas including networking, big data analytics, fuzzy logic, business intelligence (BI), marketing, game theory, systems thinking, predictive analytics and social networking.
In a scientific context, the word chaos has a slightly different meaning than it does in general usage as a state of confusion lacking any order. Chaos, with reference to chaos theory, refers to an apparent lack of order in a system that nevertheless obeys particular laws or rules; this understanding of chaos is synonymous with dynamical instability, a condition discovered by the physicist Henri Poincaré in the early 20th century that refers to an inherent lack of predictability in some physical systems.
The two main components of chaos theory are the idea that systems - no matter how complex they may be - rely upon an underlying order, and that very simple systems and very small events can cause very complex behaviors or events. This latter idea is known as sensitive dependence on initial conditions, a circumstance discovered by Edward Lorenz (who is generally credited as the first experimenter in the area of chaos) in the early 1960s.
Lorenz, a meteorologist, was running computerized equations to theoretically model and predict weather conditions. Having run a particular sequence, he decided to replicate it. Lorenz reentered a number from his printout, taken halfway through the sequence, and left the program to run. What he found on his return was that, contrary to his expectations, the results were radically different from his first outcomes. Lorenz had, in fact, entered not precisely the same number, .506127, but the rounded figure of .506. According to all scientific expectations at the time, the resulting sequence should have differed only very slightly from the original trial, because measurement to three decimal places was considered reasonably precise. Because the two figures were considered almost the same, the results should likewise have been similar.
Since repeated experimentation proved otherwise, Lorenz concluded that the slightest difference in initial conditions - beyond human ability to measure - made prediction of past or future outcomes impossible, an idea that violated the basic conventions of physics. As the famed physicist Richard Feynman pointed out, "Physicists like to think that all you have to do is say, these are the conditions, now what happens next?"
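Lorenz's rounding experiment can be reproduced in miniature. The sketch below uses the Lorenz system - his later, simplified convection model with the classic chaotic parameters sigma=10, rho=28, beta=8/3 - rather than his original weather model, and crude Euler integration; the specific starting values other than .506127 and .506 are illustrative assumptions:

```python
# Sketch of Lorenz's rounding experiment using the Lorenz system,
# not his original weather model. Two runs start from initial
# conditions that differ only past the third decimal place,
# mimicking .506127 versus the rounded .506.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one (crude) Euler step."""
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

def max_divergence(x0_a, x0_b, steps=3000):
    """Largest gap in x between two runs differing only in x0."""
    a, b = (x0_a, 1.0, 1.05), (x0_b, 1.0, 1.05)
    worst = 0.0
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
        worst = max(worst, abs(a[0] - b[0]))
    return worst

gap = max_divergence(0.506127, 0.506)
print(f"maximum divergence: {gap:.2f}")
# The two runs decorrelate completely: the gap grows to the size of
# the attractor itself, not to something proportional to the
# 0.000127 rounding error.
```

The rounding error is amplified exponentially at each step, so within a few thousand steps the two trajectories bear no resemblance to each other - exactly the behavior that surprised Lorenz.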
Newtonian laws of physics are completely deterministic: they assume that, at least theoretically, precise measurements are possible, and that a system measured precisely enough could be predicted almost perfectly, backward or forward in time; the more accurate the initial measurements, the more precise the resulting predictions.
Poincaré discovered that in some astronomical systems (generally consisting of three or more interacting bodies), even very tiny errors in initial measurements would yield enormous unpredictability, far out of proportion with what would be expected mathematically. Two or more nearly identical sets of initial condition measurements - which according to Newtonian physics should yield essentially identical results - in fact most often led to vastly different outcomes. Poincaré showed mathematically that, even if the initial measurements could be made a million times more precise, the uncertainty of prediction did not shrink along with the inaccuracy of measurement but remained huge. Unless initial conditions could be specified with absolute precision - an impossibility - prediction for complex (chaotic) systems performed scarcely better than if outcomes had been selected at random.
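This disproportion between measurement precision and prediction horizon can be demonstrated with any chaotic system. The sketch below uses the logistic map x → 4x(1-x), a standard textbook example (not Poincaré's astronomical equations), and the starting point and error sizes are illustrative assumptions:

```python
# Sketch of Poincare's observation using the logistic map, a simple
# chaotic system. We measure how long a prediction stays within 0.1
# of the "true" orbit when the initial measurement error shrinks by
# a factor of one million.

def horizon(x0, error, threshold=0.1, max_steps=1000):
    """Steps until a run started at x0 + error drifts past threshold."""
    true_x, approx_x = x0, x0 + error
    for step in range(max_steps):
        if abs(true_x - approx_x) > threshold:
            return step
        true_x = 4 * true_x * (1 - true_x)
        approx_x = 4 * approx_x * (1 - approx_x)
    return max_steps

coarse = horizon(0.3, 1e-3)  # measurement good to three decimal places
fine = horizon(0.3, 1e-9)    # a million times more precise

print(coarse, fine)
# Because the error roughly doubles each step, each additional digit
# of precision buys only a fixed number of extra steps: the
# millionfold improvement extends the horizon only modestly.
```

This is the mathematical heart of Poincaré's result: exponential error growth means prediction horizons scale with the logarithm of measurement precision, so even heroic improvements in measurement buy only a short extension of predictability.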
The butterfly effect, first described by Lorenz at the December 1972 meeting of the American Association for the Advancement of Science in Washington, D.C., vividly illustrates the essential idea of chaos theory. In a 1963 paper for the New York Academy of Sciences, Lorenz had quoted an unnamed meteorologist's assertion that, if chaos theory were true, a single flap of a single seagull's wings would be enough to change the course of all future weather systems on the earth.
By the time of the 1972 meeting, he had examined and refined that idea for his talk, "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?" The example of a system as small as a butterfly being responsible for a system as large and distant as a tornado in Texas illustrates the impossibility of making predictions for complex systems: although their behavior is determined by underlying conditions, precisely what those conditions are can never be articulated well enough to allow long-range predictions.
Although chaos is often thought to refer to randomness and lack of order, it is more accurate to think of it as an apparent randomness that results from complex systems and interactions among systems. According to James Gleick, author of Chaos: Making a New Science, chaos theory is "a revolution not of technology, like the laser revolution or the computer revolution, but a revolution of ideas. This revolution began with a set of ideas having to do with disorder in nature: from turbulence in fluids, to the erratic flows of epidemics, to the arrhythmic writhing of a human heart in the moments before death. It has continued with an even broader set of ideas that might be better classified under the rubric of complexity."