I've got a cup that has a little glass wall dividing it in half. It's half full of salt water, and half full of fresh water. If the glass wall is suddenly removed, then the salt water mixes with the fresh water, until the salt ions are evenly distributed in the cup. This is *diffusion*: the tendency of particles to move from regions of high concentration to regions of low concentration.

A simple way of thinking about diffusion is to imagine the cup is composed of a large set of very small blocks, each just big enough to hold a salt ion. Each block may be filled or empty. (This is called a *lattice model*.) If there is no bias built into the particles, every possible arrangement of the system should be equally probable. An even distribution of particles is then the most probable configuration simply because it can be realized by the largest number of distinct arrangements.

To see this, imagine there are four salt particles in a very small cup, and we partition the cup into four little boxes in the left half, and four more in the right. How many ways are there to have all four particles on the left side, and none on the right? If we track only which side each particle is on (we don't care which particular box it occupies, or what order the particles are in), there are 2^4 = 16 possible arrangements, and exactly 1 of them has all four particles on the left side. On the other hand, there are 6 different ways to choose the 2 particles that sit on the left, with the other 2 on the right. If each individual arrangement is equally likely, the most probable state in which to find the system is with 2 particles on the left, and 2 on the right. To be precise, this would be six times more likely than having all the salt on the left! (Mathematically, these are just binomial coefficients, so if you're lazy like me and don't like doing a bunch of counting, you can get the number of arrangements by reading across the fifth row of Pascal's triangle: 1 4 6 4 1.)
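If you'd rather not trust Pascal's triangle, the counting above is easy to check by brute force. A minimal sketch in Python: enumerate all 16 side-assignments for four labeled particles and tally how many put each possible number on the left.

```python
# Enumerate every way to assign four labeled salt particles to the left
# or right half of the cup, tracking only which side each particle is on.
from itertools import product
from math import comb

counts = {k: 0 for k in range(5)}  # k = number of particles on the left
for assignment in product("LR", repeat=4):  # 2^4 = 16 equally likely arrangements
    counts[assignment.count("L")] += 1

print(counts)      # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1} -- Pascal's row
print(comb(4, 2))  # 6: the same answer from the binomial coefficient
```

The tally reproduces the row 1 4 6 4 1: one arrangement with all the salt on the left, six with an even split.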

The idea of maximizing the number of states is central to statistical mechanics. The number of states (or, more specifically, its natural logarithm) is a measure of the amount of disorder, or *entropy*, in a system. One of the most important conclusions of thermodynamics is that in an isolated system -- one with a fixed total amount of energy and a fixed number of particles (taking the classical view that these are distinct items) -- the entropy never decreases. (This has a number of interesting, and disquieting, implications for, among other things, the future of our universe, if it is an isolated system.) This counting approach is a *microscopic* model for systems at equilibrium, but in a real system -- e.g., a real-life cup of water -- of course we don't sit there and count the number of salt ions in the cup. In one gram of table salt, there's around 10^{22} pairs of sodium and chloride ions. This is a ridiculously huge number: it's 1 followed by 22 zeros, or *ten billion trillion* ion pairs in a single gram. If you consider the number of arrangements of ten billion trillion ions where all the ions are on one side of the cup (total: 1), versus the number of arrangements where they are approximately equally distributed (total: a stupidly big number), it's clear that in this simple lattice model, generally all the salt is not going to be on one side of the cup!
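To get a feel for how fast the multiplicity blows up, here's a small sketch comparing the lone all-on-one-side arrangement against the evenly split one as the particle number grows (the particle counts here are just illustrative):

```python
# How lopsided does the counting get as the number of particles grows?
# The even split has comb(n, n//2) arrangements; all-on-one-side has 1.
from math import comb, log10

for n in (4, 20, 100, 1000):
    even = comb(n, n // 2)  # number of arrangements with an even split
    print(f"N={n:5d}: even split is ~10^{log10(even):.0f} times more likely "
          f"than all-on-one-side")
```

Already at a thousand particles the ratio is a number with about 300 digits; at 10^{22} particles, "stupidly big" is an understatement.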

Instead of counting trillions of ions, what we measure are *macroscopic* quantities -- for example, the average concentration of ions on each side of the cup. If we take this measurement, and discover that there's actually a higher concentration on the right side than the left on average, then that tells us something about our system: it gives us a *constraint* that we can use to weight the entropy calculation we did before. As mentioned before, if our system contains trillions of trillions of particles, we're probably only going to be interested in average quantities, since the stupidly big number-of-states will make shifts away from the average very improbable: the probability distribution, in this case, will be approximately *Gaussian* (a bell curve), with a tiny standard deviation. But, what about our oversimplified tiny cup with only four particles of salt in it? In that case, there's a 1 in 8 chance of finding all the salt on one side of the cup! Given this non-negligible probability, for very small systems, the microscopic fluctuations can be important. That is to say, we're interested in how the system evolves with time -- we care about its *dynamics*.
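The shrinking of fluctuations with system size can be made concrete. A quick sketch, assuming each of N independent particles is equally likely to be on either side: the fraction on the left has mean 1/2 and standard deviation 1/(2√N), and the chance of an all-on-one-side state is 2/2^N.

```python
# Fluctuations shrink as the particle number grows: for N independent
# particles, the left-side fraction is 0.5 +/- 1/(2*sqrt(N)), and the
# probability of finding *all* of them on one side is 2 / 2^N.
from math import sqrt

for n in (4, 100, 10_000):
    rel_sd = 1 / (2 * sqrt(n))  # std dev of the fraction on the left
    p_one_side = 2 / 2**n       # probability of an all-on-one-side state
    print(f"N={n}: fraction on left = 0.5 +/- {rel_sd:.4f}, "
          f"P(all one side) = {p_one_side:.3g}")
```

For N = 4 the all-on-one-side probability is 1/8, as above; for N = 100 it's already below 10^-29.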

Let's go back to our tiny cup, and start with 2 particles on the left side, and 2 on the right. Assume there's no bias built into the system, and that every particle is independent of the others, so that each particle has some fixed probability -- say, a 50/50 chance at each time step -- of jumping to the other side. The sequence of jumps and stays that a particle follows over a period of time is called its *trajectory*. If there are no constraints on the system, every trajectory is equally likely, and if we tabulate the 2^4 = 16 possible one-step trajectories by how many particles jump from each side, 6 out of 16 trajectories result in 2 particles on the left, and 2 on the right (the same result as before). However, the tabulation also says that there is a 1 in 16 chance for 2 particles to jump from left to right, and 0 to jump from right to left, putting all the salt on one side of the cup! For a given starting distribution of particles, this quantifies how the system's fluctuations are likely to alter the overall state of the system with time, and also how the fluctuations themselves are likely to change over time. A system's trajectory multiplicity (or, as with entropy, its logarithm) is called its *caliber*.
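The 16 one-step trajectories are few enough to enumerate directly. A minimal sketch, assuming each particle independently jumps or stays with equal probability in a single time step:

```python
# Enumerate every one-step trajectory: each of four particles
# (two starting on each side) independently jumps or stays.
from itertools import product

start = ["L", "L", "R", "R"]
flip = {"L": "R", "R": "L"}
outcomes = {}  # ending left-count -> number of trajectories
for moves in product([False, True], repeat=4):  # 16 equally likely trajectories
    end = [flip[s] if jump else s for s, jump in zip(start, moves)]
    left = end.count("L")
    outcomes[left] = outcomes.get(left, 0) + 1

print({k: outcomes[k] for k in sorted(outcomes)})  # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
```

Sure enough: 6 of 16 trajectories keep the even split, while 1 of 16 sweeps all the salt to the right (left-count 0), and another single trajectory sweeps it all to the left.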

We think that the idea of maximum caliber will be a useful way to analyze systems far from equilibrium, very small systems, and highly correlated systems. All three descriptions fit many biological systems, and one system I'm particularly interested in applying maximum caliber to is long-term potentiation, a process thought to underlie how we form memories. My short-term project is a little farther afield: I'd like to apply caliber to correlated stock prices, and see if the predicted fluctuations in the stock market correspond to what's actually been observed. Because stock prices can be highly correlated, the independent-particle assumption breaks down, which is why random-walk statistics do not accurately describe the probability of large fluctuations in the market. Instead, market fluctuations approximately follow an inverse quartic power law, but as far as I'm aware, no one's provided a quantitative, microscopic explanation for this. It will be interesting to see if a power law distribution follows naturally from thinking about the caliber of the market.
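To see why the difference between random-walk statistics and a power law matters for large fluctuations, here's a rough sketch. It assumes (purely for illustration; the numbers are not fit to any market data) a return density falling off as x^-4, which integrates to a tail probability ~ s^-3, and matches the two distributions at 2 standard deviations:

```python
# A Gaussian's tails die off far faster than a power law's.
# gauss_tail(s) = P(|Z| > s) for a standard Gaussian; the power-law tail
# ~ s^-3 (from an x^-4 density) is normalized to agree with it at s0.
from math import erfc, sqrt

def gauss_tail(s):
    return erfc(s / sqrt(2))  # exact two-sided Gaussian tail probability

def power_tail(s, s0=2.0):
    return gauss_tail(s0) * (s0 / s) ** 3  # matched to the Gaussian at s0

for s in (2, 5, 10):
    print(f"{s} sigma: Gaussian {gauss_tail(s):.2e}, power law {power_tail(s):.2e}")
```

By 5 standard deviations the power-law tail is already thousands of times heavier than the Gaussian one, which is exactly the kind of "large fluctuation" a random-walk model would dismiss as essentially impossible.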