
All the models we've dealt with until now are programmatic. If we wish to run a sequence of animations, we have to schedule calls appropriately, perhaps using a queue. The proper tool for this job is a timeline. At first glance, it's just a series of keyframes on tracks: a set of coordinates over time, one for each property you're animating, with some easing in between. But it's hard to offer direct controls to a director or animator, without creating uneven or jarring motion, at least in 2D.

We must stop treating space and time as separate things, and chart a course in both at the same time.

<div class="wide slideshow full"> <div class="iframe c"> <iframe src="/files/animating/mb-3-timeline.html" class="mathbox paged autosize" height="320"></iframe> </div> <div class="steps"> <div class="step"> <p>This is a classic keyframe timeline: a set of frames, with values defined along the way. It could be a vertical or horizontal motion, the opacity of a shape, the volume of a sound, etc. Any one thing we want to animate precisely over a long time.</p> </div> <div class="step"> <p>This is one second of a 60 fps animation and there's a keyframe every 10 frames. We can interpolate between the points with a <span class="blue">cosine ease</span>. But there's already a mistake.</p> </div> <div class="step"> <p>By expressing animations as frames, we can only have animations that are multiples of the frame length. In this case, that's 16.6 ms. If we want to space keyframes at 125ms, we can't, because that's 7.5 frames. The closest we can manage is alternating <span class="orangered">7</span> and <span class="green">8</span> frame sections.</p> </div> <div class="step"> <p>Just like with variable frame rates, we need to set keyframes in <span class="blue">absolute time</span>, not numbered frames. We use a global clock to produce arbitrary in-between frames. If we change our mind and wish to speed up or slow down part of our timeline, there's no snap-to-frame to get in our way. Note that Adobe Flash does <em>not</em> do this: you define your frame rate up front and are stuck with it.</p> </div> <div class="step"> <p>There is a catch though, easy to overlook: by the time we notice the first animation has ended, the second one has already started. We need to account for this <em>slack</em>, and make sure we start partway in, <span class="orangered">not from the beginning</span>.
Otherwise, this error accumulates with every keyframe, leading to noticeable lag.</p> </div> <div class="step"> <p>This is also important for triggered actions like <span class="purple">sound</span>. Suppose there is a performance glitch right before it plays, and we lose 7 frames. Rare, but not impossible. If we don't account for slack, we'd have 7.5 frames of permanent lag on the audio, 125ms. More than enough to disrupt lip sync. Instead we should skip ahead to make up for it. To avoid an audible pop when skipping audio, we can apply a tiny fade in: a <em>microramp</em>.</p> </div> <div class="step"> <p>With real-time dependencies like audio, it's better to be safe than sorry though. As the audio subsystem is generally independent, we can avoid this issue by pre-queuing all the sound with a delay. Here we begin playback 100ms earlier, but start each sound with an implied 100ms silence, minus the slack. Now, no audio will be lost in most situations. This too is animation: micromanaging time.</p> </div> <div class="step"> <p>Let's focus our attention back on <span class="blue">this easing curve</span>. By treating it as a sequence of individual animations, we've created a smooth path. But it's not a very ideal path: it stops at every keyframe and then starts moving again, creating a curve with stairs. This is more obvious if we plot the <span class="green">velocity</span>.</p> </div> <div class="step"> <p>We need to replace it with a <em>spline</em>, a <em>designable</em> curve. There's too many to name, but we'll stick to a common one: Catmull-Rom splines. It's entirely based on <span class="purple">one particular curve</span>. 
Looks suspiciously like an <em>impulse response</em>, doesn't it?</p> </div> <div class="step"> <div class="extra top edge" data-delay="4.5" data-align-y=".95" ><span style="font-size: 80%"> $$ catmullRom(t) = \frac{1}{2} \cdot \left\{ \begin{array}{rcll} \class{purple}{p_1(t)} & = & t^3 + 5 t^2 + 8 t + 4 & \mbox{if } -2 \leq t \lt -1 \\ \class{royal}{p_2(t)} & = & -3 t^3 - 5 t^2 + 2 & \mbox{if } -1 \leq t \lt 0 \\ \class{purple}{p_3(t)} & = & 3 t^3 - 5 t^2 + 2 & \mbox{if } 0 \leq t \lt 1 \\ \class{royal}{p_4(t)} & = & -t^3 + 5 t^2 - 8 t + 4 & \mbox{if } 1 \leq t \lt 2 \\ \end{array} \right. $$</span> </div> <p>But actually, it's not one curve, it's 4 separate <span class="purple">cubic</span> <span class="royal">curves</span> <span class="purple">glued</span> <span class="royal">together</span> into a symmetric pulse. They're designed so their <span class="green">velocities</span> meet up at the transition, thus creating a <span class="purple">single smooth path</span>. But if you look closely, you can see that the <span class="green">velocity</span> (scaled) has two minor kinks in it, one on each side.</p> </div> <div class="step"> <p>There are two other important features. The first is that the curve <span class="purple">goes through 0</span> at all the keyframes <em>except the central one</em>. There, its <span class="purple">value is 1</span>. The keyframes are called the <em>knots</em> of the spline.</p> </div> <div class="step"> <p>The other is that its <span class="green">slope</span> is 0 at all the knots <em>except the ones adjacent to the peak</em>. There, it's <span class="green">$ \frac{1}{2} $</span> and <span class="green">$ -\frac{1}{2} $</span> respectively. If we trace the slopes out to the center, we go <em>half as high</em> as the peak, to 0.5.</p> </div> <div class="step"> <p>That means if we <span class="purple">scale down</span> this curve as a whole, very few things we're interested in actually change. 
All the horizontal slopes remain horizontal. All the knots at 0 remain at 0. Only the <em>peak shrinks</em>, and the <em>slopes at the adjacent knots go down</em>.</p> </div> <div class="step"> <div class="extra top" data-align-y="1.2" data-delay="4.5"> $$ \class{blue}{p_i} \cdot catmullRom(t-i) $$ </div> <p>We can literally treat the curve as the <span class="purple">impulse response</span> of a filter, and the knots as a series of <span class="blue">impulses</span>. A filter outputs a copy of its impulse response for every impulse it encounters. As this is all theoretical, we don't care about filter lag.</p> </div> <div class="step"> <div class="extra top" data-align-y="1.2" data-delay="4"> $$ \class{blue}{spline(t)} = \sum\limits_{i=0}^n \class{blue}{p_i} \cdot catmullRom(t-i) $$ </div> <p>If we now add up all the curves, we get the <span class="blue">Catmull-Rom spline</span>. Despite the intricate interactions of the curves between the knots, the result is very predictable. The spline goes through every keyframe, because the <span class="blue">values at the knots</span> are all 0 except for the peak itself.</p> </div> <div class="step"> <p>What's more, when we move a single value up and down, only two other things change: the two slopes at the <span class="purple">adjacent knots</span>. The <span class="green">slope at the knot itself</span> is still constant. This means we can control the initial and final slope of the spline just by adding an extra knot before and after: it won't affect anything else.</p> </div> <div class="step"> <p>See, the slope at a knot is actually just the <span class="green">central difference</span> around that point. This is where the factor of $ \frac{1}{2} $ for the <span class="purple">adjacent slopes</span> came from earlier, and why their signs were opposite: it's a difference that spans two keyframes, so we divide by 2. 
This is the rule that determines how <span class="blue">Catmull-Rom splines</span> curve.</p> </div> <div class="step"> <p>There's just one problem: all of this only works if the keyframes are equally spaced. If we change the spacing, our <span class="purple">base curve</span> is no longer smooth: there is a kink at the adjacent keyframes. This might not look like much, but it would be noticeable.</p> </div> <div class="step"> <p>There's two ways to solve this. One is to try and come up with a <span class="purple">unique curve</span> for every knot. This curve has to be smooth and hit all the right values and slopes. This is the hard way, and can result in odd changes of curvature if done badly, like here.</p> </div> <div class="step"> <p>But actually, you already know the other solution. By distorting the <span class="blue">Catmull-Rom spline</span> to fit our keyframes, it's like we've rendered it with a <em>variable speed clock</em>. But one that doesn't change smoothly. This is why the curves have developed kinks out of nowhere. If we can smooth out the passage of time, then we'll stretch the spline smoothly between the keyframes.</p> </div> <div class="step"> <p>We can just create another Catmull-Rom spline to do so. Horizontally, we put <span class="cyan">equally spaced knots</span>. This dimension has no real meaning: it's just 'spline time' now, independent of real time.</p> </div> <div class="step"> <p>We move the knots vertically to the keyframe time and make a <span class="cyan">spline</span>. In this case, I tweaked the start and end to be a diagonal rather than a horizontal slope. This curve hits all the keyframes at the right time and transitions smoothly between them. It's a variable clock that goes from <em>constant spline time to variable real time</em>.</p> </div> <div class="step"> <p>To actually calculate the animation, we need to go the other way and invert the relationship: from <em>variable real time to constant spline time</em>. 
This can be done a few ways, but the easiest is to use a binary search, as the time curve always rises: it's like finding a value in an ordered list. This tells us how fast to <em>scrub through</em> the spline. </p> </div> <div class="step"> <p>With this, we can warp the <span class="blue">Catmull-Rom spline</span> to hit all the keyframes at the right times. We'll still need to manually edit the keyframes to get a perfectly ripple-free path, but now we can move them anywhere, anytime we like.</p> </div> <div class="step"> <p>What we just did was to chart a path through 1D space and 1D time, by combining two Catmull-Rom splines. Add <span class="blue">time travel</span>, and this is entirely equivalent to charting a random 2D spline through 2D space. To create such an animation, you create two parallel tracks, one for X and one for Y, with identical timings. By scrubbing through <em>spline time</em>, you move in both X and Y, and hence along the curve. However, doing so precisely turns out to be complicated.</p> </div> <div class="step"> <p>In the 1D case, the distance between two keyframes is trivial: going from 0 to 1 means you moved 1 unit. In the 2D case, that's no longer true: the distance travelled depends on both X and Y simultaneously. What's worse is, splines generate <span class="blue">uneven paths</span>. If we divide them equally in <em>spline time</em>, we get unequal steps in <em>real time</em>. The apple slows down and then shoots off.</p> </div> <div class="step"> <p>It might seem cool that the spline naturally has a tension in its motion, but it will only get in the way. If we move just a single X coordinate of a single knot, the entire path shifts, and the <span class="blue">distance between the steps</span> changes considerably. The easing of the Y coordinate needs to compensate for this. 
We can't maintain a controlled velocity this way: X and Y are dependent and have to be animated together.</p> </div> <div class="step"> <p>We can resolve this by doing for distance what we did for time: we have to make a map from <em>spline time</em> to <em>real distance</em>. We can step along the spline in small steps and measure the distance one line segment at a time. When we <span class="green">integrate</span>, adding up the pieces, we get a <span class="cyan">curve</span> that maps <em>spline time</em> onto <em>total distance along the curve</em>.</p> </div> <div class="step"> <p>Again, we can invert the relationship to get a map from <span class="cyan"><em>distance</em> to <em>spline time</em></span>.</p> </div> <div class="step"> <p>We can use it to divide the spline into segments of equal length and move an object along the path with a constant speed. This works for any spline, not just Catmull-Rom. We can always turn a curve into a neat set of <span class="blue">equally spaced points</span> of our choosing.</p> </div> <div class="step"> <p>The distance map gives us a <span class="blue">natural parametrization</span>: a way to move along the curve by the arc length itself. This effectively flattens out the curve into a straight line, and we can treat it like a 1D animation. We can apply straightforward easing again, because distances are once again preserved.</p> </div> <div class="step"> <p>To animate, we just define an easing curve for <span class="blue">distance over time</span>. If we want to move along the path at a constant speed, we line the keyframes up along a diagonal.</p> </div> <div class="step"> <p>However, the knots don't have any special meaning anymore. When we move, we pass through them at just the same speed as any other point. That means we can control velocity completely independent of the path itself, using all the tricks from earlier. 
We can also apply a direct easing curve along the path, for example <span class="blue">cubic easing</span>.</p> </div> <div class="step"> <p>To run the animation, we go the other way. The easing curve tells us the <span class="blue">desired distance along the path</span> at any moment in time. We have to use the <span class="cyan">inverse distance map</span> to convert this to <em>spline time</em> for the point in question.</p> </div> <div class="step"> <p>Then we can use the <em>spline time</em> to look up the point on the <span class="purple">Catmull-Rom spline</span>. The <span class="blue">easing curve</span> makes us scrub smoothly along the <span class="cyan">distance map</span>. This in turn makes us move smoothly on the <span class="purple">spline</span>—albeit with a bit of whiplash.</p> </div> <div class="step"> <p>While that might seem like a lot of work, the good news is, it works in 3D too. We can find a <span class="cyan">distance map based on 3D distance</span>, and now have three simultaneous Catmull-Rom splines for the <span class="purple">X, Y and Z</span> coordinates.</p> </div> <div class="step"> <p>In a way, path-based animation is <em>cheating</em>: it acts like there's an infinitely strong force keeping the object <span class="purple">on the track</span>, only we don't need physics to make it happen. If we <em>did</em> add other forces however, we'd get a miniature WipeOut-style racing game. This principle is applied in the demo at the top of this page: the velocity along the track is constant, but the camera and its target are being exponentially eased, creating lag and swings in corners, giving it a natural feel.</p> </div> </div> </div>
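The basis pulse and the spline sum above translate almost directly into code. Here's a minimal sketch in JavaScript (the function names are mine, not from any library), using the four glued cubics from the formula:

```javascript
// Uniform Catmull-Rom spline as an "impulse response": every knot value
// emits a scaled copy of one basis pulse, and the copies are summed.

// The symmetric basis pulse: four cubics glued together. It is 1 at
// t = 0, 0 at every other integer knot, and has slope ±1/2 at t = ∓1.
function catmullRomBasis(t) {
  const a = Math.abs(t);                                          // the pulse is symmetric
  if (a >= 2) return 0;                                           // outside the support
  if (a >= 1) return 0.5 * (-a * a * a + 5 * a * a - 8 * a + 4);  // outer lobe
  return 0.5 * (3 * a * a * a - 5 * a * a + 2);                   // central hump
}

// spline(t) = Σ pᵢ · catmullRom(t − i), with knots at integer spline times.
function splineAt(p, t) {
  let y = 0;
  for (let i = 0; i < p.length; i++) y += p[i] * catmullRomBasis(t - i);
  return y;
}
```

Because the basis is 1 at its own knot and 0 at all the others, `splineAt(p, i)` reproduces `p[i]` exactly, and everything in between is smooth.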

Timelines and splines are two sides of the same coin: using piecewise sections to create smoothness. The combination of both gives us path-based animation, pretty close to being the holy grail of controlled animation. We can fit this neatly into any timeline model—provided we don't lose track of all the tiny extra bits of time—with any easing mechanism we want. The track and the motion on it are completely decoupled.
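Keeping those tiny extra bits of time is a one-line habit in code. A minimal sketch (the queue shape here is my own invention, not a prescribed API): when the clock says a segment has already finished, carry the overshoot into the next one instead of restarting it at zero.

```javascript
// Play a chain of animation segments against a global clock, carrying
// slack: whatever time is left over after a segment ends is spent
// partway into the next segment, so errors never accumulate.
// segments: [{ duration, apply(progress01) }], times in seconds.
function playQueue(segments, now, startTime) {
  let t = now - startTime;                 // absolute time into the chain
  for (const seg of segments) {
    if (t < seg.duration) {
      seg.apply(t / seg.duration);         // start partway in: slack included
      return;
    }
    t -= seg.duration;                     // skip the segment, keep the remainder
  }
  const last = segments[segments.length - 1];
  if (last) last.apply(1);                 // past the end: clamp
}
```

At, say, 130 ms into two 125 ms segments, the second segment is applied at 4% progress rather than restarted from zero: the 5 ms of slack survives.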

Aside from Catmull-Rom, there are the non-rational splines and the popular Bezier curves, as well as other recursive methods. As most of these allow you to control the slope directly, you get direct velocity control on any path in a timeline.
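The recursive method is tiny in code. A sketch of de Casteljau's algorithm, which evaluates a Bezier curve of any degree (1D control points for brevity; the name `deCasteljau` is mine):

```javascript
// De Casteljau's algorithm: repeatedly lerp between adjacent control
// points at parameter t until a single point remains. That point lies
// on the Bezier curve defined by the original control points.
function deCasteljau(points, t) {
  let pts = points.slice();
  while (pts.length > 1) {
    const next = [];
    for (let i = 0; i < pts.length - 1; i++) {
      next.push(pts[i] + (pts[i + 1] - pts[i]) * t); // lerp each pair
    }
    pts = next;
  }
  return pts[0];
}
```

With control values `[0, 0, 1, 1]` this produces the classic cubic ease-in-out: flat at both ends, steepest in the middle.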

Path-based animations don't have to be restricted to positions either. You can animate RGB colors as XYZ triplets the same way. Or you could animate the parameters of a procedural generator, or a physics simulation. Or animate the volume levels of music in response to gameplay. Or move your robot. Timelines are excellent tools to manage change, but only if you can control the speed precisely at the same time.
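That precise speed control comes down to the two maps from the slideshow: a cumulative distance map, and a binary search to invert any monotone map, whether it's time or distance. A sketch with my own function names (a real implementation would precompute the distance table once instead of re-integrating per query, as done here for brevity):

```javascript
// Invert a monotonically increasing map f on [lo, hi] by binary search:
// given a target output, find the input that produces it. Works for both
// the time curve and the distance map, since both only ever rise.
function invertMonotone(f, target, lo, hi, iterations = 48) {
  for (let i = 0; i < iterations; i++) {
    const mid = (lo + hi) / 2;
    if (f(mid) < target) lo = mid; else hi = mid;
  }
  return (lo + hi) / 2;
}

// Integrate distance along a 2D path: step in small increments of spline
// time and accumulate straight-line segment lengths.
function pathDistance(path, t0, t1, steps = 256) {
  let d = 0, prev = path(t0);
  for (let i = 1; i <= steps; i++) {
    const p = path(t0 + ((t1 - t0) * i) / steps);
    d += Math.hypot(p[0] - prev[0], p[1] - prev[1]);
    prev = p;
  }
  return d;
}

// Constant-speed motion: for a fraction u of the total arc length,
// find the spline time at which that much distance has been covered.
function timeAtFraction(path, u, t0, t1) {
  const total = pathDistance(path, t0, t1);
  return invertMonotone((t) => pathDistance(path, t0, t), u * total, t0, t1);
}
```

Sampling `timeAtFraction` at equal fractions yields the equally spaced points along the curve, regardless of how unevenly the spline itself moves.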

Which leaves us only one thing: rotation.

How difficult can a few angles be? Very. In 2D, they don't cooperate with our linear models. Even just turning to face a particular direction requires care. In 3D, things get properly messed up. Rotations will turn the wrong way, wobble in place and generally not behave. If you're trying to animate a free-moving camera in 3D, fixing this is pretty important, unless you're making *Motion Sickness Tycoon* or *Cloverfield Returns*.
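The 2D fix, which the demo below works out in detail, fits in a few lines. A sketch (the name `circularDelta` is mine): count whole turns between the two angles, round off, and keep the fraction, which is never more than half a turn in either direction.

```javascript
// Shortest signed difference between two angles, in degrees: the number
// of turns, minus the nearest whole number of turns, scaled back up.
// The result always lies in (-180°, 180°].
function circularDelta(from, to) {
  const turns = (to - from) / 360;
  return 360 * (turns - Math.round(turns));
}
```

Animating from 315° to 90° then gives +135°, the short way around, and an angle that has wrapped 2.3 turns unwinds by only −108° instead of spinning back through −828°.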

Defeating this particular Goliath will require a careful approach. We'll launch our squadrons of X, Y and Z-wings, use the Force, and attack the weak spot for maximum damage. It better not be a trap.

<div class="wide slideshow full"> <div class="iframe c"> <iframe src="/files/animating/mb-4-quaternion.html" class="mathbox paged autosize" height="320"></iframe> </div> <div class="steps"> <div class="step"> <div class="extra left" data-align-x="1.25" data-delay="4"> <big>$$ \phi = 0° $$</big> </div> <p>What's wrong with angles? Let's ask our trusty friend, the apple. Sorry, I got hungry.</p> </div> <div class="step"> <div class="extra left" data-align-x="1.4" data-delay="4"> <big>$$ \begin{array}{rcl} \class{blue}{\phi} & = & 2.3 \cdot τ \\ & = & 828° \end{array} $$</big> </div> <p>Well, they <em>wrap around</em>. Suppose we have an object that's been rotated a couple of times, for example as part of an interactive display. It completed 2.3 turns ($ τ = 2π $) around the circle. For now we'll use degrees, but eventually we'll switch to radians for the heavier stuff.</p> </div> <div class="step"> <div class="extra left" data-align-x="1.4"> <big>$$ \class{purple}{\phi_T} = 0° $$</big> </div> <p>If we animate the apple to a target angle $ \class{purple}{\phi_T} $ at 0°, it will spin all the way back. Our animation system doesn't know that it could stop earlier at 720° or 360°.</p> </div> <div class="step"> <div class="extra left" data-align-x="1.4"> <big>$$ \class{purple}{\phi_T} = 315° \\ $$</big> </div> <p>To fix this, we can't simply reduce all angles to the interval 0…360.
If we animate from 0 to <span class="purple">315°</span>, we still go the long way around rather than just 45° in the other direction.</p> </div> <div class="step"> <div class="extra left" data-align-x="1.4"> <big>$$ \begin{array}{rcl} \class{blue}{\phi} &=& 315° \\ \class{purple}{\phi_T} &=& 90° \\ \end{array} $$</big> </div> <div class="extra right edge" data-delay="2"> $$ \begin{array}{rcl} δ &=& \frac{\class{purple}{\phi_T} - \class{blue}{\phi}}{360°} \\[8pt] \class{green}{Δ\phi} &=& 360° \cdot (δ - \lfloor\,δ\,\rceil) \\ \end{array} $$ </div> <p>We need to reduce the difference in angle $ \class{purple}{\phi_T} - \class{blue}{\phi} $ to less than 180° in either direction. This is a <em>circular difference</em>, easiest when counting whole turns $ δ $, so we can <em>round off</em> to $\lfloor\,δ\,\rceil$. The difference, e.g. $ 3.3 - 3 = 0.3 $ or $ 1.6 - 2 = -0.4 $, is never more than half a turn. If we now set the target to <span class="purple">90°</span>, it tells us to animate by $ \class{green}{+135°} $, that is, the short way around.</p> </div> <div class="step"> <div class="extra left" data-align-x="1.4"> <big>$$ \class{purple}{\phi_T} = 90° \\ \class{blue}{\phi} = 450° \\ $$</big> </div> <p>Our angles are now still continuous, going <span class="blue">beyond 360°</span> in either direction, but we never rotate more than 180° at a time unless we actually want to. We can apply this correction whenever we interpolate between two angles, and always end up at an equivalent angle.</p> </div> <div class="step"> <p>Here I use double exponential easing to chase a <span class="blue">rapidly changing angle</span>. The <span class="purple">once filtered angle</span> jerks whenever it gets lapped, as it suddenly needs to change direction. The <span class="green">twice filtered angle</span> moves smoothly however.</p> </div> <div class="step"> <p>What about 3D? If we're restricting ourselves to a single axis of rotation, nothing really changes.
We still control the angle the same way.</p> </div> <div class="step"> <p>But orientation in 3D is a complicated thing. The easiest way to express it is with a <em>3×3 matrix</em>: this is a set of 3 vectors in 3D. They define a frame of reference in space, a <em>basis</em>: <span class="blue">right/left</span>, <span class="green">up/down</span> and <span class="red">forward/back</span>. When we rotate around the vertical axis <span class="green">$ \vec y $</span>, we rotate <span class="blue">$ \vec x $</span> and <span class="red">$ \vec z $</span> together.</p> </div> <div class="step"> <p>For arbitrary orientations, <span class="blue">$ \vec x $</span>, <span class="green">$ \vec y $</span> and <span class="red">$ \vec z $</span> can turn in any direction, but always maintain a perfect 90° angle between themselves.</p> </div> <div class="step"> <div class="extra right" data-align-x="1.15"> <big><big>$$ \begin{bmatrix} \class{blue}{a} & \class{green}{d} & \class{orangered}{g} \\ \class{blue}{b} & \class{green}{e} & \class{orangered}{h} \\ \class{blue}{c} & \class{green}{f} & \class{orangered}{i} \end{bmatrix} $$</big></big> </div> <p>Each vector is a set of $ (x, y, z) $ coordinates. That means we can write down the matrix as a set of 3 triples of coordinates, one column <span class="blue">for</span> <span class="green">each</span> <span class="red">vector</span>. At first it would seem we need <em>9 numbers</em> to describe a 3D rotation. We can apply this <em>rotation matrix</em> to transform any point $ (x, y, z) $ by adding up proportional amounts of <span class="blue">$ \vec x $</span>, <span class="green">$ \vec y $</span> and <span class="red">$ \vec z $</span>. 
This is <em>linear</em> algebra.</p> </div> <div class="step"> <div class="extra right" data-align-x="1.15"> <big><big>$$ \begin{bmatrix} \class{blue strike}{a} & \class{green strike}{d} & \class{orangered}{g} \\ \class{blue strike}{b} & \class{green strike}{e} & \class{orangered}{h} \\ \class{blue strike}{c} & \class{green strike}{f} & \class{orangered}{i} \end{bmatrix} $$</big></big> </div> <p>But there's tons of redundancy here. Because the 3 vectors are perpendicular, <span class="red">$ \vec z $</span> can only be in one of two places. The difference between the two is called a <em>left handed</em> or <em>right handed</em> coordinate system: for <span class="blue">thumb</span>, <span class="green">index</span> and <span class="red">middle</span> finger, with your hand shaped like a gun and the middle finger sticking out.</p> </div> <div class="step"> <div class="extra right" data-align-x="1.15" data-hold="1"> <big><big>$$ \begin{bmatrix} \class{blue}{a} & \class{green}{d} & \class{red gone}{g} \\ \class{blue}{b} & \class{green}{e} & \class{red gone}{h} \\ \class{blue}{c} & \class{green}{f} & \class{red gone}{i} \end{bmatrix} \\[32pt] \class{blue}{\vec x} × \class{green}{\vec y} = \class{orangered}{\vec z} $$</big></big> </div> <p>So long as we agree on a common style of coordinate system, for example <em>right-handed</em>, we don't need to track <span class="red">$ \vec z $</span>. We can recover it from <span class="blue">$ \vec x $</span> and <span class="green">$ \vec y $</span> using something called the vector <em>cross product</em>. The vector that comes out will always be perpendicular to the two we pass in, decided by a left- or right-hand rule. This is by the way how you can aim a camera in 3D: all you need is a <span class="blue">target</span>, and an <span class="green">up</span> vector.</p> </div> <div class="step"> <p> We're down to 6 numbers. But there's more. A rotation preserves length, so the basis must always stay the same size. 
All the vectors must have length 1—be <em>normalized</em>—and hence move on the surface of a sphere. </p> </div> <div class="step"> <div class="extra right" data-align-x="1"> <big><big>$$ (\class{purple}{\phi}, \class{slate}{\theta}) $$</big></big> </div> <p> Instead of 3 coordinates, we can remember <span class="blue">$ \vec x $</span> as two angles: <span class="purple">longitude $ \phi $</span> and <span class="slate">latitude $ \theta $</span>. First we rotate around the Y axis, then around the rotated Z axis. Did we uniquely determine <span class="green">$ \vec y $</span> as well? </p> </div> <div class="step"> <div class="extra right" data-align-x="1"> <big><big>$$ (\class{purple}{\phi}, \class{slate}{\theta}, \class{cyan}{\gamma}) $$</big></big> </div> <p> No, there is a <span class="cyan">third degree of freedom</span> we haven't been using so far. In order to account for all the places where <span class="green">$ \vec y $</span> can be, we need to allow rotation around <span class="blue">$ \vec x $</span>, by another angle <span class="cyan">$ \gamma $</span>. Now we can describe any orientation in 3D using just <em>three numbers</em>, the so called <em>Euler angles</em>. </p> </div> <div class="step"> <p> This is a <em>YZX gyroscope</em>, after the order of rotations used. We can build one in real life by using concentric rings connected by axles. Make one large enough to put a chair in the middle, and you've got an amusement ride—or something to train pilots with. When we rotate the object inside, we rotate the rings, decomposing the rotation into 3 <em>separate perpendicular ones</em>. </p> </div> <div class="step"> <p> If we animate the individual angles smoothly, like here, we seem to get a smooth orientation. What's the problem? Well, we need to study the gyroscope a bit more. </p> </div> <div class="step"> <p> Let's go back to neutral, setting all angles to 0. 
You can see the <span class="green">Y</span><span class="red">Z</span><span class="blue">X</span> nature of the gyroscope, if you follow the axles from the outside in. </p> </div> <div class="step"> <p> We rotate the <span class="purple">first ring</span> by 90° and look at the axles again. Now they go <span class="green">Y</span><span class="blue">X</span><span class="red">Z</span>. We've swapped the last two. </p> </div> <div class="step"> <p> If we rotate the <span class="slate">second ring</span> by 90°, the axles change again. They've moved to <span class="green">Y</span><span class="blue">X</span><span class="green">Y</span>. This means changing the order or nature of the axles doesn't change the gyroscope, it just rotates all or part of it. That is, unless you make the very useless <span class="green">YYY</span> gyroscope. All <em>functional</em> gyroscopes are identical. Whatever we discover for one applies to all. </p> </div> <div class="step"> <p> This configuration is special however. The axles for the <span class="purple">first</span> and <span class="cyan">third rings</span> are aligned. This is called <em>gimbal lock</em>, though no ring actually locks. If we apply an equal-but-opposite rotation to both, the apple doesn't move. From any of these configurations, we can only rotate <em>two ways</em>, not <em>three</em>. It shows Euler angles do not divide rotations equally. </p> </div> <div class="step"> <p> If we now rotate the <span class="slate">inner ring</span> by 90°, all rings have been changed 90° from their initial position. Same for the apple: its final orientation happens to be rotated -90° around the <span class="red">Z</span> axis. </p> </div> <div class="step"> <p> Which means if we rotate the <em>entire gyroscope</em> by 90° around <span class="red">Z</span>, the apple returns to its original orientation. This is what we'd <em>like</em> to see if we simultaneously rotated the three rings of the gyroscope back to zero. 
</p> </div> <div class="step"> <p> That's not the case however. We try to hold the apple in place, by rotating back the gyroscope as we rotate back all three rings at the same time. The rotations don't cancel out cleanly and the apple wobbles. We'll need to create an <em>angle map</em>, similar to the <em>distance map</em> for splines before. Only now we need to equalize three numbers at the same time. </p> </div> <div class="step"> <p> Another telling sign is when we rotate all rings by 180°: the start and end orientation is the same. Yet the apple performs a complicated pirouette in between. Just like with circular easing, we'll need a way to identify <em>equivalent orientations</em> and rotate to the nearest one. </p> </div> <div class="step"> <p> To see why this is happening, we can rotate the apple around <span class="royal">a diagonal axis</span>. You can do this with a real gyroscope just by turning the object in the middle. The three rings—and hence the Euler angles—undergo a complicated dance. The two <span class="purple">outer</span> <span class="slate">rings</span> wobble back and forth rather than completing a full turn. Charting a straight course through <em>rotation space</em> is not obvious. </p> </div> <div class="step"> <p> In summary: trying to decompose rotations is messy and leads to gimbal lock. We're going to build a different model altogether, using what we just learnt. </p> </div> <div class="step"> <p> First, we make an arbitrary rotation matrix by doing a random <span class="blue">X</span> rotation followed by a (local) <span class="green">Y</span> rotation and a (local) <span class="red">Z</span> rotation. This is like using an <span class="blue">X</span><span class="green">Y</span><span class="red">Z</span> gyroscope. 
</p> </div> <div class="step"> <p> We can apply the same rotations again, acting like a nested <span class="blue">X</span><span class="green">Y</span><span class="red">Z</span><span class="blue">X</span><span class="green">Y</span><span class="red">Z</span> gyroscope. Because the gyroscope is made of two equal parts in series, we've rotated twice as far. </p> </div> <div class="step"> <p> Three points uniquely define a circle. So we can trace an arc for each of <span class="blue">the</span> <span class="green">basis</span> <span class="red">vectors</span>. These arcs are not part of the same circle, but they do lie parallel to each other. They turn around a common axis of rotation. </p> </div> <div class="step"> <div class="extra left" data-align-x="1.4"> <big>$$ \class{cyan}{\vec a} = \class{blue}{\vec x_1} - \class{blue}{\vec x_0} \\ \class{cyan}{\vec b} = \class{blue}{\vec x_2} - \class{blue}{\vec x_1} \\ \class{purple}{\vec c} = \class{cyan}{\vec a} × \class{cyan}{\vec b} \\ \class{slate}{\phi} = \arcsin \frac{|\class{purple}{\vec c}|}{|\class{cyan}{\vec a}| \cdot |\class{cyan}{\vec b}|} $$</big> </div> <p> We can find this <span class="purple">common axis</span> from any of the arcs. We take the <em>cross product</em> of the <span class="cyan">forward differences</span>. If we divide by the lengths of the differences, the <span class="purple">cross product</span>'s length tells us about the <span class="slate">angle of rotation</span>. We apply an arcsine to get an angle in <em>radians</em>. This is the <em>axis-angle representation</em> of rotations. Note that the axis is <em>oriented</em> to distinguish clockwise from counter-clockwise, here using the right hand rule. </p> </div> <div class="step"> <p> We can do this for any rotation matrix, for any set of Euler angles. It tells us we can rotate from neutral to any orientation by doing <em>a single rotation around a specific axis</em>. 
Now we have three better numbers to describe orientations: $ \class{purple}{(x, y, z)} $. They don't privilege any particular rotation axis, as both their direction and length can change equally in all 3 dimensions. We can pre-apply the arcsine: we make the vector's length directly equal rotation angle, <em>linearizing</em> it. </p> </div> <div class="step"> <p> We can also identify equivalent angles: if we rotate more than 180° one way, that's equivalent to rotating less than 180° the other way. The axis can <span class="purple">flip around</span> when its length reaches <em>$ π $ radians</em> (180°) without any disruption. We can restrict axis-angle to a <em>ball of radius $ π $</em>. </p> </div> <div class="step"> <p> If we <span class="slate">interpolate linearly</span> to a different $ \class{purple}{(x, y, z)} $, we get a smooth animation, but there's some wobble. It also goes the long way through the sphere. There's a much shorter way. </p> </div> <div class="step"> <p> We can flip $ \class{purple}{(x, y, z)} $ and then <span class="slate">interpolate back</span>, to get a more direct rotation. The wobble remains though: there's a subtle change in direction at the start and end. Hence, axis-angle cannot be used directly to rotate between any two orientations in a single smooth motion. 
</p> </div> <div class="step"> <div class="extra left" data-delay="2" data-align-x="1.2"> <big>$$ \begin{bmatrix} \class{blue}{a_1} & \class{green}{d_1} & \class{orangered}{g_1} \\ \class{blue}{b_1} & \class{green}{e_1} & \class{orangered}{h_1} \\ \class{blue}{c_1} & \class{green}{f_1} & \class{orangered}{i_1} \end{bmatrix} $$</big> </div> <div class="extra right" data-delay="2" data-align-x="1.2"> <big>$$ \begin{bmatrix} \class{slate}{a_2} & \class{cyan}{d_2} & \class{purple}{g_2} \\ \class{slate}{b_2} & \class{cyan}{e_2} & \class{purple}{h_2} \\ \class{slate}{c_2} & \class{cyan}{f_2} & \class{purple}{i_2} \end{bmatrix} $$</big> </div> <p> If we have two random rotation matrices $ [\class{blue}{\vec x_1} \,\,\, \class{green}{\vec y_1} \,\,\, \class{orangered}{\vec z_1}] $ and $ [\class{slate}{\vec x_2} \,\,\, \class{cyan}{\vec y_2} \,\,\, \class{purple}{\vec z_2}] $, how can we find the axis-angle rotation that turns one directly onto the other? </p> </div> <div class="step"> <div class="extra left" data-delay="2" data-align-x="1.2"> <big>$$ \begin{bmatrix} \class{blue}{a_1} & \class{blue}{b_1} & \class{blue}{c_1} \\ \class{green}{d_1}& \class{green}{e_1} &\class{green}{f_1} \\ \class{orangered}{g_1} & \class{orangered}{h_1} & \class{orangered}{i_1} \end{bmatrix} $$</big> </div> <p> We have to <em>invert</em> the first matrix to turn the other way. We could convert it to axis-angle and then reverse the angle. But it turns out that's the same as swapping rows and columns. The latter is obviously a lot less work, but it only works because <span class="blue">the</span> <span class="green">three</span> <span class="red">vectors</span> are perpendicular and have length 1. We end up with a matrix that rotates the same amount around the same axis, but in the other direction. For other kinds of matrices, inversion is trickier. 
</p> </div> <div class="step"> <div class="extra left edge" data-delay="2"> $$ \begin{bmatrix} \class{blue}{a_1} & \class{blue}{b_1} & \class{blue}{c_1} \\ \class{green}{d_1}& \class{green}{e_1} &\class{green}{f_1} \\ \class{orangered}{g_1} & \class{orangered}{h_1} & \class{orangered}{i_1} \end{bmatrix} \cdot \begin{bmatrix} \class{blue}{a_1} & \class{green}{d_1} & \class{orangered}{g_1} \\ \class{blue}{b_1} & \class{green}{e_1} & \class{orangered}{h_1} \\ \class{blue}{c_1} & \class{green}{f_1} & \class{orangered}{i_1} \end{bmatrix} $$ </div> <div class="extra right edge" data-delay="2"> $$ \begin{bmatrix} \class{blue}{a_1} & \class{blue}{b_1} & \class{blue}{c_1} \\ \class{green}{d_1}& \class{green}{e_1} &\class{green}{f_1} \\ \class{orangered}{g_1} & \class{orangered}{h_1} & \class{orangered}{i_1} \end{bmatrix} \cdot \begin{bmatrix} \class{slate}{a_2} & \class{cyan}{d_2} & \class{purple}{g_2} \\ \class{slate}{b_2} & \class{cyan}{e_2} & \class{purple}{h_2} \\ \class{slate}{c_2} & \class{cyan}{f_2} & \class{purple}{i_2} \end{bmatrix} $$ </div> <p> To apply the inverted rotation, we do a <em>matrix-matrix multiplication</em>, which is a fancy way of saying we use it to rotate the other matrix's basis vectors $ [\class{slate}{\vec x_2} \,\,\, \class{cyan}{\vec y_2} \,\,\, \class{purple}{\vec z_2}] $ individually. When applied to the first basis $ [\class{blue}{\vec x_1} \,\,\, \class{green}{\vec y_1} \,\,\, \class{orangered}{\vec z_1}] $, it rotates back to neutral as expected, aligned with the XYZ axes. </p> </div> <div class="step"> <p> We can now convert the relative rotation matrix into <span class="royal">axis-angle</span> again. This is the rotation straight from A to B, without any wobble or variable speed. This method is quite involved, and hence is still just a stepping stone towards rotational bliss. 
</p> </div> <div class="step"> <p> We go back to our <em>axis-angle sphere</em> and apply this rotation, while measuring the <span class="cyan">total axis-angle</span> every step along the way. We can see the cause of the earlier wobble: when moving straight through <em>rotation space</em>, we need to follow a <span class="cyan">curved arc</span> rather than a straight line. As <span class="purple">both rotations</span> are the same length, this arc follows the surface of the sphere. </p> </div> <div class="step"> <p> To get a better feel for how this works, let's move <span class="purple">the other end</span> around. We change it to various rotations of $ \frac{π}{2} $ radians (90°). These are all the rotations on a sphere of radius $ \frac{π}{2} $. Both the <span class="cyan">arc</span> and <span class="royal">axis of rotation</span> change in mysterious ways. The arc snaps back and forth, crossing through the edge of the sphere if that's shorter. </p> </div> <div class="step"> <p> What this really means is that <em>angle space</em> itself is <em>curved</em>. Think of the <span class="blue">surface of the earth</span>: if we keep going long enough in any particular direction, we always get back to where we started. As a consequence, you can't flatten an orange peel without tearing it, and you can't make a flat map of the Earth without distorting the shapes and area unequally. Yet we can view such a <span class="blue">curved 2D space</span> easily in 3D: it's just the <em>surface of a sphere</em>. </p> </div> <div class="step"> <p> The same applies here, except not just on the surface, but also <em>inside it</em>. Each <span class="cyan">curved arc</span> is actually straight as far as rotation is concerned, and each <span class="slate">straight interpolation</span> is actually curved. The inside of this ball is <span class="blue">curved 3D space</span>. 
</p> </div> <div class="step"> <p> If we want to see curved 3D space without distorting it, we need to view it in <em>four dimensions</em>. This ball is the <em>hypersurface of a 4D hypersphere</em>. So 3D rotation is four dimensional. WTF? </p> </div> <div class="step"> <p> Math is boring. Let's blow up the Death Star. </p> </div> <div class="step"> <p> <img src="/files/animating/portrait1.jpg" width="140" height="140" class="r ml1 natural" /> The Emperor has made a critical error and the time for our attack has come. The data brought to us by the Bothan spies pinpoints the exact location of the Emperor's new battle station. We also know that the weapon systems of this Death Star are not yet operational. Many Bothans died to bring us this information. Admiral Ackbar, please. </p> </div> <div class="step"> <p> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> Although the weapon systems on this Death Star are not yet operational, the Death Star does have a strong defense mechanism. It is protected by an energy shield, which is generated from the nearby forest Moon of Endor. Once the shield is down, our cruisers will create a perimeter, while the fighters fly into the superstructure and attempt to knock out the main reactor. </p> </div> <div class="step"> <p> <img src="/files/animating/portrait3.jpg" width="140" height="140" class="r ml1 natural" /> Sir, I'm getting a report from Endor! A pack of rabid teddy bears has attacked the generator, tearing the equipment to shreds. The shield is failing… </p> </div> <div class="step"> <p> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> The Death Star is completely vulnerable! Report to your ships, we launch immediately. 
We'll relay your orders on the way.<br /><br /> Lieutenant Hamilton, show me the interior of the superstructure.<br /> <em>(That's your cue.)</em> </p> </div> <div class="step"> <p> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> <em>*Mic screech*</em><br /> Red Wing, these are your orders. Of the <em>6 access points</em> to the interior, you will fly your X-wings through the <span class="blue">east</span> portal, closest to the superlaser. </p> </div> <div class="step"> <p> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> <span> There are large passageways leading directly to the central chamber. As these shafts are heavily guarded by fighters, a direct assault is impossible. We will need to avoid patrols by navigating the dense tunnel network that makes up the interior. </span> </p> </div> <div class="step"> <p> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> The Death Star's inner core is fortified, and all access is restricted. However, one of our operatives has informed us of <span class="orange">a large, unsecured ventilation shaft</span>, still under construction. This is our best chance to get into the core and destroy it. You must reach this target at all costs.<br /> <em>Tip: Click and drag to see things from a different angle.</em> </p> </div> <div class="step"> <div class="quattext"> <p class="bouncer"> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> <span style="display: block; height: 82px;">There are <em>large tunnels circling</em> just underneath the surface. 
You will fly your fighters into: <em>(Choose one)</em></span> </p> </div> <div class="quatcontrols"> </div> </div> <div class="step"> <div class="quattext"> <p class="bouncer"> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> <span style="display: block; height: 82px;">We will also send a detachment of Y-wings from the <span class="green">north pole</span>. These heavy bombers will <span class="gold">rendezvous</span> with the X-wings, taking the long way around, away from the defensive perimeter.</span> </p> </div> <div class="quatcontrols"> </div> </div> <div class="step"> <p> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> <span style="display: block; height: 82px;">Lieutenant, I have another task for you. Now that the Death Star's shields have conveniently failed, we will launch a probe ahead of our arrival, gathering detailed sensor data of the entire structure.</span> </p> </div> <div class="step"> <p> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> <span style="display: block; height: 82px;">It will approach directly from the <span class="red">front side</span>. It must pass through each of the <span class="orange">large tunnels</span> circling the Death Star to ensure full sensor coverage.</span> </p> </div> <div class="step"> <p> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> <span style="display: block; height: 82px;">The probe's energy signal is shielded, but it will not escape detection for long. 
To minimize our chances of detection, we should complete the survey of the entire Death Star without any overlap.</span> </p> </div> <div class="step"> <div class="quattext"> <p class="bouncer"> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> <span style="display: block; height: 82px;">Survey all areas of the Death Star, <em>without entering any tunnel twice</em>.</span> </p> </div> <div class="quatcontrols"> </div> </div> <div class="step"> <p> <img src="/files/animating/portrait3.jpg" width="140" height="140" class="r ml1 natural" /> <span style="display: block; max-height: 56px;">The probe is approaching the Death Star…</span> </p> </div> <div class="step"> <p> <img src="/files/animating/portrait3.jpg" width="140" height="140" class="r ml1 natural" /> <span style="display: block; max-height: 56px;">Scan of the interior progressing. Plotting data now.</span><br /> </p> </div> <div class="step"> <p> Hamilton's head hurts. Who would design such a crazy, tangled thing? Yet as he studies the structure, he notices a remarkable symmetry. <span class="blue">Grouped</span> <span class="green">by</span> <span class="red">color</span>, the tunnels form a swirling vortex around each <em>central axis</em>. Each vortex is surrounded by a great circle. He labels the three groups <span class="blue">$ i $</span>, <span class="green">$ j $</span> and <span class="red">$ k $</span>, as is the convention in this era. </p> </div> <div class="step"> <p> Yet mysteriously, tunnels always meet at a 90° angle, everywhere: on the central axes, on the circles, even anywhere in between. The colors also maintain their relative orientation at each intersection, including the polarity (<span class="blue">positive</span> or <span class="slate">negative</span>). Amazed, Hamilton starts scribbling down notes. "Cubic grid, twisted through itself? $ i → j → k $?". 
</p> </div> <div class="step"> <p> In fact, he's so mesmerized by the display, he's completely lost track of what's going on. As the hustle and bustle of the starship bridge slowly creeps back into focus, he looks up and–… </p> </div> <div class="step"> <p> <img src="/files/animating/portrait2.jpg" width="140" height="140" class="r ml1 natural" /> <span style="display: block; max-height: 56px;"><big><big>IT'S A TRAP!</big></big></span><br /> </p> </div> <div class="step"> <p> Not to despair. Some lightning gets thrown around, the Emperor is killed, a man finds redemption in death, and the Death Star is destroyed. </p> </div> <div class="step"> <p>That night, after many hours of celebration, the young Lieutenant falls asleep contentedly, and starts dreaming of that maze again. Maybe it was just a flash of inspiration, maybe it was the Force—or maybe the interesting neurochemical effects of fermented Endorian moonberry juice—but a long time ago, in a galaxy far, far away, William Rowan Hamilton figured out <em>quaternions</em>. </p> </div> <div class="step"> <p> More precisely, it was in 1843 in Dublin, Ireland. He was so struck by it, he immediately carved it into the nearest bridge—true story. It shouldn't surprise you that you've been doing quaternion calculations all along: those edges weren't color-coded just to look pretty. They consistently denoted the multiplication by a <span class="blue">+X</span>/<span class="slate">-X</span>, <span class="green">+Y</span>/<span class="cyan">-Y</span> or <span class="red">+Z</span>/<span class="purple">-Z</span> quaternion, representing a particular rotation around that axis. </p> </div> <div class="step"> <p> A key feature is how the colors wrap around the great circles. They always maintain the same relative orientation at every intersection, but the entire arrangement rotates from one place to the next. 
For example <span class="green">+Y</span>: it goes <em>up</em> at the core, but circles around the equator horizontally. You also saw what the inside looked like, omitted here for sanity. </p> </div> <div class="step"> <p> But we're missing something: a 6th quaternion at every 'pole'. This space continues outward, we've just been ignoring that part of it. </p> </div> <div class="step"> <p> This suggests there is a <span class="gold">second set of 'poles'</span>, at twice the radius. We can travel to and from them by multiplying with a quaternion. In fact that's completely true, but with one catch: all the orange points are actually <em>one and the same point</em>. Huh? </p> </div> <div class="step"> <p> Remember, we're looking at <em>curved space</em>, a hypersphere. To make sense of it, we need to first look at the 2D case. </p> </div> <div class="step"> <p> If the disc represents a curved <em>plane</em> that was projected down to 2D, then in its undistorted form, it's actually a sphere. All the points on the disc's perimeter are actually the same: here they're the north pole, and the disc's center is the south pole. </p> </div> <div class="step"> <p> In curved <em>space</em>, it works similarly. We can't visualize this, because this is happening in every direction all at the same time. We experienced the result of it while navigating the Death Star. What we didn't see was that the entire <span class="orange">sphere of radius 2π</span>—in axis-angle terms—is all just one and the same point. We never bothered to go beyond radius π before: rotations up to 180° in either direction. </p> </div> <div class="step"> <p> The important thing is to realize that the center of our diagram is not the center of the hypersphere; rather, it's just another <em>pole</em>.
In order to fit a hypersphere into 3D correctly, we'd somehow have to shrink the entire <span class="orange">sphere of radius 2π</span> to a point, to create a new pole, but without passing through the <span class="royal">sphere of radius π</span>. This is impossible, you need an extra dimension to make it work. </p> </div> <div class="step"> <p> But why are 3D rotations and quaternions connected? Why does <em>axis angle</em> map so cleanly to <em>half</em> of a hypersphere in quaternion space? And what does a quaternion actually look like? Well. What other kind of mathematical thing likes to turn? When you multiply it by another one of its kind? Where the rotation angle depends on where both inputs are? </p> </div> <div class="step"> <div class="extra bottom" data-delay="2"> <big>$$ \class{blue}{|z| = 1} $$</big> </div> <p> Complex numbers! Yay! If you're not familiar with them however, not to worry. We won't be needing all the complex numbers: we'll only use those that have length 1. In other words, all <span class="blue">points on a circle of radius 1</span>. Much simpler. </p> </div> <div class="step"> <div class="extra bottom" data-delay="1.5"> <big>$$ z = \class{royal}{\frac{\sqrt{3}}{2}} + \class{blue}{\frac{1}{2} \cdot i} $$</big> </div> <p> Complex numbers are 2D vectors that lead a double life. Ordinarily, they are written as the sum of two parts. Their <span class="royal">horizontal component</span> is a real number, a multiple of $ \class{royal}{1} $. Their <span class="blue">vertical component</span>, is a so-called imaginary number, a multiple of $ \class{blue}{i} $, which is a square root of -1. Which supposedly does not exist. Lies. </p> </div> <div class="step"> <div class="extra bottom" data-delay="1.5"> <big>$$ \class{orange}{z = 1∠30°} $$</big> </div> <p> It is often better to see them as a length and an angle. $ \class{royal}{1} $ becomes $ \class{royal}{1∠0°} $. The number $ \class{blue}{i} $ becomes $ \class{blue}{1∠90°} $. 
And $ -1 $ becomes $ 1∠180° $ or $ 1∠-180° $. </p> </div> <div class="step"> <div class="extra bottom" data-delay="2"> <big>$$ \begin{array}{rl} \class{orange}{z} & = & 1∠30° \cdot 1∠90° \\ & = & 1∠120° \\ \end{array} $$</big> </div> <p> When we multiply two complex numbers, their lengths multiply, and their angles add up. As the lengths are always 1 in our case, we can ignore them. Here, we multiply $ \class{orange}{1∠30°} $ by $ \class{blue}{1∠90°} $ to turn it 90° counter-clockwise. By the same rule, $ \class{blue}{1∠90°} \cdot \class{blue}{1∠90°} = 1∠180° $, better known as $ \class{blue}{i}^2 = -1 $. Complex numbers like to turn, and this gives them interesting properties, explored elsewhere on this site. </p> </div> <div class="step"> <p> Representing 2D rotation with complex numbers is trivial. We can directly map the rotation angle to the complex number's angle, and we can combine rotations by adding up the angles, positive or negative. The angles 0°, 90°, 180°, 270°, 360° become 1, $ \class{blue}{i} $, -1, $ -\class{blue}{i} $, 1. Of course, this adds nothing useful, at least in 2D. </p> </div> <div class="step"> <p> We can expand the model to 3D though, where we have three perpendicular ways of turning. First we'll try to add a second degree of rotation. We add a new imaginary component $ \class{green}{j} $, representing <span class="green">Y</span> rotation, while $ \class{blue}{i} $ is <span class="blue">X</span> rotation. Any position in this 3D space is now a quaternion, but we're still limiting them to only length 1, only interested in rotation. We'll be using the surface of what is, for now, a sphere. </p> </div> <div class="step"> <p> But wait, this isn't right. According to this diagram, if we rotate 180° around either the <span class="blue">X</span> or <span class="green">Y</span> axis, we end up in the same place—and hence the same orientation. Clearly that's not the case. 
Yet we based our quaternions on complex numbers, so both $ \class{blue}{i}^2 = -1 $ and $ \class{green}{j}^2 = -1 $. </p> </div> <div class="step"> <p> We can satisfy this condition in a different way though. If we rotate an object by <em>360°</em> around any axis, we always end up back where we started. So we can make this rule work if we agree that a 360° rotation equals a 180° quaternion. </p> </div> <div class="step"> <p> That means each rotation is represented by a quaternion of <em>half its angle</em>. A rotation by 180° becomes a quaternion of 90°, that is $ \class{blue}{i} $ or $ \class{green}{j} $, and each rotation axis takes us to a unique place. As we still treat $ \class{royal}{1} $ as 0°, the quaternion $ \class{blue}{1∠180°} = \class{green}{1∠180°} = -1 $ now represents a rotation of 360° = 0° around any axis. So $ \class{royal}{1} $ and $ -1 $ are considered equivalent, as far as representing rotation goes. </p> </div> <div class="step"> <p> Furthermore, $ \class{blue}{i} $ and $ \class{slate}{-i} $ are equivalent too, and so are $ \class{green}{j} $ and $ \class{cyan}{-j} $. Each represents rotating either +180° or -180° around the corresponding axis, which is the same thing. In fact, any half of this sphere is now equivalent to the other half, when you reflect it around the central point. This is why we were missing half of the hypersphere earlier: the 'outer half' is a mirror image of the 'interior'. </p> </div> <div class="step"> <p> So what about in-between axes? Well, we could try rotating around $ \class{orange}{(1,1,0)} $ and $ \class{gold}{(1,-1,0)} $, which are the axes that lie ±45° rotated between <span class="blue">X</span> and <span class="green">Y</span>. We'd end up tracing circles right between them: this is the only possibility where <span class="orange">both</span> <span class="gold">rotations</span> are perpendicular, yet maintain an equal distance to both the X and Y situation. 
</p> </div> <div class="step"> <p> Unfortunately we're missing something important. We've only applied rotations from neutral, from $ 1 $. If we apply a 180° <span class="blue">X</span> and <span class="green">Y</span> rotation <em>in series</em>, where do we end up? And what about <span class="green">Y</span> followed by <span class="blue">X</span>? The diagram might suggest we'd end up at $ \class{green}{j} $ and $ \class{blue}{i} $, respectively. </p> </div> <div class="step"> <p> But this wouldn't make sense: if $ \class{blue}{i} \cdot \class{green}{j} = \class{green}{j} $, and $ \class{green}{j} \cdot \class{blue}{i} = \class{blue}{i} $, then both $ \class{blue}{i} $ and $ \class{green}{j} $ have to be equal to $ 1 $. There'd be no rotation at all. And if we say that $ \class{blue}{i} \cdot \class{green}{j} = \class{green}{j} \cdot \class{blue}{i} = -1 $, then the quaternions $ \class{blue}{i} $ and $ \class{green}{j} $ have the exact same effect. We'd only have <em>one</em> imaginary dimension, not two. Even in math, a difference that makes no difference is no difference. </p> </div> <div class="step"> <p> Whether we want to or not, we have to add a third imaginary component, $ \class{orangered}{k} $, to make this click together. So $ \class{orangered}{k}^2 = - 1 $, but it's different from both $ \class{blue}{i} $ and $ \class{green}{j} $. As we've used up our 3 dimensions, we need to project down this new 4th, putting it at an angle between the others. Again, a $ \class{orangered}{k} $ quaternion represents rotation around the <span class="red">Z</span> axis, with the angle divided by two.
But if we'd started with <span class="orangered">Z</span>/<span class="blue">X</span> or <span class="green">Y</span>/<span class="red">Z</span>, we'd see the exact same thing. </p> </div> <div class="step"> <p> Hence we can rotate and combine these rules to get $ \class{blue}{i} \cdot \class{green}{j} \cdot \class{orangered}{k} = -1 $. This is the $ i^2 = -1 $ of quaternions, the magic rule that links together 3 separate imaginary dimensions and a real one, creating a maze of twisty passages all alike. When you cycle the axes, it still works: $ \class{green}{j} \cdot \class{orangered}{k} \cdot \class{blue}{i} = -1 $ and $ \class{orangered}{k} \cdot \class{blue}{i} \cdot \class{green}{j} = -1 $, demonstrating that quaternions link together three imaginary axes into a cyclic whole. </p> </div> <div class="step"> <div class="extra bottom"> <big>$$ |\class{blue}{z}| = 1 \\ z = \class{royal}{\cos(\theta)} + \class{blue}{i \cdot \sin(\theta)} $$</big> </div> <p> So how do we actually use quaternions for rotation? It's quite easy, because they are literally complex numbers whose imaginary component has sprouted two extra dimensions. Compare with an ordinary <span class="blue">complex number</span> on the unit circle. Its length (1) is divided non-linearly over the horizontal and vertical component using the <em>cosine and sine</em>: this is trigonometry 101. </p> </div> <div class="step"> <div class="extra bottom edge"> <big>$$ |q| = 1 \\ q = \class{royal}{\cos(\frac{\theta}{2})} + (x \cdot \class{blue}{i} + y \cdot \class{green}{j} + z \cdot \class{orangered}{k}) \cdot \sin(\frac{\theta}{2}) $$</big> </div> <p> For a quaternion on the unit hypersphere, we only make two minor changes. We replace the single $ i $ with a vector $ (x\class{blue}{i}, y\class{green}{j}, z\class{red}{k}) $ where $(x,y,z)$ is the normalized axis of rotation. The cosine and sine stay, though we divide the rotation angle by two. 
We can visualize the 3 imaginary dimensions directly without projection, after squishing the real dimension to nothing. As the length of the imaginary vector shrinks, the <span class="royal">real component</span> grows to compensate, maintaining length 1 for the entire 4D quaternion. </p> </div> <div class="step"> <div class="extra top edge"> <big>$$ q_1 \cdot q_2 = (w_1 + x_1 \cdot \class{blue}{i} + y_1 \cdot \class{green}{j} + z_1 \cdot \class{orangered}{k}) \cdot (w_2 + x_2 \cdot \class{blue}{i} + y_2 \cdot \class{green}{j} + z_2 \cdot \class{orangered}{k}) \\[10pt] $$</big> </div> <div class="extra bottom edge"> <big>$$ \class{blue}{i}^2 = -1 \,\,\,\,\,\,\, \class{green}{j}^2 = -1 \,\,\,\,\,\,\, \class{orangered}{k}^2 = -1 \\ \class{blue}{i} \cdot \class{green}{j} = \class{orangered}{k} \,\,\,\,\,\,\, \class{green}{j} \cdot \class{orangered}{k} = \class{blue}{i} \,\,\,\,\,\,\, \class{orangered}{k} \cdot \class{blue}{i} = \class{green}{j} \\ \class{green}{j} \cdot \class{blue}{i} = \class{purple}{-k} \,\,\,\,\,\,\, \class{orangered}{k} \cdot \class{green}{j} = \class{slate}{-i} \,\,\,\,\,\,\, \class{blue}{i} \cdot \class{orangered}{k} = \class{cyan}{-j} \\ \class{blue}{i} \cdot \class{green}{j} \cdot \class{orangered}{k} = -1 $$</big> </div> <p> We can apply the rules of quaternion arithmetic to multiply two quaternions. This is equivalent to performing the rotations they represent in series. Just like complex numbers, two length 1 quaternions make another length 1 quaternion. Of course, all the other quaternions have their uses too, but they're not as common. In a graphics context, you can pretty much forget they exist.
</p>
</div>
<div class="step">
<p>
There's only one question left: how to smoothly move between two quaternions, that is, between two arbitrary orientations. With axis-angle, it was a very complicated procedure. With quaternions, it's super easy, because <em>unit quaternions</em> are shaped like a hypersphere. The 'angle map' is the hypersphere itself.
</p>
</div>
<div class="step">
<p>
As it turns out though, a <em>hyperspherical interpolation</em> in 4D is exactly the same as a spherical one in 3D. So we really only need to understand the 3D case. We have a linear interpolation between <span class="orangered">two</span> <span class="green">points</span> on a sphere, and want to replace it with a spherical arc.
</p>
</div>
<div class="step">
<p>
The line and the <span class="cyan">arc</span> share the same <span class="slate">plane</span>: the one that contains <span class="green">both</span> <span class="orangered">points</span> and the center of the sphere. Any such plane cuts the sphere into two equal halves, along an equator-sized <em>great circle</em>. Hence the arc is just an inflated version of the line, with a circular bulge applied in the plane, following the sphere's radius along the way. But we also have to traverse the arc at constant speed: otherwise we'd end up creating an uneven spline-like curve again.
</p>
</div>
<div class="step">
<div class="extra top edge">
<big>$$ \class{orange}{\theta} = \arccos(\class{green}{x_1} \cdot \class{red}{x_2} + \class{green}{y_1} \cdot \class{red}{y_2} + \class{green}{z_1} \cdot \class{red}{z_2}) $$</big>
</div>
<div class="extra bottom edge">
<big>$$ \class{cyan}{slerp}(\class{green}{\vec v_1}, \class{red}{\vec v_2}, f) = \frac{\sin((1-f) \cdot \class{orange}{\theta})}{\sin \class{orange}{\theta}} \cdot \class{green}{\vec v_1} + \frac{\sin(f \cdot \class{orange}{\theta})}{\sin \class{orange}{\theta}} \cdot \class{red}{\vec v_2} $$</big>
</div>
<p>
Luckily, we can apply a little trigonometry again. We can use the 3D (4D) vector dot product to find the <span class="orange">angle</span> between two unit vectors (quaternions), after applying an arccosine. Then we weight the two vectors (quaternions) appropriately so they sum to length 1 and move linearly with the arc length. This is the <em>slerp</em>, the spherical linear interpolation. Working it out yourself can be tricky, but the result is elegant and independent of the number of dimensions.
</p>
</div>
<div class="step">
<p>
With all that in place, we can track any orientation with just 4 numbers, and change them linearly and smoothly. Quaternions are like a better axis-angle representation, which simply does the right thing all the time. Of course, you could just look up the formulas and cargo-cult your way through this problem. But it's more fun when you know what's actually going on in there. Even if it's in 4D.
</p>
</div>

</div>

</div>
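Before moving on, it's worth noting that the axis-angle extraction from the arcs (forward differences, cross product, arcsine) is short enough to write out directly. Here's a sketch in plain JavaScript; the function name and the array representation of vectors are just illustrative choices, and the three points are assumed to be distinct and non-collinear:

```javascript
// Recover the oriented axis and angle of a rotation from three
// consecutive points along one of its arcs, via the cross product
// of the forward differences (right-hand rule).
function axisAngleFromArc(x0, x1, x2) {
  const sub = (p, q) => [p[0] - q[0], p[1] - q[1], p[2] - q[2]];
  const cross = (p, q) => [
    p[1] * q[2] - p[2] * q[1],
    p[2] * q[0] - p[0] * q[2],
    p[0] * q[1] - p[1] * q[0],
  ];
  const len = (p) => Math.hypot(p[0], p[1], p[2]);

  const a = sub(x1, x0);      // first forward difference
  const b = sub(x2, x1);      // second forward difference
  const c = cross(a, b);      // oriented axis of rotation
  const angle = Math.asin(len(c) / (len(a) * len(b)));
  const axis = c.map((v) => v / len(c)); // normalize to length 1
  return { axis, angle };
}
```

Feeding it three points spaced 30° apart on a circle around Z returns the Z axis and an angle of π/6, as expected.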

So that's quaternions, the magical rotation vectors. Every serious 3D engine supports them, and if you've played any sort of 3D game within the last 15 years, you were most likely controlling a quaternion camera. Whether it's head-tracking on an Oculus Rift, or the attitude control of a spacecraft (a real one), these problems become much simpler when we give up our silly three dimensional notions and accept 3D rotation as the four dimensional curly beast that it is.
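The half-angle construction and the multiplication table from the slides translate into very little code. This is a minimal sketch, storing a quaternion as a plain `[w, x, y, z]` array; real engines wrap this in dedicated classes:

```javascript
// A rotation of theta around a unit axis (x, y, z) becomes the 4D
// point cos(theta/2) + (xi + yj + zk) * sin(theta/2).
function quatFromAxisAngle([x, y, z], theta) {
  const s = Math.sin(theta / 2);
  return [Math.cos(theta / 2), x * s, y * s, z * s]; // [w, x, y, z]
}

// Hamilton product: performs the rotations in series. Note the order
// matters, just like with gyroscopes.
function quatMultiply([w1, x1, y1, z1], [w2, x2, y2, z2]) {
  return [
    w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
    w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
    w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
  ];
}
```

Multiplying $i$ by $j$ gives $k$, multiplying them the other way gives $-k$, and a full 360° rotation around any axis lands on $-1$, exactly as in the diagrams.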

Ultimately though, quaternions can be treated as just vectors with a special difference and interpolation operator. We can apply all our usual linear filtering tricks, and can create sophisticated motions on the hypersphere. Combine that with smooth path-based animation with controllable velocities, and you have everything you need to build carefully tracked shots from any angle.
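That interpolation operator is the slerp derived earlier. A sketch, not tied to any particular engine's API, with one practical wrinkle added: since $q$ and $-q$ represent the same rotation, we flip one input when needed so the interpolation takes the short way around:

```javascript
// Spherical linear interpolation between two unit quaternions
// [w, x, y, z], for a fraction f in [0, 1].
function slerp(q1, q2, f) {
  let dot = q1[0] * q2[0] + q1[1] * q2[1] + q1[2] * q2[2] + q1[3] * q2[3];
  if (dot < 0) { q2 = q2.map((v) => -v); dot = -dot; } // short way around
  const theta = Math.acos(Math.min(dot, 1)); // angle between the two
  if (theta < 1e-6) return q1.slice();       // nearly identical: avoid 0/0
  const s = Math.sin(theta);
  const w1 = Math.sin((1 - f) * theta) / s;
  const w2 = Math.sin(f * theta) / s;
  return q1.map((v, i) => w1 * v + w2 * q2[i]);
}
```

Halfway between the identity and a 90° rotation about Z, it lands exactly on the 45° rotation, with unit length preserved throughout.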

It's useful to see where we actually are with animation tools, especially on the web. Unfortunately, it doesn't look that great. For most, animation means calling an `.animate()` method or defining a transition in CSS with a generic easing curve, fire-and-forget style. Keyframe animation is restricted to clunky CSS Animations, where we only get a single cubic bezier curve to control motion. We can't bounce, can't use real friction, can't set velocities or apply forces. We can't blend animations or use feedback. By now you know how ridiculously limiting this is.

In an ideal world, we'd have a perfect animation framework with all the goodies, which runs natively and handles all the math for us while still giving direct control. Until then, consider inserting a little bit of custom animation magic from time to time. Writing a simple animation loop is easy, and offers you fine-grained control. Upgrades can be added later when the need presents itself. Your audience might not notice directly, but you can be sure they will remark on how pleasant it is to use, when everything seems alive.
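Such a loop really is only a handful of lines. A minimal sketch in plain JavaScript, built on the eased lerp and driven by `requestAnimationFrame` in a browser:

```javascript
// Cosine ease and eased lerp: the basic building blocks.
function ease(f) { return 0.5 - 0.5 * Math.cos(Math.PI * f); }
function elerp(a, b, f) { return a + (b - a) * ease(f); }

// A minimal loop driving one property from `from` to `to` over `duration` ms.
// Uses absolute clock time, so a dropped frame never slows the animation down.
function animate(from, to, duration, apply) {
  const start = performance.now();
  function frame(now) {
    // Clamp progress to [0, 1] so the last frame lands exactly on `to`.
    const f = Math.min((now - start) / duration, 1);
    apply(elerp(from, to, f));
    if (f < 1) requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}

// Usage: slide an element 300 pixels to the right over 400 ms.
// animate(0, 300, 400, x => { el.style.transform = `translateX(${x}px)`; });
```

Swapping in a different easing curve, a velocity-based integrator, or a physics step is then a local change to one function.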

But *buyer beware*: we need to think as much about what isn't changing as about what is. Just like we use grids and alignment to keep layouts tidy, so should we use animation consistently to bring a rhythm and voice to our interactions.

In what you just saw, little was left to chance. Color, size, proportion, direction, orientation, speed, timing… they're used consistently throughout, there to reinforce the connections that are expressed. I try to hash ideas into memorable qualia, while avoiding optical illusions or accidental resemblances. If it's not the same, it should look, act or speak differently. Even if it's just a slightly different shade of blue, or a 300ms difference in timing.

Though MathBox is a simple script-based animator (for now), it exposes some interesting knobs to play with and can handle arbitrary motion through custom expressions. It also supports slowable time and maintains per-slide clocks. If you map bullet time to a remote control, you can manipulate time mid-sentence: you don't need to follow your slides, your slides follow you. It feels ridiculously empowering when you're doing it. When properly applied, you can build up a huge amount of context and carry it along for extended periods of time at the forefront of people's attention.

Many of these slides are based on live systems that run in the background, advancing at their own pace. The entire diagram reflows, maintaining the mathematical laws and relations that are represented. Often, I use periodic or pseudo-random motion to animate the source data. While it may just seem like a cool trick at first, I think it's actually the main feature. It changes every slide from one example into many different ones. It shows them one after the other, streaming into your brain at 60 frames per second. It maximizes the use of visual bandwidth, yet cause and effect can still be read directly from any freeze frame.

Additionally, looping creates a continuous reinforcement of the depicted mechanisms. In my experience, our intuition can absorb math this way through mere exposure, slowly internalizing models of abstract spaces, as well as the relations and algorithms that operate within. Even if we don't notice right away, it anchors our understanding for later when it's finally expressed formally and verbally.

That's the theory anyway: put the murder weapon on the mantelpiece in the opening scene, and work your way towards revealing it. Only the weapon could be complex exponentials, and the mantelpiece the real number line. I'm not an education specialist or neuroscientist, though I did devour David Marr's seminal work on the human visual system. I just know that whenever I manage to pour a complicated concept into a concise and correct visual representation, it instantly begins to make more sense.

Bret Victor has talked about media as thinking tools, about needing a better way to examine and build dynamical systems, a new medium for thinking the unthinkable. In that context, MathBox is an attempt at *viewing the unviewable*. To borrow a term, it's qualiscopic. It's about creating graphical representations that obey certain laws, and in doing so, making abstract things tangible. It encourages you to smoothly explore the space of all possible diagrams, and find paths that are meaningful.

By carefully crafting living dioramas of space and time this way, we can honestly say: nothing will appear, change or disappear without explanation.

*Comments, feedback and corrections are welcome on Google Plus. Diagrams powered by MathBox.*

<script type="text/javascript"> window.MathJax && MathJax.Hub.Queue(["Typeset",MathJax.Hub]); Acko.queue(function () { Acko.Fallback.warnWebGL(); }); </script>

“The last time that I stood here was seven years ago.”

“Seven years ago! How little do you mortals understand time. Must you be so linear, Jean-Luc?”

– Picard and Q, All Good Things, Star Trek: The Next Generation

*Note: This article is significantly longer than previous instalments. It features 4 interactive slideshows, each introducing a new tool as well as related concepts around it. In one way, it's just another math guide, but going much deeper. In another, it's a thesis on everything I know about animating. Their intersection is a handbook for anyone who wants to make things move with code, but I hope it's an interesting read even if that's not your goal.*

Developers have a tough job these days. A seamless experience is mandatory, polish is expected. On touch devices, they are expected to become magicians. The trick is to make an electronic screen look and feel like something you can physically manipulate. Animation is the key to all of this.

Not just any animation though. Flash intros were hated for a reason. The `<blink>` tag is not your friend, and flashing banner ads only annoy rather than invite. If elaborately designed effects distract from the content, or worse, ruin smoothness and performance, they'll turn people off rather than endear. Animation can only add value when it's fast and fluid enough to be responsive.

It's not mere polish either, a finishing touch. Animation, and UI in general, should always be an additional conversation with the user, not a representation of internal software or hardware state. When we press Play in a streaming music app, the app should respond immediately by showing a Pause control, even if the music won't actually start playing for another second. When we enable Airplane Mode on our phones, we don't care that it'll take a few seconds to say goodbye to the cell tower and turn off the radio. The UI is there to respond to our wishes: it should act like a personal assistant, not a reluctant helper, or worse, a demanding master.

<aside class="g5 mt2 tc l"><div class="pad"> <img src="http://acko.net/files/animating/genie.jpg" alt="Apple Genie Effect" /> <p>The OS X 'genie' effect. Ridiculed, but it leaves no question where the window went.</p> </div></aside>

Hence animation is visual language and communicates both explicitly and implicitly. It establishes an unspoken trust and confidence between designer and user: we promise nothing will appear, change or disappear without explanation. It can show where to find things, like an application that minimizes into place in the dock, or a picture sliding into a thumbnail strip. It can tell miniature stories, like a Download button turning into a progress bar turning into a checkmark. More simply, even the act of scrolling around a live document creates the illusion of viewing an infinite canvas, persisting in space and time. Here, page layout is the use of placement and style to denote hierarchy and meaning in a 2D space.

As with any conversation, tone matters, in this case expressed through choreography. Items can fade into the background or pop to demand our attention, expressing calm or assertiveness. Elements can complement or oppose, creating harmony or dissonance. Animations can be minimalist or baroque, ordered or parallel, independent or layered. The proper term for this is *staging*, and research shows that it can significantly increase our understanding of diagrams and graphs when applied carefully. Whenever elements transition, preferably one at a time, it is easier to gauge changes in color, size, shape and position than when we are only shown a before and after shot.

This is important everywhere, but especially so for abstract topics like data visualization and mathematics. When we have no natural mental model of something, we build our understanding based on the interface we use to examine it. The more those interfaces act like real objects, the less surprising they are.

In doing so, we replace explicit explanations with implicit metaphors from the natural world: distance, direction, scale, shadow, color, contrast. These are the cues our brains evolved to be excellent at interpreting. By imbuing virtual objects with these properties, we make them more realistic and thus more understandable. Mind you, this is not a call for *skeuomorphism*, far from it. The properties we are seeking to mimic are far more basic, far more important, than some faux leather and stitching.

<aside class="g5 mt1 tc l cl"><div class="pad"> <a href="http://bl.ocks.org/mbostock/1062288"><img src="http://acko.net/files/animating/d3.jpg" alt="D3.js Force Directed Graph" /></a> <p>D3.js Force Directed Graph — <a href="http://bl.ocks.org/mbostock/1062288">Mike Bostock</a></p> </div></aside>

<aside class="g5 mt1 tc l cl"><div class="pad"> <img src="http://acko.net/files/animating/padd.jpg" alt="PADD" /> <p>Star Trek TNG PADD, aka the iPad. Arrived slightly before the 2360s.</p> </div></aside>

The clearest example of this has to be inertial scrolling. Compared to an ordinary mouse wheel, scrolling on a tablet is actually much more complicated. We can flick and grab, go as fast or slow as we want. When skimming through a list, often we never wait for the page to stop moving, in theory requiring more effort to read. Yet everyone who's seen a toddler with an iPad can attest to its uncanny ease of use and efficiency, offering improved control and comprehension. Our brains are very good at tracking and manipulating objects in motion, particularly when they obey the laws of physics: moving with *consistent inertia and force*.
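What "consistent inertia" means in code can be sketched as a frame-by-frame velocity decay under a coefficient of friction. This is a simplified model (real implementations also track touch velocity and frame timing), and the closed form for the resting position comes from summing the geometric series of decaying velocities:

```javascript
// Inertial scroll sketched as frame-by-frame velocity decay.
// Each frame, velocity shrinks by a friction coefficient f in (0, 1),
// and position accumulates velocity * dt (simple Euler integration).
function coast(p0, v0, f, dt) {
  let p = p0, v = v0;
  while (Math.abs(v) > 1e-3) { // animate until velocity is close enough to 0
    p += v * dt;
    v *= 1 - f;
  }
  return p;
}

// The final resting position in closed form, from summing the
// geometric series of decaying velocities: p∞ = p0 + v0 · dt / f.
const restingPosition = (p0, v0, f, dt) => p0 + v0 * dt / f;
```

Because the resting position is known up front, the friction can be nudged so the scroll always settles on a meaningful boundary, without touching position or velocity directly.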

Which brings me to the actual topic of this post: how animation works on a fundamental level. I'd like to teach a mental model based on physics and math, and how to precisely control it. Along the way, we'll come to understand why Apple built a physics engine into iOS 7's UI, reveal some secrets of the demoscene, compose fluid keyframe animations, and defeat the final boss: seamless rotation in 3D. In doing so, we'll also go beyond just visual animation. The techniques described here work equally well for manipulating audio, processing data or driving meatspace devices. In a world of data, animation is just a different word for *precise control*.

An animation is something that *changes over time*. As it so happens, these three humble words are a veritable Pandora's box of mathematics. They open up to the strange world of the continuously and infinitely divisible, also known as *calculus*.

In a previous article, I covered the origins of calculus and how to approach the concept of infinity. In what follows, we won't be needing it much though. We'll be working with finite steps throughout, with *discrete* time. This makes it vastly easier to understand, and is an eminently useful stepping stone to the true theory of continuous motion, which you can find in any good physics textbook.
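In discrete time, a derivative becomes a finite difference and an integral becomes a running sum. As a sketch, assuming positions sampled at 60 fps:

```javascript
// Discrete time: the derivative becomes a finite difference,
// the integral becomes a running sum. Positions sampled at 60 fps.
const dt = 1 / 60;

// Central difference: velocity at frame i, from its two neighbours.
const velocity = (p, i) => (p[i + 1] - p[i - 1]) / (2 * dt);

// Euler integration: rebuild positions from a starting value and
// per-frame velocities, accumulating one step at a time.
function integrate(p0, v) {
  const p = [p0];
  for (const vi of v) p.push(p[p.length - 1] + vi * dt);
  return p;
}
```

These two operations are inverses of each other, up to a starting value and a small approximation error, which is all the calculus the rest of this article needs.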

Math class hates it when we just punch numbers into our calculator instead of deducing the exact result: a decimal number is meaningless on its own. On that, I can agree. But when we punch in a couple thousand numbers and look at them in aggregate, it can tell us just as much. This page will be your calculator.

<div class="wide slideshow full"> <div class="iframe c"> <iframe src="/files/animating/mb-1-ease.html" class="mathbox paged autosize" height="320"></iframe> </div> <div class="steps"> <div class="step"> <p>Let's start where Isaac Newton supposedly did, with an apple.</p> </div> <div class="step"> <p>Gravity kicks in. The apple bounces off the ground, losing some energy in the process. After a few bounces, its kinetic energy (speed) and potential energy (height) have both dissipated, and the apple is at rest.</p> </div> <div class="step"> <p>But analyzing motion by watching it in real-time is tricky. It's better to visualize time as its own dimension, here horizontal, and look at the entire animation as a whole.</p> </div> <div class="step"> <div class="extra top right" data-delay="4"><big><big>$$ \class{blue}{p(t)} $$</big></big></div> <p>The apple's position $ \class{blue}{p(t)} $ moves through space and time, along arcs of decreasing height and duration. Once at rest, it continues advancing through time, without moving in space. In common parlance, this is the animation's easing curve.</p> </div> <div class="step"> <div class="extra top right" data-delay="2"><big><big>$$ \class{blue}{p_i}, \, t_i $$</big></big></div> <p>It's worth pointing out they're not really arcs. This animation consists of individually numbered frames $ i $, switching 60 times per second. While a frame is displayed, the position $ \class{blue}{p_i} $ of the apple is constant. In between, its value changes instantly, at times $ t_i $.</p> </div> <div class="step"> <p>For convenience's sake, it's reasonable to consider this a curve, approximated by a series of straight lines. After all, that's the illusion that the animation successfully tricks us into seeing. The discrete nature of the curve will let us dissect it more easily.
We're interested in the physics of this motion.</p> </div> <div class="step"> <div class="extra top left" data-delay="2"><big>$$ \class{green}{v_{i→}} = \frac{\class{blue}{p_{i+1}} - \class{blue}{p_{i}}}{t_{i+1} - t_i} $$</big></div> <p>To determine the speed of the apple, we find the slope of a line segment: <span class="purple">vertical divided by horizontal</span>. Dividing distance by time gives us <em>speed</em>, e.g. meters per second. But actually, we're dealing with its cousin <span class="green">velocity</span>, which has a direction too. Positive slope means going up, negative slope means going down. This operation is called a <em>forward or backward difference</em>, depending on whether you look forward ($ \class{green}{v_{i→}} $) or backward ($ \class{green}{v_{←i}} $) around a point.</p> </div> <div class="step"> <div class="extra top left" data-delay="2"><big>$$ \class{green}{v_{i↓}} = \frac{\class{blue}{p_{i+1}} - \class{blue}{p_{i-1}}}{t_{i+1} - t_{i-1}} $$</big></div> <p>Forward differences tell us about what's happening <em>between</em> two adjacent points. We're more interested in what's happening at the points themselves. To fix this, we can take a <em>central difference</em> $ \class{green}{v_{i↓}} $, spanning two frames instead. We now get a good approximation for the slope directly at a point of interest, and thus the <span class="green">velocity</span>.</p> </div> <div class="step"> <div class="extra top right" data-delay="4" data-hold="1"><big>$$ \class{blue}{p_i}, \, \class{green}{v_{i↓}} $$</big></div> <p>If we apply this procedure along the entire curve, we can graph the apple's <span class="green">velocity</span> over time, in sync with its <span class="blue">position</span>.
This is the discrete version of <em>taking the derivative</em> in calculus, or <em>differentiation</em> and shows these two quantities are intimately related.</p> </div> <div class="step"> <p>While in the air, the apple's <span class="green">velocity</span> decreases along a straight line, first positive, then negative. On impact, the velocity suddenly reverses, though only to a portion of its previous value. At the top of each arc, the velocity passes through zero, which means the apple essentially hangs motionless in the air for a fraction of a second.</p> </div> <div class="step"> <div class="extra top right" data-delay="4" data-hold="1"><big>$$ \class{blue}{p_i}, \, \class{green}{v_{i↓}}, \, \class{orangered}{a_{i↓}} $$</big></div> <p>To further analyze this, we can repeat the procedure, and find the slope of the <span class="green">velocity</span>. This is the <em>change in velocity over time</em>, better known as <span class="orangered">acceleration</span>. It can be expressed in <em>meters per second per second</em>, that is, $ m / s^2 $. According to Newton, acceleration is <em>force divided by mass</em>: the heavier something is, the less effect the same force has.</p> </div> <div class="step"> <p>What looked like a complicated animation at the <span class="blue">position</span> level is now revealed to be very simple: the apple undergoes a small constant <span class="orangered">acceleration</span> downwards from gravity. It also experiences a short burst of much stronger acceleration upwards whenever it bounces. Once the upward force goes below a critical threshold, the apple stops moving. At the end, gravity is countered by the apple's resistance to being squished, and the net acceleration is zero.</p> </div> <div class="step"> <p>Suppose we were given only the <span class="orangered">acceleration</span>, and wanted to reconstruct the animation. 
Can we do that?</p> </div> <div class="step"> <div class="extra bottom right" data-delay="2"><big>$$ \class{green}{v_{i+1→}} = \class{green}{v_{i→}} + \class{orangered}{a_{i→}} \cdot (t_{i+1} - t_i) $$</big></div> <p>Yep, we just work our way back up. If the <span class="orangered">acceleration</span> represents a difference in <span class="green">velocity</span> over time, then we can track the velocity by adding these differences back, accumulating them one step at a time. Since we divided the differences by time initially, we'll now have to multiply <span class="orangered">each value</span> by the time between frames. Technically we need forward differences ($ \class{orangered}{a_{i→}} $) for this, not central ones ($ \class{orangered}{a_{i↓}} $), but the error will be minor.</p> </div> <div class="step"> <div class="extra bottom right" data-delay="6.5"><big>$$ \class{green}{v_{i+1→}} = \sum\limits_{k=0}^i\class{orangered}{a_{k→}} \cdot Δt $$</big></div> <p>In calculus, this accumulation process is called <em>integration</em>. In our case, it's a sum ($ \sum $). As we are multiplying the vertical value $ \class{orangered}{a_{k→}} $ by the horizontal time step $ Δt $, each term represents the area of a thin rectangle. By adding up all these signed areas, positive for up and negative for down, we can approximate the <em>integral</em> and get <span class="green">velocity</span> back. Integrals and areas under curves are very closely linked.</p> </div> <div class="step"> <div class="extra top right" data-delay="6.5" data-hold="1"> <big>$$ \class{green}{v_{i+1→}} = \class{green}{v_{0→}} + \sum\limits_{k=0}^i\class{orangered}{a_{k→}} \cdot Δt $$</big> <big>$$ \class{blue}{p_{i+1}} = \class{blue}{p_0} + \sum\limits_{k=0}^i\class{green}{v_{k→}} \cdot Δt $$</big> </div> <p>Similarly, we can integrate <span class="green">velocity</span> into <span class="blue">position</span> by adding up strips of area under the velocity curve, recreating the original bounce. 
Note that for both sums, we needed to manually specify the starting point. If we didn't set it correctly, the apple would drift, bounce on thin air or penetrate the ground.</p> </div> <div class="step"> <p>We've produced real physical behavior from raw forces like gravity. That means we've just described a real physics engine. It's a one-dimensional one, but a physics engine none the less. It implements <em>Euler integration</em>, a fast but generally inaccurate method. In this case, the reconstruction is not perfect due to the earlier mentioned usage of <em>central</em> rather than <em>forward</em> differences.</p> </div> <div class="step"> <p>We only need one of the three in order to produce a plausible copy of the other two. That means we can control animations on any of the three levels. If we want full control, we specify <span class="blue">position</span> directly. For simple constrained motions, we can manipulate <span class="green">velocity</span> and integrate once. For full-on physics, we set <span class="orangered">acceleration</span> from physical laws and integrate twice. This is why the Newtonian model of motion is so important.</p> </div> <div class="step"> <p>It also reveals smoothness. A smooth animation isn't just continuous in its <span class="blue">path</span>. Its <span class="green">velocity</span> is continuous too, without sudden jumps. In some cases, we'll even want smooth acceleration too. An ordinary bounce effect is shown to involve a large <span class="orangered">acceleration</span>, a sudden <em>jerk</em>. This is a noticeable visual disruption, the kind we generally want to avoid. If you've ever tried to ignore a bouncing icon, you'll know how hard this is.</p> </div> <div class="step"> <p>In fact, <span class="slate">jerk</span> is what we call the slope of <span class="orangered">acceleration</span>. That's three derivatives deep, and it's turtles all the way down. 
The next ones are imaginatively called <em>snap</em>, <em>crackle</em> and <em>pop</em>, though they signify little directly. A large <span class="slate">jerk</span> however implies a sudden, jarring <em>change in force</em>.</p> </div> <div class="step"> <div class="extra top right"> <big>$$ \class{purple}{E_p} = m \cdot g \cdot h $$</big> </div> <p>There's more physics hiding in plain sight. Earlier on, I mentioned energy: kinetic and potential. The apple's <span class="purple">available potential energy</span> $ \class{purple}{E_p} $ comes from gravity and is proportional to its height $ h $ above the ground, as well as the mass $ m $ and the local strength of gravity $ g $.</p> </div> <div class="step"> <div class="extra top right"> <big>$$ \class{cyan}{E_k} = \frac{1}{2} \cdot m \cdot v^2 $$</big> </div> <p>The <span class="cyan">kinetic energy</span> $ \class{cyan}{E_k} $ comes from its motion. It's proportional to the <span class="green">velocity</span> <em>squared</em>. That means each additional meter per second makes the previous ones more energetic, adding more kinetic energy the <em>faster it's already going</em>. To explain, we can imagine the force required to stop a moving object. By increasing the speed, you don't just add additional momentum: the impact also takes <em>less time</em>, concentrating it.</p> </div> <div class="step"> <div class="extra top right"> <big>$$ \class{purple}{E_p} = m \cdot g \cdot h $$</big> <big>$$ \class{cyan}{E_k} = \frac{1}{2} \cdot m \cdot v^2 $$</big> </div> <p>In a closed system, total momentum is conserved. As we are treating gravity as an outside force, this does not apply. Energy is conserved however. There's a vertical symmetry, where one energy level goes up as the other goes down, and vice versa. So we actually have a fourth level to control physics at: that of <em>energy</em> and <em>potential</em>. 
With some minor bookkeeping, we can create motion this way, called <em>Hamiltonian mechanics</em>.</p> </div> <div class="step"> <div class="extra top right" data-hold="1"> <big>$$ \class{royal}{E_t} = \class{purple}{E_p} + \class{cyan}{E_k} $$</big> </div> <p>The <span class="royal">total energy</span>, <span class="purple">potential</span> plus <span class="cyan">kinetic</span>, is perfectly constant between bounces. On impact, a significant amount is lost. Note that the dips towards zero are a side effect of the finite approximation: if the bounce occurs between two frames, the apple appears to slow down for a frame, instantly falling down and bouncing back to where it was one frame earlier. Finite differences are oblivious to this.</p> </div> <div class="step"> <p>The energy levels follow a <span class="orangered">decaying exponential curve</span>. This is very typical: exponentials show up whenever a quantity is related to its rate of change. Hamiltonian models are useful for more complicated things like 3D roller coasters, where they allow you to abstract away complex interactions into a few concise relations like this.</p> </div> <div class="step"> <p>In simple animation though, we'll generally stick to the direct Newtonian model. We can use it to analyze real use cases. Let's start with a common easing curve, cosine interpolation, used by default in <code>jQuery.animate()</code> and these slides too.</p> </div> <div class="step"> <div class="extra right" data-delay="1.5"> <big>$$ lerp(\class{orangered}{a}, \class{green}{b}, f) = \class{orangered}{a} + (\class{green}{b} - \class{orangered}{a}) \cdot f $$</big> </div> <p>We animate the apple's position, changing its Y coordinate. In practice, that means we apply <em>linear interpolation</em>, lerping, between the start $ \class{orangered}{a} $ and end $ \class{green}{b} $. We take the starting point and add a fraction $ f $ of the difference $ \class{green}{b} - \class{orangered}{a} $ to it. 
Half the difference gets us halfway there, and so on. As long as $ f $ is between 0 and 1, we end up somewhere in the middle. When $ f $ reaches 1, the animation is complete.</p> </div> <div class="step"> <div class="extra bottom" data-delay="1.5"> <big>$$ elerp(\class{orangered}{a}, \class{green}{b}, f) = \class{orangered}{a} + (\class{green}{b} - \class{orangered}{a}) \cdot \class{blue}{ease(f)} $$</big> <big>$$ \class{blue}{ease(f)} = 0.5 - 0.5 \cdot \cos πf $$</big> <br /><br /> </div> <p>The purpose of the easing curve is then to make the animation non-linear, not in space, but in time: in this case, the apple smoothly starts and stops. We can use any curve we like, e.g. half of a cosine wave of period 2. This <em>eased lerp</em> is the basic building block of any animation system.</p> </div> <div class="step"> <div class="extra bottom" data-delay="4.5" data-hold="1"> <big>$$ \class{blue}{p_i}, \, \class{green}{v_{i↓}}, \, \class{orangered}{a_{i↓}} $$</big> </div> <p>The effect of the easing curve is visible when we take central differences again, and look at <span class="green">velocity</span> and <span class="orangered">acceleration</span>. The acceleration has been divided by 3 to fit. This doesn't seem bad, all three quantities appear to change smoothly. This picture is deceptive though.</p> </div> <div class="step"> <p>All curves continue before and after the animation. The smooth cosine ease turns out to be quite jarring in its <span class="orangered">acceleration</span>: it's like flooring the accelerator from standstill then easing off gently. At the halfway point you start braking, more and more until you stop. It's one of the most responsive animations possible that's still smooth at both ends. 
Smoother easing curves have smoother accelerations, but respond slower.</p> </div> <div class="step"> <div class="extra bottom" data-delay="2"> <big>$$ \class{blue}{ease(f)} = f^2 $$</big> </div> <p>A simpler example is the <em>half-ease</em>, here achieved with a quadratic curve $ \class{blue}{f^2} $. The <span class="green">velocity</span> is a linearly increasing ramp. The <span class="orangered">acceleration</span> is constant, except for a very large instant deceleration at the end. This is like flooring the accelerator from standstill, holding it down for the duration, and then crashing into a wall—the <em>suicide ease</em>. Due to this, half-easing is typically used for fading transitions, where the object is invisible–or the audio inaudible–at the start or end. </p> </div> <div class="step"> <div class="extra bottom" data-delay="4"> <big>$$ \class{blue}{ease(f)} = \left\{ \begin{array}{ll} f^2 & \mbox{if } f \leq 1 \\ 2f - 1 & \mbox{if } f > 1 \end{array} \right. $$</big> </div> <p>But we can repurpose it quite easily. By tweaking this at the <span class="green">velocity</span> level, we can maintain a constant speed at the end. This is the <em>slow start</em>, and can be expressed directly as an open-ended easing curve. In this case, we allow $ f $ to exceed 1, and the linear interpolation turns into <em>extrapolation</em> for free, no extra charge. We can scale the curve vertically to change the final speed, and scale it horizontally to control the delay. The slow start (and stop) is used throughout these slides.</p> </div> <div class="step"> <div class="extra bottom edge" data-delay="7"> <big>$$ \class{blue}{ease(f)} = \frac{1}{4} \cdot (1 - \cos 2πf) + \left\{ \begin{array}{ll} f^2 & \mbox{if } f \leq 1 \\ 2f - 1 & \mbox{if } f > 1 \end{array} \right. $$</big> </div> <p>We can combine curves too. Here, we add a cosine wave to the slow start, creating perhaps the motion of a rising jellyfish. 
Adding up animations is an easy way to create variations on a theme, used often in the demoscene. The derivatives add straight up too, so all three curves shift up and down by a sine or cosine wave. You can see how a small shift in <span class="blue">position</span> can have a large effect on both <span class="green">velocity</span> and <span class="orangered">acceleration</span>. </p> </div> <div class="step"> <p>The next example is a bit different. Any guesses as to what this is? The hint is in the vertical scale, now measured in pixels. This animation moves almost 1000 pixels in just over one second.</p> </div> <div class="step"> <p>It's an inertial flick gesture, recorded on Mac OS X. We can plot <span class="green">velocity</span> and <span class="orangered">acceleration</span> again. There's a slight measurement error, visible as noisy ripples on the acceleration, even after smoothing out the data: derivatives are very sensitive to noise. The velocity and acceleration have also been scaled down to fit, as they are both quite large.</p> </div> <div class="step"> <p>The <span class="cyan">first part</span> of the curve is not an animation at all: it was tracking the direct motion of my finger. Fingers move very smoothly: the <span class="orangered">acceleration</span> follows a curve up and down. This is more physics: of nerve signals causing muscle fibers to contract and digits to move. This <em>work</em> smoothly converts <em>chemical potential</em> into <em>kinetic energy</em>. The small jump in speed at time 0 is easy to explain: my finger was already moving when it touched the pad.</p> </div> <div class="step"> <p>The <span class="cyan">second part</span> is the actual inertial animation. It kicks in as soon as the finger leaves the pad. All three values follow an exponential curve past that point, disregarding the noise. 
But the important one is <span class="green">velocity</span>: the animation starts with the last known velocity and smoothly decays it to zero. <span class="blue">Where we end up</span> depends on how fast we were going when the finger left the pad.</p> </div> <div class="step"> <div class="extra" data-hold="1" data-delay=".5"> $$ \class{green}{v_{i+1→}} = \class{green}{v_{i→}} \cdot (1 - \class{royal}{f}) $$ </div> <p>Inertial scroll is easiest to control at the <span class="green">velocity</span> level. We can measure the initial velocity by finding the <span class="blue">position</span>'s slope, usually averaged over several frames. We then start at this velocity, but reduce it every frame by a fraction $ \class{royal}{f} $, which is a <span class="royal">coefficient of friction</span>. We don't need to care how far we'll go or how long it'll take: we can just keep animating until the <span class="green">velocity</span> gets close enough to 0.</p> </div> <div class="step"> <p>Suppose we do care where we end up. We might be showing a list of items, each 100 pixels tall. It could be good to control the animation so it always stops right at an item. We can't violate the principle of smooth motion, so we can't just change the <span class="blue">position</span> or <span class="green">velocity</span> directly. We have to change the <span class="royal">coefficient of friction</span>.</p> </div> <div class="step"> <div class="extra" data-delay="1"> $$ \class{green}{v_{i→}} = \class{green}{v_{0→}} \cdot (1 - \class{royal}{f})^i $$ </div> <p>As the <span class="green">velocity</span> follows a simple curve, we don't have to track it manually. We can express it over time as a direct relation, based on the initial velocity $ \class{green}{v_{0→}} $. 
The exponential nature is clear, with the frame number $ i $ appearing as the exponent of a number between 0 and 1.</p> </div> <div class="step"> <div class="extra" data-delay="2"> $$ \begin{array}{rl} \class{blue}{p_{i}} & = \class{blue}{p_0} + \sum\limits_{k=0}^{i} \class{green}{v_{0→} \cdot (1 - f)^k} \cdot Δt \\ & = \class{blue}{p_0} + \class{green}{v_{0→}} \cdot Δt \cdot \class{purple}{\sum\limits_{k=0}^{i} (1 - f)^k} \end{array} $$ </div> <p>The position at frame $ i $ is then the sum of all the <span class="green">previous velocities</span> times the <em>time step $ Δt $</em>, just like before, relative to the initial position $ \class{blue}{p_0} $. As the time step and initial velocity are constant, we can move both outside <span class="purple">the sum</span>.</p> </div> <div class="step"> <div class="extra"> $$ \begin{array}{rl} \class{blue}{p_∞} & = \class{blue}{p_0} + \class{green}{v_{0→}} \cdot Δt \cdot \class{purple}{\sum\limits_{k=0}^{∞} (1 - f)^k} \\ & = \class{blue}{p_0} + \frac{\class{green}{v_{0→}} \cdot Δt}{\class{royal}{f}} \end{array} $$ </div> <p>To find the <span class="blue">final resting position</span>, we theoretically have to continue the animation all the way to infinity. This can be done using a limit. For now, we'll just look up the formula for this infinite sum, a <span class="purple">geometric series</span>. We end up <em>dividing by the <span class="royal">coefficient of friction</span></em>: the lower it is, the further we go after all. If the coefficient were 0, there'd be no friction. We'd divide by zero, because there's no final resting position when you never slow down.</p> </div> <div class="step"> <div class="extra" data-delay="1"> $$ \class{royal}{f} = \frac{\class{green}{v_{0→}} \cdot Δt}{\class{blue}{Δp}} $$ </div> <p>We can invert this relationship to find the <span class="royal">coefficient of friction</span> required to stop at a given target. We just need the initial distance to the target, $ \class{blue}{Δp} $. 
To apply this in practice, we determine the friction needed to reach the next couple of items, and pick the one which is closest to the default case. The user won't notice the subtle change in friction: the UI will just magically seem better.</p> </div> <div class="step"> <p>The simulation works identically in all cases and the velocities are still continuous and exponential, which means: <em>physical</em>. This effect only requires one additional calculation at the start, which makes it all the more strange that developers have come up with increasingly jarring ways to achieve something similar.</p> </div> <div class="step"> <p>Now let's try animating in 2D.</p> </div> <div class="step"> <div class="extra right"> $$ x(t) = \sin t $$ $$ y(t) = \sin t $$ </div> <p>We can move the apple in 2D by animating its X and Y coordinates. Here we animate both in lockstep, using a sine wave: the apple moves diagonally, as X and Y are always equal. By adjusting their relative amplitudes, we can control the angle of motion.</p> </div> <div class="step"> <div class="extra right"> $$ x(t) = \sin t $$ $$ y(t) = \sin \frac{7}{8}t $$ </div> <p>If we animate X and Y separately, we create arbitrary paths. Here they both follow a sine wave, but with different frequencies. The <span class="blue">resulting path</span> is called a Lissajous curve. The sine waves drift in and out of phase, going from a diagonal to an oval to a circle, and back again.</p> </div> <div class="step"> <div class="extra right edge"> $$ \class{blue}{\vec p(t)} = \begin{bmatrix} \class{blue}{p_x(t)} \\ \class{blue}{p_y(t)} \end{bmatrix} = \begin{bmatrix} \sin t \\ \sin \frac{7}{8}t \end{bmatrix} $$ </div> <p>It makes more sense to picture the position as a 2D vector, an arrow. It has both a direction and a length, relative to the origin.
While the calculation is equivalent—animating X and Y separately—the vector representation is more natural once we look at the derivatives.</p> </div> <div class="step"> <div class="extra left edge" data-align-x=".9" data-delay="2" data-hold="1"> <big>$$ \class{green}{\vec v_{i→}} = \frac{\class{blue}{\vec p_{i+1}} - \class{blue}{\vec p_{i}}}{t_{i+1} - t_i} $$</big> </div> <p>What do slope and <span class="green">velocity</span> mean in this context? The same principle applies: we take the <span class="purple">difference in position</span> between two frames, and divide it by the difference in time $ Δt $. In this case, all quantities except time are vectors.</p> </div> <div class="step"> <p>As a single frame is very short, the <span class="green">velocity</span> is quite large, and always tangent to the <span class="blue">path</span>. Its length directly represents speed.</p> </div> <div class="step"> <p>If we center the velocity vector, it traces out its own <span class="green">Lissajous curve</span>. This one is slightly different and doubles back on itself at regular intervals.</p> </div> <div class="step"> <div class="extra left edge" data-align-x=".9" data-delay=".5"> <big>$$ \class{orangered}{\vec a_{i→}} = \frac{\class{green}{\vec v_{i+1→}} - \class{green}{\vec v_{i→}}}{t_{i+1} - t_i} $$</big> </div> <p>We can apply finite differences again to dissect <span class="green">velocity</span> into <span class="orangered">acceleration</span>. It follows yet another Lissajous curve, a scaled and rotated version of the <span class="blue">position</span>.</p> </div> <div class="step"> <p>Finally, we can disentangle these curves by plotting them out over time. <span class="blue">Position</span>, <span class="green">velocity</span> and <span class="orangered">acceleration</span> dance around each other.
Despite its artificial construction, even this motion is physical: it's what happens when you take an object and hang it off independently moving horizontal and vertical springs of different stiffness. With the right visualization, raw physics is quite beautiful in its own right.</p> </div> </div> </div>
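The snap-to-item trick above fits in a handful of lines. Here's a minimal sketch, assuming a fixed 60 fps step; `itemSize` and `defaultFriction` are illustrative values of my own, not taken from the demo:

```javascript
// Snap-to-item inertial scroll: decay velocity by a friction coefficient
// each frame, and pick the friction that lands the scroll on an item
// boundary. itemSize and defaultFriction are illustrative values.
const dt = 1 / 60;            // fixed time step (seconds)
const itemSize = 100;         // pixels per list item
const defaultFriction = 0.05; // fraction of velocity lost per frame

// Final resting position: p0 + v0 * dt / f (sum of the geometric series).
function restingPosition(p0, v0, f) {
  return p0 + v0 * dt / f;
}

// Friction needed to stop exactly at a target: f = v0 * dt / Δp.
function frictionForTarget(p0, v0, target) {
  return v0 * dt / (target - p0);
}

// Pick the item boundary whose required friction is closest to the default.
function snapFriction(p0, v0) {
  const ideal = restingPosition(p0, v0, defaultFriction);
  return [Math.floor(ideal / itemSize), Math.ceil(ideal / itemSize)]
    .map(n => frictionForTarget(p0, v0, n * itemSize))
    .reduce((a, b) =>
      Math.abs(a - defaultFriction) < Math.abs(b - defaultFriction) ? a : b);
}

// Simulate: multiply velocity by (1 - f) every frame until it dies out.
function simulate(p0, v0, f) {
  let p = p0, v = v0;
  while (Math.abs(v) > 1e-6) { p += v * dt; v *= 1 - f; }
  return p;
}
```

With a flick of 550 px/s from position 0, the default friction would coast to 183.3 px; the nearest item boundary that needs a similar friction is 200 px, and the adjusted simulation stops there.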

We've seen how to examine an animation at multiple levels of change: position, velocity, acceleration. Differences approximate *derivatives* and let us dissect our way down the chain. Accumulators approximate *integration* and let us construct higher levels from lower ones. Thus we can manipulate an animation at any level. By plugging in correct physical laws or arbitrary formulas, we can produce behavior that is as physical or unphysical as we like.
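A minimal sketch of that round trip: difference a sampled curve into velocities, then accumulate them back into position. The sine curve is just an arbitrary example.

```javascript
// Differences approximate derivatives, accumulators approximate integrals,
// and the two operations cancel: rebuilding position from its forward
// differences recovers the original curve exactly (up to floating point).
const dt = 1 / 60;
const position = [];
for (let i = 0; i <= 60; i++) position.push(Math.sin(i * dt));

// Forward differences: average velocity over each time step.
const velocity = [];
for (let i = 0; i < position.length - 1; i++) {
  velocity.push((position[i + 1] - position[i]) / dt);
}

// Accumulate the pieces back together from the initial position.
const rebuilt = [position[0]];
for (let i = 0; i < velocity.length; i++) {
  rebuilt.push(rebuilt[i] + velocity[i] * dt);
}
```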

Everything we've done so far has been independent animation, without interaction. Even inertial scrolling has this luxury: whenever the user is touching, there is no inertia and the animation system is inactive. It's only when you let go that the surface coasts.

In many cases, this is not enough: animations need to be scheduled and executed while retaining full interactivity. Often the animation needs to continue despite its target changing midway. In order to handle such situations, we need to build adaptive models that remain continuous and smooth, no matter what.

We'll also need to drop the assumption that the frame rate (the time step) is constant. In the real world, the frame rate might drop here or there, or be variable altogether. In either case, we'd prefer the effect to be minimal. If we're adding music to an animation, this is essential to prevent desynchronization. Variable frame rates also have some nasty consequences for our physics engine, which we'll need to level up significantly.
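The walkthrough below arrives at decoupled physics and rendering clocks. As a minimal sketch: physics advances in identical fixed steps, and rendering interpolates linearly between the last two physics states. The constant-velocity "physics" here is a stand-in for a real simulation.

```javascript
// Decoupled clocks: physics advances in fixed, reproducible steps while the
// renderer samples at arbitrary times by interpolating the last two states.
const PHYSICS_DT = 1 / 120; // fixed simulation step (seconds)

function makeSimulation() {
  let state = { p: 0, v: 100 };  // position (px), velocity (px/s)
  let prev = { ...state };
  let physicsTime = 0;

  function step() {              // one fixed physics step
    prev = { ...state };
    state = { p: state.p + state.v * PHYSICS_DT, v: state.v };
    physicsTime += PHYSICS_DT;
  }

  // Called with the render clock's time: catch the physics up, then
  // interpolate linearly between the last two physics frames.
  return function render(renderTime) {
    while (physicsTime < renderTime) step();
    const alpha = 1 - (physicsTime - renderTime) / PHYSICS_DT;
    return prev.p + (state.p - prev.p) * alpha;
  };
}

const render = makeSimulation();
```

Because the physics steps are identical no matter when `render` is called, two runs with different render timings produce the same motion.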

<div class="wide slideshow full"> <div class="iframe c"> <iframe src="/files/animating/mb-2-adaptive.html" class="mathbox paged autosize" height="320"></iframe> </div> <div class="steps"> <div class="step"> <p>So far, we've assumed a constant frame rate.</p> </div> <div class="step"> <p>If our animation is defined by an <span class="blue">easing curve</span>, we can look up its value at any point along the way.</p> </div> <div class="step"> <p>At first, it seems variable frame rates are trivial: we can evaluate the curve at arbitrary times instead of pre-set intervals.</p> </div> <div class="step"> <div class="extra top left" data-delay="1"> <br /><big>$$ \class{green}{v_{i→}} = \frac{\class{blue}{p_{i+1}} - \class{blue}{p_{i}}}{t_{i+1} - t_i} $$</big><br /><br /><br /><br /></div> <div class="extra top left" data-delay="2"> <br /><big><br /><br /><br /><br />$$ \class{blue}{p_{i+1}} = \class{blue}{p_0} + \sum\limits_{k=0}^i\class{green}{v_{k→}} \cdot Δt_k $$</big> </div> <p>If we take forward differences to measure slope, we still get a smooth <span class="green">velocity curve</span>. We can accumulate—<em>integrate</em>—these differences back into <span class="blue">position</span> as long as we account for a variable time step $ Δt_k $. It seems our physics engine should be unbothered too. But there are a few problems.</p> </div> <div class="step"> <div class="extra top right"> $$ \class{green}{v_{i+1→}} = \class{green}{v_{i→}} \cdot (1 - \class{royal}{f}) $$ </div> <p>First, if we implemented inertial scrolling like we did before, multiplying the <span class="green">velocity</span> by $ 1 - \class{royal}{f} $ every frame, we'd get the wrong curve.
The amount of velocity lost per frame should now vary; we can no longer treat it as a convenient constant.</p> </div> <div class="step"> <div class="extra top right" data-align-x=".8" data-delay="1"> <big><big>$$ \begin{array}{rcl} (1 - \class{purple}{f_i})^\frac{t}{Δt_i} & = & (1 - \class{royal}{f})^\frac{t}{Δt} \\ ⇔ \,\,\, \class{purple}{f_i} & = & 1 - e^{\frac{Δt_i}{Δt} \log_e (1 - \class{royal}{f})} \end{array} $$</big></big> </div> <p>If we do the math, we can find an expression for the correct amount of friction $ \class{purple}{f_i} $ per frame for a given step $ Δt_i $, relative to the default $ \class{royal}{f} $ and $ Δt $. Not pretty, and this is just one object experiencing one force. In more elaborate scenarios, finding exact expressions for positions or velocities can be hard or even impossible. This is what the physics engine is supposed to be doing for us.</p> </div> <div class="step"> <p>There's another problem. If we integrate these curve segments to get position, we get an <span class="blue">exponential curve</span>, just as before. Did we achieve frame rate independence?</p> </div> <div class="step"> <p>Well, no. If we change the time steps and run the algorithm again, it looks the same. However, the <span class="blue">new curve</span> and <span class="purple">old curve</span> don't match up. The difference is surprisingly large, as this animation is only half a second long and the average frame rate is identical in both cases. Such errors will compound the longer it runs, and make your program unpredictable.</p> </div> <div class="step"> <p>Luckily we can have our cake and eat it too. We can achieve consistent physics and still render at arbitrary frame rates. We just have to decouple the <span class="cyan">physics clock</span> from the <span class="purple">rendering clock</span>.</p> </div> <div class="step"> <p>Whenever we have to <span class="purple">render a new frame</span>, we compare both clocks.
If the render clock has advanced past the physics clock, we do one or more <span class="blue">simulation steps</span> to catch up. Then we <span class="cyan">interpolate linearly</span> between the last two values until we run out of physics again.</p> </div> <div class="step"> <p> This means the visuals are delayed by one physics frame, but this is usually acceptable. We can even <span class="blue">run our physics at half the frame rate</span> or less to conserve power. Though more error will creep in, this error will be identical between all runs, and we can manually compensate for it if needed.</p> </div> <div class="step"> <p> When we implement variable frame rates correctly, we can produce an <span class="cyan">arbitrary number of frames</span> at arbitrary times. This buys us something very important, not for the end-user, but for the developer: the ability to skip to any point in time, or at least fast-forward as quickly as your computer can manage. </p> </div> <div class="step"> <p> But just because the simulation is consistent doesn't mean it's correct or even <em>stable</em>. Euler integration fits our intuitive model of how pieces add up, but it's actually quite terrible. For example, if we made our <span class="blue">bouncing apple</span> perfectly <em>elastic</em> in the physical sense—losing no energy at all—and applied Euler, it would start <span class="cyan">bouncing irregularly</span>, gaining height. </p> </div> <div class="step"> <p> Which means the first bounce simulation wasn't using Euler at all. It couldn't have: the energy wouldn't have been conserved. All the finite differentiation and integration magic that followed only worked neatly because the <span class="blue">position</span> data was of a higher quality to begin with. We have to find the source of this phantom energy so we can correct for it, creating the <em>Verlet integration</em> that was used.
</p> </div> <div class="step"> <p> We're trying to simulate <span class="blue">this path</span>, the ideal curve we'd get if we could integrate with infinitely small steps. We imagine we start at the point in the middle, and would like to step forward by a large amount. The time step is exactly 1 second, so we can visually add accelerations and velocities like vectors, without having to scale them. Note that this is <em>not</em> a gravity arc, the downward force now varies. </p> </div> <div class="step"> <div class="extra top left" data-delay="2" data-hold="1"> <big> $$ \class{green}{v_{i+1→}} = \class{green}{v_{i→}} + \class{orangered}{a_{i→}} \cdot Δt $$ $$ \class{blue}{p_{i+1}} = \class{blue}{p_{i}} + \class{green}{v_{i→}} \cdot Δt $$ </big> </div> <p> Earlier, I said that if we used <span class="green">forward differences</span>, we could get the velocity between two points. And that we could make a reconstruction of <span class="blue">position</span> from forward <span class="green">velocity</span> by applying 'Euler integration'. While that's true, that's not actually what Euler integration <em>is</em>. </p> </div> <div class="step"> <p> See, this is a chicken and egg problem. This <span class="green">velocity</span> isn't the slope at the start <em>or</em> the end or even the middle. It's the <em>average velocity over the entire time step</em>. We can't get this velocity without knowing the future <span class="blue">position</span>, and we can't get there without knowing the <span class="green">average velocity</span> in the first place. 
</p> </div> <div class="step"> <div class="extra top left" data-delay="1" data-hold="1"> <big> $$ \class{green}{v_{i+1↓}} = \class{green}{v_{i↓}} + \class{orangered}{a_{i↓}} \cdot Δt $$ $$ \class{blue}{p_{i+1}} = \class{blue}{p_{i}} + \class{green}{v_{i↓}} \cdot Δt $$ </big> </div> <p> The <span class="green">velocity</span> that we're actually tracking is for the <span class="blue">point itself</span>, at the start of the frame. Any force or <span class="orangered">acceleration</span> is calculated based on that single instant. If we integrate, we move forward along the <span class="cyan">curve's tangent</span>, not the curve itself. This is where the extra height comes from, and thus, phantom gravitational energy. </p> </div> <div class="step"> <p> For any finite step, there will always be some overshooting, because we don't yet know what happens along the way. Euler actually made the same mistake we made earlier: he used a <em>central</em> difference where a <em>forward</em> one was required, because the forward difference can only be gotten <em>after the fact</em>. The 'central difference' here is the actual <span class="green">velocity</span> at a point, the true <em>derivative</em>. </p> </div> <div class="step"> <div class="extra top left" data-delay="3"> <big> $$ \class{green}{v_{i+1↓}} = \class{green}{v_{i↓}} + \class{orangered}{a_{i↓}} \cdot Δt $$ $$ \class{blue}{p_{i+1}} = \class{blue}{p_{i}} + \frac{\class{green}{v_{i↓}} + \class{green}{v_{i+1↓}}}{2} \cdot Δt $$ </big> </div> <p> As the acceleration changes in this particular scenario, we could try <span class="orangered">applying Euler</span>, and then averaging the <span class="green">start and end velocities</span> to get something in the middle. It fails, because the end velocity itself is totally wrong. Though we get closer than Euler did, we now <em>undershoot</em> by half the previous amount. 
</p> </div> <div class="step"> <div class="extra top left" data-delay="2"> <big> $$ \class{green}{v_{←i}} = \frac{\class{blue}{p_{i}} - \class{blue}{p_{i-1}}}{Δt} $$ </big> </div> <p> To resolve the chicken and egg, we need to look to the past. We assume that rather than starting with one position, we start with <span class="blue">two known good frames</span>, defined by us. That means we can take a <em>backwards</em> difference and now know the <span class="green">average velocity</span> of the <em>previous frame</em>. How does this help? </p> </div> <div class="step"> <p> Well, we assume that this velocity happens to be equal or close to the velocity at the <span class="purple">halfway point</span>. We also still assume the <span class="orangered">acceleration</span> is constant for the entire duration. If we then integrate from here to the <span class="cyan">next halfway point</span>, something magical happens. </p> </div> <div class="step"> <div class="extra top left" data-delay="2.5"> <big> $$ \class{green}{v_{i→}} = \class{green}{v_{←i}} + \class{orangered}{a_{i↓}} \cdot Δt $$ </big> </div> <p> We get a <em>perfect prediction</em> for the next frame's <span class="green">average velocity</span>, the <em>forward</em> difference. By always remembering the previous position, we can repeat this indefinitely. That this works at all is amazing: we're applying the exact same operation as before—<span class="orangered">constant acceleration</span>—for the same amount of time. On just a <em>slightly</em> different concept of velocity. Without even knowing exactly when the object reaches that velocity. That's <em>Verlet integration</em>. </p> </div> <div class="step"> <p>Euler integration failed on a simple <em>constant acceleration</em> like gravity and can only accurately replicate a linear ease $ f $. This motion is a <em>cubic ease</em> $ f^3 $, with <em>linear acceleration</em> that decreases. Verlet still nails it, even when leaping seconds at a time. 
Why does this work?</p> </div> <div class="step"> <p>Euler integration applies a constant acceleration <span class="orangered">ahead of a point</span>. If there's any decrease in acceleration, it <span class="slate">overestimates</span> by a significant amount. That's on top of stepping in the <span class="green">wrong direction</span> to begin with. Both <span class="blue">position</span> and <span class="green">velocity</span> will instantly begin to drift away from their true values. </p> </div> <div class="step"> <div class="extra left" data-delay="3"> <big> $$ \class{blue}{p_{i+1}} = 2 \cdot \class{blue}{p_{i}} - \class{blue}{p_{i-1}} + \class{orangered}{a_{i↓}} \cdot Δt^2 $$ </big> </div> <p>Verlet integration applies the same constant acceleration <span class="orangered">around a point</span>. If the acceleration is a perfect line, the error cancels out: the two triangles make up an equal <span class="cyan">positive</span> and <span class="slate">negative</span> area.
</p>
</div>
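The update above fits in a few lines. A sketch, using free fall under constant gravity as a check; the second seed frame is computed analytically, as a "known good frame":

```javascript
// Position Verlet, as in the formula above: the next position follows from
// the two previous ones plus the acceleration sampled at the current point.
function verlet(p0, p1, accel, dt, steps) {
  let prev = p0, p = p1; // two known good frames
  for (let i = 0; i < steps; i++) {
    const next = 2 * p - prev + accel(p) * dt * dt;
    prev = p;
    p = next;
  }
  return p;
}

// Free fall from rest under constant gravity, leaping half a second at a
// time. The second seed frame is the exact position after one step.
const g = 9.81, dt = 0.5;
const fall = verlet(0, -0.5 * g * dt * dt, () => -g, dt, 3);
// After 3 more steps, t = 4 * dt = 2s; the exact answer is -g * t^2 / 2.
```

Even with these huge steps, the result matches the analytic parabola: constant (and linearly varying) acceleration is reproduced exactly.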
<div class="step">
<p>As this captures the slope of acceleration, we only get errors if the acceleration <span class="orangered">curves</span>. In this case, the left and right areas don't cancel out exactly. The <span class="purple">missing area</span> however smoothly approaches 0 as the time step shrinks, a further sign of Verlet's error-defeating properties. If we do the math, we find the <span class="blue">position</span> has $ O(Δt^2) $ global error: decrease the time step $ Δt $ by a factor of 10, and it becomes 100× more accurate. Not bad.
</p>
</div>
<div class="step">
<p>
For completeness, here's the 4th order <em>Runge-Kutta</em> method (RK4), which is a sophisticated modification of Euler integration. It involves taking full and half-frame steps and backtracking. It finds 4 estimates for the <span class="green">velocity</span> based on the <span class="orangered">acceleration</span> at the start, middle and end.
</p>
</div>
<div class="step">
<p>
The physics can then be integrated from a weighted sum of these estimates, with coefficients $ [\frac{1}{6}, \frac{2}{6}, \frac{2}{6}, \frac{1}{6}] $. We end up in the <span class="blue">right place</span>, at the <span class="green">right speed</span>. This method offers an $ O(Δt^4) $ global error. Decrease the time step 10× and it becomes 10,000× more accurate. We have a choice of easy-and-good-enough (Verlet) or complicated-but-precise (RK4), at any frame rate. Each has its own perks, but Verlet is most attractive for games.
</p>
</div>
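For reference, a sketch of the classic RK4 step for the system $ p' = v $, $ v' = a(p, v, t) $. The harmonic-oscillator check is my own choice of test case, not from the demo:

```javascript
// Classic 4th-order Runge-Kutta for p' = v, v' = a(p, v, t): four velocity
// and acceleration estimates (start, two midpoints, end), combined with
// weights [1/6, 2/6, 2/6, 1/6].
function rk4Step(p, v, t, dt, a) {
  const k1p = v,                k1v = a(p, v, t);
  const k2p = v + k1v * dt / 2, k2v = a(p + k1p * dt / 2, v + k1v * dt / 2, t + dt / 2);
  const k3p = v + k2v * dt / 2, k3v = a(p + k2p * dt / 2, v + k2v * dt / 2, t + dt / 2);
  const k4p = v + k3v * dt,     k4v = a(p + k3p * dt, v + k3v * dt, t + dt);
  return {
    p: p + (k1p + 2 * k2p + 2 * k3p + k4p) * dt / 6,
    v: v + (k1v + 2 * k2v + 2 * k3v + k4v) * dt / 6,
  };
}

// Check against a known solution: the harmonic oscillator a = -p, with
// p(0) = 1 and v(0) = 0, has the exact solution p(t) = cos t.
let state = { p: 1, v: 0 };
for (let i = 0; i < 10; i++) {
  state = rk4Step(state.p, state.v, i * 0.1, 0.1, (p) => -p);
}
```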
<div class="step">
<p>
With physics under our belt, let's move on. Why not animate time itself? This is the <em>variable speed clock</em> and it's dead simple. It's also a great debugging tool: sync all your animations to a global clock and you can activate <em>bullet time</em> at will. You can tell right away if a glitch was an animation bug or a performance hiccup. On this site too: if you hold <kbd>Shift</kbd>, everything slows down 5×.
</p>
</div>
<div class="step">
<div class="extra top" data-delay="1">
<big>$$ \class{green}{v_{←i}} = \frac{\class{blue}{t_i} - \class{blue}{t_{i-1}}}{\class{blue}{t_i} - \class{blue}{t_{i-1}}} = \frac{Δt_i}{Δt_i} = 1 $$</big>
</div>
<p>
First, we differentiate the clock's <span class="blue">time</span> backwards—because in real-time applications, we don't know what the future holds. This is time's <span class="green">velocity</span> $ \class{green}{v_{←i}} $. As we have to divide by the time step too, the velocity is constant and equal to 1. Let's change that.
</p>
</div>
<div class="step">
<div class="extra top" data-delay="1.5" data-hold="1">
<big>$$ \class{blue}{t'_i} = \sum\limits_{k=0}^i \class{green}{v'_{←k}} \cdot Δt_k $$</big>
</div>
<p>
We can reduce the speed of time at will, by changing $ \class{green}{v_i} $. If we then multiply by the time step $ Δt_i $ again and <span class="green">add the pieces back together</span> incrementally, we get a <span class="blue">new clock $ t'_i $</span>. By <em>integrating</em> this way, we only need to worry about <span class="green">slope</span>, not <span class="blue">position</span>: time always advances consistently. This is also where variable frame rates pay off: going half the speed is the same job as rendering at twice the frame rate.
</p>
</div>
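The variable speed clock really is dead simple; a sketch, with an API shape of my own invention:

```javascript
// Variable speed clock: integrate time's velocity, so slowing down or
// speeding up never makes the clock jump.
function makeClock() {
  let t = 0;      // warped time
  let speed = 1;  // time's velocity
  return {
    setSpeed(s) { speed = s; },
    tick(dt) { t += speed * dt; return t; }, // advance by a real-time step
    now() { return t; },
  };
}

const clock = makeClock();
clock.tick(0.5);     // half a second of real time at full speed
clock.setSpeed(0.2); // bullet time: 5x slower
clock.tick(0.5);     // another half second of real time adds only 0.1
```

Because only the slope changes, the warped time always advances consistently, no matter when or how often the speed is adjusted.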
<div class="step">
<p>
Using our other tricks, we can animate $ \class{green}{v_i} $ smoothly, easing in and out of slow motion, or speeding into fast-forward. If we didn't do this, then any animation cued off this clock would jerk at the transition point. This is the <em>chain rule</em> for derivatives in action: derivatives compound when you compose functions. Any jerks caused along the way will be visible in the end result.
</p>
</div>
<div class="step">
<p>If time is smooth, what about interruptions? Suppose we have a <span class="blue">cosine eased animation</span>. After half a second, the user interrupts and triggers a new animation. If we abort the animation and start a new one, we create a huge jerk. The object stops instantly and then slowly starts moving again.
</p>
</div>
<div class="step">
<p>One way to solve this is to layer on another animation: one that <span class="purple">blends between</span> the two easing curves in the middle. Here it's just another cosine ease, interpolating in the vertical direction, between <span class="blue">two changing values</span>. We blend across the entire animation for maximum smoothness. This has a downside though: if the blended animation itself is interrupted, we'd have to layer on another blend, one for each additional interruption. That's too much bookkeeping, particularly when using long animations.</p>
</div>
<div class="step">
<p>We can fix this by mimicking inertial scrolling. We treat everything that came before as a black box, and assume nothing happens afterwards. We only look at one thing: <span class="green">velocity</span> at the time of interruption.</p>
</div>
<div class="step">
<p>After determining the <span class="green">velocity</span> of any running animations, we can construct a <span class="blue">ramp</span> to match. We start from 0 to create a <em>relative animation</em>.</p>
</div>
<div class="step">
<p>We can bend this <span class="blue">ramp</span> <span class="purple">back to zero</span> with another cosine ease, interpolating vertically. This time however, the first easing curve is no longer involved.</p>
</div>
<div class="step">
<p>If we then add this to the <span class="blue">second animation</span>, it <span class="purple">perfectly fills the gap</span> at the corner. We only need to track two animations at a time: the currently active one, and a <span class="cyan">corrective bend</span>. If we get interrupted again, we measure the combined velocity, and construct a new bend that lets us forget everything that came before.
</p>
</div>
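A sketch of the corrective bend: a ramp matching the measured velocity at the moment of interruption, faded back to zero with a cosine ease. The duration `T` is a free parameter.

```javascript
// Corrective bend: a relative animation starting at 0 with slope v0, bent
// back to zero value and zero slope over a chosen duration T. Adding it to
// the new animation fills the velocity gap at the corner.
function makeBend(v0, T) {
  return function bend(t) {
    if (t >= T) return 0;
    const ease = 0.5 - 0.5 * Math.cos(Math.PI * t / T); // 0 -> 1 over [0, T]
    return v0 * t * (1 - ease); // lerp(ramp, 0, ease)
  };
}

const bend = makeBend(3, 0.5); // e.g. 3 units/s at the moment of interruption
// bend(0) = 0, bend'(0) = v0, and both value and slope reach 0 at t = T.
```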
<div class="step">
<p>By using a <span class="cyan">different easing curve</span> for the correction, we can make it tighter, creating a slight wave at the end. Either way, it doesn't matter how the object was moving before, it will always recover correctly.</p>
</div>
<div class="step">
<p>But what if we get interrupted all the time? We could be tracking a moving pointer, following a changing audio volume, or just have a fidgety user in the chair. We'd like to smooth out this data. The interrupted easing approach would be constantly missing its target, because there is never time for the value to settle. There is an easier way.
</p>
</div>
<div class="step">
<div class="extra top edge">
<big>$$ \class{blue}{p_{i+1}} = lerp(\class{blue}{p_{i}}, \class{purple}{o_{i}}, \class{royal}{f}) $$</big>
</div>
<p>We use an <em>exponential decay</em>, just like with inertial scrolling. Only now we manipulate the <span class="blue">position $ p_{i} $</span> directly: we move it a certain constant fraction towards the target $ \class{purple}{o_{i}} $, chasing it. Here, $ \class{royal}{f} = 0.1 = 10\% $. This is a one-line feedback system that will keep trying to reach its target, no matter how or when it changes. When the <span class="purple">target</span> is constant, the <span class="blue">position</span> follows an exponential arc up or down.
</p>
</div>
<div class="step">
<div class="extra top edge">
$$ \class{blue}{p_{i+1}} = lerp(\class{blue}{p_{i}}, \class{purple}{o_{i}}, \class{royal}{f}) $$
$$ \class{cyan}{q_{i+1}} = lerp(\class{cyan}{q_{i}}, \class{blue}{p_{i}}, \class{royal}{f}) $$
</div>
<p> The <span class="blue">entire path</span> is continuous, but not <em>smooth</em>. That's fixable: we can apply <span class="cyan">exponential decay again</span>. This creates two linked pairs, each chasing the next, from $ \class{cyan}{q_{i}} $ to $ \class{blue}{p_{i}} $ to $ \class{purple}{o_{i}} $. Each level appears to do something akin to <em>integration</em>: it smooths out discontinuities, one derivative at a time. Where a curve crosses its parent, it has a local maximum or minimum. These are signs that calculus is hiding somewhere.
</p>
</div>
<div class="step">
<div class="extra top edge" data-hold="1">
$$ \class{blue}{p_{i+1}} = lerp(\class{blue}{p_{i}}, \class{purple}{o_{i}}, \class{royal}{f}) $$
$$ \class{cyan}{q_{i+1}} = lerp(\class{cyan}{q_{i}}, \class{blue}{p_{i}}, \class{royal}{f}) $$
$$ \class{slate}{r_{i+1}} = lerp(\class{slate}{r_{i}}, \class{cyan}{q_{i}}, \class{royal}{f}) $$
</div>
<p>That's not so surprising when you know these are <em>difference equations</em>: they describe a relation between a quantity and how it's changing from one step to the next. These are the finite versions of <em>differential equations</em> from calculus. They can describe sophisticated behavior with remarkably few operations. Here I added a third feedback layer. The <span class="slate">path</span> gets smoother, but also lags more behind the target.
</p>
</div>
<div class="step">
<p>If we increase $ f $ to 0.25, the curves respond more quickly. Exponential decays are directly tuneable, and great for whiplash-like motions. The more levels, the more inertia, and the longer it takes to turn.
</p>
</div>
<div class="step">
<div class="extra top edge">
$$ \class{blue}{p_{i+1}} = lerp(\class{blue}{p_{i}}, \class{purple}{o_{i}}, \class{blue}{f_1}) $$
$$ \class{cyan}{q_{i+1}} = lerp(\class{cyan}{q_{i}}, \class{blue}{p_{i}}, \class{cyan}{f_2}) $$
$$ \class{slate}{r_{i+1}} = lerp(\class{slate}{r_{i}}, \class{cyan}{q_{i}}, \class{slate}{f_3}) $$
</div>
<p>We can also pick a different $ f_i $ for each stage. Remarkably, the order of the $ \class{royal}{f_i} $ values doesn't matter: <span class="blue">0.1</span>, <span class="cyan">0.2</span>, <span class="slate">0.3</span> has the <span class="slate">exact same result</span> as <span class="blue">0.3</span>, <span class="cyan">0.2</span>, <span class="slate">0.1</span>. That's because these filters are all <em>linear, time-invariant systems</em>, which have some very interesting properties.</p>
</div>
<div class="step">
<p>
If you shift or scale up/down a particular input signal, you'll get the exact same output back, just shifted and scaled in the same way. Even if you shift by less than a frame. We've created <em>filters</em> which manipulate the frequencies of signals directly. These are 1/2/3-pole <em>low-pass filters</em> that only allow slow changes. That's why this picture looks exactly like <em>sampling</em> continuous curves: the continuous and discrete are connected.</p>
</div>
<div class="step">
<p>Exponential decays retain all their useful properties in 2D and 3D too. Unlike splines such as Bezier curves, they require no setup or garbage collection: just one variable per coordinate per level, no matter how long it runs. It works equally well for adding a tiny bit of mouse smoothing or for creating grand, sweeping arcs. You can also use it to smooth existing curves, for example after randomly distorting them.</p>
</div>
<div class="step">
<p>However, there's one area where decay is constantly used and really shouldn't be: download meters and load gauges. Suppose we start downloading a file. The <span class="purple">speed</span> is relatively constant, but noisy. After 1 second, it drops by 50%. This isn't all that uncommon: many internet connections are <em>traffic shaped</em>, allowing short initial bursts to help with video streaming, for example.</p>
</div>
<div class="step">
<div class="extra bottom" data-align-y=".75">
$$ \class{blue}{p_{i+1}} = lerp(\class{blue}{p_{i}}, \class{purple}{o_{i}}, \class{royal}{f}) $$
</div>
<p>Often developers apply <span class="blue">slow exponential easing</span> to try and get a stable reading. As you need to smooth quite a lot to get rid of all the noise, you end up with a long decaying tail. This gives a completely wrong impression, making it seem like the speed is still dropping, when it's actually been steady for several seconds. The same shape appears in Unix load meters: it's a lie.</p>
</div>
<div class="step">
<div class="extra bottom" data-align-y=".75">
$$ p'_{i+1} = lerp(p'_{i}, \class{purple}{o_{i}}, \class{royal}{f}) $$
$$ \class{cyan}{q_{i+1}} = lerp(\class{cyan}{q_{i}}, p'_{i}, \class{royal}{f}) $$
</div>
<p>If we apply <span class="cyan">double exponential easing</span>, we can increase $ f $ to get a <em>shorter tail</em> for the same amount of smoothing. But we can't get rid of it entirely: the more levels of easing we add, the more the curve starts to lag behind the data. We can do much better.</p>
</div>
<div class="step">
<p>We can analyze the filters by examining their response to a standard input. If we pass in a <span class="purple">single step</span> from 0 to 1, we get the <em>step response</em> for the <span class="blue">two</span> <span class="cyan">filters</span>.</p>
</div>
<div class="step">
<p>Another good test pattern is a single <span class="orangered">one frame pulse</span>. This is the <em>impulse response</em> for <span class="green">both</span> <span class="gold">filters</span>. The impulse responses go on forever, decaying to 0, but never reaching it. This shows these filters effectively compute a weighted average of every single value they've ever seen before: they have a <em>memory</em>, an <em>infinite impulse response</em> (IIR).</p>
</div>
<div class="step">
<p>Doesn't this look somewhat familiar? It turns out, the step response is the <em>integral</em> of the impulse response. It's a <span class="blue">position</span>. Vice versa, the impulse response is the <em>derivative</em> of the step response. It's a <span class="green">velocity</span>. Surprise, physics!</p>
</div>
<div class="step">
<p>But it gets weirder. <em>Integration</em> sums together all values starting from a certain point, multiplied by the (constant) time step. That means that integration is itself a <em>filter</em>: its <span class="purple">impulse response</span> is a single step, the <em>integral of an impulse</em>. Its <span class="slate">step response</span> is a ramp, a constant increase.</p>
</div>
<div class="step">
<p>It works the other way too. <em>Differentiation</em> takes the difference of neighbouring values. It's a filter and its <span class="orangered">step response</span> is just an impulse, detecting the single change in the <span class="purple">step</span>. Its <span class="slate">impulse response</span> is an upward impulse followed by a downward one: the <em>derivative of an impulse</em>. When one value is weighted positively and the other weighted negatively, the sum is their difference.</p>
</div>
<div class="step">
<div class="extra left edge purple" data-align-x=".95" data-delay="3.5">
<big><big>$$ \sum p_i \cdot Δt \,\,\, ↑ $$</big></big>
</div>
<div class="extra right edge slate" data-delay="3.5" data-align-x=".76">
<big><big>$$ ↓ \,\,\, \frac{Δp}{Δt} $$</big></big>
</div>
<p>This explains why exponential filters seem to have integration-like qualities: these are <em>all</em> integrators, they just apply different weights to the values they add up. Every step response is another filter's impulse response, and vice versa, connected through <span class="purple">integration</span> and <span class="slate">differentiation</span>. We can use this to design filters to spec.</p>
</div>
<div class="step">
<div class="extra bottom edge" data-delay="2">
<big>$$ \class{green}{v_{i→}} = \sin \frac{π}{4} t_i $$</big>
</div>
<p>That said, filter design is still an art. IIR filters are feedback systems: once a value enters, it never leaves, bouncing around forever. Controlling it precisely is difficult under real world conditions, with finite arithmetic and noisy measurements to deal with. Much simpler is the <span class="green">finite impulse response</span> (FIR), where each value only affects the output for a limited time. Here I use one lobe of a sine wave over 4 seconds.</p>
</div>
<div class="step">
<div class="extra bottom edge" data-delay="2">
<big>$$ \class{blue}{p_{i+1}} = \class{blue}{p_0} + \sum\limits_{k=0}^i\class{green}{v_{k→}} \cdot Δt $$</big>
</div>
<p>Even if we don't know how to build the filter, we can still analyze it. We can integrate the <span class="green">impulse response</span> to get the <span class="blue">step response</span>. But there's a problem: it overshoots, and not by a little. Ideally the filtered signal should settle at the original height. The problem is that the area under the green curve does not add up to 1.</p>
</div>
<div class="step">
<div class="extra bottom edge" data-delay="1">
<big>$$ \class{green}{v_{i→}} = \frac{π}{8} \sin \frac{π}{4} t_i \,\,\,\,\,\,\,\,\,\,\, \class{blue}{p_{i}} = \frac{1}{2} + \frac{1}{2} \cos \frac{π}{4} t_i $$</big>
</div>
<p>To fix this, we divide the <span class="green">impulse response</span> by the area it spans, $ \class{green}{\frac{8}{π}} $, or multiply by $ \class{green}{\frac{π}{8}} $, normalizing it to 1. Such filters are said to have <em>unit DC gain</em>, revealing their ancestry in analog electronics. The step response turns out to be a <span class="blue">cosine curve</span>, and this filter must therefore act like perpetually interruptible cosine easing.</p>
</div>
<div class="step">
<p>There are two ways of interpreting the step response. One is that we pushed a <span class="purple">step</span> through the <span class="green">filter</span>. Another is that we pushed the <span class="green">filter</span> through a <span class="purple">step</span>—an <span class="purple">integrator</span>. This symmetry is a property of <em>convolution</em>, which is the integration-like operator we've been secretly using all along.</p>
</div>
<div class="step">
<p>Convolution is easiest to understand in motion. When you <em>convolve two curves</em> $ \class{purple}{q_i} ⊗ \class{green}{r_i} $, you slide them past each other, after mirroring one of them. As our <span class="green">impulse response</span> is symmetrical, we can ignore that last part for now.</p>
</div>
<div class="step">
<div class="extra bottom edge" data-delay="8">
<big>$$ \class{blue}{p_i} = \class{purple}{q_i} ⊗ \class{green}{r_i} = \class{cyan}{\sum\limits_{k=-∞}^{+∞}} \class{purple}{q_k} \cdot \class{green}{r_{i-k}} $$</big>
</div>
<p>We multiply <span class="purple">both</span> <span class="green">curves</span> with each other, creating a <span class="cyan">new curve</span> in the overlap: here a growing section of the impulse response. The area under this curve is the <span class="blue">output of the filter</span> at that time. The <span class="cyan">sum</span> goes to infinity in both directions, allowing for infinite tails. We already saw something similar when we used a <em>geometric series</em> to determine the final resting position of an inertial scroll gesture. With a FIR filter, the sum ends.</p>
</div>
<div class="step">
<div class="extra bottom edge" data-delay="4.5">
<big>$$ \class{blue}{p_i} = \class{purple}{q_i} ⊗ \class{green}{r_i} = \class{cyan}{\sum\limits_{k=-∞}^{+∞}} \class{purple}{q_k} \cdot \class{green}{r_{i-k}} $$</big>
</div>
<p>But why did we have to mirror one curve? It's simple: from the <span class="green">impulse response</span>'s point of view, new values approach from the positive X side, now left, not the negative X side, right. By flipping the impulse response, it faces the <span class="purple">other signal</span>, which is what we want.</p>
</div>
<div class="step">
<div class="extra bottom edge" data-hold="2">
<big>$$ \class{blue}{p_i} = \class{green}{r_i} ⊗ \class{purple}{q_i} = \class{cyan}{\sum\limits_{k=-∞}^{+∞}} \class{green}{r_k} \cdot \class{purple}{q_{i-k}} $$</big>
</div>
<p>If we center the view on the impulse response, it's clear we've swapped the role of the two curves. Now it's the <span class="purple">step</span> that's passing backwards through the <span class="green">filter</span>, rather than the other way around.</p>
</div>
<div class="step">
<p>If we replace the step response with a <span class="purple">random burst of signal</span>, the filter can work its magic, smoothing out the input through convolution. It's a <em>weighted average</em> with a sliding window. The filter still lags behind the data, but the tail is now finite.</p>
</div>
<div class="step">
<p>If we make the window narrower, its amplitude increases due to the normalization. We get a <span class="blue">more variable curve</span>, but also a shorter tail. This is like a blur filter in Photoshop, only in 1D instead of 2D. As Photoshop has the entire image at its disposal, rather than processing a real-time signal, it doesn't have to worry about lag: it can compensate directly by shifting the result back a constant distance when it's done.</p>
</div>
<div class="step">
<div class="extra bottom edge" data-delay="3.5">
<big>$$ \class{blue}{ease(f)} = \frac{1}{2} - \frac{1}{2} \cdot \cos πf
\,\,\,\,\,\,\,\,\,\,\,
\class{green}{slope(f)} = \frac{1}{2}π \cdot \sin πf $$</big>
</div>
<p>What about custom filter design? Well, if you're an engineer, that's a topic for advanced study, learning to control the level and phase at exact frequencies. If you're an animator, it's much simpler: you pick a desired <span class="blue">easing curve</span>, and use its <span class="green">velocity</span> to make a normalized filter. You end up with the exact same step response, turning the easing curve into a perpetually interruptible animation.</p>
</div>
<div class="step">
<div class="extra bottom edge" data-delay="2" data-hold="1">
<big>$$ \class{blue}{ease(f)} = (\frac{1}{2} - \frac{1}{2} \cdot \cos πf) \cdot (1 + 20f \cdot (1 - f)^\frac{5}{2}) $$</big>
</div>
<p>Which leads to the last trick in this chapter: removing lag on a real-time filtered signal. There's always an inherent delay in any filter, where signals are shifted by roughly half the window length. We can't get rid of it, only reduce it. We have to change the filter to prefer certain frequencies over others, making it <em>resonate</em> to the kind of signal we expect. We use an <span class="blue">easing curve</span> that overshoots, and preferably a short one. This is just one I made up.</p>
</div>
<div class="step">
<p>The <span class="green">velocity</span>—here scaled down—now has a positive and negative part. As neither part is normalized by itself, the filter will first <span class="blue">amplify</span> any signal it encounters. The second part then compensates by pulling the level <span class="blue">back down</span>.</p>
</div>
<div class="step">
<p>
The result is that the filter actually tries to <span class="purple">predict</span> the signal, which you can imagine is a useful thing to do. At certain points, the lag is close to 0, when the resonance frequency matches and slides into phase. When applied to animation, resonant filters can create jelly-like motions. When applied to electronic music at about 220 Hz, you get Acid House.
</p>
</div>
<div class="step">
<p>
Let's put it all together, just for fun. Here we have some <span class="blue">particles</span> being simulated with Verlet integration. Each particle experiences <span class="orangered">three forces</span>. <em>Radial containment</em> pushes them to within a certain distance of the <span class="purple">target</span>. <em>Friction</em> slows them down, opposing the <span class="green">direction of motion</span>. A <em>constantly rotating force</em>, different for each particle, keeps them from bunching up. The target follows the mouse, with double exponential easing.
</p>
</div>
<div class="step">
<p>
Friction links <span class="orangered">acceleration</span> to <span class="green">velocity</span>. Containment links <span class="orangered">acceleration</span> to <span class="blue">position</span>. And integration links them back the other way. These circular dependencies are not a problem for a good physics engine. Note that the particles <em>do not interact</em>, they just happen to follow similar rules.<br>
<em>Tip: Move the mouse and hold <kbd>Shift</kbd> to see variable frame rate physics in action.</em>
</p>
</div>
<div class="step">
<p>
If we add up the <span class="orangered">three forces</span> and trace out curves again, we can watch the <span class="blue">particles</span>—and their derivatives—speed through time. Just like you are doing right now, in your chair. As <span class="green">velocity</span> and <span class="orangered">acceleration</span> only update in steps, their curves will only be smooth if the physics clock and rendering clock are synced.
</p>
</div>
</div>
</div>
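The particle setup described above can be sketched in a few lines. This is a minimal sketch, not the demo's actual code: the force constants, the containment radius, and the fixed target are all invented for illustration (the real demo's target follows the mouse).

```python
import math

# One particle chasing a fixed target under the three forces described:
# radial containment, friction, and a constantly rotating force.
dt = 1 / 60.0
target = (1.0, 0.0)
radius = 0.5                      # containment distance (invented)
pos = (0.0, 0.0)
prev = (0.0, 0.0)
angle = 0.0

for frame in range(600):          # ten seconds of simulation
    vx = (pos[0] - prev[0]) / dt  # velocity estimated from positions
    vy = (pos[1] - prev[1]) / dt
    dx, dy = pos[0] - target[0], pos[1] - target[1]
    dist = math.hypot(dx, dy) or 1e-9
    # Radial containment: a spring force, active only outside the radius.
    k = -20.0 * max(0.0, dist - radius)
    ax, ay = k * dx / dist, k * dy / dist
    # Friction: opposes the direction of motion.
    ax -= 2.0 * vx
    ay -= 2.0 * vy
    # Constantly rotating force, to keep the particle from settling.
    angle += 3.0 * dt
    ax += 0.5 * math.cos(angle)
    ay += 0.5 * math.sin(angle)
    # Verlet step: next = 2 * pos - prev + a * dt^2.
    pos, prev = (2 * pos[0] - prev[0] + ax * dt * dt,
                 2 * pos[1] - prev[1] + ay * dt * dt), pos
```

Note that the friction and containment forces depend on position and velocity, which in turn come from the integration itself: the circular dependency is handled naturally by stepping one frame at a time.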

By manipulating time, we've managed to eliminate frame rate issues altogether, even make it work to our advantage. We've discovered more accurate physics engines, so we don't have to waste time simulating tiny steps. We've also created interruptible animations and turned them into filters. We can choose their easing curves and use feedback systems to remove the need for any manual interruptions altogether.
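An interruptible animation of this kind can be as small as one lerp per frame. A minimal sketch, with the helper names, the smoothing fraction, and the frame counts chosen purely for illustration:

```python
def lerp(a, b, f):
    # Move a fraction f of the remaining distance from a to b.
    return a + (b - a) * f

def exp_ease(targets, f=0.1, start=0.0):
    # Exponential easing as an IIR filter: each frame moves the value a
    # constant fraction toward the current target. Changing the target at
    # any frame never causes a jump: the animation is interruptible.
    p, out = start, []
    for target in targets:
        p = lerp(p, target, f)
        out.append(p)
    return out

# Retarget halfway through: no discontinuity, just a new approach curve.
trace = exp_ease([1.0] * 60 + [0.25] * 60, f=0.1)
```

Because the filter only remembers one value, retargeting is free: there is no animation to cancel, only a new target to chase.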

Here, *linear time-invariant* systems are very useful building blocks: they are simple to implement, but eminently customizable. Both IIR and FIR filters are simple in their basic form. We can also combine feedback systems with other physical or unphysical forces: we can move the target any way we like, perhaps superimposing variation onto existing curves. If we broaden our horizons a bit, we can find applications outside of animation: data analysis, audio manipulation, image processing, and much, much more.
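The FIR recipe from the slideshow—sample an easing curve's velocity, normalize it to unit DC gain, convolve—can be sketched directly. The function names and the 30-tap window are illustrative, not from the original demos:

```python
import math

def fir_from_ease(ease, taps):
    # Finite differences of the easing curve give its velocity; the
    # weights telescope to ease(1) - ease(0) = 1, i.e. unit DC gain.
    w = [ease((i + 1) / taps) - ease(i / taps) for i in range(taps)]
    total = sum(w)
    return [x / total for x in w]

def convolve(signal, kernel):
    # Direct convolution: each output sample is a weighted average of
    # the most recent len(kernel) input samples.
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, wk in enumerate(kernel):
            if i - k >= 0:
                acc += wk * signal[i - k]
        out.append(acc)
    return out

cosine_ease = lambda f: 0.5 - 0.5 * math.cos(math.pi * f)
kernel = fir_from_ease(cosine_ease, 30)
step_response = convolve([1.0] * 90, kernel)
```

By construction, the filter's step response retraces the easing curve and settles at exactly 1, turning any easing curve into a perpetually interruptible filter.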

Of course, there are plenty of non-linear and/or non-time-invariant systems too, too many to cover. When dealing with animation though, we'll prefer systems based on physics. They're just the trick to turn a bunch of artificial data into something that feels slick and natural. That said, physics itself is sometimes non-linear: fluids like water, smoke or fire are perfect examples. Solving those particular boondoggles requires the kind of calculus that frightens most adults and large children, so we won't go into that here. It's the same thing though: you simulate it finitely with a couple of clever tricks and the awesome power of raw number crunching.

*Continued in part two.*

<script type="text/javascript"> window.MathJax && MathJax.Hub.Queue(["Typeset",MathJax.Hub]); Acko.queue(function () { Acko.Fallback.warnWebGL(); }); </script>

“It is known that there are an infinite number of worlds, simply because there is an infinite amount of space for them to be in. However, not every one of them is inhabited. Therefore, there must be a finite number of inhabited worlds.

Any finite number divided by infinity is as near to nothing as makes no odds, so the average population of all the planets in the universe can be said to be zero. From this it follows that the population of the whole universe is also zero, and that any people you may meet from time to time are merely the products of a deranged imagination.” – The Restaurant at the End of the Universe, Douglas Adams

If there's one thing mathematicians have a love-hate relationship with, it has to be *infinity*. It's the ultimate tease: it beckons us to come closer, but never allows us anywhere near it. No matter how far we travel to impress it, infinity remains disinterested, equally distant from everything: infinitely far!

$$ 0 < 1 < 2 < 3 < … < \infty $$

Yet infinity is not just desirable, it is absolutely necessary. All over mathematics, we find problems for which no finite amount of steps will help resolve them. Without infinity, we wouldn't have real numbers, for starters. That's a problem: our circles aren't round anymore (no $ π $ and $ \tau $) and our exponentials stop growing right (no $ e $). We can throw out all of our triangles too: most of their sides have exploded.

A steel railroad bridge with a 1200 ton counter-weight.

Completed in 1910. Source: Library of Congress.

We like infinity because it helps avoid all that. In fact even when things are not infinite, we often prefer to pretend they are—we do geometry in infinitely big planes, because then we don't have to care about where the edges are.

Now, suppose we want to analyze a steel beam, because we're trying to figure out if our proposed bridge will stay up. If we want to model reality accurately, that means simulating each individual particle, every atom in the beam. Each has its own place and pushes and pulls on others nearby.

But even just $ 40 $ grams of pure iron contains $ 4.31 \cdot 10^{23} $ atoms. That's an inordinate amount of things to keep track of for just 1 teaspoon of iron.

Instead, we pretend the steel is solid throughout. Rather than being composed of atoms with gaps in between, it's made of some unknown, filled in material with a certain density, expressed e.g. as *grams per cubic centimetre*. Given any shape, we can determine its volume, and hence its total mass, and go from there. That's much simpler than counting and keeping track of individual atoms, right?
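The two styles of bookkeeping take one line of arithmetic each. The molar mass of iron and the density of steel are standard figures; the beam's dimensions here are made up:

```python
# Atom bookkeeping: 40 g of iron, molar mass ~55.845 g/mol,
# Avogadro's number ~6.022e23 atoms per mole.
atoms = 40.0 / 55.845 * 6.022e23      # ~4.31e23 atoms

# Continuum bookkeeping: a (hypothetical) 10 x 10 x 500 cm steel beam
# at ~7.85 g/cm^3. Volume times density gives the mass directly.
volume = 10 * 10 * 500                # cm^3
mass = volume * 7.85                  # grams, ~392.5 kg
```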

Unfortunately, that's not quite true.

Like all choices in mathematics, this one has consequences we cannot avoid. Our beam's density is *mass per volume*. Individual points in space have zero volume. That would mean that at any given point inside the beam, the amount of mass there is $ 0 $. How can a beam that is entirely composed of nothing be solid and have a non-zero mass?

*Bam! No more iron anywhere.*

While Douglas Adams was being deliberately obtuse, there's a kernel of truth there, which is a genuine paradox: what exactly is the mass of every atom in our situation?

To make our beam solid and continuous, we had to shrink every atom down to an infinitely small point. To compensate, we had to create infinitely many of them. Dividing the finite mass of the beam between an infinite amount of atoms should result in $ 0 $ mass per atom. Yet all these masses still have to add up to the total mass of the beam. This suggests $ 0 + 0 + 0 + … > 0 $, which seems impossible.

If the mass of every atom were not $ 0 $, and we have infinitely many points inside the beam, then the total mass is infinity times the atomic mass $ m $. Yet the total mass is finite. This suggests $ m + m + m + … < \infty $, which also doesn't seem right.

It seems whatever this number $ m $ is, it can't be $ 0 $ and can't be non-zero. It's definitely not infinite, we only had a finite mass to begin with. It's starting to sound like we'll have to invent a whole new set of numbers again to even find it.
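The tension is easy to reproduce numerically: divide a fixed mass over ever more atoms, and the per-atom mass heads toward zero while every finite count still adds back up to the total. A sketch, with an invented total:

```python
# Fixed total mass, divided over ever more "atoms": the per-atom mass
# shrinks toward 0, yet at every finite count it sums back to the total.
# Only "at infinity" do the two bookkeeping rules collide.
total_mass = 1000.0   # grams (invented figure)
counts = [10, 10**6, 10**12, 10**18]
per_atom = [total_mass / n for n in counts]
```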

That's effectively what Isaac Newton and Gottfried Leibniz set in motion at the end of the 17th century, when they both discovered calculus independently. It was without a doubt the most important discovery in mathematics and resulted in formal solutions to many problems that were previously unsolvable; our entire understanding of physics has relied on it since. Yet it took until the late 19th century for the works of Augustin Cauchy and Karl Weierstrass to pop up, which formalized the required theory of *convergence*. This allows us to describe exactly how differences can shrink down to nothing as you approach infinity. Even that wasn't enough: it was only in the 1960s when the idea of infinitesimals as fully functioning numbers—the hyperreal numbers—was finally proven to be consistent enough by Abraham Robinson.

But it goes back much further. Ancient mathematicians were aware of problems of infinity, and used many ingenious ways to approach it. For example, $ π $ was found by considering circles to be infinite-sided polygons. Archimedes' work is likely the earliest use of *indivisibles*, using them to imagine tiny mechanical levers and find a shape's center of mass. He's better known for running naked through the streets shouting Eureka! though.

That it took so long shows that this is not an easy problem. The proofs involved are elaborate and meticulous, all the way back. They have to be, in order to nail down something as tricky as infinity. As a result, students generally learn calculus through the simplified methods of Newton and Leibniz, rather than the most mathematically correct interpretation. We're taught to mix notations from 4 different centuries together, and everyone's just supposed to connect the dots on their own. Except the trail of important questions along the way is now overgrown with jungle.

Still, it shows that even if we don't understand the whole picture, we can get a lot done. This article is in no way a formal introduction to infinitesimals. Rather, it's a demonstration of why we might need them.

What is happening when we shrink atoms down to points? Why does it make shapes solid yet seemingly hollow? Is it ever meaningful to write $ x = \infty $? Is there only one infinity, or are there many different kinds?

To answer that, we first have to go back to even simpler times, to Ancient Greece, and start with the works of Zeno.

Zeno of Elea was one of the first mathematicians to pose these sorts of questions, effectively trolling mathematics for the next two millennia. He lived in the 5th century BC in southern Italy, although only second-hand references survive. In his series of paradoxes, he examines the nature of equality, distance, continuity, of time itself.

Because it's the ancient times, our mathematical knowledge is limited. We know about zero, but we're still struggling with the idea of nothing. We've run into negative numbers, but they're clearly absurd and imaginary, unlike the positive numbers we find in geometry. We also know about fractions and ratios, but square roots still confuse us, even though our temples stay up.

Limits are the first tool in our belt for tackling infinity. Given a sequence described by countable steps, we can attempt to extend it not just to the end of the world, but literally forever. If this works we end up with a finite value. If not, the limit is undefined. A limit can be equal to $ \infty $, but that's just shorthand for *the sequence has no upper bound*. Negative infinity means no lower bound.

Until now we've only encountered fractions, that is, *rational numbers*. Each of our sums was made of fractions. The limit, if it existed, was also a rational number. We don't know whether this was just a coincidence.

It might seem implausible that a sequence of numbers that is 100% rational and converges, can approach a limit that isn't rational at all. Yet we've already seen similar discrepancies. In our first sequence, every partial sum was *less than* $ 1 $. Meanwhile the limit of the sum was *equal to* $ 1 $. Clearly, the limit does not have to share all the properties of its originating sequence.
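Using the halving series as a stand-in for that first sequence, exact rational arithmetic shows both facts at once: every partial sum is strictly less than 1, yet the gap to 1 shrinks geometrically toward the limit.

```python
from fractions import Fraction

# Partial sums of 1/2 + 1/4 + 1/8 + ...: every one is strictly less
# than 1, yet the sequence converges to exactly 1 in the limit.
total = Fraction(0)
partials = []
for n in range(1, 31):
    total += Fraction(1, 2 ** n)
    partials.append(total)
```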

We also haven't solved our original problem: we've only chopped things up into infinitely many *finite pieces*. How do we get to *infinitely small pieces*? To answer that, we need to go looking for *continuity*.

Generally, continuity is defined by what it is and what its properties are: a noticeable lack of holes, and no paradoxical values. But that's putting the cart before the horse. First, we have to show which holes we're trying to plug.

<div class="wide slideshow full"> <div class="iframe c"> <iframe src="/files/infinity-and-beyond/mb-5-continuity.html" class="mathbox paged autosize" height="320"></iframe> </div> <div class="steps"> <div class="step"> <p>Let's imagine the <em>rational numbers</em>.</p> </div> <div class="step"> <p>Actually, hold on. Is this really a line? The integers certainly weren't connected.</p> </div> <div class="step"> <p class="math">Rather than assume anything, we're going to attempt to visualize all the rational numbers. We'll start with the <span class="blue">numbers between $ 0 $ and $ 1 $</span>.</p> </div> <div class="step"> <div class="extra bottom"><big>$$ \class{blue}{\frac{0 + 1}{2}} $$</big></div> <p class="math">Between any two numbers, we can find a new number in between: their average. This leads to $ \frac{1}{2} $.</p> </div> <div class="step"> <div class="extra bottom"><big>$$ \frac{a + b}{2} $$</big></div> <p>By repeatedly taking averages, we keep finding new numbers, filling up the interval.</p> </div> <div class="step"> <p>If we separate out every step, we get a <em>binary tree</em>.</p> </div> <div class="step"> <p class="math">You can think of this as a map of all the fractions of $ 2^n $. Given any such fraction, say <big>$ \frac{13}{32} = \frac{13}{2^5} $</big>, there is a unique path of lefts and rights that leads directly to it. At least, as long as it lies between $ 0 $ and $ 1 $.</p> </div> <div class="step"> <p class="math"> Note that the graph resembles a fractal and that the distance to the top edge is divided in half with every step. But we only ever explore a finite amount of steps. Therefore, we are not taking a limit and we'll never actually touch the edge. 
</p> </div> <div class="step"> <div class="extra left" data-hold="1">$$ \frac{2 \cdot a + b}{3} $$</div> <div class="extra right" data-hold="1">$$ \frac{a + 2 \cdot b}{3} $$</div> <p class="math">But we can take thirds as well, leading to fractions with a power of $ 3^n $ in their denominator.</p> </div> <div class="step"> <p class="math">As some numbers can be reached in multiple ways, we can eliminate some lines, and end up with this graph, where every number sprouts into a three-way, <em>ternary tree</em>. Again, we have a map that gives us a unique path to any fraction of $ 3^n $ in this range, like <big>$ \frac{11}{27} = \frac{11}{3^3} $</big>.</p> </div> <div class="step"> <div class="extra"> $$ \frac{21}{60} = \frac{21}{2^2 \cdot 3 \cdot 5} $$ </div> <p class="math">Because we can do this for any denominator, we can define a way to get to any rational number in a finite amount of steps. Take for example <big>$ \frac{21}{60} $</big>. We decompose its denominator into prime numbers and begin with $ 0 $ and $ 1 $ again.</p> </div> <div class="step"> <div class="extra top edge hold3"> <small><small> $$ \frac{21}{60} = \frac{21}{2^2 \cdot 3 \cdot 5} $$ </small></small> </div> <p class="math">There is a division of $ 2^2 $, so we do two binary splits. This time, I'm repeating the previously found numbers so you can see the regular divisions more clearly. We get <span class="green">quarters</span>.</p> </div> <div class="step"> <p class="math">The next factor is $ 3 $ so we divide into thirds once. 
We now have <span class="gold">twelfths</span>.</p> </div> <div class="step"> <p class="math">For the last division we chop into fifths and get <span class="orangered">sixtieths</span>.</p> </div> <div class="step"> <p class="math">$ \frac{21}{60} $ is now the <span class="orangered">21st number from the left</span>.</p> </div> <div class="step"> <p class="math">But this means we've found a clear way to visualize <em>all</em> the rational numbers between $ 0 $ and $ 1 $: it's all the numbers we can reach by applying a finite number of binary (2), ternary (3), quinary (5) etc. divisions, for any denominator. So there's always a finite <em>gap</em> between any two rational numbers, even though there are infinitely many of them.</p> </div> <div class="step"> <p>The rational numbers are <em>not continuous</em>. Therefore, it is more accurate to picture them as a set of tick marks than a connected number line.</p> </div> <div class="step"> <p class="math">To find continuity then, we need to revisit one of our earlier trees. We'll pick the binary one.<br />While every fork goes two ways, we actually have a third choice at every step: we can choose to stop. That's how we get a finite path to a whole fraction of $ 2^n $.</p> </div> <div class="step"> <p>But what if we never stop? We have to apply a limit: we try to spot a pattern and try to <span class="green">fast-forward it</span>. Note that by halving each step vertically on the graph, we've actually <em>linearized</em> each approach into a straight line which ends. 
Now we can <em>take limits visually</em> just by intersecting lines with the top edge.</p> </div> <div class="step"> <p class="math">Right away we can spot two convergent limits: by always choosing either the <span class="orangered">left</span> or the <span class="green">right</span> branch, we end up at respectively $ 0 $ and $ 1 $.</p> </div> <div class="step"> <p class="math"><span class="orangered">These</span> <span class="green">two</span> sequences both converge to $ \frac{1}{2} $. It seems that 'at infinity steps', the graph meets up with itself in the middle.</p> </div> <div class="step"> <p class="math">But the graph is now a true fractal. So the same convergence can be found here. In fact, the graph meets up with itself anywhere there is a multiple of <big>$ \frac{1}{2^n} $</big>.</p> </div> <div class="step"> <p class="math">That's pretty neat: now we can eliminate the option of stopping altogether. Instead of ending at $ \frac{5}{16} $, we can simply take <span class="purple">one additional step in either direction, followed by infinitely many opposite steps</span>. Now we're <em>only</em> considering paths that are infinitely long.</p> </div> <div class="step"> <p class="math">But if this graph only leads to fractions of $ 2^n $, then there must be gaps between them. In the limit, the distance between any two adjacent numbers in the graph shrinks down to <em>exactly</em> $ 0 $, which suggests there are no gaps. This infinite version of the binary tree must lead to a lot more numbers than we might think.<br /> Suppose we take a path of <span class="orangered">alternating left and right steps</span>, and extend it forever. Where do we end up?</p> </div> <div class="step"> <p>We can apply the same principle of an <span class="gold">upper and lower bound</span>, but now we're approaching from both sides at once. 
Thanks to our linearization trick, the entire sequence fits snugly inside a triangle.</p> </div> <div class="step"> <p class="math">If we zoom into the convergence at infinity, we actually end up at $ \class{orangered}{\frac{2}{3}} $.<br /> Somehow we've managed to coax a fraction of $ 3 $ out of a perfectly regular <em>binary</em> tree.</p> </div> <div class="step"> <p class="math">If we <span class="orangered">alternate two lefts with one right</span>, we can end up at $ \class{orangered}{\frac{4}{7}} $. This is remarkable: when we tried to visualize all the rational numbers by combining all kinds of divisions, we were overthinking it. We only needed to take <em>binary divisions</em> and repeat them infinitely with a <em>limit</em>.</p> </div> <div class="step"> <p>Every single rational number can then be found by taking a finite amount of steps to get to a certain point, and then settling into a <em>repeating pattern of lefts and/or rights</em> all the way to infinity.</p> </div> <div class="step"> <p class="math">If we can find numbers between $ 0 $ and $ 1 $ this way, we can apply the exact same principle to the range $ 1 $ to $ 2 $. So we can connect two of these graphs into a single graph with its tip at $ 1 $.</p> </div> <div class="step"> <p class="math">But we can repeat it as much as we like. The full graph is not just infinitely divided, but infinitely big, in that no finite box can contain it. That means it leads to <em>every single positive rational number</em>. We can start anywhere we like. Is your mind blown yet?</p> </div> <div class="step"> <p class="math">No? Ok. But if this works for positives, we can build a similar graph for the negatives just by mirroring it. So we now have a map of the entire rational number set. All we need to do is take <em>infinite paths that settle into a repeating pattern</em> from either a positive or a negative starting point. 
When we do, we find every such path leads to a rational number.<br /> So any rational number can be found by taking an infinite stroll on one of <em>two</em> infinite binary trees.</p> </div> <div class="step"> <p class="math">Wait, did I say two infinite trees? Sorry, I meant <em>one</em> infinitely big tree.<br />See, if we repeatedly scale up a <span class="green">fractal binary tree</span> and apply a limit to that, we end up with almost exactly the same thing. Only this time, the two downward diagonals always eventually fold back towards $ 0 $. This creates a path of <em>infinity + 1</em> steps downward. While that might not be very practical, it suggests you can ride out to the restaurant at the end of the universe, have dinner, and take a single step to get back home.</p> </div> <div class="step"> <p class="math">Is it math, or visual poetry? It's time to bring this fellatio of the mind to its inevitable climax.</p> </div> <div class="step"> <div class="extra hold2 bottom left" data-align-x=".4" data-align-y=".8">$ \class{blue}{0} $</div> <div class="extra hold2 bottom right" data-align-x=".4" data-align-y=".8">$ \class{green}{1} $</div> <div class="extra hold2 left top" data-align-x="1.0" data-align-y="0.25">$ \class{blue}{0} $</div> <div class="extra hold2 left top" data-align-x=".6" data-align-y="0.25">$ \class{green}{1} $</div> <div class="extra hold2 right top" data-align-x=".6" data-align-y="0.25">$ \class{blue}{0} $</div> <div class="extra hold2 right top" data-align-x="1.0" data-align-y="0.25">$ \class{green}{1} $</div> <p class="math">You may wonder, if this map is so amazing, how did we ever do without?<br /> Let's label our branches. If we go left, we call it $ 0 $. 
If we go right, we call it $ 1 $.</p> </div> <div class="step"> <div class="extra"> $$ \frac{5}{3} = \class{green}{11}\class{blue}{0}\hspace{2pt}\class{green}{1}\class{blue}{0}\hspace{2pt}\class{green}{1}\class{blue}{0}… $$ </div> <p class="math">We can then identify any number by writing out the infinite path that leads there as a sequence of ones and zeroes—bits.<br /><br />But you already knew that.</p> </div> <div class="step"> <div class="extra"> $$ \frac{5}{3} = \class{green}{1}.\class{green}{1}\class{blue}{0}\hspace{2pt}\class{green}{1}\class{blue}{0}\hspace{2pt}\class{green}{1}\class{blue}{0}…_2 $$ </div> <p class="math">See we've just rediscovered the binary number system. We're so used to numbers in decimal, <em>base 10</em>, we didn't notice. Yet we all learned that rational numbers consist of digits that settle into a repeating sequence, a <em>repeating pattern of turns</em>. Disallowing finite paths works the same, even in decimal: the number $ 0.95 $ can be written as $\, 0.94999…\, $, i.e. <em>take one final step in one direction, followed by infinitely many steps the other way</em>.</p> </div> <div class="step"> <div class="extra"> $$ \frac{4}{5} = \class{blue}{0}.\class{green}{11}\class{blue}{00}\hspace{2pt}\class{green}{11}\class{blue}{00}…_2 $$ </div> <p class="math"> When we write down a number digit by digit, we're really <em>following the path to it</em> in a graph like this, dialing the number's … er … number. The rationals aren't <em>shaped</em> like a binary tree, rather, they <em>look</em> like a binary tree when viewed through the lens of binary division. Every infinite binary, ternary, quinary, etc. tree is then a different but complete perspective of the same underlying thing. 
We don't have <em>the</em> map, we have one of infinitely many maps.</p> </div> <div class="step"> <div class="extra top" data-delay=".8" data-align-y=".25"> $$ π = \class{green}{11}.\class{blue}{00}\class{green}{1}\class{blue}{00}\class{green}{1}\class{blue}{0000}\class{green}{1}…_2 $$ </div> <p class="math"> Which means we can show this graph is actually an interdimensional number portal.<br /> See, we already know <em>where</em> the missing numbers are. Irrational numbers like $ π $ form a never-repeating sequence of digits. If we want to reach $ π $, we find it's at the end of an infinite path whose turns <em>do not repeat</em>. By allowing such paths, our map leads us straight to them. Even though it's made out of only <em>one kind of rational number</em>: division by two.</p> </div> <div class="step"> <div class="extra" data-delay="1.6"><big><big> $$ π = \mathop{\class{no-outline}{►\hspace{-2pt}►}}_{\infty\hspace{2pt}} x_n \,? $$ </big></big> </div> <p class="math"> So now we've invented <em>real numbers</em>. How do we visualize this invention? And where does continuity come in? What we need is a procedure that generates such a non-repeating path <em>when taken to the limit</em>. Then we can figure out where the behavior at infinity comes from. </p> </div> <div class="step"> <p>Because the path never settles into a pattern, we can't pin it down with a single neat triangle like before. We try something else. At every step, we can see that the <em>smallest</em> number we can still reach is found by <span class="orangered">always going left</span>. Similarly, the <em>largest</em> available number is found by <span class="green">always going right</span>. 
Wherever we go from here, it will be somewhere in this range.</p> </div> <div class="step"> <p>We can set up shrinking intervals by placing such triangles along the path, forming a nested sequence.</p> </div> <div class="step"> <div class="extra left"> $$ \begin{align} 3 \leq & π \leq 4 \\ 3.1 \leq & π \leq 3.2 \\ 3.14 \leq & π \leq 3.15 \\ 3.141 \leq & π \leq 3.142 \\ 3.1415 \leq & π \leq 3.1416 \\ 3.14159 \leq & π \leq 3.14160 \\ \end{align} $$ </div> <div class="extra right"> $$ \begin{align} 11_2 \leq & π \leq 100_2 \\ 11.0_2 \leq & π \leq 11.1_2 \\ 11.00_2 \leq & π \leq 11.01_2 \\ 11.001_2 \leq & π \leq 11.010_2 \\ 11.0010_2 \leq & π \leq 11.0011_2 \\ 11.00100_2 \leq & π \leq 11.00101_2 \\ \end{align} $$ </div> <p class="math"> What we've actually done is rounded up and down at every step, to find an upper and lower bound with a certain amount of digits. This works in any number base. </p> </div> <div class="step"> <p class="math">Let's examine these intervals by themselves. We can see that due to the binary nature, each interval covers either the left or right side of its ancestor. Because our graph goes on forever, there are infinitely many nested intervals. This <em>tower of $ π $</em> never ends and never repeats itself, we just squeezed it into a finite space so we could see it better.</p> </div> <div class="step"> <p class="math">If we instead approach a rational number like $ \frac{10}{3} = 3.333…\, $ then the tower starts repeating itself at some point. Note that the intervals <em>don't slide smoothly</em>. Each can only be in one of two places relative to its ancestor.</p> </div> <div class="step"> <p class="math">In order to reach a different rational number, like $ 3.999… = 4 $, we have to establish a different repeating pattern. So we have to rearrange infinitely many levels of the tower all at once, from one configuration to another. 
This reinforces the notion that rational numbers are not continuous.</p> </div> <div class="step"> <p class="math">If the tower converges to a number, then the top must be infinitely thin, i.e. $ 0 $ units wide. That would suggest it's meaningless to say what the interval at infinity looks like, because it stops existing. Let's try it anyway.</p> </div> <div class="step"> <p class="math"> There is only one question to answer: does the interval cover the <span class="orangered">left side</span>, or the <span class="green">right</span>? </p> </div> <div class="step"> <p class="math"> Oddly enough, in this specific case of $ 3.999…\, $ there is an answer. The tower <em>leans to the right</em>. Therefore, the state of the interval is the same all the way up. If we take the limit, it converges and the <em>final interval</em> goes right. </p> </div> <div class="step"> <p class="math"> But we can immediately see that we can build a second tower that leans left, which converges on the same number. We could distinguish between the two by writing it as $ 4.000…\, $ In this case the <em>final interval</em> goes left. </p> </div> <div class="step"> <p class="math"> If we approach $ 10/3 $, we take a path of <em>alternating left and right steps</em>. The state of the interval at infinity becomes like our paradoxical lamp from before: it has to be both left and right, and therefore it is neither; it's simply undefined. </p> </div> <div class="step"> <p class="math"> The same applies to irrational numbers like $ π $. Because the sequence of turns never repeats itself, the interval flips arbitrarily between left and right forever, so it is in an undefined state at the end. </p> </div> <div class="step"> <p class="math"> But there's another way to look at this.<br /> If the interval converges to the number $ π $, then the two sequences of <span class="orangered">lower</span> and <span class="green">upper bounds</span> also converge to $ π $ individually.
</p> </div> <div class="step"> <p class="math"> Remember how we derived our bounds: we rounded down by always taking <span class="orangered">lefts</span> and rounded up by always taking <span class="green">rights</span>. The shape of the tower depends on the specific path you're taking, not just the number you reach at the end. </p> </div> <div class="step"> <p class="math"> That means we can think of all the <span class="orangered">lower bounds</span> as ending in $ 0000… \, $ Their towers always lean left. </p> </div> <div class="step"> <p class="math">If we then take the <em>limit of their final intervals</em> as we approach $ π $, that goes <span class="orangered">left</span> too. Note that this is a double limit: first we find the <em>limit of the intervals</em> of each tower individually, then we take the <em>limit over all the towers as we approach $ π $</em>. </p> </div> <div class="step"> <p class="math"> For the same reason, we can think of all the <span class="green">upper bounds</span> as ending in $ 1111 …\, $ Their towers always lean right. When we take the limit of their final intervals and approach $ π $, we find it points <span class="green">right</span>. </p> </div> <div class="step"> <p class="math"> But we could actually just reverse the rounding for the upper and lower bounds, and end up with the exact opposite situation. Therefore it doesn't mean that we've invented a <span class="orangered">red $ π $</span> to the left and <span class="green">green $ π $</span> to the right which are somehow different. $ π $ is $ π $. This only says something about our procedure of building towers. It matters because the towers are how we're trying to reach a real number in the first place. </p> </div> <div class="step"> <p class="math"> See, our tower still represents a binary number of infinitely many bits. Every interval can still only be in one of two places.
To run along the real number line, we'd have to rearrange infinitely many levels of the tower all at once to create motion. That still does not seem continuous. </p> </div> <div class="step"> <p> We can resolve this if we picture the final interval of each tower as a <em>bit at infinity</em>. If we flip the bit at infinity, we swap between two equivalent ways of reaching a number, so this has no effect on the resulting number. </p> </div> <div class="step"> <p class="math"> In doing so, we're actually imagining that every real number is a rational number whose <em>non-repeating head</em> has grown infinitely big. Its <em>repeating tail</em> has been pushed out all the way past infinity. That means we can flip the repeating part of our tower between different configurations without creating any changes in the number it leads to. </p> </div> <div class="step"> <p class="math"> That helps a little bit with the intuition: if the tower keeps working <em>all the way up there</em>, it must be continuous at its actual tip, wherever that really is. A <em>continuum</em> is then what happens when the smallest possible step you can take isn't just as small as you want. It's so small that it no longer makes <em>any</em> noticeable difference. While that's not a very mathematical definition, I find it very helpful in trying to imagine how this might work. </p> </div> <div class="step"> <div class="extra bottom" data-delay="1.6" data-align-y=".15"> $ 1, 2, 3, 4, 5, 6, … $ </div> <p class="math"> Finally, we might wonder how many of each type of number there are.<br />The natural numbers are <em>countably infinite</em>: there is a procedure of steps which, in the limit, counts all of them. 
</p> </div> <div class="step"> <div class="extra bottom" data-align-y=".15"> $$ 1, 2, 3, 4, 5, 6, … $$ <br /><br /><br /><br /> </div> <div class="extra bottom" data-delay=".4" data-align-y=".15"> <br /><br /> $$ \class{orangered}{2, 4, 6, 8, 10, 12, …} $$ <br /><br /> </div> <div class="extra bottom" data-delay=".8" data-align-y=".15"> <br /><br /><br /><br /> $$ \class{green}{0, 1, -1, 2, -2, 3, …} $$ </div> <p class="math"> We can find a similar sequence for the <span class="orangered">even natural numbers</span> by multiplying each number by two. We can also alternate between a positive and negative sequence to count <span class="green">the integers</span>. The three sequences are <em>all</em> countably infinite, and we can relate the elements one-to-one. Which means all sequences are <em>equally long</em>.<br />There are as many even positives as positives. Which is exactly as many as all the integers combined. As counter-intuitive as it is, it is the only consistent answer. </p> </div> <div class="step"> <div class="extra bottom right" style="background-image: url(/files/infinity-and-beyond/rationals.png); background-size: 100%; background-position: 0 0; background-repeat: no-repeat;" data-align-x=".2" data-align-y=".5"><small><small> $$ \begin{array}{cccccccc} 1 \hspace{2pt}&\hspace{2pt} 2 \hspace{2pt}&\hspace{2pt} 3 \hspace{2pt}&\hspace{2pt} 4 \hspace{2pt}&\hspace{2pt} 5 \hspace{2pt}&\hspace{2pt} 6 \hspace{2pt}&\hspace{2pt} … \\[6pt] \frac{1}{2} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{2}{2}} \hspace{2pt}&\hspace{2pt} \frac{3}{2} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{4}{2}} \hspace{2pt}&\hspace{2pt} \frac{5}{2} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{6}{2}} \hspace{2pt}&\hspace{2pt} \\[3pt] \frac{1}{3} \hspace{2pt}&\hspace{2pt} \frac{2}{3} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{3}{3}} \hspace{2pt}&\hspace{2pt} \frac{4}{3} \hspace{2pt}&\hspace{2pt} \frac{5}{3} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{6}{3}} \hspace{2pt}&\hspace{2pt} 
\cdots \\[3pt] \frac{1}{4} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{2}{4}} \hspace{2pt}&\hspace{2pt} \frac{3}{4} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{4}{4}} \hspace{2pt}&\hspace{2pt} \frac{5}{4} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{6}{4}} \hspace{2pt}&\hspace{2pt} \\[3pt] \frac{1}{5} \hspace{2pt}&\hspace{2pt} \frac{2}{5} \hspace{2pt}&\hspace{2pt} \frac{3}{5} \hspace{2pt}&\hspace{2pt} \frac{4}{5} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{5}{5}} \hspace{2pt}&\hspace{2pt} \frac{6}{5} \hspace{2pt}&\hspace{2pt} \\[3pt] \frac{1}{6} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{2}{6}} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{3}{6}} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{4}{6}} \hspace{2pt}&\hspace{2pt} \frac{5}{6} \hspace{2pt}&\hspace{2pt} \class{grey}{\frac{6}{6}} \hspace{2pt}&\hspace{2pt} \\[3pt] \hspace{2pt}&\hspace{2pt} \vdots \hspace{2pt}&\hspace{2pt} \hspace{2pt}&\hspace{2pt} \vdots \hspace{2pt}&\hspace{2pt} \hspace{2pt}&\hspace{2pt} \hspace{2pt}&\hspace{2pt} \hspace{2pt}&\hspace{2pt} \class{white}{\ddots} \end{array} $$ </small></small> </div> <p class="math"> But we can take it one step further: we can find such a sequence for the rational numbers too, by laying out all the fractions on a grid. We can follow diagonals up and down and pass through every single one. If we eliminate duplicates like $ 1 = 2/2 = 3/3 $ and alternate positives and negatives, we can 'count them all'. So there are as many fractions as there are natural numbers. <em>"Deal with it"</em>, says Infinity, donning its sunglasses. </p> </div>

<div class="step">
<div class="extra">
$$
\begin{array}{c}
0.\hspace{1pt}\class{green}{1}\hspace{1pt}0\hspace{1pt}0\hspace{1pt}1\hspace{1pt}1\hspace{1pt}1\hspace{1pt}0\hspace{1pt}…_2 \\
0.\hspace{1pt}1\hspace{1pt}\class{blue}{0}\hspace{1pt}0\hspace{1pt}1\hspace{1pt}0\hspace{1pt}0\hspace{1pt}1\hspace{1pt}…_2 \\
0.\hspace{1pt}1\hspace{1pt}0\hspace{1pt}\class{green}{1}\hspace{1pt}0\hspace{1pt}0\hspace{1pt}1\hspace{1pt}0\hspace{1pt}…_2 \\
0.\hspace{1pt}0\hspace{1pt}1\hspace{1pt}1\hspace{1pt}\class{green}{1}\hspace{1pt}0\hspace{1pt}1\hspace{1pt}1\hspace{1pt}…_2 \\
0.\hspace{1pt}1\hspace{1pt}0\hspace{1pt}1\hspace{1pt}1\hspace{1pt}\class{blue}{0}\hspace{1pt}0\hspace{1pt}1\hspace{1pt}…_2 \\
0.\hspace{1pt}0\hspace{1pt}1\hspace{1pt}0\hspace{1pt}1\hspace{1pt}0\hspace{1pt}\class{blue}{0}\hspace{1pt}0\hspace{1pt}…_2 \\
0.\hspace{1pt}0\hspace{1pt}1\hspace{1pt}1\hspace{1pt}1\hspace{1pt}1\hspace{1pt}0\hspace{1pt}\class{green}{1}\hspace{1pt}…_2 \\
… \\
\\
0.\hspace{1pt}\class{blue}{0}\hspace{1pt}\class{green}{1}\hspace{1pt}\class{blue}{0\hspace{1pt}0}\hspace{1pt}\class{green}{1\hspace{1pt}1}\hspace{1pt}\class{blue}{0}\hspace{1pt}…_2
\end{array}
$$
</div>
<p class="math">
The real numbers on the other hand are <em>uncountably infinite</em>: no process can list them all in the limit. The proof is short: suppose we did have a sequence of all the real numbers between $ 0 $ and $ 1 $ in some order. We could then build a new number by taking <span class="green">all</span> <span class="blue">the</span> <span class="green">bits</span> <span class="green">on</span> <span class="blue">the</span> <span class="blue">diagonal</span>, and flipping zeroes and ones.<br />That means this number is different from every listed number in at least one digit, so it's <em>not on the list</em>. But it's also between $ 0 $ and $ 1 $, so it should be. Therefore, <em>the list can't exist</em>.
</p>
</div>
<div class="step">
<p class="math">
This even matches our intuitive explanation from earlier. There are so many real numbers, that we had to invent a bit at infinity to try and count them, and find something that would <em>tick at least once</em> for every real number. Even then we couldn't say whether it was $ 0 $ or $ 1 $ anywhere in particular, because it literally depends on how you approach it.
</p>
</div>

</div>

</div>

What we just did was a careful exercise in hiding the obvious, namely the digit-based number systems we are all familiar with. By viewing them not as digits, but as paths on a directed graph, we get a new perspective on just what it means to use them. We've also seen how this means we can construct the rationals and reals using the fewest possible ingredients: division by two, and limits.
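This path-following view is easy to play with. Here's a minimal sketch of mine (the helper `binary_path` is made up for illustration, not part of the original diagrams): it finds a number's left/right turns by repeatedly halving the interval, exactly the walk down the binary tree described above.

```python
from fractions import Fraction

def binary_path(x, steps):
    """Walk the binary tree: at each level go left (0) into the lower
    half of the current interval, or right (1) into the upper half."""
    lo, hi = Fraction(0), Fraction(1)
    path = []
    for _ in range(steps):
        mid = (lo + hi) / 2
        if x < mid:
            path.append(0)   # left turn
            hi = mid
        else:
            path.append(1)   # right turn
            lo = mid
    return path

# 2/3 = 0.101010..._2: a repeating pattern of rights and lefts
print(binary_path(Fraction(2, 3), 8))  # → [1, 0, 1, 0, 1, 0, 1, 0]
# 4/7 = 0.100100..._2: one right alternating with two lefts
print(binary_path(Fraction(4, 7), 6))  # → [1, 0, 0, 1, 0, 0]
```

The repeating patterns of turns are exactly the repeating binary digits of each fraction.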

In school, we generally work with the decimal representation of numbers. As a result, the popular image of mathematics is that it's the science of *digits*, not the underlying structures they represent. This permanently skews our perception of what numbers really are, and it's easy to demonstrate. You can google countless arguments about why $ 0.999… $ is or isn't equal to $ 1 $. Yet nobody's wondering why $ 0.000… = 0 $, though it's practically the same problem: $ 0.1, 0.01, 0.001, 0.0001, … $
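The two questions really are the same shrinking-gap problem, which a few lines make concrete. A sketch using exact fractions, so no floating-point rounding can muddy the point (`nines` is a made-up helper name):

```python
from fractions import Fraction

def nines(n):
    """The n-digit truncation of 0.999...: 0.9, 0.99, 0.999, ..."""
    return Fraction(10**n - 1, 10**n)

# The distance to 1 is exactly the matching term of 0.1, 0.01, 0.001, ...
for n in range(1, 6):
    assert 1 - nines(n) == Fraction(1, 10**n)

print(float(1 - nines(20)))  # → 1e-20: the same gap that 0.000... closes
```

Asking whether $ 0.999… $ reaches $ 1 $ is asking whether that gap reaches $ 0 $.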

Furthermore, in decimal notation, rational numbers and real numbers look incredibly alike: $ 3.3333… $ vs $ 3.1415…\, $ The question of what it actually means to have infinitely many non-repeating digits, and why this results in continuous numbers, is hidden away in those three dots at the end. By imagining $ π $ as $ 3.1415…0000… $ or $ 3.1415…1111… $ we can intuitively bridge the gap to the infinitely small. We see how the distance between two neighbouring real numbers must be so small that it really is equivalent to $ 0 $.
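The tower of nested intervals from earlier can be rebuilt directly: round $ π $ down and up at each digit. A quick sketch (the helper `bounds` is hypothetical, but the outputs match the table above):

```python
import math

def bounds(x, digits):
    """Round x down and up at a given number of decimal digits,
    giving one floor of the tower of nested intervals."""
    scale = 10 ** digits
    return math.floor(x * scale) / scale, math.ceil(x * scale) / scale

for d in range(5):
    print(bounds(math.pi, d))
# → (3.0, 4.0), (3.1, 3.2), (3.14, 3.15), (3.141, 3.142), (3.1415, 3.1416)
```

Each interval is a tenth as wide as the one before it; the three dots stand for continuing this forever.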

That's not as crazy as it sounds. In the field of *hyperreal numbers*, every number actually has additional digits 'past infinity': that's its infinitesimal part. You can imagine this to be a multiple of $ \frac{1}{\infty} $, an infinitely small unit greater than $ 0 $, which I'll call $ ε $. The idea of equality is then replaced with *adequality*: being equal aside from an infinitely small difference.

You can explore this hyperreal number line below.

Note that $ ε^2 $ is also infinitesimal. In fact, it's even infinitely smaller than $ ε $, and we can keep doing this for $ ε^3, ε^4, …\,$ To make matters worse, if $ ε $ is infinitesimal, then $ \frac{1}{ε} $ must be infinitely big, and $ \frac{1}{ε^2} $ infinitely bigger than that. So hyperreal numbers don't just have inwardly nested infinitesimal levels, but outward levels of increasing infinity too. They have infinitely many dimensions of infinity both ways.
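There's a computable shadow of this idea: *dual numbers*, which keep a single infinitesimal part and simply truncate $ ε^2 $ to zero (unlike the hyperreals, which keep every level). A toy sketch, not a full implementation, shows how arithmetic on an infinitesimal part behaves:

```python
class Dual:
    """A number a + b*eps, where eps^2 is rounded down to zero."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b        # real part, infinitesimal part

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + b1 eps)(a2 + b2 eps) = a1 a2 + (a1 b2 + a2 b1) eps + O(eps^2)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

x = Dual(3.0, 1.0)      # 3 + eps: a point nudged by an infinitesimal
y = x * x               # (3 + eps)^2 = 9 + 6 eps
print(y.a, y.b)         # → 9.0 6.0
```

The infinitesimal part of the result is $ 6 $, the slope of $ x^2 $ at $ 3 $: adequality at work.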

So it's perfectly possible to say that $ 0.999… $ does not equal $ 1 $, if you mean they differ by an infinitely small amount. The only problem is that in doing so, you get much, *much* more than you bargained for.

That means we can finally answer the question we started out with: why did our continuous atoms seemingly all have $ 0 $ mass, when the total mass was not $ 0 $? The answer is that the mass per atom was *infinitesimal*. So was each atom's volume. The density, *mass per volume*, was the result of dividing one infinitesimal amount by another, to get a normal sized number again. To create a finite mass in a finite volume, we have to add up infinitely many of these atoms.
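We can mimic this numerically: slice a rod into ever more 'atoms' and watch mass and volume vanish while their ratio, the density, stays fixed. A small sketch with made-up numbers:

```python
# Slice a rod into n "atoms": each atom's mass and volume shrink
# toward zero, but mass per volume -- the density -- stays fixed.
density = 2.5     # kg per unit volume (arbitrary)
volume = 4.0      # total volume of the rod (arbitrary)
for n in [10, 1000, 100000]:
    dv = volume / n           # one atom's volume, shrinking to 0
    dm = density * dv         # one atom's mass, shrinking to 0
    total = dm * n            # adding all the atoms back up
    print(n, dm, dm / dv, total)  # dm vanishes; the ratio and total stay put
```

In the limit, $ dm $ and $ dv $ are both infinitesimal, yet $ dm / dv $ remains an ordinary number.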

These are the underlying principles of calculus, and the final puzzle piece to cover. The funny thing about calculus is, it's conceptually easy, especially if you start with a good example. What is hard is actually working with the formulas, because they can get hairy very quickly. Luckily, your computer will do them for you:
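For instance, 'letting the computer do it' can look like this: a difference quotient over a tiny interval, and a sum of many thin slices. Both are crude numerical sketches of my own, not the machinery any real system uses:

```python
def derivative(f, x, h=1e-6):
    """Symmetric difference quotient: the slope over a tiny interval."""
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a, b, n=100000):
    """Midpoint Riemann sum: add up many thin slices of area."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

print(derivative(lambda x: x**2, 3.0))     # ≈ 6.0, the slope of x^2 at 3
print(integral(lambda x: x**2, 0.0, 1.0))  # ≈ 1/3, the area under x^2
```

Shrinking `h` and `dx` toward zero is exactly the limit process described above.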

That was differential and integral calculus in a nutshell. We saw how many people actually spend hours every day sitting in front of an *integrator*: the odometers in their cars, which integrate speed into distance. And the derivative of speed is acceleration—i.e. how hard you're pushing on the gas pedal or brake, combined with forces like drag and friction.
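The odometer fits in a few lines: integrate a speed signal by accumulating slices, and differentiate it back into accelerations. The speed samples below are made up for illustration:

```python
# A hypothetical trip, sampled once per second (m/s).
speeds = [0.0, 5.0, 10.0, 10.0, 10.0, 5.0, 0.0]
dt = 1.0

distance = sum(v * dt for v in speeds)             # odometer: integrate speed
accelerations = [(b - a) / dt                      # derivative of speed
                 for a, b in zip(speeds, speeds[1:])]

print(distance)       # → 40.0 metres travelled
print(accelerations)  # → [5.0, 5.0, 0.0, 0.0, -5.0, -5.0]
```

Speeding up, cruising, then braking shows up directly in the signs of the accelerations.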

By using these tools in equations, we can describe laws that relate quantities to their *rates of change*. Drag, also known as air resistance, is a force which gets stronger the faster you go. This is a relationship between the first and second derivatives of position.
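That relationship is easy to simulate. A sketch of Euler integration for pure drag, $ x'' = -k \, x' $, with an arbitrary drag constant and time step:

```python
# Drag: the acceleration (second derivative of position) is
# proportional to minus the speed (first derivative).
k, dt = 0.5, 0.01             # made-up drag constant and time step
x, v = 0.0, 10.0              # start at the origin with speed 10
for _ in range(1000):         # simulate 10 seconds
    a = -k * v                # drag force per unit mass
    v += a * dt               # speed integrates acceleration
    x += v * dt               # position integrates speed
print(round(v, 3), round(x, 3))  # speed decays toward zero; position levels off
```

The speed dies off exponentially, which is the hallmark solution of this differential equation.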

In fact, the relaxation procedure we applied to our track is equivalent to another physical phenomenon. If the curve of the coaster represented the temperature along a thin metal rod, the heat would start to equalize itself in exactly that fashion. Temperature wants to be smooth, eventually averaging out completely into a flat curve.
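The relaxation itself is just neighbour-averaging. A tiny sketch of the discrete heat equation on a five-point rod, with the two ends held at fixed, made-up temperatures:

```python
# Relaxation: repeatedly replace each interior point by the average
# of its neighbours, with the ends held fixed.
temps = [0.0, 0.0, 0.0, 0.0, 10.0]     # cold rod, one hot end
for _ in range(200):
    temps = [temps[0]] + [
        (temps[i - 1] + temps[i + 1]) / 2
        for i in range(1, len(temps) - 1)
    ] + [temps[-1]]
print([round(t, 2) for t in temps])    # → [0.0, 2.5, 5.0, 7.5, 10.0]
```

The rod settles into a smooth, straight ramp between the two ends: heat hates kinks.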

Whether it's heat distribution, fluid dynamics, wave propagation or a head bobbing in a roller coaster, all of these problems can be naturally expressed as so called *differential equations*. Solving them is a skill learned over many years, and some solutions come in the form of infinite series. Again, infinity shows up, ever the uninvited guest at the dinner table.

Infinity is a many splendored thing but it does not lift us up where we belong. It boggles our mind with its implications, yet is absolutely essential in math, engineering and science. It grants us the ability to see the impossible and build new ideas within it. That way, we can solve intractable problems and understand the world better.

What a shame then that in pop culture, it only lives as a caricature. Conversations about infinity occupy a certain sphere of it—Pink Floyd has been playing on repeat, and there's usually someone peddling crystals and incense nearby.

*"Man, have you ever, like, tried to imagine infinity…?"* they mumble, staring off into the distance.

"Funny story, actually. We just came from there…"

*Comments, feedback and corrections are welcome on Google Plus. Diagrams powered by MathBox.*

*More like this: How to Fold a Julia Fractal.*

"Take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy. And yet," Death waved a hand, "And yet you act as if there is some ideal order in the world, as if there is some… some rightness in the universe by which it may be judged."– The Hogfather, Discworld, Terry Pratchett

Mathematics has a dirty little secret. Okay, so maybe it's not so dirty. But neither is it little. It goes as follows:

*Everything in mathematics is a choice.*

You'd think otherwise, going through the modern day mathematics curriculum. Each theorem and proof is provided, each formula bundled with convenient exercises to apply it to. A long ladder of subjects is set out before you, and you're told to climb, climb, climb, with the promise of a payoff at the end. "You'll need this stuff in real life!", they say, oblivious to the enormity of this lie, to the fact that most of the educated population walks around with *"vague memories of math class and clear memories of hating it."*

Rarely is it made obvious that all of these things are entirely optional—that mathematics is the art of making choices so you can discover what the consequences are. That algebra, calculus, geometry are just words we invented to group the most interesting choices together, to identify the most useful tools that came out of them. The act of mathematics is to play around, to put together ideas and see whether they go well together. Unfortunately that exploration is mostly absent from math class and we are fed pre-packaged, pre-digested math pulp instead.

And so it also goes with the numbers. We learn about the natural numbers, the integers, the fractions and eventually the real numbers. At each step, we feel hoodwinked: we were only shown a part of the puzzle! As it turned out, there was a 'better' set of numbers waiting to be discovered, more comprehensive than the last.

Along the way, we feel like our intuition is mostly preserved. Negative numbers help us settle debts, fractions help us divide pies fairly, and real numbers help us measure diagonals and draw circles. But then there's a break. If you manage to get far enough, you'll learn about something called the *imaginary numbers*, where it seems sanity is thrown out the window in a variety of ways. Negative numbers can have square roots, you can no longer say whether one number is bigger than the other, and the whole thing starts to look like a pointless exercise for people with far too much time on their hands.

I blame it on the name. It's misleading for one very simple reason: all numbers are imaginary. You cannot point to anything in the world and say, "This is a 3, and that is a 5." You can point to three apples, five trees, or chalk symbols that represent 3 and 5, but the concepts of 3 and 5, the numbers themselves, exist only in our heads. It's only because we are taught them at such a young age that we rarely notice.

So when mathematicians finally encountered numbers that acted just a little bit different, they couldn't help but call them *fictitious* and *imaginary*, setting the wrong tone for generations to follow. Expectations got in the way of seeing what was truly there, and it took decades before the results were properly understood.

Now, this is not some esoteric point about a mathematical curiosity. These imaginary numbers—called *complex numbers* when combined with our ordinary real numbers—are essential to quantum physics, electromagnetism, and many more fields. They are naturally suited to describe anything that turns, waves, ripples, combines or interferes, with itself or with others. But it was also their unique structure that allowed Benoit Mandelbrot to create his stunning fractals in the late 70s, dazzling every math enthusiast that saw them.

Yet for the most part, complex numbers are treated as an inconvenience. Because they are inherently multi-dimensional, they defy our attempts to visualize them easily. Graphs describing complex math are usually simplified schematics that only hint at what's going on underneath. Because our brains don't do more than 3D natively, we can glimpse only slices of the hyperspaces necessary to put them on full display. But it's not impossible to peek behind the curtain, and we can gain some unique insights in doing so. All it takes is a willingness to imagine something different.

So that's what this is about. And a lesson to be remembered: complex numbers are typically the first kind of numbers we see that are undeniably strange. Rather than seeing a sign that says *Here Be Dragons, Abandon All Hope*, we should explore and enjoy the fascinating result that comes from one very simple choice: *letting our numbers turn*. That said, there *are* dragons. Very pretty ones in fact.

What does it mean to let numbers turn? Well, when making mathematical choices, we have to be careful. You could declare that $ 1 + 1 $ should equal $ 3 $, but that only opens up more questions. Does $ 1 + 1 + 1 $ equal $ 4 $ or $ 5 $ or $ 6 $? Can you even do meaningful arithmetic this way? If not, what good are these modified numbers? The most important thing is that our rules need to be consistent for them to work. But if all we do is swap out the *symbols* for $ 2 $ and $ 3 $, we didn't actually change anything in the underlying mathematics at all.

So we're looking for choices that don't interfere with what already works, but add something new. Just like the negative numbers complemented the positives, and the fractions snugly filled the space between them—and the reals somehow fit in between *that*—we need to go look for new numbers where there currently aren't any.

We've seen how complex numbers are arrows that like to turn, which can be made to behave like numbers: we can add and multiply them, because we can come up with a consistent rule for doing so. We've also seen what powers of complex numbers look like: we fold or unfold the entire plane by multiplying or dividing angles, while simultaneously applying a power to the lengths.
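Python's built-in complex numbers make this easy to verify directly (a sketch using the standard `cmath` module):

```python
import cmath

# Multiplying complex numbers multiplies their lengths and adds their angles.
a = cmath.rect(2.0, cmath.pi / 6)    # length 2, angle 30 degrees
b = cmath.rect(3.0, cmath.pi / 3)    # length 3, angle 60 degrees
p = a * b
print(abs(p))          # ≈ 6.0: lengths multiply
print(cmath.phase(p))  # ≈ pi/2: angles add, 30° + 60° = 90°

# Squaring folds the plane: the angle doubles while the length squares.
z = cmath.rect(1.0, cmath.pi / 5)
assert abs(cmath.phase(z * z) - 2 * cmath.pi / 5) < 1e-12
```

This is the "fold or unfold the entire plane" behaviour in miniature, one arrow at a time.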

With a basic grasp of what complex numbers are and how they move, we can start making Julia fractals.

At their heart lies the following function:

$$ f(z) = z^2 + c $$

This says: map the complex number $ z $ onto its square, and then add a constant number to it. To generate a Julia fractal, we have to apply this formula repeatedly, feeding the result back into $ f $ every time.

$$ z_{n+1} = (z_n)^2 + c $$

We want to examine how $ z_n $ changes when we plug in different starting values for $ z_1 $ and iterate $ n $ times. So let's try that and see what happens.
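A minimal escape-time sketch of that experiment (the function name `julia_escape` is made up, and the constant here is just one well-known choice, the 'basilica' at $ c = -1 $; any $ c $ works):

```python
def julia_escape(z, c, max_iter=100):
    """Iterate z -> z^2 + c, counting steps until |z| escapes past 2.
    Starting points that never escape belong to the filled Julia set."""
    for n in range(max_iter):
        if abs(z) > 2:
            return n          # escaped: z_n is now guaranteed to fly off
        z = z * z + c
    return max_iter           # still bounded after max_iter steps

c = -1 + 0j                          # the "basilica" constant
print(julia_escape(0j, c))           # → 100: the origin cycles 0, -1, 0, -1, ...
print(julia_escape(1.5 + 1.5j, c))   # → 0: already outside radius 2
```

Colouring each pixel of the plane by its escape count is what produces the familiar fractal images.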

Making fractals is probably the least useful application of complex math, but it's an undeniably fascinating one. It also reveals the unique properties of complex operations, like conformal mapping, which provide a certain rigidity to the result.

However, in order to make complex math practical, we have to figure out how to tie it back to the real world.

It's a good thing we don't have to look far to do so. Whenever we're describing wavelike phenomena, whether it's sound, electricity or subatomic particles, we're also interested in how the wave evolves and changes. Complex operations are eminently suited for this, because they naturally take place on circles. Numbers that oppose can cancel out, numbers in the same direction will amplify each other, just like two waves do when they meet. And by folding or unfolding, we can alter the frequency of a pattern, doubling it, halving it, or anything in between.
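In code, a wave sample becomes a rotating phasor, and those claims can be checked directly. A sketch using unit-length phasors with arbitrary phases:

```python
import cmath

# A wave sample as a phasor: a unit-length complex number at some angle.
w1 = cmath.exp(1j * 0.7)                 # arbitrary phase
w2 = cmath.exp(1j * (0.7 + cmath.pi))    # half a turn out of phase
print(abs(w1 + w2))   # ≈ 0: opposing waves cancel out
print(abs(w1 + w1))   # → 2.0: aligned waves amplify each other

# Folding: squaring each sample doubles its angle, doubling the frequency.
wave = [cmath.exp(1j * 0.1 * n) for n in range(5)]   # slow rotation
doubled = [z * z for z in wave]                      # twice as fast
assert abs(cmath.phase(doubled[3]) - 0.6) < 1e-12
```

Interference and frequency-changing both fall out of plain complex arithmetic.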

More complicated operations are used for example to model electromagnetic waves, whether they are FM radio, wifi packets or ADSL streams. This requires precise control of the frequencies you're generating and receiving. Doing it without complex numbers, well, it just sucks. So why use boring real numbers, when complex numbers can do the work for you?

In visualizing complex waves, we've seen functions that map real numbers to complex numbers, and back again. These can be graphed easily in 3D diagrams, from $ \mathbb{R} $ to $ \mathbb{C} $ or vice-versa. You cross 1 real dimension with the 2 dimensions of the complex plane.

But complex operations in general work from $ \mathbb{C} $ to $ \mathbb{C} $. To view these, unfortunately you need four-dimensional eyes, which nature has yet to provide. There are ways to project these graphs down to 3D that still somewhat make sense, but it never stops being a challenge to interpret them.

For every mathematical concept that we have a built-in intuition for, there are countless more we can't picture easily. That's the curse of mathematics, yet at the same time, also its charm.

Hence, I tried to stick to the stuff that is (somewhat!) easy to picture. If there's interest, a future post could cover topics like: the nature of $ e^{ix} $, Fourier transforms, some actual quantum mechanics, etc.

For now, this story is over. I hope I managed to spark some light bulbs here and there, and that you enjoyed reading it as much as I did making it.


*More like this: To Infinity… And Beyond!.*

*For extra credit: check out these great stirring visualizations of Julia and Mandelbrot sets. I incorporated a similar graphic above. Hat tip to Tim Hutton for pointing these out. And for some actual paper mathematical origami, check out Vihart's latest video on Snowflakes, Starflakes and Swirlflakes.*

Of all the free extras that Mac OS X has, Grapher has to be one of the coolest. This little app, hidden away in the `Applications/Utilities` folder, is a powerful graphing tool for mathematical equations and data sets.

As you might expect from Apple, it typesets symbolic math beautifully and produces smooth, anti-aliased graphs. But this isn't just a little tech demo to showcase some of OS X's technologies: Grapher's features blow away your crusty old TI-83, and it comes with its own set of surprises. For example, not only can you save graphs as PDF or EPS, but it can export animations and even doubles as a LaTeX formula editor.

In fact, it does so much that its main weakness is the documentation, which only covers the very basics. The best way to learn Grapher is to look at the handful of included examples, although it might take you a while to find out how to replicate them from scratch.

The other day I needed to quickly graph a couple of things involving complex numbers, and it seemed that Grapher was doing some *very freaky shit*. Either that, or my math was really rusty. It turned out I'm not as stupid as I thought, and there are some weird caveats with using complex numbers in Grapher. Oddly, there is very little information online about it, so I figured for future reference, I should document the workarounds I discovered.

Let's dive in. Fuck MS Paint, I've got math to do.

To type formulas into Grapher, you can use the symbol palette, available in the Window menu, or type away using various keyboard shortcuts:

- Type `^` for exponents, `_` for indices and `/` for fractions. Grapher understands exponents and other notations, for example the Bessel functions `J_n(x)`.
- Use the arrow keys to move around the equation: in and out of parentheses, exponents, fractions, etc. Pay attention to the cursor to see where you're typing.
- Type out greek letter names for the symbols: `alpha`, `omega`, `pi`.
- Common mathematical constants work: `e`, `π`, `i`.
- The very useful 'Copy LaTeX expression' command is hidden away in the editor's right-click menu.

At first sight, complex numbers 'just work'. Using `i` as the imaginary unit, you can use numbers like `1 + 2i` or plot graphs like `y=e^(i·x)`. You can use the `Re()` and `Im()` operators to explicitly extract the real or imaginary part of a complex number, and use `abs()` and `arg()` to extract the modulus and argument. If an expression's result is complex, Grapher will only plot the real part.
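If you want to double-check these operators outside Grapher, Python's standard `cmath` module offers the same four. This is a sketch, on the assumption that Grapher's `arg()` is the usual principal argument:

```python
import cmath
import math

z = 1 + 2j

# Grapher's Re(), Im(), abs() and arg() correspond to:
assert z.real == 1.0                                   # Re(z)
assert z.imag == 2.0                                   # Im(z)
assert math.isclose(abs(z), math.sqrt(5))              # modulus |z|
assert math.isclose(cmath.phase(z), math.atan2(2, 1))  # argument

# 'Plotting only the real part' amounts to silently doing this:
truncated = z.real
assert truncated == 1.0
```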

This last bit is where things get tricky, because this silent casting of complex numbers to reals also sometimes happens in intermediate values.

Let's plot a complex parametric curve directly using formulas of the form `x + iy=...`. As an example, let's look at this:

These equations are using Euler's formula `e^(i·x) = cos x + i·sin x` to plot a half circle each. The only difference between the two formulas is that the second one is passing its value through the (useless) function `f(t)`.

Now if we replace `e^(i·x)` with `1/e^(i·x) = e^(–i·x) = cos x – i·sin x` and change `f(t)` to `1/t`, all that should happen is that the graph is mirrored vertically. Instead, this happens:

The blue circle segment is drawn as a broken horizontal line. What's happening is that Grapher is treating the definition `f(t) = 1/t` as if it said `f(t) = 1/Re(t)`. In other words, it is truncating the complex input of `f(t)` to a real number.
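To see the difference numerically, here's a small Python sketch (my own code, not Grapher syntax): the hypothetical `f_broken` truncates its input to the real part the way Grapher does, while `f_correct` keeps it complex and lands on the mirrored circle.

```python
import cmath

def f_correct(t):
    # 1/e^(i·x) = e^(-i·x): the vertically mirrored circle point
    return 1 / t

def f_broken(t):
    # What Grapher silently computes: 1/Re(t)
    return 1 / t.real

x = 1.0
t = cmath.exp(1j * x)  # a point on the half circle

good = f_correct(t)
assert cmath.isclose(good, cmath.exp(-1j * x))  # mirrored, as expected
assert cmath.isclose(abs(good), 1.0)            # still on the unit circle

bad = f_broken(t)
assert isinstance(bad, float)  # collapsed onto the real axis
# Near Re(t) = 0 this blows up, producing the broken horizontal line.
```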

To fix this, you need to replace the variable `t` with `complex(t)`. This `complex()` function is listed in the built-in definitions list in the Help menu, but lacks any documentation. With this fix applied, the graph will plot as expected:

Further tests reveal that `complex(t)` is in fact equivalent to writing out `Re(t) + i·Im(t)`, thus manually recomposing the complex number from its own real and imaginary parts. If it weren't for the existence of the `complex()` helper, one might consider this issue a bug. The way it is now, it seems this behaviour is somewhat intentional.

Moral of the story: wrap all your function inputs in `complex()` to avoid nasty surprises.

Another annoying issue is that certain built-in functions don't handle complex inputs. To show this, you can try plotting `y=sinh(–i^2·x)`. Mathematically, this is equivalent to plotting `y=sinh(x)` directly. However the presence of the imaginary unit causes the plot to fail.
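That the two really are the same function is easy to verify numerically. A quick check in Python (my own snippet, not Grapher syntax), using the fact that `–i² = 1`:

```python
import cmath
import math

for x in [0.5, 1.0, 2.0]:
    # -i^2 = 1, so sinh(-i^2·x) should equal sinh(x)
    z = cmath.sinh(-(1j ** 2) * x)
    assert math.isclose(z.real, math.sinh(x))
    assert abs(z.imag) < 1e-12  # no imaginary part survives
```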

As a workaround, you need to define your own functions using known formulas and incorporating the `complex()`

fix.

For example, you might define:

`fixsinh(x) = (e^complex(x) – e^(–complex(x))) / 2`

`fixcosh(x) = (e^complex(x) + e^(–complex(x))) / 2`
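The same workaround definitions translate directly into Python, which is a handy way to sanity-check them against the standard `sinh`/`cosh`. Python's built-in `complex()` stands in for Grapher's helper here:

```python
import cmath
import math

def fixsinh(x):
    x = complex(x)
    return (cmath.exp(x) - cmath.exp(-x)) / 2

def fixcosh(x):
    x = complex(x)
    return (cmath.exp(x) + cmath.exp(-x)) / 2

# Matches the real sinh/cosh on real inputs...
for x in [0.3, 1.5]:
    assert cmath.isclose(fixsinh(x), math.sinh(x))
    assert cmath.isclose(fixcosh(x), math.cosh(x))

# ...and also accepts complex arguments, unlike Grapher's built-ins:
assert cmath.isclose(fixsinh(1j), cmath.sinh(1j))
```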

Other built-ins are trickier. For example, `Γ(z)` needs replacing, but mathematically it is defined as an improper integral: $ \Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \, dt $. Unfortunately, Grapher's integrator doesn't seem to handle this definition for `Γ(z)` at all, even though it's supposed to do improper integrals.
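To see what Grapher is being asked to do, here's a rough numerical version of that improper integral in Python: the infinite upper limit is truncated at a cutoff that's plenty for small `z` (the cutoff and step count are arbitrary choices of mine):

```python
import math

def gamma_integral(z, upper=50.0, n=200_000):
    # Approximates Γ(z) = ∫₀^∞ t^(z-1)·e^(-t) dt with the
    # trapezoid rule on (0, upper]; e^(-50) makes the tail negligible.
    h = upper / n
    total = 0.0
    for k in range(1, n + 1):
        t = k * h
        f = t ** (z - 1) * math.exp(-t)
        total += f if k < n else f / 2  # halve the final endpoint
    return total * h

# Γ(3) = 2! = 2 and Γ(5) = 4! = 24
assert math.isclose(gamma_integral(3), 2.0, rel_tol=1e-3)
assert math.isclose(gamma_integral(5), 24.0, rel_tol=1e-3)
```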

When using built-in definitions, always verify that you're getting the results you need with a simple example.

To round this off, here's an example where I use these tricks to plot a Kaiser sampling window and its frequency response:

Happy graphing!