# Hackery, Math & Design

## Steven Wittens · MathBox²

## Part 2

Continued from Part 1.

## I-Can't-Believe-It's-Not-React

Underneath sits a large codebase driving it all: 200+ files of JS/CS alone, not counting third-party dependencies. Much of it is infrastructure necessary to pull off certain tricks consistently: you can draw 2.5D lines with grace, render arbitrary Unicode text in GL, sync HTML to GPU-computed geometry, and do all of this with GLSL code composed on the fly, including your own. Nobody needs all of this at once.

These wildly different strategies all abstract down to two paths, DOM or GL, with a quacks-like-React component model as the glue on the DOM side.

<root>
<!-- Place the camera -->
<camera />
<!-- 3D Polar view -->
<polar>
<!-- Sample an interval -->
<interval />
<!-- Draw a line -->
<line />
<!-- Resample along length -->
<resample />
<!-- Draw a set of points -->
<point />
<!-- Make HTML labels -->
<html />
<!-- Draw HTML sprites -->
<dom />

...

<!-- Sample an interval -->
<interval />
<!-- Draw a line -->
<line />
<!-- Resample along length -->
<resample />
<!-- Draw a set of points -->
<point />
<!-- Make GL text -->
<text />
<!-- Generate colors -->
<array />
<!-- Draw GL sprites -->
<label />

</polar>
</root>

Most of the code is for initialization only, building up a reactive machine by combining components. Once assembled, it lets the GPU do most of the crunching, while relying on the JS VM to inline and optimize the chewy outside.
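The emitter pattern at the heart of this is easy to sketch. What follows is a toy model, not MathBox internals: the names (`makeInterval`, `update`) are invented, and real intervals sample over the view's range rather than a hardcoded [-1, 1]. But the shape is the same: a component is declared once with an `expr`, and each frame an `emit()` callback streams values into a flat typed array destined for the GPU.

```javascript
// Toy model of the emitter pattern (invented names, not MathBox internals).
// A component is declared once; each frame, emit() streams values into a
// flat typed array that is then handed to the GPU.
function makeInterval(width, channels, expr) {
  var data = new Float32Array(width * channels);
  return {
    data: data,
    // Re-sample the expression each frame; real intervals use the view
    // range, this sketch hardcodes [-1, 1].
    update: function (t) {
      var offset = 0;
      var emit = function () {
        for (var c = 0; c < arguments.length; c++) data[offset++] = arguments[c];
      };
      for (var i = 0; i < width; i++) {
        var x = (i / (width - 1)) * 2 - 1;
        expr(emit, x, i, t);
      }
    }
  };
}

// Same shape as the .interval() expr used below
var sine = makeInterval(4, 2, function (emit, x, i, t) {
  emit(x, Math.sin(x + t / 4) * .5 + .75);
});
sine.update(0);
```

Because `emit` just advances a write pointer, the per-sample cost is a couple of array stores, exactly the kind of tight loop JS VMs optimize well.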

At the top level, MathBox is plain old JavaScript, used like this:

/* Easy Mode */

// Bootstrap MathBox and Three.js
var mathbox = mathBox();
if (mathbox.fallback) { throw new Error("No WebGL support."); }

// Make MathBox primitives
var view =
mathbox
.set({
scale: null,
})
.camera({
proxy: true,
position: [0, 0, 3]
})
.polar({
range: [[-2, 2], [-1, 1], [-1, 1]],
scale: [2, 1, 1],
bend: .25
});

view.interval({
width: 48,
expr: function (emit, x, i, t) {
// Emit sine wave
var y = Math.sin(x + t / 4) * .5 + .75;
emit(x, y);
},
channels: 2,
})
.line({
color: 0x30C0FF,
width: 16,
})
.resample({
width: 8,
})
.point({
color: 0x30C0FF,
size: 60,
})
.html({
width:  8,
expr: function (emit, el, i, j, k, l, t) {
// Emit random latex
var color = ['#30D0FF','#30A0FF'][i%2];
var a = Math.round(t + i) % symbols.length;
var b = Math.round(Math.sqrt(t * t + Math.sin(t + i * i) + 5));
emit(el(latex, {style: {color: color}},
'\\sqrt{\\text{LaTeX} + '+(i + b)+' \\pi^{'+symbols[a]+'}}'));
},
})
.dom({
snap: false,
offset: [0, 32],
depth: 0,
zoom: 1,
outline: 2,
size: 20,
});

// ...


The WebGL canvas bootstrapper is a separate piece, though: it wraps Threestrap, the little non-framework that could. Threestrap lets you spawn a fully functioning GL canvas in one exceedingly configurable call, taking care of browser support, retina-aware resizing, CSS alignment, warm-up and more.

If you prefer instead, you can spawn a bare MathBox context in a Three.js scene of your choosing, but you'll have to babysit it:

/* Simple Mode */

// Vanilla Three.js
var renderer = new THREE.WebGLRenderer();
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, WIDTH / HEIGHT, .01, 1000);

// Insert into document
document.body.appendChild(renderer.domElement);

// MathBox context
var context = new MathBox.Context(renderer, scene, camera).init();
var mathbox = context.api;

// Set size
renderer.setSize(WIDTH, HEIGHT);
context.resize({ viewWidth: WIDTH, viewHeight: HEIGHT });

// Place camera and set background
camera.position.set(0, 0, 3);
renderer.setClearColor(new THREE.Color(0xFFFFFF), 1.0);

// MathBox elements
var view = mathbox
.set({
focus: 3,
})
.cartesian({
range: [[-2, 2], [-1, 1], [-1, 1]],
scale: [2, 1, 1],
});
// ...

// Render frames
var frame = function () {
requestAnimationFrame(frame);
context.frame();
renderer.render(scene, camera);
};
requestAnimationFrame(frame);

Despite looking like a monolith, it really isn't; it was merely a matter of convenience and sanity not to decouple it further until its shape had stabilized. Minimal builds are, for now, left as an exercise for the reader. I've split the thinking and design into several articles, mirroring the architecture: the document model, the geometry core and the shader assembly. You don't need to know any of this to use MathBox 2; they are for people who want to know the how and why.

In putting it all together, the devil's in the details of course. Depending on your imagination, it's either much more or much less powerful than you want. There's still far too much to cover: slideshows, keyframe tracks, fov-calibrated units, z-indexes, atlas retexting, … Most of it is unsurprising, in that it all just works. You can define a keyframe interpolation between two values or emitter expressions, and watch the smoothly lerped data go. Animation tracks are tied to triggers like clocks and slides, which lets them fit naturally into presentations.
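Interpolating between two emitter expressions can be sketched in a few lines. This is an illustrative stand-in, not MathBox's actual track code; `lerpExprs` is an invented name:

```javascript
// Invented sketch: blend two emitter expressions by a factor f in [0, 1].
// Each source expression is sampled, then the emitted values are lerped.
function lerpExprs(exprA, exprB, f) {
  return function (emit, x, t) {
    var a = [], b = [];
    exprA(function () { a = [].slice.call(arguments); }, x, t);
    exprB(function () { b = [].slice.call(arguments); }, x, t);
    var out = a.map(function (v, i) { return v + (b[i] - v) * f; });
    emit.apply(null, out);
  };
}

var sinExpr = function (emit, x) { emit(x, Math.sin(x)); };
var tanExpr = function (emit, x) { emit(x, Math.tan(x)); };

// Halfway between the two keyframes
var half = lerpExprs(sinExpr, tanExpr, 0.5);
var result;
half(function (x, y) { result = [x, y]; }, 1, 0);
```

In MathBox, the blend factor would be driven by a clock or slide trigger rather than a constant.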

<root>
<!-- Present slides -->
<present index={1}>
<!-- Slide 1 -->
<slide steps={2}>
<!-- Transition effect -->
<reveal>
<!-- Sample an interval -->
<interval />
<!-- Draw a line -->
<line />
<!-- Step through keyframes -->
<step script={[
{props: {color: 'red'}},
{props: {color: 'blue'}},
]} />
</reveal>
</slide>
<!-- Slide 2 -->
<slide>
<!-- Transition effect -->
<move>
<!-- Sample an interval -->
<interval />
<!-- Play through keyframes -->
<play script={[
{props: {expr: (emit, x) => emit(x, Math.sin(x))}},
{props: {expr: (emit, x) => emit(x, Math.tan(x))}},
]} />
<!-- Draw a line -->
<line />

<!-- Slide 2.1 -->
<slide>
</slide>
</move>
</slide>
<!-- Slide 3 -->
<slide>
<!-- Transition effect -->
<reveal>
<!-- Sample an interval -->
<interval />
<!-- Draw a line -->
<line />
</reveal>
</slide>
</present>
</root>

## One More Thing…

Images are data. So is audio. That means MathBox 2 is Winamp AVS, Milkdrop and the mythical Fridge all rolled into one. You can replicate your everyday trippy music visualizer with two operators: render-to-texture (rtt) and compose. An rtt acts as an embedded scene, rendering all of its children to an off-screen image, while a compose renders a full-screen pass. This is where the model's expressiveness shines.

Milkdrop equals mathbox.rtt(…).compose(…).….end().compose(…): an image feeding back into itself while also being rendered to the screen. The necessary double buffering and swaps are abstracted away. Drop in shapes and shaders, add transforms, nest as you like. RTTs have a history parameter like arrays do, so Turing patterns, self-propagating hypno spirals and other cool partial diffy eqs are just a shader away.
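The double buffering behind that feedback loop is the classic ping-pong scheme. Here's a minimal sketch with plain arrays standing in for GPU textures; `makeFeedback` and its API are invented for illustration:

```javascript
// Ping-pong sketch of what .rtt() feedback abstracts away: two buffers,
// read from one while writing the other, then swap. Plain arrays stand
// in for textures, and kernel() stands in for a shader pass.
function makeFeedback(size, kernel) {
  var read = new Float32Array(size);
  var write = new Float32Array(size);
  return {
    step: function () {
      for (var i = 0; i < size; i++) write[i] = kernel(read, i);
      var tmp = read; read = write; write = tmp; // swap buffers
    },
    get: function (i) { return read[i]; },
    set: function (i, v) { read[i] = v; }
  };
}

// Trivial 'shader': average each cell with its left neighbor (wrapping)
var fb = makeFeedback(4, function (prev, i) {
  return (prev[i] + prev[(i + 3) % 4]) * 0.5;
});
fb.set(0, 1);
fb.step();
```

On the GPU the swap is a matter of rebinding render targets, so the feedback costs nothing beyond the pass itself.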

<root id="1" scale={720}>
<rtt id="render" minFilter="nearest" magFilter="nearest" type="unsignedByte">
<camera id="3" lookAt={[0, 0, 0]} position=>{(t) => { return [Math.cos(t) * 3, 0, Math.sin(t) * 3] }} />
<cartesian id="4" range={[[-2, 2], [-1, 1], [-1, 1]]} scale={[2, 1, 1]}>
<transform id="5" scale={[7/10, 7/10, 7/10]}>
<grid id="6" width={5} divideX={2} divideY={2} zBias={10} opacity={1/4} color={16768992} />
</transform>
</cartesian>
</rtt>
<rtt id="rtt1" history={4} type="unsignedByte" minFilter="linear" magFilter="linear">
<shader code="uniform vec3 dataResolution;
uniform vec3 dataSize;
uniform float cosine;
uniform float sine;
vec4 getSample(vec3 xyz);
vec4 getFramesSample(vec3 xyz) {
vec2 pos = xyz.xy * dataResolution.xy - .5;
pos = ((pos * dataSize.xy) * mat2(cosine, sine, -sine, cosine) * .999) / dataSize.xy;
xyz.xy = (pos + .5) * dataSize.xy;
vec4 c = getSample(xyz + vec3( 0.0, 0.0, 1.0));
vec3 t = getSample(xyz + vec3( 0.0, 1.5, 0.0)).xyz;
vec3 b = getSample(xyz + vec3( 0.0,-1.5, 0.0)).xyz;
vec3 l = getSample(xyz + vec3(-1.5, 0.0, 0.0)).xyz;
vec3 r = getSample(xyz + vec3( 1.5, 0.0, 0.0)).xyz;
return vec4((t + b + l + r) / 2.0 - c.xyz, c.w);
}"
cosine=>{(t) => Math.cos(Math.sin(t * .2) * .005)} sine=>{(t) => Math.sin(Math.sin(t * .2) * .005)} />
<resample id="resample1" indices={3} channels={4} />
<compose id="10" />
</rtt>
<rtt id="12" minFilter="linear" magFilter="linear" type="unsignedByte">
<shader code="uniform float modulate1;
uniform float modulate2;
uniform float modulate3;
uniform float modulate4;
vec4 getSample(vec3 xyz);
vec4 getFramesSample(vec3 xyz) {
vec4 color = (
getSample(xyz) +
getSample(xyz + vec3(0.0, 0.0, 1.0)) +
getSample(xyz + vec3(0.0, 0.0, 2.0)) +
getSample(xyz + vec3(0.0, 0.0, 3.0))
) / 4.0;
color = color * color * color * 1.15;
float v = color.x + color.y + color.z;
vec3 c = vec3(v*v + color.x * .2, v*v, v*v*v + color.z) * .333;
c = mix(c, mix(sqrt(c.yzx * c), c.zxy, modulate1), modulate2);
c = mix(c, mix(c.yzx, c.zxy, modulate1), modulate2);
c = mix(c, mix(abs(sin(c.yxz * 2.0)), c.zyx, modulate3), modulate4);
return vec4(c, 1.0);
}"
modulate1=>{(t) => Math.cos(t * .417) * .5 + .5} modulate2=>{(t) => Math.cos(t * .617 + Math.sin(t * .133)) * .5 + .5} modulate3=>{(t) => Math.cos(t * .217 + 2.0) * .5 + .5} modulate4=>{(t) => Math.cos(t * .117 + 3.0 + Math.sin(t * .133)) * .5 + .5} />
<resample id="resample2" source="#rtt1" indices={3} channels={4} />
<compose id="15" />
</rtt>
<shader code="vec4 getSample(vec2 xy);
vec4 getFramesSample(vec2 xy) {
return getSample(xy + vec2(0.5, 0.5));
}"
/>
<resample id="resample3" indices={2} channels={4} />
<compose id="18" source="#resample2" />
</root>

The difference compared to AVS is that an .rtt() is inert to its container by default: until you add a .compose() pass, it's just a dangling data source. Meanwhile the .compose() op offers the necessary GL blend modes, opacity and color tints through style properties. Document order defines drawing order, so the decomposition into render passes is direct. On top of that, zOrder (drawing order) can be overridden, as can zIndex (2D stacking order) and zBias (3D stacking order).
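The ordering rule can be sketched as a stable sort where equal zOrder falls back to document order. This is illustrative pseudologic, not MathBox's renderer; `drawOrder` is an invented helper, and the assumption that lower zOrder draws first is mine:

```javascript
// Invented sketch: document order decides draw order unless zOrder
// overrides it. Ties fall back to document position for stability.
function drawOrder(passes) {
  return passes
    .map(function (p, i) { return { name: p.name, z: p.zOrder, doc: i }; })
    .sort(function (a, b) {
      var za = a.z == null ? 0 : a.z;
      var zb = b.z == null ? 0 : b.z;
      return za - zb || a.doc - b.doc; // stable fallback to document order
    })
    .map(function (p) { return p.name; });
}

var order = drawOrder([
  { name: 'grid' },
  { name: 'glow', zOrder: -2 },
  { name: 'line' },
  { name: 'labels', zOrder: 1 }
]);
// glow first, then grid and line in document order, labels last
```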

You can nest effects and compose shaders to create recursive visualizations, sampling from themselves or each other:

(Burn that fillrate, baby.)

Bonus: Endless Visualizer.

<root id="1" scale={720}>
<camera id="2" proxy={true} position={[3/10, 1/10, 2]} />
<group id="3">
<array id="audioTime" data={[]} width={1024} channels={1} />
<array id="audioFreq" data={[]} width={512} channels={1} />
</group>
<rtt id="render" width={256} height={144} type="unsignedByte" minFilter="nearest" magFilter="nearest">
<camera id="7" position={[0, 0, 5/2]} />
<group id="8">
<swizzle id="9" source="#audioTime" order="yx" />
<spread id="10" width={[861/250, 0, 0, 0]} />
<shader code="vec4 getSample(vec4 xyzw);
vec4 getColor(vec4 xyzw) {
float h = getSample(xyzw).y;
return vec4(vec3(h), 1.0);
}"
/>
<resample id="12" />
<transform id="13" scale={[1, 3/4, 1]}>
<line id="14" points="<<" colors="<" width={5} color={16777215} opacity={2/5} blending="add" />
</transform>
</group>
<cartesian id="15" range={[[-2, 2], [-1, 1], [-1, 1]]} scale={[1/2, 1/4, 1/4]} quaternion=>{(t) => {
var c = Math.cos(t / 3);
var s = Math.sin(t / 3);
var c2 = Math.cos(t / 8.71);
var s2 = Math.sin(t / 8.71);
return [s * s2, s * c2, .2, c];
}}>
<grid id="16" divideX={4} divideY={4} zBias={10} opacity={1/10} color={16768992} width={6} />
</cartesian>
</rtt>
<rtt id="rtt1" history={4} width={256} height={144} type="unsignedByte">
<resample id="resample1" indices={3} channels={4} />
<compose id="20" color="#ffffff" zWrite={false} />
<compose id="21" source="#render" blending="add" color="#ffffff" zWrite={false} />
</rtt>
<rtt id="rtt2" width={256} height={144} type="float">
<camera id="23" position={[0, 0, 5/2]} />
<clock id="24" seek=>{(t) => audio ? audio.currentTime : t}>
<shader id="25" code="#map-temporal-blur" time=>{(t) => t * 16.0} modulate=>{(t) => {
var bang = ((t > 69.229311)  && (t < 88.922656)) ||
((t > 88.922656)  && (t < 148.9143)) ||
((t > 148.9143)   && (t < 158.2)) ||
((t > 168.284427) && (t < 188.00)) ? 1 : 0;
if ((t > 88.922656)  && (t < 148.9143)) {
bang *= .5 + .45 * Math.cos(t / 3);
}
if ((t > 168)) {
bang *= .85 + .15 * Math.cos(t);
}
modulate = modulate + (bang - modulate) * .1;
return modulate;
}} pattern=>{(t) => {
var bang = ((t > 88.922656) && (t < 148.9143));
pattern = pattern + (bang - pattern) * .1;
if ((t > 168)) {
pattern = .5 + .4 * Math.cos(t * 2.311);
}
return pattern;
}} warp=>{(t) => {
var bang = (t > 148.9143);
if ((t > 168)) {
warp *= 1 + .5 * Math.cos(t * .556);
}
if ((t > 148.2) && (t < 158.2)) warp = warp + .75 + .25 * Math.cos((t - 158.2));
warp = warp + (bang - warp) * .1;
return warp;
}} shift=>{(t) => {
var bang = (t > 168) ? Math.max(0, Math.min(1, .1 * (t - 168))) : 0;
bang *= .75 + .25 * Math.cos(t * .731);
warp = warp + (bang - warp) * .1;
return warp;
}} />
<resample id="resample2" source="#rtt1" indices={3} channels={4} />
<compose id="27" color="#fff" zWrite={false} />
</clock>
<transform id="28" scale={[1, 1/4, 1]}>
<swizzle id="29" source="#audioTime" order="yx" />
<spread id="30" width={[861/250, 0, 0, 0]} />
<shader code="vec4 getSample(vec4 xyzw);
vec4 getColor(vec4 xyzw) {
float h = getSample(xyzw).y;
return vec4(vec3(h) * .2, 1.0);
}"
/>
<resample id="32" />
<line id="33" points="<<" colors="<" width={50} color={16777215} opacity={1} blending="add" />
</transform>
</rtt>
<resample id="34" width={129} height={73} />
<repeat id="lerp" depth={2} />
<resample id="37" indices={3} channels={3} />
<transpose id="transpose" order="xywz" />
<transpose id="color" source="#lerp" order="xywz" />
<clock id="40" seek=>{(t) => audio ? audio.currentTime : t}>
<clock id="disco" speed=>{(t) => {
var bang = ((t > 69.329311)  && (t < 89.122656)) ||
((t > 148.9143)   && (t < 158.0)) ||
((t > 168.284427) && (t < 188.077772));
return bang ? 1 : .2;
}}>
<shader id="42" code="#map-z-to-color" modulate1=>{(t) => Math.cos((t + 1) * .417) * .5 + .5} modulate2=>{(t) => Math.cos((t + 1) * .617 + Math.sin(t * .133)) * .5 + .5} modulate3=>{(t) => Math.cos((t + 1) * .217 + 2.0) * .5 + .5} modulate4=>{(t) => Math.cos((t + 1) * .117 + 3.0 + Math.sin(t * .133)) * .5 + .5} />
<resample id="color1" source="#lerp" indices={2} channels={4} />
<shader id="44" code="#map-z-to-color-2" modulate1=>{(t) => Math.cos((t + 1) * .417) * .5 + .5} modulate2=>{(t) => Math.cos((t + 1) * .617 + Math.sin(t * .133)) * .5 + .5} modulate3=>{(t) => Math.cos((t + 1) * .217 + 2.0) * .5 + .5} modulate4=>{(t) => Math.cos((t + 1) * .117 + 3.0 + Math.sin(t * .133)) * .5 + .5} />
<resample id="color2" source="#lerp" indices={2} channels={4} />
</clock>
<cartesian id="46" range={[[-1.7788, 1.7788], [-1, 1], [-1, 1]]} scale={[16/9, 1, 1]} quaternion=>{(t) => {
t = t / 3;
var c = Math.cos(t / 4);
var s = Math.sin(t / 4);
var c2 = Math.cos(t / 11.71) * 1.71;
var s2 = Math.sin(t / 11.71) * 1.71;
return [s * s2, s * c2, -.2, c];
}}>
<lerp id="47" source="#transpose" width={33} height={19} />
<lerp id="48" source="#color2" width={33} height={19} />
<transform id="49">
<line id="50" points="<<" colors="<" color="#ffffff" width={2} zBias={5} />
</transform>
<play id="51" script={{19: {position: [0, 0, 0]}, 39: {position: [0, 0, 2]}, 57: {position: [0, 0, 0]}}} />
<transpose id="52" source="<<" order="yxzw" />
<transpose id="53" source="<<" order="yxzw" />
<transform id="54">
<line id="55" points="<<" colors="<" color="#ffffff" width={2} zBias={5} />
</transform>
<play id="56" script={{19: {position: [0, 0, 0]}, 39: {position: [0, 0, -2]}, 57: {position: [0, 0, 0]}}} />
<transform id="57">
<point id="58" points="<<" colors="<" color="#ffffff" size={10} zBias={5} zOrder={1} blending="add" zWrite={false} />
</transform>
<play id="59" script={{19: {position: [0, 0, 0]}, 39: {position: [0, 0, -1]}, 57: {position: [0, 0, 0]}}} />
<transform id="60">
<point id="61" points="#transpose" colors="#color2" color="#ffffff" size={5} zBias={5} zOrder={1} blending="add" zWrite={false} />
</transform>
<play id="62" script={{9: {position: [0, 0, 0]}, 39: {position: [0, 0, 1]}, 57: {position: [0, 0, 0]}}} />
<vector id="63" points="#transpose" colors="#color1" color="#ffffff" start={false} end={false} width={40} opacity={3/100} blending="add" zWrite={false} zOrder={-2} />
</cartesian>
</clock>
</root>

## Full Stack

What's left is basically kicking the tires and fixing the blind spots. As such, this is not MathBox 2.0; this is MathBox 2 Alpha 1. It's still rough in the compatibility department: it easily lets you exceed, without warning, GL limits that only 70–80% of WebGL implementations in the wild satisfy. My own goal for the public release was to be able to make another one of those presentations with it, only this time 100% idiosyncratic MathBox. The result: The Pixel Factory.

Some people have assumed this talk was another multi-week tour-de-force of obsession, but in fact, rebuilding my old slides for v2 was easy and obvious. The RGBA subpixels and their labels are animated lambdas and GLSL. The multi-samples, the depth buffer columns, the tangents and normals: same thing. JavaScript twiddles the knobs while the GPU visualizes the visualizer, and in doing so, itself.

To give it a whirl in your browser, open the JSBin Sandbox. There is a quick start introduction and a list of legos.