TECHNICAL DOCUMENT — INTERNAL

Dust Mote Laboratory

Visual Score Authoring System — Operations Manual

CLEARANCE: LEVEL 0 — UNRESTRICTED
VERSION: 1.0

This document describes the Dust Mote Laboratory, an internal tool used by the research team to author visual particle scores for recovered transmissions. The "dust motes" are the amber particles visible during focused listening mode. Their movement is not random — it is scored to each transmission, synchronized to the audio.

Overview

The Dust Mote Laboratory is a browser-based tool for designing, previewing, and deploying particle animation sequences synchronized to audio transmissions. Each transmission has a corresponding "visual score" — a timeline of cues that control how particles move, form patterns, change color, and respond to the audio signal.

The tool is accessed at /designer/ from the main site. It consists of three areas: a full-screen particle preview, a control panel on the right, and an audio transport bar at the bottom.

Interface Layout

Particle Preview (Center)

The main screen area shows a live preview of the particle system. A dashed circle in the center represents the transmitter panel — the area that will be obscured during focused listening. Particles are programmed to avoid this zone. What you see here is what the listener sees behind the VU meter.

Control Panel (Right Sidebar)

All particle parameters and action buttons live here. It scrolls vertically. The panel can be toggled with the "Hide Panel" button in the top-right corner. Parameters are organized into groups:

Pattern — the formation particles move toward. Each pattern has a distinct visual character and may expose additional parameters below the main controls.

Movement — controls how particles travel to their target positions. Speed is the velocity multiplier. Attraction is the pull strength. Wander adds random drift. Damping is friction. Rotation spins the entire formation.

Appearance — size multiplies each particle's radius. Brightness scales opacity. Connection distance and alpha control lines drawn between nearby particles.

Color — hue, saturation, and lightness offsets applied to the base amber palette. A hue offset of 0 keeps the amber; negative values shift toward red, positive values toward blue/cyan.

Transition — how many seconds the system takes to morph from the previous cue's state to this one. All values interpolate smoothly: positions, colors, size, speed, everything. Range is 0 to 180 seconds (3 minutes).

Actions — buttons for committing cues, generating scores, loading/deploying, and exporting.

Transport Bar (Bottom)

Contains the track selector, play/pause, time display, scrub bar, and the cue timeline. Green markers on the scrub bar show where cues are placed — these are draggable. Cue chips below the scrub bar show each cue's timestamp and pattern. The Preview button toggles playback of the committed cue sequence.

Available Patterns

Geometric

scatter — random distribution. The default state, ambient and unfocused.

grid — ordered rows and columns. Structure and control.

lattice — 3D cube of points projected to 2D. Architectural, framework-like.

concentric — expanding rings from center. Ripples, sonar, radio waves.

spiral — arms radiating outward. Journey, descent, being drawn inward. Params: arms, tightness.

fibonacci — golden ratio distribution. Organic, natural order.

ring — single large circle. Boundaries, containment.

column — vertical line of particles. Signal beam, standing alone.

horizon — flat line near the top of the screen. Ocean horizon, vast emptiness. Params: y position, spread.

helix — intertwined strands. DNA, duality.

lissajous — mathematical curves. Precision, observation.

converge — all particles collapse to a point. Use sparingly — particles disappear behind the transmitter. Keep it fast (4 seconds maximum) and always follow it with an expanding pattern such as explode or scatter.

explode — particles fly outward. Sudden release, the weapon deploying.

Narrative

drift — particles flow like ocean current. The boat, crossing water, solitude at sea. Params: speed, angle, spread.

dissolve — particles detach from a base formation and drift away. Consciousness losing coherence, reality failing. Params: rate (0–1), base pattern.

breathe — synchronized expand/contract pulse. Living presence, conscious reality maintenance. Audio-reactive: voice drives expansion. Params: rate, min/max scale.

interference — two overlapping wave systems creating moiré. The fata organa concept — false patterns from real signals. Params: freq1, freq2, angle, amplitude.

fragment — small isolated clusters orbiting independently. Survivors in the CZ, isolated consciousness. Params: groups, orbit speed, spread.

filament — thin threads between anchor points. Radio transmissions, neural connections. Params: anchors (thread count), thickness.

Audio-Reactive

waveform_reactive — horizontal wave driven by voice audio. The primary driver is vocal energy — the waveform comes alive when Marcus speaks. Bass adds slow undulation, highs add shimmer. Flat line when silent. Params: frequency, amplitude, layers.

grid_reactive — grid that pulses with audio. Center columns respond to voice, edges to bass and treble. Static grid when silent. Params: cols, rows, amplitude.
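Both reactive patterns split the spectrum into the same three bands: bass, voice, and treble. A minimal sketch of that split — the band edges in Hz are illustrative assumptions, not the tool's actual values:

```typescript
// Splits an FFT magnitude array into the three bands the reactive
// patterns respond to. Band edges (Hz) are illustrative assumptions.
type BandEnergy = { bass: number; voice: number; treble: number };

function bandEnergies(
  magnitudes: number[], // FFT bin magnitudes, 0..1, low to high frequency
  sampleRate: number,   // e.g. 44100
): BandEnergy {
  const binHz = sampleRate / 2 / magnitudes.length; // Hz covered per bin
  const avg = (loHz: number, hiHz: number): number => {
    const lo = Math.max(0, Math.floor(loHz / binHz));
    const hi = Math.min(magnitudes.length, Math.ceil(hiHz / binHz));
    if (hi <= lo) return 0;
    let sum = 0;
    for (let i = lo; i < hi; i++) sum += magnitudes[i];
    return sum / (hi - lo);
  };
  return {
    bass: avg(20, 250),       // slow undulation
    voice: avg(250, 4000),    // primary driver
    treble: avg(4000, 12000), // shimmer
  };
}
```

The voice band drives the center columns of grid_reactive and the main waveform_reactive displacement; bass and treble feed the edges and the shimmer layer.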

3D Model

stl — particles map to the surface of an uploaded STL file. Back-face culling makes the object appear opaque. Params: rotation XYZ, auto-spin, scale, perspective, offset XY. Each cue can reference a different STL file.

Workflow

Manual Scoring

1. Select a track from the dropdown and press Play.

2. Listen. Scrub to a moment where you want the visuals to change.

3. Select a pattern from the dropdown. Adjust sliders. Watch the particles respond in real time.

4. Press "● New Cue" to commit the current settings at the current playhead position.

5. A cue chip appears in the timeline and a green marker appears on the scrub bar.

6. Repeat for each visual change across the track.

7. Press Preview to watch the full sequence play back with the audio.

8. Press "↑ Deploy" to push the score to the live site (password required).

Editing Cues

Click a cue chip to select it. The chip highlights green, the playhead jumps to that time, and all sliders load that cue's settings. The button changes to "● Update Cue". Adjust parameters and press Update to overwrite in place. Click the scrub bar to deselect and return to creating new cues.

Drag the green markers on the scrub bar to adjust when a cue fires. The audio follows the drag so you can hear the new position. Delete a cue by clicking the × on its chip.

Auto Score

The "⚡ Auto" button fetches and decodes the track's audio, analyzes energy levels and spectral brightness in half-second windows, detects section boundaries, classifies each section by intensity and trend, and assigns patterns with appropriate defaults. This is a fast first pass — expect to adjust individual cues afterward.
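The first two stages of that pass can be sketched as follows. This is a simplified model only: the window size and jump threshold are assumptions, and the real analysis also folds in spectral brightness and trend classification.

```typescript
// RMS energy in half-second windows over decoded audio samples.
function windowRms(samples: Float32Array, sampleRate: number, windowSec = 0.5): number[] {
  const win = Math.max(1, Math.floor(sampleRate * windowSec));
  const out: number[] = [];
  for (let start = 0; start < samples.length; start += win) {
    const end = Math.min(samples.length, start + win);
    let sum = 0;
    for (let i = start; i < end; i++) sum += samples[i] * samples[i];
    out.push(Math.sqrt(sum / (end - start)));
  }
  return out;
}

// Section boundary = window whose RMS differs from the previous one
// by more than `jump` (absolute difference; threshold is an assumption).
function sectionBoundaries(rms: number[], jump = 0.15): number[] {
  const bounds: number[] = [];
  for (let i = 1; i < rms.length; i++) {
    if (Math.abs(rms[i] - rms[i - 1]) > jump) bounds.push(i);
  }
  return bounds;
}
```

Each boundary index maps back to a timestamp (index × 0.5 s), which is where the auto-generated cue is placed.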

AI Score

The "🧠 AI" button performs the same audio analysis but also fetches the track's written transcript from the repository. Both are sent to an AI model along with descriptions of all available patterns and narrative guidelines. The AI generates cues that match the story's meaning — not just its volume. This feature requires network access to the API.

Loading and Deploying

"↓ Load" fetches the currently deployed score for the selected track and loads it into the timeline for editing. "↑ Deploy" pushes the current score JSON (and any STL model files) directly to the GitHub repository. Cloudflare auto-deploys from the main branch, so changes appear on the live site within about 30 seconds. Deploy requires a password.

Technical Notes

Cue Format

Scores are stored as JSON files in the /data/ directory (e.g., score_intro.json, score_ep01.json). Each file contains a track name, duration, and an array of cue objects. Each cue specifies a timestamp and only the fields that change — unspecified fields carry forward from the previous cue.
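The carry-forward rule means the effective state at any timestamp is the accumulation of every cue up to it. A minimal sketch of that resolution — the field names here are illustrative, not the exact schema:

```typescript
// A cue stores a timestamp plus only the fields that change.
type Cue = { t: number; [field: string]: unknown };

// Effective state at `time`: walk cues in timestamp order, letting
// later cues override fields while unset fields carry forward.
function stateAt(cues: Cue[], time: number): Record<string, unknown> {
  const state: Record<string, unknown> = {};
  for (const cue of [...cues].sort((a, b) => a.t - b.t)) {
    if (cue.t > time) break;
    for (const [k, v] of Object.entries(cue)) {
      if (k !== "t") state[k] = v;
    }
  }
  return state;
}
```

For example, with cues `[{ t: 0, pattern: "scatter", hue: 0 }, { t: 30, pattern: "drift" }]`, the state at t = 45 has pattern "drift" but still hue 0, carried forward from the first cue.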

Transitions

When a cue fires, all numeric values (positions, colors, physics, size) interpolate from the previous state to the new state over the transition duration using a smoothstep curve. Pattern target positions also blend — particles slide from their old formation to the new one. If a cue's transition duration exceeds the gap to the next cue, it is clamped to fit.
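The interpolation above can be sketched for a single numeric value. The smoothstep curve and the clamp-to-gap rule are from this manual; the function signature is illustrative:

```typescript
// Standard smoothstep easing: 0 at t=0, 1 at t=1, zero slope at both ends.
function smoothstep(t: number): number {
  const x = Math.min(1, Math.max(0, t));
  return x * x * (3 - 2 * x);
}

// One numeric value, `elapsed` seconds into a transition whose
// duration is clamped to the gap before the next cue.
function transitionValue(
  from: number,
  to: number,
  elapsed: number,
  duration: number,
  gapToNextCue: number,
): number {
  const d = Math.min(duration, gapToNextCue); // clamp to fit
  const k = d <= 0 ? 1 : smoothstep(elapsed / d);
  return from + (to - from) * k;
}
```

At the midpoint smoothstep gives exactly 0.5, but it eases in and out, so values start and finish their morph gently rather than snapping.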

Dead Zone

The center ~23% of the screen is occupied by the transmitter panel and VU meter. All patterns automatically exclude this zone — target positions inside it are pushed outward. A physics repulsion force also prevents particles from drifting through center during transitions.
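The target-position exclusion can be sketched as a radial push. Treating the ~23% figure as a radius fraction of the smaller screen dimension is an assumption for illustration:

```typescript
// Push any target position inside the central exclusion circle
// radially out to its edge. zoneFraction as a radius fraction of the
// smaller screen dimension is an assumption, not the tool's exact rule.
function pushOutOfDeadZone(
  x: number, y: number,
  width: number, height: number,
  zoneFraction = 0.23,
): { x: number; y: number } {
  const cx = width / 2, cy = height / 2;
  const radius = Math.min(width, height) * zoneFraction;
  const dx = x - cx, dy = y - cy;
  const dist = Math.hypot(dx, dy);
  if (dist >= radius) return { x, y };                // already outside
  if (dist === 0) return { x: cx + radius, y: cy };   // dead center: pick a direction
  const scale = radius / dist;
  return { x: cx + dx * scale, y: cy + dy * scale };  // slide to the edge
}
```

This only corrects target positions; the separate repulsion force handles particles that would otherwise drift through the center mid-transition.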

Mobile

On mobile devices, particle count is reduced to 150 (from up to 300 on desktop). Movement uses delta-time normalization for consistent speed regardless of framerate. The VU meter drives needle movement from pre-analyzed volume data rather than live AudioContext analysis, preserving background playback.
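Delta-time normalization amounts to expressing velocities per reference frame and scaling by the actual frame interval. A minimal sketch, assuming a 60fps reference:

```typescript
// Velocities are tuned against a 60fps reference frame, then scaled
// by the real frame interval so motion covers the same distance per
// second at any framerate.
const REFERENCE_FRAME_MS = 1000 / 60;

function step(position: number, velocityPerFrame: number, frameMs: number): number {
  const dt = frameMs / REFERENCE_FRAME_MS; // 1.0 at exactly 60fps, 2.0 at 30fps
  return position + velocityPerFrame * dt;
}
```

One 30fps frame therefore moves a particle exactly as far as two 60fps frames, which is why the preview looks the same on a throttled phone.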

STL Pipeline

STL files uploaded in the designer are stored per-cue. On deploy, they are pushed to /models/ in the repository, and the score JSON references them by URL. The atmosphere loads them via fetch, parses the triangles, distributes particles across surfaces (area-weighted), and projects with rotation and perspective. Back-face culling redirects particles from hidden surfaces to visible ones, doubling density on the side facing the viewer.
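The area-weighted distribution can be sketched as two standard steps: pick a triangle with probability proportional to its area, then pick a uniform point on it with the barycentric square-root trick. This is a sketch of the technique, not the tool's exact code:

```typescript
type Vec3 = [number, number, number];
type Tri = [Vec3, Vec3, Vec3];

// Triangle area = half the magnitude of the edge cross product.
function triArea([a, b, c]: Tri): number {
  const u: Vec3 = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  const v: Vec3 = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  const cx = u[1] * v[2] - u[2] * v[1];
  const cy = u[2] * v[0] - u[0] * v[2];
  const cz = u[0] * v[1] - u[1] * v[0];
  return 0.5 * Math.hypot(cx, cy, cz);
}

// One uniform sample over the whole surface: area-weighted triangle
// choice, then uniform barycentric coordinates on that triangle.
function samplePoint(tris: Tri[], rand: () => number = Math.random): Vec3 {
  const areas = tris.map(triArea);
  const total = areas.reduce((s, a) => s + a, 0);
  let pick = rand() * total;
  let tri = tris[tris.length - 1];
  for (let i = 0; i < tris.length; i++) {
    pick -= areas[i];
    if (pick <= 0) { tri = tris[i]; break; }
  }
  const r1 = Math.sqrt(rand()), r2 = rand(); // sqrt keeps density uniform
  const [a, b, c] = tri;
  const w0 = 1 - r1, w1 = r1 * (1 - r2), w2 = r1 * r2;
  return [
    w0 * a[0] + w1 * b[0] + w2 * c[0],
    w0 * a[1] + w1 * b[1] + w2 * c[1],
    w0 * a[2] + w1 * b[2] + w2 * c[2],
  ];
}
```

Without the area weighting, small triangles would get as many particles as large ones and the surface density would look blotchy.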

Pattern Selection Guide

When scoring a transmission, consider what the listener is hearing at each moment:

Silence or ambient tone → scatter, drift, horizon, breathe. Dim, slow, minimal.

Marcus speaking calmly → filament, breathe, fibonacci. Measured, contemplative.

Marcus speaking with intensity → waveform_reactive, grid_reactive. Let the voice drive the visuals.

Describing the ocean or the boat → drift. Let it flow.

Describing the Containment Zone → dissolve, fragment, interference. Reality is uncertain.

Describing consciousness or perception → breathe, interference. The fata organa itself.

Describing a specific object or place → stl (if a model exists), lattice, concentric.

Emotional climax → fast transition to waveform_reactive or explode. Then slow dissolve or breathe as it settles.

End of a section → converge (brief, fast) → explode or scatter. Collapse and release.

End of the transmission → slow fade to scatter or horizon. Final converge as the signal dies.