Building The Lab: Interactive Experiments in the Browser
The Lab started as a question: what if our club's website was itself a demo of what we could build? Every experiment in the Lab is a working proof of an idea.
The Architecture Decision
We use a single dynamic route — app/lab/[slug]/page.tsx — that renders any experiment by slug. The registry (lib/lab-registry.tsx) is the single source of truth. Adding an experiment is three steps: add an entry to the registry, build the component, wire the switch case.
This keeps routing zero-config and component code isolated. Each experiment is self-contained: it can use useEffect, useRef, WebGL, WebRTC, or WebSockets without affecting anything else.
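In outline, the pieces might fit together like this. The field names, slugs, and import paths below are illustrative guesses at the shape, not the Lab's actual code:

```tsx
// lib/lab-registry.tsx (illustrative shape of the registry)
export type LabExperiment = { slug: string; title: string; description: string };

export const LAB_EXPERIMENTS: LabExperiment[] = [
  { slug: "ascii-camera", title: "ASCII Camera", description: "Your webcam, rendered as text" },
  // ...one entry per experiment
];
```

The route file then resolves the slug and renders the matching component:

```tsx
// app/lab/[slug]/page.tsx (one dynamic route renders every experiment)
import { notFound } from "next/navigation";
import { LAB_EXPERIMENTS } from "@/lib/lab-registry";
import { AsciiCamera } from "@/components/lab/ascii-camera";

export default function LabExperimentPage({ params }: { params: { slug: string } }) {
  // Unknown slugs 404; the registry is the single source of truth for what exists.
  if (!LAB_EXPERIMENTS.some((e) => e.slug === params.slug)) notFound();

  // The "wiring" step: map a slug to its component.
  switch (params.slug) {
    case "ascii-camera":
      return <AsciiCamera />;
    // ...one case per experiment
    default:
      return notFound();
  }
}
```

Metadata lives in the registry and rendering lives in the switch, so the route file is the only place that knows which component belongs to which slug.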
The Camera Experiments
Three experiments use getUserMedia — the ASCII camera, animal vision filter, and neural interface. The browser permission model means we can't preload the stream. We initialize it lazily on mount and tear it down on unmount.
```tsx
useEffect(() => {
  let stream: MediaStream | undefined;
  let cancelled = false;
  navigator.mediaDevices.getUserMedia({ video: true }).then((s) => {
    // Unmounted while the permission prompt was open: stop the tracks right away.
    if (cancelled) return s.getTracks().forEach((t) => t.stop());
    stream = s;
    if (videoRef.current) videoRef.current.srcObject = s;
  });
  return () => {
    cancelled = true;
    stream?.getTracks().forEach((t) => t.stop());
  };
}, []);
```
The teardown matters. Forgetting it means the camera light stays on after the user navigates away. That's a trust issue, not just a bug.
MediaPipe in Next.js
The neural interface and hand-mouse experiments use MediaPipe's Holistic model. MediaPipe loads a WASM binary and several model files. In Next.js App Router, this means:
- Dynamic import with `ssr: false` — MediaPipe is browser-only (see the sketch below)
- Loading state while the WASM initializes (can take 2–3 seconds on first load)
- A `locateFile` override to serve model files from the correct CDN path
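Concretely, the wiring might look like this. The component path, loading copy, and CDN host are illustrative choices rather than the Lab's actual code:

```tsx
"use client";

import dynamic from "next/dynamic";

// ssr: false keeps MediaPipe and its WASM out of the server bundle entirely;
// the loading fallback covers the seconds of WASM initialization.
const NeuralInterface = dynamic(() => import("@/components/lab/neural-interface"), {
  ssr: false,
  loading: () => <p>Initializing model…</p>,
});

export default function NeuralInterfaceLoader() {
  return <NeuralInterface />;
}
```

Inside the client component, `locateFile` redirects every asset request (the WASM binary, the model files) to a CDN copy of the package:

```tsx
import { Holistic } from "@mediapipe/holistic";

const holistic = new Holistic({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/holistic/${file}`,
});

holistic.setOptions({ modelComplexity: 1, refineFaceLandmarks: true });
holistic.onResults((results) => {
  // results.poseLandmarks, results.faceLandmarks, results.leftHandLandmarks, ...
});
// Frames are fed once per tick with: await holistic.send({ image: videoElement });
```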
The performance is surprisingly good. Holistic (face + hands + pose) runs at 30fps on a mid-range laptop. The model is doing extraordinary things very quietly.
What We Learned
The Lab forced us to get comfortable with the browser's low-level APIs. OffscreenCanvas, requestAnimationFrame timing, Worker threads for heavy computation, WebGL buffer management. These aren't framework skills. They're platform skills.
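One concrete example of that kind of platform skill: handing a canvas to a Worker so per-frame drawing never blocks the main thread. A sketch, assuming a bundler that understands `new URL(..., import.meta.url)` worker imports and a browser with OffscreenCanvas support (the worker filename is illustrative):

```ts
// Main thread: transfer ownership of the canvas to a dedicated Worker.
const canvas = document.querySelector("canvas")!;
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker(new URL("./render.worker.ts", import.meta.url));
worker.postMessage({ canvas: offscreen }, [offscreen]);
```

```ts
// render.worker.ts: the draw loop runs here, so heavy per-frame work stays off the UI thread.
self.onmessage = (event: MessageEvent<{ canvas: OffscreenCanvas }>) => {
  const ctx = event.data.canvas.getContext("2d")!;
  const draw = () => {
    ctx.clearRect(0, 0, event.data.canvas.width, event.data.canvas.height);
    // ...expensive pixel work goes here...
    requestAnimationFrame(draw); // rAF is available in workers that support OffscreenCanvas
  };
  requestAnimationFrame(draw);
};
```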
That's the point of the Lab. Not to ship features — to develop taste for what the platform can actually do.