After reviewing my archived experiments from a decade ago, I’ve decided to call that effort a “proof of concept” and go back to the drawing board for a full redesign.
◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇
There were three significant problems: the software, the wiring, and the case. That might sound like the entire project, but it’s not quite. I haven’t changed the actual sensors I’m using or the principles of operation, and I still like the name and logo, but almost everything else is being redone.
Microcontroller
The original design used an Arduino Nano, tethered to a computer via USB for both power and data exchange. But the more I think about it, the more I don’t love that physical tether. Having a cord wrapped around your arm is okay when you’re working at your desk, but I can imagine lots of scenarios where a performer might need greater mobility. So I’ve abandoned the Nano in favor of an ESP32, which adds both WiFi and Bluetooth as possible comms links.
It will also add the ability to host a config panel right on the device over local HTTP, allowing the operator to tune the capture settings using their phone, instead of having to clutter the device with fancy selector knobs and display screens.
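To make the idea concrete, here's a minimal desktop sketch of a device-hosted config endpoint. The real thing would be an ESP-IDF HTTP server written in C; the `/config` path and the setting names (`sample_rate_hz`, `joints_enabled`) are my own placeholders, not anything PuppetFist actually exposes.

```python
# Desktop sketch of the device-hosted config panel idea. On the real
# device this would be an ESP-IDF HTTP server in C; the endpoint path
# and parameter names here are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical capture settings the operator could tune from a phone.
config = {"sample_rate_hz": 30, "joints_enabled": ["torso", "jaw"]}

class ConfigHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the current settings as JSON at /config.
        if self.path == "/config":
            body = json.dumps(config).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def do_POST(self):
        # Accept updated settings posted from the phone's browser.
        if self.path == "/config":
            length = int(self.headers.get("Content-Length", 0))
            config.update(json.loads(self.rfile.read(length)))
            self.send_response(204)
            self.end_headers()
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the sketch quiet

def run(port=8080):
    HTTPServer(("0.0.0.0", port), ConfigHandler).serve_forever()
```

A phone browser pointed at the device's address could then read and write those settings with plain GET and POST requests, no extra app required.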
And because the ESP32 has its own storage, we can even capture a few hours of performance and store it locally on PuppetFist for download later. That makes an external computer optional, rather than required, during the performance.
Data Capture
The current configuration will capture the puppet’s torso and mouth movements, but a full performance includes vocals as well. I briefly considered capturing audio directly on PuppetFist, but for now, that will have to be captured on a different device, which we’ll sync with the physical performance later. But how?
Each performance is called a take, so there will be a motion take file, and a matching audio take file. As with other film techniques, the two will be synced by “slating” them. To do that, I’ve added two buttons on the device: a green button that gets pressed at the start of a take, and a red one to mark the end.
By clearly encoding the start and stop points, PuppetFist can capture hours of performance without wasting space on the long stretches between takes. Despite its tiny storage capacity, the ESP32-WROOM module can actually record anywhere from 2–12 hours of performance data before filling up, depending on the frame sampling rate you've set and which joint angles are being recorded.
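A quick back-of-envelope calculation shows where a range that wide comes from. The numbers below are assumptions for illustration, not measured values: roughly 3 MB of flash left over for take data and 2-byte sensor samples.

```python
# Back-of-envelope recording-time estimate behind a "2-12 hours" range.
# All numbers here are assumptions, not measured values: a WROOM module
# with ~3 MB of flash free for take data, and 2-byte sensor samples.
USABLE_FLASH_BYTES = 3 * 1024 * 1024
BYTES_PER_SAMPLE = 2

def recording_hours(frame_rate_hz, channels):
    """Hours of capture before flash fills, ignoring file overhead."""
    bytes_per_second = frame_rate_hz * channels * BYTES_PER_SAMPLE
    return USABLE_FLASH_BYTES / bytes_per_second / 3600

# A dense capture (30 fps, 8 joint angles) fills flash much faster
# than a sparse one (10 fps, 4 angles):
dense = recording_hours(30, 8)   # roughly 1.8 hours
sparse = recording_hours(10, 4)  # roughly 10.9 hours
```

So just varying the frame rate and the number of recorded channels spans almost the whole quoted range.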
When sync-slating is enabled, PuppetFist will emit a brief audible chirp at the start and end of each take. These chirps encode the current take number, which is used to name the data files: take-014.pfst, take-015.pfst…
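One simple way to pack a take number into a chirp is binary FSK: send the number as a fixed count of bits, each bit as one of two tones, bracketed by a framing tone. To be clear, the frequencies, bit count, and framing below are my assumptions for a sketch; the post doesn't pin down PuppetFist's actual encoding.

```python
# Sketch of one possible chirp scheme for slating: the take number is
# sent as 8 bits, LSB first, using two tones (binary FSK). The exact
# frequencies, bit count, and framing are assumptions, not PuppetFist's
# actual scheme.
MARK_HZ = 2200   # tone for a 1 bit
SPACE_HZ = 1200  # tone for a 0 bit
SYNC_HZ = 3000   # framing tone at both ends of the chirp
TONE_MS = 40     # duration of each tone

def encode_take(take_number, bits=8):
    """Return the chirp as a list of (frequency_hz, duration_ms) tones."""
    tones = [(SYNC_HZ, TONE_MS)]
    for i in range(bits):
        bit = (take_number >> i) & 1
        tones.append((MARK_HZ if bit else SPACE_HZ, TONE_MS))
    tones.append((SYNC_HZ, TONE_MS))
    return tones

def decode_take(tones, bits=8):
    """Invert encode_take: strip the framing tones, read bits LSB first."""
    value = 0
    for i, (freq, _ms) in enumerate(tones[1:-1]):
        if freq == MARK_HZ:
            value |= 1 << i
    return value
```

With 40 ms tones, an 8-bit take number plus framing is under half a second of audio, short enough to live at the head and tail of every take without being intrusive.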
But more importantly, those chirps show up in the captured audio, too. PuppetFist will have a post-capture tool that can find those codes in the audio waveform and match each telemetry take with its corresponding audio capture file. It can even correct the timing drift that happens when two different capture devices have slightly different internal clock speeds.
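The drift correction falls out of the slating almost for free: because chirps mark both the start and the end of a take, the motion recorder and the audio recorder each report two timestamps for the same two real-world events, and a linear warp maps one clock onto the other. The function and its timestamps below are invented for illustration, not the actual post-capture tool.

```python
# Sketch of the clock-drift correction idea: the start and end chirps
# give both recorders two timestamps for the same two real-world events,
# so a linear warp maps motion-clock times onto audio-clock times.
# The function and example numbers are illustrative, not the real tool.
def align_to_audio(motion_t, motion_start, motion_end,
                   audio_start, audio_end):
    """Map a motion-clock timestamp onto the audio clock.

    motion_start/motion_end: chirp times as seen by the motion recorder.
    audio_start/audio_end:   the same chirps as located in the audio file.
    """
    # A ratio other than 1.0 means the two clocks ran at different speeds.
    ratio = (audio_end - audio_start) / (motion_end - motion_start)
    return audio_start + (motion_t - motion_start) * ratio

# Example: the motion clock ran 0.1% fast, so a take that is 100.0 s
# long in the audio file spans 100.1 s on the motion clock.
t = align_to_audio(50.05, 0.0, 100.1, 2.0, 102.0)  # midpoint maps to 52.0
```

Two sync points can only correct a constant rate difference, but for consumer-grade oscillators over a few minutes of take, that's usually the dominant error.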
Speaking of Software
I’ve also said goodbye to the old Arduino code infrastructure. Yes, you definitely can program an ESP in the Arduino IDE, but I’ve never loved the user experience there. It’s a great platform for learners, but not very efficient for seasoned programmers, so I’ve moved on to other software platforms that align better with my lab workflow. For this project, I’m currently taking ESP-IDF for a spin.
Today’s Status
As of this very moment, I’m working my way through a wiring test, with all the components cobbled together on a breadboard. I’ve got the rumble board working (as a stand-in for the chirp board), a common-cathode RGB LED status light, and three push buttons (Start, Stop, and Calibrate).
The proximity sensor (QRD1114) is hooked up but not working in software yet. Once I get that resolved later today, I’ll be adding the orientation sensors (GY-25) and a main power switch.
And when that’s done, with all sensors sending correct numbers to the serial output stream, I’ll call the wiring plan complete and move on to case design, then the comms protocols between PuppetFist and FistPump, the client app that receives data from PuppetFist and syncs it with its corresponding audio.
So there’s still lots to do.
Update: Joint angles and jaw position all reporting green now. Time to start building the case.