Dopplerender: A turbo boost for Blender animation rendering


Dopplerender teaser image

The problem
If you’ve ever tried to make a hand-drawn animation, you’ll appreciate how much faster computer-generated animation really is. But for all its whiz-bangery, there’s one thing that even CGI is still glacially slow at doing: the rendering.

Yes, a single gorgeous frame of animation might only take 3 minutes to put together, but if your video is 5 minutes long at 30 frames per second, there will still be another 8,999 equally gorgeous images waiting to be assembled. That’s 27,000 minutes, or 450 hours. Who wants to spend 19 days waiting for a five-minute animation?

For all the years I’ve worked in the industry, this has been a problem that has scraped at the tender inside flesh of my creative happy-place. I know that frames 28-53 are identical to frame 27, because the character has paused to think. And I also know that when a character walks into the room, he passes through a sequence of identical repeating positions. (It’s called a “walk cycle.”)

But even though I know these frames are redundant, I’ve also learned that trying to save time by the “render once and reuse” method is a recipe for disaster. No matter how carefully you plan it, you always end up missing a step somewhere, and the result is a herky-jerky mess when you assemble the final frames.

If only there was a way to tell the computer which frames were duplicates. At least then you could let the computer handle the bookkeeping for you and maybe avoid most of the erroneously missing or over-duplicated frames. But even if there were such a feature, managing that list of duplicates would probably end up eating more time than you save. So like most people in the industry, I’ve grumbled about this for years and then resigned myself to spending the additional computation (and calendar) time to let the computer churn out the same image over and over again. At least we know it will work.

The solution
But then a few days ago, I had the brainwave that has eluded me for decades. For all the years I’ve been idly puzzling over this problem, I’ve been approaching it from the perspective of trying to teach the rendering code to look at the modeling/rendering parameters and recognize when the scene at frame 27 is in exactly the same state as at frame 53. But that turns out to be really, really complicated. Plus, it wouldn’t catch any of the cases where the model differed but the images were identical anyway. (For example, if your dancing iguana goes from a rotation of 0 degrees to a rotation of 360 degrees, the variables are different but the images will be exactly the same.)

It turns out I needed to forget about the rendering code completely and just use my eyes. Or rather, the computer’s eyes. To explain this the easy way, let’s look at the logo image again.

Dopplerender logo

Imagine that my graphic designer charges me $100 for every new letter he designs, but duplicating one he’s already created only costs me a penny. There are twelve letters in “dopplerender,” so having him design each letter from scratch is going to set me back $1200. But if I notice that five of those letters are duplicates of letters he’s already designed, I can take advantage of that optimization and reduce my bill to $700.05. That’s pretty serious savings. And all I had to do was a quick visual comparison.
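The same arithmetic can be checked in a couple of lines of Python (the prices are just the ones from the analogy; the total is kept in cents to avoid floating-point fuzz):

```python
word = "dopplerender"
unique = len(set(word))            # 7 distinct letters: d, o, p, l, e, r, n
duplicates = len(word) - unique    # 5 letters that reuse an existing design
cost_cents = unique * 100_00 + duplicates * 1   # $100 per new letter, 1 cent per reuse
print(f"${cost_cents / 100:.2f}")  # $700.05, down from a naive $1200.00
```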

Well, that’s exactly what dopplerender does with your animation. The trick comes from putting two unrelated ideas together:

  1. Two images that differ from each other, even slightly, will almost always still differ after being reduced to a tiny size. Sure, at some point, the differences will get lost in the statistical noise and the two images will become bit-wise identical, but for most non-trivial resolutions, there will always be some tiny differences between the color values of a pixel or two.
  2. A digital “fingerprint” can be created very, very quickly for any computer file. The fingerprints will be exactly the same for any files with identical contents, and they will differ otherwise—even for images that are quite similar—so long as at least one bit of information differs between them.
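Idea #2 is plain file hashing. I haven’t reproduced dopplerender’s actual fingerprint code here, but Python’s standard hashlib module is the usual way to do it; two files produce the same digest exactly when their bytes are identical:

```python
import hashlib

def fingerprint(path):
    """Hash a file's bytes; identical contents -> identical digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)  # stream in 64 KiB chunks so large frames never fill memory
    return h.hexdigest()
```

Comparing two short digests is effectively free, no matter how large the original images were.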

And here’s how that goes together to make dopplerender work:

  1. Upon launch, dopplerender tells Blender[1] to render a tiny, low-fidelity thumbnail image of every frame in the animation. By setting the resolution to 5% of the original and reducing the number of samples computed per pixel, these thumbnails are lightning fast to render, taking about 0.1 seconds per frame in my tests.
  2. It then computes a fingerprint for every one of those thumbnail frames and looks for duplicates. If any two frames have the same fingerprint, they are almost certain to be duplicate images. Images that share a fingerprint are bundled together into clusters.
  3. For each cluster, one of the frames is chosen and rendered at full resolution and fidelity.
  4. Then for each of the other members of the cluster, that image file is duplicated rather than being re-rendered. (Yes, symbolic links are used if the OS supports them, so that disk space is saved as well as computation time.)
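The steps above, minus the Blender-specific rendering calls, can be sketched like this (the function and path names are illustrative, not dopplerender’s actual API):

```python
import hashlib
import os
import shutil
from collections import defaultdict

def cluster_by_thumbnail(thumbs):
    """thumbs maps frame number -> thumbnail path. Frames whose thumbnails
    are byte-identical end up in the same cluster."""
    clusters = defaultdict(list)
    for frame in sorted(thumbs):
        digest = hashlib.sha256(open(thumbs[frame], "rb").read()).digest()
        clusters[digest].append(frame)
    return list(clusters.values())

def fill_cluster(cluster, render_full, out_path):
    """Render the first frame of a cluster at full quality, then symlink the
    rest to it (falling back to a plain copy where symlinks aren't supported)."""
    master = render_full(cluster[0])      # expensive: one full-resolution render
    for frame in cluster[1:]:
        try:
            os.symlink(master, out_path(frame))   # cheap: shares the rendered file
        except OSError:
            shutil.copyfile(master, out_path(frame))
```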

So, how well does it work? That depends on how much repetition is in your animation. For my first test, I created a simple rotating cube that turns 10° per frame over 60 frames. Dopplerender correctly identified that only 9 of those frames were unique. (Remember that after rotating through 90°, the cube looks the same again, just with a different face toward the camera.)
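That count follows directly from modular arithmetic: thanks to the cube’s 90° rotational symmetry, only the rotation angle modulo 90° matters. A quick sanity check with the numbers from the test:

```python
step, frames, symmetry = 10, 60, 90   # degrees per frame, frame count, cube symmetry
unique_angles = {(f * step) % symmetry for f in range(frames)}
print(len(unique_angles))   # 9 distinct orientations: 0, 10, ..., 80 degrees
```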


Unique cube orientations

More importantly, how much time was saved? A full naive render of this sequence at HD resolution took 14 minutes, while the dopplerender run took only 2 minutes. And that included the time needed to render out all the tiny thumbnail images and compute the fingerprint analysis as well.

Now, you might think that an animation wouldn’t typically have 85% redundancy like this, but remember that the very techniques used to make animations easier to construct (holds, pose libraries, phonetic lipsync, etc.) mean that, while the animation might not be visibly cyclical, many of them are composed of a limited number of unique positions that appear in different orders.

And dopplerender finds those too.

Frame grab of animated Jeff’s talking head.

Test case #2
In this case, I examined a talking-head animation from my ongoing series of YouTube videos for writers, called The 5-Minute Immersion Lab. The animation is pretty limited: there are only three key mouth positions and three head-tilt angles, but when they’re put together, the result is a surprisingly watchable animation. I’ve optimized the hell out of the scene design and rendering settings, resulting in a render time of about 0.7 seconds per frame, so dopplerender isn’t going to save me days of rendering time here. But any time saved could also be used to increase the visual richness of the image. It isn’t always about saving time.

Of the 10,808 frames in the sequence, 5,258 were identified as redundant, and dopplerender cut the render time in half.

Test case #3
Kowtow animation

Here’s another sequence taken from a new 5-Minute Immersion Lab episode I’m working on. This time it’s just a simple kowtow sequence. In the past when I’ve animated a repeating loop like this, I’ve taken the time to eliminate the held frames at the end of the motion and then duplicated those images manually into the sequence after rendering. (Can you tell that I really hate blatantly redundant rendering?)

But with dopplerender to fall back on, this time I ignored the repetitions and just keyframed the entire thing, complete with held poses. Of the 60 frames in the sequence, dopplerender correctly recognized that only 28 of them were unique, and did the duplicating for me, again cutting the total render time by more than half. So dopplerender is not just saving time—it’s also reducing project complexity.

Anyway, this cake is by no means fully baked. There are still a number of tests to run and potential tweaks to add, but I’ve been using it for a couple of weeks now and it seems stable enough to be useful, so I thought I’d share it out and see if anybody else is irritated by redundant rendering costs, and wants to help with the tweaking.

Warning: This is very much an alpha release. At this time, I’m not recommending the script for people who are not comfortable working with Python code and shell scripts. There are still some hard-coded entries in the script that will have to be modified to make it work on your machine, and it has only been tested on my own computer, running Linux. If there’s sufficient interest in the script, I’m sure we’ll be able to get a version of dopplerender ready that hides the messy bits and presents a simple user interface for less “code-comfy” users.

And if you’ve read all the way to here and still want a copy to play with, here you go: Download. I look forward to hearing your feedback in the comments below.

[1] This technique should be applicable to any animation system, but since I use Blender, that’s the one I wrote it for.

About the author

Jefferson Smith is a Canadian fantasy author, as well as the founder, chief editor and resident proctologist of ImmerseOrDie. With a PhD in Computer Science and Creativity Systems compounded by a life spent exploring most art forms for fun and profit, he is underqualified in just about everything. That's why he writes.