Introduction
This is a build log for a render farm. It is unbelievably technical, so if explanations of mechanical devices and computer code are boring to you, feel free to pass on the entire thing. That’s easily done: these posts are tagged “render farm,” so they can be skipped wholesale. I’ll also continue to use this test render image of the space girl in front of Mars as the featured image, so you can gloss over these posts at a glance. Furthermore, because I anticipate the interest in these posts to be incredibly low, I’m not going to bother posting them to any social media, lest I send the wrong message. If you’re still reading, I apologize.
Abstract
A render farm, as you may know, is a way of getting multiple computers to work on the same material. Computer graphics take a notoriously long time to render, and special effects on high-resolution video take even longer, due to the number of frames needed and the size of those frames. It would be great to get a bunch of computers in a big pile to do this work while you turn your attention to more important things – like email and looking for new cat pictures. Though my usual video tasks do not require such firepower, I’m a person who dreams, and often. I envision 4K renders of beautiful scenes such as the Mars girl I posted earlier, and I know a single desktop would take weeks to render shots like that at such a resolution.
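To put some rough numbers on that (these are illustrative figures, not measurements from my machines): a ten-second shot at 24 frames per second is 240 frames, and if each 4K frame takes half an hour to render, that’s 120 hours – five solid days – of one computer doing nothing else. Spread those same frames across eight machines and the job finishes in about fifteen hours. Multiply by the number of shots in a project and the appeal of the big pile becomes obvious.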
Luckily, I find myself in possession of eight decommissioned Macintosh computers from my school. They were formerly editing systems, so they’ve been used quite a bit, but they’ve still got lots of life left in them. Equipment like this would normally be moved into a large warehouse, a process the school calls “sending it to Property.” This terminology, “property,” seems to come from a state law prohibiting any kind of sale or transfer of goods the state purchases for educational programs. The big fear is that old or retired equipment will be sold – either by instructors or staff – on some kind of black market. My school had exactly this trouble in the 1970s (a long story that involves not only porno films made on school grounds, but the eventual murder of seven innocent people), so it’s no wonder there is such a law.
However, the problem with older, used equipment is that it sits in a giant warehouse while it rusts and rots. Then it is sent to a landfill. So sending a piece of equipment “to Property” basically means they are going to take ten or fifteen years to throw it in the garbage. In the meantime, there are lots of things that could be done with this equipment, but there is no access to it. It’s kind of a shame, because some of it – like these eight computers – still has a lot of use left in it. So the trick is to figure out what resources are available and see if you can intercept them before they go into the giant “property” dump. Fortunately, the school’s policies concerning equipment like this strike most people working at the University as pretty questionable, and you can find lots of sympathetic people who also want to use things until they wear out. Keeping something in use instead of putting it out to pasture is good sense. So long as we’re not selling these things or taking them home, it’s actually a good use of the material; we’re being good stewards of the public money. I’m quite sure I’ll run my own material on the finished render farm, but it’s certainly going to help my digital effects class.
Hardware
Thus, I seem to have been successful in securing these resources. As stated above, I have eight Mac Pro 1,1 computers at 2.66 GHz, all now evenly equipped with about 10 GB of RAM each. At one point I was considering what it would take to remove the motherboards and install them in some sort of rack; the most efficient thing would be to connect them all to a centralized power source. For now, however, I think I will keep their unwieldy cases and power supplies. That’s mostly because I really wouldn’t know what I was doing if I took them apart, and I worry I’d blow things up if I tried to attach more than one motherboard to a single power supply. As it is, I’m just hoping I don’t blow the fuse in my office.
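That fuse worry is not entirely idle, by the way. I haven’t put a meter on these towers, but if each one pulls somewhere in the neighborhood of 250–300 watts under a full render load (an assumption, not a measurement), eight of them together are drawing well over 2,000 watts – and a standard 15-amp, 120-volt office circuit tops out at 1,800 watts. I may end up splitting them across two circuits.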
One thing I can do is unplug excess peripherals like the DVD drives. These machines will not be running monitors or keyboards, so that will also reduce power consumption. I’ll be able to connect to them via Screen Sharing as long as they are all on the same network.
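For what it’s worth, both remote-access switches can be flipped from the Terminal, which matters once the keyboards and monitors are gone. Here’s a sketch of what I mean – the second command uses Apple’s standard “kickstart” script for Remote Management, but double-check the flags against your OS version before trusting me:

```bash
# Turn on Remote Login (ssh):
sudo systemsetup -setremotelogin on

# Turn on Remote Management so Screen Sharing works on a headless box.
# This long path is Apple's stock "kickstart" script; the flags activate
# the agent and grant access to all local users.
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
  -activate -configure -access -on -privs -all -restart -agent
```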
In addition, I have secured from my university IT department a decommissioned Ethernet switch. This is not a router, just a switch for a local network. I’m aware that the speed of the whole farm depends on this switch – every frame and every asset will pass through it – and I know I can purchase something faster later if I need to.
The PDC
I’ve got to get all these computers talking to each other. The basic plan for a render farm is this: the render nodes (also called “slaves” or “workers”) will do the actual image processing. But who tells them what to do? They need a kind of “boss” computer that doles out the jobs and keeps track of which computer is processing which frame.
This central server is also called a “PDC,” or Primary Domain Controller, and it’s the hub of the local network. I’ve often read that the duties of the PDC are so few and so light on processing power that you should choose a relatively weak computer to perform them – the weakest one you have. There are even render farms running on ancient 1980s and ’90s computers. This makes sense. In the real world the boss is often the least-powered intelligence with the least capability, and yet his “job” is to order the others around. As in heaven, so on earth, so I think about what would make a good PDC.
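To make the “boss” job concrete, here is a toy sketch of the doling-out logic in a dozen lines of shell. This is emphatically not how Dr. Queue works internally – it has a proper network protocol and real bookkeeping – and the worker hostnames and the render command below are hypothetical stand-ins. But it’s the whole idea in miniature: walk the frame list, deal frames out to workers, wait, repeat.

```bash
#!/bin/bash
# Toy "boss": deal frames 1..240 to a pool of workers over ssh.
# Hostnames and the `render` command are hypothetical stand-ins.
WORKERS=(node1 node2 node3 node4)

frame=1
while [ "$frame" -le 240 ]; do
  for w in "${WORKERS[@]}"; do
    [ "$frame" -gt 240 ] && break
    echo "sending frame $frame to $w"
    ssh "$w" "render --frame $frame /shared/scene" &  # one frame per worker
    frame=$((frame + 1))
  done
  wait  # crude bookkeeping: wait for the whole batch before dealing out more
done
```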
At first, I thought my office iMac would be a good candidate. For one thing, it’s an all-in-one unit, so I wouldn’t have to set up a second monitor in that room. For another, I figured I could use some kind of VNC connection from home to check on the status of the render farm.
Thus began my descent into madness.
The Queue Manager Software
My research shows that the open-source render farm manager called “Dr. Queue” is just what I need. It’s notoriously awkward to build and requires abilities I do not have, but it’s also a robust system that has been used on a number of big projects (including major Hollywood films). It’s also free – did I mention that?
But open-source solutions are quite often complicated affairs full of terminal commands, which are not exactly my forte. I’m not a Linux administrator, I don’t ordinarily do command-line work, and I’m thoroughly out of my element with this project. But remember, I’m also broke, so there is no other option. I cannot buy something easy and get it to work – I’m just going to have to figure out this difficult piece of software if I want to do this project.
This, I realize with a fair degree of melancholy, is the lesson of my life. If I want anything good I’m going to have to make it myself out of other people’s trash.
As it turns out, the supposedly complicated world of Linux commands is not as bad as one would think. Though this log will seem to prove otherwise, computers are rather straightforward and refreshingly simple. When I get it wrong it’s exactly that – I’m getting it wrong, and not understanding the logic of the system.
Another feature of open-source software is that the authors, who are often loose groups of people, rely heavily on routines already developed by other open-source projects. So, in order to build the program “Dr. Queue,” I have to install several other pieces of software that Dr. Queue relies on. These are called “dependencies.” A programmer can thus use an existing routine – say, a method for choosing files or displaying windows – rather than reinventing the wheel by writing everything from scratch. These libraries are also released open source, so programmers can use them for free.
This means that in order to build any given piece of software, the dependencies must be found, must be built, and must check out OK before your particular piece of software can be built. On most modern systems this happens almost automatically – the computer can find these dependencies on its own, will build them, and will report on the results. It’s quite brilliant.
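For readers who haven’t done this dance, building one of these dependencies from source usually looks something like the following – the package name here is a made-up example, not an actual item on Dr. Queue’s dependency list:

```bash
# Unpack the source for some (hypothetical) library...
tar -xzf somelib-1.2.tar.gz
cd somelib-1.2

# ...then the classic three-step:
./configure          # checks for *its* dependencies and your compiler
make                 # compiles the library
sudo make install    # copies the results into place (usually /usr/local)
```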
Sadly, those results can sometimes be “failed,” especially when the computer is old, the software is not maintained, or the operator really does not know what he is doing.
And you can see where this is leading.