UPDATED 16:46 EDT / JANUARY 25 2010

Three Words You Wouldn’t Hear 10 Years Ago: Homebrew Render Farm

It all started when I wanted some new network icons.

Somehow all the ones I used in the past were made by an art department I strangely no longer have access to, and I really don’t want to pay an agency to make them for me. I could probably outsource the work, but I don’t want to have to explain exactly what I want, so sometimes it’s just easier to do it yourself (and learn a few new things while you’re at it).

Plus it was a fun way to spend an evening.

So off I went, using a 3D design program known as Rhinoceros.

Incidentally, Rhinoceros is an insanely cool name. I wish I could name my products things like that. A new switch is the Raven 98000, and over here we have the Magpie 5600 connected to the Corvus Corax 11000. You catch my drift; cooler names should be used in networking products, instead of the secret-decoder-ring-needing acronyms and SKUs limited to 17 characters (who picked 17, again?). I don’t expect this to change anytime soon, so I’ll get off my soapbox.

So we have Rhino running, doing a bit of drawing, getting the shape right and such. Then we couple that with a ray-trace rendering app, in this case V-Ray for Rhino.

You get a lot of choices about textures, lighting, and such that frankly are too plentiful for a neophyte, but in the hands of an expert this is clearly a pretty powerful program. This is where it gets fun, though: there is an option in V-Ray for ‘distributed rendering’.

Nerd alerts went off throughout my office as I madly scrambled around, loading a VM with the V-Ray distributed rendering client onto every machine I could get my hands on. Old Mac laptops, an 8-core Mac Pro, a 4-core Mac Pro, even a 2-core Mac Mini fell victim to this intimidating piece of software. I then realized I had some network issues, so I quickly patched through a few more Cat6 ports from the office to the wiring closet, locked the ports down at 1000-full, and moved my IP phone to a PoE port while I was at it.
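For the record, the port lockdown itself is only a couple of lines of switch configuration. Here is a rough sketch assuming a Cisco IOS-style switch; the interface numbers are placeholders for wherever your drops actually land:

    ! hypothetical port config - interface numbers are placeholders
    interface GigabitEthernet0/12
     description render node drop
     speed 1000
     duplex full
     spanning-tree portfast
    !
    interface GigabitEthernet0/24
     description IP phone on PoE
     power inline auto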

After it was all said and done and running like a champ, the coolest part was watching the Mac Pro spawn multiple execution threads, which you could see rendering in real time. Render times were cut by about 70% compared to using just one machine.

Lessons Learned

It wasn’t all roses. Here are a few things I learned, and a few things I think the software developers should focus on in future versions.

1) Neither V-Ray nor Rhinoceros has a native Mac version yet. This is frustrating, but you can work around it with VMware Fusion 3; both worked pretty well in a Windows XP VM. I am still not up to Windows 7, being happy to have skipped Vista.

2) Since you are running it in a VM, note this: on the station with Rhinoceros, be sure to tweak your settings to give the VM as many CPU cores as you can. I set mine up for 4 cores and 3-4GB of RAM. This machine needs more memory; it could easily be happy with 16GB in the VM. I am looking forward to the native version. The relevant VM settings are sketched below.
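For the curious, the two settings in question live in the VM’s .vmx file (Fusion also exposes them in the VM’s Settings pane). A minimal sketch, assuming you want 4 virtual cores and 4GB of guest RAM; memsize is in megabytes:

    numvcpus = "4"
    memsize = "4096"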

3) On the distributed render nodes you don’t need a whole lot of memory, as the work seems mostly CPU-intensive, at least for the way I was using it. I set mine at 512MB of RAM and let those machines continue their happy servitude: streaming iTunes, serving photos, keeping my Drobo happy, and generally performing well. Even the TweetDeck machine. On these and the master you will have to switch the VM’s network interface setting from NAT (the default) to Bridged. You will probably also have to go to the console and do an ‘ipconfig /release’ and ‘ipconfig /renew’ to ensure the adapter comes up on the same LAN segment as the physical hosts; I was not able to get it working with NAT. Also be sure to let the sockets through any host-stack firewalls; McAfee goofed me up for a bit on this. The console dance is sketched below.
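Here is roughly what that looks like inside each Windows XP guest once the VM’s NIC is set to Bridged. The firewall line assumes the V-Ray render client listens on TCP 20204, which is the commonly cited default; verify against your own spawner’s settings:

    rem refresh the lease so the guest lands on the physical LAN segment
    ipconfig /release
    ipconfig /renew
    rem sanity check: the address should match the office subnet, not a NAT one
    ipconfig /all
    rem punch the render port through the XP firewall (port is an assumption)
    netsh firewall add portopening TCP 20204 VRayDR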

4) Room for improvement: a native Mac client for both Rhinoceros and V-Ray would really help. Beyond that, the way the developers have you add distributed render nodes is archaic: a) On the nodes themselves, the client spawns a text window and doesn’t provide any diagnostics, just a scrolling log when it gets a job.

b) V-Ray requires you to hard-code the IP address of each render node into the master machine. Don’t you think this would work much better integrated with Bonjour or something else that enables auto-discovery of potential render nodes? (See the sketch after this list.)

c) Even smarter would be to have the render nodes run as a reduced-priority process in the taskbar. Then every machine in a studio could help with rendering, reclaiming idle processor time whenever a user isn’t dominating the box.

d) I like the real-time display of the ray tracing, but put a report in there showing what system did what percentage of the work. That way I would know which machines to upgrade, where the bottlenecks are, etc. A little diagnostics would go a long way here.

e) Also, when showing the list of servers, check server availability and let me know BEFORE I start a render job. Novel, yes?
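For what it’s worth, b) and c) can both be approximated with stock tools today. Bonjour ships a dns-sd command that can advertise and browse services, and Windows can launch a process at low priority. In this sketch the _vray-dr._tcp service type and the spawner path are made up for illustration:

    rem (c) stopgap: run the render client at low priority so it only
    rem soaks up idle cycles (the executable path is a placeholder)
    start "render node" /low "C:\Program Files\VRay\vrayspawner.exe"

    rem (b) what auto-discovery could look like via Bonjour's dns-sd:
    rem advertise this node, then browse for all nodes from the master
    dns-sd -R "RenderNode-01" _vray-dr._tcp . 20204
    dns-sd -B _vray-dr._tcp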

In the end, it was fun, I will continue to use these tools, and there is some room for improvement that would be really useful for someone like me and, I imagine, the IT staff at any design studio. Here are some shots of the finished products…

[images: the finished network icon renders]

