
Forum Comments

Tips for setting up a Cross Platform Render Farm
In Getting started
Julian Rendell
May 21, 2020
This all makes sense, along with a healthy handful of assumptions ;-) We have a separate FreeNAS-based NAS, and for a lab/production setup I'd imagine that's the way to think this through. My 2c would be to leave the setting up of shares etc. outside of CR: IT policies, already-existing shares, assumptions about network/NAS topologies, and so on. It also means a CR service doesn't need elevated privileges to set this up at an OS level. Not to mention all the testing around taking shares down if things fall over, etc, etc.

Detecting missing files and sending them to the clients is the simplest for the end user, but my gut tells me that (except for longer animations) the cost of sending the files multiple times across the network is going to cost more than CR gains. One possibility might be something like an embedded BitTorrent system: e.g. http://www.bittornado.com looks to be MIT licensed and in Python. But in some experiments I did years ago to speed up distributing multi-gig source files to a site of developers, the cost of computing and distributing the required hashes ate most of the block-distribution speed-ups.

Maybe there are two use cases to consider:
• Absolute beginner: CR detects and ships all files to clients. Effective speed-up is limited a lot by the transfers.
• Advanced/production: CR takes care of (relative) path fixes based on OS, and requires everything to be on a network share.

For the advanced case, could CR just send the nodes the location of the (master) blend file on the network? Then it wouldn't need to sync the blend file at all. I'm assuming blend files (with all assets external) are relatively small... then the nodes could load the same file the master has. Or, if write locks/file changes are a concern, each node could make a copy of the master blend file, tweak it as necessary (e.g. OS-specific path tweaks), and execute it in the same place as the master.
Again, I'm thinking of leaving the file sync/transfer to something that is hopefully well optimized, and also as flexible as the end user needs. E.g. with this setup, in theory a team could set up a pre-batch rsync job to sync all assets to SSDs local to each slave, and then the master coordinates the rendering.

Maybe for animation rendering CR could switch to frame-level batching across nodes: optimize for latency (tile-based) for single images, optimize for throughput (frame-based) for animations.

Again, I'm daydreaming with no real experience in the field, just some ideas based on imagination plus experience in other industries (at various points in the past I've been a customer-support dev, app dev, test dev, and devops for Motorola & Nokia; sometimes large teams needing lotsa files fast, and I got to do some experiments outside the standard systems IT had deployed - thankfully with their support :-) ).

"Unfortunately" one of my potential helpers has just built a new PC - it's a lot faster than their old machine... so their interest in this has diminished. But the new-shiny will wear off, and "fast" is a relative term that we soon take for granted. I'll try to make some time to play with these ideas in the next few weeks!
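To make the frame-level batching idea concrete, here's a toy sketch of what I mean - the function name and the per-node "weight" (e.g. a benchmark score) are entirely made up by me, nothing to do with CR's actual internals:

```python
# Hypothetical sketch: throughput-oriented frame batching for animations.
# assign_frames() and node_weights are illustrative names, not CR's API.

def assign_frames(frame_start, frame_end, node_weights):
    """Split an animation's frame range across nodes, giving faster
    nodes (higher weight) proportionally more frames."""
    frames = list(range(frame_start, frame_end + 1))
    total = float(sum(node_weights.values()))
    assignments = {}
    start = 0
    names = list(node_weights)
    for i, name in enumerate(names):
        if i == len(names) - 1:
            count = len(frames) - start  # last node takes the remainder
        else:
            count = round(len(frames) * node_weights[name] / total)
        assignments[name] = frames[start:start + count]
        start += count
    return assignments
```

For a single still you'd do the opposite and split one frame into tiles, but for an animation each node just renders whole frames and the master only has to gather finished images.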
Tips for setting up a Cross Platform Render Farm
In Getting started
Julian Rendell
May 19, 2020
Great news on all fronts! I'd read somewhere that Blender running headless can't access the GPU. Most likely old/wrong/apocryphal info.

Our email host is having problems today, so I may not get the emails until tomorrow. Looking forward to trying the newer version, and early access :-).

Tile batching will be very cool, especially if it can be a bit "pathological" and potentially scatter tiles from entire frame ranges to nodes for animations, and then gather them all back into frames.

I've watched (parts of) that video a couple of times; it's really good, and I appreciate the "uncut/honest" nature of it. But I'm old-fashioned and prefer a text document, especially when it comes to code (snippets). (I'm also lazy, and prefer to cut and paste text rather than typing it from a paused video ;-) )

In the video it looks like you're using hardcoded absolute UNC paths for all external assets, which isn't going to work cross-platform. Hence my interest in understanding the use of relative paths... if they even work in Blender properly; we did a simple Blender-only test with the cube and a texture, and simply moving it to a new directory... failed (pink textures). So given what you've said re the UUID/local copy, even if relative paths are working in Blender, they'll be broken UNLESS the external files are in the same location relative to the temporary blend file on the remote node.

Could you provide a "network base path" (NBP) option on each node? This would be a start-of-string match that would be removed from the path when the master node sends data, and added back on remote nodes. E.g. given the blend file is at //share/projects/blender/projectX/projectX.blend on the master, but at //readonly/projectX/projectX.blend on the remote node(s), and the NBP on the master = //share/projects/blender and the NBP on the node(s) = //readonly/, then the master would send the relative base as projectX, and the remote nodes would "reconstitute" this as //readonly/projectX.
Then if the remote nodes set their current working directory to //readonly/projectX, relative assets should be accessible...? And if the NBP is left blank (on all nodes) you have the current behaviour. If I'm very, very lucky, Blender is "\" vs "/" tolerant and this would work cross-platform.

Thinking of this with very general knowledge and very few specifics - I won't be offended if you have to tell me I'm missing enough info to drive a fleet of internet packet delivery trucks through!
Tips for setting up a Cross Platform Render Farm
In Getting started
Julian Rendell
May 19, 2020
Awesome replies James!

Headless with access to the read-only path seems like a reasonable interim solution if you're working on a service wrapper. I don't have tonnes of time, but if you're looking for some off-and-on testing/feedback re the Windows service setup, I'd be interested in helping. For Windows, we have a VR-capable workstation and 3x i5 2D/basic-3D/video stations. (I've set up Windows services for automated tests in the distant past... not that keen to go down that path by myself again ;-) ) Just want to check: headless precludes GPU rendering, correct? So that would be Cycles only (no Eevee).

Re paths: requiring all relative paths, with a light "is it accessible" check and warning, seems the way to go. Changing file paths in the blend file could break things for non-CR rendering, and it would be another API point you'd have to track, etc. We did the "set all externals to relative" operation from File -> External Data, got a clean report (no absolute paths, no changes needed), and it didn't work. Lots of pink. Set it to pack assets and it worked - mostly; another post coming... (BUT I just realized I may have mapped the shared drive inconsistently on the other machines - I need to test that again.)

For remote nodes, what do they use as the path for the replicated blend file? I.e. what is the "current working directory" on a remote node when loading relative asset paths? Maybe this could be a setting to allow some basic mapping between remote and master node path locations?

And if you need help with cross-platform testing, we have a Linux multiuser box with dual GPUs (it has run 14 workstations with 14 instances of Minecraft, thanks to VirtualGL), an ESXi dual-Xeon multiuser box (multiple GPUs using VM hardware passthrough; runs Windows & Linux VMs), and I can also add my personal OS X laptop. We have a FreeNAS box with CIFS shares accessible to all the above.

Final note: we are using V0.2.2-BL280. Thanks for creating this, it's pretty amazing!

Julian
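By the light "is it accessible" check, I mean something as dumb as this sketch - check_assets() and the asset list are made-up names, and I'm assuming Blender-style "//" relative prefixes have already been stripped to plain relative paths:

```python
# Sketch of a light pre-flight accessibility check for external assets:
# resolve each relative asset path against the .blend's directory and
# report any a node would fail to load (i.e. pink textures waiting to happen).
import os

def check_assets(blend_dir, relative_asset_paths):
    """Return the full paths of any assets that aren't reachable."""
    missing = []
    for rel in relative_asset_paths:
        full = os.path.normpath(os.path.join(blend_dir, rel))
        if not os.path.exists(full):
            missing.append(full)
    return missing  # empty list means all assets look reachable
```

Running that on each node before a job starts (and just warning, not modifying the blend file) would catch the broken-path case without CR ever rewriting anything.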
