This all makes sense, along with a healthy handful of assumptions ;-)
We have a separate FreeNAS-based NAS, and for a lab/production setup, I'd imagine that's the way to think this through.
My 2c would be to leave setting up shares etc. outside of CR: IT policies, already-existing shares, assumptions about network/NAS topologies, and so on. It also means a CR service doesn't need elevated privileges to set this up at the OS level. Not to mention all the testing around taking down shares if things fall over, etc.
Detecting missing files and sending them to the clients is the simplest for the end user, but my gut tells me that (except for longer animations) the cost of sending the files multiple times across the network is going to outweigh what CR gains.
One possibility might be something like an embedded BitTorrent system: e.g. http://www.bittornado.com looks to be MIT-licensed and in Python. But in some experiments I did years ago to speed up distributing multi-gig source files to a site of developers, the cost of computing and distributing the required hashes ate most of the block-distribution speed-ups.
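To make that hashing cost concrete, here's a minimal sketch of BitTorrent-style piece hashing (the function and piece size are my own illustration, not BitTornado's actual API). The point is that building the piece table alone means reading and hashing every byte of the file before any block distribution can start:

```python
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KiB pieces, a common BitTorrent choice

def piece_hashes(path, piece_size=PIECE_SIZE):
    """Split a file into fixed-size pieces and SHA-1 each one.

    For multi-gig files this pass reads the whole file, which is
    where the up-front hashing cost comes from.
    """
    hashes = []
    with open(path, "rb") as f:
        while True:
            piece = f.read(piece_size)
            if not piece:
                break
            hashes.append(hashlib.sha1(piece).hexdigest())
    return hashes
```

On a multi-gig asset set, that one pass over the data (plus shipping the hash table to every node) can easily eat the gains from parallel block transfer, which matches what I saw back then.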
Maybe there are two use cases to consider:
• Absolute beginner: CR detects and ships all files to the clients. The effective speed-up is limited a lot by transfer costs.
• Advanced/production: CR takes care of (relative) path fixes based on OS, and requires everything to be on a network share.
For the advanced case, could CR just send the nodes the location of the (master) blend file on the network? Then it wouldn't need to sync the blend file at all. I'm assuming blend files (with all assets external) are relatively small... then the nodes could load the same file the master has. Or, if write-locks/file changes are a concern, each node could make a copy of the master blend file, tweak it as necessary (e.g. OS-specific path tweaks), and execute it in the same place as the master.
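The per-node "path tweak" step could be as simple as rewriting the share prefix. A minimal sketch, assuming two hypothetical mounts of the same NAS share (the mount points and the function name are made up for illustration):

```python
# Each node rewrites the master's asset-path prefix to its own
# OS-local mount of the same share. All names here are hypothetical.
POSIX_MOUNT = "/mnt/renderfarm"      # e.g. where Linux/macOS nodes mount the share
WINDOWS_MOUNT = r"\\nas\renderfarm"  # the same share as a Windows UNC path

def localize_path(master_path, local_mount=POSIX_MOUNT):
    """Rewrite a path from the master's share prefix to this node's."""
    for prefix in (POSIX_MOUNT, WINDOWS_MOUNT):
        if master_path.startswith(prefix):
            tail = master_path[len(prefix):].replace("\\", "/")
            return local_mount.rstrip("/") + tail
    return master_path  # not on the shared mount; leave it untouched
```

A real version would need per-node config for the mount points, but the idea is just a prefix swap, not a file transfer.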
Again, I'm thinking about leaving file sync/transfer to something that is hopefully well optimized, and as flexible as the end user needs. E.g. with this setup, a team could in theory set up a pre-batch rsync job to sync all assets to SSDs local to each slave, and then the master just coordinates the rendering. Maybe for animation rendering CR could switch to frame-level batching across nodes: optimize for latency (tile-based) for single images, and for throughput (frame-based) for animations.
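The frame-level batching half of that could be sketched as a simple splitter that hands each node a contiguous chunk of the frame range (all names here are hypothetical, just to show the throughput-oriented split):

```python
def batch_frames(first, last, nodes):
    """Split the inclusive frame range [first, last] into one
    contiguous chunk per node, as evenly as possible.

    Contiguous chunks favour throughput for animations; a single
    still image would instead be split into tiles for latency.
    """
    total = last - first + 1
    base, extra = divmod(total, len(nodes))
    batches, start = {}, first
    for i, node in enumerate(nodes):
        count = base + (1 if i < extra else 0)  # spread the remainder
        batches[node] = (start, start + count - 1)
        start += count
    return batches
```

A smarter version would weight chunks by node speed or steal work dynamically, but even this static split shows the latency-vs-throughput distinction.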
Again, I'm daydreaming with no real experience in the field, just some ideas based on imagination plus experience in other industries (at various points in the past I've been a customer-support dev, app dev, test dev, and devops for Motorola & Nokia; sometimes large teams needing lotsa files fast, and I got to do some experiments outside the standard systems IT had deployed, thankfully with their support :-) )
"Unfortunately" one of my potential helpers has just built a new PC- it's a lot faster than their old machine... so their interest in this is diminished.
But the new-shiny will wear off, and "fast" is a relative term that we soon take for granted.
I'll try and make some time to play with these ideas in the next few weeks!