Want to start off by saying great add-on! Really appreciate what y'all are doing.
I have a couple of linked .blends in my central .blend file to help things run a little quicker. All the links (as well as the main file) are saved in the same place on our server, so each computer I've synced has access to them, and the file paths don't change between computers since everything is on the server. However, when I render, the linked items don't show up. Any help on how to remedy this would be greatly appreciated! Thanks!
I reckon I figured it out thanks to this tutorial that CrowdRender posted on YouTube: https://www.youtube.com/watch?v=ttZVSYKFcgE
Not the most ideal solution, as my file ends up being an egregious size and syncing takes forever or doesn't happen at all. I'm also having issues connecting with other nodes now: after going through the connection process they remain gray, with no text beside the node name as there usually is. Occasionally one or two will connect, but they stall out at "ready" or "syncing", though this might just be a network speed/latency thing.
Hi, if you are getting instability, you might want to check whether you have more than the usual number of Blender processes open. With Crowdrender there are normally three instances of the Blender process running.
One is the foreground Blender process which you see; we then use two more: one acts as a manager for your foreground process, connecting out to other machines, and the other acts as a render server, waiting for incoming connections.
Any more than three on your client/master/local computer and you're likely to get instability. The system also won't work well if you intentionally open more than one Blender foreground process; you should only have one instance of Blender open at a time.
Maybe take a look at your processes if you experience this issue again and let us know if this is the case in your situation; if not, we'd like to investigate.
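If it helps, here's one quick way to count them from a script. This is just a sketch, assuming a Linux/macOS system with `pgrep` available, and assuming the process is simply named `blender` (the name can differ depending on how Blender was installed):

```python
import subprocess

def blender_process_count(name="blender"):
    """Count running processes with the given name, using pgrep (Linux/macOS)."""
    try:
        result = subprocess.run(["pgrep", "-x", "-c", name],
                                capture_output=True, text=True)
    except FileNotFoundError:
        return 0  # pgrep not available on this system
    out = result.stdout.strip()
    return int(out) if out.isdigit() else 0

count = blender_process_count()
# Crowdrender normally runs three Blender processes: the foreground UI,
# the manager, and the render server.
if count > 3:
    print(f"Warning: {count} Blender processes found; expected at most 3")
else:
    print(f"{count} Blender process(es) found")
```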
Cheers!
Will this always be like this? I mean, packing solves the problem, but I'm working on scenes with huge amounts of textures and objects, and packing all of them results in blend files up to 10 GB. Will there be a way in the future to keep all assets linked?
Thanks!
Hi Juan,
The short answer is: of course not! We are working on a way to automate the distribution of assets to all nodes. I completely agree that having to pack is annoying and inflexible.
Consider our position for a moment, though: we are a group of three devs working on this project in our spare time. We are constantly trying to raise funds to get us working on it full time. We are as frustrated by the state of our software as you might be with this particular issue; we want to work on it, but for the time being we have to work day jobs to pay rent and such.
Ok, enough of the sob story! To give you a better idea of the real issues at stake here, consider the following.
Keeping assets linked makes for a very flexible way to work on your project. However it comes at a cost.
Using linked assets requires each node in your network to have access to the linked/appended files at render time. To get around having to pack files, you can choose to link and append from a network storage location. This way each node will automatically be able to access the file at render time, and you'll not have to pack anything.
However, using this method you have to be aware of how much network traffic will be generated each time you render. Since each node does not hold a local copy of the linked/appended files, the data will still need to be transferred over the network at some point. I am not sure whether Blender caches appended/linked files for subsequent renders, either.
So, if you had 10 GB of external data and ten nodes, then you have 100 GB of data to transfer across the network from your shared storage to update each node for the render. Even if Blender caches this data, the first time is gonna hurt.
For example
Assuming you have a 1 Gb/s LAN, you'll have about 120 MB/s of bandwidth at best. So the total time to transfer 100 GB from your server is around 830 seconds, or about 14 minutes. This is the absolute best case, as it assumes:
1. All the link's bandwidth is available for file transfer.
2. You're not using TCP for data transfer (and a LOT of transfer protocols use TCP, by the way!).
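To make the arithmetic above concrete, here's a tiny sketch of the best-case calculation. The constants (100 GB of data, ~120 MB/s usable on a 1 Gb/s LAN) are just the numbers from this example:

```python
def best_case_transfer_seconds(data_gb, bandwidth_mb_s=120):
    """Best-case transfer time: total data divided by raw link bandwidth."""
    data_mb = data_gb * 1000  # using 1 GB = 1000 MB, matching the rough math above
    return data_mb / bandwidth_mb_s

t = best_case_transfer_seconds(100)  # 100 GB over a 1 Gb/s LAN (~120 MB/s)
print(f"{t:.0f} seconds, about {t / 60:.0f} minutes")  # → 833 seconds, about 14 minutes
```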
To give you an idea of how much less bandwidth you can have when using TCP, consider this: if the network has a ping (round-trip time) of 4 ms and you have a 1 Gb/s Ethernet link, then you'll have a bandwidth-delay product of 0.004 * 1'000'000'000 = 4'000'000, or 4 million bits. This means that at most there can be 500 kB of data 'in flight' at any one moment. By 'in flight' I mean the amount of data one computer can send to another before it has to stop and wait for the other computer to send an acknowledgment that the data arrived.
However, it's not always the case that the maximum amount of data will be allowed to be 'in flight'. TCP, which is used A LOT, limits the window to the minimum of two values: the congestion window and the receiver's advertised window. If there are other things going over the network, expect less effective bandwidth.
The reduction can affect transfer rates quite dramatically in extreme cases. For example, if the window is limited to 64 KB (the classic TCP maximum without window scaling), then only 64 KB of data can be in flight at any one moment.
Now you have very little effective bandwidth.
Eff Bandwidth = 64 * 1024 * 8 / 0.004 = 131 Mb/s !!
That is a pretty big drop in bandwidth from 1Gb/s!
To put this in perspective, your transfer time for the 100 GB is now:
TTime = 100'000'000'000 / (131'000'000 / 8) ≈ 6,100 seconds, or about 100 minutes (roughly 1.7 hours)!
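The same back-of-envelope TCP math as a sketch. The constants (64 KB window, 4 ms round-trip time, 100 GB of data) are just the ones from the example above:

```python
def tcp_limited_bandwidth_bps(window_bytes, rtt_s):
    """Window-limited throughput: at most one full window per round trip."""
    return window_bytes * 8 / rtt_s

def transfer_time_s(data_bytes, bandwidth_bps):
    """Time to move data_bytes at the given bandwidth (bits per second)."""
    return data_bytes * 8 / bandwidth_bps

bw = tcp_limited_bandwidth_bps(64 * 1024, 0.004)
print(f"Effective bandwidth: {bw / 1e6:.0f} Mb/s")        # ≈ 131 Mb/s
t = transfer_time_s(100_000_000_000, bw)
print(f"Transfer time: {t:.0f} s, about {t / 60:.0f} minutes")  # ≈ 6100 s, about 102 minutes
```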
So, in my opinion, you'd want to do some tests to see if your network in its current state can actually handle your particular situation; you may have more or fewer nodes, and the actual bandwidth and amount of data you need to transfer will likely vary.
But using the same math and logic, you should be able to predict how long transfers will take, and perhaps plan a little for how you want to render your final animation. Remember that although linked and appended assets are nice in production, they may be impractical for the final render. You may want to pack them and store the project in one file, so that a single file is uploaded to each node and then loaded locally at render time. That way all the data is loaded from a local drive, which is way faster than your network.
For this reason, we will (eventually) begin work on a new system for synchronising the data between computers. Though to be honest, we really need your help, and the Blender community's, to do so. We are dedicated to coding this add-on in our spare time, but it will take a long time to get there.
So for you and everyone that is reading this, I'd like to invite you to consider donating to the project so we can get back to coding full time and stop working day jobs.
Thanks!
p.s. link to our crowdfunding campaign -> https://www.crowd-render.com/crowdfunding
Well, thanks for that reply, that gives me a better overview of the challenge you guys are facing. And please don't get me wrong, I understand perfectly how hard it is to develop while still having to keep a full-time job at the same time; I didn't mean to urge you guys.
Hi Juan,
No problem :) you didn't 'urge' us, it's ok! We're glad to help. My goal is to be able to help more people, though, and for that we need the community to support our project if the software is to keep pace with the change of technology, which is pretty fast. I mean, a year ago we didn't have hardware-accelerated ray tracing; today we have it! It may not work with Blender yet, but it probably will eventually, and of course we'd love to make use of that and help people get fantastic render times. But this all takes a lot of development.