The current alpha build has some limitations that are important for artists to know about. Here's a list of them and what they mean for you. Please read it carefully: we want you to get real value out of testing this software, and if you make yourself aware of the limitations you can avoid tripping over them!
1. Frame splitting and performance: Each frame is split into two parts. For scenes that completely fill the camera's field of view, this results in a near halving of render time. For very simple scenes you might not see such a great improvement, since distributed rendering has fixed overheads that single-machine renders don't, such as transferring and loading image files.
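The split itself is proportional to each machine's core count. A minimal sketch of the idea (names and logic are illustrative only, not Crowdrender's actual implementation):

```python
# Hypothetical sketch: split a frame's width between two machines
# in proportion to their core counts.

def split_frame(width, cores_a, cores_b):
    """Return the pixel widths of the left and right tiles."""
    total = cores_a + cores_b
    left = round(width * cores_a / total)
    return left, width - left

# Two machines with equal core counts split a 1920 px frame evenly:
print(split_frame(1920, 8, 8))   # (960, 960)
# An unequal pair gets a proportional split:
print(split_frame(1920, 12, 4))  # (1440, 480)
```

Note that this divides the *pixels* evenly, not the *work* — which is exactly the limitation described next.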
If you are rendering a scene like the one below, you will see much poorer performance. The reason is that in our case the two machines have the same number of cores, and we use core count to determine frame splitting, so the frame is split equally. This means the machine that gets the left-hand side will see a much higher load than the machine that gets the right-hand side. We're currently working on enhancements to make sure both machines get an equal share of work no matter how objects are oriented in the scene; see the frame splitting post in the features forum and vote it up if you think this is important!
2. Frame splitting and compositing: Crowdrender will run your compositing setup and include this pass in the final image you get back in Blender. This means that for heavy compositing setups there will be an increase in performance for compositing as well as rendering. However, it comes at a price: in the alpha, compositing is done on tiles of the total image, one tile for each machine (two in the case of the alpha). Because of this, the tile boundaries introduce differences in how compositing is calculated versus the normal method in Blender.
The image below shows two images (blend file courtesy of Antonio Arroyo!) rendered using Cycles; the left image used Crowdrender, the right did not.
Though the pixel data is visually identical (I couldn't tell the difference, at least), there are differences: Beyond Compare (a great file comparison tool) marked them as minor, but they are still there. If you are concerned about your work being affected, do a test render to check for any visual differences that are unacceptable. You may want to move the camera so that important objects in the scene sit along the seam between the two images, to make sure nothing is wrong.
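One quick way to quantify such differences is to diff the two renders pixel by pixel. A minimal sketch, in plain Python on raw RGB tuples (in practice you'd load the real images with an image library, or just use a tool like Beyond Compare):

```python
def max_pixel_diff(img_a, img_b):
    """Return the largest per-channel difference between two images,
    given as equal-length lists of (r, g, b) tuples."""
    assert len(img_a) == len(img_b), "images must have the same size"
    return max(
        abs(ca - cb)
        for pa, pb in zip(img_a, img_b)
        for ca, cb in zip(pa, pb)
    )

# Two nearly identical "images" of three pixels each:
a = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
b = [(255, 0, 0), (0, 254, 0), (0, 0, 255)]
print(max_pixel_diff(a, b))  # 1 — a difference a viewer would never notice
```

A result of 0 means the renders are byte-identical; a small non-zero value is the kind of minor difference described above.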
We have a post in the features area of the forum about fixing this; please vote it up if this is critical to your needs.
3. Changing Render Settings
Currently you can change the resolution and percentage settings of a render, and these settings take effect immediately. You can also change the output settings, such as where the rendered frames are saved and what format they are in.
All other render settings require a "manual resync". You might therefore want to structure your workflow so that you are not changing these settings often, especially if your file is large. If you find this unacceptable, you may wish to vote up this post.
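As a rule of thumb, the live-updating settings above can be summed up in a small check like this (a hypothetical helper for planning your workflow; the setting names are illustrative and this is not part of the Crowdrender API):

```python
# Settings that sync immediately in the current alpha; everything else
# needs a manual resync. Names here are illustrative, not Crowdrender's.
LIVE_SETTINGS = {
    "resolution_x", "resolution_y", "resolution_percentage",
    "output_path", "output_format",
}

def needs_resync(changed_settings):
    """Return the subset of changed settings that require a manual resync."""
    return set(changed_settings) - LIVE_SETTINGS

print(needs_resync(["resolution_x", "output_path"]))     # set() — all live
print(needs_resync(["cycles_samples", "resolution_x"]))  # {'cycles_samples'}
```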
4. Second Machine going offline
If the second machine goes offline for any reason, you'll likely experience one of two situations:
1. If you attempt a streamed edit, you'll see the second machine's status get stuck at "Synchronising" for a few seconds; then the second machine is declared "dead" and dropped. You can attempt to reconnect with that machine once you've made sure it's still on the network, Blender is still running, and Crowdrender is enabled.
2. If you attempt a render, bad news I'm afraid: this is the limitation for which we have an upgrade planned (see this post in the features area of the forum). Crowdrender cannot yet detect that the second machine is late in sending its results, so you may find the render gets stuck. To fix this, simply cancel the render. You will, however, need to fix the problem with the second machine, and possibly restart Blender and Crowdrender on both machines.
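The planned upgrade amounts to a timeout check on each machine's results. A minimal sketch of the idea (function names, fields, and the timeout value are assumptions for illustration, not Crowdrender's code):

```python
import time

# Deadline after which a machine that hasn't reported results is
# considered late (an illustrative value, not Crowdrender's).
RESULT_TIMEOUT = 120.0  # seconds

def late_machines(last_result_times, now=None):
    """Return names of machines whose last result is older than the timeout.

    last_result_times maps machine name -> monotonic timestamp of its
    last received result.
    """
    now = time.monotonic() if now is None else now
    return [
        name for name, t in last_result_times.items()
        if now - t > RESULT_TIMEOUT
    ]

# The local machine reported 5 s ago, the remote one 300 s ago:
times = {"local": 995.0, "remote": 700.0}
print(late_machines(times, now=1000.0))  # ['remote'] — cancel and reconnect
```

A watchdog like this, run during rendering, would let the render be cancelled automatically instead of hanging.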
If you're having trouble, contact us or leave a comment below :)
5. Border Render
Currently, border render is not supported. We plan to introduce support for it in April this year, in our alpha 0.1.2 release. More on that will be announced soon :)
Sorry I forgot to reply here. Anyway, forget about “Camera Cropper” and say hello to “Camera Regions”!
Will do, man. Do you have a tutorial about this that we could link to?
I made further updates to my Camera Cropper add-on and used it at work with Crowdrender, and it works pretty well already. If people ask you about the region render feature, feel free to direct them to me for help with my workaround while you work on a more built-in solution.
I just updated my cropper add-on to 2.8: https://gitlab.com/ChameleonScales/camera-cropper/tree/master More updates and features to come soon.
Hi James, border render would be a huge help for me. In my work, using Crowdrender is often pointless, since the objects that have to be rendered cover only a small part of a very large image (objects inserted into real footage using the shadow catcher), so border render lets me avoid all the empty space, which would otherwise still take time to render. Even with Crowdrender on 5 equally powerful machines, it takes a lot longer than rendering the small region on a single machine. I made this Camera Cropper add-on a while ago, which I need to update to 2.8, but would this approach be of any interest to you? Maybe I could also try to join forces directly in your add-on. I'm not nearly as experienced as you, but if you think I can help, feel free to ask (I'd need pointers on where to look in your code). You can contact me at firstname.lastname@example.org