As a relative novice, I am here because I am grappling with working out a rendering pipeline. There is discussion related to this topic here: https://www.crowd-render.com/forum-1/future-features/apply-denoising-only-on-one-single-device?origin=auto_suggest

I have discovered that you should not use the denoise option in the render settings, but instead add the Open Image Denoise node in the compositor. There are two reasons: the compositor method is much faster, probably because it fully utilises your CPU, and the OptiX denoiser is much less friendly to animations. But neither denoises temporally - temporal denoisers work over a number of frames (usually 7), so you don't get artefacts when rendering animation. Also, if using Crowd Render, each machine's denoiser works on its tile independently and has to guess about the pixels beyond its edges, so denoising with Crowd Render leads to visible seams between the tiles from each computer in the final image.

So I am looking at rendering to OpenEXR multilayer using CR without denoising, then passing the EXR through the compositor for denoising and splitting into render layers for my VFX software, where I can use either the Neat temporal denoiser or the HitFilm alternative, which works well and which some say works as well as Neat.

So one of my questions is: what sample counts do people typically use for animation? I realise the answer will usually be "it depends", but there must be some kind of range or situationally typical values - a balance between reasonable render times and quality. I am finding that the only way to get something usable is to keep raising the samples. The stuff I am trying to do is often very dark and heavy with volumetrics, so I am trying to find ways to "cheat".
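For concreteness, the compositor step I mean can be scripted roughly like this. This is only a sketch against Blender's bpy API (it has to run inside Blender, not standalone Python); the EXR path is a placeholder, and the "Denoising Normal"/"Denoising Albedo" passes only exist if "Denoising Data" was enabled when the frame was rendered:

```python
# Sketch: load a rendered multilayer EXR and pass it through the
# Open Image Denoise node in the compositor. Run inside Blender.
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Image node reading the (hypothetical) undenoised EXR from the CR render.
img_node = tree.nodes.new("CompositorNodeImage")
img_node.image = bpy.data.images.load("//render/frame_0001.exr")  # placeholder path

denoise = tree.nodes.new("CompositorNodeDenoise")  # Open Image Denoise
comp = tree.nodes.new("CompositorNodeComposite")

# Beauty pass into the denoiser, denoised result out to the Composite node.
tree.links.new(img_node.outputs["Image"], denoise.inputs["Image"])
tree.links.new(denoise.outputs["Image"], comp.inputs["Image"])

# If "Denoising Data" passes were stored at render time, wiring them in
# usually improves the result:
# tree.links.new(img_node.outputs["Denoising Normal"], denoise.inputs["Normal"])
# tree.links.new(img_node.outputs["Denoising Albedo"], denoise.inputs["Albedo"])
```

From there a File Output node (or the regular render output) can write the denoised layers back out for the VFX software.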
For example, splitting a scene into layers and rendering them separately, so that the background volumetrics can be dealt with on their own - the brighter foreground elements seem to work well with the denoisers, allowing lower sample rates and thus faster rendering.

I've also discovered that you can combine render engines: in one project, I can set up part of a scene in Cycles and part in Eevee, then composite them later in the VFX software. How would this work in CR? If I set up one scene in one render engine and another scene in the other, and then hit render, will they all be distributed so I can go to bed?

You can bet I've trawled the interwebs... but it's so frustratingly vague. The best so far is the Blender Guru video on denoising, but it is still much more focussed on stills. How are others approaching this?
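To show what I mean by mixing engines in one file: each Blender scene carries its own render engine setting, so one .blend can hold a Cycles scene and an Eevee scene and render them in turn. A rough bpy sketch (again, runs only inside Blender; the scene names here are hypothetical, and I don't know whether CR would distribute both):

```python
# Sketch: two scenes in one .blend, each with its own render engine,
# rendered back to back. Run inside Blender.
import bpy

cycles_scene = bpy.data.scenes["SceneCycles"]  # hypothetical scene names
eevee_scene = bpy.data.scenes["SceneEevee"]

cycles_scene.render.engine = 'CYCLES'
eevee_scene.render.engine = 'BLENDER_EEVEE'

# Render each scene's animation in turn; the two image sequences can
# then be combined in the compositor or external VFX software.
for s in (cycles_scene, eevee_scene):
    bpy.ops.render.render(animation=True, scene=s.name)
```

Whether Crowd Render picks up both scenes and farms them out in one go is exactly what I'm asking.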