
What CPUs crush Blender rendering?

This chart shows the Blender render performance of about 100 different CPU models from Intel and AMD. It was compiled using data from Blender's own Open Data project, which has recently been updated.

Graph showing CPU render times for Blender's Cycles engine

First, why this data?

It would be remiss of us not to champion the virtues of using data to make decisions. We're of course talking about the decision of which CPU you should be looking to buy in 2020 for your rendering pleasure. Whether you are building a render farm or a powerful workstation, you should be using the most relevant data to guide you, or you'll sacrifice performance.

The data we're using here is not only freely available, but if you're doing any rendering in Blender's Cycles engine, it's as relevant as it gets, unless you build your own testing lab and spend a fortune on gear. That would be nice, but no.

The chart we created (see above) is of the Barbershop interior benchmark (see screenshot, also above), on Windows operating systems using Blender 2.81. It contains render times for 102 CPU models from Intel and AMD.

We chose these settings because most users are on Windows; Blender 2.81 is the current stable release (for now; 2.82 and 2.83 are available for download right now); and the Barbershop interior takes long enough to push pretty much any system to its maximum temperature over the total render time. It's as consistent as we could possibly hope to get outside of a dedicated test lab.

We used data only for Blender 2.81 on Windows

A final word on data quality, though: since the data is submitted by users from all over the world, we have no idea what their actual system configurations are, and some CPUs have only a single benchmark sample submitted. These runs could have used water cooling vs stock coolers, hell, maybe even LN2. We don't know.

We can only say that we have analysed the data and found that the spread between runs on the same CPU is not unreasonably large, so we're fairly confident the data represents reality for most people.

Enough! What CPU should we choose?

First, you may wish to wait. If you missed our last article about what hot new tech is coming this year, you can check it out here. The TL;DR is that AMD is continuing its surge into the market, making them a serious contender.

Considering nothing but speed, AMD has it in spades. You can see in the chart above that it has the first and second fastest render times with its Threadripper 3970X and 3960X. You would of course be shopping at the top of the market, and if Intel is your chip of choice, the i9-7980XE and i9-10980XE ain't cheap either.

Thanks to a great suggestion by Robert (see comments below), I changed the performance score formula to something that makes more sense. Performance is defined as:

P = 100,000,000 / (s × $)

P is a measure of bang for buck; a higher number indicates a better score, which happens either as the number of seconds to render (s) decreases or as the cost ($) decreases.

AMD Threadripper 3970X - AUD 1,949.99 | P 234.98 | Sync time 29.01s

AMD Threadripper 3960X - AUD 1,399.00 | P 256.70 | Sync time 28.49s

Intel i9-7980XE - AUD 1,799.00 | P 174.65 | Sync time 27.10s

Intel i9-10980XE - AUD 1,799.00 | P 166.04 | Sync time 25.45s
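If you want to score other CPUs yourself, the formula is a one-liner. A minimal sketch in Python, using the AUD prices above and the total render times in seconds quoted later in this article (the small rounding differences against the listed scores come from rounding in the quoted figures):

```python
# Bang-for-buck score: P = 100,000,000 / (render seconds * price)
# Render times (s) and AUD prices are the figures quoted in this article.
cpus = {
    "3970X":   (218.35, 1949.99),
    "3960X":   (278.45, 1399.00),
    "7980XE":  (318.27, 1799.00),
    "10980XE": (334.77, 1799.00),
}

def perf_score(seconds: float, price: float) -> float:
    """Higher is better: rises as render time or price falls."""
    return 100_000_000 / (seconds * price)

for name, (s, aud) in cpus.items():
    print(f"{name}: P = {perf_score(s, aud):.2f}")
```

The constant 100,000,000 is just a scaling factor to bring the scores into a readable range; it doesn't change the ranking.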

The 3970X has the highest score, so it represents the best value for this particular benchmark. If you do a lot of work similar to the barbershop interior (or that has similar render times) then these numbers might interest you.

Using the new formula for the performance score, AMD is now clearly winning, which it wasn't before, so thanks to Robert for the suggestion :).

But what about power draw??

Ok, so this is where I've learned a lot lately, and this article has been updated thanks to the following people:

Matej Polák, Alan Leigh-Lancaster and Zocker1600

Thanks guys!

So, originally I wrote about using TDP, or Thermal Design Power, as a measure of power consumption, and used that figure to compare the relative efficiency of CPUs. I've kept this analysis because the technique is valid; it's the use of TDP that is dubious! Read on past this part of the article to find out why TDP is not a good figure for comparing CPU power consumption.

AMD Threadripper 3970X - AUD 1,949.99 | P 234.98 | TDP 280W

AMD Threadripper 3960X - AUD 1,399.00 | P 256.70 | TDP 280W

Intel i9-7980XE - AUD 1,799.00 | P 174.65 | TDP 165W

Intel i9-10980XE - AUD 1,799.00 | P 166.04 | TDP 165W

Ok, so Intel chips seem to draw a lot less power. But it's difficult to see immediately which chip comes out ahead overall, so let's fix that too. We'll calculate the total energy it took each CPU to render the Barbershop benchmark: render time in seconds multiplied by power in watts.

3970X - 218.35 s * 280 W = 61,138 Joules

3960X - 278.45 s * 280 W = 77,966 Joules

7980XE - 318.27 s * 165 W = 52,514 Joules

10980XE - 334.77 s * 165 W = 55,237 Joules
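The arithmetic is simple enough to sanity-check in a few lines. A sketch, using the render times and TDP figures quoted above:

```python
# Energy (J) = render time (s) * power (W); TDP stands in for power here.
runs = {
    "3970X":   (218.35, 280),
    "3960X":   (278.45, 280),
    "7980XE":  (318.27, 165),
    "10980XE": (334.77, 165),
}

for name, (seconds, tdp_watts) in runs.items():
    joules = seconds * tdp_watts
    print(f"{name}: {joules:,.0f} J")
```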

Interesting! It seems Intel has the advantage over AMD in terms of efficiency. Though I have to admit this calculation may be flawed: here is an article on what Intel defines TDP to be, and the answer to how much power was actually consumed in these benchmarks 'depends' on the CPU's circumstances. This part is particularly telling:

when we quote a base frequency, we think about a worst case environment and a real world high-complexity workload that a user would put on the platform – when the part is run at a certain temperature, we promise that every part you will get will achieve that base frequency within the TDP power.

TDP: what it is and why it's not a good yardstick

So, as I mentioned above, this article originally calculated total energy consumed using TDP, and that was the mistake. TDP is the only power-related figure the manufacturer gives, but it is determined under contrived conditions, and in real-world tests it is not reliable enough for comparing CPU power consumption.

To give us a concrete example of this, check out this image of measured power draw from AnandTech.

We can easily see here that actual power consumption differs from TDP, in some cases clearly. Thankfully, the same processors as in the original TDP-based calculation above are covered, so we'll now repeat that calculation using actual power draw and see where we come out.

3970X - 218.35 s * 286.72 W = 62,605 Joules (original 61,138; difference 1,467, or 2.3%)

3960X - 278.45 s * 279.82 W = 77,915 Joules (original 77,966; difference 51, or 0.06%)

7980XE - 318.27 s * 182.69 W = 58,144 Joules (original 52,514; difference 5,630, or 9.7%)

10980XE - 334.77 s * 190.84 W = 63,887 Joules (original 55,237; difference 8,650, or 13.5%)
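The same sketch with TDP swapped out for the measured power draw shows how far off the TDP-based estimates were (percentages here are relative to the measured-power figure):

```python
# (render seconds, TDP watts, measured watts) as quoted in this article
cpus = {
    "3970X":   (218.35, 280, 286.72),
    "3960X":   (278.45, 280, 279.82),
    "7980XE":  (318.27, 165, 182.69),
    "10980XE": (334.77, 165, 190.84),
}

for name, (s, tdp_w, real_w) in cpus.items():
    e_tdp, e_real = s * tdp_w, s * real_w
    pct_off = abs(e_real - e_tdp) / e_real * 100
    print(f"{name}: {e_real:,.0f} J measured vs {e_tdp:,.0f} J from TDP "
          f"({pct_off:.1f}% off)")
```

Notice the Intel parts are the ones the TDP estimate flatters, which is exactly what flips the efficiency conclusion.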

If TDP is not a true measure of power consumption, what is it and what is it good for? Thanks to Steve from Gamers Nexus, we can present the formula that AMD uses for TDP:

TDP = (Tcase - Tambient) / HSFθca

This formula may not mean much to you at first sight, but I recognise it from my engineering background: it's a simple formula for calculating heat transfer. It takes the difference in temperature between the CPU case and the ambient air and divides it by a coefficient representing how much the heat sink 'resists' heat flowing through it. Combined this way, the formula calculates the thermal power transferred between the case of the CPU and the air.

So in AMD's case, TDP is an engineering number used for design purposes. It's similar to fuel consumption figures for a car: manufacturers specify a number based on either a standard test (somewhat lacking for CPUs, sadly) or their own internal definition, as with TDP. And just as real fuel consumption depends on how you drive the car, actual power draw will vary based on how the CPU is cooled and run: what heat sink it uses, the air temperature, the case temperature, and so on. So TDP isn't really a useful benchmark, since it's not related to the data we're interested in, which is how much power we're paying for to render the same thing on each CPU.

If we're really interested in comparing CPU efficiency while rendering, we need to capture relevant data, and sadly the open benchmark doesn't record how much energy is consumed during its runs. Measuring this isn't easy either, as it requires special equipment and pulling your computer apart to take the measurements. So for now we'll have to do without accurate power draw data, unless someone goes out there and benchmarks all the CPUs for us! We'd totally do that if the community wants to give us, like, a million bucks?!

Pros and Cons

Despite the flaws in TDP, it's still fun to speculate. I decided to keep this part of the article, but I updated the numbers using the power draw data from AnandTech.

As you might have guessed, or known, the main cost in rendering after you buy the gear is power. Let's consider a hypothetical example: a short film rendered in Cycles that lasts five minutes. We'll base the render time and associated energy cost on the Barbershop benchmark data for the 3970X and 10980XE.

So, a five minute animation, that's...

24 fps * 5 min * 60 s/min = 7200 frames

Comparing the energy expended by the 3970X and 10980XE, that's...

7200 * (63,887 - 62,605) = 9,230,400 Joules of energy saved using AMD over Intel.

Nine million Joules might not be easy to visualise so let's express it in $.

The Australian power price I pay right now is 27.56 cents per kilowatt-hour (if you live in South Australia, sorry lads, you pay 37 cents; I guess the giant battery bank Elon Musk gave you guys hasn't helped much?).

A kilowatt-hour is the energy expended by a sustained power draw of 1000 watts for one hour.

kWh = energy in J / (3.6 × 10^6), or equivalently, energy in MJ / 3.6

so we have our power usage cost as

cost = $0.2756 * (9.23 / 3.6) ≈ $0.70, or that amount saved in our comparison.
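Putting the whole hypothetical together (frames, joules saved per frame, and my power price) in one sketch, using the measured-power energy figures from earlier in the article:

```python
FPS, MINUTES = 24, 5
price_per_kwh_aud = 0.2756            # my current power price
j_per_frame_saved = 63_887 - 62_605   # 10980XE vs 3970X, measured-power figures

frames = FPS * MINUTES * 60           # 7200 frames in a five-minute film
joules_saved = frames * j_per_frame_saved
kwh_saved = joules_saved / 3.6e6      # 1 kWh = 3.6e6 J
cost_saved = kwh_saved * price_per_kwh_aud
print(f"{joules_saved:,} J = {kwh_saved:.2f} kWh = ${cost_saved:.2f} saved")
```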

Honestly, this is not a great deal of cash if you aren't rendering constantly. But what if? 🤔

We could also express this as the amount saved per five minutes of animation rendered. Say you render 500 minutes a year; then that's $70 saved in power using the 3970X vs the 10980XE chip.

Of course, we take this with a grain of salt, since the AnandTech power draw data isn't clear on what workload was being run. But if power draw scales similarly for Intel and AMD, then AMD wins here; the 10980XE is about $150 cheaper though, so after one year Intel would still be ahead by roughly $80. But that wouldn't last very long! Let's see: there are actually 525,960 minutes in a year. So if you are a commercial render farm rendering 24/7, the saving would be:

savings per minute = .70 / 5 = $0.14

yearly savings = 525,960 * $0.14 = $73,634.40 🤑
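Scaling the per-job saving up to round-the-clock rendering is a two-liner. A sketch using the figures above:

```python
saving_per_5_min = 0.70               # AUD saved per 5 minutes of footage rendered
minutes_per_year = 365.25 * 24 * 60   # 525,960 minutes in a year

saving_per_minute = saving_per_5_min / 5
yearly = minutes_per_year * saving_per_minute
print(f"${yearly:,.2f} per year at 100% utilisation")
```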

That is a tad more significant, though it assumes 100% utilisation for a year, which isn't very realistic for most people; even a render farm has downtime, whether for maintenance or simply because no jobs are running.

Another warning: small changes in relative power consumption can cause drastic swings in the cost savings calculations, so without solid data from the open data benchmarks themselves, you should treat these numbers as hypothetical only.

Power efficiency aside, Intel does have one advantage: single-core performance is usually superior. You can see this in the faster 'sync' times, when the CPU is synchronising data with the rendering engine, a stage of the rendering job that is pure overhead; nothing is being rendered yet. So for interactive test rendering, the Intel rig will show you pixels about three seconds faster each time you render. This only considers the Barbershop scene, though; if you have a very long sync time, the savings add up.

Anyway, I hope you get the point: there are a ton of variables at play in this serious consideration we're making, and power draw depends on how you use your gear. If you are rendering a lot, power draw is going to matter more. Single-core performance might not matter for pure rendering, but there are cases where it dominates the job, like if you want to game on your workstation, for example :)

Did Intel just win?!

No. Intel got creamed. The mistakes in the original article made it look like Intel had the upper hand in power consumption; with that corrected, the only advantage Intel has is single-core performance, and since that isn't what rendering needs, AMD wins easily in this comparison.

What about AMD's 3990X?

The AMD 3990X has yet to drop. It's got twice the core count of the 3970X and, weirdly enough, the same TDP. So that would make it seriously efficient? Nope. TDP, as I've explained above, is contrived by the manufacturer and actually means little in terms of efficiency. Look for actual power draw data instead, though I have to admit that finding it is not easy; manufacturers don't seem to provide it, so you'll have to hunt for it. Sites that review hardware are a good bet; thanks to this article's contributors, I managed to get my hands on AnandTech data for a limited selection of CPUs.


The open data project is pretty cool, there is no way we could have assembled all the hardware it has data for and tested it ourselves, unless we had serious funding to do it (and we would if that funding came along!).

I found the data helpful for seeing how the different CPUs performed, I'd certainly be using this data myself if I were to be building a workstation or render farm in the near future.

However, the data is missing an important dimension on power draw. That makes it difficult to compare the ongoing costs of running the hardware, which, as we've seen in the hypothetical example above, can be quite dramatic in the extreme cases. There is a lot of uncertainty in how much you would spend in power, and that is a problem. Building a large scale render farm would be risky without this data as you'd not have a solid estimate for its main, ongoing cost.

Thanks to those who corrected me :)

This article, as I've pointed out, has been updated thanks to the people mentioned above, and this turned the tables in AMD's favour for efficiency as well as value, or bang for buck. I have learned to be more thorough, and it won't hurt to keep reminding you (and myself) to stay thorough and always do your own research.

Let us know if this article was helpful, or not, in the comments below! Suggestions and corrections welcome :D

P.S. We make a free addon for Blender; it helps you turn ordinary computers into a render farm. We're aiming to make it the best render farm software there is, and you can help us by supporting our project here -> support our development crowdfund

You can also create a free account, subscribe to our mailing list and get our free addon, all in one go here -> create account, get free software

Please consider sharing this article on the web and social media to help us spread the word! You can use the buttons below to share on Facebook and Twitter!


James Crowther
Feb 06, 2020

Hi Robert, thank you for commenting! It's been so great to learn from you and the others about the nature of high-end computing. Power draw is something I've worked on before, but in those cases the manufacturers' specs were usually reliable to within a few percent, so we trusted them.

I'd love to know, are you planning on building a render farm, or just a high end workstation? Single machine performance tends to get more expensive per unit of increased performance as you aim for lower and lower render times, at some point it makes more sense to choose a lower specced platform and duplicate it.

We're going to be doing an article on that, though the same…


Robert Ockelford
Feb 05, 2020

Thanks for the revisions - this is really useful - I'm eyeing up AMD for the first time in a long time myself. Just one other consideration would be that the power drawn by the CPU is only a proportion of the power the whole system would draw.

So if in a hypothetical situation, two chips being compared drew 100w and 150W respectively and you assume that the rest of the system supporting the operation of those chips was the same in power draw, then you'd have a difference in power of 50W between the systems. But the energy savings of the faster chip wouldn't apply only to the CPU power draw (100w or 150W) you also save the energ…


James Crowther
Feb 05, 2020

Thanks Robert! I've updated the article, I used a formula based on seconds multiplied with cost, you can see the exact formula in the article above. I ended up with a score that was very similar to yours too. The score gets bigger for CPUs that are faster for the same cost, or cheaper for the same render time.

Would like to know your thoughts on the new method!


Robert Ockelford
Jan 31, 2020

Very interested in this information, though it seems that the 'performance' is defined as being proportionate to seconds, then divided by cost. Would this not be 1/seconds, ie. more seconds would reduce the performance per dollar, where a larger number is better. Alternatively, you could have seconds x dollars where less is better.

Hope I have this right, don't normally comment on this kind of thing, but I think it is important in these situations to recheck.

Also, total cost and total power draw isn't just on the CPU (leaving out intel's optimistic TDP numbers) and you need to factor in the whole system cost (big AMD chips have expensive MB's) and the CPU is only a proportion of cost,…
