The Intel i9 series CPUs represent a change in strategy from Intel: we are now in an era where CPUs increase performance by adding cores rather than clock speed. As with the transition from dual-core to quad-core CPUs that came before it, there will be performance regressions in some scenarios.

So the question is: If you purchased a mid-range or high-end machine in the last five years, at what point do you replace the machine? Instead of replacing, when do upgrades make sense?

Thinking about my answer, the first question I asked myself was: when was the last sea change like this in the Intel product lineup? My answer is probably the Sandy Bridge architecture.

https://en.wikipedia.org/wiki/Sandy_Bridge

It was first demoed in 2009 and released in 2011. While the improvements of Sandy Bridge over its predecessor, Nehalem, were on par with what one would expect, the clock speed gains meant that (for the first time) even mildly enthusiastic overclockers could see nearly 4 GHz on the desktop.

For this article and video, I didn’t really consider older systems, such as those based on Nehalem (or earlier), but I expect much of the rationale here to apply to those platforms as well.

Our Test Systems:

Sandy Bridge System

ASUS Sabertooth X79
EVGA Nvidia GTX 560 Ti
16GB Corsair DDR3-1333
Corsair H100 AIO
Kingston HyperX 256GB SSD
Corsair TX850 PSU
Fractal Design Arc Midi
Intel i7-3820

i9 System

ASRock X299 Taichi
Nvidia GeForce GTX 1080
32GB G.Skill TridentZ RGB DDR4-3600
Fractal S24 AIO Cooler
OCZ RD400 512GB NVMe
Corsair RM850 PSU
Intel i9-7900X

Testing was completed on the Sandy Bridge system with the i7, and then the i7 was swapped for the Xeon E5-2680.

For these tests, nothing was overclocked, but the Xeon’s long and short power duration limits were adjusted to be as long as possible. This is a feature of ASUS boards that permits CPUs to remain in turbo longer than the Intel spec allows. Strangely, the turbo multiplier went up to 35 on these CPUs; I was expecting a max turbo multiplier of 33, for a maximum Xeon clock speed of 3.3 GHz.
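As a sanity check, the turbo clock is just the multiplier times the 100 MHz base clock on these platforms, so the arithmetic is trivial (a quick sketch, nothing board-specific):

```python
# Turbo clock = multiplier x base clock (BCLK), which is 100 MHz on these platforms.
BCLK_MHZ = 100

def turbo_ghz(multiplier: int) -> float:
    """Convert a CPU multiplier to a clock speed in GHz."""
    return multiplier * BCLK_MHZ / 1000

print(turbo_ghz(33))  # 3.3 GHz -- the max turbo I expected
print(turbo_ghz(35))  # 3.5 GHz -- what the board actually reported
```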

The graphics settings in each game were set to medium or medium-low and kept consistent across all three machines. The settings were hand-picked to be playable at 1440p on the i7 and then replicated on the other machines. The important part of the test is not the individual graphics settings, but how the machines perform relative to one another.

It should also be noted that, in the real world, a lot of the game engines we’re testing do strange things at absurdly high graphics settings and just don’t perform well.

For the game testing, we used Fraps, OCAT and native game benchmarking data (where possible). We used the FRAFS frame time grapher because it was convenient, but in the case of Tomb Raider, for example, the frame time data came from the game itself.
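For anyone who wants to reproduce the number crunching: a Fraps frametimes log is a CSV of cumulative timestamps in milliseconds, so the per-frame times are just the differences between consecutive rows. A minimal sketch in Python (the filename is hypothetical, and the exact header format is an assumption based on a typical Fraps log):

```python
import csv

def load_frame_times(path: str) -> list[float]:
    """Read a Fraps-style frametimes CSV (cumulative ms timestamps)
    and return the per-frame times in milliseconds."""
    timestamps = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the "Frame, Time (ms)" header row (assumed layout)
        for row in reader:
            timestamps.append(float(row[1]))
    # Per-frame time = difference between consecutive cumulative timestamps.
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

frame_times = load_frame_times("frametimes.csv")  # hypothetical log file
print(f"avg FPS: {1000 / (sum(frame_times) / len(frame_times)):.1f}")
```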

Results Summary

I’m not sure we learned anything new about gaming on a quad core. This is why we at Level1 have such a hard time recommending a quad core, even for “just gaming” – yes, the 7700K @ 5 GHz is the performance leader, especially at high framerates, but if anything runs as a background process, the system will stutter. This is because most of these games, even older titles like GTA V, can use four cores pretty well, leaving little headroom for anything else. As a result, we see lower 1% minimums and much more variability in our frame times (which is exactly what one would expect).
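For reference, the 1% minimum we quote comes straight from the frame time log: take the slowest one percent of frames and see what framerate they imply. A minimal sketch of that math in Python (using the 99th-percentile frame time definition; averaging the worst 1% of frames is another common convention):

```python
import statistics

def one_percent_low_fps(frame_times_ms: list[float]) -> float:
    """1% low: the FPS implied by the 99th-percentile frame time."""
    worst = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99)]
    return 1000.0 / worst

def variability(frame_times_ms: list[float]) -> float:
    """Frame time variability as a standard deviation in ms."""
    return statistics.stdev(frame_times_ms)

# Example: a mostly-60fps run with one background-task stutter.
times = [16.7] * 99 + [50.0]
print(f"1% low: {one_percent_low_fps(times):.0f} fps, stdev: {variability(times):.1f} ms")
```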

Entirely unexpected, however, is that the Xeon E5-2680 pulls ahead of the 7900X in some scenarios! Our best guess for why this might be happening is the difference in on-chip L2 and L3 cache between these two processors. The L3 cache, the CPU cache that is shared by all the cores, is much smaller on the 7900X than on the Xeon E5-2680. The 7900X has a much larger amount of L2 cache per core, but L2 is not shared between cores. There is no question that strictly in terms of instructions per clock (IPC) the 7900X is the far-and-away winner, and its roughly 1 GHz clock speed advantage is nothing to sneeze at either.
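To put rough numbers on that claim (these are spec-sheet figures as I recall them, so treat them as assumptions worth double-checking): the 7900X pairs a large private L2 per core with a relatively small shared L3, while the E5-2680 does the opposite. A quick comparison:

```python
# Spec-sheet cache figures (MB) -- assumptions worth verifying against Intel ARK.
cpus = {
    "i9-7900X": {"cores": 10, "l2_per_core": 1.0, "l3_shared": 13.75},
    "E5-2680":  {"cores": 8,  "l2_per_core": 0.25, "l3_shared": 20.0},
}

for name, c in cpus.items():
    total_l2 = c["cores"] * c["l2_per_core"]
    print(f"{name}: {total_l2:.1f} MB private L2 total, "
          f"{c['l3_shared']:.2f} MB shared L3 "
          f"({c['l3_shared'] / c['cores']:.2f} MB L3 per core)")
```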

As expected, the 7900X was the winner in several scenarios (though perhaps by a smaller margin than one would expect).

However, for specific workloads (like gaming) the benefit of the larger shared L3 cache perhaps overshadows the higher clocks and the larger per-core L2 cache. To be clear, though, I don’t think this rebalancing of L2/L3 cache in the 7900X was a mistake – it was sorely needed for nearly every workload other than gaming. The subjective performance “feel” of the 7900X was fine. Fans of AMD’s Ryzen CPUs should also take note – extremely high FPS is not everything, and even Intel has regressed its own gaming performance in order to better support mixed workloads (at least on the i9 CPUs – it will be interesting to see how the rest of the gaming performance story plays out with Intel’s upcoming mainstream six-core Coffee Lake chips).

For the frame times, I feel like the differences are a wash, or slightly favor the 7900X. The averages were lower in some cases, but the 7900X was perhaps more consistent, which makes up for it. To really zero in on a definitive winner, we’d need a lot more testing.

It should be noted (this was not part of the video) that the i9-7900X’s uncore speed plays an important role. By simply increasing the uncore clock speed – the internal “mesh” speed inside the i9 – we were able to significantly narrow the performance gap between the i9 and the Xeon; this is likely to be the subject of a future video.

Has there been no progress in raw CPU performance for five years!?

Looks that way, doesn’t it? There are only so many ways to multiply two integers. But that isn’t really the whole story: new CPUs can be worth it – the reason to buy is just decidedly not that the new CPU is so much faster at multiplying two integers (or any other general-purpose computing). New CPUs bring new functionality, new instructions and new optimizations. Then (maybe) software will be updated to use the new stuff (or maybe not). New CPUs (and platforms) also better support new peripherals like USB-C, Thunderbolt, USB 3.1 Gen 2, NVMe drives and somewhat faster memory.
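One easy way to see which of those new instructions your CPU actually supports is to check the flags the OS reports. On Linux that lives in /proc/cpuinfo; here is a minimal sketch (the flags checked are just illustrative examples):

```python
def cpu_flags() -> set[str]:
    """Return the instruction-set flags for the first CPU listed
    in /proc/cpuinfo (Linux only)."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
# A Sandy Bridge chip reports avx; Skylake-X adds avx2, fma and avx512f.
for isa in ("avx", "avx2", "fma", "avx512f"):
    print(f"{isa}: {'yes' if isa in flags else 'no'}")
```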

A simple question may best illustrate the point: “What can I do with a five-year-old CPU?” Well, if that CPU was good five years ago, not a lot has changed; it is likely still perfectly usable. A five-year-old GPU, on the other hand, has a lot more in common with a five-year-old sandwich, and is about as useful.

The lesson? If you have a decent machine and want to improve your gaming experience, just upgrade the GPU. Even though the original four-core Sandy Bridge is not great if you want a high-FPS experience comparable to a modern platform, it is still possible to attain those framerates without replacing the entire machine. This strategy works well for upgraders on both X79 and X99 systems – those chipsets will readily accept server parts, and many motherboard partners test the E5-16xx and 26xx series Xeon CPUs. I have upgraded many X79 systems to high-clock-speed E5-16xx and E5-26xx v1/v2 CPUs (and some X99 systems to Xeon E5 v3 CPUs) and it works well. Generally I have found the E5-16xx CPUs are unlocked and can be overclocked somewhat (the same cannot be said of their E5-26xx counterparts). With these upgrades I expect the service lifetime of these machines to be extended by one to two years, minimally.

And if, in a year or two, you find that the five-to-seven-year-old CPU is holding you back, you can pick up a new machine then.