> E-Cores are turned off in the BIOS, because setting affinity to P-Cores caused massive stuttering in Call of Duty.
I understand doing this for the purpose of specifically analyzing the P-core microarchitecture in isolation. However, this does make the test less interesting for potential customers. I don't think many people would disable E-cores in the BIOS if they bought this CPU, so for the purpose of deciding which CPU to buy, it would be more interesting to see results which factor in the potential software/scheduling issues that come from the E-core/P-core split.
This isn't a criticism, just an observation. Real-world gaming results for these CPUs would be worse than what these results show.
I think many haven't yet grasped that the future is heterogeneous computing, especially when many desktops are actually laptops nowadays.
Software working poorly in such a setup means no effort was made to actually make it perform well in the first place.
Games that require desktop cases looking like a rainbow aquarium with top-of-the-line everything will become a niche in today's mobile computing world, and with diminishing sales and attention spans, maybe that isn't the way to keep studios going.
> Software working poorly in such a setup means no effort was made to actually make it perform well in the first place.
How do you optimize for a system architecture that doesn't exist yet?
(e.g. CoD could probably be fixed quite easily, but you can't do that before the hardware where that problem manifests even exists. I'd say it's much more likely that the problem is in the Windows scheduler, though.)
> Games that require desktop cases looking like a rainbow aquarium with top-of-the-line everything will become a niche
PC gaming has been declared dead ever since the late 90s ;)
Not “no effort to make sure it performs well in the first place”; that isn’t fair. Lots of effort probably went into making it perform well; it's just that this case isn't handled yet, and to be fair, it currently only impacts some people and there is still a chance to update it.
This just reads like “if they haven’t handled this specific case, they’ve made no effort at all across the board”, which seems extreme.
Then why take the effort to make it look better than it actually is?
Are you asking why the author of this article is disabling e-cores?
That'd be because this article is trying to analyze Intel's Lion Cove cores. The E-cores and the issues caused by heterogeneous cores are therefore irrelevant; only the P-cores are Lion Cove.
There's a giant pile of software - decades' worth of it, literally - which was already written and released, much of it now unmaintained and/or closed source, where the effort you cite is not a possibility.
By all means, anything released in 2025 doesn't have that excuse - but I can't fault the authors of 15+ year old programs for not having a crystal ball and accounting for something which didn't exist at the time. Intel releasing something which behaves so poorly in that scenario is... not really 100% fine in my eyes. Maybe a big warning sticker on the box (performs poorly with pre-2024 software) would be justified. Thankfully workarounds exist.
P.S. At least I would have expected them to work more closely with OS vendors and ensure their schedulers mitigate the problem, but nope, doesn't look like they did.
Which is funny, because the pushback the PS3 got from developers was considerable. Maybe it was "too heterogeneous".
But I guess the future was already set.
That was very different. Here the problem is that the game's threads get scheduled on the weak E-cores and the game doesn't handle that well for some reason; with the PS3 that would have been impossible, since the SPEs had a different ISA and didn't even have direct access to main memory. The problem there was that developers had to write very different code for the SPEs to unlock most of the Cell's performance.
When the Cell came to be, heterogeneous computing was mostly an HPC thing, and having the compute units only practically programmable in assembly (you could use C, but not really) didn't help.
Now every mobile device, meaning any form of computing on the go, has multiple performance/low-power CPU cores, a GPU, programmable audio, NUMA, and maybe even an NPU.
Yeah, Intel either needs to ensure much better day-one support for the E/P core split or drop it entirely.
People see E-cores as an active negative right now, rather than the benefit Intel pitches them as.
Scheduling is the OS’s decision, not the CPU’s.
Although Intel’s configurations are unhelpful, to say the least: it’s hard to make good scheduling decisions when Intel sells a 2P + 8E + 2LE part as "12 cores" to the user.
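For what it's worth, the topology is at least discoverable: software can ask the core it is currently running on what kind of core it is, via CPUID's hybrid information leaf 0x1A. A minimal sketch, assuming GCC/Clang on x86 and that the thread has already been pinned to the core of interest:

    /* Query the hybrid core type of the current logical CPU.
       Leaf 0x1A returns EAX = 0 on non-hybrid parts, and the call
       fails entirely on CPUs too old to expose the leaf. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned eax, ebx, ecx, edx;
        if (!__get_cpuid_count(0x1A, 0, &eax, &ebx, &ecx, &edx) || eax == 0) {
            puts("no hybrid core info available");
            return 0;
        }
        unsigned core_type = eax >> 24;      /* EAX[31:24] = core type */
        if (core_type == 0x40)               /* 0x40 = Core (P-core)   */
            puts("P-core");
        else if (core_type == 0x20)          /* 0x20 = Atom (E-core)   */
            puts("E-core");
        else
            printf("unknown core type 0x%x\n", core_type);
        return 0;
    }

Whether the OS then makes good use of that information is a separate question, which is rather the point.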
Is this a problem with Intel or with the OS scheduler? Haven't they had this kind of CPU for a few years now, with this being touted as a reason to move from Win10 to 11?
It doesn't matter to the public's perception.
> Real-world gaming results for these CPUs would be worse than what these results show.
That's mostly an application and/or OS issue, not really a CPU one.
Tell that to AMD's Bulldozer. There is something to be said for considering theoretical performance, but one can't ignore how hardware works in practice.
It’s a compatibility problem with those games, not an issue with the processor - the kind of thing a game or Windows update solves.
Old games won't get updates. That is why there are multiple third-party tools that try to force the situation, e.g. Process Lasso.
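Under the hood, what such tools do boils down to an affinity mask call. A minimal Windows sketch (pinning the current process for simplicity - the tools open the game's process instead - and the 0xFF mask is a made-up, machine-specific assumption that logical CPUs 0-7 are the P-cores):

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Hypothetical mask: assumes logical processors 0-7 are the
           P-cores. Real tools discover the correct mask per machine. */
        DWORD_PTR p_core_mask = 0xFF;
        if (!SetProcessAffinityMask(GetCurrentProcess(), p_core_mask)) {
            fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                    GetLastError());
            return 1;
        }
        puts("process restricted to the assumed P-core set");
        return 0;
    }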
To see what that means in practice: in my multi-generational meta benchmark, the 285K currently lands only at rank 12, behind the top Intel processors from the last two generations (i7-13700K and i7-14700K, plus the respective i9s) and several AMD processors: https://www.pc-kombo.com/us/benchmark/games/cpu. The 3D cache just helps a lot in games, but the loss against its own predecessor must hurt even more.
Is your benchmark trustworthy?
I see a strange discrepancy between my "old" i7-12700K and the i7-13700K:
Games: 170 vs 270; Software: 271 vs 1875 (!!!)
I can believe 170 vs 270 (though these two CPUs are not so different!), but a 7x difference in software!? Is that believable?
I don't fully follow this, so what has been gained with the new models?
I seem to remember you'd need dedicated industrial cooling for the 14700K. Does the new model at least draw much less power?
So, for one, in other software the new processors do better: the 285K beats the i9-14900KS by a bit in my app benchmark collection (which is less extensive, but still). And second, yes: according to https://www.computerbase.de/artikel/prozessoren/intel-core-u... for example, they are less extreme in their energy usage and more efficient in general, albeit not more efficient than the AMD processors.
But it is a valid question.
> I don't fully follow this, so what has been gained with the new models?
There were power efficiency gains, as well as nice productivity improvements for some workloads: https://www.youtube.com/watch?v=vjPXOurg0nU
For gaming, those CPUs were a sidegrade at best. To be honest, it wouldn't have been a big issue, especially for folks upgrading from way older hardware, if only their pricing weren't so out of line with the value they provide (look at their GPUs: at least there the MSRP makes the hardware good value).
122 points and no comments? Is this being botted or something?
Such articles are very interesting for many people, because nowadays all CPU vendors are under-documenting their products.
Most people do not have enough time or knowledge (or money to buy CPU samples that may prove not to be useful) to run extensive sets of benchmarks to discover how the CPUs really work, so they appreciate it when others do this and publish their results.
Besides learning useful details about the strengths and weaknesses of the latest Intel big core, which may help in the optimization of a program or in assessing the suitability of an Intel CPU for a certain application, there is not much to comment about it.
it's a very good article.
Could be. Usually it means the subject is too advanced for the average HN user yet something that they are interested in.
>122 points and no comments?
Better no comments than having to wade through the typical FUD or off-topic rants that tend to plague Intel and Microsoft topics.
exactly. I'm very happy to notice there are no 'x86 bad arm good' comments here as of now. a welcome change.
also - or maybe, first and foremost - it's just a very good article.
I mean, what is there to comment on? Intel botched another product release. It is just a sad state of affairs.
How so?
Not that I disbelieve, I just wasn't especially picking that up from the article.
They still cannot reach the figures they had in the last three(?) generations - the 13th and 14th series - which hit those figures by literally burning themselves to the point of degradation.
Intel is no competition for AMD in the gaming segment right now; AMD controls both the low-power/efficiency market and the high-performance one.
Do they? I thought Lunar Lake was an incredibly good efficiency generation.
While Lunar Lake has excellent energy efficiency and AMD does not really have any CPU designed for low power levels, Lunar Lake also had a very ugly hardware bug (sporadic failure of MWAIT to detect the waking event).
This bug has disqualified Lunar Lake for me, and I really do not understand how such a major bug was not discovered before the product launch. (The bug was discovered when, on many computers running Linux, the keyboard or the mouse did not function properly because their events were not always reported to the operating system. There are simple workarounds for the bug, but not using MONITOR/MWAIT eliminates one of the few advantages that Intel/AMD CPUs have over Arm-based CPUs, so I do not consider that an acceptable solution.)
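For context on what MONITOR/MWAIT does: one core arms a monitor on a cache line and then sleeps until that line is written (or an interrupt arrives); the erratum is that this wake-up sporadically never comes. A rough illustration of the pattern using the SSE3 intrinsics - note that MONITOR/MWAIT are ring-0 instructions, so this sketches kernel idle-loop logic rather than runnable user code:

    #include <pmmintrin.h>   /* _mm_monitor / _mm_mwait intrinsics */

    volatile int wake_flag;  /* written by another core, e.g. on an input event */

    static void idle_until_flag_set(void) {
        while (!wake_flag) {
            _mm_monitor((const void *)&wake_flag, 0, 0); /* arm the monitor */
            if (wake_flag)        /* re-check: the store may have landed    */
                break;            /* between the while-test and the MONITOR */
            _mm_mwait(0, 0);      /* sleep until the monitored line is written;
                                     the Lunar Lake bug: this wake-up
                                     sporadically never arrives */
        }
    }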
It is
Fantastic article - as always. Regarding the top-down analysis: I was a bit surprised to see that in ~1/5 of the cases the pipeline stalls b/c the pipeline is Frontend Bound. Can that be? Similarly, why is Frontend Bandwidth a subgroup of Frontend Bound? Shouldn't one micro-op be enough?
Take frontend bound with a grain of salt. Frequently I find a backend backpressure reason for it, e.g. long-tail memory loads needed for a conditional branch or an atomic. There are limitations to sampling methods and top-down analysis; consider it a starting point for understanding the potential bottlenecks, not the final word.
Interesting. You realize this by identifying the offending assembly instructions and then seeing that one of the operands comes from memory?
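Sampling the stall events and looking at where the samples land - e.g. with perf record followed by perf annotate - is indeed the usual way, yes. And on the earlier question of why Frontend Bandwidth exists as a subgroup at all: top-down accounting is per issue slot, not per cycle. Roughly, following Yasin's original top-down paper (generic names; Lion Cove's exact counters may differ, and its slot width is 8):

    \[
    \text{FrontendBound} = \frac{\text{FetchBubbles}}{\text{SlotWidth} \cdot \text{Cycles}}
    \]
    \[
    \text{FrontendLatency} = \frac{\text{SlotWidth} \cdot \text{Cycles}_{0\,\text{uops delivered}}}{\text{SlotWidth} \cdot \text{Cycles}},
    \qquad
    \text{FrontendBandwidth} = \text{FrontendBound} - \text{FrontendLatency}
    \]

So one micro-op per cycle isn't "enough": on an 8-wide machine, delivering 1 of 8 possible uops still wastes 7 slots that cycle. Cycles with zero uops delivered count toward Frontend Latency; those partially filled cycles are exactly what the Frontend Bandwidth subgroup captures.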
I suggest using a Xeon w7-2595X, so that you have 26 P-cores and 0 E-cores.
I would like to understand more of the article; which book should I read?
https://archive.org/details/computerarchitectureaquantitativ...
Say, is there any talk of Intel working on an AMD Strix Halo competitor, i.e. quad-channel LPDDR5X in the consumer segment?