The Death of CPU Scaling – an ExtremeTech Article

I read this excellent ExtremeTech article by Hruska this morning. I’ve been wondering about this topic myself for the past couple of years.

CPU technology seemed to be following a new path of “more is better” rather than worrying about the performance of just one core. In other words, to me anyway, it seemed as if Intel, AMD, and others had just decided that it was cheaper to stuff more cores onto a chip than it was to actually design and create a single core that performed at a higher level. I’ve been noticing this trend for a few years now, ever since the Intel Core 2 Duo came out, I guess.

To be honest, and I must add that I’m no expert on this topic by a long shot, I haven’t really noticed a huge performance jump between my current AMD Phenom quad and my old AMD Athlon64 single core CPU. I’m sure there is a vast difference, at least there appears to be when perusing the spec sheets of these processors. However, for my purposes, they’re pretty much interchangeable on my system.

Hruska writes:

For decades, microprocessors followed what’s known as Dennard scaling. Dennard predicted that oxide thickness, transistor length, and transistor width could all be scaled by a constant factor. Dennard scaling is what gave Moore’s law its teeth; it’s the reason the general-purpose microprocessor was able to overtake and dominate other types of computers.

And further on:

For the past seven years, Intel and AMD have emphasized multi-core CPUs as the answer to scaling system performance, but there are multiple reasons to think the trend towards rising core counts is largely over. First and foremost, there’s the fact that adding more CPU cores never results in perfect scaling. In any parallelized program, performance is ultimately limited by the amount of serial code (code that can only be executed on one processor). This is known as Amdahl’s law.
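Amdahl’s law is easy to state as a formula: if a fraction p of a program’s work can be parallelized across n cores, the best possible speedup is 1 / ((1 − p) + p/n). A quick sketch of the math (the function name here is mine, not from the article) shows why piling on cores runs out of steam:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Maximum speedup under Amdahl's law: the serial fraction
    (1 - parallel_fraction) can only ever run on one core."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a program that is 90% parallel tops out well below the core count:
for n in (2, 4, 8, 16):
    print(n, "cores ->", round(amdahl_speedup(0.90, n), 2), "x")
# 2 cores -> 1.82 x
# 4 cores -> 3.08 x
# 8 cores -> 4.71 x
# 16 cores -> 6.4 x
```

With just 10% serial code, sixteen cores buy you barely a 6.4x speedup, and the curve flattens from there, which is exactly the ceiling the article is pointing at.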

Where is CPU technology headed? Will the corporate bean counters pressure the companies to satisfy their stockholders’ greed, or will the drive to create a quality product prevail? All good questions. The answers will eventually affect all of us who use computers in some way on a daily basis; not to mention the fact that the world is run by computers these days. Without that little CPU in the cash register at your grocery store, the girl behind the counter can’t even figure out how to count out your change from your purchase. We’ve become dependent on this technology.

Sadly, I think it’s turning us all into mushbrains. However, that’s a topic for another time.



Further reading: The death of CPU scaling: From one core to many — and why we’re still stuck, by Hruska, February 1, 2012, on ExtremeTech.


14 Comments on “The Death of CPU Scaling – an ExtremeTech Article”

  1. That may be true with AMDs, but the Intel Core 2 Duo and Intel Core i3, i5, and i7 systems are very speedy compared to, say, the Intel single-core chips that were available before them and are still available at liquidation shops. The biggest difference is of course between the Intel Core 2 Duo and the subsequent ones (Intel Core i3, i5, and i7). But that’s not just due to chip differences. The motherboards are using newer technology, and from my understanding the Intel Core i3, i5, and i7 systems have done away with the front-side bus bottlenecks, which really does make a major difference.

    However, with programs that do not take advantage of true multitasking across multiple cores, there may not appear to be major differences between the Intel Core i3 and the Intel Core 2 Duo. The real difference, and where the Intel Core i3 in particular and the Core i family of processor/motherboard pairs really shine, is with programs that are doing that multi-core multitasking stuff. They really leave the Core 2 Duo in the dust then.

    Windows 7, 64-bit Linux, and Mac OS X on the Intel Core i family chips really are a major step up, and they really zip along compared to earlier ones.

    • Speedier, yes; more powerful, not necessarily. That’s the point they’re trying to make in the article. Processors appear to be doing more, but they’re not, really; not on a per core basis. We’re just stacking more cores in there to do more work. Speeds have not dramatically increased, nor has actual raw processing power.

      I suspect that it is a profit/loss thing, sadly. Look how long it’s taken to get to 64-bit computing. Will we ever see 128+ bit computing? Oftentimes, an overclocked older single-core processor can match, or in some cases outperform, some of these low-end multicores. But hey… like I said, I’m no expert. I just calls ’em as I sees ’em, you know.

      I’d like to be able to gaze into my crystal ball to see what the next ten years are going to look like technology-wise. While I’m gazing, I ought to grab us next week’s lottery numbers, huh? 😉

  2. comhack says:

    Excellent post Eric!!!

    I enjoyed reading the comments, and you both make some very good points. I do not know about AMD, but I see a huge difference between my Core 2 Duo (2.66 GHz) and my i5 (2.88 GHz). Then again, there is a huge difference in RAM between the machines (16 GB vs. 4 GB), so that comparison is questionable as well.

    • Recent improvements in hardware (solid-state drives, faster RAM access times, etc.) have definitely had the effect of simulating better performance from newer multicores on desktop systems. However, what would happen if we could check that performance on older, slower hardware? Would there be much of a difference between the single cores and the multis in that situation?

  3. I keep going back to the new tech on the motherboards. Not just faster RAM, which we always expect to see as time goes on, but a whole new tech on the motherboard for the Intel Core i3, i5, and i7 systems.

    A look at Intel’s new Core i3/i5/i7 processors and how they will affect rugged computing

    “There are, however, some interesting differences: As a first in this class of Intel CPUs, the memory controller and reasonably powerful integrated graphics with HD hardware acceleration and other new capabilities are now part of the processor, which means no more conventional Front Side Bus and “Northbridge” part of the chipset complementing the processor. These integrated graphics can be turned off when they are not needed, and Nvidia (who is probably not that thrilled about this Intel move) has already announced their “Optimus” technology that automatically determines whether to use the integrated graphics and extend battery life, or use an external NVIDIA GPU to boost graphics.”

  4. chekkizhar says:

    Just great man 😉 . Thanks for sharing….
