For a long time, you could judge a computer’s performance by a single number: processor speed. The more megahertz (MHz) you had, the better you could power through your work. And once you got up into the gigahertz (GHz) range, watch out!
Now, other factors certainly influenced how fast your PC ran: the amount of memory, the rotational speed of the hard drive, and so on. I didn’t say processor speed was a particularly accurate way to judge a computer’s performance. But people did, and processor manufacturers Intel and AMD had little problem with that; they kept producing newer and newer products at faster and faster clock speeds.
Until they didn’t.
Buy a decent PC today and your processor is probably advertised as running somewhere around 2GHz. Even high-performance desktops hover a little above 3GHz. How come, when the Pentium 4 processor reached 3.8GHz way back in 2005?
The simple answer is heat. Just like an old 486 processor would require a cooling unit the size of your house if you tried to push it into the gigahertz range, the latest CPUs produce a lot more heat the faster they go. A much more cost-effective way to get more processing power is simply to have more processors working in parallel. And if you can get more processor cores onto a single die or package with a single interface to the rest of the computer, all the better.
That’s been the strategy of component makers for years now. Laptops and desktops with single-core processors are becoming a rarity, and dual-core CPUs are migrating to the low end of the market as quad-core models become more affordable. Even mobile devices are starting to be released with dual-core processors.
Four cores isn’t the end game, though. Not by a long shot. AMD just released its Opteron 6200 line, processors with 16 of the little bad boys. The various models range from 1.6GHz to 3.3GHz, with most able to temporarily surge by about 500MHz. Sure, the Opteron 6200 is designed for servers, but that’s the technology that trickles down to consumers before too long. AMD already has plans to bring eight-core CPUs to desktops in the next few months.
Intel, meanwhile, is stuck with six-, eight- and 10-core server processors in their Xeon line. Lame, right? They even top out under 3GHz. Speed is important in servers, but more critical is stability. Keeping clock speeds lower while increasing the number of cores to work on different tasks simultaneously is certainly the most efficient way to go there.
For personal computers, the advantages to multi-core processors aren’t as pronounced, but they’re most assuredly still there. Simple programs still use only one thread, or set of step-by-step instructions followed by the CPU. More complex and more recent applications are multi-threaded, so they can literally do multiple things at once, but only if there are multiple processor cores.
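To make that concrete, here’s a rough sketch in Python of how a multi-threaded program splits one job across several threads. The function names are made up for illustration. One caveat worth noting: in standard CPython, the interpreter’s global lock keeps pure-Python number crunching from truly running on multiple cores at once, so programs chasing that kind of speedup typically use processes or native code instead. The shape of the idea is the same either way.

```python
import threading

def partial_sum(start, stop, results, index):
    # One thread's share of the work: sum the squares in its slice.
    results[index] = sum(n * n for n in range(start, stop))

def threaded_sum(total_n, num_threads=4):
    # Split the job into one chunk per thread, mimicking how a
    # multi-threaded application hands tasks to separate cores.
    # (In CPython, the GIL means these threads take turns on
    # CPU-bound work; with processes, they'd run truly in parallel.)
    chunk = total_n // num_threads
    results = [0] * num_threads
    threads = []
    for i in range(num_threads):
        start = i * chunk
        stop = total_n if i == num_threads - 1 else start + chunk
        t = threading.Thread(target=partial_sum,
                             args=(start, stop, results, i))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()  # wait for every chunk to finish
    return sum(results)

print(threaded_sum(1000))
```

The answer comes out the same as a single-threaded loop; the difference is that each chunk is an independent task the operating system is free to schedule on a different core.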
Will there eventually be a ceiling to the number of cores a computer can use simultaneously? Perhaps. Just as some steps of a recipe can’t begin until earlier ones are finished, some instructions rely on others already being completed. Even a million cores can’t process a million lines of code at once if line 761,442 depends on the result of line 19. There’s also overhead in scheduling all those cores. But as long as tasks can be broken up, so can the machines doing them.
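That ceiling was actually formalized decades ago as Amdahl’s law (the article doesn’t name it, so consider this a side note): if only part of a program can be split across cores, the leftover serial part caps the speedup no matter how many cores you throw at it. A quick back-of-the-envelope sketch:

```python
def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's law: the serial part (1 - p) always runs at full length,
    # while the parallel part (p) is divided among the cores.
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

# A program that is 95% parallelizable, on a million cores,
# can still never run more than about 20x faster:
print(round(amdahl_speedup(0.95, 1_000_000), 2))  # prints 20.0
```

Notice that the core count barely matters once it gets large; the 5% that must run in order is the whole story.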
There’s always lots to process at twitter.com/CitizenjaQ.