Prescott doubled down on the P4’s already-long pipeline, extending it to nearly 40 stages, while Intel simultaneously shrank the P4 to a 90nm die. The new chip was crippled by pipeline stalls that even its new branch prediction unit couldn’t prevent, and parasitic leakage drove high power consumption, preventing the chip from hitting the clocks it needed to be successful. Prescott and its dual-core sibling, Smithfield, are the weakest desktop products Intel ever fielded relative to its competition at the time. Intel set revenue records with the chip, but its reputation took a beating.

AMD’s Bulldozer was supposed to steal a march on Intel by cleverly sharing certain chip capabilities to improve efficiency and reduce die size. AMD wanted a smaller core, with higher clocks to offset any penalties related to the shared design. Bulldozer couldn’t hit its target clocks, drew too much power, and its performance was a fraction of what it needed to be. It’s rare that a CPU is so bad that it nearly kills the company that invented it. AMD did penance for Bulldozer by continuing to use it: despite the core’s flaws, it formed the backbone of AMD’s CPU family from late 2011 through early 2017.

The MediaGX was the first attempt to build an integrated SoC processor for the desktop, with graphics, CPU, PCI bus, and memory controller all on one die. Unfortunately, this happened in 1998, which means all those components were really terrible. Motherboard compatibility was incredibly limited, the underlying CPU architecture (Cyrix 5×86) was equivalent to Intel’s 80486, and the CPU couldn’t connect to an off-die L2 cache (the only kind of L2 cache there was, back then). Chips like the Cyrix 6×86 could at least claim to compete with Intel in business applications. The MediaGX couldn’t compete with a dead manatee.

The entry for the MediaGX on Wikipedia includes the sentence “Whether this processor belongs in the fourth or fifth generation of x86 processors can be considered a matter of debate.” The 5th generation of x86 CPUs is the Pentium’s generation, while the 4th generation refers to 80486 CPUs. The MediaGX shipped in 1997 with a CPU core stuck somewhere between 19, at a time when people really did replace their PCs every 2-3 years if they wanted to stay on the cutting edge. It also notes, “The graphics, sound, and PCI bus ran at the same speed as the processor clock also due to tight integration. This made the processor appear much slower than its actual rated speed.” When your 486-class CPU is being choked by its own PCI bus, you know you’ve got a problem.

Dishonorable Mention: Qualcomm Snapdragon 810

The Snapdragon 810 was Qualcomm’s first attempt to build a big.LITTLE CPU, and it was based on TSMC’s short-lived 20nm process. The SoC was easily Qualcomm’s least-loved high-end chip in recent memory: Samsung skipped it altogether, and other companies ran into serious problems with the device. Qualcomm claimed that the issues with the chip were caused by poor OEM power management, but whether the problem was related to TSMC’s 20nm process, problems with Qualcomm’s implementation, or OEM optimization, the result was the same: a hot-running chip that won precious few top-tier designs and is missed by no one.

Apple’s partnership with IBM on the PowerPC 970 (marketed by Apple as the G5) was supposed to be a turning point for the company. When it announced the first G5 products, Apple promised to launch a 3GHz chip within a year. But IBM failed to deliver components that could hit these clocks at reasonable power consumption, and the G5 was incapable of replacing the G4 in laptops due to its high power draw. Apple was forced to move to Intel and x86 in order to field competitive laptops and improve its desktop performance. The G5 wasn’t a terrible CPU, but IBM wasn’t able to evolve the chip to compete with Intel.

Dishonorable Mention: Cell Broadband Engine

We’ll take some heat for this one, but we’d toss the Cell Broadband Engine on this pile as well. Cell is an excellent example of how a chip can be phenomenally good in theory, yet nearly impossible to leverage in practice. Sony may have used it as the general processor for the PS3, but Cell was far better at multimedia and vector processing than it ever was at general-purpose workloads (its design dates to a time when Sony expected to handle both CPU and GPU workloads with the same processor architecture).