The CPU Shack Museum – CPU History Museum for Intel CPUs, AMD Processors, Cyrix Microprocessors, Microcontrollers and more.

The Intel N60066: Unwrapping a Mystery Fri, 20 Mar 2020 23:57:50 +0000

Fischer & Porter 53MC5 – The beginning of the Mystery

One day last summer, I was browsing the deep dark corners of the internet for processors, a fun, yet dangerous activity.  I happened upon a lot of PCBs from some older industrial automation equipment.  No real information was provided (those buying these boards clearly would already know what they needed).  They did, however, have an RTC, an EPROM, a 16MHz crystal, and a large 84-pin PLCC.  That PLCC was marked as an Intel N60066.  Seeing such a large chip surrounded by such components almost always means it's some sort of processor or microcontroller.  The problem is, there is no known Intel 60066 part.  The chips were all made in the late 80's and early 90's and had 1980 and 1985 copyrights.  A 1980 copyright typically screams MCS-51, as that was when the family was introduced and nearly all such chips bear an Intel 1980 mark.

Intel N60066

The boards themselves were dated from 1990 all the way to the early 2000's (I bought a lot of them, another problem I have).  Some had the part number 53MC5 and the logo of Fischer & Porter.  Fischer & Porter had existed since the 1930's and was a leader in instrumentation.  They were bought by Elsag Bailey Process Automation (EBPA) in 1994, which itself was swallowed up by ABB in 1999.  The boards' design was largely unchanged through all of these transitions.  Searching for documentation on the 53MC5 part number (it's a Loop Controller) unfortunately didn't yield details on what the N60066 was.  The only thing left to do was to set it on fire…

Unfortunately, fire is the only way I currently have for opening plastic ICs (I need to get some DMSO to try, apparently).  After some careful work with the torch and some rough cleaning of the resulting die, it was readily apparent that this was an MCU of some sort.  The die itself was marked… 1989 60066.  This wasn't a custom-marked standard product; this was a custom product by Intel for this application, a very surprising thing indeed.  Unlike other companies such as Motorola, Intel was not well known for custom designs/ASICs.  This wasn't their market or business plan.  Intel made products to suit the needs they saw; if that worked for the end user, great, if not, perhaps you could look elsewhere.  They would gladly modify specs/testing of EXISTING parts, such as wider voltage ranges or different timings, but a complete custom product?  Nope, go talk to an ASIC design house.  It's likely Fischer & Porter ordered enough of these to make it worth Intel's effort.

Knowing this was an MCU, and suspecting an MCS-51, further searching revealed the answer, and it came from the most unusual of places.  In 2009 the US NRC (Nuclear Regulatory Commission) determined there was no adequate Probabilistic Risk Assessment (PRA) for digital systems in their agency, so it set about determining how best to calculate the risk of digitally controlled systems.  They analyzed a system used to control feedwater in nuclear reactors.  These are critical systems responsible for making sure the reactor is supplied with the right amount of cooling water at the right time; failure, of course, is not an option.  The 53MC5 is what is used for controlling the valves.  In this document we find this nugget:

The controller is an 8051 processor on board an application-specific integrated circuit (ASIC) chip that performs a variety of functions.

Well that certainly helps, it is indeed a custom ASIC based on an 8051.  The report also provided a diagram showing the ASIC system.  This is an 8051 core with RAM/ROM (normal) as well as a Watchdog timer, a PAL, I/O Buffers, and Address Logic.

I sent a couple of these chips to my friend Antoine in France for a proper die shot, which he is quite amazing at.

Intel N60066 die – 8051 core on the left. Die shot by Antoine Bercovici

The 8051 core is on the left of the die, with its RAM/ROM.  A very large PLA occupies the bottom right side of the die.  In the upper right is presumably the external watchdog timer for the ASIC.  The lines crossing the die mostly vertically are a top metal layer used for connecting all the various sections.

The hunt for a new CPU/MCU is part of the thrill of collecting.  The satisfaction of finding out what a mystery chip is can be worth many hours of dead ends in researching it.  It's not common to have to go to the NRC to find the answer though.

ESA Solar Orbiter: When SPARCs Fly Sun, 09 Feb 2020 23:58:20 +0000

ESA ERC-32SC

ERC-32SC – SPARC V7 MCM with RAM and MIL-STD-1553

In a few hours (assuming no more delays; UPDATE: launch successful) the joint NASA/ESA Solar Orbiter mission will launch on a ULA Atlas 5 rocket out of Florida, USA.  This is a mission a long time coming for the ESA, which like NASA has to get its funding from the government, except in the case of ESA that involves the governments of many countries in the EU, which can make planning a bit more tricky.  The mission was originally baselined in 2011 and hoped to launch in… 2013… then 2017… then 2018, and finally got a launch date in 2020.  The original proposal dates to the late 1990's as a mission to replace the joint NASA/ESA SOHO solar mission that had launched in 1995.  This creates some interesting design choices for a mission, as designing often happens before a mission is completely approved/funded.  For Solar Orbiter this is one of the main reasons it is powered by a computer that is rather dated by today's standards, space standards no less!

Solar Orbiter – ESA

The Solar Orbiter is powered by a processor designed by the ESA, the ERC-32SC.  This is the first generation of processors designed by the ESA.  It is a SPARC V7 compliant processor running at 25MHz and capable of 20MIPS.  The ERC-32SC is a single-chip version of the original ERC-32, which was an MCM (Multi-Chip Module) containing the 3 dies that made up the processor (the Atmel/Temic TSC691 Integer Unit, TSC692 FPU, and TSC693 Memory Controller), made on a 0.8u CMOS process.  The single-chip version was made possible by a process shrink to 0.5u.  It was also made by Atmel (which acquired Temic) and is commercially known as the TSC695.  As it is designed for space use, it is capable of handling a 300krad Total Ionizing Dose of radiation.  The computer used in the Solar Orbiter was built by RUAG and has two separate ERC-32SC processor systems for redundancy.  Each of the ERC-32SCs is actually mounted on an MCM: the single-chip SPARC, 48MB of DRAM (38 of which is usable, the remainder being for Error Detection/Correction via the Reed-Solomon method), and a MIL-STD-1553 bus controller/RTC/IO are included in the package.
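As a quick sanity check of those memory figures, the EDAC overhead works out as follows (a back-of-the-envelope calculation using only the 48MB/38MB numbers quoted above):

```python
# Back-of-the-envelope check of the Solar Orbiter computer's DRAM
# overhead for Reed-Solomon error detection/correction, using the
# 48 MB total / 38 MB usable figures quoted above.

total_mb = 48    # DRAM physically present on the MCM
usable_mb = 38   # available to software
edac_mb = total_mb - usable_mb

overhead = edac_mb / total_mb
print(f"{edac_mb} MB ({overhead:.0%} of DRAM) is devoted to EDAC check symbols")
```

Roughly a fifth of the memory on the module exists purely to catch and correct radiation-induced bit flips.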

Fujitsu MB86900 – Original SPARC V7 Processor from 1987

The original specifications for this processor were developed back in the 1990's, which is why it is a SPARC V7, equivalent to the very first Sun SPARC workstations of the late 1980's powered by the likes of the Fujitsu MB86900/MB86901.  The ESA has developed several follow-on processors since, all based on the later SPARC V8 architecture.  They are faster and more efficient than the ERC-32SC, with some even being dual-core processors.  They are known as the LEON2 and the later LEON3.  LEON2 has a 5-stage pipeline and no SMP support, while LEON3 increases the pipeline to 7 stages and adds SMP support.  LEON3 is also a VHDL core able to be added to many ASICs/FPGAs (LEON2 is a hard core).  The Solar Orbiter has both LEON2 and LEON3 processors on board as well…

The Solar Orbiter carries with it 10 different scientific instruments, and each of them has its own processing subsystem, 9 of which are powered by LEON SPARC processors.  It's common for the main processor of a spacecraft to be the most powerful, but in this case the instruments each possess a processor more powerful than that of the main spacecraft computer.  This is in large part due to many of these instruments being designed well after the original spacecraft bus and systems were baselined.  Payloads can be added/changed much later in the design of the spacecraft, allowing their designers to use more modern computers.

Instrument          Processor(s)                        Notes
Solar Orbiter OBC   ERC-32SC – Atmel TSC695             Spacecraft Platform Processor
SoloHi              LEON3FT – Microsemi RTAX2000 FPGA   Two processors
                    LEON3FT – Cobham UT699
SWA-HIS/EAS/PAS     LEON2FT – Atmel AT697F              up to 100MHz
STIX                LEON3FT – Microsemi RTAX2000 FPGA
EUI                 LEON3FT – Cobham UT699              66MHz single core
PHI                 LEON3FT – Cobham GR712RC            Dual core, up to 100MHz
SPICE               8051 + FPGA                         Long live the MCS-51

There are also likely more processors on this mission, but it can be hard to track them all down; nearly every subsystem has its own processing (star trackers, radios, attitude control, etc.).

So as you watch the launch tonight, and perhaps see science/pictures from the Solar Orbiter (or just benefit from its added help in predicting solar storms and allowing us here on Earth to prepare for them better), think of all the SPARCs it has taken to make it function.


ARMing the Modems of the 1990’s Sat, 25 Jan 2020 01:11:59 +0000

Racks of external modems at an ISP back in the day

Back in the 1990's I worked at several ISPs in my hometown.  These were the days of dial-up, and by working at the ISP I got free dial-up access, which my family and I enjoyed.  We had several racks (white wire racks) of external modems for dial-in.  This was the most common solution for smaller ISPs.  External modems were usually more reliable, cheap, and easy to replace if/when they failed (and they did).  They got warm, so it wasn't uncommon to see a fan running to help move more air.  Surprisingly I could only find a few pictures of such installations, but you get the idea.

By the late 1990's, as dial-in access and ISPs grew to be major concerns, dial-up solutions became much more sophisticated.  Gone were wire racks of modems and in were rackmount all-in-one dial-in solutions.  These included boards that hosted dozens of modems on one PCB, with their own processing and management built in.  One of the largest companies for these solutions was Ascend Communications.  Their 'MAX TNT' modem solution once boasted over 2 million dial-up ports during the 1990's.  Such was Ascend's popularity that they merged with Lucent in 1999, a deal that was the biggest ever at its time, valued at over $24 Billion ($37 Billion in 2020 USD).  It wasn't just traditional ISPs that needed dial-up access; ATMs and credit card processing became huge users as well.  It wasn't uncommon to try to run a credit card at a store in the 1990's and have to wait, because the machine got a busy signal.  The pictured Ascend board has 48 modems on a single PCB, and would be in a rack or case with several more boards, supporting hundreds of simultaneous connections.

Ascend CSM/3 – 16x Conexant RL56CSMV/3 chips provide 48 modems on one board.

Ascend's technology was based primarily on modem chips provided by Conexant (Rockwell Semiconductor before 1999).  Rockwell had a long history of making modem controllers, dating back to the 1970's.  Most of their modem controllers up through the 80's and early 90's were based on a derivative of the 6502 processor.  This 8-bit CPU was more than adequate for personal-use modems up to 33.6kbaud or so, but began to become inadequate for some of the higher-end modems of the 1990's.  These ran at 56k, supported various voice, fax, and data modes, and handled a lot of their own DSP needs as well.  Rockwell's solution was to move to an ARM-based design, and integrate everything on chip.

One of the results of this was the AnyPort Multiservice Access Processor.  It was called the Multiservice Access Processor because it handled voice, data, 33.6/56k, ISDN, cellular, FAX, and several other types of data access, and it did so in triplicate.  The RL56CSMV/3 supported 3 different ports on one chip.  The CSM3 series was the very first ARM-cored device Rockwell produced.  Rockwell had licensed the ARM810 (not very common), the ARM7TDMI, and a 'future ARM architecture' (which was the ARM9) back in January of 1997.  In less than two

Conexant RL56CSM/3 R7177-24 ARM7 (non-V version has no voice support)

years Rockwell had designed and released the first AnyPort device, remarkable at the time.  The CSM/CSMV used the ARM7TDMI running at 40MHz and was made on a 0.35u process.  The CSM/CSMV has another interesting feature, and that's the backside of the chip….

Take a look at the backside of the 35mm BGA chip; the ball arrangement is very unusual!  There is a ring of balls around the outer edge and 4 squares of 16 balls inside of that.  This is a multi-die BGA package.  There are 4 dies inside one BGA package: three dies for the 3 Digital Data Pumps (DDPs) and a separate die for the ARM7 MCU (which is made on a different process than the mixed-signal DDPs).  Most of the balls in the 4×4 squares are to be connected to GND, and are used for thermal dissipation (dissipating heat via the main PCB's ground plane).  It's not uncommon to see multi-die packages today, but a multi-die BGA package in 1999 was fairly innovative.

Surprisingly, many of these chips are still in service; in today's world of high-speed broadband connections there are still many who are stuck on dial-up.  As recently as 2015 AOL was still serving 2.1 million dial-up customers in the US (out of around 10 million dial-up customers total), which was still netting the company nearly half a billion dollars a year (by far their largest source of revenue at the time).  There is also still plenty of other infrastructure that relies on dial-up, ISDN, and even FAX services that require end-point connections like the CSMV, so its end is probably still a long ways off.

Barn Find MOS MCS6502 – A Restoration Tue, 14 Jan 2020 23:18:42 +0000

ATARI Arcade Board

In car collecting, one of the 'holy grail' experiences is the 'Barn Find': finding and recovering a rare vehicle that has sat untouched in some barn or shed for some time.  They are often in rough but original condition and can evoke much excitement.  As it turns out, CPUs are not so different.  I recently purchased a very rough and very old ATARI arcade board.

The pictures clearly showed it in terrible condition, with lots of oxidation and ‘stuff’ on it.  But it also had a white MOS 6502 processor.  These are some of the very first CPUs made by MOS and are rather desirable, as in addition to their use by ATARI, they were used in the very first Apple computer, the Apple 1.

When the board arrived it was clearly in bad shape; take a look at that nastiness.  What you can't see, or rather smell, is the cow manure.  Clearly this board was in an actual barn at some point.  Probably relegated to such a retirement after serving in an arcade parlor or bar for some time, either that or there was some bovine gaming going on.

You can see there is some oxidation on the lids of the various chips as well.  The ROMs and CPU are in sockets.  These sockets are nice: they are not machine sockets but rather LIF (Low Insertion Force) sockets, which helps, as the pins on these chips are very delicate, and very possibly corroded.

Before attempting to remove the MCS6502 it's best to see what I am working with, so I pulled some of the ROMs nearest the 6502 to see how their pins look and how easily they came out of their sockets.  They came out without a lot of effort, but you can see there is some oxidation on the pins.  What we do not want is for the pins to be rusted TO the socket and then break off from the force needed to remove the chip from the socket.

To help mitigate this risk I used some penetrating oil on the pins in the socket.  It seems strange to be squirting oil into a socket, but it works.  It will help penetrate the rust and decrease the force needed to remove the 6502.  After adding the oil I let the board sit on the heater in my office for several hours.  This helps the oil penetrate, and also made my office smell like Deep Creep and cow manure; all in a day's work.

Then I very gently worked on removing the 6502, testing how tight it was and working it out from both ends.  It came loose with very little drama, hopefully with all its pins intact….

Indeed!  All the pins are there.  The oil definitely helped, as you can see 3-4 pins have some pretty good rust on them.  That is from moisture getting under the gold plating and bubbling up.  The pins at least seemed solid, but now it's time for some cleaning.

I have a wooden block I made specifically for these more delicate operations.  The chip can sit on the wood, which supports the chip and the back of the pins, allowing them to be cleaned without being bent.

Various tools are used in this operation.

  • Cotton Swabs – for applying various cleaners and getting dirt off
  • Brake Cleaner – this is an Acetone based cleaner with Xylene, works very well for getting dirt off as well as removing the oils
  • Glass Cleaner – This is a very mild polishing compound, excellent for cleaning the pins of minor oxidation and cleaning the ceramic
  • Steel Wool – Use very, very carefully in long wiping motions; it's easy to catch a pin wrong, but it's needed to get some of the heavier rust off
  • Container – Hilariously this is the top of an old Lava Lamp, I use it to hold brake cleaner in for dipping the cotton swabs in
  • Magnifying glass – So I can see exactly how the rust looks, how deep, etc
  • Banana – For scale

This cleaning process took me about an hour for this chip; it feels longer and can be nerve-wracking, but slow and steady wins the race.

After cleaning, here is the MCS6502.  The pins still show a little oxidation but the worst is gone.  The ceramic is very clean and even the lid is nicer, with less red rust.  The lid is best mostly left alone, as the markings on these are very delicate; I was surprised they were intact at all.  The biggest question though remains: does this MOS MCS6502, dated May of 1976, still work?  It's nearly 44 years old and who knows how long it's been since 5 Volts has been applied to its NMOS transistors.

I stick it in the venerable 680x/650x test board, ensuring the board is configured right before applying power, and then… flip the switch… and the sight everyone wants to see: blinking LEDs!  It passes a function check with flying colors, and further testing reveals that it is indeed old enough to have the (in)famous ROR bug.  The first MCS6502s did not support the ROR (Rotate Right) instruction.  It was in fact present, but behaved incorrectly.  Michael Steil has an excellent article on how the ROR instruction was broken.  MOS had chips with working ROR available in June of 1976.  That's RIGHT after this particular 6502 was made, making it one of the very last ROR-bug 6502s made.
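For reference, here is a minimal Python model of what a *correct* ROR does — rotate the 8-bit value right through the carry flag.  This is just the documented semantics of the instruction; the actual buggy pre-June-1976 behavior is detailed in Steil's article and not modeled here.

```python
def ror(value: int, carry: int) -> tuple[int, int]:
    """Correct 6502 ROR: rotate an 8-bit value right through carry.

    The old carry enters bit 7; the old bit 0 becomes the new carry.
    """
    new_carry = value & 0x01
    result = ((value >> 1) | (carry << 7)) & 0xFF
    return result, new_carry

# 0b00000011 with carry clear -> 0b00000001, carry set
print(ror(0b00000011, 0))  # (1, 1)
```

On a ROR-bug chip, code relying on this rotate-through-carry behavior simply produced wrong results, which is why MOS initially left the instruction out of the documentation entirely.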

It's fun to save and restore an old CPU, let alone one with so much history.  Not all such finds end up with such a happy ending.  Many old chips were set in black foam that eventually rots the pins right off, always a pity, but today, we had success.

Chips in Space: Making MILSTAR Thu, 02 Jan 2020 23:02:26 +0000

Milstar Satellite

Back in the late 1970's, having a survivable space-based strategic communications network became a priority for the US Military.  Several ideas were proposed, with many lofty goals for capabilities that at the time were not technologically feasible.  By 1983 the program had been narrowed to a highly survivable network of 10 satellites that could provide LDR (Low Data Rate) strategic communications in a wartime environment.  The program became known as MILSTAR (Military, Strategic, Tactical and Relay) and in 1983 President Reagan declared it a National Priority, meaning it would enjoy a fair amount of freedom in funding; lots and lots of funding.  RCA Astro Electronics was the prime contractor for the Milstar program, but during the development process it was sold to GE Aerospace, then Martin Marietta, which became Lockheed Martin before the 3rd satellite was launched.  The first satellite was supposed to be ready for launch in 1987, but changing requirements delayed that by 7 years.

Milstar Program 5400 series TTL dies

The first satellite was delivered in 1993 and launched in February of 1994.  A second was launched in 1995, and these became Milstar-1.  A third launch failed; it would have carried a hybrid satellite that added a Medium Data Rate (MDR) system.  Three Block II satellites were launched in 2001-2003 which included the MDR system, bringing the constellation up to 5.  This provided 24/7 coverage between the 65 degree N/S latitudes, leaving the poles uncovered.

TI 54ALS161A

The LDR payload was subcontracted to TRW (which became Northrop Grumman) and consisted of 192 channels capable of data rates of a blazing 75 – 2400 baud.  These were designed for sending tasking orders to various strategic Air Force assets, nothing high bandwidth; even so, many such orders could take several minutes to send.  Each satellite also had two 60GHz cross links, used to communicate with the other Milstar sats in the constellation.  The LDR (and later MDR) payloads were frequency-hopping spread spectrum radio systems with jam-resistant technology.  The later MDR system was able to detect and effectively null jamming attempts.
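The frequency-hopping idea is simple even if the real implementation was anything but: both ends derive the same pseudo-random channel sequence from shared secret state, so a jammer that doesn't know that state can't predict where the signal will be next.  A toy Python sketch of the concept only (the real Milstar hop generator is classified and certainly nothing like this):

```python
import random

# Toy illustration of frequency-hopping spread spectrum: transmitter and
# receiver derive the same pseudo-random channel sequence from a shared
# seed, so they stay in lock-step without ever transmitting the sequence.
# Seed value and channel count here are purely illustrative.

def hop_sequence(shared_seed: int, n_channels: int, n_hops: int) -> list[int]:
    rng = random.Random(shared_seed)
    return [rng.randrange(n_channels) for _ in range(n_hops)]

tx = hop_sequence(shared_seed=0xC0FFEE, n_channels=192, n_hops=8)
rx = hop_sequence(shared_seed=0xC0FFEE, n_channels=192, n_hops=8)
assert tx == rx  # both ends agree on every hop
print(tx)
```

A narrowband jammer parked on any one channel only hits the signal for the brief fraction of time the hop sequence lands there.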

The LDR system was built out of 630 LSI circuits, most of which were contained in hybrid multi-layer MCM packages.  These LSIs were a mix of custom designs by TRW and off-the-shelf TTL parts.  Most of the TTL parts were sourced from TI and were ALS family (Advanced Low Power Schottky) devices, the fastest/lowest-power available.  TI began supplying such TTL (as bare dies for integration into MCMs) in the mid-1980's.  These dies had to be of the highest quality, and traceable to the exact slice of the

Traceability Markings

exact wafer they came from.  They were supplied in trays marked with the date, diffusion run (a serial number for the process and wafer that made them), and the slice of that wafer, then stamped with the name/ID of the TI quality control person who verified them.

These TTL circuits are relatively simple; the ones pictured are:

  • 54ALS574A – Octal D Edge-Triggered Flip-Flop (usually used as a buffer)
  • 54ALS193 – Synchronous 4-Bit Up/Down Binary Counter With Dual Clock
  • 54ALS161A – Synchronous 4-Bit Binary Counter (with asynchronous clear)


Looking at the dies of these small TTL circuits is quite interesting.  The 54ALS161A marking on the die appears to be on top of a '160A marking.  TI didn't make a mistake here; it's just that the 160 and 161 are essentially the same device.  The 161 is a binary counter, while the 160 was configured as a decade counter.  Only one mask layer change was required to make it either one.
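That one-mask-layer difference is tiny in behavior too: the only thing that changes is where the count wraps.  A toy model of the two counters (ignoring the real parts' load, enable, and clear inputs):

```python
# The '161 (binary) and '160 (decade) counters share almost everything;
# the only behavioral difference modeled here is the wrap point.

def count(modulus: int, n_clocks: int, start: int = 0) -> int:
    """State of a 4-bit counter after n_clocks rising edges."""
    return (start + n_clocks) % modulus

BINARY, DECADE = 16, 10   # '161 wraps at 16, '160 wraps at 10

print(count(BINARY, 12))  # 12
print(count(DECADE, 12))  # wrapped past 9, so 2
```

In silicon, that wrap point is just the difference in which counter outputs feed back into the next-state logic, hence the single mask layer.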

ALS573 and ALS574 die

Similarly with the 54ALS574, which shares a die with the more basic '573 D-type transparent latch.  This was pretty common with TTL (if you look at a list of the different 7400 series TTL you will notice many are very similar, with but a minor change between two chips).  It is of course the same with CPUs, with one die being usable for multiple core counts, PCIe lane counts, cache sizes, etc.
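The '573/'574 difference is likewise behavioral: a transparent latch follows its input while enabled, while an edge-triggered flip-flop samples only on the clock's rising edge.  A minimal single-bit Python model (output-enable logic omitted):

```python
# '573 vs '574 in miniature: a transparent latch follows its input while
# the enable is high; an edge-triggered flip-flop samples only on the
# clock's rising edge.

class TransparentLatch:          # '573-style
    def __init__(self):
        self.q = 0
    def step(self, d: int, le: int) -> int:
        if le:                   # latch enable high: output follows input
            self.q = d
        return self.q            # enable low: output holds

class EdgeTriggeredFF:           # '574-style
    def __init__(self):
        self.q = 0
        self._last_clk = 0
    def step(self, d: int, clk: int) -> int:
        if clk and not self._last_clk:   # rising edge only
            self.q = d
        self._last_clk = clk
        return self.q

latch, ff = TransparentLatch(), EdgeTriggeredFF()
print(latch.step(d=1, le=1), latch.step(d=0, le=0))  # 1 1  (holds after enable drops)
print(ff.step(d=1, clk=0), ff.step(d=1, clk=1))      # 0 1  (captures on the edge)
```

Two chips, one die: the shared storage element is the same, and only the control path differs.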

Together with others they perform all the functions of a high-reliability communications system, so failure was not an option.  TI supplied thousands upon thousands of dies for characterization and testing.  The satellites were designed for a 10-year lifetime (it was hoped that by then

Milstar Hybrid MCM Command Decoder (picture courtesy of The Smithsonian)

something better would be ready), no doubt creating another nice contract, but alas, as many things go, a follow-on didn't come along until just recently (the AEHF satellites).  This left the Milstar constellation to perform a critical role well past its design life, which it did and continues to do.  Even the original Milstar 1 satellite, launched in 1994 with 54ALS series TTL from the 1980s, is still working 25 years later, a testament to TRW and RCA Astro's design.  Perhaps the only thing that will limit them will be the available fuel for their on-orbit Attitude Control Systems.

While not a CPU in itself, these little dies worked together to get the job done.  I never could find details of the actual design, but it wouldn't surprise me if the satellites ran AMD 2901-based systems, common at the time, or a custom design based on '181 series 4-bit ALUs.  Finding bare dies is always interesting, to be able to see what's inside a computer chip, but to find ones that were made for a very specific purpose is even more interesting.  The Milstar Program cost around $22 Billion over its lifetime, so one must wonder how much each of these dies cost TRW, or the US taxpayer?

RIP Chuck Peddle: Father of the 6502 Fri, 27 Dec 2019 22:56:14 +0000

Original MOS 6501 Processor from 1975 – Designed by Chuck Peddle.

On December 15th one of the true greats of processor design passed away at age 82.  Chuck Peddle, born in 1937, before the transistor was even invented, designed the 6502 processor back in 1974.  The 6502 (originally the 6501, actually) went on to become one of the most popular and widely used processors of all time.  It powered the likes of the Apple 1, Commodores, ATARIs, and hundreds of others.  It was copied, cloned, and expanded by dozens of companies in dozens of countries.  It was so popular that computers in the Eastern Bloc were designed to use it, with some countries eventually making their own versions (such as the Pravetz computers in Bulgaria).

Sitronix ST2064B – Based on the 65C02 – Core is visible in the upper right of the die. (photo by aberco)

The 6502 was a simple but useful 8-bit design, which meant that as time went along and processors migrated to 16, 32, and 64 bits and speeds jumped from MHz to GHz, the venerable 6502 continued to find uses, be made, and be expanded.  Chuck continued to be involved in all things 6502 until only a few years ago, designing new ways to interface FLASH memory (which hadn't been invented when he designed the 6502) to the 6502.

The chips themselves, now in CMOS of course, continue to be made to this day by the Western Design Center (WDC), and the 65C02 core is used in many, many applications, notably LCD monitor controllers and keyboard controllers.  We can hope that the 6502 will have as long a life as Mr. Peddle, though I would wager that somewhere, somehow, in 2056 a 6502 will still be running.

CPU of the Day: Motorola MC68040VL Fri, 01 Nov 2019 23:12:28 +0000

Motorola MC68040VL

A month or so ago a friend was opening up a bunch of unmarked packages and taking die photos, and came across an interesting Motorola.  The die looked familiar, but at the same time different.  The die was marked 68040VL, and appeared to be a smaller version of the 68040V.  The Motorola 68040V is a 3.3V static design of the Motorola MC68LC040 (it has dual MMUs but lacks the FPU of the 68040).  The 68040V was made on a 0.5u process and introduced in 1995.  Looking closely at the mask revealed the answer, in the form of four characters: F94E.

Motorola Mask F94E – COLDFIRE 5102

Motorola uses mask codes for nearly all of their products; in many ways these are similar to Intel's s-specs, but they are more closely related to actual silicon mask changes in the device.  Multiple devices may use the same mask/mask code, just with different features enabled/disabled.  The mask code F94E is that of the first generation Motorola COLDFIRE CPU, the MCF5102.  The COLDFIRE was the replacement for the Motorola 68k line; it was designed to be a 32-bit VL-RISC processor, thus the name 68040VL, for VL-RISC.  VL-RISC architectures support fixed-length instructions (like a typical RISC) but also support variable-length instructions like a traditional CISC processor.  This allows a lot more code flexibility and higher code density.  While this may be heresy to RISC purists, it has become rather common.  The Transputer-based ST20 core from ST is a VL-RISC design, as is the more modern RISC-V architecture (with its compressed instruction extension).  The COLDFIRE 5102 also had another trick, or treat, up its sleeve: it could execute 68040 code.

Motorola XCF5102PV20A 03F94E – 1995

The COLDFIRE and the 68040 are microcoded processors, meaning they do not execute instructions directly; the opcodes are translated in a PLA into the actual micro-instructions that manipulate the flow of data.  This is common in processors today and allows greater flexibility.  It's what allows the COLDFIRE to execute 68040 code as well as the new VL-RISC instructions.  In fact, it's what allowed Motorola to re-spin the 68040V as the COLDFIRE; at its heart the COLDFIRE 5102 is actually a slightly modified 68040V.  It seems that Motorola may have even been thinking about calling it the 68040VL before renaming it COLDFIRE.  There are some minor differences, however.
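The economics of that re-spin are easier to see with a sketch of how a microcoded front end works: opcodes index into a table of micro-operation sequences that drive one shared datapath, so supporting a second instruction set is largely a matter of adding table entries.  The opcodes and micro-ops below are invented for illustration and are not COLDFIRE's actual encoding:

```python
# Toy model of microcoded dispatch: opcodes aren't executed directly,
# they select a micro-op sequence for one shared datapath.  The opcode
# values and micro-op names here are made up for illustration only.

MICROCODE = {
    0x10: ["fetch_ea", "read_mem", "alu_add", "write_reg"],  # "68k-style" add from memory
    0x90: ["alu_add", "write_reg"],                          # denser encoding, same datapath steps
}

def execute(opcode: int) -> list[str]:
    """Dispatch an opcode to its micro-op sequence (the PLA's job)."""
    return MICROCODE[opcode]

print(execute(0x10))
print(execute(0x90))
```

Both opcodes end up driving the same ALU and register-write hardware; only the lookup differs, which is why one die could serve both instruction sets.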

The 68040V had dual 4K instruction/data caches, which in the COLDFIRE 5102 have been reduced to a 2K instruction cache and a 1K data cache (clearly visible on the die).

68040V on the left with clearly larger caches, and COLDFIRE on the right with smaller caches. Overall very similar designs.

It omits the dual MMUs of the 68040V, which are less needed in embedded processors (they were added back in the V4e version a decade later).  The COLDFIRE retains the 6-stage pipeline of the 68040 but decouples the instruction fetch and decode stages, allowing for somewhat faster processing.  The register structure is also the same, with 8 data registers, 8 address registers, and a PC.  The COLDFIRE instruction set is actually a subset of the 68040's, so most COLDFIRE code will run on a 68020 or higher.  For later versions of the COLDFIRE the opposite is not true, but in the 5102 the additional 68040 instructions are supported, to allow an easier transition to the platform.

Motorola MC68040RC25V

By the 1990s the 68k line was getting a bit tired, and increasing competition was making it less relevant and competitive.  Motorola's quick update to the design, made possible by good engineering and microcoding, allowed them to make a 'new' product and compete again in the 32-bit embedded market.  The complete renaming of the design to COLDFIRE from 68040VL helped market it as 'new', and certainly COLDFIRE is a cool-sounding name for a product line that had grown cold and needed a bit of reheating.

Thanks to my friend aberco for sending me down this rabbit hole with his nice die photos.

The Forgotten Ones: RISCy Business of Winbond Mon, 07 Oct 2019 22:05:38 +0000

Winbond W77E58P-40 – Your typical Winbond MCS-51 MCU

Winbond Electronics was founded in Taiwan back in 1987, and is most widely known for their memory products and system I/O controllers (found on many motherboards of the 1990s).  They also made a wide variety of microcontrollers, mostly based on the Intel MCS-51 core, like many, many other companies have and continue to do.  They also made a few 8042-based controllers, typically used as keyboard controllers and often integrated into their Super I/O chips.  So why do I find myself writing about Winbond, whose product portfolio seems admittedly boring?

It turns out that, once upon a time, Winbond decided to take a journey down a rather ambitious path.  Back in the early 1990's they began work on a 32-bit RISC processor, and not an ARM or MIPS processor, which were just starting to become known at the time, but a processor based on the HP PA-RISC architecture.  This may seem odd, but HP, in a shift from their previous architectures, wanted the PA-RISC design to be available to others.  The Precision RISC Organization was formed to market and develop designs using the architecture outside of HP.  HP wanted to move all of their non-x86 systems to a single RISC architecture, and to help it become popular and well supported, it was to be licensed to others.  This is one of the same reasons that made x86 so dominant in the PC universe.  More platforms running PA-RISC, even if they were not HP's, meant more developers writing PA-RISC code, and that meant more software, more support, and a wider user base.  Along with Winbond, Hitachi and OKI also developed PA-RISC controllers.  Winbond's path was innovative and much different than others'; they saw the need for easy development as crucial to their product's success, so when they designed their first PA-RISC processor, the W89K, they made it a bit special.

Original Winbond W90210F Development board

In 1994, most everyone had an Intel 486 based computer, so Winbond decided to make the W89K 486DX compatible, electrically and logically; this allowed many existing boards to be used as development systems.  Replace the 486DX processor with the W89K and replace the BIOS with a Winbond one, and you had an instant development system.  The system hardware (RAM, PCI slots, etc.) is agnostic about what CPU is talking to it, so this is easier than it sounds, and much more so back in the 1990’s than today.

The W89K was made on a 0.8u double metal CMOS process and ran at up to 66MHz (a clock doubled version using the standard 33MHz 486 bus) with 1.1 million transistors.  It implemented PA-RISC V1.1 with a 5-stage pipeline and 2K each of instruction and data caches, both fully associative.  It was designed with only the integer unit (no FPU) as a way to reduce die size and cost.  This was considered acceptable as it was targeted as a high end embedded controller (for use in things like printers).  They did support an external L2 cache as well, something that was unusual for PA-RISC.  Performance was around 89 DMIPS (Dhrystone MIPS) for the 66MHz part.
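For context, DMIPS is just a raw Dhrystone score normalized to the VAX 11/780 baseline of 1757 Dhrystones/second; a quick sketch of the arithmetic (the 1757 constant is the standard baseline, and the per-MHz figure is simply derived from the numbers above):

```python
VAX_11_780_DHRYSTONES_PER_SEC = 1757  # defined as exactly 1 DMIPS

def dmips(dhrystones_per_second):
    """Convert a raw Dhrystone result to DMIPS."""
    return dhrystones_per_second / VAX_11_780_DHRYSTONES_PER_SEC

# The 66MHz W89K's ~89 DMIPS works out to roughly 1.35 DMIPS/MHz,
# a respectable figure for an early-90's embedded integer-only core.
print(round(89 / 66, 2))  # 1.35
```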

Winbond W90210F 66MHz PA-RISC 0.8u CMOS – 4K instruction cache is in the upper left, while the smaller 2K Data cache is in the upper right (die shot by aberco)

The successor to the W89K was the W90K family, which was developed in 1997.  The first processor of this family was the W90210F.  It originally was going to still be called the W89K family, but Winbond decided in late 1997 to call it the W90K, due to its greatly improved design compared to the W89K.  The 90K maintained the same PA-RISC core as before but added a host of peripherals to increase its usefulness as an embedded controller.  These included embedded ROM/Flash interfaces, a DRAM controller, a DMA controller and various timers/counters.  It also added the 5 PA-RISC multimedia instructions (MAX-1).  These were some of the very first SIMD instructions added to general purpose processors (originally designed for the PA7100LC).  Intel added similar support to the Pentium as ‘MMX’.  The W90210 also changed the cache structure.  The L1 Instruction cache continued to be direct mapped but was increased from 2K to 4K.  The Data cache remained 2K but was now 2-way set associative.  Clock speed remained the same at 66MHz.  A W90215F version was also made that did not come with an embedded OS license (write your own).  These were used in a number of printers, set top boxes and digital picture frames back in the late 1990s.

W90221X – 100MHz with hardware MAC, SDRAM support, and built-in 2D graphics.  It maintained the same package as the W90210F to simplify designs

In 2001 the last versions were released.  These were the W90220 and W90221.  These both had some big improvements over the previous design.  They were made on a triple layer metal 0.35u CMOS process allowing clock speeds of up to 150MHz (this appears to be the design goal; actual devices may have topped out at 133MHz in practice).  A Multiply Accumulate unit was added, allowing for DSP like functionality, and the pipeline was increased to 6 stages (adding a load/store stage), which helped achieve the faster clock speeds.  Both caches were now 4K, with the instruction cache still being direct mapped, and the Data cache being 4-way set associative.  It was also the first of the line to support hardware branch prediction.  These were 3.3V parts with 5V I/O.

Digital picture Frame based on a PA-RISC processor (The W90215F)

The W90220 added a 2D graphics controller to the system as well as support (finally) for SDRAM.  Two versions of the 90220 were made: the W90220F, which was to hit 180MHz and supported both SDRAM and EDO RAM, and the cost reduced W90220X, which was limited to 80-100MHz, had less I/O, and no EDO RAM support.

By the early 2000’s the W90K PA-RISC processor was dead.  It’s a rather unfortunate end to an ambitious project, and a processor that really had great potential and performance for its time.  In researching these processors it seemed that one of the reasons the processor failed was poor support from Winbond.  It’s ironic that a processor designed for easy development and interfacing with existing PC peripherals would be hindered by poor tech support from the manufacturer, but that appears to be the case.  Also contributing to its demise was the fall of PA-RISC itself; by 2000 HP was ‘all in’ with Intel on PA-RISC’s successor, the IA-64 architecture Itanium processor, and we all know how that turned out.  It’s perhaps interesting then that laser printers and digital picture frames were powered by a processor that evolved into what was supposed to be the next great Intel architecture, but now, in a twist of fate, is itself a wall hanger.

The Story of the IBM Pentium 4 64-bit CPU Tue, 01 Oct 2019 23:07:09 +0000 Introduction

This time we will talk about one unique Intel processor, which never appeared on the retail market and whose reviews you will not find on the Internet.  This processor was produced purely by special order for one well-known manufacturer of computer equipment.  In this article I will also try to assemble one of the most powerful retro-systems with this processor.

From the title of the article, I think many people understand that we will talk about the Socket 478 Intel processor

Most people are familiar with Socket 478, which replaced Socket 370 at the end of 2001 (we omit Socket 423 due to its short lifespan of less than a year) and allowed the use of single-core, and later, with Hyper-Threading technology, “pseudo-dual-core” processors that could perform two tasks in parallel.  All production Intel processors for Socket 478 were 32-bit, even the couple of Pentium 4 Extreme Edition models based on the server-derived «Gallatin» core.  But as always there are exceptions.  And this exception, or to be more precise, these two exceptions, were two models of Pentium 4 processors with the Prescott core, which had 64-bit instructions (EM64T) at their disposal.

Intel Pentium 4 SL7QB 3.2GHz: 64-bits on S478

This pair of processors was commissioned by IBM for its eServer xSeries servers.  These processors never hit the retail market and their circulation was not very large, so finding them now is very problematic.  It is interesting that if you want, and naturally have the right amount of money or a large enough order, you can count on a special order of a processor tailored to your specific needs, with characteristics that are unique and not repeated in standard production products.  And it should be noted that more than a few such processors have been released; in fact, in the 70’s and early 80’s this was the very purpose of the now ubiquitous ‘sspec.’  Chips with an Sspec (Specification #) were chips that had some specification DIFFERENT from the standard part/datasheet.  A chip WITHOUT an sspec was a standard product.  By the late 1980’s all chips began to receive sspecs as a means of tracking things like revisions, steppings, etc.  I will talk about some of these a little later.

That’s how the processor looks through the eyes of the CPU-Z utility.  In the “Instructions” field, after SSE3, EM64T proudly shows off!  Link to the popular CPU-Z Validation.

The special processors made for IBM belonged to the Prescott core and were based on the E0 stepping with support for 64-bit instructions, which is not typical for Socket 478!  The first 64-bit CPUs for “everyone” appeared only with the arrival of the next socket, LGA775, and even then not right away; some Pentium 4 models in LGA775 form were still 32-bit.  I specifically point out that the Pentium 4 Socket 478 models with EM64T support belonged to the E0 stepping, because the later, more advanced G1 stepping did not have such innovations.  The first model worked at a frequency of 3.2 GHz and had SPEC code SL7QB; the second was slightly faster at 3.4 GHz, with SPEC code SL7Q8.

Otherwise, these were ordinary «Prescott» chips.  But the presence of 64-bit instructions made these processors unique, capable of working with 64-bit operating systems and applications, allowing them to do what their 32-bit comrades simply could not.


Not many companies were able to place such an order with Intel, but the «Blue Giant», IBM, could, and all in order to defeat HP and Dell in a fierce struggle for the server market share for small and medium-sized businesses.  And, for another, in order to extend the life of their Socket 478 servers.  For these purposes, these two processors capable of executing 64-bit instructions were released.  Another advantage of such processors in conjunction with 64-bit operating systems is support for a large amount of RAM, but interestingly, in the age of DDR1, with its small module capacities and the chipsets of that time, using more than four gigabytes of RAM was physically impossible even with 64 bits.

So the whole point of using these processors was precisely in supporting 64-bit operating systems and software, in which IBM saw a promising future, just as it once was in the change from 16-bit software to 32-bit back in the days of the i386.  And it should be noted they guessed correctly: the sun was setting on the 32-bit era.

I managed to find a processor running at 3.2 GHz with SPEC code SL7QB in Canada, so its journey to me was not a short one.  This processor was part of an IBM eServer xSeries 306 server.  This server is a regular single-processor 1U server that can be installed in a rack.  Inside, a single Socket 478 held the Pentium 4 processor, with support for up to 4 gigabytes of RAM (the chipset couldn’t see more), two Gigabit network controllers, a pair of 64-bit / 66 MHz PCI-X expansion slots and the ability to support not very sophisticated RAID arrays of SATA-150 or SCSI drives.

Initially, such IBM servers supported conventional 32-bit Pentium 4 processors with Prescott cores, and then the option of using the 64-bit Pentium 4 was added.  In the IBM spare parts (FRU) database, these processors are listed under part number 26K8430 for the 41x and 45x server models.

If you look at the motherboard of this server, you can see that it is the simplest solution. In fact, this is dictated by the use of the Intel E7210 chipset, which is a close relative of the desktop Intel 875P, but lacking an AGP port, it uses a pair of PCI-X slots instead.

Windows Server 2003, x64 Edition, or various types of Linux were installed on the IBM eServer xSeries 306 server with a 64-bit Pentium 4. Subsequently, IBM expanded the range of its servers, where it was possible to install SL7QB or SL7Q8, among them were models: x206, x226 and x236.

Thanks to its pricing policy, the cost of new 64-bit servers was very affordable compared to competitors. At the time the updated servers were released (2nd half of 2004), prices for the xSeries 206 model started at $909 for a system with a 3.2 GHz processor and 256 MB of memory, the cost of a more advanced xSeries 306 started at $1,409 for a system with a 3.2 GHz processor and 512 MB of memory.

In the server lineup there are also similar models, but with the letter “m” added to the model name.  Do not pay attention to them, as these are completely different machines, based on processors in a different form – LGA775.

Squeeze everything to the last drop.

In assembling such a system, I wanted to squeeze everything possible out of it, and even more.  But I ran into a number of problems, both hardware and software.  My goal was: 8 GB RAM + Windows 10 x64.  But here a number of nuances arose.

Let’s start with the hardware problems.  4 GB of RAM is easily supported by all the boards; even with DDR1 you can get 4 GB on four slots with four sticks of one gigabyte each.  But that is boring and not interesting.  DDR2 opens up much more promising horizons, but here a problem arises: suitable motherboards often offer only 2 memory slots.  A simple solution is to install two 4 GB sticks.  But the creator (Intel) introduced its own limitations, on which I will dwell in a little more detail.

Questions often arise about installing more than 4 GB of memory on the relatively “recent” Intel chipsets with an external memory controller (Memory Controller Hub, MCH).  Here we briefly consider the necessary conditions, since the maximum possible amount is not always written in the manual for the board.  Many probably believe that it is necessary to have an x86 processor with support for the 64-bit extension (EM64T), and a board that, in principle, allows you to install more than 4 GB of memory (supporting a sufficient number of slots and memory densities; this depends not only on the chipset, but also on the specific board).  And of course, a BIOS that can initialize this memory, correctly configure the mapping of PCI devices, and so on.  Not all motherboards have a BIOS capable of doing this, because there were no 64-bit parts on Socket 478; all of the motherboards from which the choice was made are transitional models, since their chipsets existed for LGA775 as well, and so were already familiar with Intel’s 64-bit CPUs.

CPU: In fact, for addressing more than 4 GB of memory, a 64-bit x86 processor is generally not required, since starting with the Pentium Pro, Physical Address Extension (PAE) to 64 GB has been available (address lines A32# – A35# were added), though each task can still address no more than 4 GB.  However, a processor with a 64-bit mode allows you to get the most benefit from RAM over 4 GB, and there will be far fewer problems with the operating system and drivers than in PAE mode.  Note that the width of the address bus for 64-bit processors on LGA775, and even Xeons on LGA771, remained the same (36 bits); that is, they still max out at 64 GB of memory, just like the Pentium Pro.  Isn’t the potential laid down in 1995 impressive?

Chipset: The chipset must be able to address the address space beyond 4 GB, and this feature is not directly related to the supported DRAM organizations, since memory is understood here in the broad sense – all the address space available to the processor, in particular the memory of PCI devices, BIOS, APIC, etc.  To do this, the chipset must have at least one additional address line.  That is, the presence of the HA32# line provides addressing up to 8GB, HA33# up to 16GB, HA34# up to 32GB, and HA35# up to 64GB.
And while the server chipsets from Intel (for S603/604/771) have no special problems with addressing, a study of the datasheets for Intel’s desktop chipsets showed that Intel’s first desktop chipset with support for extended addressing is the 955X.  The earlier 865, 915, 925 and 945 top out at address line HA31#; that is, it is physically impossible to install more than 4 GB of RAM in motherboards on these chipsets.
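The relationship between the highest address line and the addressable space is just powers of two; a small sketch reproducing the figures quoted above:

```python
GB = 2 ** 30

def max_addressable_gb(highest_line):
    # Address lines A0..A<highest_line> select 2**(highest_line + 1)
    # distinct byte addresses.
    return 2 ** (highest_line + 1) // GB

# HA31# -> 4 GB, HA32# -> 8 GB, HA33# -> 16 GB, HA34# -> 32 GB, HA35# -> 64 GB
for line in (31, 32, 33, 34, 35):
    print(f"HA{line}#: {max_addressable_gb(line)} GB")
```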

To summarize, the success of the whole undertaking on the hardware side consists of a correct BIOS that “understands” all available RAM + a 64-bit processor + a chipset no older than the Intel 955X.  But there is one more nuance: the manufacturer of the final motherboard, which, even with a good combination of all the above, may have decided to save money and simply not route the necessary lines from the chipset; the lower the cost of the motherboard, the higher the risk.  And the boards under consideration are from this lower cost range.

Is there a way out?  It seems there is (though I’m not completely sure, for lack of the necessary board), and it lies in Socket 478 motherboards based on the Intel G31/G41 chipsets.  There are enough examples of 8 GB of RAM working on LGA775 motherboards based on the G31 chipset, but I haven’t seen it on Socket 478; still, as they say, there’s a chance =)  I’ll leave this for the near or distant future.

Software problems: As I wrote above, the ultimate task was to launch Windows 10 x64.  At the moment I have not been able to do this, but theoretically it is possible.  Windows 7 x64 ran with a bang; no problems arose.  But already with the installation of Windows 8.1 there were problems, or rather, there was only one problem – the processor’s lack of the NX bit, and without this «feature» installation of a modern OS is impossible.

The fact is that NX bit handling is very different for x86 in 32-bit mode, x86 in 64-bit mode, and PAE mode.  In 32-bit mode, the OS checks for the good old PAE and NX bits via CPUID.  That is, basically, you just need to change the value returned in EDX by CPUID with EAX = 80000001h (for example, remove the CPUID check or patch the value in EDX to the desired one).  NX bit functions are not actually used in normal 32-bit mode, so you just need to “calm” the OS.  There are software PAE patches for the kernels of the OSes where everything works, including Windows 8.1 and early builds of Windows 10.
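As a sketch of the bit-twiddling involved: the bit positions below are the documented ones for CPUID leaf 80000001h (NX/XD is EDX bit 20, 64-bit long mode/EM64T is EDX bit 29); the sample EDX value is purely illustrative, not a dump from a real chip.

```python
# Feature bits in EDX returned by CPUID, leaf EAX = 80000001h
NX_BIT = 1 << 20   # Execute Disable (XD/NX)
LM_BIT = 1 << 29   # Long mode (Intel 64 / EM64T)

def has_nx(edx):
    return bool(edx & NX_BIT)

def has_em64t(edx):
    return bool(edx & LM_BIT)

# Illustrative EDX with EM64T set but NX clear, the situation on these
# special S478 Prescotts -- "calming" the OS means faking NX_BIT here.
edx = LM_BIT
print(has_em64t(edx), has_nx(edx))  # True False
```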

In 64-bit mode, NX bits are actually in use, and the NX bit value is located in the 64-bit page table and page directory entries (PTE and PDE).  The difficulty is that even if you manage to trick the OS by deleting its check for NX support, the kernel (and all other drivers/programs) will still try to set the NX bit each time entries are stored in the page tables.  This will cause the system to crash.  I have, so far, found no confirmation of Windows 10 x64 running on the Pentium 4 Socket 478 SL7QB or SL7Q8, possibly due to the rarity of these processors, but I want to believe that it can still be done; it was not for nothing that I tried out dozens of early builds of Windows 10.

We assemble Super Socket 478/x64 PC.

Having such a unique processor at your disposal, it would be absurd not to build a powerful x64 retro-system around it.  One option for such a system is a universal “do-everything” PC that supports all Microsoft operating systems from DOS to Windows 10.  And here the most interesting part begins – the selection of components and software.  The main component is of course the processor – the heart of the system; it remains to choose a motherboard where it can be installed.

The selection criteria shifted towards building the fastest system with the fastest interfaces, so no AGP slot: only a PCI-Express x16 graphics port, plus a PCI-Express x1 slot (preferably a couple), several PCI slots, support for at least DDR2 memory (DDR3 as an option), and the more memory, the better.  The list of candidates was as follows:

  • ASUS P4GD1 (Intel 915P/ DDR1 4GB DDR-400/ PCI-Express x16, 2x PCI-Express x1, 3x PCI)
  • Biostar G31-M4 (Intel G31/ DDR2 4GB DDR2-800/ PCI-Express x16, 2x PCI)
  • AsRock P4i945GC (Intel 945GC/ DDR2 4GB DDR2-667/ PCI-Express x16, 1x PCI-Express x1, 2x PCI)


The ASUS P4GD1 looks the best in terms of the number of available PCI-Express connectors and configuration flexibility, but there is one drawback – first generation DDR memory, and all of its SATA connectors support only 150 MB/s.

Biostar G31-M4

The Biostar G31-M4 looks like a winner due to its support for 800MHz DDR2 memory and the presence of four 300MB/s SATA2 ports, but the board is completely devoid of PCI-Express x1 slots and, most importantly, only supports processors with a TDP of up to 95 Watts, which means goodbye to “Prescott”, which needs more than 95W.  This minus crosses out all the available advantages, one of which is support for all operating systems, with drivers available up to Windows 10 x64!

AsRock P4i945GC

AsRock P4i945GC – the best solution: one additional PCI-Express x1 slot, a pair of PCI slots, four SATA2 ports, and support for DDR2 memory at 667 MHz.  After weighing the pros and cons, I settled on the AsRock P4i945GC, also because it is much easier to find on sale these days, while finding the ASUS P4GD1 is already a problem.

For such a system, the use of an SSD is a prerequisite, and it is better that it be installed in a PCI-Express slot.  The memory capacity is 4 GB, and as a video card I decided to use a GeForce GTX 980 Ti with 6GB – more memory than the system itself has.  In a couple of free slots, you can install a pair of 3Dfx Voodoo 2s in SLI, or something “cool” in PCI form, for example the 3Dfx Voodoo 5500.  The final assembly was as follows:

  •  Intel Pentium 4, 3.2GHz, Socket 478, «Prescott», SL7QB “64-bit Edition”
  • Thermaltake Big Typhoon
  • AsRock P4i945GC, Intel 945GC + ICH7, Socket 478, PCI-Express , DDR2-667 MHz, SATA-2
  • 4 GB (2x 2GB) DDR2 800MHz
  • GeForce GTX 980 Ti, 6GB, KFA2 8Pack Edition
  • SSD HyperX Predator PCIe 240GB
  • Zalman ZM1000-EBT 1000W PSU


To the start, let’s go!

But first, let’s go into the BIOS of the motherboard.

The photo shows that the processor is correctly recognized in the BIOS, indicating its 64-bit capacity. And this is how a 240 GB HyperX Predator PCIe x4 drive installed in the PCI-Express x1 slot is displayed in the BIOS.

I like this solution more than the SATA options: cables do not get tangled and the appearance of the system becomes more «serious».  Let’s see how using just one PCI-Express lane, instead of the recommended four, affects the performance of this SSD.

Considered against modern systems, this result is clearly better than any HDD, but loses to modern SSDs.  But considering that such numbers come from a Pentium 4 on Socket 478(!), you can only rejoice for the old man; the responsiveness of the system turned out to be at a very high level.  You could still connect it to a PCI-Express x4 slot, though then you would have to either install a PCI video card or run the video card in the PCI-Express x1 slot.  Another PCI-Express x4 slot is needed on the motherboard =)

(CPU-Z info – click to enlarge)

I really want to try this monster in practice, but before the test results I will dwell a little on the «not for everyone» processors, this should be interesting.

Not like everyone else.

Before starting the tests, I would like to dwell on some processor models which, let’s say, appeared due to the «efforts» of other companies, and not at the direct initiative of Intel/AMD.  First, let’s look into the distant past.

Let’s start with a Socket 7 AMD processor belonging to the K6-2 line on the «CXT» core, a processor with the non-traditional model name AMD K6-2 38L3054.  This processor operates at a frequency of 337 MHz, obtained by multiplying a 4.5 multiplier by a 75 MHz system bus.  The solution, to put it mildly, is not standard; if you look at the official AMD datasheet for the K6-2 processor line you can see the different models,

but the 337 MHz model is missing, because it was commissioned by IBM. This is what a processor made for IBM branded PCs looks like:

AMD K6-2 38L3054 – 337MHz

As you can see, there is no clock marking on the processor lid.  In place of this information there is the marking AMD K6-2 38L3054 (apparently an IBM part number).  Below in the photo is a similar AMD K6-2 model with a frequency of 333 MHz (3.5 x 95 MHz).

AMD K6-2 333MHz
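The marked frequencies follow directly from multiplier × FSB; the odd half-megahertz is simply rounded off on the label:

```python
def core_clock_mhz(multiplier, fsb_mhz):
    """Core clock of a Socket 7 CPU from its multiplier and bus speed."""
    return multiplier * fsb_mhz

print(core_clock_mhz(4.5, 75))  # 337.5 -> marked as "337 MHz"
print(core_clock_mhz(3.5, 95))  # 332.5 -> marked as "333 MHz"
```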


In this case, everything is in place, including information about the frequency of the model.

Xeon X5698


The following example applies to the LGA1366 socket.  The Intel Xeon processor model with the X5698 index, belonging to the «Westmere» microarchitecture, has at its disposal only two cores, while all the other representatives of this server socket have at least four.  But these two cores work at a record clock frequency of 4.4 GHz, and their speed does not decrease under any circumstances; the processor also retained the 12 MB third-level cache.  The Intel Xeon X5698 was released by special order in limited quantities.

The processor is in fact a 6-core Xeon model with 4 cores disabled, the remaining two selected at the production stage as able to operate at that frequency 24/7 under full load.  According to one version, these processors were manufactured for the New York Stock Exchange, where at that time the highest per-core performance was needed, so that multi-billion dollar banking transactions from Wall Street would instantly reach the addressee.  The cost of such a processor was set at $20,000 apiece.  You can still find such a processor now, but the cost of a used one will be at the level of the fastest Ryzen 9.

Intel Black Ops

These processors were installed in pairs, resulting in a workstation with four cores operating at 4.4 GHz, and all this at the beginning of 2011. Each processor had a TDP of 130 watts, and water cooling was clearly assumed. It would be nice to find two of these processors and install them in the EVGA SR-2 motherboard.

Continuing the story of Wall Street, it is worth mentioning an even more interesting processor that replaced the Intel Xeon X5698.  A special processor model belonging to the «Ivy Bridge» microarchitecture got its own name, immortalized on the lid of the heat spreader – something you don’t often see.  The name of this processor is Intel “BLACKOPS”.  By special order, Intel released two “BLACKOPS” models.  The first worked at a frequency of 4.4 GHz and had 4 cores at its disposal, but at the same time all 25 MB of the third-level cache was available.

Finding photos in decent quality of this processor is not so easy. But I managed to find a screenshot of the CPU-Z of this processor. It can be seen below.

The x44 multiplier, four cores and a TDP of 250 W – not every motherboard’s VRM can handle such a processor.

The older model worked at a frequency of 4.6 GHz with six active cores and 25 MB of L3 cache.  Both processors have Hyper-Threading Technology disabled.  The processors were installed in motherboards with an LGA2011 socket and had a TDP of 250 W, which naturally implied a suitably robust VRM.  The presence of 25 MB of L3 cache indicates that these processors were selected from the most successful 10-core dies.  I could not find information about the cost of these processors, but I think it is not far from that of the Xeon X5698; in any case it was clearly at least four digits.  More information about these processors, and others of Intel’s special ‘Everest’ series, can be found in the CPU Shack’s Everest article.

Dual marked Pentium 4 3GHz, or 3.4GHz (one would hope it would also run at 3.2GHz)

In the era of the LGA775 Pentium 4, Core 2 Duo and Quad, Intel made some of its processor models specifically for Dell, IBM, and Apple.  While the Intel Pentium 4 550 model was available for all markets, according to their SPEC codes the SL8BY and SL8BM variants were intended for Dell.  In the first case, the frequency was lowered from 3.4 GHz to 3.2; in the second, to 3.0 GHz.  This allowed a single processor to be used in multiple build configurations, simplifying the supply chain and logistics for the builder.

Intel Xeon X5557 SLBFX – Made specifically for Apple for use in the Mac Pro without a heatspreader.

To some extent, the Core 2 Duo E8290 model may be interesting; the model number itself already looks unusual.  This 2-core processor operates at a frequency of 2833 MHz with a 1333 MHz system bus and is based on the Wolfdale core.  This processor differs from the usual Intel Core 2 Duo E8300 in the absence of Virtualization technology and Intel Trusted Execution security technology; otherwise they are completely identical.  Like its predecessor, the Core 2 Duo E8190, it was used in the Apple iMac.  This list also includes the Core 2 Quad Q9700 and Core 2 Quad Q9705, which are 167 MHz faster than the well-known Core 2 Quad Q9650, but have only half the level 3 cache: 6 MB instead of 12 for the Core 2 Quad Q9650.


There are still a lot of other processors that came through OEM channels and which are practically impossible to find in retail.  The most modern processor of this kind is the Intel Core i9-9990XE, for which Intel did not even set a selling price, since its circulation obviously did not reach 1000 pieces (the typical minimum order quantity).

After a short digression, it’s time to press the «Power» button and launch the slowest x64 Monster.


Tests are a good thing, especially when there is something to compare with.  As part of this experiment, I did not want to compare Prescott with Prescott – I just don’t see the point, and it was not for nothing that I installed the GTX 980 Ti.  Below I will give the results of those tests geared towards 64 bits, and also try to play some modern games.

Testing was conducted in Windows 7 x64 SP1 using the following software:

  • WinRAR x64 v. 5.40
  • WinRAR x32 v. 5.40
  • Cinebench 11.5 x64
  • Cinebench R15
  • Cinebench R20
  • 3DMark 2006 v.1.1.1
  • 3DMark 2011 v.
  • 3DMark (2013) v.2.9.6631
  • Far Cry
  • Battlefield 4
  • Crysis 3
  • Rise of the Tomb Raider

WinRAR v. 5.40 (32/64-bit version)
Kb/s (more is better)

The percentage difference is not significant, only 2% faster, but it is also in favor of the 64-bit version

It also gives you a reminder that the 64-bit version is better

Cinebench 11.5 (32/64-bit version);
points (more is better)

Everything here is similar to the previous result, around 2%

Cinebench R15
points (more is better)

Here it’s already more interesting, since Cinebench R15 exists only in a 64-bit version, so we can say the increase was 100% compared to the usual «Prescott».  Therefore, I decided to add some comparable competitors.  Interestingly, the performance-rated Athlon 64 3200+ is identical in performance (for once the PR rating seems correct).

Cinebench R20
I will not give graphs, I’ll just say that while the test was “spinning”, I managed to drink coffee twice =) I will give only a screenshot with the final result.  This test really rewards multi-core CPUs, so being limited to one core, and a small cache, really hinders it.

HWBOT x265 Benchmark v.2.2.0 – 1080p
FPS (more is better)
All the difference is visible in the screenshot.

Geekbench 4 v.4.2.3, Single/Multi-Core Score
points (more is better)

We now pass to the 3D tests =)  Will the giant GeForce GTX 980 Ti be able to help?  The difference in age between them is as much as 11 years.  And during their «honeymoon» month together in one system, there were no serious quarrels between them 😉  It’s scary to think what would happen if a GeForce RTX 2080 Ti were installed instead of the GeForce GTX 980 Ti.

3Dmark 2006 v.1.1.1, Score

Although the Pentium 4 tried its best, it couldn’t «satisfy» the GeForce GTX 980 Ti.  The final result is 4666 3DMarks.  In the HWBOT database I found a similar score – 5155 – obtained on an Intel Pentium 4 3.2 GHz Northwood and a GeForce 9800GT @ 850/1102 MHz.

Despite the difference of at least 10 generations, a more powerful video card without processor support could not «pull out» the final result.  The balance of components must be observed under any conditions and at any time, and a GeForce RTX 2080 should not be paired with a quad- or, God forbid, dual-core CPU.

3DMark 2011 v.1.0.132 – Performance 720p/ Extreme 1080p

The final numbers have not changed much, and the FPS in a number of subtests froze in place – the video card is clearly experiencing processor starvation.  Under equal conditions, the GeForce GTX 980 Ti on modern systems scores around P20123 and X9123.  It’s not difficult to calculate the difference.

3DMark (2013) Fire Strike/ Extreme
In fact, I wanted to launch Fire Strike most of all, the very feeling that «this» works already instills pride and confidence in the future.

Yes, the result, as in the previous case, is extremely small, but it is still there!  I think many more users are armed with a GeForce GTX 980 Ti, so you can compare the results with your own and be glad how much your system outpaces mine =)

What about the games? Easy – let's start with the "heavy" ones.

Battlefield 4 (Tashgar)
Frames/sec (Medium / min / max)

Even with the high-speed SSD, loading took longer than on a modern PC. The Tashgar map was chosen because there you can ride a jeep with a breeze. All graphics settings in both resolutions were set to Medium, though looking at the graph you might ask: what difference does it make? 😀 It's a pity the FPS never reached 30 frames per second; I hope a future overclock will help close the gap.

Rise of the Tomb Raider
An unpleasant surprise awaited me here: the game refused to start, even after a couple of reinstallations. Clicking the desktop shortcut produced only an error message whose cause I could not determine. I can only assume that launching the game requires a set of processor instructions that are physically unavailable on this CPU.

Crysis 3
Here the situation is a little better: I was able to reach the main menu and select settings, but could advance no further. Neither a "new" game nor loading an existing save ever showed a 3D scene – only a black screen, frozen forever. Why didn't 3D rendering begin? Perhaps for the same reason as with Rise of the Tomb Raider.

Far Cry (1024×768/1280×1080, Max Quality, demo 3DNews – Research, 2x loop)
Average result, frames/sec

Higher resolution, yet higher FPS? The video card is simply tired of idling at low resolutions =)

What can be said about the 3D component? The processor simply lacks the power for this video card, so it hardly matters what settings or resolution are used. Performance could be tightened up by swapping the RAM for faster modules, or by dropping the timings from fives to fours, or even threes – that should be possible at this frequency, though miracles cannot be expected from my "Chinese" kit. Better still would be to overclock the processor to at least 3.8 GHz, ideally a full 4 GHz; I don't know how the motherboard will behave, but I have a desire to try.

As for pure processor power, you have to understand that this is an ordinary "Prescott", albeit with a tremendous zest under the hood.


As for first impressions of the resulting 64-bit system on Socket 478, they are entirely positive, even though the processor was unable to keep up with the video card. But as I wrote at the beginning of the article, this build claims a «for all» role, suitable even for launching DOS games or 3dfx GLIDE titles.

This article is part of The CPU Shack’s continued partnership with guest author max1024, hailing from Belarus. I have provided some minor edits/tweaks in the translation from Belarusian to English.

Pardon the Mess…Upgrading PHP – FIXED Wed, 18 Sep 2019 20:51:39 +0000 Moving The CPU Shack to PHP 7 has broken some old legacy code (now why would a museum have old code? ha).  A few things (like the header and the OLD pictures section) are not working; they should be fixed soon.


EDIT: Looks like we got it all fixed, if ya notice anything broken/not working let me know

