We recently received several remote server management cards, powered by the Agilent (spun off from HP in 1999) N2530 SoC. This SoC provides the processing for remotely administering and managing servers. At its heart is an ARM processor running at 33MHz, proudly marked on the chip as ‘ARM 701 POWERED.’ There is one problem: there never was an ARM701 processor core. The N2530 is in fact powered by an ARM710. A typo was made when marking the Rev D chips, and later fixed on the Revision E. I have not yet received an example of a Rev C (or earlier) to see if they too have this error, but Rev E and later certainly do not. The Agilent N2530 was used for many years in the early 2000s on cards by Dell, Fujitsu, and IBM (and likely others). Essentially forming a computer within a computer, these cards often had their own graphics support (ATI Mobility Radeon, among others) as well as support for CD-ROMs, hard drives, LAN (for access) and everything else you would find in a standalone computer. Typically they could remotely start, reboot, and power down servers, all over a network connection.
Bloomberg Businessweek recently published an interesting article on ARM’s rise to power in the processing world. Their first major design ‘win’ was a failed product known as the Apple Newton, yet they would go on to become a powerhouse that is now challenging Intel.
In ARM’s formative years, the 1990s, the most popular RISC processor was the MIPS architecture, which powered high-end computers by SGI, while Intel made supercomputers (the Paragon) based on another RISC design, the i860. Now, nearly two decades later, after Intel abandoned its foray into the ARM architecture (StrongARM and XScale), RISC is again challenging Intel in the server market, this time led by ARM.
MIPS, now owned by Imagination, is again turning out new IP cores to compete with ARM and other embedded cores. Their Warrior-class processors are already providing 64-bit embedded processing power, though with a lot less press than the likes of Apple’s A7.
When the POWER5 processor was released in 2004 it was made in two versions: a DCM (Dual Chip Module) containing a POWER5 die and its 36MB L3 cache die, and an MCM containing four POWER5 dies and four L3 cache dies totaling 144MB. The POWER5 is a dual-core processor, thus the DCM was a quad-core, and the MCM an 8-core processor. The POWER5 contains 276 million transistors and was made on a 130nm CMOS9S process.
In 2005 IBM shrank the POWER5 onto a 90nm CMOS10S manufacturing process resulting in the POWER5+. This allowed speeds to increase to 2.3GHz from the previous max of 1.9GHz. The main benefit from the process shrink was less power draw, and thus less heat. This allowed IBM to make the POWER5+ in a QCM (Quad Chip Module) as well as the previous form factors. The QCM ran at up to 1.8GHz and contained a pair of POWER5+ dies and 72MB of L3 Cache.
The POWER5+ was more than a die shrink; IBM reworked much of the POWER5 to improve performance, adding new floating point instructions, doubling the TLB size, improving SMP support, and enhancing the memory controller, to mention just a few changes.
The result? A much improved processor and a very fine looking QCM.
Fully 10 years before Western Digital released their first hard drive, they made processors, calculator chips, floppy disk controllers and a host of other ICs. Western Digital began in 1970 making primarily calculator chips. In 1976 they announced the multi-chip MCP-1600 processor. This was an implementation of the PDP-11 minicomputer in silicon. It consisted of a CP1611 Register/ALU chip, a CP1621 control chip and either 2 or 4 CP1631 512 x 22-bit MICROMs that contained the microcode implementation of the PDP-11 architecture. Physically this was an 8-bit design, but clever microcode programming allowed it to function as a 16-bit processor. Use of microcode also allowed it to implement floating point support, a very new concept in hardware in 1976. The MCP-1600 was used in DEC’s LSI-11 microcomputer, among others.
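The trick behind presenting a 16-bit machine on an 8-bit datapath is to split each operation into byte-sized steps and chain the carry between them, which is exactly the kind of sequencing microcode is good at. A minimal Python sketch of the idea (an illustration only, not the MCP-1600’s actual microcode):

```python
def add16(a, b):
    """16-bit add built from two 8-bit adds, microcode-style:
    add the low bytes, then feed the carry into the high-byte add."""
    lo = (a & 0xFF) + (b & 0xFF)        # 8-bit add of the low bytes
    carry = lo >> 8                     # carry out of the low byte
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry  # high bytes + carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)             # wrap to 16 bits

print(hex(add16(0x00FF, 0x0001)))  # -> 0x100 (carry propagated)
```

Every 16-bit PDP-11 operation the MCP-1600 exposes is decomposed into byte-wide steps like this inside the MICROM, invisible to the programmer.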
Having the microcode separate from the ALU/control didn’t help with board layout or cost, but it did provide a very flexible platform on which to implement other architectures. In the late 1970s UCSD (University of California, San Diego) was working on a project, led by Kenneth Bowles, to make a portable version of the Pascal programming language, a version that could run on multiple hardware platforms, very similar to how Java works today. The code was compiled to a ‘p-code,’ or pseudo-code, that could then be executed by a virtual machine on whatever hardware was at hand. Typically this virtual machine would be implemented in software; however, the design of the MCP-1600 was such that it could be implemented in hardware, or rather microcode. Thus in 1978, the WDC MICROENGINE was born. This was to be a 5-chip set (original documentation states 4, but it ended up being 5) consisting of the CP2151 Data chip (if you have a CP2151 you would like to donate, let us know), the CP2161 Control chip, and three 512 x 22-bit MICROMs which contained the microcode to directly execute UCSD Pascal on the data chip. The CP2151 was no different from the CP1611 of the MCP-1600 chipset and could be interchanged.
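The p-code idea is easiest to see with a toy stack machine. The opcodes below are invented for illustration and are not the real UCSD p-System instruction set; the point is that the same compiled byte stream runs anywhere a host for the virtual machine exists, whether that host is software on an ordinary CPU or, as on the MICROENGINE, microcode:

```python
# A toy stack-machine interpreter in the spirit of UCSD p-code.
def run(code):
    stack, pc = [], 0
    while pc < len(code):
        op = code[pc]; pc += 1
        if op == 0x01:                    # PUSH <byte literal>
            stack.append(code[pc]); pc += 1
        elif op == 0x02:                  # ADD: pop two values, push sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 0x03:                  # RET: return top of stack
            return stack.pop()
    return None

# "(2 + 3)" compiled once; runnable on any host of this virtual machine
print(run(bytes([0x01, 2, 0x01, 3, 0x02, 0x03])))  # -> 5
```

On most machines the loop above would be a software program; the MICROENGINE effectively moved that dispatch loop into the MICROMs, so the data chip executed p-code directly.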
The ESA’s comet chaser Rosetta has just today awoken from a long, deep sleep on its comet chasing (and landing) mission. The solar powered spacecraft was launched back in 2004. It is based on the Mars Mariner II (itself based on the Voyager and Galileo) spacecraft design of the early 1990s, when the mission was first conceived. The main differences include using very large solar arrays in place of an RTG (Radioisotope Thermoelectric Generator), and upgraded electronics.
In order to conserve power on its outward loop (near Jupiter’s orbit) almost all systems were put to sleep in June of 2011, and a task was set on the main computer to wake the spacecraft 2.5 years later and call home. The computer in charge of that is powered by a Dynex MAS31750 16-bit processor running at 25MHz, based on the MIL-STD-1750A architecture.
A reader recently asked why such an old CPU design is still being used rather than, say, an x86 processor. As mentioned above, the Rosetta design was begun in the 1990s; the 1750A was THE standard high-reliability processor at the time, so it wasn’t as out of date then as it is now that it’s been flying through space for 10 years (after 10 years in the clean room). The 1750A is also an open architecture: no licenses are, or were, required to develop a processor to support it (unlike x86). Modern designs do use more modern processors, such as PowerPC-based CPUs like the RAD750 and its older cousin the RAD6000. Space system electronics will always lag current technology due to the very long lead times in their design (it may be 10 years on the ground before it flies, and the main computer is selected early on). x86 is used in systems that 1) have ample power and 2) are somewhat easily accessible, notably the International Space Station and Hubble. x86 was not designed with high reliability and radiation tolerance in mind, meaning other methods (hardware/software) have to be used to ensure it works in space.
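One common software method for tolerating radiation-induced upsets on processors that are not radiation hardened is redundancy with majority voting: run the computation three times (or on three units) and accept the value at least two copies agree on. A minimal sketch of the general technique (illustrative only, not any specific flight software):

```python
# Triple modular redundancy (TMR) in software: execute a computation
# three times and majority-vote the results, so a single radiation-induced
# upset in one copy cannot corrupt the final answer.
def tmr(f, *args):
    results = [f(*args) for _ in range(3)]
    for r in results:
        if results.count(r) >= 2:   # any value two or more copies agree on
            return r
    raise RuntimeError("no majority: all three copies disagree")

print(tmr(lambda x: x * 2, 21))  # -> 42
```

Hardware variants of the same idea vote between three physical processors; either way, a single flipped bit in one copy is outvoted by the other two.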
Currently the ESA designs with an open-source processor known as the LEON, which is SPARC-V8 based.
Last week we showed you an educational kit from Zilog showing the process involved in making and assembling a Z80 processor, from polished wafer to packaging. Zilog also made a kit for marketing the various packages used. This kit contains a 64-pin shrink-DIP socket, a 64-pin shrink-DIP package, a 48-pin DIP and a 40-pin DIP, all the common packages used at the time.
‘At the time’ is a little hard to track down, as no date is provided with this kit. We can get very close, though, by looking at the back, where Zilog lists which devices are available in these packages. The usual Z80 and Z8000 series are both there, as well as the Z8 microcontroller family. The one oddball is the Zilog Z800. The Z800 was an upgraded Z80 released in 1985, adding on-chip cache, an MMU and a vastly expanded instruction set (over 2000 instruction/addressing-mode combinations). It was wholly unsuccessful, partly due to bad marketing by Zilog, and partly because it did more than it needed to. It never entered mass production, and by 1986 Zilog had redesigned it, converted the design to CMOS (from NMOS) and released it as the Z280, which met the same fate as the Z800. It seemed that making an overly complicated Z80 wasn’t what the market wanted. The Z180 (designed by Hitachi) and the Zilog eZ80 (released in 2001) have enjoyed much wider success, mainly because they kept closer to the simplicity of the original Z80.
So when was this kit put together? Likely 1985, as the Z800 was only talked about for a few months before quietly being put away.
Here is a neat kit from Zilog. It’s an Educational Kit showing some of the steps of producing a Z-80 processor. It includes:
- A raw polished wafer slice, before any etching has occurred. This is what a processor starts out as (sliced from a single large ingot).
- A slice of an etched wafer. In this case it appears to be some sort of memory, but the process is the same for a processor.
- Several cut dice. These are cut from a wafer after testing. The red dot notes that these particular dice failed one or more of the tests and should be discarded. That’s probably why they made it into this kit rather than a saleable device.
- A bare, unfinished package. These packages are rarely if ever made by the company making the processor. They are made by companies such as NGK (who also make spark plugs) and Kyocera. The bottom of the die cavity is usually connected to the ground pin of the package.
- Next is a package with the die placed in the die cavity. No bonding wires are installed in this example, but that would be the next step. The very fine gold bonding wires connect the pad ring on the edge of the die to the pads in the die cavity. Those pads are connected through the package to the 40 pins of the ceramic DIP package.
- Finally we have a completed device. The lid is usually soldered or brazed onto the package and markings applied. The markings on this example make it a ‘Marketing Sample,’ as they are there solely for looks, rather than to identify the device, its date, and lot.
These types of kits were produced for educational use and given to schools, as well as to salespeople to assist in marketing Zilog’s various products.
Welcome to 2014 and a new year of exciting processors and technology finds at the CPU Shack Museum. We’ll spend the next couple weeks posting some of the more interesting finds of 2013 that didn’t get posted before.
PA-RISC was HP’s architecture, meant to unify all their non-x86 processors of the 1980s. The project began in the 1980s and produced over a dozen processor designs, ending with the PA-8900 in 2005, though the Itanium borrows heavily from the PA-RISC line. HP discontinued support for PA-RISC servers in 2013 and recently announced that they will discontinue use of the Itanium as well.
Early PA-RISC processors were multi-chip designs, such as this PA-7000. The PA-7000 pictured is only the CPU; the FPU was a separate chip, as were the L1 caches (there was no support for L2 caches). The memory controller was also a separate chip. Made on a 1-micron process, the PA-7000 had 580,000 transistors and ran at 66MHz. Early versions had 2 lugs for the heatsink on the package, while later versions had only a single lug.
NASA has successfully launched the $671 million MAVEN mission to Mars for atmospheric research. Like the Mars Reconnaissance Orbiter it is based on, its main computer is a BAE RAD750, a radiation-hardened PowerPC 750 architecture. This processor first flew on the Deep Impact comet chaser and is capable of withstanding up to 1 million rads of radiation. The entire processor sub-system can handle 200,000 rads. To put this in perspective, 1000 rads is considered a lethal dose for a typical human, and it is likely far more than the Apple Mac G3, in which the PowerPC 750 was originally used back in 1998, could withstand. The processor can be clocked at up to 200MHz, though it will often run slower for power conservation.
MAVEN should reach Mars within a few days of the Indian Space Agency’s $71 million Mangalyaan orbiter, launched earlier this month. MAVEN is taking a faster route, at the expense of a heavier booster and greater fuel consumption. The Mangalyaan orbiter’s main processor is the GEC/Plessey (originally produced by Marconi and now Dynex) MAR31750, a MIL-STD-1750A processor system.
‘Itanium is dead’ is a phrase that has been used for over a decade; in fact many claimed that the Itanium experiment was dead before it even launched in 2001. The last hold-out of the Itanium architecture was HP, likely because the Itanium had a lot in common with its own PA-RISC. However, HP has announced that they will be transitioning their NonStop server series to x86, presumably the new 15-core Xeons Intel is developing. Itanium was launched with the goal of storming the server market. Billed as the next greatest thing, it failed to make the inroads expected, largely due to the two decades of x86 code it didn’t support, and poor initial compiler support. Many things were learned from Itanium, so though it will become but a footnote, its technology will live on.
Interestingly, other architectures that seemed to be on the brink are getting continued support in new chips. Imagination, known for their graphics IP, purchased MIPS, and has now announced the MIPS Warrior P-class core. This core supports speeds of over 2GHz and is the first MIPS core with 128-bit SIMD support.
Broadcom, historically a MIPS powerhouse, has announced a 64-bit ARM server class processor with speeds of up to 3GHz. Perhaps ironic that ARM is now being introduced into a market that Itanium was designed for. Broadcom has an ARM Architecture license, meaning they can roll their own designs that implement the ARM instruction set, similar to Qualcomm and several others.
POWER continues to show its remarkable flexibility. Used by IBM in larger mainframes in the POWER7 and POWER8 implementations, it crunches data at speeds up to 4.4GHz. On the other end of the spectrum, Freescale (formerly Motorola, one of the developers of the POWER architecture) has announced the 1.8GHz quad-core QorIQ T2080 for control applications such as networking and other embedded uses. These days the POWER architecture is not often talked about, at least in the embedded market, but it continues to soldier on and be widely used. LSI has used it in their Fusion-MPT RAID controllers, Xilinx continues to offer it embedded in FPGAs, and BAE continues to offer it in the form of the RAD750 for space-based applications.
Perhaps it is this flexibility of use that has allowed these architectures to endure. Itanium was very focused, and did its one job very well; the same goes for the Alpha architecture and the Intel i860, all of which are now discontinued. ARM, MIPS, POWER, x86 and a host of MCU architectures continue to be used because of their flexibility and large code bases.
So what architecture will be next to fall? And will a truly new architecture be introduced that has the power and flexibility to stick around?