February 9th, 2020 ~ by admin

ESA Solar Orbiter: When SPARCs Fly

ESA ERC-32SC

ERC-32SC – SPARC V7 MCM with RAM and MIL-STD-1553

In a few hours (assuming no more delays; UPDATE: Launch Successful) the joint NASA/ESA Solar Orbiter mission will launch on a ULA Atlas 5 rocket out of Florida, USA.  This is a mission a long time coming for the ESA, which, like NASA, has to get its funding from government, except that in the case of ESA that involves the governments of many countries in the EU, which can make planning a bit more tricky.  The mission was originally baselined in 2011 and hoped to launch in 2013…then 2017…then 2018, and finally got a launch date in 2020.  The original proposal dates to the late 1990’s, as a mission to replace the joint NASA/ESA SOHO solar mission that had launched in 1995.  This created some interesting design choices, as design often happens before a mission is completely approved/funded.  For Solar Orbiter this is one of the main reasons it is powered by a computer that by today’s standards is rather dated, even by space standards!

Solar Orbiter – ESA

The Solar Orbiter is powered by a processor designed by the ESA, the ERC-32SC.  This is the first generation of processors designed by the ESA.  It is a SPARC V7 compliant processor running at 25MHz and capable of 20MIPS.  The ERC-32SC is a single-chip version of the original ERC-32, which was an MCM (Multi-Chip Module) containing the 3 dies that made up the processor (the Atmel/Temic TSC691 Integer Unit, TSC692 FPU and TSC693 Memory Controller), made on a 0.8u CMOS process.  The single-chip version was made possible by a process shrink to 0.5u.  It was also made by Atmel (which acquired Temic) and is commercially known as the TSC695.  As it is designed for space use, it is capable of handling a 300krad Total Ionizing Dose of radiation.  The computer used in the Solar Orbiter was built by RUAG and has two separate ERC-32SC processor systems for redundancy.  Each ERC-32SC is actually mounted on an MCM together with 48MB of DRAM (38 of which is usable, the remainder being for Error Detection and Correction (EDAC) via a Reed-Solomon code) and a MIL-STD-1553 bus controller/RTC/IO, all included in the package.
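The EDAC scheme used on the flight computer is Reed-Solomon, which is fairly involved, but the basic idea of spending extra memory on check bits so a radiation-induced bit flip can be caught and repaired is easy to show.  Here is a minimal sketch in Python using a simple Hamming(7,4) single-error-correcting code as a stand-in (purely illustrative, not the actual flight algorithm):

def hamming74_encode(nibble):
    # Encode 4 data bits into a 7-bit codeword with 3 parity bits.
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]    # codeword positions 1..7

def hamming74_correct(code):
    # The syndrome is the 1-based position of a single flipped bit (0 = clean).
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        c[syndrome - 1] ^= 1                       # repair the upset bit
    d1, d2, d3, d4 = c[2], c[4], c[5], c[6]
    return d1 | (d2 << 1) | (d3 << 2) | (d4 << 3)

word = hamming74_encode(0b1011)
word[5] ^= 1                                       # simulate a single-event upset
assert hamming74_correct(word) == 0b1011           # original data comes back

The real Reed-Solomon code protects whole memory words and can correct multi-bit symbol errors, which is why a chunk of that 48MB is given over to check data.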

Fujitsu MB86900 – Original SPARC V7 Processor from 1987

The original specifications for this processor were developed back in the 1990’s, which is why it is a SPARC V7, equivalent to the very first Sun SPARC workstations of the late 1980’s powered by the likes of the Fujitsu MB86900/MB86901.  The ESA has developed several follow-on processors since, all based on the later SPARC V8 architecture.  They are faster and more efficient than the ERC-32SC, with some even being dual core processors.  They are known as the LEON2 and the later LEON3.  The LEON2 has a 5-stage pipeline and no SMP support, while the LEON3 increases the pipeline to 7 stages and adds SMP support.  The LEON3 is also a VHDL core able to be added to many ASICs/FPGAs (the LEON2 is a hard core).  The Solar Orbiter has both LEON2 and LEON3 processors on board as well…

The Solar Orbiter carries with it 10 different scientific instruments, and each of them has its own processing subsystem, 9 of which are powered by LEON SPARC processors.  It’s common for the main processor of a spacecraft to be the most powerful, but in this case the instruments each possess a processor more powerful than that of the main spacecraft computer.  This is in large part due to many of these instruments being designed well after the original spacecraft bus and systems were baselined.  Payloads can be added/changed much later in the design of the spacecraft, allowing their designers to use more modern computers.

Instrument | Processor(s) | Notes
Solar Orbiter OBC | ERC-32SC – Atmel TSC695 | Spacecraft platform processor
SoloHi | LEON3FT – Microsemi RTAX2000 FPGA |
MAG-IBS/OBS | LEON3FT – Microsemi RTAX2000 FPGA |
RPW-SCM/ANT | LEON3FT – Microsemi RTAX4000D FPGA + LEON3FT – Cobham UT699 | Two processors
SWA-HIS/EAS/PAS | LEON2FT – Atmel AT697F | Up to 100MHz
EPD-SIS | LEON2FT – IP core |
STIX | LEON3FT – Microsemi RTAX2000 FPGA |
EUI | LEON3FT – Cobham UT699 | 66MHz, single core
METIS | LEON2FT – Atmel AT697F |
PHI | LEON3FT – Cobham GR712RC | Dual core, up to 100MHz
SPICE | 8051 + FPGA | Long live the MCS-51

There are likely more processors on this mission as well, but they can be hard to track down; nearly every subsystem (star trackers, radios, attitude control, etc.) has its own processing.

So as you watch the launch tonight, and perhaps see science and pictures from the Solar Orbiter (or just benefit from its added help in predicting solar storms, allowing us here on Earth to prepare for them better), think of all the SPARCs it has taken to make it function.

 

January 24th, 2020 ~ by admin

ARMing the Modems of the 1990’s

Racks of external modems at an ISP back in the day

Back in the 1990’s I worked at several ISPs in my hometown.  These were the days of dial up, and by working at the ISP I got free dial up access, which my family and I enjoyed.  We had several racks (white wire racks) of external modems for dial in.  This was the most common solution for smaller ISPs.  External modems were usually more reliable, cheap, and easy to replace if/when they failed (and they did).  They got warm, so it wasn’t uncommon to see a fan running to help move more air.  Surprisingly I could only find a few pictures of such installations, but you get the idea.

By the late 1990’s, as dial in access and ISPs grew to be major concerns, dial up solutions became much more sophisticated.  Gone were wire racks of modems and in were rackmount all-in-one dial in solutions.  These included boards that hosted dozens of modems on one PCB, with their own processing and management built in.  One of the largest companies for these solutions was Ascend Communications.  Their ‘MAX TNT’ modem solution once boasted over 2 million dial up ports during the 1990’s.  Such was Ascend’s popularity that they merged with Lucent in 1999, a deal that was the biggest ever at its time, valued at over $24 Billion ($37 Billion in 2020 USD).  It wasn’t just traditional ISPs that needed dial up access; ATMs and credit card processing became huge users as well.  It wasn’t uncommon to try to run a credit card at a store in the 1990’s and have to wait, because the machine got a busy signal.  The pictured Ascend board has 48 modems on a single PCB, and would be in a rack or case with several more boards, supporting hundreds of simultaneous connections.

Ascend CSM/3 – 16x Conexant RL56CSMV/3 chips provide 48 modems on one board.

Ascend’s technology was based primarily on modem chips provided by Conexant (Rockwell Semiconductor before 1999).  Rockwell had a long history of making modem controllers, dating back to the 1970’s.  Most of their modem controllers up through the 80’s and early 90’s were based on a derivative of the 6502 processor.  This 8-bit CPU was more than adequate for personal-use modems up to 33.6kbaud or so, but began to become inadequate for some of the higher end modems of the 1990’s.  These ran at 56k, supported various voice, fax, and data modes, and handled a lot of their own DSP needs as well.  Rockwell’s solution was to move to an ARM based design, and integrate everything on chip.

One of the results of this was the AnyPort Multiservice Access Processor.  It was called the Multiservice Access Processor because it handled voice, data, 33.6/56k, ISDN, cellular, FAX and several other types of data access, and it did so in triplicate.  The RL56CSMV/3 supported 3 different ports on one chip.  The CSM3 series was the very first ARM-cored device Rockwell produced.  Rockwell had licensed the ARM810 (not very common), the ARM7TDMI and a ‘future ARM architecture’ (which became the ARM9) back in January of 1997.

Conexant RL56CSM/3 R7177-24 ARM7 (non-V version has no voice support)

In less than two years Rockwell had designed and released the first AnyPort device, remarkable at the time.  The CSM/CSMV used the ARM7TDMI running at 40MHz and was made on a 0.35u process.  The CSM/CSMV has another interesting feature, and that’s the backside of the chip….

Take a look at the backside of the 35mm BGA chip; the ball arrangement is very unusual!  There is a ring of balls around the outer edge and 4 squares of 16 balls inside of that.  This is a multi-die BGA package.  There are 4 dies inside one BGA package: three dies for the 3 Digital Data Pumps (DDPs) and a separate die for the ARM7 MCU (which is made on a different process than the mixed signal DDPs).  Most of the balls in those inner squares are connected to GND and used for thermal dissipation (dissipating heat via the main PCB’s ground plane).  It’s not uncommon to see multi-die packages today, but a multi-die BGA package in 1999 was fairly innovative.

Surprisingly, many of these chips are still in service; in today’s world of high speed broadband connections there are still many who are stuck on dial up.  As recently as 2015 AOL was still serving 2.1 million dial up customers in the US (out of around 10 million dial up customers total), which was still netting the company nearly half a billion dollars a year (by far their largest source of revenue at the time).  There is also plenty of other infrastructure that still relies on dial up, ISDN, and even FAX services that require end point connections like the CSMV, so its end is probably still a long way off.


Posted in:
CPU of the Day

January 14th, 2020 ~ by admin

Barn Find MOS MCS6502 – A Restoration

ATARI Arcade Board

In car collecting, one of the ‘holy grail’ experiences is the ‘Barn Find’: finding and recovering a rare vehicle that has sat untouched in some barn or shed for some time.  They are often in rough but original condition and can evoke much excitement.  As it turns out, CPUs are not so different.  I recently purchased a very rough and very old ATARI Arcade board.

The pictures clearly showed it in terrible condition, with lots of oxidation and ‘stuff’ on it.  But it also had a white MOS 6502 processor.  These are some of the very first CPUs made by MOS and are rather desirable, as in addition to their use by ATARI, they were used in the very first Apple computer, the Apple 1.

When the board arrived it was clearly in bad shape, take a look at that nastiness.  What you can’t see, or rather smell, is the cow manure.  Clearly this board was in an actual barn at some point.  It was probably relegated to such a retirement after serving in an arcade parlor or bar for some time; either that or there was some bovine gaming going on.

You can see there is some oxidation on the lids of the various chips as well.  The ROMs and CPU are in sockets.  These sockets are nice; they are not machine sockets but rather LIF (Low Insertion Force) sockets, which helps, as the pins on these chips are very delicate, and very possibly corroded.

Before attempting to remove the MCS6502 it’s best to see what I am working with, so I pulled some of the ROMs nearest to the 6502 to see how their pins look and how easily they come out of their sockets.  They came out without a lot of effort, but you can see there is some oxidation on the pins.  What we do not want is for the pins to be rusted TO the socket and then break off from the force needed to remove the chip from the socket.

To help mitigate this risk I used some penetrating oil on the pins in the socket.  It seems strange to be squirting oil into a socket, but it works.  It will help penetrate the rust and decrease the force needed to remove the 6502.  After adding the oil I let the board sit on my office heater for several hours.  This helps the oil penetrate, and it also made my office smell like Deep Creep and cow manure; all in a day’s work.

Then I very gently worked on removing the 6502, testing how tight it is and working it out from both ends.  It came loose with very little drama, hopefully with all its pins intact….

Read More »


Posted in:
How To

January 2nd, 2020 ~ by admin

Chips in Space: Making MILSTAR

Milstar Satellite

Back in the late 1970’s having a survivable space based strategic communications network became a priority for the US Military.  Several ideas were proposed, with many lofty goals for capabilities that at the time were not technologically feasible.  By 1983 the program had been narrowed to a highly survivable network of 10 satellites that could provide LDR (Low Data Rate) strategic communications in a wartime environment.  The program became known as MILSTAR (Military Strategic and Tactical Relay) and in 1983 President Reagan declared it a National Priority, meaning it would enjoy a fair amount of freedom in funding, lots and lots of funding.  RCA Astro Electronics was the prime contractor for the Milstar program, but during the development process it was sold to GE Aerospace, then Martin Marietta, which became Lockheed Martin before the 3rd satellite was launched.  The first satellite was supposed to be ready for launch in 1987, but changing requirements delayed that by 7 years.

Milstar Program 5400 series TTL dies

The first satellite was delivered in 1993 and launched in February of 1994.  A second was launched in 1995, and these became Milstar-1.  A third launch failed; it would have carried a hybrid satellite that added a Medium Data Rate (MDR) system.  Three Block II satellites were launched in 2001-2003 which included the MDR system, bringing the constellation up to 5.  This provided 24/7 coverage between 65 degrees N/S latitude, leaving the poles uncovered.

TI 54ALS161A

The LDR payload was subcontracted to TRW (which became Northrop Grumman) and consisted of 192 channels capable of data rates of a blazing 75 – 2400 baud.  These were designed for sending tasking orders to various strategic Air Force assets, nothing high bandwidth; even so, many such orders could take several minutes to send.  Each satellite also had two 60GHz cross links, used to communicate with the other Milstar sats in the constellation.  The LDR (and later MDR) payloads were frequency hopping spread spectrum radio systems with jam-resistant technology.  The later MDR system was able to detect and effectively null jamming attempts.
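The frequency hopping concept itself is simple enough to sketch: both ends step through a pseudo-random channel sequence derived from a shared secret, so a jammer that doesn’t know the sequence is almost always sitting on the wrong channel.  A toy Python illustration (the channel count, hop count and key here are made up; the real Milstar hop sets are classified):

import random

CHANNELS = 1000          # illustrative channel count only

def hop_sequence(shared_key, n_hops):
    # Both ends seed identical PRNGs, so they land on the same
    # pseudo-random channel at every hop interval.
    rng = random.Random(shared_key)
    return [rng.randrange(CHANNELS) for _ in range(n_hops)]

tx = hop_sequence(shared_key=0x5EC12E7, n_hops=8)
rx = hop_sequence(shared_key=0x5EC12E7, n_hops=8)
assert tx == rx          # receiver stays in lockstep with the transmitter
print(tx)                # unpredictable to anyone without the key

A real system uses a cryptographic generator and precise time synchronization rather than Python’s PRNG, but the lockstep idea is the same.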

The LDR system was built out of 630 LSI circuits, most of which were contained in hybrid multi-layer MCM packages.  These LSIs were a mix of custom designs by TRW and off-the-shelf TTL parts.  Most of the TTL parts were sourced from TI and were ALS family devices (Advanced Low Power Schottky), the fastest/lowest power available.  TI began supplying such TTL (as bare dies for integration into MCMs) in the mid-1980’s.  These dies had to be of the highest quality, and traceable to the exact slice of the exact wafer they came from.

Traceability Markings

They were supplied in trays, marked with the date, diffusion run (a serial number for the process and wafer that made them) and the slice of that wafer, then stamped with the name/ID of the TI quality control person who verified them.

These TTL circuits are relatively simple; the ones pictured are:
54ALS574A Octal D-type edge-triggered flip-flop (usually used as a buffer/register)
54ALS193 Synchronous 4-Bit Up/Down Binary Counter with dual clock
54ALS161A Synchronous 4-Bit Binary Counter with asynchronous clear

ALS160-161

Looking at the dies of these small TTL circuits is quite interesting.  The 54ALS161A marking on the die appears to be on top of a ‘160A marking.  TI didn’t make a mistake here; it’s just that the 160 and 161 are essentially the same device.  The 161 is a binary counter, while the 160 was configured as a decade counter.  Only one mask layer change was required to make it either one.
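A quick behavioral sketch in Python (just the counting behavior, not the actual TTL logic) shows how little separates the two parts:

def count_binary(state):
    # '161-style 4-bit binary counter: counts 0..15 then wraps.
    return (state + 1) & 0xF

def count_decade(state):
    # '160-style decade counter: counts 0..9 then wraps.
    return (state + 1) % 10

s161 = s160 = 0
seq161, seq160 = [], []
for _ in range(12):
    s161, s160 = count_binary(s161), count_decade(s160)
    seq161.append(s161)
    seq160.append(s160)

print(seq161)   # [1, 2, ... 11, 12], keeps counting up to 15 before wrapping
print(seq160)   # [1, 2, ... 9, 0, 1, 2], wraps after 9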

ALS573 and ALS574 die

It is similar with the 54ALS574, which shares a die with the more basic ‘573 D-type transparent latch.  This was pretty common with TTL (if you look at a list of the different 7400 series TTL you will notice many are very similar, with but a minor change between two chips).  It is of course the same with CPUs, with one die being able to be used for multiple core counts, PCIe lane counts, cache sizes, etc.
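The difference between the ‘573 and ‘574 is just as small in behavioral terms, transparent versus edge-triggered (again a Python sketch of the behavior, not the silicon):

def latch_573(q, d, le):
    # '573-style transparent latch: output follows D while LE is high,
    # and holds the last value once LE goes low.
    return d if le else q

def flipflop_574(q, d, rising_edge):
    # '574-style flip-flop: output only updates on a rising clock edge.
    return d if rising_edge else q

q = latch_573(q=0, d=1, le=1)   # transparent: q follows D and becomes 1
q = latch_573(q=q, d=0, le=0)   # LE low: q holds 1 even though D changed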

Together with others, they performed all the functions of a high reliability communications system, so failure was not an option.  TI supplied thousands upon thousands of dies for characterization and testing.  The satellites were designed for a 10 year lifetime, by which time it was hoped

Milstar Hybrid MCM Command Decoder (picture courtesy of The Smithsonian)

something better would be ready (no doubt creating another nice contract), but alas, as many things are, a follow-on didn’t come along until just recently (the AEHF satellites).  This left the Milstar constellation to perform a critical role well past its design life, which it did and continues to do.  Even the original Milstar 1 satellite, launched in 1994 with 54ALS series TTL from the 1980s, is still working 25 years later, a testament to TRW and RCA Astro’s design.  Perhaps the only thing that will limit them will be the available fuel for their on-orbit Attitude Control Systems.

While not necessarily a CPU in themselves, these little dies worked together to get the job done.  I never could find details of the actual design, but it wouldn’t surprise me if the satellites ran AMD 2901 based systems, common at the time, or a custom design based on ‘181-series 4-bit ALUs.  Finding bare dies is always interesting, to be able to see what’s inside a computer chip, but to find ones that were made for a very specific purpose is even more interesting.  The Milstar Program cost around $22 Billion over its lifetime, so one must wonder how much each of these dies cost TRW, or the US Taxpayer?


Posted in:
CPU of the Day

December 27th, 2019 ~ by admin

RIP Chuck Peddle: Father of the 6502

Original MOS 6501 Processor from 1975 – Designed by Chuck Peddle.

On December 15th one of the true greats of processor design passed away at age 82.  Chuck Peddle, born in 1937, before the transistor was even invented, designed the 6502 processor back in 1974.  The 6502 (originally the 6501, actually) went on to become one of the most popular and widely used processors of all time.  It powered the likes of the Apple 1, Commodores, ATARIs and hundreds of others.  It was copied, cloned, and expanded by dozens of companies in dozens of countries.  It was so popular that computers were designed around it in the Eastern Bloc, which eventually made its own versions (such as those used in the Pravetz computers of Bulgaria).

Sitronix ST2064B – Based on the 65C02 – Core is visible in the upper right of the die. (photo by aberco)

The 6502 was a simple but useful 8-bit design, which meant that as time went along and processors migrated to 16, 32 and 64 bits and speeds jumped from MHz to GHz, the venerable 6502 continued to find uses, and to be made, and expanded.  Chuck continued to be involved in all things 6502 until only a few years ago, designing new ways to interface FLASH memory (which hadn’t been invented when he designed the 6502) to the 6502.

The chips themselves, now in CMOS of course, continue to be made to this day by the Western Design Center (WDC), and the 65C02 core is used in many, many applications, notably LCD monitor controllers and keyboard controllers.  We can hope that the 6502 will have as long a life as Mr. Peddle, though I would wager that somewhere, somehow, in 2056 a 6502 will still be running.


Posted in:
Museum News

November 1st, 2019 ~ by admin

CPU of the Day: Motorola MC68040VL

Motorola MC68040VL

A month or so ago a friend was opening up a bunch of unmarked packages and taking die photos, and came across an interesting Motorola.  The die looked familiar, but at the same time different.  The die was marked 68040VL, and appeared to be a smaller version of the 68040V.  The Motorola 68040V is a 3.3V static design of the Motorola MC68LC040 (it has dual MMUs but lacks the FPU of the 68040).  The 68040V was made on a 0.5u process and introduced in 1995.  Looking closely at the mask revealed the answer, in the form of 4 characters: F94E.

Motorola Mask F94E – COLDFIRE 5102

Motorola uses mask codes for nearly all of their products; in many ways these are similar to Intel’s sspecs, but they are more closely tied to actual silicon mask changes in the device.  Multiple devices may use the same mask/mask code, just with different features enabled/disabled.  The mask code F94E is that of the first generation Motorola COLDFIRE CPU, the MCF5102.  The COLDFIRE was the replacement for the Motorola 68k line; it was designed to be a 32-bit VL-RISC processor, thus the name 68040VL, for VL-RISC.  VL-RISC architectures support fixed length instructions (like a typical RISC) but also support variable length instructions like a traditional CISC processor.  This allows a lot more code flexibility and higher code density.  While this may be heresy to RISC purists, it has become rather common.  The Transputer-based ST20 core from ST is a VL-RISC design, as is the more modern RISC-V architecture.  The COLDFIRE 5102 also had another trick, or treat, up its sleeve: it could execute 68040 code.
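The code density argument mentioned above is easy to see with a toy comparison: encode the same short sequence of operations as fixed 32-bit words versus a mix of 16/32/48-bit encodings.  The byte counts below are invented purely for illustration, not actual ColdFire or 68k encodings:

# Hypothetical sizes, in bytes, for the same 6-operation sequence.
fixed_risc = [4, 4, 4, 4, 4, 4]         # classic fixed-length RISC
vl_risc    = [2, 2, 4, 2, 6, 2]         # common ops get short encodings

print(sum(fixed_risc), "bytes fixed-length")      # 24
print(sum(vl_risc), "bytes variable-length")      # 18, about 25% denser

In an embedded part like the ColdFire, where code often lives in on-chip or tightly budgeted memory, that kind of density matters more than instruction-decode purity.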

Read More »

October 7th, 2019 ~ by admin

The Forgotten Ones: RISCy Business of Winbond

Winbond W77E58P-40 – Your typical Winbond MCS-51 MCU

Winbond Electronics was founded in Taiwan back in 1987, and is most widely known for their memory products and system I/O controllers (found on many motherboards of the 1990s).  They also made a wide variety of microcontrollers, mostly based on the Intel MCS-51 core, like many, many other companies have and continue to do.  They also made a few 8042 based controllers, typically used as keyboard controllers, and often integrated into their Super I/O chips.  So why do I find myself writing about Winbond, whose product portfolio seems admittedly boring?

It turns out that, once upon a time, Winbond decided to take a journey down a rather ambitious path.  Back in the early 1990’s they began work on a 32-bit RISC processor, and not an ARM or MIPS processor, which were just starting to become known at the time, but a processor based on the HP PA-RISC architecture.  This may seem odd, but HP, in a shift from their previous architectures, wanted the PA-RISC design to be available to others.  The Precision RISC Organization was formed to market and develop designs using the architecture outside of HP.  HP wanted to move all of their non-x86 systems to a single RISC architecture, and to help it become popular, and well supported, it was to be licensed to others.  This is one of the same reasons that made x86 so dominant in the PC universe.  More platforms running PA-RISC, even if they were not HP, meant more developers writing PA-RISC code, and that meant more software, more support, and a wider user base.  Along with Winbond, Hitachi and OKI also developed PA-RISC controllers.  Winbond’s path was innovative and much different than the others’; they saw easy development as crucial to their product’s success, so when they designed their first PA-RISC processor, the W89K, they made it a bit special.

Read More »

Posted in:
CPU of the Day

October 1st, 2019 ~ by admin

The Story of the IBM Pentium 4 64-bit CPU

Introduction

This time we will talk about a unique Intel processor, one which never appeared on the retail market and whose reviews you will not find on the Internet.  This processor was produced purely by special order for one well-known manufacturer of computer equipment.  In this article I will also try to assemble one of the most powerful retro-systems using this processor.

From the title of the article, I think many people already understand that we will be talking about a Socket 478 Intel processor.

Most people are familiar with Socket 478, which replaced Socket 370 at the end of 2001 (we omit Socket 423 due to its short lifespan of less than a year) and allowed the use of single-core, and then, with Hyper-Threading technology, “pseudo-dual” processors that could perform two tasks in parallel.  All production Intel processors for Socket 478 were 32-bit, even the couple of Pentium 4 Extreme Edition representatives based on the server-derived «Gallatin» core.  But as always there are exceptions.  And this exception, or to be more precise, these two exceptions, were two models of Pentium 4 processors with the Prescott core, which had 64-bit instructions (EM64T) at their disposal.

Intel Pentium 4 SL7QB 3.2GHz: 64-bits on S478

This pair of processors was commissioned by IBM for its eServer xSeries servers.  These processors never hit the retail market and their circulation was not very large, so finding them now is very problematic.  It is interesting that, if you want (and naturally have the right amount of money, or a large enough order), you can count on a special order of exactly the processor needed for your specific needs, with characteristics that are unique and will not be repeated in standard production products.  And it should be noted that more than a few such processors have been released; in fact, in the 70’s and early 80’s this was the very purpose of the now ubiquitous ‘sspec.’  Chips with an sspec (Specification #) were chips that had some specification DIFFERENT from the standard part/datasheet.  A chip WITHOUT an sspec was a standard product.  By the late 1980’s all chips began to receive sspecs as a means of tracking things like revisions, steppings, etc.  I will talk about some of these a little later.

That’s how the processor looks through the eyes of the CPU-Z utility.  In the “Instructions” field, after SSE3, EM64T proudly shows off!  Link to the popular CPU-Z Validation.

The special processors made for IBM used the Prescott core and were based on the E0 stepping, with support for 64-bit instructions, which is not typical for Socket 478!  The first 64-bit CPUs for “everyone” appeared only with the arrival of the next socket, LGA775, and even then it wasn’t right away; some Pentium 4 models in the LGA775 version were 32-bit.  I specifically pointed out that the Pentium 4 Socket 478 models with EM64T support belonged to the E0 stepping, because the more advanced G1 stepping released later did not have such innovations.  The first model worked at a frequency of 3.2 GHz and had the spec code SL7QB; the second was slightly faster, with a frequency of 3.4 GHz and the spec code SL7Q8.

In all other respects these were the usual «Prescott» parts.  But the presence of 64-bit instructions made these processors unique, capable of working with 64-bit operating systems and applications, allowing them to do what their 32-bit comrades simply could not.

Read More »

September 18th, 2019 ~ by admin

Pardon the Mess…Upgrading PHP – FIXED

Moving The CPU Shack to PHP 7 has broken some old legacy code (now why would a museum have old code? ha).  A few things (like the header and the OLD pictures section) are not working; they should be fixed soon.

Thanks!

EDIT: Looks like we got it all fixed, if ya notice anything broken/not working let me know

 

Posted in:
Museum News

August 28th, 2019 ~ by admin

Sushi Tacos and Lasers: Marking Intel Processors

Intel ink stamp used for marking chips in the 1970’s

In 1987 Intel became the first semiconductor manufacturer to use lasers to mark all component parts, including ceramic packages (they still used ink for some, but had the capability and eventually rolled out laser marking to nearly all of their assembly/test locations).  Conventional ink marking for ceramic packages required a post-mark ink cure time, and production yields ranged from 96%-98% before rework.  That percentage may be good on a school exam, but in a production environment, having to rework 2-4% of everything off the line is unacceptable.  It costs resources, money and time that do not go to making profit.

Intel A80387-20B SX024 remarked with a laser

With lasers, however, the cure operation was not needed and yields increased to better than 99.95%.  Lasers were so consistent that marking became a zero-rework process and overall productivity increased by 25%.  Throughput also increased significantly (less rework, and lasers are faster) and inspection requirements dropped by 95%.  These lasers were originally developed for ceramic packages but were found to work well on plastic packages as well.  They also made remarking significantly easier: old markings could be crossed out with the laser and new markings made.  No stencils, pads or masks were needed; the lasers were programmable and very fast.
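To put those yield numbers in perspective, a bit of back-of-the-envelope arithmetic (the weekly volume is a made-up figure, just to show the scale of the difference):

parts_marked = 100_000            # hypothetical parts marked per week
ink_yield    = 0.97               # middle of the quoted 96%-98% range
laser_yield  = 0.9995

print("ink rework:  ", round(parts_marked * (1 - ink_yield)))    # ~3,000 parts
print("laser rework:", round(parts_marked * (1 - laser_yield)))  # ~50 parts

Going from thousands of reworked parts down to a few dozen, with no cure time, is why the change paid for itself.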

Intel continues to use laser marking today (as do most manufacturers).  Intel uses laser marking systems from Rofin-Sinar (now owned by Coherent).  These lasers are typically from the PowerLine E line, which are diode end-pumped Nd:YVO4 (neodymium-doped yttrium vanadate) solid-state lasers.  These are basically a high-end, high-power version of the DPSS lasers used in green laser pointers.  Intel went with these as they were faster and cleaner than CO2

Intel Package marked SUSHI TACO SALAD. Perhaps the technician was getting hungry while trying to dial in the laser settings.

lasers (at the same power levels).  These lasers typically run in the 10-40 Watt range.  Most commonly they are 532nm lasers (green light).  In order to achieve the speeds needed, these marking systems are run in a pulsed mode, 1-200KHz depending on the speed and material being marked.  This allows the laser to run at very high power, for very short pulses.

This of course requires some tuning, essentially simple trial and error, to find the right settings for a given material.  Today’s packages are very thin, and marking on the organic substrate (or the silicon die itself) must be done in a way that leaves the markings visible but does not damage the underlying structure.  These markings are often only a few microns deep on silicon and 25 microns deep on a package, as anything deeper than that is into the chip’s circuitry.

Motorola PP603 Engineering Sample with ROFIN BAASEL test marking on the die

Rofin offers testing and calibration for some of their bigger customers (such as Intel), where they help develop the settings needed.  This results in a lot of ‘oddly’ marked chips.  Companies will ship packages, dies and whatever else needs to be marked to Rofin, along with specifications of the markings (how wide, tall, deep, etc.), and the systems/settings are worked out to make it workable on the production line.  Anyone that has used a desktop CO2 laser knows they are not the fastest thing around; an engraving project’s completion time is measured in minutes.  When marking chips, speed and accuracy are of paramount importance.  Rofin advertises their lasers as such: “Our semiconductor marking solutions achieve marking speeds up to 1600 characters/second. Even at a character height of 0.2 mm and line widths of less than 30 µm they still ensure best readability.”

Package with laser settings engraved

Here we have a test chip package from Intel, marked up by Rofin; there are tests of the 3D bar code, lot numbers, s-specs and others.  There are also some calibration markings; it is useful to engrave the settings used for the test as part of the test itself.  In this case we see 25k, 650mms and 23.8A.  These are 3 of the fundamental settings for the laser system.  25k is the pulse rate (25KHz) of the laser; 650mms is the speed, or feed rate, 650mm per second (about 2ft/sec), a relatively slow speed, but probably one step in the calibration process.  The 23.8A is the current for the laser, in amps.  That’s a rather high current compared to, say, a continuous wave CO2 laser, which runs currents in the milliamps, but these are pulsed lasers, so that current is only needed for a fraction of a second.
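Those two numbers together tell you roughly how densely the pulses land on the package, which is exactly the sort of thing being dialed in during calibration.  A rough back-of-the-envelope in Python, assuming the pulse rate and feed rate quoted above:

pulse_rate_hz = 25_000     # the "25k" engraved on the package
feed_mm_per_s = 650        # the "650mms" feed rate

pulses_per_mm = pulse_rate_hz / feed_mm_per_s
print(round(pulses_per_mm, 1), "pulses per mm of travel")   # ~38.5
# At a 0.2 mm character height that is dozens of overlapping pulses
# per stroke, enough to form a continuous-looking mark.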

Marking can also be done on the die itself.  Here we see a sample

Flip chip marking marketing sample by ROFIN SINAR in Tempe, AZ

(probably an actual marketing sample given away to customers) of a flip chip die, with ROFIN SINAR markings on it, and even their phone number for their location in Tempe, AZ (only a few miles from several fabs in Chandler, AZ, including Intel and Motorola (now NXP)).

As chips become smaller, marking technology continues to evolve with them.  Markings today have become much less about what the consumer sees, and much more about traceability and trackability: being able to follow a device through the supply chain, or trace a defective device back to when and where it was produced.  Marking enhancements also play a great role in combating counterfeiting, helping keep counterfeit parts out of the supply chain.

There is a lot that goes into designing, making, assembling and even marking a computer chip, and oftentimes the things that seem simplest, such as placing markings on a chip, are anything but simple, and are just as important as the fabrication of the die itself.

Posted in:
CPU of the Day