October 12th, 2018 ~ by admin

Xilinx gets ARMed up for Free

Xilinx Virtex-II Pro FPGAs from the 2000s included embedded PowerPC processor cores.

Recently ARM announced they would be providing IP for the Cortex-M1 and M3 cores for free to users of Xilinx FPGAs.  The Cortex-M1 and M3 are some of the most basic ARM cores, taking roughly 12,000-25,000 gates for the von Neumann architecture M1 and around 43,000 for the full-up Harvard architecture M3 (with full Thumb instruction set support).  Xilinx already offers FPGAs/SoCs with built-in ARM cores; the Zynq series is available with a variety of high-end ARM cores such as the Cortex-A53 and the real-time focused Cortex-R5.  These are fairly high gate-count, and high cost, cores, whereas the M1 and M3 cores are being provided without a license fee and without any royalties.  Drop the IP into your FPGA design and go.

ARM and Xilinx say this is to meet the needs of their customers, who want to be able to use the same ARM architecture in their FPGA designs as in their ASICs and other products, at the lowest investment in time and cost.  This certainly makes sense; a free ARM core is better than a low-cost ARM core, and removing the 'paperwork' hassle helps.  But that's probably not the only reason ARM is doing this, and doing it specifically for Xilinx.

There are a couple of other things at play here.  The ARM Cortex-M cores are basic RISC processors, used when you just need to get some basic processing done: no frills, low power, and easy to use.  It turns out that's a market now seeing competition from SiFive's RISC-V cores.  These are basic, easy-to-use RISC cores, synthesizable into ASICs and FPGAs, that come with a one-time, low-cost license fee and no royalties.  They are being used by such heavyweights as Nvidia, and could threaten the Cortex-M domain, so it makes sense for ARM to offer what is essentially its introductory processor core for free, as a way to sway people toward the ARM ecosystem.  But why Xilinx?

Perhaps Xilinx is just the start of ARM's plans.  Xilinx is one of the biggest providers of FPGAs in the world, so a free core there will certainly help keep people in the ARM ecosystem.  Xilinx, in fact, already has drop-in soft processor cores of its own design available to all its customers: the 32-bit MicroBlaze and the 8-bit PicoBlaze.  There are also drop-in 80C186 cores, MCS-51 cores, the LEON SPARC core and many others.  The other big name in FPGAs is Altera, a company that has competed with Xilinx for the better part of 30 years and was, in June of 2015, bought by none other than Intel.

Altera has had a close relationship with Intel since the 1980s, when Intel first started assisting Altera with fabbing their PLDs.

This gave Altera greater access to Intel's fab and engineering prowess, but also to all of Intel's IP.  Is Intel going to offer free ARM cores on Altera FPGAs (the Stratix/Arria series does include hard Cortex-A9/A53 cores already)?  It seems unlikely that they would work to support their architectural competitor any more than they have to.  It is more likely that Intel would offer some form of 32-bit x86 processor core for their FPGAs.  Now x86 isn't exactly known for low gate counts, but it is possible.  Current soft-core 8086 and 80186 processors (the Turbo86 and Turbo186) are 22,000 and 30,000 gates respectively, really a rounding error in FPGAs that now have millions of gates.  More and more, FPGAs are becoming less FPGA-like and more 'configurable processor'-like.

September 30th, 2018 ~ by admin

Peavey and the Motorola DSP56000

Motorola XSP56001ZL20 – 20.5MHz 1990

In 1985 Motorola was looking to create a DSP (Digital Signal Processor) line of processors to go with their very popular 68000 series of general purpose processors.  DSPs are similar to a normal processor but, as their name implies, are designed to work on signals rather than on data stored in memory.  Typical signal data is audio, video, RF (such as RADAR information) and anything else that comes in via an ADC.  These signals are processed via algorithms such as FFTs (Fast Fourier Transforms) to manipulate, change or analyze them.  In audio, this can be used for cleaning up an audio stream, adding effects to it, or even generating audio.
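As a rough illustration of that kind of workload (and only an illustration, in Python rather than DSP56000 assembly), the sketch below synthesizes a short sampled-audio frame and runs an FFT over it to pick out the dominant tone; the sample rate, frame length and test frequency are arbitrary choices, not anything from the Peavey design.

```python
# Minimal FFT sketch of the kind of analysis an audio DSP performs;
# all numbers here are arbitrary and purely illustrative.
import numpy as np

fs = 48000                              # sample rate, Hz
n = 4800                                # one 100 ms frame -> 10 Hz resolution
t = np.arange(n) / fs
# a 440 Hz tone plus a little noise, standing in for data from an ADC
signal = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(n)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
freqs = np.fft.rfftfreq(n, d=1 / fs)
print(f"dominant component: {freqs[spectrum.argmax()]:.0f} Hz")   # ~440 Hz
```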

In the 1980s the main single-chip DSP competitors were the still-in-use TI TMS320 series, the AT&T/WE DSP16 series, and some DSPs from OKI/NEC.  When Motorola began work on what would become the DSP56000, they asked one of their long-time customers, Peavey, what they would like to see in a DSP.  Peavey is an audio equipment manufacturer, making such things as guitar amps and keyboards, so they would have a good idea of what would be useful in a DSP designed for audio signals.

These were packaged in a ‘SLAM’ package. The contacts/traces were easily damaged by leaking batteries.

The DSP56000 is a 24-bit processor made on a 1.5u HCMOS process with around 150,000 transistors.  24 bits were selected as that was ideal for audio sampling at the time (most ADCs/DACs of the era maxed out at 20 bits of resolution anyway).  These DSPs had a 3-stage pipeline and ran at 20.5MHz, 27MHz and 33MHz.  Since most instructions complete in two clock cycles, this provided around 10.25 MIPS of performance (at 20.5MHz).  They were a fixed-point design (no floating point support in hardware), which was adequate at the time.  A total of 62 instructions were provided.
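For readers unfamiliar with fixed-point math, the short Python sketch below models the 24-bit fractional (Q1.23-style) multiply such a DSP performs, with values scaled by 2^23; it deliberately ignores the real 56000's 56-bit accumulator, extension bits and rounding modes, so treat it as a rough sketch rather than a model of the actual hardware.

```python
# Rough model of 24-bit fractional (Q1.23) fixed-point arithmetic, the kind
# of number format a fixed-point audio DSP works in.  This ignores the real
# DSP56000's 56-bit accumulator and rounding behavior.
FRAC_BITS = 23
SCALE = 1 << FRAC_BITS                  # 2**23

def to_q23(x: float) -> int:
    """Convert a float in [-1.0, 1.0) to a 24-bit fractional integer, with saturation."""
    return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

def q23_mul(a: int, b: int) -> int:
    """Multiply two Q1.23 values and keep the top 24 bits of the 48-bit product."""
    return (a * b) >> FRAC_BITS

gain, sample = to_q23(0.5), to_q23(-0.25)
print(q23_mul(gain, sample) / SCALE)    # -0.125
```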

The DSP56001 is identical to the DSP56000 except that it has 512×24 bits of on-chip program RAM instead of 3.75K of program ROM, plus a 32×24-bit bootstrap ROM for loading the program RAM.  This is the version that became most popular.  Peavey used the 56001 (3 of them actually) to power the DPM3 SE keyboard back in 1990.  Recently J. Acorn, from Crasno Electronics in Canada, sent The CPU Shack Museum an e-mail inquiring if I had a few of these now obsolete 56001 DSPs spare, to rebuild some dead Peavey keyboards.  As a Museum, I not only like to collect and present vintage ICs but also regularly help people with projects such as this, and have thousands of CPUs sitting around that have been acquired through the years (really it's a bit crazy how much I have collected lol).  Mr. Acorn needed 2 of these DSPs to replace ones destroyed by a leaking battery in a keyboard, and two is exactly what I had spare.  I dug them out, packaged them, and off to Canada they went.  The result?  A restored and working Peavey keyboard.  You can read about the restoration process on Crasno's site.

The 56000 series continued to be made by Motorola (and then Freescale) up until 2012, when it was announced it would be discontinued as a standalone product.  The 56000 series cores, though, live on inside other Freescale (now NXP) products.

 

Posted in:
CPU of the Day

August 25th, 2018 ~ by admin

CPU of the Day: FOCUS on 32-bits

1983 HP FOCUS Board set – Pre-FPU. Top left: Memory. Top right: I/O. Bottom center: CPU.

The year is 1981.  Intel is making the 8/16-bit 8086/8088, and Motorola has released the 16/32-bit 68000 processor to much fanfare.  Motorola marketed it as the first 32-bit processor, but while it supports 32-bit instructions/data it does so with a 16-bit ALU.  HP used the MC68000 in their 9000 Series 200 line of computers, providing rather good performance for 1981.  But this was the 1980s and HP wasn't satisfied with good; they wanted more.  They wanted to implement a full 32-bit computer on something less than the 5,000 ICs typically used to implement one at that time.  This meant making a processor like nothing else before, something with more than the 68,000 transistors of the MC68000 or even the 134,000 transistors of the new 286 Intel had announced.  What HP made is simply remarkable: in 1981 they announced the HP 9000 Series 500 computers, powered by an all-new, fully 32-bit processor called FOCUS.  FOCUS was made on HP's high-density NMOS-III process, a 1.5u process, and used 450,000 transistors.  That's 450,000 transistors on a single 40.8mm2 piece of 1.5u silicon in 1981, a smaller die than the Intel 286.

Read More »

Posted in:
CPU of the Day

August 15th, 2018 ~ by admin

CPU of the Day: The 61 Knights of the Intel Xeon Phi

Xeon Phi – Knights Corner – Engineering Sample

In June of 2013, 20 years after the release of the Intel Pentium processor, Intel released a new processor, technically a co-processor, that Intel referred to as a MIC (Many Integrated Core).  It was branded as a Xeon, specifically the Xeon Phi 7000 series, but at its core it was nothing like a Xeon of 2013.  Code-named Knights Corner, it built on the earlier Knights Ferry.  Knights Ferry used many Larrabee GPGPU cores and was not designed as a commercial product.  Knights Corner, however, was, and to do so Intel stuck with an architecture that customers were very familiar with: x86.  Knights Corner integrated 61 Pentium P54CS cores onto a single chip.  The original Pentium P54CS was made on a 0.35u process and topped out at 200MHz.  It included 16K of L1 cache on die, and typically 256-512K of L2 cache off chip.  The implementation of the Pentium on the Phi gets a bit of an upgrade.  The cores are made on a 22nm process (16 times smaller) and clocked at up to 1.2GHz.  L1 cache has been increased to 64K per core (32K instruction, 32K data).  L2 cache remains at 512K

Knights Corner Die. – 62 Cores – 8 GDDR5 Memory Controllers

per core, but at 22nm, integrating all 30.5MB of cache on the same die becomes relatively easy.  The biggest change to the cores is the addition of 64-bit instruction support, as well as a new execution unit called the VPU.  The VPU (Vector Processing Unit) has its own 512-bit wide SIMD instruction set, integer support, fused multiply/add, and other advanced features that are more commonly found in GPUs.  The VPU is the result of Intel's work with Larrabee, the precursor to Knights Corner.  Interestingly, MMX/SSE are not supported by the cores natively; this is handled in software (using virtualization) leveraging the VPU included with each of the 61 Pentium cores.  With the VPU, each core has 4 execution units (VPU, FXU, and 2 integer units).  This allows the cores to support 4-way multi-threading; in practice, 2 threads are most common, as 2 execution units are usually tied up calculating memory addresses.

Knights Corner Sample – This is a 1.09GHz part while production versions were bumped to 1.1GHz – Elpida 2Gbit GDDR5 RAM chips surround the core.

For some reason Intel was very vague about die size/transistor count information on the Phi.  Many sources claim a 350mm2 die with 5 billion transistors.  Taking apart a Phi shows that the die is actually much larger.  In fact the Xeon Phi die is 705mm2 and has 5.1 billion transistors.  A 22nm Haswell Xeon with 18 cores has a die area of 622mm2 containing 5.6 billion transistors.  This means the Xeon Phi die wasn't the most efficient in its use of space, likely due to the amount of room needed for the very large rings used to connect all the cores.  Looking at the die you can also see a lot of unused space.  There are actually 62 cores per die (with only 61 used at most).  That means 31MB of L2 cache, which at 6 transistors per cell (bit) accounts for about 1.5 billion of the transistors.  L1 cache is 64K per core, so another 190 million transistors there.  That leaves the bulk of the die for the cores, the memory controllers, and the 3 interprocessor communication rings that handle communication between the cores, the MCs (8 GDDR5 memory controllers per die), and the outside world.
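Those cache figures are easy to sanity-check; the sketch below simply redoes the arithmetic (62 physical cores, 512K of L2 and 64K of L1 per core, 6 transistors per SRAM cell) and ignores tag, ECC and redundancy overhead, so it is an approximation rather than Intel's own accounting.

```python
# Back-of-the-envelope check of the cache transistor counts quoted above.
cores = 62                        # physical cores per die (61 usable)
l2_bits = cores * 512 * 1024 * 8  # 512K of L2 per core -> 31MB total
l1_bits = cores * 64 * 1024 * 8   # 64K of L1 (32K I + 32K D) per core
transistors_per_cell = 6          # standard 6T SRAM cell

print(f"L2: ~{l2_bits * transistors_per_cell / 1e9:.2f} billion transistors")  # ~1.56
print(f"L1: ~{l1_bits * transistors_per_cell / 1e6:.0f} million transistors")  # ~195
```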

Each Xeon Phi board includes the processor, as well as 6-16GB of GDDR5 memory (8GB on the Engineering Sample here).  Memory is handled by 32 Elpida EDW2032BBBG-6 2Gbit GDDR5 6 Gbps chips.  This gives the card 352 GB/s of memory bandwidth and 1 TFLOPS of computing performance, all in a PCI-E card that dissipates around 300W.  Card/system management is provided by an NXP LPC2365FBD100 72MHz ARM7TDMI processor.
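Both headline numbers can be roughly reconstructed from the specs above.  The sketch below assumes each core retires one 8-wide double-precision fused multiply-add per cycle (16 DP FLOPs per cycle) and that the 8 memory controllers present a 512-bit aggregate GDDR5 bus running at 5.5 GT/s; those assumptions match commonly cited Knights Corner figures, but they are my back-of-the-envelope inputs, not numbers taken from Intel documentation.

```python
# Rough reconstruction of the 1 TFLOPS and 352 GB/s figures, under the
# assumptions described above (8-wide DP FMA per cycle, 512-bit bus at 5.5 GT/s).
cores, clock_ghz = 61, 1.1
dp_flops_per_cycle = (512 // 64) * 2       # 8 doubles per 512-bit vector, FMA = 2 ops
print(f"~{cores * clock_ghz * dp_flops_per_cycle / 1000:.2f} TFLOPS peak (DP)")  # ~1.07

bus_bytes, rate_gts = (8 * 64) // 8, 5.5   # 8 controllers x 64 bits wide, 5.5 GT/s
print(f"~{bus_bytes * rate_gts:.0f} GB/s peak memory bandwidth")                 # ~352
```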

Knights Corner Xeon Phi with cooler removed. 16x 2Gbit GDDR5 (+16 on the back)

In January of 2013 the Texas Advanced Computing Center in Austin, TX announced the Stampede supercomputer, the first large-scale deployment of Xeon Phi processors.  It used 6,880 of them in its 6,400 compute nodes and could hit nearly 10 PFLOPS of performance.  In June of 2013 the Chinese supercomputer Tianhe-2 became the fastest supercomputer in the world, a title it held until the end of 2015.  It was powered by 32,000 Intel Xeon E5-2692 2.2GHz 12-core Ivy Bridge processors and a massive 48,000 Xeon Phi co-processors, resulting in over 33 PFLOPS.

Tianhe-2 supercomputer with 48,000 Knights Corner processors.

Intel made a successor to Knights Corner, known as Knights Landing, that was based on the Atom core, but then began to wind down the project.  Avinash Sodani, chief architect of the Knights Landing chip, took a job at Cavium Networks (who make multicore MIPS networking processors), and Intel then hired Raja Koduri, the chief architect of AMD's GPUs.  Intel's future seems to be one based on Xeons, and GPUs.

Like the knights of old, the Xeon Phi has been passed up by other technologies: certainly still useful, but destined for the halls of museums and history books.  It came, it conquered the Top500 supercomputer list, and then it quietly faded away.  On July 27th Intel quietly announced the discontinuation of the Xeon Phi line, with last orders accepted at the end of August 2018.

 

 

July 23rd, 2018 ~ by admin

A Sampling of Sample Processors

AMD K6-2 Marketing Sample

During the development of almost any given processor, many chips are produced before it is released for commercial use.  These pre-production chips serve a wide variety of purposes in the design and debugging of the processor, to ensure that the final CPU works well, sells well, and is compatible with all the vendors' parts (motherboards, cooling solutions, power supplies, etc.).  These chips are generally referred to as samples, and there are several types of them.  We'll use Intel/AMD as the main examples, but most processor companies work in similar ways.

When a processor design is first being developed, the package for it is often being developed as well: what will the new processor's silicon die reside in?  How many pins?  How will it dissipate heat?  This type of testing is often handled with Mechanical Samples.  Mechanical Samples are exactly what they sound like; they test the mechanical aspects of the processor, the physical fit of it.  These are often sent to board/socket manufacturers to ensure the processor will fit in sockets/boards, and with the automated equipment used to build systems.  Cooling solution companies may also receive these to test how a heatsink fits on the CPU.  Mechanical Samples may not contain a die at all, or may be chips that tested as bad, or simply untested chips (Intel used a lot of untested Mechanical Samples in their educational kits).

Thermal Sample for the LGA2011 Sandy Bridge Xeon

The next samples typically made are Electrical/Thermal Samples.  These again do not have an actual processor die in them, but they do work electrically.  Electrical/Thermal Samples are used to test the power draw and heat dissipation of a processor.  They often use a daisy-chain transistor design, which serves to draw/dissipate power.  If a processor is expected to dissipate 135W of heat, a Thermal Sample can be made to draw/dissipate exactly that.  These can test the power supplies on motherboards, as well as the heat dissipation abilities of cooling solutions.  Some Thermal Samples have a temperature sensor added directly to the package to help see what temperatures they reach.  Electrical Samples and Thermal Samples can also be used as purely Mechanical Samples, and this is sometimes seen marked on the sample.

The first samples made that actually contain a functioning processor die are Engineering Samples.  Engineering Samples (also known as ES) are the most well known samples.  Overclockers often like to find ES CPUs as they can allow for easier overclocking, since some do not have a locked multiplier.  Engineering Sample CPUs themselves come in several types as well.  Usually the first run is known as ES1; these can be thought of as an 'Alpha' version.  They are very likely to be buggy, and rarely run at the same speed as a production chip would.  These exist to test the overall processor design, or some subset of it.  Some are made to test just one part of the CPU, for example the memory controller, or the cache.  Later versions of

Motorola PowerPC 8260 Engineering Sample (note the PPC prefix)

Engineering Samples are often called 'ES2.'  These processors are getting closer to final production and are a lot less buggy; these would be considered 'Beta' samples.  Most of the time these are quite usable chips, and often are very similar in clock speed/features to a production processor.  Intel denotes these chips with a Q-spec (such as QBGC) rather than the S-spec (such as SL5G8) given to production processors.  AMD typically uses part numbers starting with '1' for ES1 CPUs and '2' for ES2 CPUs (such as Opterons 1S160805L4BGC or 2S16….).  Other companies have similar methodologies.  Motorola (Freescale) used the PPC prefix for most ES CPUs, and Texas Instruments uses 'TMP' (not to be confused with Toshiba, who also uses the TMP prefix, but for processors in general).  Once a company is fairly confident a design is ready for release, one final version is made.

These are known as Qualification Samples (QS).  QS processors almost always have a one-to-one equivalence with a production part, since that is their purpose: to make sure the design is ready for release.  These are by far the most widely made samples, as they are shipped by

Alchemy Au1000 MIPS Processor – Qualification Sample

the thousands to vendors, system builders/integrators, and even media outlets for review.  The hope is that nothing major is found wrong with them, and that any bugs that are found can be dealt with in software or firmware, not requiring an entire silicon fix.  Intel continues to use Q-specs for these as well, leading to some confusion with the previously mentioned ES CPUs.  AMD usually uses part numbers beginning with 'Z' for QS CPUs and, like Intel, does not offer these CPUs for sale to the general public; they are either given to vendors or sold exclusively to them for testing.  Motorola uses XC or XPC for these and, unlike AMD/Intel, mass produces and sells them, often for years, before deciding that a part/design is truly fully qualified/characterized (in which case the prefix is changed to MC or MPC).  Texas Instruments uses the 'TMX' prefix for their Qualification Samples and tended to make/sell them like Motorola did, changing the prefix to TMS for fully qualified production parts.

Read More »

July 3rd, 2018 ~ by admin

CPU of the Day: The Intel Everest Series

Mt. Everest – Tallest on Earth

Mt. Everest is the tallest mountain here on Earth, the pinnacle of climbing challenges.  There is no going higher than Mt. Everest.  At Intel, the pseudo-unofficial codename for the absolute fastest speed bin of a particular processor is…Everest.  Everest processors are the fastest an architecture will go reliably.  Sometimes these processors end up as normal products, available for consumers to purchase.  The first good example of this is the Core 2 Extreme QX9775, a Yorkfield core (Core architecture).  It was a quad-core processor running at 3.2GHz, fast but not mind-blowingly so.  The Xeon equivalent was the X5492 (Harpertown), a 4-core at 3.4GHz.

Xeon X5698 – Westmere – 4.4GHz – Mid 2010

The next well-known Everest was a chip based on the Westmere (a shrink of Nehalem) architecture.  The Westmere Everest became known as the Xeon X5698, and was available to OEMs only; in fact it was a special-order processor made with one particular type of client in mind.  These were to be used by high-frequency stock traders, and for other such high-speed transactional processing, where the ability to complete trades as quickly, and as reliably, as possible is the entire nature of the business.  This means that single-thread performance is far more important than having multiple cores, and as such, the X5698 uses a 6-core die with only 2 cores active, but retaining access to the entire 12MB of L3 cache.  Clock speed was fixed at 4.4GHz; the cores did not reduce frequency as processing demands changed, as this would introduce uncertainty in how fast a given task would complete.  Doing task 'X' should take a predictable amount of time and not depend on what speed the processor chose to run at.  The next fastest Westmere processor was the X5690, a 6-core (all cores enabled) running at 3.46GHz (essentially the same chip as the Core i7 990X).  The X5698 was nearly 1GHz faster.  The X5690 cost around $1800, whereas the X5698 cost around $20,000 EACH (based on the cost OEMs charged to add a second one, so they may have marked it up some).  The impressive thing is that these chips could go faster.  Intel sampled 4.66GHz versions and Supermicro built systems using X5698s overclocked to 4.8GHz.  All this back in 2011.

4.4GHz Jaketown (Sandy Bridge) Everest Sample 2010-2011

Intel's next architecture was known as Sandy Bridge.  Sandy Bridge topped out at 3.5GHz (6 cores) for the Core i7 Extreme 3970X and 3.6GHz for the 4-core i7-3820 and the similar Xeon E5-1620.  Intel demo'd an air-cooled Sandy Bridge running on stage for a presentation at 4.9GHz, so the core certainly had some room to spare.  There is no documentation (that I could find) that Intel actually released anything faster than 3.6GHz, but evidence suggests that they at least were thinking about it.  The picture is a Sandy Bridge Xeon in LGA2011 marked JKT EVEREST SS 4.4GHZ INTERNAL USE ONLY.  JKT is short for Jaketown, Intel's codename for the 32nm Xeon E5-2600 series.  That gives a very good idea what this processor was to be.  SS is likely Single Socket (as often at those speeds getting dual-socket systems working can be tricky).  Sandy was certainly capable of hitting 4.4GHz with 4 cores, even on air cooling, so perhaps these were samples for a limited OEM run, much like the previous Westmere X5698 processors.

Read More »

June 26th, 2018 ~ by admin

Image Gallery Thumbnails working again..

Good news! The issue with the thumbnails not working in the galleries here has been resolved.  They aren't super fast to load, but they do work.  This means I can start uploading a few thousand new ones.

Posted in:
Museum News

June 10th, 2018 ~ by admin

The Collector’s Guide to Vintage Intel Microchips

The CPU Shack Museum is proud to announce the availability of The Collector's Guide to Vintage Intel Microchips, written by George Phillips Jr.  This e-book (PDF) contains over 1300 pages and 900 photographs of Intel microchips from the 1960s and 70s, along with their functions, package variations, rarity, and valuations.  Everything from the 3101 Static RAM to the i4004 4-bit processor.  The author, George Phillips, has moved this book into the public domain.  Originally published back in 2007, it is still a very useful resource.  Being over 10 years old, some of the values are inaccurate, and there have been a few more Intel chip types from the 1970s found since then, as well as some different package/marking variations.  However, the Guide is really an important resource for any collection that includes Intel ICs from the 1970s.

I have been collecting information for an update to it for some time, so if you have any Intel chips/variations not in this guide, feel free to let me know.

You can download The Collector’s Guide to Vintage Intel Microchips here (pdf 22.9MB)

Posted in:
Products, Research

May 27th, 2018 ~ by admin

Mainframes and Supercomputers, From the Beginning Till Today.

This article is provided by guest author max1024, hailing from Belarus.  I have provided some minor edits/tweaks in the translation from Belarusian to English.

Mainframes and Supercomputers, From the Beginning Till Today.

Introduction

We all have computers that we like to use, but there are also more powerful options in the form of servers with two or even four processor sockets.  Then one day I wondered: what is even faster?  The answer to my question led me to a separate class of computers: supercomputers and mainframes.  In this article I will try to explain how this class of computing equipment developed, what it was like in the past and what it has achieved now, what performance figures it reaches, and whether it is possible to use such machines at home.

FLOPS’s

First you need to determine how a supercomputer differs from a mainframe and which is faster.  Supercomputers are, by definition, the fastest computers.  Their main difference from mainframes is that all of the computing resources of such a machine are aimed at solving one large problem in the shortest possible time.  Mainframes, on the contrary, solve a lot of different tasks at once.  Supercomputers sit at the very top of any computer chart and as a result are faster than mainframes.

The need for mankind to quickly solve various problems has always existed, but the impetus for the emergence of superfast machines was the arms race between the well-known superpowers and the need for nuclear calculations, for the design and modeling of nuclear explosions and weapons.  Creating an atomic weapon required colossal computational power, since neither physicists nor mathematicians were able to calculate and make long-term forecasts from such colossal amounts of data by hand.  For such purposes, a computer “brain” was required.  Later, the military applications smoothly gave way to biological, chemical, astronomical, meteorological and other uses.  All this made it necessary to invent not just a personal computer but something more, and so the first mainframes and supercomputers appeared.

The production of ultrafast machines began in the mid-1960s.  An important criterion for any machine was its performance, and here everyone speaks of the well-known abbreviation “FLOPS”.  Most of those who overclock or test processors for stability have likely used the utility “LinX”, which gives its final performance result in Gigaflops.  “FLOPS” means FLoating-point Operations Per Second; it is a system-independent unit used to measure the performance of any computer, showing how many floating-point arithmetic operations per second the given computing system performs.

“LinX” is a front end for the “Intel Linpack” benchmark with a convenient graphical environment, designed to simplify checking the performance and stability of a system using the Intel Linpack (Math Kernel Library) test.  In turn, Linpack is the most popular software product for evaluating the performance of the supercomputers and mainframes included in the TOP500 supercomputer ranking, which is compiled twice a year by specialists in the United States from the Lawrence Berkeley National Laboratory and the University of Tennessee.

When comparing results in Mega-, Giga- and TeraFLOPS, it should be remembered that the performance results of supercomputers are always based on 64-bit (double precision) processing, while in everyday life processor and graphics card makers may quote performance on 32-bit data, so the result can appear to be doubled.
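A machine's theoretical peak is usually just sockets × cores × clock × FLOPs-per-cycle, with the per-cycle figure being where the 64-bit versus 32-bit difference hides (a vector unit fits twice as many 32-bit values as 64-bit ones).  The sketch below shows the idea with made-up numbers; the 8-FLOPs-per-cycle figure is an illustrative assumption, not the spec of any particular machine.

```python
# Theoretical peak throughput: sockets x cores x clock x FLOPs-per-cycle.
# The per-node figures below are illustrative assumptions only.
def peak_gflops(sockets: int, cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    return sockets * cores * clock_ghz * flops_per_cycle

dp = peak_gflops(2, 12, 2.2, 8)     # 64-bit (double precision) math
sp = peak_gflops(2, 12, 2.2, 16)    # 32-bit lanes: twice as many fit per vector
print(f"double precision: ~{dp:.0f} GFLOPS, single precision: ~{sp:.0f} GFLOPS")
```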

The Beginning

Read More »

April 11th, 2018 ~ by admin

PowerPC Processor for TESS Planet Hunter – Updated

TESS Orbiter – Freescale (now NXP) P2010 PowerPC e500

UPDATE: I received a note from a NASA engineer that the final flight DHU was made by SEAKR Engineering rather than Space Micro.  It turns out MIT pursued 2 different DHU systems in the design of TESS.  The Space Micro IPC 7000 was referred to as the DHU, and a system by SEAKR (the Athena-3) was selected as the ADHU (Alternate Data Handling Unit).  Apparently MIT wasn't sure which would be best, so it essentially characterized both (and most documentation from early on shows the Space Micro system).  In the end, however, the SEAKR Athena-3 single board computer was selected.

If all goes well, in a few days NASA's TESS (Transiting Exoplanet Survey Satellite) will be launched on a SpaceX Falcon 9 rocket to start its mission to survey a large portion of the sky for possibly Earth-like planets.  TESS's finds will make great candidates for further study by either Hubble or JWST (when it finally launches).  While TESS can see transiting planets (the dimming of a star as an exoplanet passes in front of it), it cannot determine much about their composition, or the composition of their atmospheres.  However, having a list of exoplanets to further check out, especially Earth-sized ones, is a big help.  TESS was created as part of the NASA Medium Class Explorers Program (MIDEX), which is for missions up to around $200 million total cost to NASA (not including launch).  TESS itself cost about $75 million (developed in large part by MIT and built by Orbital-ATK on their LEOStar-2 platform) and the launch services contract was $87 million, with the remainder taken by operations and contingency funding.

Space Micro Proton 400k with Freescale P2020 processor

That makes this one of the least expensive NASA missions, but one that has engendered much more public interest than its cost suggests.  Finding alien worlds captivates people's hearts and minds.  So what is at the heart of the TESS orbiter?  Obviously the premier technology is its 4 cameras that will scan the sky, but the computer that powers them is no less interesting.

The 4 cameras are interfaced to a Data Handling Unit (DHU).  Initially the DHU was to be the Space Micro IPC-7000 computer.  The IPC-7000 consists of a TI TMS320C67xx 32-bit DSP and a pair of Xilinx Virtex-7 FPGAs.  They handle all the pre-processing of the imagery collected by the cameras, putting it into a format that is easily transmitted back to Earth.  The rest of the spacecraft functions (such as actually sending/storing the data and other spacecraft housekeeping) are handled by a Space Micro Proton 400k SBC.  The Proton 400k is based on a Freescale P2020 1GHz dual-core PowerPC processor made on a 45nm process.  Each PowerPC e500v2 core has a 7-stage pipeline with 32K of I-cache and 32K of D-cache, and the cores share a single 512K L2 cache.  The computer also contains a pair of 192GB solid state memory boards for buffering imagery data (data is relayed to Earth only once per orbit, so it needs to store data from around 14 days).

Athena-3 SBC – Powered by a 1.067GHz Freescale P2010 Processor

The final flight version of TESS switched to an ADHU made by SEAKR Engineering.  This uses a very similar setup but a somewhat less powerful processor.  The heart of the ADHU is the Freescale P2010 e500 processor at 1066MHz with 1GB of DDR2 RAM and 1-4GB of Flash.  This is the single-core version of the P2020 used in the initial Proton 400k.  The ADHU also includes an RCC5 triple Xilinx Virtex-5 FPGA board to handle additional camera processing functions (and anything else not handled by the P2010 processor).  Solid state storage is a Gen 3 FMC, also by SEAKR, containing 3 boards with a total of 192GB of Flash.  The ADHU handles all of the science, processing the raw camera data into useful science data and handling the sending of data to the 100-125Mbit/sec Ka-band transmitter.  It also supplies some star reference information used by the MAU (Master Avionics Unit) computer to provide finer attitude control of the satellite.  The MAU is the LEOStar-2 satellite's main computer, and handles all the mechanics of flying the spacecraft outside of the science work done by the ADHU.

Freescale P2020 Processor

In many ways this is a very advanced processor compared to the RAD750 processors we often see on large-scale NASA missions.  The Freescale P2020/P2010 is not an inherently radiation-hardened design; however, both Space Micro and SEAKR implement many radiation-mitigating measures in the system design to compensate for this.  It is not as robust as the RAD750, but this is a $75 million Earth-orbiting satellite with a target mission life of 2 years, so it doesn't need to be.  The P2020 processor does give TESS tremendous processing power for a scientific satellite, allowing for a lot of pre-processing of the imagery.  This allows TESS to handle much of the grunt work and send scientists here on Earth only the very best data, in a format that is the most useful to them.