Archive for the 'Processor News' Category

February 27th, 2014 ~ by admin

The Unlikely Tale of How ARM Came to Rule the World

Bloomberg Businessweek recently published an interesting article on ARM’s rise to power in the processing world.  Their first major design ‘win’ was a failed product known as the Apple Newton, yet they would go on to become a powerhouse that is now challenging Intel.

In ARM’s formative years, the 1990s, the most popular RISC processor was the MIPS architecture, which powered high-end computers from SGI, while Intel made supercomputers (the Paragon) based on another RISC design, the i860.  Now, nearly two decades later, after Intel abandoned its foray into the ARM architecture (StrongARM and XScale), RISC is again challenging Intel in the server market, this time led by ARM.

MIPS, now owned by Imagination, is again turning out new IP cores to compete with ARM and other embedded cores.  Their Warrior-class processors are already providing 64-bit embedded processing power, though with a lot less press than the likes of Apple’s A7.

January 20th, 2014 ~ by admin

Welcome Back Rosetta: The Dynex MAS31750 Awakens

Rosetta Comet Chaser – Dynex 1750

The ESA’s comet chaser Rosetta has just today awoken from a long, deep sleep on its comet chasing (and landing) mission.  The solar-powered spacecraft was launched back in 2004.  It is based on the Mars Mariner II spacecraft design (itself based on Voyager and Galileo) of the early 1990s, when the mission was first conceived.  The main differences include using very large solar arrays in place of an RTG (Radioisotope Thermoelectric Generator) and upgraded electronics.

In order to conserve power on its outward loop (near Jupiter’s orbit), nearly all systems were put to sleep in June of 2011 and a task was set on the main computer to wake the spacecraft 2.5 years later and call home.  The computer in charge of that is powered by a Dynex MAS31750 16-bit processor running at 25MHz, based on the MIL-STD-1750A architecture.
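
The wake-up logic itself is conceptually simple: a clock keeps counting while nearly everything else is powered off, and when the preset time arrives the computer brings the rest of the spacecraft back up.  As a rough illustration only (this is a hypothetical sketch, not ESA’s flight software, and every routine named below is assumed), the idea in C looks something like this:

    #include <stdint.h>

    /* Hypothetical hibernation sketch, NOT ESA flight code (whose details are
     * not public).  Assumes a radiation-tolerant real-time clock and a few
     * stubbed-out spacecraft routines. */

    #define HIBERNATION_SECONDS ((uint64_t)(2.5 * 365.25 * 24 * 3600))  /* ~2.5 years */

    extern uint64_t rtc_seconds(void);            /* assumed: reads the onboard clock */
    extern void low_power_wait(void);             /* assumed: sleep until the next clock tick */
    extern void heaters_on(void);                 /* assumed: warm the transmitter and star trackers */
    extern void acquire_sun_and_call_home(void);  /* assumed: re-orient, then signal Earth */

    int main(void)
    {
        const uint64_t wake_time = rtc_seconds() + HIBERNATION_SECONDS;

        /* Everything except the clock and survival heaters is powered down here. */
        while (rtc_seconds() < wake_time)
            low_power_wait();

        heaters_on();
        acquire_sun_and_call_home();
        return 0;
    }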

A reader recently asked why such an old CPU design is still being used rather than, say, an x86 processor.  As mentioned above, the Rosetta design was begun in the 1990s, and the 1750A was THE standard high-reliability processor at the time, so it was not as out of date then as it seems now that it has been flying through space for 10 years (after 10 years in the clean room).  The 1750A is also an open architecture: no licenses are, or were, required to develop a processor that supports it (unlike x86).  Modern designs do use more modern processors, such as PowerPC-based CPUs like the RAD750 and its older cousin the RAD6000.  Space electronics will always lag current technology because of the very long lead times involved; a mission may spend 10 years in design on the ground before it flies, and the main computer is selected early on.  x86 is used in systems that 1) have plenty of power and 2) are somewhat easily accessible, notably the International Space Station and Hubble.  x86 was not designed with high reliability and radiation tolerance in mind, so other methods, in hardware and software, have to be used to ensure it works in space.
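
One common software-level technique (used alongside hardware hardening, and shown here purely as an illustration) is triple modular redundancy: keep three copies of a critical value and take a bitwise majority vote, so a single radiation-induced bit flip in one copy is out-voted by the other two.  A minimal sketch in C:

    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise majority vote of three copies: an upset in any one copy is
     * corrected because the other two copies out-vote it. */
    static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
    {
        return (a & b) | (a & c) | (b & c);
    }

    int main(void)
    {
        uint32_t copy1 = 0x1234ABCD;
        uint32_t copy2 = 0x1234ABCD;
        uint32_t copy3 = 0x1234ABCD ^ (1u << 7);   /* simulate a bit flip in one copy */

        /* Prints the original value, 0x1234ABCD, despite the corrupted copy. */
        printf("voted value: 0x%08X\n", (unsigned)tmr_vote(copy1, copy2, copy3));
        return 0;
    }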

Currently ESA designs around an open-source processor known as LEON, which is based on the SPARC V8 architecture.

November 19th, 2013 ~ by admin

MAVEN To Mars: Another BAE RAD750 CPU

MAVEN to Mars – RAD750 Powered

NASA has successfully launched the $671 million MAVEN mission to Mars for atmospheric research.  Like the Mars Reconnaissance Orbiter it is based on, its main computer is a BAE RAD750, a radiation-hardened implementation of the PowerPC 750 architecture.  This processor first flew on the Deep Impact comet mission and is capable of withstanding up to 1 million rads of radiation; the entire processor sub-system can handle 200,000 rads.  To put this in perspective, 1,000 rads is considered a lethal dose for a typical human, and it is likely far more than the Apple Power Mac G3, which first used the PowerPC 750 back in 1998, could survive.  The processor can be clocked at up to 200MHz, though it will often run slower to conserve power.

MAVEN should reach Mars within a few days of the Indian Space Research Organisation’s $71 million Mangalyaan orbiter, launched earlier this month.  MAVEN is taking a faster route, at the expense of a heavier booster and greater fuel consumption.  The Mangalyaan orbiter’s main processor is the GEC/Plessey (originally produced by Marconi, now Dynex) MAR31750, a MIL-STD-1750A processor system.

November 17th, 2013 ~ by admin

Itanium is Dead – And other Processor News

Itanium Sales Forecasts vs Reality

‘Itanium is dead’ is a phrase that has been used for over a decade; in fact, many claimed the Itanium experiment was dead before it even launched in 2001.  The last hold-out of the Itanium architecture was HP, likely because Itanium had a lot in common with its own PA-RISC.  However, HP has announced that it will be transitioning its NonStop server series to x86, presumably the new 15-core Xeons Intel is developing.  Itanium launched with the goal of storming the server market and was billed as the next great thing, but it failed to make the expected inroads, largely due to the two decades of x86 code it did not support and its poor initial compiler support.  Many things were learned from Itanium, so although it will become but a footnote, its technology will live on.

Interestingly, other architectures that seemed to be on the brink are getting continued support in new chips.  Imagination, known for its graphics IP, purchased MIPS and has now announced the MIPS Warrior P-class core.  This core supports speeds of over 2GHz and is the first MIPS core with 128-bit SIMD support.

Broadcom, historically a MIPS powerhouse, has announced a 64-bit ARM server-class processor with speeds of up to 3GHz.  It is perhaps ironic that ARM is now being introduced into the very market Itanium was designed for.  Broadcom holds an ARM architecture license, meaning it can roll its own designs that implement the ARM instruction set, much like Qualcomm and several others.

POWER continues to show its remarkable flexibility.  Used by IBM in its largest servers in the POWER7 and POWER8 implementations, it crunches data at speeds of up to 4.4GHz.  On the other end of the spectrum, Freescale (formerly Motorola, one of the developers of the POWER architecture) has announced the 1.8GHz quad-core QorIQ T2080 for control applications such as networking and other embedded uses.  These days the POWER architecture is not often talked about, at least in the embedded market, but it continues to soldier on and is widely used.  LSI has used it in their Fusion-MPT RAID controllers, Xilinx continues to offer it embedded in FPGAs, and BAE continues to offer it in the form of the RAD750 for space-based applications.

Perhaps it is this flexibility that keeps an architecture in use.  Itanium was very focused and did its one job very well; the same goes for the Alpha architecture and the Intel i860, all of which are now discontinued.  ARM, MIPS, POWER, x86 and a host of MCU architectures continue to be used because of their flexibility and large code bases.

So what architecture will be next to fall? And will a truly new architecture be introduced that has the power and flexibility to stick around?

September 18th, 2013 ~ by admin

Hold the Phone – Why Intel Making the A7 Might Not Be Awesome – Updated

Update: Sept 20th: It has been confirmed that the A7 is still made by Samsung, most likely on their 28nm high-k dielectric process (the same as the Exynos in the Galaxy S4). The M7 has also been confirmed to be an off-the-shelf NXP LPC1800 ARM Cortex-M3, running at up to 180MHz; nothing spectacular, and fairly common for sensor interfacing.  What does this mean for Apple? It means that if they can get that much performance out of Samsung’s 28nm process, then when and if they do switch to Intel, the possibilities are quite interesting.

However, it is still interesting to play ‘what if’, so the analysis below remains.

It has been rumored that Apple’s new A7 processor may be fab’d by Intel, rather than TSMC or Samsung.  Previous generations of the Ax have been fab’d by Samsung, and in July it was announced that TSMC had picked up an Apple contract.  Intel has in the last year begun to market, albeit quietly, its excess fab capacity.  This is an entirely smart move by Intel: it helps them use their multi-billion dollar fabs to the fullest, as well as test and experiment with other designs.

Apple using Intel as a contract fab makes sense, for Apple.  Intel has the best fab technology in the world, bar none.  Apple is less concerned with competing on price than with making the best devices possible.  To have the best devices you need the best (fastest and lowest-power) chips; to have the best chips you need the best processes, and that means Intel.  None of this is in question.  If the A7 is fab’d by Intel it will greatly help Apple maintain its market-leading position.  What is in question, however, is whether this is a ‘huge win’ for Intel.  One blog even called Intel making Apple chips “a hell of a score for Intel.”  In reality it will have little benefit to Intel, certainly not financially.  Let’s look at why.

Read More »

May 23rd, 2013 ~ by admin

Invisible Processors: They have us surrounded

Jack Ganssle wrote an article on the continued demand for 4- and 8-bit microcontrollers.  Every year the ‘experts’ and salespeople claim it will be the end of the 8-bit microcontroller, and companies have strived to build upgrade paths to 32-bit parts.  But the fact remains that basic microcontrollers, sold for pennies, are all that is needed for the majority of applications; applications we use every day without a single thought for the processor inside.

Ken Olsen, head of Digital Equipment Corporation, said in 1977 (six years after the first commercially successful microprocessor was introduced): “There is no reason anyone would want a computer in their home.”

Count the processors in your home and ponder that statement.  Unless you live in a cave, you do not have enough fingers and toes to count all the computers in your home.  It’s a good read; check it out.

January 12th, 2013 ~ by admin

The Intel 80186 Gets Turbocharged – VAutomation Turbo186

Original Intel 6MHz 80186 Made in 1984

2012 marked the 30th anniversary of the introduction of the Intel 80186 and 80188 microprocessors.  These were the first, and arguably only, x86 processors designed from the beginning as embedded processors.  They included many on-chip peripherals such as DMA channels, timers and other features previously handled by external chips.  Initially released at 6MHz, clock for clock many instructions were faster than on the 8086 it was based on, thanks to hardware improvements.

In 1987 Intel moved the 186 to a CMOS process and added more enhancements, including math co-processor support, power-down modes and a DRAM refresh controller.  Speeds were increased up to 25MHz (from the 10MHz max of the NMOS version).  Through the years Intel continued to develop new versions of the 186 with added features, lower voltages and different packages.  It was not until 2007 that Intel finally stopped production of the 186 series.  It continued to be made by others under license, including AMD, who made versions running at up to 50MHz; Fujitsu and Siemens also produced the 186 series.  Like the 8051, the 186 gained significant support, being embedded in millions of devices.  The instruction set was familiar, and debugging and development systems were (and are) plentiful, so the 186 core continues to be in wide use.

As IC complexity and transistor counts increased, the need arose for a processor core that could be embedded not just into a system, but into a custom ASIC or SoC.  ICs were being designed to handle things like DVD playback, set-top boxes, flat panel control and more.  These applications still required some sort of processor, but having a separate IC for it was not economical.

Pixelworks PW166B – 67MHz Turbo186-based flat panel controller made in 2004

VAutomation (founded in 1994) designed Verilog and VHDL synthesizable cores, meaning they could be ‘dropped’ into an IC design or FPGA.  In November 1996 VAutomation licensed the 8086/8, 80186/8 and the CMOS versions from Intel.  This gave them the ability to design their own compatible models of these processors without fear of litigation; more importantly, it allowed them to sub-license these designs to others.  In 1997 VAutomation demo’d their first 186, the V186 core.  This was an Intel 80186-compatible core that could be synthesized into a customer’s design.  It was ‘technology independent’, which meant it was not restricted to a certain process or even technology: it could be used in CMOS, ECL, 0.35 micron, 1 micron, whatever the client needed.  On a 0.35 micron CMOS process it was capable of speeds in excess of 60MHz, and did so with fewer than 28,000 gates.  One of the first licensees was Pixelworks, which made controllers for monitors.  Typical licensing was a $25,000 up-front fee plus per-device royalties, usually split into high-volume (over 500,000 units) and low-volume tiers.  The typical royalty per chip was $0.25–$2.00, far cheaper than the roughly $15 Intel was charging for a discrete 80C186.

Read More »

November 18th, 2012 ~ by admin

48 Cores and Beyond – Why more cores?

Intel 48-core Single-chip Cloud Computer

Recently two companies announced 48-core processors.  Intel announced they are working on a 48-core processor concept for smartphones and tablets, which they believe will be in use within 10 years, an eternity in computer terms.  Meanwhile Cavium, maker of MIPS-based networking processors, announced a new 64-bit, 48-core MIPS networking processor.  The Octeon III, running at 2.5GHz, is expected to begin shipping soon.  Cavium already makes and ships a 32-core MIPS processor, so clearly multi-core processors are not something we need to wait 10 years for.

Tilera, another processor company, is ramping up production of the TILE-Gx family.  This processor, running at 1–1.2GHz, supports from 9 to 100 cores (currently they are shipping 36-core versions).  NetLogic (now owned by Broadcom) made a 32-core MIPS64 processor, and Azul Systems has been shipping a 54-core processor for several years now.  Adapteva is now shipping a custom 64-core processor (the Epiphany-IV), a design expected to scale to many thousands of cores.

Why is this all important?

Tilera multi-core wafer

While your personal computer, typically running a dual-core, quad-core, or perhaps even a new 10-core Intel Xeon, is practical for most general computing, these processors are not adequate for many jobs.  Ramping up clock speed, the GHz wars, was long thought to be the way to increase computing performance: just make the pipeline faster and faster and reduce the bottlenecks that fed it new instructions (memory, disk, cache, etc.).  To a point it worked, until a wall was hit, and that wall was power and thermal requirements.  With increasing clock speeds processors ran hotter and began drawing immense amounts of current (some processors were pulling well over 100 amps, albeit at low voltage).  This was somewhat alleviated by process shrinks, but still, performance per watt was decreasing.
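
To put rough numbers on that current draw (illustrative figures, not a specific chip): at a core voltage of around 1.2V, 100 amps works out to P = V × I ≈ 1.2V × 100A = 120 watts, all of which has to be removed from a die only a few square centimeters in size.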

Many computing tasks are repetitive: do the exact same thing to each item in a set of data, where the results are not interdependent, meaning A does not have to happen before you can do B.  You can perform an operation on A, B and C all at once and then spit out the results.  This is typically true of processing network data, video, audio and many other workloads.  Coding and compiling methods had to be updated, allowing programs to run as many ‘threads’ which could be split among many cores (either real or virtual) on a processor, but once that was done, the performance gains were tremendous.
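
As a toy illustration of that kind of independent, repetitive work (a generic POSIX-threads sketch, not any particular vendor’s code), each worker below applies the same operation to its own slice of an array, and no thread ever waits on another’s result:

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4
    #define ITEMS       1024

    static int data[ITEMS];

    struct slice { int start; int end; };

    /* Each worker transforms only its own slice; the slices are independent,
     * so all of them can be processed at the same time. */
    static void *worker(void *arg)
    {
        struct slice *s = arg;
        for (int i = s->start; i < s->end; i++)
            data[i] = data[i] * 2 + 1;          /* same operation on every item */
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_THREADS];
        struct slice slices[NUM_THREADS];

        for (int i = 0; i < ITEMS; i++)
            data[i] = i;

        for (int t = 0; t < NUM_THREADS; t++) {
            slices[t].start = t * (ITEMS / NUM_THREADS);
            slices[t].end   = (t + 1) * (ITEMS / NUM_THREADS);
            pthread_create(&threads[t], NULL, worker, &slices[t]);
        }
        for (int t = 0; t < NUM_THREADS; t++)
            pthread_join(threads[t], NULL);

        printf("data[100] = %d\n", data[100]);  /* 100 * 2 + 1 = 201 */
        return 0;
    }

With four cores the four slices can genuinely run at the same time; with one core the same program still produces the same answer, just more slowly.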

Clearspeed CSX700 192 cores @ 250MHz

This allows a processor to have increased performance at a relatively low clock speed.  Workloads can also be balanced: a task that does not lend itself to parallelism can be assigned to a single core, while the other cores stay busy with other work.

There are several main benefits to multi-cores:

Increased performance for parallel tasks:  This was the original goal, splitting a single problem into many smaller ones and processing them all at once.  That is why massively multi-core processors began in the embedded world, dealing with digital signal processing and networking.

Dynamic Performance:  Dynamic clocking of multi-core processors has led to tremendous power savings.  Some tasks don’t need all the performance on all the cores, so a modern multi-core processor can dynamically scale the clock speed, and voltage, of each core, as needed.  If not all cores are needed, they can be put to sleep, saving power. If a non-parallel task is encountered, a single core can be dedicated to it, at an increased clock speed.

Upgradeability:  If a system is designed correctly, and the code is written and compiled well, the system does not know, or care, how many cores the processor has.  This means that performance can, in general, be upgraded simply by swapping the physical silicon for a chip with more cores (see the sketch below).  This is common in larger supercomputers and other systems.  HP even made a custom Itanium, called the Hondo mx2, that integrated two Madison cores on a single Itanium module.  This allowed their Superdome servers to be upgraded with nothing more than a processor swap, much cheaper than replacing the entire server.
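
That indifference to core count shows up even in ordinary application code.  On Linux and most Unix-like systems, for example, a program can simply ask how many cores are online and size its pool of worker threads to match, so swapping in a chip with more cores speeds it up without a recompile.  A minimal, platform-specific sketch, shown only to illustrate the point:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Ask the OS how many cores are currently online; the same binary
         * scales up when moved to a machine with more cores. */
        long cores = sysconf(_SC_NPROCESSORS_ONLN);
        if (cores < 1)
            cores = 1;                  /* fall back to a single worker */

        printf("spawning %ld worker threads, one per core\n", cores);
        return 0;
    }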

Not all tasks are easily handled in a parallel fashion, and for this reason clock speed is still important in applications where B cannot happen until A is complete (data dependencies).  There are, and will continue to be, systems where this is the focus, such as the IBM zEC12, which runs at a stunning 5.5GHz.  However, as power becomes a more and more important aspect of computing, we will continue to see an ever-increasing number of cores per chip in many applications.  Is there a limit?  Many think not, and Intel has made a case for the use of 1000+ core processors.

October 16th, 2012 ~ by admin

Renesas: The Auto Bailout of the Semiconductor Industry

In 2003 Renesas Technology was formed as a joint company between Hitachi and Mitsubishi, combining their semiconductor operations.  In 2010 Renesas Electronics was created by the merger of NEC Electronics and Renesas Technology.  This created the largest supplier of microcontrollers in the world, combining the product portfolios of NEC, Mitsubishi and Hitachi, and allowed them to stop competing amongst themselves and instead compete with Samsung, Infineon and other suppliers.

Renesas ended up with the following microcontroller families:

  • Hitachi: H8, H8S, H8SX, SuperH
  • Mitsubishi: M16, M32, R32, 720, 740
  • NEC: V850, 78K

In addition, Renesas has developed its own designs, including:

  • RX Series – a replacement for the Hitachi H8SX and Mitsubishi R32C designs
  • RL78 Series – a replacement that combines the NEC 78K and Mitsubishi R8C devices
  • RH850 Series – successor to the NEC V850 for automotive use
  • R8C Series – value derivative of the Mitsubishi M16C

Hitachi SH-3

One of the largest markets for these microcontrollers (and associated other parts) is the automotive industry, with today’s vehicles containing, on average, $350 worth of ICs per car.  $350 may not sound like much when a car costs $20,000, but the average selling price (ASP) per component is 33 cents, meaning there are, on average, over 1,000 ICs in a modern car, of which 50-100 are microcontrollers.  They do everything from running the stereo to monitoring and adjusting engine parameters.  As more features (entertainment, navigation, stability control, etc.) are added, the count goes up.

The market downturn in 2008-2009 hit the automotive industry, and its suppliers, very hard.  With its thin profit margins, Renesas was hit hard as well.  Combined with increasing competition from Samsung, this has driven Renesas into high levels of debt and a distinct lack of profitability.

Read More »

September 6th, 2012 ~ by admin

Apple iPhone Update: What’s changed since the iPhone 4

Back in 2010 we did a write-up on the many processors in each version of the iPhone, through the iPhone 4.  Since then Apple has released the iPhone 4 (CDMA) and the mid-cycle refresh iPhone 4S.  Seeing as the iPhone 5 should be released on September 12th, here is a quick update to bring our table up to date.

CPUs by function and generation of iPhone:

  • App Processor – 2G/3G: Samsung S3C6400, 400-412MHz ARM1176JZ; 3GS: Samsung S5PC100, 600MHz ARM Cortex-A8; 4/4-CDMA: Apple A4, 800MHz ARM Cortex-A8; 4S: Apple A5, 900MHz dual-core ARM Cortex-A9
  • Baseband – 2G: S-GOLD2, ARM926EJ-S <200MHz; 3G/3GS: Infineon X-Gold 608, ARM926 312MHz + ARM7TDMI-S; 4: X-Gold 618, ARM1176 416MHz; 4-CDMA: Qualcomm MDM6600, ARM1136JS 512MHz; 4S: Qualcomm MDM6610, ARM1136JS 512MHz
  • GPS – 2G: none; 3G/3GS: Infineon HammerHead II; 4: BCM4750 (no CPU core); 4-CDMA/4S: integrated in the Qualcomm baseband
  • Bluetooth – 2G/3G: BlueCore, XA-RISC; 3GS: BCM4325 (2 CPU cores); 4/4-CDMA: BCM4329 (2 CPU cores); 4S: BCM4330, ARM Cortex-M3 + Bluetooth CPU
  • WiFi – 2G/3G: Marvell 88W8686, Feroceon ARMv5 128MHz; 3GS through 4S: shared with the Bluetooth combo chip
  • Touchscreen – 2G: multi-chip; 3G: BCM5974; 3GS through 4S: TI
  • OS – 2G/3G/3GS: Nucleus by Mentor Graphics; 4: ThreadX by Express Logic; 4-CDMA/4S: REX by Qualcomm
  • Total cores – 2G: 5; 3G: 7; 3GS: 7; 4: 5; 4-CDMA: 5; 4S: 6

Apple iPhone 4 CDMA

The CDMA version of the iPhone 4 switched from an Infineon X-Gold baseband to a Qualcomm MDM6600 running a 512MHz ARM1136JS core.  Interestingly, this baseband supports GSM, but due to antenna issues it is not enabled here.  The Qualcomm Gobi, as it is known, also has integrated GPS, removing the need for the old Broadcom BCM4750.  This sets the stage for the iPhone 4S.

Read More »