Post a Comment On: Ken Shirriff's blog

"The Texas Instruments TMX 1795: the (almost) first, forgotten microprocessor"

31 Comments
Blogger Dogzilla said...

Thanks for a great article. I worked on the Z80 at Mostek, also on their ill-fated version of the 68000. I enjoy the history.

May 10, 2015 at 9:44 AM

Blogger Leo Richard Comerford said...

A word of caution on the Datapoint book by Lamont Wood: it's basically self-published by one of the former Datapoint people, John Frassanito. Which is not to claim that there's anything wrong with it (I haven't read it myself) but it's surely Frassanito's telling of the story.

May 10, 2015 at 11:33 AM

Anonymous Anonymous said...

Good work. Always fun to read anything that credits Datapoint as "at the creation".

btw, although you mentioned CMOS CPU speed later in the article, my Datapoint colleagues of the time (I joined in Jan 71), "main board" designer Gary Asbell (deceased) and assembly language designer/programmer Harry Pyle, told me the overwhelming reason for rejecting the Intel or TI CMOS CPUs was CMOS's much slower clock speed of a few hundred kHz versus their proven "main board" of small and medium scale TTL running MUCH faster. In other words, even if all the other problems could be solved, the CMOS CPUs' speed would always be the showstopper. So SSI/MSI TTL won.

Len Conrad

May 10, 2015 at 11:55 AM

Anonymous Poul-Henning Kamp said...

Is the history of the Hewlett Packard "nanoprocessor" (partno 1820-1691 and ...92) fully unravelled at this point ?

Some of the date-codes I have seen can put it very early in the race.

Poul-Henning

May 10, 2015 at 1:03 PM

Anonymous Anonymous said...

Thanks for the great article.

Whenever I click an image in the article, it shows as 'not found' on Picasa.

May 10, 2015 at 1:55 PM

Blogger Ed said...

Great research and write-up Ken! One quibble with your analysis of little-endian versus big-endian: note that the 6502 gains some cycles of performance advantage over the similar 6800 by having both operands and indirect addresses appear LSB first, because indexing arithmetic is performed byte-serial.

May 10, 2015 at 2:14 PM

Blogger Ken Shirriff said...

Poul-Henning: thank you for the interesting note on the HP "nanoprocessor", which I hadn't heard of. There's not much information on it, so let me know if you have a concrete date. One source said that it was NMOS, which would suggest it came after PMOS chips like the 8008. Another source said it was based on the BPC processor, which was in 1972. Another source says the BPC was the first microprocessor HP built. So my guess is the nanoprocessor was 1972 or later.

Anonymous: I think I've figured out the Picasaweb permissions and hopefully the photos work now.

Ed: yes, the 6502 gets an advantage from little-endian. On the other hand, the 6800 went big-endian. I guess the difference is the indexed absolute addressing the 6502 has.
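To make the byte-serial argument in this exchange concrete, here is a toy model (illustrative only, not cycle-accurate for the real 6502 or 6800; the function and its cycle counts are my assumptions) of forming a 16-bit indexed effective address one byte at a time:

```python
# Toy model: a byte-serial CPU forms "base + index" one byte per cycle.
# With the low byte first (little-endian), the index can be added to the
# low byte as soon as it arrives, and the carry folded into the high byte
# while that byte is being fetched. With the high byte first (big-endian),
# the high-byte add must wait until the low byte's carry is known.

def indexed_address_cycles(lsb_first: bool) -> int:
    """Rough cycle count to form a 16-bit effective address."""
    if lsb_first:
        # cycle 1: fetch low byte, add index
        # cycle 2: fetch high byte, add carry as it arrives
        return 2
    # cycle 1: fetch high byte (cannot add yet: carry unknown)
    # cycle 2: fetch low byte, add index
    # cycle 3: add carry into the already-fetched high byte
    return 3

print(indexed_address_cycles(lsb_first=True))   # 2
print(indexed_address_cycles(lsb_first=False))  # 3
```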

May 10, 2015 at 2:54 PM

Blogger Unknown said...

Nice write-up.
It all comes down to the definition of a microprocessor.

When bringing up a definition one should try not to exclude anything beforehand; after all, we want to do an open-ended search, don't we?

While you're right that such a beast must reside on a single chip (for the major parts), I do not really count out versions with an external microprogram ROM per se. After all, microcode is just a way to formulate a task with less abstraction, and VLIW systems go the very same path. It doesn't really matter for the existence of a computer whether a task is formulated in a more abstract way that is in turn interpreted by microcode or not. Thus ruling out the AL1 as 'too little' isn't as easy as it seems.

Similarly, the exclusion of systems with more components than the bare minimum may need some rethinking. Just because a TMS1802 already includes ROM and I/O doesn't make the processor part disappear from the chip. It's much like the first car (the Benz three-wheeler) already including a passenger seat, making it not only the first motor car but also the first passenger car.

Last but not least, unusual ways of data processing like serial formats, decimal numbers or specialized streaming structures should not rule out a concept. To be named a processor, a design should be Turing complete. No matter how awful programming might be, as long as it's possible to do anything, it's a computer.

So putting this together, a definition for 'microprocessor' would fit all devices that are capable of running general programs, where all main components reside on one chip. Program and data storage may or may not be on chip. Similarly for I/O.

Did I miss anything?

May 10, 2015 at 5:36 PM

Anonymous Anonymous said...

Thank you for telling this story again... Tech users of today know NOTHING of how computing works... OR computers!! Technologists are REAL hard to find... especially someone who knows where all these glittering toys came from.

The company where I had my first real eye-opening into the depth of what computing was about was a small specialty publishing company near Chicago. We bought, and used with an astounding result, Datapoint 2200 serial numbers 2 and 6. Over the course of a couple of years, we used the 2200s as input devices for specialty text and, through other computer-related programming, radically changed the face of the publishing industry.

GREAT place to pinpoint the start of a future...

May 10, 2015 at 9:29 PM

Blogger markhahn said...

In that TI Electronics ad, it's amusing to see a figure that looks a lot like Moore's...

May 10, 2015 at 10:26 PM

Blogger Lee Stewart said...

Wasn't the main reason for Datapoint's demise the discovery that they were double-accounting for orders not yet manufactured as already delivered?

May 11, 2015 at 11:04 AM

Comment deleted

This comment has been removed by the author.

May 11, 2015 at 11:05 AM

Blogger J. Peterson said...

Lamont Wood wrote a book, Datapoint: The Lost Story of the Texans Who Invented the Personal Computer Revolution. I'm about halfway through it, and it does a good job of recounting the stories behind the creation of Datapoint and their relationships with TI and Intel.

May 11, 2015 at 2:33 PM

Blogger Unknown said...

Thanks for the great article!

May 12, 2015 at 9:07 AM

Blogger mfischer said...

First, congratulations on an excellent article.  Very few people, even among those who pay attention to the history of computers prior to 1975, are aware of all of these chips and systems.  Even fewer can recognize their relative significance.

I agree with your conclusion that the TMX 1795 was the first operational microprocessor, but have different opinions about several other items.  Also, I agree strongly with the position expressed in "The inevitability of microprocessors" -- the single-chip microprocessor was, at best, a minor invention, and inevitable once the major invention of the integrated circuit had been made about a decade earlier.  There was a major invention within the set of products you discuss -- the concept of the desktop PC -- achieved by the Datapoint 2200 -- a complete system with CPU, memory, bulk storage, display, keyboard, I/O ports, and an operating system in a single desktop enclosure with a footprint suitable for use on a desk.

First, I contend that the 4004 should be excluded from consideration as the first microprocessor for the same reason as the CADC -- the 4004 was NOT a general-purpose computer.  While the block diagram of the 4004 looks like that of a conventional CPU, the 4004 had two physically separate address spaces for program code and data, and the 4004 instruction set provided NO means for writing to program memory (even if said memory were implemented using RAM instead of the 4001 ROM+I/O chips intended as program store for the 4004).  Furthermore, program memory space was organized 4Kx8, whereas data memory space was organized 1280x4 (and accessible only 20x4 at a time, so really more of a banked register space than general-purpose data memory), hence a major hardware hack would have been needed to provide any manner of write access to program memory.  Each of the Four-Phase, Viatron, and Datapoint 2200-derived CPUs used conventional, general-purpose architectures, including a unified, flat, read/write address space.

Second, I disagree with your conclusion that the Four-Phase was the first computer with solid-state memory.  If the basis for "first" is the date of the article describing the computer, then the first all-solid-state computer is the ILLIAC IV, which was extensively described in articles published in 1969.  However, ILLIAC IV was not operational until late 1971 (so ought to be considered to be the first supercomputer with solid-state memory).  If the basis for "first" is the date of the public demonstration and/or delivery, then the Datapoint 2200, IBM System/7, and Data General SuperNova SC all preceded Four-Phase.  Also, the IBM System 370/145, the first mainframe with solid-state memory, was announced in October 1970, although I do not believe it was demonstrated, nor delivered, until well into 1971.  At best, the Four-Phase is the first system with both a general-purpose LSI CPU and solid-state memory.

May 17, 2015 at 8:56 PM

Blogger mfischer said...

I would like to offer a few clarifications about the 2200 and the relationship between the Datapoint and Intel versions of the architecture.  I was an early user of both the 2200 and 8008, and several years later worked as a processor architect at Datapoint, so I was able to cross-check my "external" view of that history with people who were directly involved.

The sole reason the Datapoint 2200 Version I was a serial design was memory cost, and as soon as 1K-bit DRAMs were available in the required quantities the 2200 Version II, with a parallel CPU and double the maximum memory capacity, was introduced.  In 1969 the only available form of solid-state memory was dynamic MOS shift registers.  These were especially low cost for Datapoint, because they were purchasing large quantities of 512-bit shift registers for use as the refresh memory of their extremely popular Datapoint 3300 glass TTY product.  It is worth keeping in mind that, when Datapoint approached Intel about making a custom CPU chip for the 2200, they were Intel's largest customer.

The greater speed of the 2200 Version II, versus the Version I and the 8008, was primarily due to its use of RAM and a parallel data path, not its use of TTL circuitry.  Minimum instruction execution time on the Version I was 8 microseconds, but this ballooned to 520 microseconds whenever an operand had to be fetched from or stored into memory because a full “rotation” of the shift registers was required before the next instruction could be fetched.  The Version II’s parallel CPU achieved instruction times of 1.6 to 4.8 microseconds and with RAM there was no penalty for non-sequential access.  It is also worth noting that the 2200 Version II (and subsequent Datapoint processors) had two sets of CPU registers, switchable using a single instruction -- a feature most people associate with the Z-80, introduced five years later.
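A quick back-of-the-envelope check of those latency figures (the 512-bit register size is from the comment above; the roughly 1 MHz bit clock is my assumption, not stated in the source):

```python
# Worst-case operand latency of a recirculating shift-register memory:
# if the wanted bit has just passed the read tap, the CPU must wait a
# full "rotation" for it to come around again.

BITS_PER_REGISTER = 512   # 512-bit MOS shift registers (per the comment)
BIT_TIME_US = 1.0         # assumed ~1 MHz recirculation clock

worst_case_wait_us = BITS_PER_REGISTER * BIT_TIME_US
print(worst_case_wait_us)  # prints 512.0, in line with the ~520 us quoted
```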

While there is no doubt that the architecture of subsequent Datapoint and Intel processors diverged, I would stop short of saying that "later Datapoint architectures and the 8080 went in totally different directions."  The divergence began when Datapoint needed 16-bit capabilities long before Intel could put a 16-bit processor on a chip.  Datapoint extended the architecture while maintaining full upward compatibility from the 2200.  16-bit operations were implemented using pairs of 8-bit registers rather than widening the registers as on the 8086.  The most significant area where Datapoint took a different direction than Intel was by expanding address space beyond 64KB and adding memory protection using paging rather than segmentation.  The most important aspects of the programming model -- the CPU registers and condition codes -- remained equivalent all the way through the Datapoint 8000-series processors and the Intel 80286.  Two developments at Datapoint illustrate the closeness of the two architectures:  In the late 1970s the Datapoint 1500 series used the Z-80.  Datapoint's DOS ran on it with minimal modification.  In the early 1980s, after the failure of a project to develop a single-chip successor to the 6600 CPU (with the consequence that the Datapoint 8600 went to market with a TTL emulator for the failed LSI chip), Datapoint developed a relatively simple hardware block known internally as the "TLX Engine".  The TLX logic sat in front of an 80286 and translated Datapoint instructions to Intel instructions on-the-fly.  Had the instruction sets not had a nearly direct mapping, and especially if the condition codes and register side effects had not been essentially identical, this would not have been practical.  The Datapoint 8400 (1983) and 7600 (1986) were desktop workstations using 6MHz TLX Engines.  The Datapoint 7900 (1986) was an SMP server with four 10MHz TLX Engines.
The TLX was dropped from the next generation of Datapoint products because the 80386 could emulate the Datapoint instruction set in software at comparable or better speeds.

May 17, 2015 at 9:17 PM

Blogger Johannes Thelen said...


Yes, finally confirmation of my doubts about the AL1 court presentation. The first time I heard about that setup, I was sure it was done with a microcode trick.

I had never before seen such a good resolution die photo of the AL1, and it is now obvious that it is not a CPU by any means, just part of one.

But how did you forget the GI/Pico Electronics calculator chip?

http://www.spingal.plus.com/micro/4001566.pdf

It is also (wrongly) claimed to be the first CPU.


PS: Nice blog, there is always something interesting to read here. Thanks!

May 21, 2015 at 1:09 PM

Blogger Baylink said...

Wonderful piece.

Picked you up from the 1401/Bitcoin article via LCM on Facebook, and it's clear you should be a Regular Read; what is *not* clear is why Datamation or someone isn't *paying you* to write to this quality. :-)

Your How Bitcoin Mining Works article, BTW, is the only one I have ever come across that I could actually understand, though I *still* don't quite get where the "15 coin bonus" for a mined block *comes from*; it seems to be created whole cloth.

Back on this piece: TMS1802 is a name that sounds, as our TVTropes friends would put it, "suspiciously similar" to that of RCA's CDP1802 CMOS CPU, which dates to roughly the same period; any thoughts on that?

May 25, 2015 at 11:57 AM

Blogger Ken Shirriff said...

Baylink: thanks for your kind words. As far as the TI TMS 1802 and RCA CDP 1802 go, they are totally different, and the RCA chip was 5 years later, so it's just a confusing coincidence that they have the same number. RCA had their 1800 series and numbered chips sequentially. TI was probably numbering sequentially too.

Similarly, RCA introduced the CD4004 binary counter in 1969 as part of the 4000 series. This is, of course, totally unrelated to the 4004 microprocessor.

May 25, 2015 at 12:41 PM

Blogger Dogzilla said...

Some friends bought an early game console, I think it was the RCA Studio II, based on the 1802. They hacked a keyboard into the system, and took advantage of the game's video display to create a complete system.

On the same subject, in 1981, a large group of engineers at Mostek reverse engineered the Apple II, including the 80 column card, and included 64K bytes of DRAM in an excellent clone. It cost $300 for a complete box of parts, including chips, caps, resistors, keyboard, power supply, and case. Wish I'd bought one.

May 25, 2015 at 1:18 PM

Anonymous cc said...

This article fills in the blanks of my own blog, which I started writing after I found one of the first Datapoint CPU boards with parts on it from 1969, along with Intel's first run of 3101 static memory chips. The Datapoint is the world's first microcomputer, although calling it a PC is a bit of a reach.

May 29, 2015 at 3:44 AM

Blogger Ken Shirriff said...

mfischer: Thank you for your detailed comments. I had similar concerns about whether the 4004 should be considered a general-purpose microprocessor or just a hardcoded controller. However, Intel created the 4008/4009 interface chips which allowed standard RAM to be used as program memory that the 4004 could write to. Thus, the 4004 isn't restricted to just ROM programs. Also, the Intellec MCS-4 is a microcomputer using the 4004 that runs an assembler and can be programmed via a connected terminal. The 4004 is clearly being used as a microprocessor here, even if the address space organization is more Harvard than von Neumann.

Regarding the first computer with solid-state memory, I haven't researched memory history myself, so you may be quite right about the earlier systems. I used the source "To the Digital Age", p257: IBM claimed the System/370 Model 145 was the first computer with semiconductor main memory, but Four-Phase actually beat them.

Have you seen my article on the Intel 1405 shift register memory chips that Datapoint used?

To clarify my statement that later Datapoint architectures and the 8080 went in totally different directions, I meant that later Datapoint and Intel processors were based on the Datapoint 2200 / 8008 architecture, but the changes from the original architecture were entirely independent in the two companies.
Specifically, Datapoint added two register banks (2200 II); added prefixes for 16-bit instructions, block instructions and an entirely new X register (5500); and added assorted instructions such as linked list support (6600). The 8080 didn't take any of those ideas, but implemented totally different 16-bit (i.e. register pair) instructions and simplified the I/O instructions dramatically.
I've seen claims that Intel's evolution from the 8008 to the 8080 was based on Datapoint's suggestions, but after examining the instruction sets closely, I don't see any influence from Datapoint post-8008.

Finally, mfischer: can you send me an email? I expect to have Datapoint questions for you in the future. (My address is in the sidebar.)

June 7, 2015 at 9:38 AM

Blogger bhupesh said...

Hi Ken Shirriff,

My email ID is bhupeshkagrgi@gmail.com

I have a doubt regarding an IC used in a mobile phone USB charger that I bought recently.
The name of the IC is CSC 7101C AVCcM.
I am not able to find any datasheet for such a name.
I really need to know the internals of the IC.

Really impressed with your work and know that you can definitely solve my issue.

Thanks And Regards,
Bhupesh Gargi
India

June 25, 2015 at 11:55 AM

Blogger Ray said...

Hi Ken, Don't think we have met yet. This is Ray Holt, designer of the CADC mentioned in this article. You did a good job researching some unknown history.

I wish to make an initial reply to your comment on my architecture. Having a multiplier and divider in parallel with a CPU is not unusual, just years ahead of the "co-processor" concept.

Re-labeling the SFF to CPU was nothing but a clarification of what it really did. It had nothing to do with 1st, 2nd or 3rd. A silly conclusion to draw without asking me first.

I really find it interesting that very few are technical enough to recognize that a 20-bit parallel divider and multiplier (with carry look-ahead) in any architecture was a HUGE accomplishment in the P-channel technology of the day .... by the way, the exact technology used in Intel's 4-bit chip two years later.

Lastly, for now, it is also interesting that most early designs were concept only, did not work, or only worked in limited situations. The F14 CADC not only used current technology but met a MIL-SPEC rating and worked perfectly the first time.

I would love to have a conversation on any of the early designs and technology and to put some real comments on architecture, definitions and any other concerns.

Fortunately, some of us are still alive who really know what went on. By the way, I also consulted for Intel in 1973-74 on marketing and training for the 4004 while having to keep the CADC secret for 30 years .... until 1998.

Ray Holt

July 30, 2016 at 9:02 PM

Blogger Ken Shirriff said...

Hi Ray! Thank you for writing. I've looked at the CADC system in detail and I hope to write more about it at some point, since it's very interesting but not as well known as it should be. I've studied your papers about it and I'd be interested in any additional information you can share. Do you have sample code and more details on the instruction set? Also, high-resolution die photos would be very informative.

Ken

August 2, 2016 at 9:50 AM

Blogger Ray said...

Ken, I would love to interact more on the CADC and the technology in 1968+. Many writers focus on the architecture and not the available technology. Computer design was around way before the 60's, but to put it in microchip form took more than computer design... speed, power consumption, process technology, etc. were huge factors. Also, there is the consideration of the "atmosphere" (for lack of a better word) in the late 60's and early 70's. Microchip and computer were not common words, even among engineers, so whose design, whose architecture, whose chips, who did the logic, who did the chip design, and "what defined a microchip or microprocessor" were not an issue to anyone .... except to Intel in 1974 and after.

I do have more documentation .... my engineering notebook, the CADC technical manual. Please contact me at this link and I can arrange more information for you.

http://www.firstmicroprocessor.com/hirespeaker/about-a-speaker/

Thanks for your detailed interest in all the early work. You are one of the rare technical historians.

Ray

August 6, 2016 at 6:13 PM

Blogger Ray said...

The photos on my website are scans from photos made from a microscope camera that took pictures of the chips. Here is one from my website.

http://www.firstmicroprocessor.com/thechips/parallel-multipler-pmu/

Ray

August 6, 2016 at 6:15 PM

Blogger Asterix said...

Anent the term "microprocessor": AES (Canada) marketed a cage full of cards, the AES-80, in 1972 and called it a "microprocessor". See:

http://bitsavers.informatik.uni-stuttgart.de/pdf/aes/

I've also heard the term "microprocessor" used in reference to a device that executes microcode.

Yup, just a marketing term...

November 8, 2016 at 5:40 PM

Blogger Ray said...

I saw this article again while searching for some early photos. Of course, I remember it as a great article. I just wanted to comment on the word "microprocessor". I can assure everyone that until the early to mid 70's the term was not seriously used to mean what we are trying to make it mean today, 50 years later. If it computed and was small, it was a small or mini or micro processor. Since mainframes were large and the next smallest frames were mini, a whole bunch of logic on a card with some integrated chips was a microprocessor. It had nothing to do with whether RAM, ROM, control, CPU, etc. were on one chip or 2, or 3. And in reality it had nothing to do with architecture. Mainframes and minis were designed many different ways that did not affect their definition. The 4004 was definitely NOT a CPU on a chip or a microprocessor on a chip. It was basically an ALU with some control, with many other chips added (including 59 TTL chips) to make it run. At best, it's a partially designed integrated circuit chip set requiring external logic to perform.

Even as late as 1973, Hank Smith, then Microprocessor Marketing Manager at Intel Corp., stated in the IEEE 1973 WESCON Professional Program Session 11 Proceedings:

"A CPU uses P-channel MOS and is contained in 1, 2, 3 or 4 LSI standard dual-in-line packages from 16 – 42 pins per package."

He was trying to get a handle on the term CPU, not even microprocessor. Early definitions were weak compared with today's forced definitions.

Now, to keep the microprocessor term in confusion, do we also need to talk about chip size, on-board architecture, P or N channel, temp spec range, "did it really work" or was it a pipe dream? The Wright Brothers actually flew first, but others just had a pipe dream. So what counts... who was first, how did they do it, when did they do it, was it reliable, how long did it last?

I would say, looking back almost 50 years (older than most reading and writing in this blog), that everyone should say THANKS to all of those who had the vision and guts to commit to silly ideas of computing the size of your thumbnail and actually stuck with it to make it a reality. This would include most every chip designer from 1968-78. Many great micro designs came from the 70's, and all of them with great features. The market was too small to continue the research on new ideas... then the market was too big to compete with Intel on pricing. Thank you to the many pioneers who did what I did and did not get recognized, and did great digital design work that no one else thought of.

May 30, 2017 at 3:51 AM

Blogger Ray said...

Ken, I wanted to make a specific comment on the negative David Patterson remark you referred to: "Computer architecture expert David Patterson says, 'No way Holt's computer is a microprocessor, using the word as we mean it today.'" This comment came from David Patterson, Computer Science professor at UC Berkeley, in 1998-99 as a response to the Wall Street Journal asking him to verify the chips and documentation that I had in my possession 30 years later (the first reveal). First, Dave measures the term microprocessor of 50 YEARS AGO against his definition of 1999. I might say the definition of most great products has changed in 50 years... light bulb, automobile. Definitions grow into what we use or want the product to be. Trying to say something from 50 years ago does not fit today's definition is really quite childish and immature. I was told that his true negativity came from the fact that he had just published a book on Computer Architecture and that the CADC's unique and working architecture proved his book out of date. It's also interesting to note that I could not find any reference in his later books to the unique and working architecture of the CADC, clearly a highlight in the computer design industry, controlling the highly successful F-14 for over 30 years. Intentionally dropping a well-known and published microprocessor architecture from a series of books is almost intentional sabotage of the industry. David Patterson seems like a good writer, designer, and publisher; why would he do this?

May 30, 2017 at 4:11 AM

Blogger rodeone2 said...

I like your column posted here, but as a developer from Texas Instruments you are leaving out the detailed information about the United States Government and the Air Force's already active involvement with Texas Instruments; they had many contracts between 1967 and 1969, and a lot of them. Factually put, it wasn't that TI took over Datapoint or anything like that.

It was the United States Government who stepped in, and did so on their own. It was they who harvested what technology they required from many entities, and it was they who took it at will and told Texas Instruments what they expected from those acquisitions.

The fact is TI was already contracted with the Government after designing the first microchip, and the Air Force was already using it in their aircraft computers in secrecy in 1966 through '67, much earlier than its public release and knowledge. Thanks.

Another fact: a man went to work for TI's main office and plant in North Dallas, Texas, and designed the first chip in 1957 after he started at TI. His name was Jack Kilby, and he went on to pioneer military, industrial, and commercial applications of microchip technology. He headed teams that created the first military systems and the first computer incorporating integrated circuits. All at TI. The guy was considered a literal genius, not because of the chip but for his mathematics skills alone and his logical thinking.

May 26, 2021 at 5:08 PM
