[Up] [Next]

A Brief History of Computers in General, and the Personal Computer in Particular

The development of computers over the last few decades has been very rapid. While there are better sources out there for examining any aspect of the computer revolution in detail, this page provides a very brief overview of some of the major milestones in computing history to provide some historical perspective.

In the section of this site devoted to a homage to the computer front panel, there is additional material giving detail on some segments of the history of computers. One pair of pages gives the history of the IBM System/360 range, with drawings of the front panels of its various models; another pair describes and discusses the computers of the Digital Equipment Corporation. As well, a page in the section on the computer keyboard discusses the IBM PC and some of its early successors, specifically the IBM PCjr and the IBM PC AT, because each of them brought in a new style of keyboard.

Aids to Calculation

Since the dawn of civilization, people have had to do arithmetic, and it was an irksome and error-prone task.

In Mesopotamia, an early form of the abacus was in use in 2300 BC or earlier; in several cultures, the abacus was in use by 600 to 200 BC.

Napier's first publication describing logarithms and their use as an aid to calculation dates from 1614; William Oughtred invented the slide rule in 1622, which made it convenient to use logarithms for multiplication to limited precision.

The first mechanical adding machines were invented by Schickard in 1623 and, independently, by Pascal in 1642.

Early Automatic Digital Computers

The first attempt at what we think of today as a computer, based on mechanical calculator technology, was, of course, the Analytical Engine by Charles Babbage, first described in 1837.

While that project may seem so impractical that it could not have avoided its eventual failure even under more favorable circumstances, the same cannot be said of the later project of his contemporary, Torres y Quevedo, who envisaged building an electrical computer using relays. Torres y Quevedo is best known for the demonstration machine he built to generate interest in, and raise money for, his computer project, a machine that could play a simple Chess endgame, first demonstrated in 1914.

Analog Computers

In the 1950s and early 1960s, popular works on the computer would often note that there were two fundamental kinds of computer, the digital computer and the analog computer. Today, when we think of computers, we generally only think of digital computers, because they can do all sorts of fun and exciting things.

Analog computers, like slide rules, are limited in the precision of the numbers they work with. They were used to solve complicated equations which would have been too expensive to solve more precisely on the digital computers available at the time.

A famous mechanical analog computer was the Differential Analyzer, constructed by Vannevar Bush starting in 1927. It makes a brief cameo in the movie "When Worlds Collide". One key component involved a wheel that was driven by a rotating disk; as the wheel could be moved to different positions on the disk, changing the effective gear ratio (somewhat the way some automatic transmissions work), it was used to calculate the integral of a function.

In the 1950s and early 1960s, electronic analog computers which connected operational amplifiers and other electronic components together with patch cords were commercially available.

The Beginnings of the Computer Era

The Harvard Mark I, conceived in 1939, used a program on a punched paper tape to control calculations performed with electromechanical relays. It became operational in 1944. As initially designed, it did not have a conditional branch instruction. Howard Aiken, its designer, referred to it as "Babbage's dream come true", which shows that Charles Babbage's work was not rediscovered only after the computer age was in full swing, as has occasionally been claimed.

The ENIAC was the first well-known fully electronic computer. Its completion was announced on February 14, 1946, after having been secretly constructed during World War II. It was originally programmed through plugging patch cords.

One of its components was an array of dials in which a table of values for a function to be used could be set up; John von Neumann developed a patch cord program that let the machine execute a program entered on those dials. This reduced the speed of the machine's computations, but it drastically shortened the amount of time it took to set the machine up for work on a different problem.

Soon afterwards, many computers much smaller than ENIAC were developed that used vacuum tubes to calculate electronically, based on the stored-program concept that John von Neumann had pioneered. Some were still large, like the Univac I, and some were small, like the Bendix G-15.

Von Neumann himself planned to go on from his work on the ENIAC to create a computer designed around the stored-program concept, the EDVAC. As it happened, a British group based at the University of Cambridge was the first to complete a computer based on the EDVAC design, the EDSAC, in May, 1949; the month after, the Manchester Mark I was completed. However, a year before, in June, 1948, the Manchester group had a prototype machine working, giving them the honor of building the world's first stored-program electronic computer.

At first, one of the major problems facing computer designers was finding a way to store programs and data for rapid retrieval at reasonable cost. Recirculating memories, taken from devices originally invented for radar systems, such as mercury delay lines and magnetostrictive delay lines, were used, along with drum memories and their equivalent, head-per-track disks. A special cathode ray tube that stored what was displayed on its face for later readout, called a Williams Tube, provided some of the earliest random-access memories, but it was not as reliable as was desired in practice.

The Univac I and the DEUCE used mercury delay lines, the Bendix G-15 had a drum, the Ferranti Mercury used magnetostrictive delay lines, and the Maniac I and the IBM 701 used Williams tubes.

The magnetic core memory, used in the Whirlwind computer prototype, the AN/FSQ-7 computer used for air defense, and the commercial IBM 704 scientific computer, allowed computers to be built that were, in many important respects, not all that different from the computers we use today.

The IBM 704 computer was introduced in 1954. It performed single-precision floating-point arithmetic in hardware; as well, its single-precision floating-point arithmetic instructions retained additional information, normally generated in the course of performing addition, subtraction, or multiplication, that allowed programs performing double-precision arithmetic to be short and efficient.

IBM developed the first FORTRAN compiler for the IBM 704. Higher-level languages existed before FORTRAN, but because this compiler was designed to generate highly optimized code, it overcame the major objection to higher-level languages, that they would be wasteful of valuable computer time.

Transistors replaced vacuum tubes, and then integrated circuits replaced discrete transistors.

Another major computer milestone took place on April 7, 1964, when IBM announced their System/360 line of computers.

These computers used microprogramming to allow the same instruction set, and thus the same software, to be used across a series of machines with a broad range of capabilities. The Model 75 performed 32-bit integer arithmetic and floating-point arithmetic, both single and double precision, directly in hardware; the Model 30 had an internal arithmetic-logic unit that was only 8 bits wide.

By this time, IBM was already the dominant computer company in the world. The IBM 704, and its transistorized successors such as the IBM 7090, helped to give it that status, and the IBM 1401, a smaller transistorized computer intended for commercial accounting work, was extremely successful in the marketplace.

The System/360 was named after the 360 degrees in a circle; the floating-point instructions, and commercial instructions to handle packed decimal quantities, were both optional features, while the basic instruction set worked with binary integers; so the machine was, and was specifically advertised as, suitable for installations with either scientific or commercial workloads. And because the related features were options, it was not necessary for one kind of customer to pay extra to be able to handle the other kind of work. Well, in theory; in practice, other brands of computer tended to be much cheaper than those from IBM, but IBM provided very reliable computers with excellent customer support.

IBM invented the vacuum column, which significantly improved the performance of magnetic tape drives for computers; their 1403 line printer was legendary for its print quality and reliability; and they invented the hard disk for the RAMAC, a vacuum-tube computer from 1956.

As a consequence of IBM's major presence in the computer industry, their computers were very influential. Before the IBM System/360, nearly all computers that worked with binary numbers (many instead worked with decimal numbers only) had a word length (the size of numbers they worked with) that was a multiple of six bits. This was because a six bit character could encode the 26 letters of the alphabet, 10 digits, and an adequate number of punctuation marks and special symbols.

The IBM System/360 used an eight-bit byte as its fundamental unit of storage. This let it store decimal numbers in packed decimal format, four bits per digit, instead of storing them as six-bit printable characters (as the IBM 705 and the IBM 1401 computers did, for example).

To clarify: some decimal-only computers like the IBM 1620, the IBM 7070, and the NORC, also by IBM, and the LARC from Univac, had already been using packed decimal; and the Datamatic 1000 by Honeywell, a vacuum-tube computer with a 48-bit word, used both binary and packed decimal long before the System/360 came along.

So I'm not saying that IBM invented the idea of using no more bits than necessary for storing decimal numbers in a computer; that was obvious all along. Rather, what I'm trying to say is that IBM's desire to use this existing technique led them to choose a larger size for storing printable characters in the computer. This larger size made it possible to use upper and lower case with the computer, although lower case was still initially regarded as a luxury, and it was not supported by most peripheral equipment.
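As a concrete illustration of the space saving, here is a minimal sketch (Python used purely for illustration) of packing decimal digits two to a byte. Note that real System/360 packed decimal also reserves the low nibble of the last byte for a sign, which this unsigned sketch omits.

```python
def pack_decimal(digits):
    # Pack a string of decimal digits two to a byte, 4 bits per digit.
    # (Real System/360 packed decimal also stores a sign nibble in the
    # low half of the last byte; this unsigned sketch omits that.)
    if len(digits) % 2:
        digits = "0" + digits   # pad to an even number of digits
    return bytes((int(hi) << 4) | int(lo)
                 for hi, lo in zip(digits[::2], digits[1::2]))

# "1234" takes two bytes packed, versus four bytes as characters:
print(pack_decimal("1234").hex())   # 1234
```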

In 1969, a later implementation of the System/360, the System/360 Model 195, combined cache memory, introduced on the large-scale microprogrammed 360/85 computer, and pipelining with reservation stations using the Tomasulo algorithm, equivalent to out-of-order execution with register renaming, introduced on the 360/91 (and used on both the 91 and the 195 only in the floating-point unit). This was a degree of architectural sophistication that would only be seen in the mainstream of personal computing with microprocessors when the Pentium II came out (the nearly identical Pentium Pro being somewhat outside the mainstream).

Simple pipelining, where fetch, decode, and execute of successive instructions was overlapped, had been in use for quite some time; splitting the execute phase of instructions into parts would only be useful in practice if successive instructions didn't depend on one another. The IBM STRETCH computer from 1961 attempted such pipelining, and was a disappointment for IBM. The Control Data 6600, a computer from 1965 built from discrete transistors, on the other hand, was a success; it used a technique called "scoreboarding", which was a simpler form of out-of-order execution.

This is not to say the scoreboard technique is defective; the need for register renaming can be avoided simply by using a larger register file - the IBM 360 had four floating-point registers, while RISC processors typically have banks of at least 32 registers. The scoreboard of the Control Data 6600 is eminently suitable for dealing with the remaining use case for out-of-order execution that a larger register file can't solve: cache misses.

That is not to say the CDC 6600 had a cache; it explicitly transferred data from a slower large memory to its smaller main memory with specialized instructions. One advantage it had in providing high performance with a simple design was that it was a new design from scratch, whereas IBM sought to provide high performance and strict compatibility with their existing System/360 line of mainframes, which is what made both the Tomasulo algorithm and cache memory necessities for them.


The year 1976 was marked by the installation of the first Cray I computer. A few years previously, there had been a couple of other computers, such as the STAR-100 from Control Data and the Advanced Scientific Computer from Texas Instruments, that directly operated on one-dimensional arrays of numbers, or vectors. Those earlier machines, because they performed calculations only on vectors in memory, provided enhanced performance only on those specialized workloads where the vectors could be quite long. The Cray I had a set of eight vector registers, each of which had room for sixty-four 64-bit double-precision floating-point numbers, and, as well, attention was paid to ensuring it had high performance in those parts of calculations that worked with individual numbers.

As a result, the Cray I was very successful, sparking reports that the supercomputer era had begun. A few years later, not only did several other companies offer computers of similar design, some considerably smaller and less expensive than the supercomputers from Cray, for users with smaller workloads, but add-on units, also resembling the Cray I in their design, were made as well to provide vector processing with existing large to mid-range computers. IBM offered a Vector Facility for their 3090 mainframe, starting in October 1985, and later for some of their other large mainframes, based on the same principles; Univac offered the Integrated Scientific Processor for the Univac 1100/90; and the Digital Equipment Corporation offered a Vector Processor for their VAX 6000 and VAX 9000 computers, also patterned after the Cray design.

Another line of development relating to vector calculations on a smaller scale may be noted here.

The AN/FSQ-7 computer, produced by IBM for air defense purposes, performed calculations on two 16-bit numbers at once, rather than on one number of whatever length at a time like other computers, to improve its performance in tracking the geographical location of aircraft. This vacuum tube computer was delivered in 1958.

Two computers planned as successors to it offered more flexibility. The AN/FSQ-31 and AN/FSQ-32 computers, dating from around 1959, had a 48 bit word, and their arithmetic unit was designed so that it could perform arithmetic on single 48-bit numbers or pairs of 24-bit numbers; and the TX-2 computer, completed in 1958, could divide its 36-bit word into two 18-bit numbers, four 9-bit numbers, or even one 27-bit number and one 9-bit number.

In 1997, Intel introduced its MMX feature for the Pentium microprocessor which divided a 64-bit word into two 32-bit numbers, four 16-bit numbers, or eight 8-bit numbers.

This was the event that brought this type of vector calculation back to general awareness, but before Intel, Hewlett-Packard provided a vector extension of this type, MAX, for its PA-RISC processors in 1994, and Sun provided VIS for its SPARC processors in 1995.

Since then, this type of vector calculation has been extended beyond what the TX-2 offered; with AltiVec for the PowerPC architecture, and SSE (Streaming SIMD Extensions) from Intel, words of 128 bits or longer are divided not only into multiple integers, but also into multiple floating-point numbers.
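The essence of this subword-parallel style of vector arithmetic, in which carries are confined within each lane of a fixed-width word, can be sketched as follows (Python used purely for illustration; MMX and its successors do this in a single instruction):

```python
def simd_add(x, y, lane_bits, word_bits=64):
    # Add two packed words lane by lane; the carry out of one lane is
    # discarded rather than propagating into its neighbor, which is
    # what distinguishes this from ordinary 64-bit addition.
    lanes = word_bits // lane_bits
    mask = (1 << lane_bits) - 1
    result = 0
    for i in range(lanes):
        shift = i * lane_bits
        lane_sum = (((x >> shift) & mask) + ((y >> shift) & mask)) & mask
        result |= lane_sum << shift
    return result

# Eight 8-bit lanes: 0xFF + 0x01 wraps to 0x00 in its own lane
# instead of carrying into the lane above.
print(hex(simd_add(0x01FF, 0x0101, 8)))   # 0x200
```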

In January, 2015, IBM announced that its upcoming z13 mainframes, since delivered, would include vector instructions; these were also of this type, now common on microcomputers, as opposed to the more powerful Cray-style vector operations offered in 1985.

It may be noted that IBM introduced its z/Architecture in the year 2000; this extension of the System/360 mainframe architecture provided 64-bit addressing. The first machine on which it was implemented was the z900, which was announced in October, 2000, and was to be available in December, 2000.

The 64-bit Itanium from Intel only became available in June, 2001, and the first chips from AMD that implemented the x64 extension to the 80386 architecture, that Intel later adopted as EM64T, were shipped in April, 2003.

However, AMD had released the x64 specification in 1999, after Intel had described the Itanium; it was a reaction to Intel's way of moving to 64 bits.

Thus it seemed that the microprocessor beat the mainframe to 64-bit addressing, but the 64-bit z/Architecture mainframe was delivered first.

However, there are other microprocessors besides those which are compatible with the Intel 80386. The Alpha microprocessor was introduced in 1992, and Sun adapted its SPARC architecture to 64-bit addressing in 1995, so IBM was anticipated by microprocessors used in servers and high-end workstations.

The Minicomputer Revolution

A transistorized computer, the PDP-1, was first delivered to Bolt, Beranek, and Newman in November, 1960, made by an up-and-coming computer company, the Digital Equipment Corporation. It had an 18-bit word. Another specimen of this model was sold to MIT, and some students there occasionally used it to play Spacewar.

The next model of computer sold by DEC was the PDP-4. It also had an 18 bit word, but it was not compatible with the PDP-1. It had a simpler design; for example, the opcode field in each instruction was five bits long in the PDP-1, but four bits long in the PDP-4, so the latter had about half as many instructions that involved working with data at an address in memory. The instruction set was even more constrained because it included two versions of most binary arithmetic instructions, one that used one's complement arithmetic and one that used two's complement arithmetic.

They then made an even simpler computer, initially envisaged for industrial process control applications, although from the start it was suitable for general-purpose use, the PDP-5. This computer had a word length of only 12 bits. It used memory location 0 to store its program counter.

They later made a large-scale computer, the PDP-6, with a 36-bit word and hardware floating-point, and a new model of computer compatible with the PDP-4, the PDP-7.

And then DEC made history with the PDP-8 computer. In a small configuration, it could sit on a table top, despite still being made from discrete transistors. It was similar to the PDP-5, but with one minor incompatibility; it had a real program counter, and so it moved the interrupt save locations one position earlier in memory. This was not a serious problem, as few PDP-5 computers were sold, and only a limited amount of software was developed for them.

The original PDP-8 sold for $18,000. It was introduced on March 22, 1965. It is considered to have begun the era of minicomputers. There were computers before that weren't giant mainframes that filled whole rooms; the Bendix G-15 filled one corner of a room, being a bit larger than a refrigerator, despite being made with vacuum tubes; the Recomp II was a box that sat on the floor beside a desk, being about as high as the desk and half as wide.

In 1961, the Packard Bell PB 250 computer was not that much bulkier than a PDP-8 would later be. However, to make it affordable, it used magnetostrictive delay lines instead of core as memory; by then, most computers did use core memory, and unwillingness to give up the convenience that core offered may have limited its success. Also, the price was $40,000.

The later PDP-8/S, announced on August 23, 1966, set new milestones in the minicomputer era. It sold for under $10,000, and it could be delivered from stock. And it was much more compact than the original PDP-8. However, it achieved its low cost by using much slower core memory, and a serial arithmetic unit, so its lesser performance limited its popularity.

DEC then implemented this architecture with integrated circuits, providing two models: the full-featured PDP-8/I, and the less-expensive PDP-8/L, for which some expansion options were not available.

A revised integrated-circuit model, the PDP-8/e, included some modifications to the optional Extended Arithmetic Element, which provided hardware multiplication. Introduced in the summer of 1970, its price was initially $6,500, and that price was later reduced to $4,995. DEC encouraged its sale to schools and colleges. Before there were microcomputers, a group called the "People's Computer Company" encouraged individuals to purchase one if they could afford it; they had a magazine that featured game programs written in BASIC.

Other companies besides DEC made minicomputers.

The Honeywell 316 computer, first made available in 1969, was a minicomputer that followed in the architectural tradition of the Computer Control Company (3C) DDP-116 from 1964; the Hewlett-Packard 2116, from 1967, was the first in a line of minicomputers from that company. Both of these computers had 16-bit words. Their basic architecture was similar to that of the PDP-8, the PDP-4, or the PDP-1: instructions did calculations between an accumulator and one memory location. The memory location was indicated by a short address, which included one bit to indicate whether it referred to a location on the same page of memory as the current instruction or a location on the globally shared page zero of memory; there was also an indirect bit in instructions to allow these short addresses to point to an address that took up a whole word (whether of 12, 16, or 18 bits), allowing broader access to memory. Some of the larger computers of this group also had an index register, and a bit in the instruction to indicate whether its contents would be added to the address before use.

When DEC decided to make its own minicomputer in the popular 16 bit word length, however, rather than designing something similar to the Honeywell 316 and the Hewlett-Packard 2114 with the PDP-8 and the PDP-4 as sources of inspiration, it did something quite different.

The first PDP-11/20 computers were delivered in the spring of 1970.

This computer's instruction word consisted of a four-bit opcode field, followed by two operand fields, each six bits long, consisting of three bits to indicate an addressing mode, and three bits to indicate a register.

If the addressing mode for either or both operands was indexed addressing, for each operand in that mode, a sixteen-bit address was appended to the instruction. The register field was used to indicate a register to use as an index register for the instruction.

So instructions could be 16, 32, or 48 bits in length, and they could be register-to-register, memory-to-register, register-to-memory, or memory-to-memory.
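That field layout can be sketched as follows (Python used purely for illustration; this covers only the double-operand format described above, not the PDP-11's single-operand and branch formats):

```python
def decode_double_operand(word):
    # Split a 16-bit PDP-11 double-operand instruction into its 4-bit
    # opcode and two 6-bit operand specifiers, each of which is a
    # 3-bit addressing mode plus a 3-bit register number.
    opcode = (word >> 12) & 0o17
    src = (word >> 6) & 0o77
    dst = word & 0o77
    return opcode, (src >> 3, src & 0o7), (dst >> 3, dst & 0o7)

# MOV R1, R2 assembles to 010102 octal: opcode 1 (MOV), source mode 0
# register 1, destination mode 0 register 2.
print(decode_double_operand(0o010102))   # (1, (0, 1), (0, 2))
```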

This was more than a little reminiscent of the IBM System/360 computer.

In one important respect, however, the PDP-11 was very unlike the System/360. The System/360 included instructions that worked with packed decimal numbers, and instructions to convert directly between them and character strings. So decimal and binary numbers in memory were organized the same way strings of digits in text were organized - with the most significant part in the lowest memory address. This is known as the "big-endian" numeric representation.

The Honeywell 316 computer, as one example, had instructions to perform a 32-bit two's complement binary addition. It was not as fancy as a System/360 mainframe, and so, to make things simple, it picked up the least significant 16 bits of a 32-bit number from one word, performed that part of the addition, saving the carry for later use, and then picked up the most significant bits of the 32-bit number from the next word. (Actually, it seems my memory is playing tricks on me, and the H-316 was consistently big-endian. However, there were other 16-bit minis that did do what is described here.)

It addressed memory as 16-bit words, not as 8-bit bytes. When character data was packed into 16-bit words, the first of two characters would be in the left, or most significant, half of the word.

So if you put the character string "ABCD" into such a computer, and read it out as a 32-bit integer, that integer would be composed of the ASCII codes for the letters in this order: C, D, A, and B. At the time, this was not much of a concern, but it seemed inelegant.

The PDP-11 addressed memory in 8-bit bytes, as the IBM System/360 did. But it, too, was a small minicomputer intended to be much cheaper than the IBM System/360. So, like the Honeywell 316, when it worked with 32-bit integers, it put the least significant 16-bit word first, and the most significant 16-bit word second.

How to be as beautifully consistent as the System/360, instead of messy like the Honeywell 316?

Well, while packed decimal and string hardware did much later become available as options for larger PDP-11 models, the PDP-11 didn't start out with them. So the idea came to them: why not, when packing two characters of text in a 16-bit word, place the first character in the least significant half of the word, and so give that byte the lower address, since here individual bytes were addressable?

So now the ASCII codes for "ABCD" would be found in a 32-bit number in the order D, C, B, and A, which was at least systematic.
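The two orderings can be verified with a short sketch (Python used purely for illustration):

```python
def bytes_msb_first(value, length=4):
    # List a value's bytes from most significant to least significant.
    return [(value >> (8 * i)) & 0xFF for i in range(length - 1, -1, -1)]

text = b"ABCD"

# PDP-11 style: bytes are addressable, the first character gets the
# lowest address, and lower addresses are less significant.
pdp11_int = int.from_bytes(text, "little")

# Word-addressed 16-bit mini: the first character sits in the most
# significant half of each word, but the first word holds the least
# significant 16 bits of a 32-bit integer.
word0 = (text[0] << 8) | text[1]
word1 = (text[2] << 8) | text[3]
mini_int = (word1 << 16) | word0

print([chr(b) for b in bytes_msb_first(pdp11_int)])   # ['D', 'C', 'B', 'A']
print([chr(b) for b in bytes_msb_first(mini_int)])    # ['C', 'D', 'A', 'B']
```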

The PDP-11 originated the idea of making a computer that was consistently little-endian. A floating-point hardware option made for it, however, put the most significant portions of floating-point numbers in words with lower addresses, thus marring that consistency. But it is still the PDP-11 that inspired many later designs, particularly microprocessors such as the Intel 8008, 8080, 8086, 80386, and so on, the MOS Technology 6502, and the National Semiconductor 16032 to be little-endian. In contrast, the Texas Instruments 9900, as well as the Motorola 6800 and 68000, were big-endian.

Prelude to Microcomputers: The Pocket Calculator

In accordance with Moore's Law, as time went on, it became possible to put more transistors on a single chip and make it do more things.

So where once people were suitably amazed that one could get four NAND gates on a single chip, it became possible to put 64 bits of high-speed memory on one chip, or a four-bit-wide arithmetic-logic unit on one chip.

That something weird and wonderful was in the wind had perhaps already been apparent for some time when something happened to make it unmistakably obvious, on the first day of February in 1972. That was the day when Hewlett-Packard announced the HP-35 pocket calculator.

It fit in your pocket and ran on batteries. It calculated trigonometric functions and logarithms to ten places of accuracy. And it cost $395.

In 1974, however, the Texas Instruments SR-50 came out, for only $170, and you didn't have to learn RPN to use it.

Shortly after, though, you could get a scientific calculator for as little as $25. One such calculator was the Microlith scientific (model 205, though that appeared only in small print on the back). It calculated to only 8-digit accuracy, with trig and log functions calculated only to 6 digits, and it used a green vacuum fluorescent display instead of LEDs. That $25 was a discounted price; it came out at a higher price when first introduced. June 13, 1976 was the day on which the TI-30 was introduced at an MSRP of $24.95; one web site gives that as the official day of the 'death' of the slide rule. Another candidate might be July 11, 1976, when Keuffel & Esser made their last slide rule, to send it to the Smithsonian Institution.

If a single chip could calculate log and trig functions, it ought to be possible for a single chip to perform just basic arithmetic along with control functions to step through a program, which would not seem to be more complicated. However, memory was still expensive; as well, speed wasn't a critical issue in a pocket calculator, which could do its calculations one decimal digit at a time.
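Digit-at-a-time working kept the arithmetic hardware tiny: only a single-digit adder and a carry are needed per step. A sketch of the idea (Python used purely for illustration):

```python
def digit_serial_add(a, b):
    # Add two non-negative decimal numbers one digit at a time, least
    # significant digit first, using only single-digit arithmetic and
    # a carry, the way early calculator chips worked internally.
    a, b = str(a), str(b)
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(digit))
    if carry:
        digits.append(str(carry))
    return int("".join(reversed(digits)))

print(digit_serial_add(9876, 5432))   # 15308
```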

Hewlett-Packard didn't sit on its laurels, though; even as other companies came out with much cheaper rivals to the HP-35, they came out with the HP-65. Not only was it programmable - including with conditional branch instructions - but it could save programs on small magnetic cards.

Two years later, in October 1975, the Texas Instruments SR-52 came out as a cheaper rival; however, it was a bit too fat to fit in most pockets, although you wouldn't know it from the photos in the advertisements. It wasn't until the TI-59 came out in 1977 that Texas Instruments had a magnetic card calculator that was sufficiently svelte to conform to the common notion of what constituted a pocket calculator.

Before the HP-35, there had been electronic calculators that could sit on top of a desk for some time.

The Wang Laboratories LOCI-2, from 1965, was a programmable calculator that could calculate logarithms in addition to doing arithmetical functions. The Wang 320 KT and 320 KR provided trigonometric functions in degrees and radians respectively, but they were calculated by built-in programs in the same programming instruction set that the calculator presented to the user, so calculating the value of a trig function could take as much as ten seconds.

The Hewlett-Packard 9100A, available in 1968, was a programmable scientific calculator of relatively compact size and impressive capabilities. It offered log and trig functions, and the trig functions were in the same microcode as was used to implement the calculator's other basic functions, calculated using the rapid CORDIC algorithm. Thus, despite being constructed from discrete transistors, it was the first electronic programmable calculator that was basically comparable to an advanced scientific programmable calculator such as the HP-65 or the SR-52... those two being chosen as examples because the 9100A could save programs, and load them back in again, from magnetic cards as well.

Soon after, the 9100B offered a larger memory capacity, and provided one very useful additional function for transferring numbers between the display and the memory.

Other competitors came along later, such as the Wang 500 calculator from 1971 and the Monroe 1655 (among several different models with different capabilities) from 1970.

The Wang 500 could save programs on cassette tapes with a built-in digital cassette drive; the Monroe 1655 could have an optional card reader attached.

The cards which the Monroe 1655 could read had the same dimensions as a standard IBM 80-column punched card, and they were in a related format also devised originally by IBM. An IBM Port-a-Punch, introduced by IBM in 1958, allowed a 40 column card to be prepared in the field without a keypunch machine. The Port-a-Punch provided a framework allowing a stylus to punch out holes in special cards where the possible locations for holes were all scored in advance.

The Microcomputer Revolution

Before jumping to microprocessors, one other event of note should be mentioned here.

In the December 1972 issue of the Hewlett-Packard Journal, the HP 9830A was announced. Although billed as a calculator, it had an alphanumeric display of 32 characters (which scrolled horizontally to allow an 80-character line to be viewed with it) using the 5 by 7 dot matrix, it could be fitted with an optional 80-column thermal printer, it had a typewriter keyboard (basically in the upper-case ASCII pattern, although the @-sign was hidden as the shift of the RESULT key, and the characters [, \, ], ^ and _ were not entered as shifted letters so that the keyboard could be used for lower-case text) - and it could be programmed in BASIC.

It could also be used like a calculator as well from its numeric keypad.

Thus, even before microprocessor-based computers became available, a self-contained desktop unit that could be programmed in BASIC was available. Of course, if one was willing to put up with having two boxes - a minicomputer and a terminal - a PDP-8/e with an ASR-33 Teletype would fit on a normal desk as well, but while it was in some senses an evolutionary step instead of a revolutionary one, I think it's legitimate to view it as a milestone.

Incidentally, although its internal processor had commonalities with the design used in the HP 211x minicomputers, it was still significantly modified because its primary arithmetic capabilities revolved around operations on single decimal digits for calculator operation.

Also predating the microprocessor revolution, the PDP-11/05 and PDP-11/10, which were very compact minicomputers, basically two different designations of the same model, were introduced in June, 1972. In October, 1972, the GT40 terminal, using a PDP-11/05 to control a vector display, was introduced. This was also a single unit that sat on a desktop which had the power of a computer, although it was usually used only as a graphical terminal.

Somewhat later, but still before the microcomputer revolution started in earnest, in September of 1975 IBM introduced the IBM 5100, which could use BASIC or APL or both. To run BASIC, it emulated a System/3; to run APL, it emulated a System/370. Both of those computers were emulated using two levels of microcode, which made the machine slow, but that technique did limit the complexity needed for the hardware of the physical CPU.

Sometimes, the IBM 5100 is unfavorably compared to the HP 85, which also used a cartridge tape and had a CRT and keyboard built in, as the HP 85 was about one-fifth the price; however, as the HP 85 was introduced five years later, that just reflects the rate of progress in microcomputers.

Over this same time frame, the microcomputer revolution was starting.

The cover of the July 1974 issue of Radio-Electronics magazine showed a computer you could build at home! It was the Mark 8, based on the Intel 8008 microprocessor.

The Intel 8008 fit in an 18-pin package, as small as that containing many ordinary integrated circuits. It used a single 8-bit bus for data and (in two parts) addresses. As there were two status bits included with the most significant byte of the address, it could only be connected to a maximum of 16K bytes of memory, not 64K, although at the time memory was too expensive for this to be much of a limitation. Initially, however, the support chips for the 8008 were in limited supply.

The Intel 8008 chip itself had been available from April 1972.

The Mark 8 computer based on the 8008 did not make much of a splash at the time.

The same was not the case, though, with the January 1975 issue of Popular Electronics. That was the one that had the Altair 8800 on the cover, based on the Intel 8080 chip, which had been available since April 1974. This chip was in a 40-pin package, a size in which many other 8-bit microprocessors were also packaged. It had a full 16-bit address bus along with an 8-bit data bus.

Magazines hit store shelves before their printed cover dates; according to Wikipedia, this magazine was distributed in November 1974. That can be said to be when the microcomputer revolution started in earnest.

The pace of events from then on was rapid.

In June 1976, Texas Instruments produced one of the first 16-bit microprocessors, the TMS 9900. It had an architecture very similar to that of the PDP-11, although there were also important differences. One feature that limited its speed, but allowed the circuitry for a 16-bit microprocessor to fit on a single die at the time, was that its sixteen general registers were all in main memory. A workspace pointer register indicated where they were in memory, allowing quick context switches for subroutines.
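The workspace-pointer idea can be shown with a toy model (hypothetical Python, not TI's actual instruction semantics): because the registers are just words in RAM, a context switch means changing one pointer rather than saving and restoring sixteen registers.

```python
# Toy model of memory-resident registers: one list slot per 16-bit word.
memory = [0] * 64

def read_reg(wp, n):
    """Register n of the active workspace is the word at wp + n here
    (the real TMS 9900 would use byte address WP + 2*n)."""
    return memory[wp + n]

def write_reg(wp, n, value):
    memory[wp + n] = value

caller_wp, callee_wp = 0, 16   # two 16-register workspaces in memory
write_reg(caller_wp, 0, 1234)  # caller stores something in its R0
wp = callee_wp                 # "context switch": just move the pointer
write_reg(wp, 0, 5678)         # callee's R0 is a different memory word
wp = caller_wp                 # switch back: caller's R0 is untouched
assert read_reg(wp, 0) == 1234
```

The cost, of course, is that every register access is a memory access, which is exactly the speed limitation mentioned above.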

December 1976 marks the first shipment of the Processor Technology SOL-20 computer (announced in the July 1976 issue of Popular Electronics). It was designed to be easier to use than an Altair or an IMSAI (a clone of the Altair, built to a higher standard of production quality); instead of being a box with lights and switches, it had a built-in keyboard and hooked up to a video monitor. The regulatory hurdles to including an RF modulator for use with one's TV set hadn't yet been sorted out at that time.

It was moderately popular with early adopters. However, it was in the next year that the floodgates opened, as 1977 was the year of the original Commodore PET computer, the first Radio Shack TRS-80, and the Apple II. The Apple II and the Commodore PET were both introduced at the first West Coast Computer Faire, which opened on April 16, 1977. The TRS-80 was announced on August 3, 1977.

The Exidy Sorcerer, also identified with the early days of computers, dates from April, 1978, and so it came after these three famous computers.

1977 was also the year when North Star offered a 5 1/4" floppy disk drive system, admittedly one using more expensive and harder-to-find hard-sectored floppy disks, for S-100 bus computers. It included North Star DOS, a simple operating system with two-letter commands.

The Intel 8086 chip was released in the middle of 1978. This powerful 16-bit microprocessor could address up to one megabyte of memory, since the contents of its 16-bit segment registers were shifted left by four bits before being added to the 16-bit addresses in instructions to form physical memory addresses. (Since the segment registers could be loaded by user programs, the virtual address space - or, more precisely, the address space seen by the programmer; swap files and virtual memory came along with Windows and the 80286 - was also one megabyte in size.) Before the IBM PC, some other computer systems based on the 8086 were sold, but they tended to be expensive.
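The segment arithmetic can be stated in a couple of lines; this sketch is my own illustration, and models the 20-bit wrap-around of the original 8086:

```python
def physical_address(segment, offset):
    """8086 real mode: the 16-bit segment value, shifted left 4 bits,
    is added to the 16-bit offset to form a 20-bit physical address."""
    return ((segment << 4) + offset) & 0xFFFFF  # wrap at 1 megabyte

# Different segment:offset pairs can address the very same byte:
assert physical_address(0xB800, 0x0000) == physical_address(0xB000, 0x8000)
```

One consequence visible in the sketch is that addresses above 0xFFFFF wrap around to the bottom of memory on the 8086, a quirk later PC-compatible machines had to reproduce with the "A20 gate".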

The Atari 400 and 800 were introduced in January, 1979. The less expensive model, the 400, had a membrane keyboard; these were the first home computers of the generation that followed the three home computers introduced in 1977.

The Sinclair ZX-80 was announced on January 29, 1980; it was a major stride forward in making a computer of some sort widely affordable. Its successor, the Sinclair ZX-81 from March 1981, was considerably more widely available in the form of the Timex-Sinclair 1000, and also considerably more successful. These machines displayed text and graphics on a TV set, but only in black and white, and the keyboard was a small membrane surface with blister bumps; such compromises were made to attain the very low price.

The TRS-80 color computer, which used the advanced Motorola 6809 processor, an 8-bit processor with a multiply instruction, was introduced in September, 1980, another entry in the second generation of 8-bit home computers.

The Commodore VIC-20 began life in Japan in 1980, and was introduced to the rest of the world in 1981; it too made computers more affordable, but it had a "real" keyboard and it displayed text and graphics in color, admittedly with only 22 characters per line of text, on the TV set to which it was connected.

By the time the IBM Personal Computer was announced on August 12, 1981, various brands of 8080 and Z-80 based computers running the CP/M operating system were well established in the market. The IBM Personal Computer, although not fully compatible with them, was very similar, while offering the possibility of expanding main memory to one megabyte instead of being limited to 64K bytes (although some CP/M based systems used external circuitry to allow up to 128K bytes of memory to be used). This possibility of expansion, plus the prestige of the IBM name - and this didn't just mean a reputation for quality; while many systems used the same CP/M operating system, those made by different manufacturers weren't fully compatible with each other, so the IBM name also meant standardization, and a convenient and competitive market for third-party software - made the IBM Personal Computer an immediate success.

Although the 8088 chip in the IBM PC was a 5 MHz part, the computer used a 4.77 MHz clock to simplify the design of its video display circuitry. The IBM PC was expensive enough so that its initial success was with business users, but the CGA display card for it produced an output compatible with North American NTSC television standards, and it was available without floppy disk drives, as it had a port on the back for a special cassette tape drive (which still used the standard Philips audio Compact Cassette).

The Commodore 64 was introduced in January 1982; with 64K of memory, it competed with machines like the Apple II and the TRS-80, but at a lower price; it also competed with the Atari 400 and 800, and the Radio Shack Color Computer. Like the Commodore PET and VIC-20, the Apple II, and the Atari 400 and 800 from 1979, it was based on the MOS Technology 6502; this chip was introduced after the Motorola 6800, and was intended as a lower-cost competitor to it. Commodore used so many 6502 chips that it purchased MOS Technology outright in 1976. The Commodore 64 was the successor to the VIC-20 from 1980, and had the same general shape, if a different color scheme.

The Intel 80286 chip was introduced on February 1, 1982. It offered a "protected mode" of operation which allowed the use of 24-bit addressing - the same address size as the original IBM System/360 - allowing access to 16 megabytes of main memory. It lacked the "virtual real mode" feature introduced on the 80386, however, which meant there was no practical way to use the chip for an advanced operating system that could both use the increased amount of memory and still run all the popular software written for the IBM PC with the original addressing conventions of the 8086 architecture.

July 1982 is when the Timex-Sinclair 1000 computer was introduced, a home computer for those of modest means.

1984 is memorable for two advances. In that year, IBM brought out a larger model of their personal computer, the IBM Personal Computer AT, which used the more powerful 80286 processor. A far more influential event, though, was the introduction by Apple of the Macintosh computer.

The Apple Lisa computer had been introduced in January, 1983. Like the Macintosh, it had a graphical user interface, so when the Macintosh came out, it did not come as a total shock. The Macintosh, however, was far less expensive, and was thus something home users could consider. Both of these computers were based on the Motorola 68000 processor, also used for early Apollo workstations, the Fortune Systems 32:16 computer, and a laboratory data collection computer from IBM, all of which were quite expensive systems. Motorola eventually made the 68008 chip, a version of the 68000 that had an external 8-bit bus available, and that was used in the Sinclair QL computer, announced on January 12, 1984.

The Sinclair QL, unlike the Macintosh, did not have a graphical user interface, although it did interact with the user by means of menus.

The Macintosh was famously announced with a television commercial that aired during the Super Bowl, which is the culminating game of American football, on January 22, 1984.

In September 1986, Compaq brought out the Compaq Deskpro 386, which used Intel's new 386 microprocessor. This computer was faster and more powerful than an IBM Personal Computer AT, and with appropriate software, it could make use of more memory. Of course, it was expensive, but as the years went by, prices of systems based on the 80386 chip came down; as well, a compatible chip with a 16-bit external bus, the 80386SX, was offered by Intel starting in 1988, which allowed more affordable systems to use the capabilities that the 80386 offered over the 80286.

The Atari ST was introduced in June, 1985, although there was a delay of a month before it became widely available; it was based on the 68000 processor, and offered a graphical user environment by licensing GEM Desktop from Digital Research. It was considered to be an inexpensive alternative to the Macintosh, and it was also significantly less expensive than the Amiga, although that was partly because it lacked the special graphics chips that distinguished that computer.

The Commodore Amiga was introduced on July 23, 1985: it had a 68000 as its processor, but as that processor, powerful as it was by the standards of the time, was augmented by special graphics and sound chips, the Amiga had multimedia capabilities which were not available on the x86 and Windows platform until years later. Although less expensive than a Macintosh, its success in the market was limited, but it lasted until Motorola stopped making chips with the 680x0 architecture. In fact, after its apparent demise, a German company acquired the rights to the system, and successors to the Amiga are still made by that company and others to this very day, but these are mainly of interest to enthusiasts, however much the machine might deserve to be a mainstream computing alternative on its intrinsic merit.

On September 15, 1986, Apple announced the Apple IIgs; this computer used the WDC 65C816 chip, which had been introduced in 1983. It was a chip that was compatible with the very popular 6502 processor, but unlike that 8-bit chip, it was a 16-bit processor that could switch from operating as a 6502 to operating in its own native 16-bit mode.

In April, 1992, Microsoft offered version 3.1 of their Microsoft Windows software to the world. This allowed people to use their existing 80386-based computers compatible with the standard set by the IBM Personal Computer to enjoy a graphical user interface similar to that of the Macintosh, if not quite as elegant, at a far lower price.

There was a Microsoft Windows 1.0, and there were a Microsoft Windows 2.0 and a 3.0 as well, of course. The first version of Microsoft Windows required all the open windows to be tiled on the screen, rather than allowing overlapping windows as on the Macintosh and the early Xerox machines that pioneered the GUI, and this was generally seen as a serious limitation by reviewers at the time. Windows 3.0 was promoted by an arrangement that allowed Logitech to include a free copy with every mouse that they sold.

It was Windows 3.1, however, that enjoyed the major success that led to Windows continuing the dominance previously enjoyed by MS-DOS. The major factor usually credited for this is that Windows 3.1 was the first version to include TrueType, a technology licensed from Apple, thus allowing it to be used for preparing attractive documents on laser printers in a convenient fashion, with the ability to see the fonts being used on the computer's screen, just as had been possible on the Macintosh.

A brief note on digital vector fonts might be in order here.

Apple developed the TrueType format, which allowed the curved portions of character outlines to be represented by quadratic spline curves, as an alternative to licensing a digital font format that was already in existence and wide use at the time, Adobe's Type 1 fonts, which used Bezier curves.
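For the curious, the difference between the two formats' curve segments is small in code. This sketch (my own illustration, using (x, y) tuples) evaluates a TrueType-style quadratic segment and a Type 1-style cubic Bezier:

```python
def quadratic_bezier(p0, p1, p2, t):
    """TrueType-style quadratic segment: on-curve endpoints p0 and p2,
    one off-curve control point p1; t runs from 0 to 1."""
    u = 1.0 - t
    return tuple(u*u*a + 2.0*u*t*b + t*t*c
                 for a, b, c in zip(p0, p1, p2))

def cubic_bezier(p0, p1, p2, p3, t):
    """Type 1-style cubic Bezier segment with two control points,
    giving more shape freedom per segment than the quadratic."""
    u = 1.0 - t
    return tuple(u**3*a + 3.0*u*u*t*b + 3.0*u*t*t*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))
```

A cubic segment can bend in ways a single quadratic cannot, so TrueType outlines typically need more segments per glyph; in exchange, quadratics are cheaper to rasterize.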

The Adobe Type 1 font format, however, was not the first digital font format in existence. One which preceded it was the Ikarus font format, developed by Peter Karow. This format merely used circular arcs, so that several would have to be used to patch together a line with a changing curvature, but this was still much better than using tiny straight lines. This format is still supported by the program Font Master from DTL.

And Donald Knuth devised METAFONT, which, instead of describing characters in terms of outlines, described a center line to be drawn with an imaginary pen nib that was itself also described. This accompanied his TeX typesetting project.

But the granddaddy of all the electronic outline font formats was devised by Peter Purdy and Ronald McIntosh back in the 1960s for the Linofilm electronic CRT typesetter. This is the one that used the Archimedean spiral as the basic element for building the curved lines in characters, since it was an obvious and mathematically simple line of varying curvature that could substitute for the draftsman's French curve.
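The appeal of the spiral is that it is trivial to generate: in polar coordinates the radius grows linearly with the angle, giving a smoothly varying curvature. A minimal sketch:

```python
import math

def archimedean_point(a, b, theta):
    """Point on the Archimedean spiral r = a + b*theta; with b > 0
    the curvature decreases smoothly as theta increases, which is
    what made short arcs of it useful as French-curve substitutes."""
    r = a + b * theta
    return (r * math.cos(theta), r * math.sin(theta))

# With b = 0 the "spiral" degenerates to a circle of radius a.
x, y = archimedean_point(2.0, 0.0, 1.0)
assert abs(x * x + y * y - 4.0) < 1e-12
```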

The year 1992 is also notable for the introduction, by the Digital Equipment Corporation, of the Alpha AXP 21064 microprocessor. This microprocessor, like several other RISC processors, avoided the use of condition code bits. It was one of the earliest chips to have a 64-bit architecture. Because it was a very high-performance design, representing the peak of what was possible to put on a single chip at the time, it was also quite expensive, and that limited its popularity in the marketplace, but it is remembered for the many innovations which it embodied.

That is not to say that nothing happened between 1984 and 1992. July 1985 marked the availability of the Atari ST computer, and in early 1986 one could get one's hands on an Amiga 1000. So there were GUI alternatives cheaper than a Macintosh before 1992, but this was one that just involved buying software for the computer you already had, the computer that was the standard everyone else was using for serious business purposes.

In 1989, the Intel 80486 chip came out; unlike previous chips, it included full floating-point hardware as a standard feature right on the chip, although later the pin-compatible 80486SX was offered at a lower price without floating-point.

In February, 1990, IBM released its RS/6000 line of computers. They were based on the POWER architecture, which later gave rise to the PowerPC. This RISC architecture had multiple sets of condition codes to allow the instructions that set conditions to be separated from conditional branch instructions, reducing the need for branch prediction.

The high-end machines in the RS/6000 line used a ten-chip processor, the RIOS-1, notable for being the first microprocessor to use register renaming and out-of-order execution. This technique was invented by IBM for the IBM System/360 Model 91 computer. That computer had several immediate descendants, the 95, the 360/195 and 370/195, but after them, IBM did not make use of this groundbreaking technique in its computers for a while. This has been perceived by some as an inexcusable oversight on their part, but given that this technique is only applicable to computer systems of a size and complexity that, until recently, were associated only with the very largest computers, it should be more appropriately viewed as a natural consequence of IBM making the computers that were relevant to its customers within its core business.

And IBM did make use of out-of-order execution when appropriate, and often before others.

Out-of-order execution thus reappeared first in the RIOS-1 in 1990, as noted, and then in IBM mainframes with the IBM ES/9000 Model 520 in 1992. It was used again with the G3 CMOS processor in the 9672 family in 1996.

The RIOS-1, like the 360/91, only used OoO for its floating-point unit; the Pentium Pro and Pentium II processors, which introduced this technique to the world of personal computers, applied it across their execution units.

Out-of-order execution was first used in CMOS single-chip processors implementing the zArchitecture with the z196 from 2010.

In 1993, Intel offered the first Pentium chips, available in two versions, the full 66 MHz version, and a less expensive 60 MHz version. These chips were criticized for dissipating a considerable amount of heat, and there was the unfortunate issue of the floating-point division bug. These chips were pipelined, but in-order, in both their integer and floating-point units.

In 1994, the last chip in the Motorola 680x0 series was introduced, the 68060. Its integer unit, but not its floating-point unit, was pipelined (with an in-order pipeline). This processor was never used in any Apple Macintosh computers (or, more correctly, no Apple Macintosh computers were manufactured with that chip; there were third-party upgrade boards that let you replace the existing 68040 processor with a 68060 and some interface circuitry), as Apple began selling Macintosh computers using the PowerPC chip instead in March 1994.

As a result, the 68060 never made much of a splash in the market, although there were a few high-end Amiga computers, such as the Amiga 4000T, that used it. There was even a motherboard, the Q60, made by a German company, that could fit in a PC case and which allowed one to run the operating system from the Sinclair QL computer with the 68060 chip.

The Intel Pentium Pro chip was announced on November 1, 1995. This design was optimized for 32-bit software, and was criticized for its performance with the 16-bit software that most people were still using. The later Pentium II resolved that issue, and was otherwise largely the same design, of course with improvements; however, unlike the case with the Pentium Pro, its cache ran at a slower speed than the processor. Further improvements appeared in the Pentium III. A hardware random-number generator was included in the support chipset for that processor, as part of a feature which included a serial number on the chip that software could read. Although that feature could be disabled, it was controversial; the intent was to facilitate the distribution of software that had to be well-protected against piracy or misuse (i.e., software to display protected content under controlled circumstances).

The Pentium Pro was available in different versions; a 150 MHz version used a processor built on a 500 nm process (or 0.5 micron), while a 200 MHz version used a processor built on a 350 nm process.

The Pentium 4 chip, introduced on November 20, 2000, was a completely new design. It had fewer gate delays per pipeline stage. This meant that the chip's clock frequency was higher, but instructions took more cycles to execute. At the time, this sounded to many people like a marketing gimmick rather than an improvement. In fact, though, it was a real improvement: the Pentium 4 was a pipelined chip with out-of-order execution, intended to issue new instructions in every single cycle rather than waiting to perform one instruction after another, so a higher clock frequency did mean that more instructions were being performed in a given time, as long as the pipeline could be kept just as full.
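This reasoning can be made concrete with some hypothetical numbers (chosen purely for illustration, not measured Pentium 4 figures): on a full pipeline only the first instruction pays the whole latency, so a deeper pipeline at a higher clock still finishes a long run of independent instructions sooner.

```python
def run_time(n_instructions, freq_hz, pipeline_stages, ipc=1.0):
    """Seconds to finish n instructions on a kept-full pipeline:
    the first instruction takes pipeline_stages cycles to emerge,
    and the rest then complete at the steady issue rate (ipc)."""
    cycles = pipeline_stages + (n_instructions - 1) / ipc
    return cycles / freq_hz

# Hypothetical: a 20-stage pipeline at 3 GHz vs a 10-stage one at 2 GHz.
deep = run_time(1_000_000, 3.0e9, 20)
shallow = run_time(1_000_000, 2.0e9, 10)
assert deep < shallow  # the higher clock wins while the pipeline stays full
```

The catch, as the text notes, is the phrase "kept just as full": branch mispredictions drain a deep pipeline, and each refill costs the full latency again.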

But initially it required the use of a new and more expensive form of memory, which did not help its success in the market.

Intel's subsequent chips in the Core microarchitecture went back to many aspects of the Pentium III design for an important technical reason: shorter pipeline stages meant that the transistors on the chip were doing something a greater fraction of the time. This would produce more heat, and the characteristics of newer, smaller integrated circuit processes were not proving as favorable as hoped (this is known as the demise of Dennard scaling), and so a similar design on a newer process would have dissipated more heat than it would be practical to remove.

Some sources give a date of 2006-2007 for when Dennard scaling came to an end; looking at a graph of how clock speeds improved in processors, it seemed to me as if the transition from rapid progress to a much slower pace that shortly came almost to a stop took place in 2003. A closer look at what was going on at the time, though, shows that the high clock rates seen in 2003 were due to the design characteristics of the Pentium 4 processor of that period.

And, thus, other means of increasing performance were needed, and we entered the era of dual-core and quad-core microprocessors.

IBM came out with a dual-core PowerPC processor in the POWER4 series in 2001.

It was not until 2005, though, that dual-core chips reached the x86 market: a dual-core Opteron was released on April 21, Intel released the Pentium D and the Pentium Extreme Edition 840 in May, and AMD released the Athlon 64 X2 on May 31.

The first Core 2 Quad processor was the Extreme Edition QX6700, introduced in November 2006; it was followed by one "for the rest of us" in January 2007, the Q6600.

The Q6600 was a 65nm chip, with a clock frequency of 2.4 GHz, making it not too different from contemporary chips in raw speed, even though today's chips are on considerably smaller process nodes.

These early quad core chips were a multi-chip module with two dual-core dies in one package; in March 2008, AMD released a monolithic quad-core Phenom processor. However, Intel had already offered simultaneous multithreading, under the name Hyper-Threading, on the Pentium 4, whereas AMD did not introduce SMT to its line-up of processors until much later, with Ryzen. And the original Threadripper 1950X from AMD used two eight-core dies to achieve its sixteen cores, while Intel's 18-core i9-7980XE was monolithic.

For comparison, in May, 2003, Pentium 4 chips with clock frequencies of up to 2.8 GHz on a 130 nm process were released.

In both cases, more expensive versions with even higher speeds were released shortly after; these are merely the top-speed chips considered to be part of the "mainstream". Since the Pentium 4 achieved a high clock rate by using unusually short pipeline stages, however, the fact that similar clock speeds were subsequently achieved by the Core 2 design (which was viewed at the time as a return to an internal design, or microarchitecture, similar to that of the Pentium III) would imply that the move from 130 nm to 65 nm did make the logic gates faster, making it correct to view 65 nm as the point at which Dennard scaling came to an end.

We don't have to guess, however. Pentium III chips were also produced on a 130 nm process, and they went up to 1.4 GHz in speed, half the clock frequency of Pentium 4 chips made on the same process.

At 45nm, one could go up to 3.33 GHz; at 32nm, up to 3.4 GHz; at 22nm, up to 3.5 GHz; so progress in clock speed was very gradual after that point.

Outside the Processor

As well, some dates to provide a frame of reference for the progress in bus connectors in processors might be in order.

In the original IBM PC, in 1981, memory was either added in sockets on the motherboard, or by means of memory cards that plugged into the standard peripheral bus, which was also used for video cards. The IBM Personal Computer/AT from 1984 included a revised bus with an extended connector that was upwards compatible, adding support for a larger address bus, and for a 16-bit data bus instead of an 8-bit data bus.

After the IBM Personal System/2 introduced the Micro Channel bus in 1987, a competing standard, EISA, was offered by other computer manufacturers to offer similar features but with upwards compatibility with older peripherals.

Then the PCI bus was introduced by Intel in 1992, originally in a 32-bit version.

By 1986, 30-contact single in-line memory modules (SIMMs) were used with AT-compatible computers and others. The 72-pin SIMM was adopted when newer processors encouraged a move to a 32-bit memory bus from a 16-bit one, and the 168-pin DIMM, similarly, replaced matched pairs of 72-pin SIMMs as a 64-bit memory bus was needed for the Pentium. Newer generations of memory have resulted in changes to the DIMM design, adding more contacts.

High-performance graphics cards began to move to the Accelerated Graphics Port (AGP) soon after Intel announced the spec in 1997, and then they, along with other peripherals, moved to PCI Express (PCIe) after it was introduced in 2003.

The Transition to 64 Bits

As noted above, one of the first 64-bit microprocessors was the DEC Alpha, first released in 1992. The MIPS R4000 was announced on October 1st, 1991, and has been referred to as the first 64-bit microprocessor; this chip, and its derivatives, were used in some SGI workstations.

The Itanium was announced by Intel on October 4th, 1999. The first Itanium chips were released in June 2001. This was intended to be the architecture to be used by Intel customers who needed a 64-bit address space. The Pentium Pro, from November 1, 1995, introduced Physical Address Extensions, so that x86-based systems could have large memories even if individual programs could only make use of a virtual memory no more than 2 gigabytes in size.

AMD responded by announcing, in 1999, their plan to provide a 64-bit extension to the x86 architecture. Their first 64-bit Opteron chips were released in April, 2003, and Intel accepted that 64-bit virtual addresses were needed by their x86 customers; they adopted the AMD scheme under their own name of EM64T (with a few minor changes; nearly all programs use the subset of features common to both manufacturers), releasing chips which used it starting from 2005.

In the meantime, IBM delivered its first mainframes with the 64-bit zArchitecture, modified from the 32-bit architecture of previous mainframes derived from the IBM System/360, in 2000.

There were still a number of RAS features (Intel used that acronym to stand for Reliability, Availability, and Security, whereas originally IBM used it to mean Reliability, Availability, and Serviceability; of course, unlike an IBM 360 mainframe, one can't open up a microchip to swap out circuit boards) provided on Itanium processors that were not available even on the high-end commercial server Xeon chips with the x86 architecture.

This changed much later, in 2014, with the introduction of the Xeon E7 v2 line of processors.

History is Still Being Made: the AMD Resurgence

October 12, 2011 was the day when the FX-4100, FX-6100, FX-8120 and FX-8150 processors from AMD were released. These were desktop processors with four, six, and (in the last two cases) eight cores respectively, based on the new Bulldozer microarchitecture.

These chips were made on a 32nm process. A pair of cores shared a single 256-bit SIMD unit, which limited the design's performance for programs that made use of 256-bit AVX instructions.

The base clock frequency of the FX-4100 was 3.6 GHz. However, the Bulldozer design was based on individual pipeline stages with a small number of gate delays, like that of the Pentium 4 from Intel.

This was the beginning of a difficult era for AMD. Chips made with the Bulldozer microarchitecture, and its successors Piledriver, Steamroller, and Excavator, were perceived as having very disappointing performance. This led to AMD competing primarily in the lower end of the market, and having to price its chips according to the performance they actually achieved, which was less than had been expected.

On March 2, 2017, the first Ryzen chips, the Ryzen 7 1800X, the Ryzen 7 1700X, and the Ryzen 7 1700, were available from AMD. These chips were based on an all-new Zen microarchitecture which corrected the mis-steps of the Bulldozer microarchitecture.

These chips were the first ones from AMD to include SMT (simultaneous multithreading), a feature Intel had offered for some time under the brand name HyperThreading.

Although the Zen microarchitecture was a big improvement over Bulldozer and Piledriver and the rest, the performance of an individual core was still not equal to that of a single core on an Intel processor.

But the Ryzen processors from AMD were still very impressive, because they had eight cores, while Intel's mainstream desktop processors had four.

Intel did also make server processors with higher core counts, and indeed during the Bulldozer years AMD also made Opterons with 12 and 16 cores. (The 16-core one was a Piledriver, however.) But those chips, being intended for business, sold at premium prices, and so software intended for the consumer, particularly computer games, generally wasn't designed to make effective use of a larger number of cores.

Thus, Intel's competitive response to the introduction of the first Ryzen chips was to come out with a six-core chip, the i7-8700K, in October 2017. Because its individual cores were more powerful, it matched the eight-core Ryzen chips in total throughput, and it performed significantly better on games that could only make use of a limited number of cores.

Also, while Ryzen gave each core its own 256-bit vector unit, rather than sharing one between pairs of cores as Bulldozer and its related successors had, Intel had meanwhile increased the vector processing power of its own cores, leaving AMD still behind.

The next generation of AMD Ryzen chips was announced on August 13, 2018; these included significant improvements over the previous generation, but Intel had also been improving its processors.

The third generation of AMD Ryzen chips was announced on July 7, 2019. At this point, AMD had achieved near-parity with Intel. AMD had spun off its chip fabrication operations into a separate company, GlobalFoundries, in October 2009. Because of the high cost of building facilities for more and more advanced process nodes, GlobalFoundries eventually declined to pursue the next major node after 14nm, although it later went somewhat beyond that with a 12nm process.

As a result, AMD had the CPU portion of its multi-chip-module Ryzen processors made by TSMC on its 7nm process.

Intel's 10nm process was roughly equivalent to the process TSMC called 7nm, but Intel was having trouble getting it to work, and those troubles would take longer than expected to resolve.

So with the third Ryzen generation, there was no longer any real reason to hesitate over choosing an AMD processor, even if one wanted the best.

On November 5, 2020, AMD announced their fourth generation of chips. While the previous generation was close enough to Intel's chips in per-core performance that the difference was not significant, with this generation AMD could now claim leadership.

Meanwhile, Intel was finally able to produce chips in volume on its 10nm process. However, not all the problems with that process had been eliminated; Intel was only making laptop chips on it, because the process as it stood could not attain clock rates that would be competitive on the desktop.

Thus, on March 30, 2021, Intel released the i9-11900K and the i7-11700K, among other chips, fabricated on the 14nm process but based on designs originally intended for 10nm.

Many reviews of these chips criticized them as having performance that was not much better, and perhaps even slightly worse, than their predecessors in Intel's previous generation.

However, these chips included support for AVX-512 instructions, and so I had suspected that once programs were written to make effective use of this new capability, the chips would turn out to be very impressive, giving AMD extremely serious competition. As of this writing, though, events have yet to justify my optimism.
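For a sense of scale, simple lane-count arithmetic (not a performance claim) shows why AVX-512 looked promising: its 512-bit registers hold twice as many single-precision values per instruction as the 256-bit registers of AVX2:

```python
# Lane counts per register for 32-bit (single-precision) floats.
avx2_lanes = 256 // 32     # AVX2: 8 lanes per register
avx512_lanes = 512 // 32   # AVX-512: 16 lanes per register

# In the ideal case, one AVX-512 instruction does the work of two AVX2
# instructions; real-world speedups depend on clock behavior, memory
# bandwidth, and how much of the code vectorizes.
peak_ratio = avx512_lanes / avx2_lanes

print(avx2_lanes, avx512_lanes, peak_ratio)
```

That theoretical 2× per-instruction ceiling is why well-vectorized software could, in principle, have shown these chips in a much better light.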

Also present on this site is this page with a few words concerning recent events in the GPU rivalry between AMD and Nvidia.

[Up] [Next]