The Third Generation

Just as transistors replaced vacuum tubes, integrated circuits then replaced discrete transistors.

Another major computer milestone took place on April 7, 1964, when IBM announced their System/360 line of computers. An image of a System/360 Model 50, from an advertising brochure for a third-party communications processor for System/360 computers, is shown at left. The Model 50 was the next larger model after the Model 40, IBM's most popular System/360 model. The 360/50 was microprogrammed, and had an internal ALU that was 32 bits wide.

These computers used microprogramming to allow the same instruction set, and thus the same software, to be used across a series of machines with a broad range of capabilities. The Model 75 performed 32-bit integer arithmetic and floating-point arithmetic, both single and double precision, directly in hardware; the Model 30 had an internal arithmetic-logic unit that was only 8 bits wide.

The initial announcement of the IBM System/360 referred to the models 30, 40, 50, 60, 62, and 70. Some time passed between the announcement and the shipment of the first System/360 computers, and in that interval advances in core memory technology overtook the top of the line. The Model 60 and Model 62, two versions of the same computer differing only in the speed of their core memory (cycle times of 2 microseconds and 1 microsecond respectively), were replaced by the Model 65, with still faster core memory having a cycle time of 0.75 microseconds; the Model 70 was likewise replaced by the Model 75, again with 0.75 microsecond memory in place of the Model 70's planned 1 microsecond memory.

The IBM System/360 was billed as using integrated circuits rather than discrete transistors, and this was generally accepted at the time. However, its integrated circuits were not monolithic integrated circuits; they were of a type that IBM termed Solid Logic Technology. An IBM image of such a circuit is shown at right. Easily visible in the image are three resistors, the black areas between the metal traces on the alumina substrate. The small square transistors, each with three traces leading to it, are harder to make out, so I have retouched the black-and-white image with a splash of color, highlighting each of the four transistors present in red.

Later, a System/360 Model 67 was introduced. This, however, was not a Model 65 with faster memory; instead, it was a Model 65 with the addition of Dynamic Address Translation, which allowed the computer to use a swap file and run time-sharing operating systems, not only ones from IBM, but also third-party ones such as the Michigan Terminal System. MTS was developed at the University of Michigan, in cooperation with the other universities that used it, such as Wayne State University, the Rensselaer Polytechnic Institute, the University of British Columbia, Simon Fraser University, the University of Alberta (Edmonton), the University of Durham, and the University of Newcastle upon Tyne, which shared its computer with both the University of Durham and Newcastle Polytechnic. (At the time, the University of Calgary was known as the University of Alberta (Calgary); as its computer was a Control Data 6600, MTS was not an option for it.) Michigan State University also ran MTS, but without actively participating in its development, and it was also used by Hewlett-Packard and by NASA's Goddard Space Flight Center.

By this time, IBM was already the dominant computer company in the world. The IBM 704, and its transistorized successors such as the IBM 7090, helped to give it that status, and the IBM 1401, a smaller transistorized computer intended for commercial accounting work, was extremely successful in the marketplace.

The System/360 was named after the 360 degrees in a circle: the basic instruction set worked with binary integers, while the floating-point instructions, and the commercial instructions for handling packed decimal quantities, were both optional features. The machine was thus, and was specifically advertised as, suitable for installations with either scientific or commercial workloads; and because the related features were options, neither kind of customer had to pay extra for the ability to handle the other kind of work.

However, the fact that IBM's computers were more expensive than those of its competitors somewhat limited the benefit of being able to use the same computer for these two divergent workloads. Grosch's Law (the observation that a bigger computer is a better bargain) did mean that one computer able to do twice as much work was cheaper than two; but the real benefits of combining scientific and commercial workloads on one machine lay elsewhere: flexibility of scheduling, ease of training, and the need for only one set of staff to operate the computer (mounting tapes, handling printouts, and so on).


IBM succeeded the System/360 series with System/370, announced on June 30, 1970. Several models of that series were improved versions of corresponding models of System/360. All the System/370 models used monolithic integrated circuits for their logic, rather than SLT (Solid Logic Technology), used by most System/360 models; some also used semiconductor main memory instead of core memory. While System/370 mainframes largely resembled System/360 mainframes, they had a different color scheme, being black instead of light beige. A System/370 Model 155 console is pictured at right.

Among the many innovations from IBM that advanced the state of computing, one can note the vacuum column, which significantly improved the performance of magnetic tape drives; the 1403 line printer, legendary for its print quality and reliability; and the hard disk, invented for the vacuum-tube RAMAC computer of 1956. Much later, IBM also invented the original 8-inch floppy disk, as a means of storing microcode for several models of the IBM System/370 series, such as the Model 145. And the Tomasulo algorithm, first implemented in the IBM System/360 Model 91, was another IBM invention, one which made modern computers with out-of-order execution possible. (The Control Data 6600 had a partial implementation of out-of-order execution which could deal with some kinds of pipeline hazards, but not with all the fundamental types.)

As a consequence of IBM's major presence in the computer industry, its computers were very influential. Before the IBM System/360, nearly all computers that worked with binary numbers (many instead worked only with decimal numbers) had a word length (the size of the numbers they worked with) that was a multiple of six bits. This was because a six-bit character could encode the 26 letters of the alphabet, the 10 digits, and an adequate number of punctuation marks and special symbols.

The IBM System/360 used an eight-bit byte as its fundamental unit of storage. This let it store decimal numbers in packed decimal format, four bits per digit, instead of storing them as six-bit printable characters (as the IBM 705 and the IBM 1401 computers did, for example).
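
To illustrate the saving concretely, here is a minimal sketch in Python; the function is my own illustration, and real System/360 packed decimal also stores a sign in the low half of the last byte, a detail omitted here:

    def to_packed_decimal(n: int) -> bytes:
        """Pack a non-negative integer two digits per byte, four bits per digit."""
        digits = [int(d) for d in str(n)]
        if len(digits) % 2:                  # pad on the left to an even count
            digits.insert(0, 0)
        return bytes((digits[i] << 4) | digits[i + 1]
                     for i in range(0, len(digits), 2))

    # 123456 takes six bytes as printable characters, but only three packed:
    print(to_packed_decimal(123456).hex())   # prints '123456'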

To clarify: some decimal-only computers from IBM itself, like the IBM 1620, the IBM 7070, and the NORC, as well as the LARC from Univac, had already been using packed decimal; and the Datamatic 1000 from Honeywell, a vacuum-tube computer with a 48-bit word, used both binary and packed decimal long before the System/360 came along.

So I'm not saying that IBM invented the idea of using no more bits than necessary to store decimal numbers in a computer; that was obvious all along. Rather, what I'm saying is that IBM's desire to use this existing technique led it to choose a larger size for storing printable characters. This larger size made it possible to use upper and lower case with the computer, although lower case was initially still regarded as a luxury, and it was not supported by most of the System/360 peripheral equipment that handled text.


Generally speaking, computers prior to the System/360 referred to the six-bit area of storage which could contain a single printable character as a character. The System/360 documentation referred to the eight-bit area of storage that could contain a character on that machine as a byte rather than a character. Why?

One obvious reason is to avoid confusion with other computers where the term character refers to a different amount of space.

As well, the 8-bit byte was chosen for the System/360 because in addition to containing a character, it could also contain two decimal digits in binary-coded decimal (BCD) form. Thus, a byte was an area of storage intended to hold other things besides characters.

This kind of flexibility was not precluded by the use of six-bit characters; in 1962, the NCR 315 computer had as its basic unit of storage the slab, which could contain either two six-bit characters or three BCD digits.

The word byte was not coined for the IBM System/360, however. Instead, it is generally believed to have originated at IBM for use with the IBM 7030 computer (also known as the STRETCH). That computer had a 64-bit word, but it had bit addressing, and so the size of a byte was variable rather than fixed.

The term "byte" was also used in documentation for the PDP-6 (which had the PDP-10 as a compatible successor); on that machine, byte manipulation instructions could operate on any number of bits within a single 36-bit word, but they could not cross word boundaries, unlike the case of the STRETCH.

As well, in his set of books The Art of Computer Programming, Donald E. Knuth devised a simple hypothetical computer architecture, MIX, to allow algorithms to be presented in assembler language. To avoid making his examples specific to either binary or decimal computers, he defined the memory cells of MIX, called bytes, as having the capacity to hold anywhere from 64 to 100 different values. So four trits in a ternary computer, with 81 possible values, would also qualify (and I remember reading of a professor who implemented MIX with that byte size in a simulator program, so as to validate MIX programs submitted by students).

So a competing definition of a byte existed (albeit one specific to the MIX architecture) that specifically excluded eight-bit bytes; and I remember that this led to someone writing a letter to BYTE magazine saying that it was mistaken in using the term in connection with all those 8-bit chips out there!

Despite the PDP-6, though, it definitely was the IBM System/360 that put the word "byte" into the language, making it familiar to everyone who worked with computers. And it is almost universally thought of as referring to eight bits of memory. Even so, to completely eliminate any possibility of ambiguity, many international communications standards use the term "octet" to refer to exactly eight bits of storage. This may also have to do with the standards bodies being European, and IBM being an American company, of course.

One could have claimed that the PDP-10 was the only machine to use "real" bytes, because it was the only machine to follow, even if imperfectly, the model of the STRETCH, for which the term was coined!

On the other hand, one of the most obvious examples of IBM's influence in popularizing the eight-bit byte was when the Digital Equipment Corporation brought out their PDP-11 computer, with a 16-bit word, in 1970; their previous lines of computers had word lengths of 12, 18, and 36 bits. This will be more fully covered in a later section devoted to the minicomputer.


In 1969, a later implementation of the System/360, the System/360 Model 195, combined cache memory, introduced on the large-scale microprogrammed 360/85, with pipelining based on reservation stations and the Tomasulo algorithm, equivalent to out-of-order execution with register renaming, introduced on the 360/91 (and used, on both the 91 and the 195, only in the floating-point unit). This was a degree of architectural sophistication that would only be seen in the mainstream of personal computing with microprocessors when the Pentium II came out (the nearly identical Pentium Pro being somewhat outside the mainstream).

Simple pipelining, where the fetch, decode, and execute phases of successive instructions were overlapped, had been in use for quite some time; splitting the execute phase of instructions into parts would only be useful in practice if successive instructions didn't depend on one another. The IBM 7094 II was one example of a computer from IBM that used this technique.
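
A toy model in Python (entirely my own construction, not any IBM design) makes the point about dependencies:

    def cycles(instructions, stages=3):
        """Cycles for a toy pipeline: each instruction takes one cycle per
        stage and one can issue per cycle, but an instruction must wait
        until every register it reads has been written back."""
        ready = {}                        # register -> cycle value is ready
        last = 0
        for i, (dest, sources) in enumerate(instructions):
            start = max([i] + [ready.get(r, 0) for r in sources])
            ready[dest] = start + stages
            last = max(last, ready[dest])
        return last

    independent = [('a', ()), ('b', ()), ('c', ())]
    dependent = [('a', ()), ('b', ('a',)), ('c', ('b',))]
    print(cycles(independent))   # 5: execution overlaps
    print(cycles(dependent))     # 9: each instruction waits for the last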

The IBM STRETCH computer from 1961, one attempt by IBM to design a very fast and powerful computer, was a disappointment for IBM, just as the 360/91 later was. The 360/85, on the other hand, performed better than expected, which led IBM to add cache to the 360/91's design, producing the 360/195 and correcting its performance. The image below shows how the IBM System/360 Model 195 was pictured in one IBM advertisement:



Supercomputers

The word "Supercomputer" was used on the cover of the April, 1970 issue of Datamation, which had articles about the Control Data 7600 computer (a compatible successor to the Control Data 6600), the IBM System/360 Model 195, and the Illiac IV.

A book about the Illiac IV called it "The First Supercomputer". This computer, built by Burroughs, processed one stream of instructions, which were executed in parallel by 64 calculating units arranged in a square array. Each unit had its own disk storage as well as memory; the units could communicate with adjacent units, and there was provision for individual units to execute or skip each given instruction based on local flags. Although those behind the Illiac IV were very enthusiastic about it, it often required new parallel algorithms, and these often performed a larger number of total operations to obtain a given result; a computer of this design was therefore significantly less efficient in terms of total throughput per transistor, so it only made sense to use such a computer when it was the only way to obtain one's answers in the time desired. Thus, large computers having a SIMD (Single Instruction stream/Multiple Data stream) design never became very popular. However, that term is also often applied to the vector instructions, such as MMX, that are now quite common in microprocessors; indeed, it is applicable to any kind of computing on vectors as units.

The Illiac IV effort was headed by Daniel Slotnick. He had proposed a massively-parallel SIMD computer back in the early 1960s, by the name of SOLOMON, and an effort involving Westinghouse and the U.S. military built small prototype systems. A paper about that system was presented at the 1962 FJCC. I also remember that this effort received enough notice to be mentioned in some works about computers intended for the general public.

The year 1976 was marked by the installation of the first Cray I computer, at the Los Alamos National Laboratory. Pictured at right is another Cray I, in use at the National Center for Atmospheric Research. A few years previously, a couple of other computers, such as the STAR-100 from Control Data and the Advanced Scientific Computer from Texas Instruments, had directly operated on one-dimensional arrays of numbers, or vectors. Because those earlier machines performed calculations only on vectors in memory, they provided enhanced performance only in those specialized applications where the vectors could be quite long. The Cray I had a set of eight vector registers, each with room for sixty-four 64-bit floating-point numbers, and, as well, attention was paid to ensuring it had high performance in those parts of calculations that worked with individual numbers.
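
A rough model shows why this mattered. If the time for a vector operation is a fixed startup cost plus a cost per element, a long startup is only amortized over long vectors. The figures below are arbitrary illustrative assumptions of mine, not measured numbers for either machine:

    def cost_per_result(n, startup, per_element=1.0):
        """Average cycles per result for a vector operation of length n."""
        return (startup + n * per_element) / n

    for n in (10, 100, 1000):
        print(n,
              round(cost_per_result(n, startup=100), 2),   # memory-to-memory
              round(cost_per_result(n, startup=10), 2))    # vector registers
    # 10   11.0  2.0
    # 100   2.0  1.1
    # 1000  1.1  1.01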

As a result, the Cray I was very successful, sparking reports that the supercomputer era had begun. Within a few years, several other companies offered computers of similar design, some considerably smaller and less expensive than the supercomputers from Cray, for users with smaller workloads; as well, add-on units, also resembling the Cray I in their design, were made to provide vector processing for existing large and mid-range computers. IBM offered a Vector Facility for their 3090 mainframe, starting in October 1985, and later for some of their other large mainframes, based on the same principles; Univac offered the Integrated Scientific Processor for the Univac 1100/90; and the Digital Equipment Corporation offered a Vector Processor for their VAX 6000 and VAX 9000 computers, also patterned after the Cray design.


Another line of development relating to vector calculations on a smaller scale may be noted here.

The AN/FSQ-7 computer, produced by IBM for air defense purposes, performed calculations on two 16-bit numbers at once, rather than on one number at a time as other computers did, to improve its performance in tracking the geographical locations of aircraft. This vacuum-tube computer was delivered in 1958. Much later, after the SAGE early-warning system of which it was a part was scrapped (ICBMs, rather than bomber aircraft, having become the delivery mechanism of choice for nuclear weapons), the front panel of at least one AN/FSQ-7 ended up being used as a prop in a large number of movies and TV shows. Sometimes these same movies or TV shows also used the front panel of the Burroughs 205 computer.

Here are photographs of the AN/FSQ-7 and the Burroughs 205 (originally the ElectroData 205), so that you can see whether you recognize them:

Two computers planned as successors to it offered more flexibility: the AN/FSQ-31 and AN/FSQ-32 computers, dating from around 1959, had a 48-bit word, and their arithmetic unit was designed so that it could perform arithmetic on single 48-bit numbers or on pairs of 24-bit numbers. Similarly, the TX-2 computer, completed in 1958, could divide its 36-bit word into two 18-bit numbers, four 9-bit numbers, or even one 27-bit number and one 9-bit number.

In 1997, Intel introduced its MMX feature for the Pentium microprocessor; it divided a 64-bit word into two 32-bit numbers, four 16-bit numbers, or eight 8-bit numbers.

This was the event that brought this type of vector calculation back to general awareness, but before Intel, Hewlett-Packard provided a vector extension of this type, MAX, for its PA-RISC processors in 1994, and Sun provided VIS for its SPARC processors in 1995.
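
The lane arithmetic involved can be sketched in Python. The function below mimics an MMX-style packed add (my own illustration, not Intel's definition), with each lane wrapping around on overflow rather than carrying into its neighbour:

    def packed_add(x: int, y: int, lane_bits: int, lanes: int) -> int:
        """Add two packed words lane by lane; each lane wraps modulo
        2**lane_bits instead of carrying into the lane to its left."""
        mask = (1 << lane_bits) - 1
        result = 0
        for i in range(lanes):
            shift = i * lane_bits
            result |= (((x >> shift) + (y >> shift)) & mask) << shift
        return result

    # Four 16-bit lanes in a 64-bit word; 0xFFFF + 1 wraps to 0 in its lane.
    print(hex(packed_add(0x0001FFFF00101234, 0x0001000100010001, 16, 4)))
    # prints 0x2000000111235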

Since then, this type of vector calculation has been extended beyond what the TX-2 offered; with AltiVec for the PowerPC architecture, and SSE (Streaming SIMD Extensions) from Intel, words of 128 bits or longer are divided not only into multiple integers, but also into multiple floating-point numbers.

In January, 2015, IBM announced that its upcoming z13 mainframes, since delivered, would include vector instructions; these were also of this type, now common on microcomputers, as opposed to the more powerful Cray-style vector operations offered in 1985.

As microprocessors became more and more powerful, it wasn't too many years after the Cray I popularized the concept of the supercomputer that supercomputers, instead of being individual CPUs of a particularly large and fast kind, became vast arrays of interconnected computers using standard microprocessors. Unlike the Illiac IV, these were MIMD (Multiple Instruction stream/Multiple Data stream) computers, since the microprocessors operated normally, each one fetching its own program steps from memory; this provided more flexibility.

It may be noted that IBM introduced its z/Architecture in the year 2000; this extension of the System/360 mainframe architecture provided 64-bit addressing. The first machine on which it was implemented was the z900, which was announced in October, 2000, and was to be available in December, 2000.

The 64-bit Itanium from Intel only became available in June, 2001, and the first chips from AMD that implemented the x64 extension to the 80386 architecture, which Intel later adopted as EM64T, were shipped in April, 2003.

However, AMD had released the x64 specification in 1999, after Intel had described the Itanium; it was a reaction to Intel's way of moving to 64 bits.

Thus it seemed that the microprocessor beat the mainframe to 64-bit addressing, but the 64-bit z/Architecture mainframe was delivered first.

However, there are other microprocessors besides those which are compatible with the Intel 80386. The Alpha microprocessor was introduced in 1992, and Sun adapted its SPARC architecture to 64-bit addressing in 1995, so IBM was anticipated by microprocessors used in servers and high-end workstations.

The Telephone Company

A few words about a subject neglected in other parts of this history of the computer are in order here. A number of innovations important to the computer revolution came from Bell Labs over the years.

In 1947, the transistor was invented by Bardeen, Brattain, and Shockley, at Bell Labs.

The UNIX operating system was developed at Bell Labs. Initial work started on a PDP-7 in 1969, and work then continued on a PDP-11. A version of Unix for the PDP-11 was distributed by Bell Labs to academic users only, on a non-profit basis. Later, UNIX became generally available, once this could be done while satisfying the requirements of the antitrust consent decree that AT&T was under.

Just as IBM, because it had greater economies of scale, was able to use integrated circuits in its System/360 line of computers, by developing its own type of integrated circuit, Solid Logic Technology, at a time when monolithic integrated circuits were still too expensive to be practical, Bell also developed its own kind of integrated circuit. These were Beam-Lead integrated circuits, invented by M. P. Lepselter, which can be considered a forerunner of Silicon-on-Sapphire (SOS) and Semiconductor-on-Insulator (SOI) integrated circuits. In a Beam-Lead integrated circuit, the interconnects between components were heavy enough to provide structural stability to the circuit, while the silicon not forming part of a component was removed.

Shown above is an image from a Bell Labs advertisement from 1966 of a test strip containing three beam-lead transistors.

Another important computer-related innovation from Bell was their Electronic Switching System. It used a computer designed for extremely high reliability. Programs were stored in a read-only memory occupying a separate address space, making the machine one with a Harvard architecture. The memory used for programs was magnetic in nature, consisting of metal cards which could be removed and written on a separate device, so that the ESS 1 could be re-programmed; it was thus a writeable read-only memory, in the way that an EPROM (Erasable Programmable Read-Only Memory) chip would later be.

I believe I read that a later iteration of the ESS used normal read-write memory for programs, but that this memory was connected to the computer which performed the switching of telephone calls in such a way that it could only read it. A second computer, with its own program memory, was also connected to that memory, and was used when the program for telephone switching needed to be updated or altered, or simply reloaded.

It is my opinion that this design should be taken as a source of inspiration these days, given all the problems we have with computer viruses and other attacks on computers connected to the Internet.

The Monolithic Integrated Circuit Story

Although this page is titled "The Third Generation", so far it has dealt with the System/360 computer, which indeed did exclusively use integrated circuits... of a sort... and the consequences of its introduction. The Control Data 6600 and the PDP-6, as well as the original KA-10 version of the PDP-10, mentioned above, were made with discrete transistors.

As noted, since IBM was the biggest computer company, it had the economies of scale that made its Solid Logic Technology a feasible option: IBM was able to produce its own integrated circuits before monolithic integrated circuits were available at prices suitable for use in civilian computers. Solid Logic Technology involved the automated manufacture of circuits in which silicon dies containing individual transistors were placed on a tiny printed circuit on an alumina substrate.

In conventional transistor circuitry, each transistor was in its own package, such as a TO-5 can; in SLT, the bare dies were placed on the circuit, and the whole assembly was then passivated and packaged as a unit. This is why SLT is legitimately regarded as a form of integrated circuit: even though it was only a halfway step between conventional transistors and monolithic integrated circuits, a big gain in compactness was obtained by putting multiple logic gates in a single package, and the further gains offered by monolithic integrated circuits were only a fraction of the total difference between them and discrete transistors.

Let's look at the story of integrated circuits in the rest of the computer industry now.

Sometimes, you may read that the integrated circuit was a result of the Apollo space program. The real story, though, is a bit more complicated than that; the Apollo program did play a genuine role, but so did the requirements of the U.S. military. Those may be a bit less glamorous, and also less comfortable or more controversial, but to gloss over this would be inaccurate.

The Minuteman II missile included a guidance computer that used integrated circuits, and the orders for integrated circuits that this engendered enabled the microelectronics industry to take its first steps.

But even after those orders were filled, Fairchild and the other early makers of integrated circuits still weren't ready to take on the commercial market, as their circuits were still too expensive for that.

The U.S. Air Force made more early orders for integrated circuits, for example, for computers to be used in military aircraft.

For several years, however, NASA did purchase more integrated circuits than the entire U.S. Air Force, so the role of the space program was also significant, and that included Apollo itself as well as space missions that preceded it.

It was only after the integrated circuit industry had had the military and NASA as its main customers for several years that it was finally producing integrated circuits at a low enough cost that private industry could consider making use of them.

And several types of monolithic integrated circuits were developed as the technology gradually improved. The first ones from Fairchild employed RTL, resistor-transistor logic; initially, their Micrologic line consisted of four types of integrated circuit.

The image at right, from an advertising brochure for Fairchild's Epitaxial Micrologic, shows the eight types of integrated circuit in that somewhat later, improved form of their early RTL line. The innovation here was that instead of the components being fabricated directly on an N-type substrate, the substrate was P-type, and an N-type well was first fabricated on the substrate, with the components then fabricated in that well. This provided better electrical isolation.

The dies in the image are labelled with their type letters.

The first integrated circuit introduced was the type F flip-flop; types G and S appear to have been added next, and then several months later, types B, H, and C were added to the available product line.

Epitaxial Micrologic added types G1 and D.

Texas Instruments was the supplier of the chips used in the guidance computer of the Minuteman II missile; these chips were also RTL chips.

At Texas Instruments, the DCTL Series 51 chips were developed to meet the requirements of the Optical Aspect Computer used in the first Interplanetary Monitoring Platform satellite, launched in 1963.

Delco was the company that the Air Force selected to build the Magic series of computers used in military aircraft. The initial model of computer in that series used Fairchild Micrologic integrated circuits.

Both versions of the Apollo Guidance Computer, Block I and Block II, used for unmanned and manned Apollo flights respectively, used RTL (Resistor-Transistor Logic) made by Fairchild.

Of course, though, the logic family most identified with the era of SSI (Small-Scale Integration) was TTL (Transistor-Transistor Logic). Its best-known representative was the 7400 series of integrated circuits, introduced by Texas Instruments and second-sourced by National Semiconductor and many others. The first line of TTL integrated circuits offered for sale, however, was called SUHL, for Sylvania Universal High-Level Logic, and, as the name says, it came from Sylvania.


Other Third-Generation Computers

And so I finally get around to mentioning third-generation computers other than the IBM System/360. Most of them will be minicomputers, although I talk about the phenomenon of the minicomputer (which had its beginnings during the era of the discrete transistor) on the next page.

As an illustration of the change wrought by integrated circuits, an image of the Varian DATA/620 computer, made with discrete transistors, from one advertisement, is shown on the left, and an image of the Varian 620/i, now made with integrated circuits, from an advertisement from April, 1967, is shown on the right. The width of both computers is the same, as the 620/i, like most minicomputers, is designed to fit in standard 19-inch rackmount enclosures.

This advertisement shows the 620/i being held in one hand to emphasize its new smaller size.

Incidentally, the Varian 620 and 620/i computers had 16-bit instructions, but could optionally have 18-bit wide memory and an 18-bit ALU instead of 16-bit wide memory and a 16-bit ALU. This option was dropped for the later 620/L and 620/f models, which were only available in the 16-bit configuration.

A modified form of this idea resurfaced many years later, in the General Instrument CP1600 microprocessor. This microprocessor used instructions that were 10 bits wide, but it dealt with data that was 8 bits wide or 16 bits wide.

The chip could be configured with memory that was 16 bits wide, in which case data would be fetched 16 bits at a time, and a 16-bit word would be used to contain each instruction, with the high six bits unused. Or it could be configured with memory that was 10 bits wide; in that case, data would be fetched 8 bits at a time, and memory words containing data would have their high two bits unused.
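
Put as arithmetic (my own restatement of the trade-off just described):

    def wasted_bits(memory_width, item_width):
        """Bits left unused when one item is stored per memory word."""
        return memory_width - item_width

    print(wasted_bits(16, 10))   # 16-bit memory, 10-bit instruction: 6 wasted
    print(wasted_bits(10, 8))    # 10-bit memory, 8-bit data item: 2 wasted
    print(wasted_bits(16, 16))   # 16-bit memory, 16-bit data: none wasted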


The RCA Spectra 70 series, announced in 1965 and first delivered in 1966, consisted of computers partially compatible with the IBM System/360. It made use of monolithic integrated circuits in the models 70/45 and 70/55, while the smaller-scale 70/15 and 70/25 in the initial line-up were made with discrete transistors.


The SDS 92 computer, shown at right, was first delivered during 1965, and was advertised as "the first commercial computer to make extensive use of monolithic integrated circuits"; it was also noted that "all the flip-flops in the computer were integrated"; thus, presumably, it contained some discrete transistors as well.

This computer had a 12-bit word; instructions were both 12 bits and 24 bits in length. Its instruction set looked much more like that of an 8-bit microprocessor than like the extremely simplified instruction set of the PDP-8: the instructions that accessed memory were normally two 12-bit words long and contained 15-bit addresses, although there was also an addressing mode by which instructions one word long could access memory locations indirectly.
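
As a sketch of what such a format implies: two 12-bit words give 24 bits, and a 15-bit address leaves 9 bits for the opcode and addressing mode. The actual SDS 92 bit assignments are not given here, so the field layout below is purely hypothetical:

    # Hypothetical layout only: joins two 12-bit words into a 24-bit
    # instruction with a 15-bit address field and 9 bits left over.
    def decode_two_word(word0: int, word1: int):
        instruction = (word0 << 12) | word1     # two 12-bit words, 24 bits
        opcode = instruction >> 16              # assumed 8-bit opcode
        indirect = (instruction >> 15) & 1      # assumed mode bit
        address = instruction & 0x7FFF          # the 15-bit address
        return opcode, indirect, address

    print(decode_two_word(0o1234, 0o5670))      # (41, 1, 19384)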


And then the 16-bit SEL 810A computer, pictured at left, came along, advertised as "the first computer to use monolithic integrated circuits throughout". The race was close enough, however, that there are other computers that might actually be candidates for that title as well.

Systems Engineering Laboratories was a company based in Florida, and a number of their sales were made to NASA for the space program; their computers were used, for example, in flight simulator training systems.

Of course, when integrated circuits became a better option than discrete transistors, pretty well all the computer companies quickly switched over. Thus, while the original PDP-8 and the PDP-8/S minicomputers were made with discrete transistors, the PDP-8/I and all subsequent models were made with integrated circuits, for example.



