A Timelapse of the Tech Sector, Part 1: Before the Internet

Updated on August 4, 2021

This version has been updated with more sources and source links, more recent data, minor corrections, and coverage of more ongoing trends.

Introduction

Market commentary usually comes as a “snapshot” in time: a look at where things are right now and some speculation about where they are going. What I’m doing here is a series of snapshots intended to make a long-term timelapse. This is the way that I look at markets. It goes back farther than most research analysts would consider necessary, but I find it helpful to examine the strategies and conditions that made businesses successful in the past. It’s also a useful reminder to calm down and look at the long term.

“Tech” is a broad business category, similar to the way that sci-fi is a modifying description for movies. There can be sci-fi dramas and sci-fi comedy films, but not just sci-fi alone. The sci-fi is meaningless without the story. Tech is the same type of descriptor. Within tech, there is manufacturing, consumer goods, consumer services, and others. But there is still a tech flavor, a reliance on silicon, which forms the backbone for the entire sector (as it traditionally existed).

This discussion is a brief review of how the tech sector has changed over time, with a light touch on the major advancements, in both technology and business, which facilitated these changes. We tend to think of tech as today’s hottest developments, but these developments are built on a long period of incremental changes, pushed by a few major advancements. Today’s old news is yesterday’s high tech, but today’s high tech would not be possible without yesterday’s research. Looking to the future requires an understanding of the past.

It’s not possible (or necessary) to detail every computing innovation from the last 100 years in a short report. Entire books have been written about the smallest of changes. But the evolution of the tech sector is a fascinating warning for anyone betting on the next major opportunity. The first mover in the market is not always the winner, and the largest company is not always the most successful. Even the best innovators can be burned by new technology.

Much of the story about the technology is oversimplified and condensed, but that’s because this is not a tech story. This is a business story. And we’re not talking about stock prices. The purpose here is to take a high-level look at the business history of the tech sector in the broadest and most shallow way possible. What it reveals is a pattern that may be useful for future long-term predictions. It can also translate into a general understanding of how markets can become fragmented with every new innovation, mature over time, and consolidate into a small number of major winners.

To keep consistent themes in digestible pieces, I’m splitting this one into multiple parts.

Technology as a Physical Space

Before the Internet, changes in technology were primarily physical. As time passed, computing devices became smaller, cheaper, and easier to use. A summary of the different computer categories describes this change: from mainframe, to minicomputer, to microcomputer (personal computer), to “pocket computer” (and mobile phone).[1] The company that forced the move to the next market was almost always a startup, but the previous market leaders were not always left behind. Their influence on the research for these new markets helped to set the standard.

It starts with IBM.

The Mainframe Era (1950s to 1980s)

From the early 1900s to the early 1990s, IBM was the premier tech company.[2] It started with paper punch card technology, developed all the way back in the 1880s, which remained a standard for almost 100 years until magnetic tape finally made it obsolete. But these punch card systems were not computers, and they could never become real computers. They were only a preview of something better.

Research supported by the US government through WWII led to the development of the “mainframe” computer. Mainframes were the iconic massive computers that filled an entire room. They still used punch cards to submit programs, but the processing was now done by vacuum tubes. These machines were later replaced by faster ones that used transistors. During this time, computers were operated mostly by trained specialists, and even the programmers rarely saw the actual machine. Computing was expensive and slow, but far better than anything that had come before.

Only a small number of very large companies could build mainframes, and the research needed to produce them was supported by contracts with the government, primarily for the military. IBM did not build the first mainframe, but it dominated this market as it had dominated the market for punch card systems. Through the 1960s and 1970s, the company held a 70% market share.[3] IBM was unique in its ability to secure government contracts for massive research and development projects, and it was one of the few companies large enough to commit resources to building the type of systems the government needed (other major customers included railroads and insurance businesses).

IBM was so influential that it became the target of an antitrust lawsuit that started in 1969 and lasted through the 1970s into the early 1980s. The charge against IBM was that it was using its dominant position in the market for computer hardware (the mainframes) to force customers to buy its computer software (the programs that ran on the mainframes). The government eventually dropped the case, but it revealed an industry of layers:

  • The bottom layer is the physical computer hardware. In the mainframe era, these were the mainframes. IBM controlled this market with a 70% market share.

  • The top layer is the software programs that make the computer useful. In the mainframe era, this layer could be directed by the companies that made the computers. Controlling the hardware platform allowed IBM to set the standards for the software that ran on its systems.

Other companies with fewer resources did not have the ability to compete with IBM’s mainframes, so they didn’t try. Instead, they focused on a newer, smaller type of computer: a minicomputer. A startup company called Digital Equipment Corporation (DEC)—a name that intentionally avoided including the word “computer” to stay away from IBM—began developing and selling minicomputers.

The new, smaller computers (still the size of a refrigerator) were introduced at a time when computer terminals were becoming more common. A larger computer could take care of the processing while end users interacted with a terminal that behaved a lot like today’s personal computers. The main difference was that the large computer did the work and the terminal just asked it for the answers. It was on one of these terminals that Bill Gates famously learned to code.

Connections between these large computers also gradually introduced the concept of the Internet, an idea many had dismissed as impractical until the ARPANET demonstrated it in 1969. Computers were becoming better at talking to one another, but each generation of machines was still vastly different from the last.

From Mainframe, to Mini, to Micro (1970s)

It was the introduction of the integrated circuit that allowed computers to keep getting smaller and more powerful. Using silicon, engineers could print transistors directly onto a single chip rather than wiring together individual components. This was the beginning of Moore’s Law, which says that the number of transistors on an integrated circuit will double about every two years. This is also when the tech sector slowly began to burst into thousands of pieces. Another layer became the focus of the industry.

  • The bottom layer of computer hardware is the components that go inside a computer. These are the integrated circuits.

  • The top layer of computer hardware is the complete, assembled computers. These were the mainframes and minicomputers produced by IBM and DEC.

  • Above the hardware are the software programs that make the computer useful.
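
To put the Moore’s Law arithmetic mentioned above in concrete terms, here is a minimal back-of-the-envelope sketch. It assumes an idealized two-year doubling and a 1971 starting point of roughly 2,300 transistors (the approximate count of Intel’s first microprocessor); real chips do not track the curve exactly.

    # Toy projection of Moore's Law: transistor counts doubling about every two years.
    # The ~2,300-transistor starting point (Intel's first microprocessor, 1971) is used
    # only for illustration; actual products deviate from this idealized curve.

    def projected_transistors(start_count, start_year, year, doubling_years=2.0):
        """Project a transistor count under an idealized doubling rule."""
        doublings = (year - start_year) / doubling_years
        return round(start_count * 2 ** doublings)

    for year in (1971, 1981, 1991, 2001, 2011, 2021):
        print(year, f"{projected_transistors(2_300, 1971, year):,}")

Fifty years of doubling turns a few thousand transistors into tens of billions, which is roughly the scale of today’s largest chips, and it is the compounding force behind each new market described below.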

Texas Instruments, Motorola, and Fairchild Semiconductor were the leaders in integrated circuits, supplying the internal components for both mainframes and the new minicomputers. As a signal of the region’s burgeoning startup culture, the engineering talent within Fairchild Semiconductor drained into Silicon Valley in the form of many new startups. One, Intel, was formed in 1968 by Gordon Moore (known for Moore’s Law) and Robert Noyce (a co-inventor of the integrated circuit).[4] The next year, AMD, which became Intel’s primary competitor, was also founded by a group of colleagues who defected from Fairchild Semiconductor.[5]

Intel started with memory chips, which quickly replaced a bulky mainstay component, magnetic-core memory, with something far smaller. Then, in 1971, the company revealed a new kind of component called the microprocessor: essentially a computer’s entire processor on a single tiny silicon chip. Years later, with Japanese firms eating up the memory business, Intel would abandon memory altogether and bet the company on microprocessors.

In hindsight, the microprocessor made personal computers virtually inevitable. With a microprocessor, a minicomputer could become dramatically cheaper, and a computer the size of a modern desktop could be made useful. This microcomputer could become a “personal computer.” It may not have been as powerful, but it was cheaper and smaller, and it opened up the market to amateur programmers.

These more complex computers required more complex programs to make them run properly. The most important program is called the operating system. An operating system links the individual programs to the computer’s hardware, and a modern computer cannot function without one. This is the last layer we need to get the full picture.

  • The bottom layer of computer hardware is the components that go inside a computer. The most important are the microprocessors made by Intel and AMD.

  • The top layer of computer hardware is the complete, assembled computers. These were the mainframes and minicomputers produced by IBM and DEC, but now they also included the microcomputer, also known as the personal computer.

  • The bottom layer of computer software (on top of the physical hardware layers) is the operating system that links the physical computer with computer programs.

  • The top layer of computer software is the programs that make the computer useful.
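
As a toy illustration of how these software layers stack (this is not real operating system code; the class and file names are invented for the example), the sketch below mirrors the list above: the application talks only to the operating system, and only the operating system touches the hardware.

    # Hypothetical, simplified layering: application -> operating system -> hardware.
    # Nothing here is real OS code; it only mirrors the layer structure described above.

    class Hardware:
        """Stand-in for a raw disk: storage blocks with no concept of files."""
        def __init__(self):
            self.blocks = {}

    class OperatingSystem:
        """Bottom software layer: turns raw hardware into friendly operations."""
        def __init__(self, hardware):
            self._hw = hardware

        def write_file(self, name, text):
            self._hw.blocks[name] = text.encode()

        def read_file(self, name):
            return self._hw.blocks[name].decode()

    class SpreadsheetApp:
        """Top software layer: an application that never touches the hardware directly."""
        def __init__(self, operating_system):
            self._os = operating_system

        def save_budget(self):
            self._os.write_file("budget", "Q1 revenue, 100")

    hardware = Hardware()
    system = OperatingSystem(hardware)
    SpreadsheetApp(system).save_budget()
    print(system.read_file("budget"))  # prints: Q1 revenue, 100

Swap in a different application and the operating system does not change; swap in different hardware and the applications do not need to change. That interchangeability is what allowed each layer to become its own market.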

Up to this point, the companies building the computers were also generating almost all of the software to make the hardware useful, including the operating systems. Now they were starting to build computers and allow others to do the programming. The more programmers making useful programs for a computer, the more people would buy it.

The Personal Computer Revolution (1970s to 1990s)

The Altair, developed by a mostly-forgotten company named MITS, was the personal computer that broke open the market. It was released in 1975 and sold to “hobbyists” (a.k.a. nerds) by the thousands. But it shipped with almost no software. This is where Bill Gates arrived on the scene: he and Paul Allen founded Microsoft in 1975 and wrote a version of the BASIC programming language for the Altair that made it far easier for other people to use the computer.[6] Major computer companies of the time saw it as a toy.[7]

Meanwhile, Steve Jobs and Steve Wozniak, the Apple co-founders, began producing their own Apple II computers in 1977.[8] These were designed to be useful to a wider range of people, and the Apple II quickly became a best-seller after VisiCalc, the first spreadsheet program for personal computers, was introduced in 1979. Personal computers found their way into business use.

Apple was quickly overtaken by Commodore, a company that had switched from typewriters and calculators to personal computers. And, for a few years, it looked like the company would overwhelm the market with its affordable computers. Commodore produced a series of famously powerful computers that were also extremely cheap. In 1983, Commodore had about 50% market share in personal computers.

Dozens of other companies began selling their own personal computers, each with a different hardware design, and each with a different set of software. Many of them were vertically integrated, making both the software and the hardware, and they tightly controlled every part of their computer’s production. These included companies that almost no one remembers, such as Wang Laboratories, and names that would seem out of place today, such as Texas Instruments, Xerox, Radio Shack, and Atari. There were also countless tiny computer companies founded by the hobbyists of the 1970s.

All of them were wiped out by IBM’s decision to build a personal computer.

In the personal computer market, IBM did something that no other computer company was doing. IBM, realizing that it was late to the party, desperately outsourced almost all of the components for its personal computer. Instead of using its traditional vertically integrated approach, it built the first IBM personal computer almost entirely out of parts that it could buy off the shelf. Microsoft’s PC-DOS was licensed as the operating system for IBM’s first PC, with Microsoft retaining the right to license its operating systems to other computer manufacturers. (The decision to go with Microsoft, rather than an IBM operating system, may have been influenced by the antitrust suit against IBM’s mainframe business.) An Intel microprocessor was chosen to power the computer.

The outsourcing made an IBM PC cheaper and faster to produce, but it also made the design easy to reverse-engineer for compatible copies. The IBM PC was introduced in 1981. By 1983, the first IBM PC “clone” was already on sale.

Compaq, a company founded by a team of defectors from Texas Instruments, created the first legal IBM PC clone. It was not a direct competitor to the IBM PC. Like DEC in the Mainframe Era (DEC had by now become a giant of the minicomputer market), Compaq was a startup that chose to sidestep the market leader rather than confront it head-on. The first Compaq PC-compatible computer was much smaller, and it was designed to be carried. At 30 pounds, it was nothing like a modern laptop, but it offered something IBM’s desktop models did not: portability.[9] Compaq’s $111 million in sales set a record for the best first year of any American company.

Other PC clones quickly followed, including machines from Dell and HP, and almost all of them used the same components as IBM. By 1990, the IBM PC and its clones held about 84% of the personal computer market. Each layer of the industry settled on its standard.

  • At the bottom layer of computer hardware for personal computers, Intel had about 80% market share for processors in 1990. This rose to about 85% by the year 2000. AMD filled in the rest.

  • The top layer of computer hardware for personal computers was dominated by the PC standard. It went from about 84% market share in 1990 to more than 97% by 2000. Apple filled in the rest.

  • At the bottom layer of computer software, Microsoft matched the PC growth. In 1990, it had an 80% market share for operating systems. By 2000, this had grown to more than 97%.[10]

  • The top layer of computer software transformed in ways that are not part of this story. The short version is that Microsoft, with its Microsoft Office products, also came to be a significant player in this part of the industry.

But the PC standard’s market share was not the same thing as IBM’s market share. IBM’s decision to enter the PC market settled the competition between hardware standards, but IBM did not end up leading the market it had standardized. Computers became generic boxes that all used Microsoft’s operating system (first DOS, then Windows). Microsoft became the platform that linked computer companies with computer users.

In response, the industry consolidated. In 1998, DEC was swallowed by Compaq. Compaq, which found itself in financial trouble, was finally sold to HP in 2002. In 2005, IBM sold its PC division to Lenovo, a Chinese company. Meanwhile, Dell, with a low-cost, direct-to-consumer sales model, steadily rose to the top of the PC market. Today, those three companies, Lenovo, HP, and Dell, make up more than 60% of the PC market (Lenovo, as IBM’s legacy, is the largest), with Apple in a distant 4th place, at about 7%. Apple remains the only vertically integrated personal computer manufacturer.

End of Part 1

As the story continues, the same themes will begin to repeat, and the markets of today look very similar in many ways. The layers of the industry still exist, and how they interact over time is an important piece of evaluating individual investments. The framework is also useful because these layers show up in every industry. Wal-Mart does not make everything that it sells.

Part 1 really is the story of IBM. The most effective way to compete with IBM was to avoid IBM. IBM faced the same kind of antitrust scrutiny that now hangs over many of today’s tech companies. IBM had real market power. It chose the standards that all personal computers (other than Apple) still use today. But, after the Internet, the market moved on. That’s Part 2.

The PC revolution could have easily become an Apple standard or a Commodore standard instead of an IBM standard. And it took a decade (the 80s) for the IBM standard to be confirmed, which is a long time to stay out of a promising new industry. But after that standard was set, the next decade (the 90s) was pretty good too. Not many people who invested in the 90s felt bad about missing the 80s.

Andrew Wagner
Chief Investment Officer
Wagner Road Capital Management


[1] The technical changes and challenges that came with new computing standards are well-documented in two great books by Paul Ceruzzi. A History of Modern Computing covers them in detail, while Computing offers a much shorter summary of these major innovations.

[2] A good description of IBM’s early years can be found in the biography of Thomas J. Watson Sr., covered in one chapter of the book, The Giants of Enterprise, by Richard Tedlow.

[3] From CNET.

[4] Robert Noyce is also profiled in Giants of Enterprise.

[5] Fairchild Semiconductor, ironically, was also formed by a group of defectors. They came from a company called Shockley Semiconductor. Gordon Moore and Robert Noyce were also among the “traitorous eight” that left Shockley Semiconductor in 1957 to form Fairchild Semiconductor. Silicon Valley startup culture has never been big on loyalty.

[6] A good resource on Microsoft’s early years is the book Hard Drive, a biography about Bill Gates by James Wallace & Jim Erickson. It ends abruptly in 1992, so it does not include some of the more intriguing events that come later, but it’s a fascinating profile of (at the time) the world’s most eligible bachelor.

[7] Jeremy Reimer’s 10-part report for Ars Technica includes a much more detailed summary of how the personal computer market developed. This is the source for most of my personal computer market share data.

[8] Steve Jobs by Walter Isaacson provides the definitive story on Apple’s rise.

[9] Other companies developed portable computers before Compaq, but Compaq was the first to successfully copy the IBM standards.

[10] Basically all IBM clones used Microsoft’s operating system.


Marketing Disclosure: Wagner Road Capital Management is a registered investment adviser.  Information presented is for educational purposes only and does not intend to make an offer or solicitation for the sale or purchase of any specific securities, investments, or investment strategies.  Investments involve risk and, unless otherwise stated, are not guaranteed.  Be sure to first consult with a qualified financial adviser and/or tax professional before implementing any strategy discussed herein. Past performance is not indicative of future performance.