If you buy a computer that runs Windows, it’s usually called a “PC.” But sometimes a Mac is called a PC, too. What’s going on here, and why is a PC called a “PC” anyway? We’ll explain.
In August 1981, IBM released the IBM Personal Computer in the United States. Since the product’s name was a mouthful, people almost immediately began calling it the “IBM PC” for short.
Within a few years, companies began cloning IBM PCs and creating compatible machines without the IBM label. At that point, the term “PC compatible” or just “PC” became a generic catchall for non-IBM computers that descended from the IBM PC. These PCs could use peripherals and software created for that platform and its clones.
Microsoft Windows originated on the IBM PC platform as a graphical shell that ran on top of MS-DOS, which debuted in 1981 alongside the IBM PC. While Windows briefly appeared on alternative architectures (and currently runs on ARM chips in a special version), it has historically been tied primarily to the PC-compatible platform. So when people say “PC” these days, they usually mean a computer running Windows, although some clarify by saying “Windows PC” instead.
Even so, IBM didn’t pull the term “personal computer” out of thin air, and that’s where things get complicated. Since “PC” is short for “personal computer,” the term can apply to other types of machines as well. “Personal computer” is a broad historical category that predates the IBM PC and can include computers like Macs, Commodores, Ataris, and more.
But wait—what is a personal computer anyway?
What makes a personal computer? It’s a contentious topic among historians. Just what constituted the “first personal computer,” for example, is perpetually up for debate, much like the question of the first video game. That debate stems from the meaning of the term “personal computer” itself, which can vary depending on subjective criteria such as size, price, commercial availability, and more.
Broadly speaking, the concept of a “personal computer” emerged in America in the mid-1970s (although the term itself reportedly originated in 1968, and early contenders shipped in 1971). It typically referred to a new wave of small, microprocessor-based computers that were inexpensive enough to be owned and operated by one person at a time.
Why one person at a time? Because before the personal computer era, most computers were very large and very expensive. To varying degrees, they required a specially trained staff to run, and were generally only owned by large organizations such as governments, corporations, and universities.
To get the most out of these large, expensive computers, engineers invented time-sharing, which allowed several or many people to use a large computer at the same time, sometimes remotely through teletypes. This was sort of an intermediate step between monolithic single-task computers and personal computing since individuals could run personalized computing sessions—but the computer was still not under the user’s ownership or control.
Computers had a lot of potential, and the idea of owning a personal computer you could completely control was exciting to many technically minded people in the 1970s. It spawned groups like the Homebrew Computer Club in California, where members such as Steve Wozniak and Steve Jobs shaped the future of the personal computer industry.
Since the term “personal computer” can apply to any computer that an individual can own or operate, that leaves the door open for non-PCs (in the IBM PC sense) to be called PCs as well.
In particular, when Macs switched from PowerPC to the x86 architecture, more and more people began grouping Macs in with PCs. In the 2000s, PC World Editor-in-Chief Harry McCracken notably decided to cover Macs in a magazine that had historically focused on PC compatibles. “We never went whole-hog on covering Macs in PC World, but stopped pretending they didn’t exist,” he told How-To Geek. Apple noticed, and even mentioned PC World’s reviews in its famous Mac vs. PC ads.
It’s not surprising that Steve Jobs thought this way. In 1976, Jobs helped kick-start the personal computer industry years before IBM released the IBM PC. What’s more, in 1980, Apple ran an ad in The Wall Street Journal written in the voice of Steve Jobs claiming that he and Steve Wozniak invented the personal computer (which, we should note, is not generally regarded as true).
The aforementioned Jobs discussion about iPads and PCs from D5 is also enlightening because there was once a general debate about whether to consider devices such as smartphones and tablets “PCs” (or even computers at all). Some favor excluding them because they typically can’t run arbitrary software without vendor permission, yet they are computing devices owned and operated by a single person. Most industry analyst groups (such as these stats on Statista) don’t group smartphones and tablets with desktop and laptop computers, however, and instead consider them separate market segments.
So what is a PC? It all depends on context. Like many terms, it can mean multiple things depending on how you use it. And in the end, one thing is clear: Language is very complicated.