The History of Computing

Charles Edge

Computers touch almost every aspect of our lives today. We take for granted the way they work, and the unsung heroes who built the technology, protocols, philosophies, and circuit boards, patched them all together, and sometimes willed amazingness out of nothing. Not in this podcast. Welcome to the History of Computing. Let's get our nerd on!

Top 10 The History of Computing Episodes

Goodpods has curated a list of the 10 best The History of Computing episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to The History of Computing for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite The History of Computing episode by adding your comments to the episode page.

One History Of 3D Printing

05/03/23 • 30 min

One of the hardest parts of telling any history is deciding which innovations are significant enough to warrant mention. Too many, and the history is so vast that it can't be told. Too few, and it's incomplete. Arguably, no history is ever complete. Yet there's a critical path of innovation to get where we are today, and hundreds of smaller innovations that get missed along the way, or are out of scope for this exact story.

Children have probably been placing sand into buckets to make sandcastles since the beginning of time. Bricks have survived from around 7500 BCE in modern-day Turkey, where humans made molds that allowed clay to dry and bake in the sun until it formed bricks. Bricks that could be stacked. And it wasn't long before molds were used for more. Now we can just print a mold on a 3D printer. A mold is simply a block with a hollow cavity that allows putting some material in there; people then allow it to set and pull out a shape. Humanity has known how to do this for more than 6,000 years, initially with lost-wax casting, with statues surviving from the Indus Valley Civilization, which stretched between parts of modern-day Pakistan and India. That evolved to allow casting in gold and silver and copper, and then flourished in the Bronze Age, when stone molds were used to cast axes around 3,000 BCE. The Egyptians used plaster to cast molds of the heads of rulers. So molds, and then casting, were known throughout the time of the earliest written works, and so from the beginning of civilization.

The next few thousand years saw humanity learn to pack more into those molds, to replace objects from nature with those we made synthetically, and ultimately molding and casting did their part on the path to industrialization. As we came out of the industrial revolution, the impact of all these technologies gave us more and more options, both in terms of free time for humans to think and in new modes of thinking. And so in 1868 John Wesley Hyatt invented injection molding, patenting the machine in 1872. And we were able to mass produce not just with metal and glass and clay but with synthetics. More options came, but that whole idea of a mold, to avoid manual carving and to produce replicas, stretched back far into the history of humanity.

So here we are on the precipice of yet another world-changing technology becoming ubiquitous. And yet not. 3D printing still feels like a hobbyist's journey rather than a mature technology like we see in science fiction shows like Star Trek, with its replicators, or printing a gun in the Netflix show Lost In Space. In fact, the initial idea of 3D printing came from a story called "Things Pass By," written all the way back in 1945!

I have a love-hate relationship with 3D printing. Some jobs just work out great. Others feel very much like personal computers in the hobbyist era - just hacking away until things work. It’s usually my fault when things go awry. Just as it was when I wanted to print things out on the dot matrix printer on the Apple II. Maybe I fed the paper crooked or didn’t check that there was ink first or sent the print job using the wrong driver. One of the many things that could go wrong.

But those fast prints don't match the reality of leveling and cleaning nozzles and waiting for them to heat up and pulling filament out of weird places (how did it get there, exactly?). Or printing 10 add-ons for a printer to make it work the way it probably should have out of the box.

Another area where 3D printing is similar to the early days of the personal computer revolution is that there are a few different types of technology in use today. These include color-jet printing (CJP), direct metal printing (DMP), fused deposition modeling (FDM), laser additive manufacturing (LAM), multi-jet printing (MJP), stereolithography (SLA), selective laser melting (SLM), and selective laser sintering (SLS). Each can be better for a given type of print job. Some forms have flourished while others are either in their infancy or have been abandoned, like extinct languages.

Language isolates are languages that don't fit into other families. Many are the last in a branch of a larger language family tree. Others come out of geographically isolated groups. Technology also has isolates. Konrad Zuse built computers in Germany before and after World War II that aren't considered to have influenced other computers. In other words, every technology seems to have a couple of false starts. Hideo Kodama filed the first patent for 3D printing in 1980, but his method of using UV light to harden material was never commercialized.

Another type of 3D printing involved inkjet printers that shot metal alloys onto surfaces. Inkjet printing was invented by Ichiro Endo at Canon in the 1970s, supposedly when he left a hot iron on a pen and ink bubbled out. Thus the "Bubble Jet" printer. And John Vaught at HP was working on the same idea at about the same ti...

The Evolution of Fonts on Computers

04/10/23 • 20 min

Gutenberg shipped the first working printing press around 1450, and the typeface was born. Before then most books were handwritten, often in blackletter calligraphy. And they were expensive. The next few decades saw Nicolas Jenson develop the Roman typeface, and Aldus Manutius and Francesco Griffo create the first italic typeface. This represented a period where people were experimenting with making type that would save space.

The 1700s saw the start of a focus on readability. William Caslon created the Old Style typeface in 1734. John Baskerville developed Transitional typefaces in 1757. And Firmin Didot and Giambattista Bodoni created two typefaces that would become the Modern family of serifs. Then slab serif, which we now call Antique, came in 1815, ushering in an era of experimenting with using type for larger formats, suitable for advertisements in various printed materials. These were necessary as more presses were printing more books, and made possible by new levels of precision in metal casting.

People started experimenting with various forms of typewriters in the mid-1860s and by the 1920s we got Frederic Goudy, the first real full-time type designer. Before him, it was part of a job. After him, it was a job. And we still use some of the typefaces he crafted, like Copperplate Gothic. And we saw an explosion of new fonts like Times New Roman in 1931.

At the time, most typewriters used typefaces on the end of a metal shaft. Hit a key, and the shaft hammers onto a strip of ink and leaves a letter on the page. Kerning, or the space between characters, and letter placement were often there to reduce the chance that those metal hammers jammed. And replacing a font would have meant replacing tons of precision parts. Then came the IBM Selectric typewriter in 1961. Here we saw precision parts that put all those letters on a ball. Hit a key, and the ball rotates and presses the ink onto the paper. And the ball could be replaced. A single document could now have multiple fonts without a ton of work.

Xerox exploded in those same years with the Xerox 914, one of the most successful products of all time. Now we could type amazing documents with multiple fonts in the same document quickly - and photocopy them. And some of the numbers on those fancy documents were being spat out by those fancy computers, with their tubes. But as computers became transistorized heading into the 60s, it was only a matter of time before we put fonts on computer screens.

Here, we initially used bitmaps to render letters onto a screen. By bitmap we mean that a series, or array, of pixels on a screen is a map of bits defining where each dot should be displayed. We used to call these raster fonts, but the drawback was that to make characters bigger, we needed a whole new map of bits. To go to a bigger screen, we probably needed a whole new map of bits. As people thought about things like bold, underline, and italics - guess what, also a new file. But through the 50s, transistor counts weren't nearly high enough to do something different than bitmaps; they rendered very quickly and, you know, displays weren't very high quality, so who could tell the difference anyway?
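
To make that concrete, here's a minimal sketch of a bitmap glyph, written in Pascal (the language covered in a later episode on this page, chosen just to keep all the examples here in one language; it should compile with Free Pascal). The 5x7 letter shape is made up for illustration; the point is that the glyph is nothing but a fixed grid of bits, so a bigger, bolder, or italic character means authoring an entirely new grid.

  program BitmapGlyph;
  { A made-up 5x7 bitmap for the letter 'A': each row is a map of bits. }
  { Scaling it up, or styling it bold, would require a whole new array. }
  const
    Rows = 7;
    GlyphA: array[1..Rows] of string[5] = (
      '.###.',
      '#...#',
      '#...#',
      '#####',
      '#...#',
      '#...#',
      '#...#');
  var
    r, c: Integer;
  begin
    for r := 1 to Rows do
    begin
      for c := 1 to 5 do
        if GlyphA[r][c] = '#' then
          write('#')
        else
          write(' ');
      writeln;
    end;
  end.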

Whirlwind was the first computer to project real-time graphics on the screen and the characters were simple blocky letters. But as the resolution of screens and the speed of interactivity increased, so did what was possible with drawing glyphs on screens.

Rudolf Hell was a German engineer experimenting with using cathode ray tubes to project an image onto photosensitive paper, and thus print using a CRT. He designed a simple font, loosely based on Neuzeit Book, called Digital Grotesk in 1968. It looked good on the CRT and on the paper, and that font went on to be used to digitize typesetting.

And we quickly realized bitmaps weren't efficient for drawing fonts to the screen, and by 1974 moved to outline, or vector, fonts. Here a Bézier curve was drawn onto the screen using an algorithm that created the character, or glyph, as an outline and then filled in the space between. These took up less memory and so drew on the screen faster. Those could be defined in an operating system, and were used not only to draw characters but also by some game designers to draw entire screens of information, by defining a character as a block and so taking up less memory to do graphics.
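
As a rough sketch of that idea (again in Pascal, with hypothetical control points rather than any real glyph data), here's how a single quadratic Bézier segment can be evaluated from the formula B(t) = (1-t)^2*p0 + 2(1-t)t*p1 + t^2*p2. Because the outline is just a handful of points, scaling the glyph means scaling the points; no new bitmap is needed, and a rasterizer then fills the space between the outlines.

  program BezierSketch;
  type
    TPoint2D = record
      x, y: Double;
    end;

  { Evaluate a quadratic Bezier segment at parameter t in [0, 1]. }
  procedure Bezier2(const p0, p1, p2: TPoint2D; t: Double; var p: TPoint2D);
  var
    u: Double;
  begin
    u := 1.0 - t;
    p.x := u * u * p0.x + 2.0 * u * t * p1.x + t * t * p2.x;
    p.y := u * u * p0.y + 2.0 * u * t * p1.y + t * t * p2.y;
  end;

  var
    a, b, c, pt: TPoint2D;
    i: Integer;
  begin
    { Hypothetical control points for one curve of a glyph outline. }
    a.x := 0.0;  a.y := 0.0;
    b.x := 5.0;  b.y := 10.0;
    c.x := 10.0; c.y := 0.0;
    { Sample eleven points along the curve. }
    for i := 0 to 10 do
    begin
      Bezier2(a, b, c, i / 10.0, pt);
      writeln(pt.x:6:2, ' ', pt.y:6:2);
    end;
  end.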

These were scalable, and by 1979 another German, Peter Karow, used spline algorithms to write Ikarus, software that allowed a person to draw a shape on a screen and rasterize it. Now we could graphically create fonts that were scalable.

In the meantime, the team at Xerox PARC had been experimenting with different ways to send pages of content to the first laser printers. Bob Sproull and Bill Newman created the Press format for the Star. But this wasn’t incredibly flexible like what Karow would create. John Gaffney who was working with Ivan Sutherland ...

The Story of Intel

03/07/23 • 16 min

We've talked about the history of microchips and transistors, and about other chip makers. Today we're going to talk about Intel in a little more detail.

Intel is short for Integrated Electronics. They were founded in 1968 by Robert Noyce and Gordon Moore. Noyce was an Iowa kid who went off to MIT to get a PhD in physics in 1953. He then joined the Shockley Semiconductor Laboratory to work with William Shockley, who'd developed the transistor as a means of bringing a solid-state alternative to vacuum tubes to computers and amplifiers.

Shockley became erratic after he won the Nobel Prize, and 8 of the researchers left, now known as the "traitorous eight." Between them they would go on to spawn over 60 companies, including Intel. But first they created a new company called Fairchild Semiconductor, where in 1959 Noyce invented the monolithic integrated circuit, a single chip that contains multiple transistors.

After 10 years at Fairchild, Noyce joined up with coworker and fellow traitor Gordon Moore. Moore had gotten his PhD in chemistry from Caltech and had made an observation while at Fairchild that the number of transistors, resistors, diodes, or capacitors in an integrated circuit was doubling every year, and so coined Moore's Law: that it would continue to do so. They wanted to make semiconductor memory cheaper and more practical.

They needed money to continue their research. Arthur Rock had helped them find a home at Fairchild when they left Shockley and helped them raise $2.5 million in backing in a couple of days.

The first day of the company, Andy Grove joined them from Fairchild. He'd fled the Hungarian revolution in the 50s and gotten a PhD in chemical engineering at the University of California, Berkeley. Then came Leslie Vadász, another Hungarian emigrant. Funding and money coming in from sales allowed them to hire some of the best in the business. People like Ted Hoff, Federico Faggin, and Stan Mazor.

That first year they released 64-bit static random-access memory in the 3101 chip, doubling what was on the market, as well as the 3301 read-only memory chip and the 1101. Then came DRAM, or dynamic random-access memory, in the 1103 in 1970, which became the best-selling chip within the first couple of years.

Armed with a lineup of chips and an explosion of companies that wanted to buy the chips, they went public within 2 years of being founded. 1971 saw Dov Frohman develop erasable programmable read-only memory, or EPROM, while working on a different problem. This meant they could reprogram chips using ultraviolet light and electricity.

In 1971 they also created the Intel 4004 chip, which was started in 1969 when a calculator manufacturer out of Japan asked them to develop 12 different chips. Instead they made one that could do all of the tasks of the 12, outperforming the ENIAC from 1946, and so the era of the microprocessor was born. And instead of taking up a basement at a university lab, it took up an eighth of an inch by a sixth of an inch to hold a whopping 2,300 transistors. The chip didn't contribute a ton to the bottom line of the company, but they'd built the first true microprocessor, which would eventually be what they were known for.

Instead, they were making DRAM chips. But then came the 8008 in 1972, ushering in the 8-bit CPU. Intel's memory chips were being used by other companies developing their own processors, but Intel knew how to build processors too, and Computer Terminal Corporation was looking to develop what was, for a hot minute, a trend: programmable terminals. And given the doubling of speeds, those gave way to microcomputers within just a few years.

The Intel 8080 was a 2 MHz chip that became the basis of the Altair 8800, SOL-20, and IMSAI 8080. By then Motorola, Zilog, and MOS Technology were hot on their heels, releasing the Z80 and 6802 processors. But Gary Kildall wrote CP/M, one of the first microcomputer operating systems, initially for the 8080 prior to porting it to other chips.

Sales had been good and Intel had been growing. By 1979 they saw the future was in chips and opened a new office in Haifa, Israel, where they designed the 8088, which clocked in at 4.77 MHz. IBM chose this chip to be used in the original IBM Personal Computer. IBM was going to use an 8-bit chip, but the team at Microsoft talked them into going with the 16-bit 8088 and thus created the foundation of what would become the Wintel or Intel architecture, or x86, which would dominate the personal computer market for the next 40 years.

One reason IBM trusted Intel is that they had proven to be innovators. They had effectively invented the integrated circuit, then the microprocessor, then coined Moore’s Law, and by 1980 had built a 15,000 person company capable of shipping product in large quantities. They were intentional about culture, looking for openness, distributed decision making, and trading off bureaucracy for figuring out cool stuff.

That IBM decisi...

The Silk Roads: Then And Now...

10/28/22 • 10 min

The Silk Road, or roads more appropriately, has been in use for thousands of years. Horses, jade, gold, and of course silk flowed across the trade routes. As did spices - and knowledge. The term Silk Road was coined by a German geographer named Ferdinand von Richthofen in 1870 to describe a network of routes that was somewhat formalized in the second century and that some theorize dates back 3,000 years, given that silk has been found on Egyptian mummies from that time - or further. The use of silk itself in China in fact dates back perhaps 8,500 years.

Chinese silk has been found in Scythian graves, ancient Germanic graves, and along the mountain ranges and waterways around modern India, where gold and silk flowed between east and west. These gave way to empires along the Carpathian Mountains or Kansu Corridor. There were Assyrian outposts in modern Iran, and the Sogdians built cities around modern Samarkand in Uzbekistan, an area that has been inhabited since the 4th millennium BCE. The Sogdians developed trading networks that spanned over 1,500 miles - into ancient China. The road expanded with the Persian Royal Road from the 5th century BCE across Turkey, and with the conquests of Alexander the Great in the 300s BCE, the Macedonian Empire pushed into Central Asia and modern Uzbekistan. The satrap Diodotus I claimed independence of one of those areas between the Hindu Kush, Pamirs, and Tengri Tagh mountains, which became known by the Hellenized name Bactria and is called the Greco-Bactrian and then Indo-Greek Kingdoms by history. Their culture also dates back thousands of years further.

The Bactrians became powerful enough to push into the Indus Valley, west along the Caspian Sea, and north to the Syr Darya river, known as the Jaxartes at the time, and to the Aral Sea. They also pushed south into modern Pakistan and Afghanistan, and east to modern Kyrgyzstan. To cross the Silk Road was to cross through Bactria, and they were considered a Greek empire in the east. The Han Chinese called them Daxia in the third century BCE. They grew so wealthy from the trade that they became the target of conquest by neighboring peoples once the thirst for silk could not be quenched in the Roman Empire. The Romans consumed so much silk that silver reserves were worn thin and they regulated how silk could be used - something some of the Muslim rulers would do over the next generations.

Meanwhile, the Chinese hadn't known where their silk was destined, but had been astute enough to limit who knew how silk was produced. The Chinese general Pan Chao, in the first century AD, attempted to make contact with the Romans, only to be thwarted by the Parthians, who acted as the middlemen on many a trade route. It wasn't until the Romans pushed east enough to control the Persian Gulf that an envoy sent by Marcus Aurelius made direct contact with China in 166 AD, and from there contact spread throughout the kingdom. Justinian even sent monks to bring home silkworm eggs, but they were never able to reproduce silk, in part because they didn't have mulberry trees. Yet the west had perpetrated industrial espionage on the east, a practice that would be repeated in 1712 when a Jesuit priest found out how the Chinese created porcelain.

The Silk Road was a place where great fortunes could be found or lost. The Dread Pirate Roberts was a character from a movie called The Princess Bride, who had left home to make his fortune so he could spend his life with his love, Buttercup. The Silk Road had made many a fortune, so Ross Ulbricht used that name on a site he created called the Silk Road, along with the handles Frosty and Altoid. He'd gotten his bachelor's at the University of Texas and a master's at Penn State University before he got the idea to start a website he called the Silk Road in 2011. Most people connected to the site via Tor and paid for items in bitcoin. After he graduated from Penn State, he'd started a couple of companies that didn't do that well. Given the success of Amazon, he and a friend started a site to sell used books, but Ulbricht realized it was more profitable to be the middleman, as the Parthians had been thousands of years earlier. The new site would be Underground Brokers, later changed to The Silk Road. Cryptocurrencies allowed for anonymous transactions. He got some help from others, including two who went by the pseudonyms Smedley (later suspected to be Mike Wattier) and Variety Jones (later suspected to be Thomas Clark).

They started to facilitate transactions in 2011. Business was good almost from the beginning. Then Gawker published an article about the site, and more and more attention was paid to what was sold through this new darknet portal. The United States Department of Justice and other law enforcement agencies got involved. When bitcoins traded at less than $80 each, the United States Drug Enforcement Administration (DEA) seized 11 bitcoins, but couldn't take the site down for good. It was actually an IRS investi...

Origins of the Modern Patent And Copyright Systems

06/07/21 • 17 min

Once upon a time, the right to copy text wasn't really necessary. If one had a book, one could copy the contents of the book by hiring scribes to labor away at the process, and books were expensive. Then came the printing press. Now, the printer of a work would put a book out and another printer could set their press up to reproduce the same text. More people learned to read and information flowed from the presses at the fastest pace in history.

The printing press spread from Gutenberg's workshop in the 1440s throughout Germany and then to the rest of Europe, appearing in England when William Caxton built the first press there in 1476. It was a time of great change, causing England to retreat into protectionism, and Henry VIII tried to restrict what could be printed in the 1500s. But Parliament needed to legislate further.

England was first to establish copyright when Parliament passed the Licensing of the Press Act in 1662, which regulated what could be printed. This was more to prevent printing scandalous materials, and it basically gave a monopoly to The Stationers' Company to register, print, copy, and publish books. They could enter another printer's shop and destroy their presses. That went on for a few decades until the act was allowed to lapse in 1694, but it began the 350-year journey of refining what copyright and censorship mean to a modern society.

The next big step came in England when the Statute of Anne was passed in 1710. It was named for the last reigning Queen of the House of Stuart. While previously a publisher could appeal to have a work censored by others because the publisher had created it, this statute took a page out of the patent laws and granted a right of protection against copying a work for 14 years. Reading through the law and further amendments, it is clear that lawmakers were thinking far more deeply about the balance between protecting the license holder of a work and how to get more books to more people. They'd clearly become less protectionist and more concerned about a literate society.

There are examples throughout history of granting exclusive rights to an invention, from the Greeks to the Romans to Papal Bulls. These granted land titles, various rights, or a status to people. Edward the Confessor started the process of establishing the Close Rolls in England in the 1050s, where a central copy of all those grants was kept. But they could also be used to grant a monopoly, with the first that's been found being granted by Edward III to John Kempe of Flanders as a means of helping the cloth industry in England to flourish.

Still, this wasn't exactly an exclusive right but instead a right to emigrate. And the letters were personal, and so letters patent evolved into royal grants, which Queen Elizabeth was providing in the late 1500s. The need for patent laws had been proven by the Venetians in the late 1400s, when they started granting exclusive rights by law to inventions for 10 years. King Henry II of France established a royal patent system in France, and over time the French Academy of Sciences was put in charge of patent right review.

English law evolved, and perpetual patents granted by monarchs were stifling progress. Monarchs might grant patents to raise money and so allow a specific industry to turn into a monopoly to raise funds for the royal family. James I was forced to revoke the previous patents, but a system was needed. And so the patent system was more formalized, and patents for inventions were limited to 14 years when the Statute of Monopolies was passed in England in 1624. The evolution over the next few decades is when we started seeing drawings added to patent requests, and sometimes even required. We saw forks in industries and so the addition of medical patents, and an explosion in various types of patents requested.

They weren't just in England. The mid-1600s saw the British Colonies issuing their own patents. Patent law was evolving outside of England as well. The French system was becoming larger with more discoveries. By 1729 there were digests of patents being printed in Paris, and we still keep open listings of them so they're easily proven in court. Given the maturation of the Age of Enlightenment, that clashed with the financial protectionism of patent laws, and intellectual property emerged as a concept, borrowing from the patent institutions and bringing us right back to the Statute of Anne, which established the modern copyright system. That and the Statute of Monopolies are where the British Empire established the modern copyright and patent systems respectively, which we use globally today. Apparently they were worth keeping throughout the Age of Revolution, mostly probably because they'd long been removed from monarchal control and handed to various public institutions.

The American Revolution came and went. The French Revolution came and went. The Latin American wars of independence, revolutio...

The History Of Streaming Music

07/17/19 • 10 min

Severe Tire Damage. Like many things we cover in this podcast, the first streaming rock band on the Interwebs came out of Xerox PARC. 1993 was a great year. Mariah Carey released Dreamlover, Janet Jackson released That's The Way Love Goes. Boyz II Men released In The Still of the Nite. OK, so it wasn't that great a year. But Soul Asylum's Runaway Train. That was some pretty good stuff out of Minnesota. But Severe Tire Damage, named after They Might Be Giants, was a much more appropriate salvo into the world of streaming media. The members were from DEC Systems Research Center, Apple, and Xerox PARC. All members at the time and later are pretty notable in their own right and will likely show up here and there on later episodes of this podcast. So they kinda' deserved to use half the bandwidth of the entire internet at the time.

The first big band to stream was the Rolling Stones, the following year. Severe Tire Damage did an opening stream of their own. Because they’re awesome. The Stones called the stunt a “good reminder of the democratic nature of the Internet.” They likely had no clue that the drummer is the father of ubiquitous computing, the third wave of computing. But if they have an Apple Watch, a NEST, use an app to remotely throw treats to their dog, use a phone to buy a plane ticket, or check their Twitter followers 20 times a day, they can probably thank Mark Weiser for his contributions to computing. They can also thank Steve Rubin for his contributions on the 3D engine in the Mac. Or his wife Amy for her bestselling book Impossible Cure.

But back to streaming media. Really, streaming media goes back to George O. Squier getting patents for transmitting music over electrical lines in the 1910s and 1920s. This became Muzak. And for decades, people made fun of elevator music. While he originally meant for the technology to compete with radio, he ended up pivoting in the 30s to providing music to commercial clients. The name Muzak was a mashup of music and Kodak, mostly just for a unique trademark. By the end of the 30s Warner Brothers had acquired Muzak, and then it went private again when George Benton, the chairman and publisher of the Encyclopædia Britannica, pivoted the company into brainwashing for customers, alternating between music and silence in 15-minute intervals and playing soft tones to make people feel more comfortable while waiting for a doctor or standing in an elevator. Makes you wonder what he might have shoved into the Encyclopedia! Especially since he went on to become a senator. At least he led the charge to get rid of McCarthy, who referred to him as "Little Willie Benton." I guess some things never change. Benton passed away in 1973, but you can stream an interview with him from archive.org ( https://archive.org/details/gov.archives.arc.95761 ).

Popularity of Muzak waned over the following decades until they went bankrupt in 2009. After reorganization it was acquired in 2011 and is now Mood Media, which has also gone bankrupt. I guess people want a more democratic form of media these days. I blame the 60s. Not much else happened in streaming until the 1990s. A couple of technologies were maturing at this point to allow for streaming media. The first is the Internet. TCP/IP was standardized in 1982, but public commercial use didn't really kick in until the late 1980s. We'll reserve that story for another episode. The next is MPEG.

MPEG is short for the Moving Picture Experts Group. MPEG is a working group formed specifically to set standards for audio and video compression and the transmission of that audio and video over networks. The first meeting of the group was in 1988. The group defined a standard format for playing media on the Internet, soon to actually be a thing (but not yet). And thus the MPEG format was born. MPEG is now the international standard for encoding and compressing video images. Following the first release they moved quickly. In 1992, the MPEG-1 standard was approved at a meeting in London. This gave us MPEG Layer 3, or MP3, as well as video CDs. At the Porto meeting in 1994, we got the MPEG-2 standard, and thus DVDs and DVD players, as well as the AAC standard, long a standard for iTunes and used for both television and audio encoding. MPEG-4 came in 1999, and the changes began to slow as adoption increased. Today, MPEG-7 and MPEG-21 are under development.

Then came the second wave of media. In 1997, Justin Frankel and Dmitry Boldyrev built WinAmp. A lot of people had a lot of CDs. Some of those people also had WinAmp or other MP3 players and rippers. By 1999 enough steam had been built up that Sean Parker, Shawn Fanning, and John Fanning built a tool called Napster that allowed people to trade those MP3s online. At their height, 80 million people were trading music online. People started buying MP3 players, stereos had MP3 capabilities, ...

The PASCAL Programming Language

07/13/19 • 10 min

PASCAL was designed in 1969 by the Swiss computer scientist Niklaus Wirth and released in 1970, the same year Beneath the Planet of the Apes, Patton, and Love Story were released. The Beatles released Let It Be, Three Dog Night was ruling the airwaves with Mama Told Me Not To Come, and you could buy Pong and Simon Says for home. Wirth had been a PhD student at Berkeley in the early 1960s, at the same time Ken Thompson, co-inventor of Unix and co-author of the Go programming language, was in school there. It's not uncommon for a language to kick around for a decade or more gathering steam, but PASCAL quickly caught on. In 1983, PASCAL got legit and was standardized as ISO 7185. The next year Wirth would win the 1984 Turing Award. Perhaps he listened to When Doves Cry when he heard. Or maybe he watched Beverly Hills Cop, Indiana Jones, Gremlins, Red Dawn, The Karate Kid, Ghostbusters, or Terminator on his flight home. Actually, probably not.

PASCAL is named after Blaise Pascal, the French philosopher and mathematician. As with many programmers, Pascal was lazy: he built the world's first fully functional mechanical calculator because his dad made him do too many calculations to help pay the bills. 400 years later, we still need calculators here and there, to help us with our bills. As with many natural scientists of the time, Blaise Pascal contributed to science and math in a variety of ways:

  • Pascal's law in hydrostatics
  • Pascal's theorem in the emerging field of projective geometry
  • Important work on atmospheric pressure and vacuum, including rediscovering that atmospheric pressure decreases with height
  • A pioneer in the theory of probability
  • While Indian and Chinese mathematicians had been using it for centuries, Pascal popularized Pascal's triangle and was credited with providing Pascal's identity
  • As with many in the 1600s he was deeply religious and dedicated the later part of his life to religious writings, including Pensées, which helped shape the French Classical Period. Perhaps he wrote it while listening to Bonini or watching The History of Sir Francis Drake

The PASCAL programming language was built to teach students to program, but as with many tools students learn on, it grew in popularity as those students graduated from college throughout the 1970s and 1980s. I learned PASCAL in high school computer science in 1992. Yes, Kris Kross was making you Jump and Billy Ray Cyrus was singing Achy Breaky Heart the same year his daughter was born. I learned my first if, then, else, case, and while statements in PASCAL.

PASCAL is a procedural programming language that supports structured data types and structured programming. At the time, I would write programs on notebook paper and type them in the next time I had a chance to play with a computer. I also learned enumerations, pointers, type definitions, and sets. PASCAL also gave me my first exposure to integers, real numbers, chars, and booleans. I can still remember writing the word program at the top of a piece of paper, followed by a word to describe the program I was about to write. Then writing begin and end. Never forgetting the period after the final end, of course. The structures were simple. Instead of echo you would simply use the word write to write text to the screen, followed by hello world in parentheses, wrapped in single quotes. After all, there are special characters to worry about if you use a comma and an exclamation point in hello world.

I also clearly remember wrapping my comments in {} because if you didn't comment what you did, it was assumed you stole your code from Byte magazine. I also remember making my first procedure and how there was a difference between procedures and functions. The code was simple and readable. Later I would use AmigaPascal and hate life.
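
Putting those pieces together, here's roughly what one of those notebook-paper programs looked like - a small sketch in standard Pascal (it compiles with the Free Pascal compiler mentioned below), showing the program header, a comment in curly braces, a little procedure, begin...end blocks, and the period after the final end.

  program Greeting;
  { Comments go in curly braces, so nobody assumes you lifted this from Byte magazine. }

  procedure SayHello(name: string);
  begin
    writeln('Hello, ', name, '!');
  end;

  var
    count: Integer;
  begin
    for count := 1 to 3 do
      SayHello('world');
  end. { Never forget the period after that final end. }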

PASCAL eventually branched out into a number of versions including Visual PASCAL, Instant PASCAL, and Turbo PASCAL. There are still live variants including the freepascal compiler available at freepascal.org. PASCAL was the dominant language used in the early days of both Apple and Microsoft. So much so that most of the original Apple software was written in PASCAL, including Desk Accessories, which would later become Extensions. Perhaps the first awesome computer was the Apple II, where PASCAL was all over the place. Because developers knew PASCAL, it ended up being the main high-level language for the Lisa and then the Mac. In fact, some of the original Mac OS was hand-translated to assembly language from PASCAL. PASCAL wasn’t just for parts of the operating system. It was also used for a number of popular early programs, including Photoshop 1.

PASCAL became object-oriented first with Lisa Pascal and Clascal, then with Object PASCAL in 1985. That year Apple released MacApp, which was an object-oriented API for the classic Mac Operating...

Grace Hopper

07/20/19 • 8 min

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we’re able to be prepared for the innovations of the future! Today’s episode is on one of the finest minds in the history of computing: Grace Brewster Murray Hopper. Rear Admiral Hopper was born on December 9th, 1906 in New York City. She would go on to graduate from Vassar College in 1928, earn a master’s degree at Yale in 1930, and then a PhD from Yale in 1933, teaching at Vassar from 1931 until 1941. And her story might have ended there.

But then World War Two happened. Her great-grandfather was an admiral in the US Navy during the Civil War, and so Grace Hopper would try to enlist. But she was too old and a little too skinny. And she was, well, a she. So instead she went on to join the women’s branch of the United States Naval Reserve called WAVES, or Women Accepted for Volunteer Emergency Service, at the time. She graduated first in her class and was assigned to the Bureau of Ships project at Harvard as a Lieutenant where she was one of the original programmers of the IBM Automatic Sequence Controlled Calculator, better known as the Mark I.

The Mark I did what the analytical engine tried to do but using electromechanical components. Approved by the original IBM CEO Thomas Watson Sr, the project had begun in 1937 and was shipped to Harvard in 1944. If you can imagine, Hopper and the other programmers did conditional branching manually. Computers played a key role in the war effort and Hopper played a key role in the development of those computers. She co-authored three papers on the Mark I during those early days. She also found a moth in the Mark II in 1947, creating a term everyone in software uses today: debugging.

When peace came, she was offered a professorship at Vassar. But she had a much bigger destiny to fulfill. Hopper stayed on at Harvard working on Navy contracts because the Navy didn’t want her yet. Yet. She would leave Harvard to go into the private sector for a bit. At this point she could have ended up with Remington Rand designing electric razors (yes, that Remington), or working on the battery division, which would be sold to Rayovac decades later. But she ended up there as a package deal with the UNIVAC. And her destiny began to unfold.

You see, writing machine code sucks. She wanted to write software, not machine language. She wanted to write code in English that would then run as machine code. This was highly controversial at the time because programmers didn't see the value in allowing what was mainly mathematical notation for data processing to be expressed in a higher-level language, which she proposed would be English statements. She published her first paper on what she called compilers in 1952.

There’s a lot to unpack about what compilers brought to computing. For starters, they opened up programming to people that would otherwise have seen a bunch of mathematical notations and run away. In her words: “I could say "Subtract income tax from pay" instead of trying to write that in octal code or using all kinds of symbols.” This opened the field up to the next generation of programmers. It also had a second consequence: the computer was no longer just there to do math. Because the Mark I had been based on the Analytical Engine, it was considered a huge and amazing calculator. But putting actual English words out there and then compiling (you can’t really call it converting because that’s an oversimplification) those into machine code meant the blinders started to come off and that next generation of programmers started to think of computers as... more.

The detractors had a couple of valid points. This was the early days of processing. The compiler created code that wasn’t as efficient as machine code developed by hand. Especially as there were more and more instructions you could compile. There’s really no way around that. But the detractors might not have realized how much faster processors would get. After all they were computing with gears just a few decades earlier. The compiler also opened up the industry to non-mathematicians. I’m pretty sure an objection was that some day someone would write a fart app. And they did. But Grace Hopper was right, the compiler transformed computing into the industry it is today. We still compile code and without the compiler we wouldn’t be close to having the industry we have today. In 1954 she basically became the first director of software development when she was promoted to the Director of Automatic Programming.

Feeling like an underachiever yet? She was still in the Navy Reserve and in 1957 was promoted to Commander. But she was hard at work at her day job as she and her team at Remington Rand developed a language called FLOW-MATIC, the first natural-language programming language. In 1959, a bunch of computer nerds were assembled in a conference c...

The History of Computer Viruses

07/26/19 • 17 min

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today's episode is not about Fear, Uncertainty, and Death. Instead it's about viruses. As with many innovations in technology, early technology had security vulnerabilities. In fact, we still have them!

Today there are a lot of types of malware. And most gets to devices over the Internet. But we had viruses long before the Internet; in fact, we've had them about as long as we've had computers. The concept of the virus came from a paper published by a Hungarian scientist in 1949 called "Theory of Self-reproducing Automata." The first virus, though, didn't come until 1971 with Creeper. It copied between DEC PDP-10s running TENEX over the ARPANET, the predecessor to the Internet. It didn't hurt anything; it just output a simple little message to the teletype that read "I'm the creeper: catch me if you can." The original was written by Bob Thomas, but it was made self-replicating by Ray Tomlinson, thus basically making him the father of the worm. He also happened to make the first email program. You know that @ symbol in an email address? He put it there. Luckily he didn't make that self-replicating as well.

The first antivirus software was written to, um, to catch Creeper. Also written by Ray Tomlinson in 1972 when his little haxie had gotten a bit out of control. This makes him the father of the worm, creator of the anti-virus industry, and the creator of phishing, I mean, um email. My kinda’ guy.

The first virus to rear its head in the wild came in 1981 when a 15-year-old Mt. Lebanon high school kid named Rich Skrenta wrote Elk Cloner. Rich went on to work at Sun and AOL, create Newhoo (now called the Open Directory Project), and found Blekko, which became part of IBM Watson in 2015 (probably because of the syntax used in searching and indexes). But back to 1982. Because Blade Runner, E.T., and Tron were born that year. As was Elk Cloner, which that snotty little kid Rich wrote to mess with gamers. The virus would attach itself to a game running on version 3.3 of the Apple DOS operating system (the very idea of DOS on an Apple today is kinda' funny) and then activate on the 50th play of the game, displaying a poem about the virus on the screen. Let's look at the Whitman-esque prose:

Elk Cloner: The program with a personality

It will get on all your disks

It will infiltrate your chips

Yes, it's Cloner!

It will stick to you like glue

It will modify RAM too

Send in the Cloner!

This wasn't just a virus. It was a boot sector virus! I guess Apple's MASTER CREATE would then be the first anti-virus software. Maybe Rich sent one to Kurt Angle, Orrin Hatch, Daya, or Mark Cuban. All from Mt. Lebanon. Early viruses were mostly targeted at games and bulletin board systems. Fred Cohen coined the term "computer virus" the next year, in 1983.

The first PC virus came also to DOS, but this time to MS-DOS, in 1986. Ashar, later called Brain, was the brainchild of Basit and Amjad Farooq Alvi, who supposedly were only trying to protect their own medical software from piracy. Back then people didn't pay for a lot of the software they used. As organizations have gotten bigger and software has gotten cheaper, the pirate mentality seems to have subsided a bit. For nearly a decade there was a slow roll of viruses here and there, mainly spread by being promiscuous with how floppy disks were shared. A lot of the viruses were boot sector viruses and a lot of them weren't terribly harmful. After all, if they erased the computer they couldn't spread very far. The virus's message started with "Welcome to the Dungeon." The following year, the poor Alvi brothers realized that if they'd said "Welcome to the Jungle" they'd be rich, but Axl Rose beat them to it. The brothers still run a company called Brain Telecommunication Limited in Pakistan. We'll talk about zombies later. There's an obvious connection here.

Brain was able to spread because people started sharing software over bulletin board systems. This was when trojan horses, or malware masked as a juicy piece of software or embedded into other software, started to become prolific. Rootkits, or toolkits that an attacker could use to orchestrate various events on the targeted computer, began to get a bit more sophisticated, doing things like phoning home for further instructions. By the late 80s and early 90s, more and more valuable data was being stored on computers, and so lax security created an easy way to get access to that data. Viruses started to go from just being pranks by kids to being something more.

A few people saw the writing on the wall. Bernd Fix wrote a tool to remove a virus in 1987. Andreas Luning and Kai Figge released The Ultimate Virus Killer, an Antivirus for the Atari ST. NOD antivirus was re...

eBay, Pez, and Immigration

10/07/21 • 9 min

We talk about a lot of immigrants in this podcast. There are the Hungarian mathematicians and scientists that helped usher in the nuclear age and were pivotal in the early days of computing. There are the Germans who found a safe haven in the US following World War II. There are a number of Jewish immigrants who fled persecution, like Jack Tramiel - a Holocaust survivor who founded Commodore and later took the helm at Atari. An Wang immigrated from China to attend Harvard and stayed. And the list goes on and on. Georges Doriot, the father of venture capital, came to the US from France in 1899, also to go to Harvard.

We could even go back further and look at great thinkers like Nikola Tesla, who emigrated from the former Austrian Empire. And then there's the fact that many Americans, and most of the greats in computer science, are immigrants if we go a generation or four back.

Pierre Omidyar’s parents were Iranian. They moved to Paris so his mom could get a doctorate in linguistics at the famous Sorbonne. While in Paris, his dad became a surgeon, and they had a son. They didn’t move to the US to flee oppression but found opportunity in the new land, with his dad becoming a urologist at Johns Hopkins.

He learned to program in high school and got paid to do it at a whopping 6 bucks an hour. Omidyar would go on to Tufts, where he wrote shareware to manage memory on a Mac, and then to the University of California, Berkeley, before going to work on the MacDraw team at Apple.

He started a pen-computing company, then a little e-commerce company called eShop, which Microsoft bought. And then he ended up at General Magic in 1994. We did a dedicated episode on them - but supporting developers at a day job let him have a little side hustle building these newish web page things.

In 1995, his girlfriend, who would become his wife, wanted to auction off (and buy) Pez dispensers online. So Omidyar, who'd been experimenting with e-commerce since eShop, built a little auction site. He called it AuctionWeb. But that was a little boring. They lived in the Bay Area around San Francisco, and so he changed it to electronic Bay, or eBay for short. The first sale was a broken laser printer he had lying around, which he originally posted for a dollar and which, after a week, went for $14.83.

The site was hosted out of his house and when people started using the site, he needed to upgrade the plan. It was gonna’ cost 8 times the original $30. So he started to charge a nominal fee to those running auctions. More people continued to sell things and he had to hire his first employee, Chris Agarpao.

Within just a year they were doing millions of dollars of business. And this is when they hired Jeffrey Skoll to be the president of the company. By the end of 1997 they'd already done 2 million auctions and taken $6.7 million in venture capital from Benchmark Capital. More people, more weird stuff. But no guns, drugs, booze, Nazi paraphernalia, or legal documents. And nothing that was against the law.

They were growing fast and by 1998 brought in veteran executive Meg Whitman to be the CEO. She had been a VP of strategy at Disney, then the CEO of FTD, then a GM for Playskool before that. By then, eBay was making $4.7 million a year with 30 employees.

Then came Beanie Babies. And excellent management. They perfected the online auction model, with new vendors coming into their space all the time, but never managing to unseat the giant.

Over the years they made onboarding fast and secure. It took minutes to be able to sell, and the sellers are where the money is made, with a transaction fee charged per sale in addition to a nominal percentage of the transaction. Executives flowed in from Disney, Pepsi, GM, and anywhere else they were looking to expand.

Under Whitman’s tenure they weathered the storm of the dot com bubble bursting, grew from 30 to 15,000 employees, took the company to an IPO, bought PayPal, bought StubHub, and scaled the company up to handle over $8 billion in revenue. The IPO made Omidyar a billionaire.

John Donahoe replaced Whitman in 2008 when she decided to make a run at politics, working on Romney's and then McCain's campaigns. She then ran for governor of California and lost. She came back to the corporate world, taking on the CEO position at Hewlett-Packard.

Under Donahoe they bought Skype, then sold it off. They bought part of Craigslist, then tried to develop a competing product. And finally they sold off PayPal, which is now a public entity in its own right.

Over the years since, revenues have gone up and down, sometimes due to selling off companies, as they did with PayPal and later with StubHub in 2019. They now sit at nearly $11 billion in revenues, over 13,000 employees, and are a mature business. There are still over 300,000 listings for Beanie Babies. And as to the original inspiration, over 50,000 listings for the word Pe...

bookmark
plus icon
share episode

Show more best episodes

Toggle view more icon

FAQ

How many episodes does The History of Computing have?

The History of Computing currently has 211 episodes available.

What topics does The History of Computing cover?

The podcast is about Podcasts, Technology, History, Microsoft, Linux and Apple.

What is the most popular episode on The History of Computing?

The episode title 'Claude Shannon and the Origins of Information Theory' is the most popular.

What is the average episode length on The History of Computing?

The average episode length on The History of Computing is 16 minutes.

How often are episodes of The History of Computing released?

Episodes of The History of Computing are typically released every 4 days, 4 hours.

When was the first episode of The History of Computing?

The first episode of The History of Computing was released on Jul 7, 2019.
