Turing Machines and Computability Theory https://linuxhint.com/turing_machines_computability_theory/ Sun, 15 Nov 2020 03:32:12 +0000 https://linuxhint.com/?p=76591

The Turing machine is the central theoretical construct in computer science: an abstract mathematical model of computation. Turing machines help to explain what computation is by demarcating the so-called “computable functions.”

Alan Turing’s early research into logic focused on a famous unsolved problem known as the Entscheidungsproblem. The Entscheidungsproblem (roughly translated from German as “the decision problem”) was posed by the mathematician David Hilbert in 1928. The problem asked whether there was an algorithm that could decide every statement in a formal language.

A formal language is a system of axioms and inference rules, such as those of arithmetic or first-order logic. The axioms can be any symbols, and the inference rules can be any list of rules for manipulating those symbols. “Deciding every statement” meant either outputting whether the statement was true/false or outputting whether the statement was derivable/underivable. Kurt Gödel’s completeness theorem proved that an algorithm deciding validity is equivalent to an effective procedure deciding derivability. Alan Turing’s 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem” proved a negative result: it is impossible to algorithmically decide every statement in a formal system.

Alan Turing

To prove a negative result for the Entscheidungsproblem, Turing needed to formalize the notion of an algorithm. Turing’s formalization was a mathematical model of computing that later became known as the Turing machine. A Turing machine has a finite set of states that the machine can be in. The machine also has an infinitely long tape divided into squares, and on every square there is a symbol drawn from a finite set of symbols. At any moment in the computation, the Turing machine is reading the symbol on one square of the tape. The machine can replace that symbol with another symbol and then move to either the square on the right or the square on the left. The action the Turing machine takes is determined by the state the machine is in together with the symbol it is currently reading. After replacing the symbol and moving to a new square, the Turing machine can transition to a different state. Each state has its own set of rules about how to replace symbols and which direction to move.
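
To make the description concrete, the following is a minimal Python sketch of a Turing machine simulator (a hypothetical illustration, not code from the article): the transition table maps a (state, symbol) pair to a replacement symbol, a head movement, and a next state. The example machine increments a binary number written on the tape with its least significant bit first.

```python
# Minimal Turing machine simulator (illustrative sketch).
# transitions: (state, symbol) -> (symbol to write, move "L"/"R", next state)

def run_turing_machine(tape, transitions, state="start", accept="halt", steps=1000):
    tape = dict(enumerate(tape))      # sparse tape; unwritten squares read as blank "_"
    head = 0
    for _ in range(steps):
        if state == accept:
            break
        symbol = tape.get(head, "_")                      # read the scanned square
        symbol_out, move, state = transitions[(state, symbol)]
        tape[head] = symbol_out                           # write the replacement symbol
        head += 1 if move == "R" else -1                  # move one square right or left
    return "".join(tape[i] for i in sorted(tape))

# Increment a binary number stored least-significant-bit first:
# flip trailing 1s to 0s until a 0 or blank is found, then write 1 and halt.
increment = {
    ("start", "1"): ("0", "R", "start"),
    ("start", "0"): ("1", "R", "halt"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_turing_machine("111", increment))   # "0001": seven plus one is eight (LSB first)
```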

A Rare Physical Implementation of the Turing Machine Design (without an infinite tape)

The canonical formulation of the Turing machine usually uses a binary alphabet of exclusively 0s and 1s. This formulation matches the intuition of modern computer programmers, given that all modern computers use binary. In fact, Turing machines are neutral with respect to the size of the alphabet of symbols. A Turing machine can use any symbols, whether numerals or symbols drawn from any other type of alphabet, such as pictorial symbols or the Latin alphabet. Any Turing machine over any finite alphabet is provably reducible to an equivalent binary Turing machine; for example, each symbol of an eight-symbol alphabet can be encoded as a block of three binary digits.

Turing machines assume that an infinite amount of memory is available. No physically instantiated machine can meet this requirement. A Turing machine also assumes that a potentially infinite amount of time can be spent computing the function. These assumptions were made to generate the most expansive class of possible functions for Turing’s definition of computable functions. Turing’s computable functions are any functions that can be computed by a Turing machine. Many of these computable functions may never be computable by any physically instantiated machine because they require too much time or memory.

The Church-Turing Thesis asserts the equivalence of the effectively computable functions and the functions that can be computed by a Turing machine. This entails that any function not computable by a Turing machine cannot be computed by any other effective method. David Hilbert had expected a positive answer to the Entscheidungsproblem, which would have meant that all such problems are computable. Turing’s result instead led to the discovery of many uncomputable problems.

The most famous uncomputable problem is the Halting Problem. The Halting Problem asks for an algorithm that can, in the general case, decide whether a given computer program with a given input will halt or run forever. While there are specific cases where the Halting Problem can be solved, it cannot be solved for every computer program with every input. This result has had important consequences for computer programming, as programmers need to be aware of the possibility of infinite loops and of the impossibility of detecting all infinite loops before running their programs.
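
The standard argument for this impossibility can be sketched in a few lines of Python (an illustration of the diagonalization idea; the halts function below is a hypothetical oracle that, as the argument shows, cannot actually exist):

```python
# Sketch of the classic diagonalization argument (illustrative only; the point
# is that the assumed decider below cannot exist).

def halts(program, program_input):
    """Hypothetical total decider: True if program(program_input) eventually halts."""
    raise NotImplementedError("no such decider can exist")

def paradox(program):
    """Do the opposite of whatever halts() predicts about program run on itself."""
    if halts(program, program):
        while True:        # predicted to halt, so loop forever instead
            pass
    return "halted"        # predicted to loop forever, so halt instead

# Asking halts(paradox, paradox) leads to a contradiction either way:
# a True answer makes paradox loop forever, a False answer makes it halt.
```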

Another implication of the Turing machine is the possibility of universal Turing machines. Implicit in Turing’s design is the concept of storing the program that modifies the data alongside the data it modifies. This suggested the possibility of general-purpose and reprogrammable computers. Modern computers are typically universal Turing machines in the sense that they can be programmed to run any algorithm. This eliminated the need for different hardware for each potential computer program and introduced the hardware/software distinction.

The Turing machine model directly led to the invention of computers, but it is not the blueprint used to engineer modern computers. The von Neumann architecture used as a blueprint for modern computers uses the stored-program concept implicit in the Turing machine model but differs from the rest of the Turing machine model in several important ways. The biggest differences are that the von Neumann architecture does not use a read-write head moving over a tape and instead includes multiple registers, random access memory, data buses, a small set of basic machine instructions, and the ability to process many bits at once. The von Neumann architecture also explicitly allows for specialized input and output devices such as keyboards and monitors.

The Turing machine model was the first mathematical model of computation, and it led directly to the invention of physical computers. Physical computers have the same capabilities that Turing machines have, up to the limits imposed by finite memory and finite computing time. The Turing formulation still plays a central role in the study of computation, and computer scientists are still actively researching whether specific functions are computable by Turing machines.

History of the Transistor and the Transistor Computer https://linuxhint.com/transistor_computer_history/ Mon, 09 Nov 2020 20:21:23 +0000 https://linuxhint.com/?p=76387 The invention of transistors is one of the most important breakthroughs of the 20th century. In fact, most electronic devices used in day-to-day activities rely on transistors. From the simple calculator to complex alarm systems, this minute electronic component has made major contributions in electronics and electronic communications.

The Dawn of Transistors

Transistors are semiconductor devices that serve two main functions in an electronic circuit: amplification and switching. Before the era of transistors, vacuum tubes were predominantly used as amplifiers and switches during the first half of the twentieth century. However, their high operating voltage, high power consumption, and high heat output made vacuum tubes inefficient and unreliable over time. They were also bulky and fragile because their casings were made of glass. To solve this dilemma, different manufacturers spent years of research looking for a suitable replacement.

At long last, in December of 1947, three physicists from Bell Laboratories successfully invented the first working transistor. John Bardeen, Walter Brattain, and William Shockley spent years of research to finally develop a working point-contact transistor. Shockley further improved the device into the bipolar junction transistor in 1948, the type of transistor that was widely used in the 1950s. Such was the importance of their invention that Bardeen, Brattain, and Shockley were awarded the Nobel Prize in Physics in 1956.

Evolution of Transistors

Much like any other device, transistors have gone through several innovations. In the late 1950s, germanium played a crucial role in the development of transistors. Germanium-based transistors, however, had major drawbacks, including current leakage and intolerance of temperatures greater than 75 °C. Additionally, germanium was rare and expensive. This prompted the researchers at Bell Labs to look for a better alternative.

Gordon Teal is another resounding name in the evolution of transistors. An American engineer at Bell Labs, Teal developed a method to produce pure germanium crystals for use in germanium-based transistors. Teal also experimented with silicon as a possible replacement for germanium. In 1953, he moved back to Texas after being offered the research director position at Texas Instruments (TI).[1] Bringing his experience and knowledge of semiconductor crystals, he continued to work on purified silicon as a replacement for germanium. In April 1954, Teal and his team at TI developed the first silicon transistor, which was announced to the world in May of that year. Because of its superior characteristics, silicon gradually replaced germanium as the semiconductor used for transistors.

With the introduction of silicon transistors, researchers at Bell Labs achieved yet another breakthrough by developing a transistor that could surpass the performance of the bipolar junction transistor. In 1959, Mohamed Atalla and Dawon Kahng invented the metal-oxide-semiconductor field-effect transistor (MOSFET), with lower power consumption and higher density than the bipolar transistor. These valuable characteristics greatly popularized the MOSFET, which has since become the most widely manufactured device in history.[2]

Transforming Computer Technology

The invention of transistors was also revolutionary in the miniaturization of computers. Like earlier electronic devices, the first generation of computers used vacuum tubes as switches and amplifiers. After the advent of transistors, manufacturers also adopted the small device to build smaller, more efficient computers. In the years that followed, vacuum tubes were completely replaced by transistors, giving rise to the second generation of transistor computers.

The first computer to use transistors is believed to be the University of Manchester’s Transistor Computer. The Transistor Computer was built as a prototype, consisting of 92 point-contact transistors and 550 diodes, and became fully operational in 1953. In 1955, the full-sized version of this computer was introduced, with 200 point-contact transistors and 1,300 diodes. Though the majority of the circuitry used transistors, this device was not considered a completely transistorized computer, as vacuum tubes were still used in its clock generator.[3]

In the mid-1950s, similar machines began sprouting up. The University of Manchester’s design was later adopted by Metropolitan-Vickers, who produced seven machines using bipolar junction transistors in 1956. However, the device, called the Metrovick 950, was not commercially available and was only used within the company. Likewise, Bell Labs came up with the TRADIC device in 1954,[4] but like the Transistor Computer, the TRADIC used vacuum tubes for its clock power.

Built for the US Air Force in 1955, the Burroughs Atlas Mod 1-J1 Guidance Computer was the first computer to eliminate vacuum tubes entirely, making it the first fully transistorized computer. MIT also developed its own transistor computer, the TX-0, in 1956. Transistor computers began to emerge in other parts of the world as well. The first device to show up in Asia was Japan’s ETL Mark III, released in 1956. The DRTE Computer, released in 1957, and the Austrian Mailüfterl, released in 1958, were Canada’s and continental Europe’s first transistor computers, respectively. In 1959, Italy also released its first transistor computer, the Olivetti Elea 9003, which was later made available on the private market.[5]

Although transistor computers were emerging globally in the 1950s, they were not commercially available until 1959, when General Electric released the General Electric 210. Subsequently, other manufacturers introduced their own flagship transistor computer models; the IBM 7070 and the RCA 501 were among the first models released.[6] Large-scale computers also followed this trend. The Philco Transac models S-1000 and S-2000 were among the first commercially available large-scale transistorized computers.

The evolution of transistor designs brought about major changes in computer design. The production of transistorized computers increased over time as the technology became commercially available. Eventually, integrated circuits were adopted in the 1960s, giving rise to the third generation of computers.

Small Size, Big Changes

Transistors have been preeminent since their invention over 70 years ago. This technology has propelled the invention and development of many other electronic devices. The humble size of the transistor belies the magnitude of its contribution to technology. The transistor has undeniably changed the face of electronic circuitry and has brought about significant changes in the world, particularly in computer technology.

Sources:

[1] Michael Riordan, “The Lost History of the Transistor”, 30 April 2004, https://spectrum.ieee.org/tech-history/silicon-revolution/the-lost-history-of-the-transistor, Accessed 20 Oct 2020
[2] Wikipedia, “History of the Transistor”, N.d., https://en.wikipedia.org/wiki/History_of_the_transistor, Accessed 20 Oct 2020
[3] Wikipedia, “Transistor Computer”, N.d., https://en.wikipedia.org/wiki/Transistor_computer, Accessed 20 Oct 2020
[4] “The Transistor”, N.d., http://www.historyofcomputercommunications.info/supporting-documents/a.5-the-transistor-1947.html, Accessed 20 Oct 2020
[5] Wikipedia, “Transistor Computer”, N.d., https://en.wikipedia.org/wiki/Transistor_computer, Accessed 20 Oct 2020
[6] “The Transistor”, N.d., http://www.historyofcomputercommunications.info/supporting-documents/a.5-the-transistor-1947.html, Accessed 20 Oct 2020

The Case of Ada Lovelace: Genius or Fraud? https://linuxhint.com/ada_lovelace/ Mon, 09 Nov 2020 09:02:55 +0000 https://linuxhint.com/?p=76365 Ada King, Countess of Lovelace, was a 19th-century English mathematician who is today commonly given the moniker of the world’s first computer programmer. According to some, Ada Lovelace was a mathematical genius who was crucial in the development of the computer and to whom credit has been unjustly denied. According to others, Ada’s contributions have been overstated, and her role is of only minor historical interest.

Ada Byron was born in London on December 10, 1815. She was the daughter of the infamous Lord Byron and the Baroness Annabella Milbanke. The pair was perhaps the most intelligent couple in Europe, yet they were incredibly different in temperament. Lord Byron was one of the greatest poets of the age, while Baroness Milbanke was one of its most prodigious mathematicians. Lord Byron was known for his scandalous, wild exploits, while Baroness Milbanke was austere and religious. The couple split five weeks after Ada’s birth.

Ada spent her childhood undergoing a strict and rigorous educational plan. Her true interest in mathematics seems to have been ignited by a meeting with Charles Babbage. Charles Babbage was the son of a wealthy banker and showed genius in mathematics at an early age. After he graduated from Cambridge, his bright career stalled for many years while he was unfairly denied research positions at several universities. During this period, Babbage lived off his family’s wealth and continued to produce papers on a variety of topics.

Babbage’s interest soon turned to the production of trigonometric and logarithmic table books. These books were enormously valuable, especially to militaries for their use in ship navigation. The tables were produced by assigning the calculations to mathematicians, who wrote the results down in a manuscript that was then reproduced on a printing press. The production of these tables was incredibly laborious and time-consuming, with many opportunities for errors to slip in. Babbage’s focus turned to the design and invention of a mechanical calculator that could use Isaac Newton’s “method of differences” algorithm to automate the work of these mathematicians.
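
As a rough illustration of the idea Babbage wanted to mechanize (a hypothetical Python sketch, with the polynomial f(x) = x² + x + 41 chosen here only as an example), the method of differences lets every new table entry be produced with additions alone once the initial differences are known:

```python
# Illustrative sketch of the method of differences behind the Difference Engine:
# tabulating a polynomial using nothing but repeated additions.

def difference_table(values, order):
    """Return the first entry of each difference column for the given samples."""
    diffs = [values[0]]
    column = list(values)
    for _ in range(order):
        column = [b - a for a, b in zip(column, column[1:])]
        diffs.append(column[0])
    return diffs

def extend_table(diffs, count):
    """Generate further values of the polynomial using additions only."""
    diffs = list(diffs)
    out = []
    for _ in range(count):
        out.append(diffs[0])
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]   # each entry absorbs the difference below it
    return out

samples = [41, 43, 47, 53]                  # f(0)..f(3) for f(x) = x^2 + x + 41
diffs = difference_table(samples, order=2)  # [41, 2, 2]
print(extend_table(diffs, 8))               # [41, 43, 47, 53, 61, 71, 83, 97]
```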

At age 17, Ada travelled from her mother’s country estate to London for her debutante season. At a party thrown by the philosopher and mathematician Charles Babbage on behalf of his 17-year-old son, Ada was introduced to the 41-year-old Babbage through their common interest in mathematics. Babbage showed Ada the prototype of his “Difference Engine” machine. The Difference Engine was intended to be a special-purpose calculator, and its design would later inspire Babbage’s plan for a Turing-complete universal computer.

Following the meeting, Ada kept up a friendship with Babbage while spending the next several years getting married and raising three children. In 1839, Ada wrote to Babbage asking for a recommendation for a tutor in mathematics. Babbage recommended the preeminent logician Augustus De Morgan. De Morgan was a close friend of George Boole, the inventor of Boolean algebra, putting Ada only two degrees of separation from another major figure in the history of computing. De Morgan’s first subject for Ada was calculus, in which she quickly excelled.

Babbage had attempted to secure funding for his idea of a purely mechanical, Turing-complete universal computer but was rebuffed by funding agencies in England. In 1840, Babbage gave a lecture on the idea in Italy. A young engineer named Luigi Menabrea attended the lecture, took notes, and later published them in French. In 1843, Ada decided to translate Menabrea’s paper into English and to incorporate her own notes. Ada spent several months preparing the publication, which is considered her magnum opus.


Ada Lovelace’s Notes

 
Ada’s notes are incredibly thorough and demonstrate excellent technical knowledge. More importantly, Ada gives original insights into many of the most important ideas in computing. Among Ada’s most prescient comments: “the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated.” She also famously makes an important claim about the possibility of artificial intelligence: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform…. Its province is to assist us in making available what we are already acquainted with.” Another original insight found in Ada’s paper is the idea that the Analytical Engine could manipulate more than just numbers, with special reference to musical notes. This idea does not seem to be present in Babbage’s work and is unique to Ada.

After the publication of the notes, Ada proposed to take charge of Babbage’s Analytical Engine project, including securing funding and hiring engineers, while Babbage’s role would be to supervise the technical details. From the record of their correspondence, it seems that Babbage mostly agreed to her terms. This was an unusual decision on Babbage’s part, as he was long noted for his temperamental and domineering character. Ada was herself surprised and wrote, “I have never seen him so agreeable, so reasonable, or in such good spirits!”

The two continued to think up schemes for funding, but Ada had to delay more serious efforts on the project as her health became a problem. Over the next several years, Ada’s health declined precipitously, and she was diagnosed with cancer; today, it is widely believed she suffered from uterine cancer. Ada tried a variety of cures but eventually realized that death was imminent. She called on her friend Charles Dickens to read her a story about death from one of his books. In her final months, Ada asked to be buried next to her late absentee father, which deeply angered her mother and husband. Ada had long been an admirer of her father despite her mother’s attempts to inculcate the opposite.

Ada survived several months longer than expected after falling into serious decline. Nurse Florence Nightingale, another friend, said of her passing on November 27, 1852: “They said she could not possibly have lived so long, were it not for the tremendous vitality of the brain, that would not die.” Ada Lovelace was 36 years old.

Ada’s final wish was to have her correspondence collected and organized. From these writings, Ada appears to have had brilliant and systematic views across a variety of fields of knowledge. In perhaps her most perspicuous moment, she writes in a letter to a friend: “It does not appear to me that cerebral matter need be more unmanageable to mathematicians than sidereal & planetary matter & movements; if they would but inspect it from the right point of view. I hope to bequeath to the generations a Calculus of the Nervous System.” These ideas anticipated similar ideas from George Boole by a decade, and those of many other figures in psychology by much longer.

The provenance of the idea of computation is a complicated and difficult issue. It seems that Alan Turing was not aware of Babbage and Ada’s work on the Analytical Engine in 1936 when he published “On Computable Numbers.” Ada was clearly one of the most brilliant minds in history. Her reflections on information processing and artificial intelligence are completely original and far ahead of her time. The bulk of the credit for designing the blueprints of the Difference Engine and the Analytical Engine belongs to Babbage, but Ada had an important role in clarifying these blueprints. In summary, Ada Lovelace did not invent the computer, but had she not died so young, she might have played a very large role in the construction of the first computer or in the development of the idea of universal computation. In many ways, Ada saw deeper than Babbage into the potential of the Analytical Engine. Had Ada lived longer, she might have made contributions comparable to those of Turing or von Neumann.

The Work of John von Neumann https://linuxhint.com/john_von_neumann/ Mon, 02 Nov 2020 10:15:38 +0000 https://linuxhint.com/?p=75210

John von Neumann

John von Neumann was born in Budapest on December 28, 1903, into a wealthy banking family that had been elevated to the Hungarian nobility. From an early age, he showed great intellect and was labeled a prodigy. By the age of 6, von Neumann could speak Ancient Greek and divide a pair of 8-digit numbers in his head, and by 8, he had learned differential and integral calculus. When von Neumann was 15, his father arranged for Gábor Szegő to serve as his private math tutor. At their first lesson, the famous mathematician Szegő was brought to tears after watching the speed and ability of the young von Neumann. In addition to these incredible feats, von Neumann had a photographic memory and could recite entire novels word-for-word.

Von Neumann completed a two-year certificate in chemistry at the University of Berlin and a PhD in mathematics at Pázmány Péter University. After completing his PhD, von Neumann went to the University of Göttingen to study under David Hilbert, an important mathematician whose work helped lay the groundwork for the computer. Thereafter, von Neumann went to Princeton to accept a lifetime appointment at the Institute for Advanced Study. His office was several doors away from Albert Einstein’s office, and Einstein complained that von Neumann played German march music on his office phonograph too loudly.

While at Princeton, von Neumann was brought in to work on the Manhattan Project. He took many trips to Los Alamos Laboratory to monitor the development of atomic weapons, and he was crucial in many stages of the design and construction of the two nuclear weapons dropped on Japan. He was an eyewitness to the first test of an atomic bomb on July 16, 1945, and he served on the committee tasked with deciding which Japanese cities would be targets for the bomb. For his involvement in the Manhattan Project, von Neumann became perhaps the biggest inspiration for the character Dr. Strangelove in Stanley Kubrick’s eponymous film.

Dr. Strangelove

Around the time during which he worked on the atomic bomb, von Neumann began working on ideas that would form the basis of computer science. Von Neumann had met with Alan Turing years earlier, and reports suggest that von Neumann was influenced by Turing’s paper “On Computable Numbers.” Certainly, due to his prior work with Hilbert, von Neumann was in a great position to recognize the significance of Turing’s work.

In 1945, while in the final stages of his work on the Manhattan Project, von Neumann told friends and colleagues that he was thinking about even more consequential work. While on a train to Los Alamos, von Neumann wrote a document called “First Draft of a Report on the EDVAC”. This 101-page document contains the design of the von Neumann architecture, which has remained the dominant paradigm in computer architecture since its introduction. The von Neumann architecture is typically associated with the stored-program computer concept, but it also includes a 4-part engineered design that differs from other stored-program concepts.

Most importantly, the von Neumann architecture is a stored-program computer. Stored-program computers use one memory unit to store both the computer programs and the data that the computer programs take as input. The stored-program design is typically contrasted with the Harvard architecture, which uses separate memory units to store the computer program and the program’s data.

The idea of a stored-program architecture was tacitly suggested by Turing’s work on universal Turing machines, as these machines are theoretical versions of stored-program computers. However, von Neumann recognized the value of explicitly engineering this property in computers. The alternative methods of programming computers required manually wiring or rewiring the computer’s circuits, a process that was so labor intensive that computers were often built for one function and never reprogrammed. With the new design, computers became easily reprogrammable and able to implement many different programs; however, access controls had to be enabled to prevent certain types of programs such as viruses from reprogramming crucial software like the operating system.

The most well-known design limitation of the von Neumann architecture is called the ‘von Neumann bottleneck’. The von Neumann bottleneck is caused by the stored-program architecture, as the data and program share the same bus to the central processing unit. The transfer of information from memory to CPU is typically much slower than actual processing in the CPU. The von Neumann design increases the amount of information transfer required because both the computer program and the program’s data need to be transferred to the CPU. One of the best methods of ameliorating this problem has been the usage of CPU caches. CPU caches serve as intermediaries between the main memory and the CPU. These CPU caches provide small amounts of quick-to-access memory near the processor core.
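
The effect of a cache can be illustrated with a small Python sketch (hypothetical and not tied to any real processor): a loop that repeatedly touches the same few addresses is served almost entirely from the small fast store rather than from main memory.

```python
# Illustrative sketch of a tiny least-recently-used cache and its hit rate.

def access_pattern(addresses, cache_size=4):
    cache, hits, misses = [], 0, 0
    for addr in addresses:
        if addr in cache:
            hits += 1
            cache.remove(addr)      # re-insert below as most recently used
        else:
            misses += 1             # would require a slow trip to main memory
            if len(cache) == cache_size:
                cache.pop(0)        # evict the least recently used address
        cache.append(addr)
    return hits, misses

# A loop that keeps touching the same four addresses mostly hits the cache:
print(access_pattern([0, 1, 2, 3] * 10))   # (36, 4)
```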

The von Neumann architecture consists of four parts: the control unit, the processing unit (containing the arithmetic logic unit (ALU)), the memory unit, and the input/output mechanisms. The input/output mechanisms include the standard devices associated with computers, such as keyboards as inputs and display screens as outputs. The input mechanisms write to the memory unit, which stores both the computer programs and the programs’ data. The control unit and the processing unit together comprise the central processor. The control unit directs central processing according to the instructions it receives. The processing unit contains an ALU that performs basic arithmetic and bitwise operations on strings of bits. Because the ALU can perform many different operations, it is the job of the control unit to direct the ALU so that it performs the correct operation on the correct string.
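
A minimal Python sketch (a hypothetical toy instruction set, not a real machine) shows how these parts cooperate: instructions and data sit in one shared memory, the control loop fetches and decodes each instruction, and the ALU does the arithmetic.

```python
# Illustrative sketch of the von Neumann cycle on a toy stored-program machine.

def run(memory):
    pc, acc = 0, 0                          # program counter and accumulator register
    while True:
        op, arg = memory[pc]                # fetch and decode (control unit)
        pc += 1
        if op == "LOAD":                    # move data from memory into the register
            acc = memory[arg]
        elif op == "ADD":                   # ALU operation
            acc += memory[arg]
        elif op == "STORE":                 # write the result back to memory
            memory[arg] = acc
        elif op == "PRINT":                 # output mechanism
            print(memory[arg])
        elif op == "HALT":
            return

# Instructions and data share one memory: cells 0-4 hold the program, 5-7 hold data.
memory = {
    0: ("LOAD", 5), 1: ("ADD", 6), 2: ("STORE", 7), 3: ("PRINT", 7), 4: ("HALT", 0),
    5: 2, 6: 40, 7: 0,
}
run(memory)   # prints 42
```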

The von Neumann Architecture

Following its introduction, the von Neumann architecture became the standard computer architecture, and the Harvard architecture was relegated to microcontrollers and signal processing. The von Neumann architecture is still in use today, but newer and more complicated designs inspired by the von Neumann architecture have eclipsed the original architecture in terms of popularity.

Who invented the microprocessor? https://linuxhint.com/who_invented_the_microprocessor/ Wed, 28 Oct 2020 17:53:07 +0000 https://linuxhint.com/?p=74474 The microprocessor is the engine of all modern computers, including desktops, laptops, and smartphones. The microprocessor is the component of a computer that performs all of the functions of the Central Processing Unit (CPU). The microprocessor is one type of integrated circuit. An integrated circuit is a collection of circuits on a silicon chip; a typical integrated circuit might connect billions of transistors in a structured way to form various logic gates and perform different operations.

Microprocessors follow machine instructions, and each instruction involves one of three basic functions. The first is performing various mathematical operations, which is done by the arithmetic logic unit. The second is moving data between memory registers. The third is reading an instruction and jumping to a new instruction if needed.

The history of the invention of the microprocessor is contentious and controversial; the invention of the transistor was the first step. Transistors came into production in 1947, long before microprocessors arrived on the scene. These original transistors were bipolar transistors. Integrated circuits containing multiple bipolar transistors were developed in the 1960s. The 1960s also saw the invention of the metal-oxide-semiconductor (MOS) transistor. These transistors were originally slow, unreliable, and expensive, but rapid innovation made them the best option by the middle of the decade.

In 1967, the D200 computer by Autonetics became the first computer built from MOS transistors. The computer was used for aviation and navigation, and at one point it was even a candidate for use on the space shuttle. This 24-chip MOS implementation set off an arms race: subsequent computer designs competed to bring the D200’s 24-chip requirement down as close to one chip as possible.

Intel engineer Ted Hoff is one of the best candidates for the title of inventor of the microprocessor, and he is usually given credit by historians of technology. Hoff was the 12th employee of Intel, personally headhunted by Intel co-founder Robert Noyce. After signing on, he convinced a Japanese company named BUSICOM to finance a project to build a single chip. He designed the microprocessor that became the Intel 4004 and led the team in charge of building it. His team was made up of Intel employees Federico Faggin, Stanley Mazor, and Masatoshi Shima. Faggin, in particular, is recognized as a crucial collaborator in the early development. After financing the early stages of the project, BUSICOM became increasingly skeptical of the need to fund such a radical design. Intel realized the value of the intellectual property and bought back the rights from BUSICOM.

The Intel 4004 CPU, the world’s first microprocessor

In 1971, Intel produced the 4004, a complete CPU on a single chip, which was marketed as the first microprocessor. The 4004 was a 4-bit microprocessor, only allowing for data words that were 4 bits wide. The 4004 itself was used in very few commercial applications because it was outpaced by superior microprocessor designs within months of its release; its known use cases include a pinball machine and a word processor. Hoff was honored in 2010 by US President Barack Obama with the National Medal of Technology and Innovation for his efforts.

Ted Hoff with Stanley Mazor and Federico Faggin, awarded the National Medal of Technology and Innovation for their work on the Intel 4004

Following the 4-bit design, 8-bit microprocessors soon became the standard for all computing. In 1970, Intel was hired by Computer Terminal Corporation to build a single MOS chip to replace the processor of its Datapoint 2200 computer. The design became Intel’s 8008 chip, an 8-bit microprocessor. At the same time, Texas Instruments was contracted to design a microprocessor, and a year later, before Intel’s chip was finished, Texas Instruments had designed the TMC 1795. Computer Terminal Corporation rejected the design in favor of its older model. The Texas Instruments chip never found a purchaser, though it is clear that Texas Instruments deserves the credit for the first 8-bit microprocessor.

Intel was quick to commercialize the 8008 microprocessor after buying back the rights from Computer Terminal Corporation. Intel’s 8008 was the first commercially successful microprocessor. By April 1972, Intel had hundreds of thousands of 8008 chips ready to ship out. The success of the 8008 led to the 8080 and then the 8086, which eventually became the x86.

Still, one more contender has entered the invention debate and made the battle over the patent rights to the microprocessor drawn out and highly litigious. Texas Instruments had originally secured multiple patents for its TMC 1795. In 1990, a little-known inventor from La Palma, California, named Gilbert Hyatt was granted a patent for the single-chip processor. The controversial patent, number 4,942,516, was granted based on a computer he built in 1969 using bipolar chipboards. Hyatt had begun working on building a microprocessor in 1967 and quit his job in 1968 to start a company devoted to building the first microprocessor. Hyatt’s company, Microcomputer Inc., had financial backing from Intel founders Gordon Moore and Robert Noyce. The patent could have led to billions of dollars in settlements in favor of Hyatt from computer manufacturers. Texas Instruments eventually succeeded in having Hyatt’s patent rescinded in 1996 after a protracted legal case, though substantial royalties had already been paid out to Hyatt. Hyatt still maintains that his design was the first microprocessor and that it only failed to achieve commercial success because of disputes with the other backers of his company.

Gilbert Hyatt of Microcomputer Inc.

Intel is still one of the largest microprocessor developers today and has successfully stayed ahead of massive technological change. In 1965, Gordon Moore, one of the founders of Intel, published a paper predicting that the number of transistors in an integrated circuit would double every year. Ten years later, in 1975, he revised the prediction to a doubling every two years. His prediction has so far been almost entirely correct. The dispute over the inventor of the microprocessor might never be fully settled, but it is clear that the development of microprocessors with smaller and cheaper transistors has changed the world by ushering in the computer revolution and the advent of personal computers.
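
As a back-of-the-envelope sketch of the two-year doubling rule (assuming, purely for illustration, the commonly cited figure of about 2,300 transistors on the Intel 4004 in 1971):

```python
# Rough projection of transistor counts under a doubling every two years.
transistors_1971 = 2300                      # assumed starting point (Intel 4004)
for year in range(1971, 2021, 10):
    doublings = (year - 1971) / 2
    print(year, round(transistors_1971 * 2 ** doublings))
```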

The First Mainframe Computer: Harvard Mark I https://linuxhint.com/first_mainframe_computer_harvard_mark_i/ Mon, 26 Oct 2020 05:48:53 +0000 https://linuxhint.com/?p=73706 The mainframe computer, or ‘big iron’ in the computer industry, is the longest-running computer system in history. This technology has been substantially useful since the World War II era; in fact, the first mainframe computer was used mainly by the US Navy during the war. Like supercomputers, the mainframe computer addressed the need for an automatic, large-scale calculator as a more efficient and error-free way of computing. The invention of such machines redefined the term ‘computer’ to refer to devices that carry out automatic calculations of mathematical operations, where the term had previously referred to the humans who performed such calculations by hand. Today, the importance of this technology in large-scale transaction processing remains unparalleled. Large industries in both the public and private sectors, from government and banking to aviation and healthcare, are in constant need of faster large-scale mainframes with higher stability and reliability. Consequently, big irons continue to evolve, as they remain at the core of many IT infrastructures.

Inspired by Babbage

Howard Aiken was a graduate student at Harvard when he came up with the concept of a device that could automatically solve differential equations, after encountering difficulties with the mathematical physics problems in his research.[1] He envisioned a machine that could take in loads of mathematical inputs and produce precise and reliable results in a short time. After coming up with an initial design, he approached some manufacturers, but none were interested. Undaunted, Aiken explored other technological advances to improve his design. He eventually came upon Henry Babbage’s demonstration of his father’s Analytical Engine at Harvard, performed 70 years prior. Noticing the similarities between his design and Charles Babbage’s, Aiken studied Babbage’s work on the Analytical Engine and used its principles in the development of a new conceptual design. Aiken finished the design in 1937 and obtained the support of the Harvard faculty, who were impressed by his efforts. He presented his design to several manufacturers and eventually gained the nod from IBM in 1939, after Thomas Watson, then chairman of IBM, saw it as good publicity for the company and as an opportunity to showcase the company’s talents.[2]

Automatic Sequence Controlled Calculator

Construction of the machine started in 1939 at the IBM plant in Endicott, NY. The original design was composed of electromechanical components such as switches, relays, rotating shafts, and clutches. In total, over 750,000 components, 500 miles of wire, and 3 million connections were used.[3] Input occurred through a 24-channel punched paper tape, two card readers, and a card punch, and the output was printed by two built-in typewriters.[4] The completed device occupied a whole room, weighing five tons and measuring 51 feet long, 8 feet high, and 2 feet deep. The device was enclosed in an elaborate casing designed by IBM’s industrial designer Norman Bel Geddes. Five years and roughly $300,000 later, IBM shipped the enormous calculator to Harvard in February 1944. The device was originally called the Automatic Sequence Controlled Calculator (ASCC) by IBM. As the largest electromechanical calculator at the time, the ASCC could process an addition or subtraction in 1 second, a multiplication in 6 seconds, and a division in 15.3 seconds. Furthermore, the device could compute logarithmic and trigonometric functions in just over a minute.[5] Because it was basically a calculator that could carry out massive mathematical operations, the device was also called the ‘Harvard Calculator.’[6] It was only later, when a rift developed between Aiken and IBM, that Aiken began calling the device the ‘Harvard Mark I,’ or simply, ‘Mark I.’

First Operators

Mark I was first operated by Harvard civilians under the direction of Robert Campbell, who ran a series of test runs after the device’s installation. In May 1944, the US Navy Bureau of Ships sent in its crew to operate the device together with the technicians at Harvard. In 1946, Aiken and Grace Hopper published the machine’s instruction manual, A Manual of Operation for the Automatic Sequence Controlled Calculator, which documents the machine’s physical components, operation, and maintenance, along with instructions on how to program it. Because of its elaborate and detailed instructions, the manual also became the first computer programming textbook. The mathematical tables printed by Mark I from 1946 to 1950 were compiled in a series of books titled Annals of the Computation Laboratory.

A Gigantic Military Aid

For the most part, Mark I was used to calculate and print mathematical tables used by the military in designing a wide range of equipment, such as underwater detection systems, surveillance cameras, and radar. Mark I was also used to compute Bessel functions in one of its longest-running projects, which some referred to as ‘Bessie.’ But perhaps its most notable contribution to the military was in the Manhattan Project, the undertaking that created the first nuclear weapons. John von Neumann, a Manhattan Project veteran, ran one of the first programs on Mark I while working on the implosion design for the atomic bomb.

The Mark I Controversy

The success of the Harvard Mark I was not spared from controversy. After the device’s launch in 1944, the Harvard News Office issued a press release claiming Aiken to be the sole inventor of the machine and disregarding the efforts of IBM’s engineers. Of the release’s eight pages, only one paragraph mentioned IBM’s contribution, with no mention of the company’s crucial role in the construction and development of the machine. Moreover, the release was issued without any consultation with IBM.[7] This deeply enraged Thomas Watson, who had personally approved Aiken’s project, and he reluctantly attended the dedication ceremony in August 1944. Though he was later appeased by Aiken, all of Aiken’s future projects were constructed without the help of IBM.

Leaving a Mark

The Harvard Mark I is a monumental invention in the history of computing. Mark I churned out mathematical tables for 16 years, concluding its final computations in 1959. After Mark I, Aiken developed three more machines of its kind, which he named Mark II, Mark III, and Mark IV. Just like any other device, the development of more advanced successors rendered Mark I technologically obsolete. Today, portions of the original machine are on display at the Harvard University Science Center, while other sections of the device went to IBM and the Smithsonian Institution.

Sources:

[1] Collection of Historical Scientific Instruments. “The Mark I Computer at Harvard University” N.d., http://sites.harvard.edu/~chsi/markone/about.html Accessed 12 Oct 2020

[2] Jeremy Norman. “Key Aspects of the Development of the Harvard Mark 1 and its Software by Howard Aiken and Grace Hopper”, History of Information, N.d., https://www.historyofinformation.com/detail.php?id=624 Accessed 12 Oct 2020

[3] Wikipedia. “Harvard Mark I”, N.d., https://en.wikipedia.org/wiki/Harvard_Mark_I Accessed 12 Oct 2020

[4] Britannica. “Harvard Mark I”, N.d., https://www.britannica.com/technology/Harvard-Mark-I Accessed 12 Oct 2020

[5] Wikipedia. “Harvard Mark I”, N.d., https://en.wikipedia.org/wiki/Harvard_Mark_I Accessed 12 Oct 2020

[6] Collection of Historical Scientific Instruments. “The Mark I Computer at Harvard University” N.d., http://sites.harvard.edu/~chsi/markone/about.html Accessed 12 Oct 2020

[7] J.A.N. Lee. “Computer Pioneers”, IEEE Computer Society, N.d., https://history.computer.org/pioneers/aiken.html Accessed 12 Oct 2020

]]>
Donald Knuth: A Professional Biography https://linuxhint.com/donald_jnuth_art_computer_programming_bio/ Tue, 20 Oct 2020 13:14:46 +0000 https://linuxhint.com/?p=72537 A luminary in the field of computer science, Donald Knuth has been named the “father of the analysis of algorithms” and has been the recipient of numerous prestigious awards. He is not only a mathematical and computer programming genius, but also a well-known professor, author, lecturer, and musician.

Younger Years

Born to German-American parents Ervin Henry Knuth and Louise Marie Bohning on January 10, 1938, in Wisconsin, Donald Ervin Knuth was a child prodigy. He attended Milwaukee Lutheran High School and was already showcasing his analytical genius in eighth grade, when he won a contest by devising a way to find 4,500 words in the name ‘Ziegler’s Giant Bar’, beating the judges’ own list of 2,500 words.[1]

In college, Knuth majored in physics after receiving a scholarship to the Case Institute of Technology, but he later switched to mathematics. While in college, he stumbled upon an IBM 650 computer, which he then used to write various computer programs. Among the popular programs he created was one used to analyze the performance of the basketball players on the team he managed, thereby helping them win games.

Knuth is one of the rare individuals to receive two degrees in the same year. He earned his B.S. in mathematics in 1960 and was simultaneously awarded an M.S. in mathematics as a special faculty award recognizing his exceptional academic performance.[2] Three years later, he earned his PhD in mathematics at the California Institute of Technology (CalTech).

Academic Career

Knuth joined CalTech as an assistant professor after finishing his PhD in 1963. He later became an associate professor and continued teaching at the university until 1968. He then left CalTech for the Institute for Defense Analyses’ Communications Research Division (IDA) to do mathematical research, but departed after one year.

After his brief stay at the IDA, he continued his academic career by joining the faculty at Stanford University. He found his niche at Stanford and continued to teach there until his retirement in 1993, after which he has held the title of Professor Emeritus of The Art of Computer Programming. During his tenure, he created a number of important courses, among them Analysis of Algorithms, Concrete Mathematics, and the Programming and Problem Solving Seminar.[3] Since his retirement, he has occasionally given free lectures at Stanford University on various technical topics, which he collectively calls “Computer Musings”. Owing to their popularity, the lectures have been posted online on the YouTube channel “stanfordonline”.[4]

Writing Career

Knuth is also widely recognized as the author of The Art of Computer Programming (TAOCP), a study of programming algorithms and the methods implemented in computer systems. He began writing the book in 1962, while he was still working on his PhD. Prior to that, Knuth had been writing compilers for different computers. Word of his expertise reached the publisher Addison-Wesley, and they closed a deal with him to write a book on compiler design. When he finished the first hand-written draft of 12 chapters in 1965,[5] the publisher decided to reorganize the draft into seven volumes, and the first volume was published in 1968. By 1973, the first three volumes of the book had been published. The publication of Volume 4 was suspended due to production issues over typography. Much to Knuth’s dislike, Addison-Wesley’s use of computerized typesetting for the 1973 release of Volume 2 did not produce high-quality print. A known perfectionist, Knuth wanted to emulate the typesetting used for the original volumes, and this was no longer available.

This led to another remarkable accomplishment: the TeX and Metafont digital typesetting systems, which were used for subsequent releases and revisions of TAOCP. It was during the development of TeX that he came up with literate programming, a method of programming in which source code is embedded in descriptive text. He later published books on the TeX and Metafont programs: The TeXbook and The METAFONTbook appeared in 1984 and 1986, respectively.[6] Interestingly, Knuth offered to pay $2.56 (256 pennies, or one hexadecimal dollar), known as a Knuth reward check,[7] for every error found in the books. This resulted in further fine-tuning of the content and more polished revisions of the books published later.

Aside from TAOCP, Knuth also authored a mathematical book, Surreal Numbers. He has also written articles for the Journal of Recreational Mathematics and contributed to Joseph Madachy’s Mathematics on Vacation.

Raised a Lutheran, Knuth has also written books related to his religion. He published 3:16 Bible Texts Illuminated, an analysis of chapter 3, verse 16 of every book in the Bible. He was invited to give lectures based on this book, which consequently led to his writing Things a Computer Scientist Rarely Talks About, based on his lectures on God and computer science.

Knuth’s brilliance and wisdom in computer science, displayed in his books, have been especially significant in the world of computer programming. He has received over 100 awards for his work, two of the most prestigious being the first ACM Grace Murray Hopper Award in 1971 and the ACM Turing Award in 1974.

Musical Inclination

Most computer geeks are more technical than creative; Knuth is one of the exceptions. In addition to his computer and mathematical expertise, he is an organist and a composer. His musical skills are likely inherited from his father, an organist. Notably, he composed a musical masterpiece, Fantasia Apocalyptica, a piece for organ completed in 2016 that celebrates the Revelation of Saint John the Divine. It premiered in Sweden in 2018.

A Leading Light

A distinguished computer scientist and contemporary author, Knuth has made notably important achievements in mathematics and computer science that have inspired many aspiring programmers through the years. A well-deserved recipient of a multitude of awards, Knuth made computer programming an art throughout his career.

Sources:

  1. Wikipedia. “Donald Knuth”, N.d., https://en.wikipedia.org/wiki/Donald_Knuth Accessed 09 Oct 2020
  2. David Walden, “A.M. Turing Award – Donald (“Don”) Ervin Knuth”, N.d. https://amturing.acm.org/award_winners/knuth_1013846.cfm Accessed 09 Oct 2020
  3. Wikipedia. “Donald Knuth”, N.d., https://en.wikipedia.org/wiki/Donald_Knuth Accessed 09 Oct 2020
  4. Donald Knuth, “Computer Musings”, N.d., https://www-cs-faculty.stanford.edu/~knuth/musings.html Accessed 09 Oct 2020
  5. David Walden, “A.M. Turing Award – Donald (“Don”) Ervin Knuth”, N.d. https://amturing.acm.org/award_winners/knuth_1013846.cfm Accessed 09 Oct 2020
  6. Wikipedia. “Donald Knuth”, N.d., https://en.wikipedia.org/wiki/Donald_Knuth Accessed 09 Oct 2020
  7. Wikipedia. “The Art of Computer Programming”, N.d., https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming#History Accessed 09 Oct 2020
The History of the Computer Mouse https://linuxhint.com/computer_mouse_history/ Tue, 20 Oct 2020 04:12:43 +0000 https://linuxhint.com/?p=72434 Many of today’s online transactions can be conveniently done with just the click of a mouse. Prior to the invention of the mouse, people used only the keyboard as an input device. Imagine the struggle of memorizing a whole gamut of commands to perform functions and operations using just a keyboard. Douglas Engelbart must have gone through the same struggle when he thought of inventing a device that would make things easier for computer operators.

A Mouse on the Wheels

Douglas Engelbart invented the very first mouse in 1964 at the Stanford Research Institute (SRI). Unlike today’s optical mouse, Engelbart’s invention used two perpendicular wheels enclosed in a wooden box, with one button on top. It could move from side to side and forwards and backwards; thus, it was first called the “X-Y position indicator for a display system.”[1] The name sounded too technical and lengthy for a layman to use. Hence, Bill English, the man who helped Engelbart build the device, used the term “mouse” to refer to the device in his 1965 publication “Computer-Aided Display Control”,[2] because of its resemblance to the small mammal.

Get the Ball Rolling

In 1968, the German company Telefunken, led by Rainer Mallebrein, developed a mouse that used a rolling ball instead of wheels. It was called the Rollkugel (rolling ball) and was an optional device for the SIG 100-86 computer system of Germany’s Federal Air Traffic Control.[3] Telefunken did not patent the device, considering it unimportant at the time.

Bill English, while working at Xerox PARC (Palo Alto Research Center), further developed Engelbart’s invention by replacing the wheels with a rolling ball in 1972. Infrared light and sensors were used to detect movement in the x and y directions. In addition, the mouse used a 9-pin connector to send signals to the computer. English’s version of the mouse shipped with the Xerox Alto, a minicomputer system with a graphical user interface, the first computer released for individual use, and the first computer to use a mouse.[4] Because it was far easier to explore a GUI with this small device, Xerox continued to include the mouse as part of the package in its subsequent releases of personal computers. This also piqued Apple’s interest, and the company made an agreement with Xerox to use the mouse for Macintosh computers.[5] Apple shipped Macintosh computers with the device in 1984, further boosting the mouse’s popularity.

Turning the Ball to Light

Because of its ease of use, the ball mouse became essential for computer users. However, it still had its downsides. Chief among them, and probably the most common, was that its functionality was hampered when it started to gather dirt, and users needed to dismantle and clean it for it to work again. This led to the evolution of the ball mouse into the optical mouse, in which light-emitting diodes (LEDs) and a light detector replaced the ball for motion detection. Some research was done in the early 1980s on using light instead of a ball to detect motion, but development halted due to high production costs. In 1988, Xerox was again the first to issue a computer with an optical mouse. The optical mouse invented by Lisa M. Williams and Robert S. Cherry of the Xerox Microelectronics Center received a US patent and was released with the Xerox STAR. Earlier optical mice, however, were not very popular, as they required a special mouse pad for motion detection. Moreover, they also had one major limitation: the inability to detect motion on shiny or glass surfaces.

It wasn’t until the late 1990s that an optical mouse that didn’t need a special mouse pad and had more surface tolerance was introduced to the market. Modern optical mice embed optoelectronic sensors that take images of the surface, along with image-processing chips. This significant improvement made the mouse more ergonomic, eliminating the need for cleaning and for a mouse pad. Moreover, motion detection is no longer as surface-dependent. The first mice to use this technology were the Microsoft IntelliMouse with IntelliEye and the IntelliMouse Explorer, both introduced in 1999.[6]

An Even Better Light

Just when everybody thought the mouse had reached its peak in terms of innovation, Sun Microsystems introduced a laser mouse, though it was mainly used with their servers and workstations. A laser mouse works just like an optical mouse, but instead of an LED, this variation uses an infrared laser diode to illuminate the surface on which the mouse operates. This captures a more defined image of the surface and gives better precision than the optical mouse. Optical mice may have overcome many of their surface-related issues, but multi-colored surfaces can still affect their performance. Laser mice do not have such problems and can track smoothly on almost any kind of surface. Though the laser mouse was first introduced in 1998, it wasn’t until 2004 that it entered the consumer market, when Logitech released the MX 1000 laser mouse.[7]

A Mouse Without A Tail

While there have been countless innovations in the motion-detection aspect of the mouse, another part that manufacturers continue to work on is the mouse’s tail. The connection evolved from a 9-pin connector to a 6-pin PS/2 connector and then to the now widely used USB connection. But one significant innovation is the invention of the wireless mouse.

The use of wireless mice dates back to 1984, when Logitech released the Logitech Metaphor, which operated on infrared signals. The advent of newer wireless technology brought further improvements, and wireless capability was later enhanced using radio signals such as Bluetooth and Wi-Fi. Nowadays, wireless mice using USB receivers are becoming more and more popular, and the latest innovation is the use of an even smaller receiver, the nano receiver.

How Far Can It Crawl?

The mouse, small as it is, has been around for over 50 years and shows no signs of becoming obsolete. On the contrary, it has become a necessity, wired and wireless alike, for computer users, even with the emergence of trackpads and touch-screen computers. With technology continuing to advance, only time will tell what tomorrow’s mouse will look like.

Sources:

  1. Elin Gunnarson, “The History of The Computer Mouse”, Nov 6, 2019 https://www.soluno.com/computermouse-history/ Accessed 07 Oct 2020
  2. Wikipedia. “Computer Mouse”, N.d., https://en.wikipedia.org/wiki/Computer_mouse Accessed 07 Oct 2020
  3. Wikipedia. “Computer Mouse”, N.d., https://en.wikipedia.org/wiki/Computer_mouse Accessed 07 Oct 2020
  4. “The History of Computer Mouse”, N.d., https://www.computinghistory.org.uk/det/613/the-history-of-the-computer-mouse/ Accessed 07 Oct 2020
  5. Elin Gunnarson, “The History of The Computer Mouse”, Nov 6, 2019 https://www.soluno.com/computermouse-history/ Accessed 07 Oct 2020
  6. “Optical Mouse”, N.d. http://www.edubilla.com/invention/optical-mouse/ Accessed 07 Oct 2020
  7. Wikipedia. “Optical Mouse”, N.d., https://en.wikipedia.org/wiki/Optical_mouse Accessed 07 Oct 2020
The History of Cray Supercomputers https://linuxhint.com/cray_supercomputers_history/ Mon, 19 Oct 2020 22:53:34 +0000 https://linuxhint.com/?p=72354 Today’s fastest supercomputer, Fugaku by Fujitsu, has a speed of 415 petaflops (Pflops).[1] But would you believe that the first supercomputer was slower than an iPhone? The CDC 6600, considered to be the first supercomputer, ran at a speed of 3 megaflops (Mflops) and was the fastest supercomputer from 1964 to 1969.[2] It was later overtaken by its successor, the CDC 7600, designed by the same man behind the CDC 6600, Seymour Cray.

Cray’s Anatomy

Seymour Cray was an American engineer and supercomputer architect who spent most of his life designing supercomputers and is credited with creating the supercomputing industry. Widely recognized as the “father of supercomputing”,[3] he worked as an employee before becoming a businessman. He graduated in Electrical Engineering from the University of Minnesota in 1949 and completed a Master’s degree in Applied Mathematics at the same institution in 1951.

From ERA to CDC

In 1950, while still finishing his master’s degree, Cray joined Engineering Research Associates (ERA), a new local company in Saint Paul, Minnesota. His expertise in digital computer technology led him to his very first project, the ERA 1103, widely known as the UNIVAC 1103, which went on to become known as the first scientific computer.[4] When ERA was purchased by Remington Rand and merged with its UNIVAC department, many of its founders left to form Control Data Corporation (CDC). In 1958, Cray left ERA and joined his former colleagues at CDC.

While at CDC, Cray set up a lab in his own home in Chippewa Falls, Wisconsin, where he designed what came to be the first supercomputer, the CDC 6600. It was released in 1964 and dominated the market for five years, selling 200 units at $9 million each.[5] In 1968, after countless technical innovations, Cray completed the design of the CDC 7600. Significantly faster than the CDC 6600, the CDC 7600 could process data at 36.4 Mflops.[6] CDC once again dominated the supercomputer industry with its release. The success of the first two series of CDC supercomputers encouraged Cray to work on a third, the CDC 8600. The project stalled, however, when CDC ran into financial difficulties and prioritized another supercomputer project, the STAR-100. The arrangement was unworkable for Cray, and he decided to leave CDC to start his own company.

Starting-up Boldly

In the same year that he left CDC, Cray founded his own company, Cray Research Inc. (CRI). With some doubts, and not yet aware of how much weight his reputation carried, he approached Wall Street for seed capital. To his surprise, investors lined up to back him, and he readily acquired the funds he needed to build the company that would provide the world’s fastest supercomputers for decades.

Two years after Cray’s departure, CDC released the STAR-100, which was three times faster than the CDC 7600 and one of the first machines to employ vector processing, in which registers and memory are arranged so that a single operation can be applied efficiently across a large set of data.[7] Poor implementation of the concept, however, led to poor performance and, eventually, the machine’s failure. Drawing on his knowledge of electronics and digital computer technology, Cray took a different approach to vector processing and replaced transistors with integrated circuits. With these and other design enhancements, Cray overcame the limitations of his competitors, and in 1976 CRI released its first vector supercomputer, the Cray-1. With an 80 MHz processor and a speed of 160 Mflops, the Cray-1 surpassed every other computer of its time. The first system went to Los Alamos National Laboratory, which won the bid at $8.8 million. Selling over 80 systems in the following years, the Cray-1 became one of the most successful supercomputers in history, and its success eventually made Cray a celebrity.
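Vector processing is easiest to see in code. The following Python/NumPy sketch is only an illustration of the idea (the Cray machines used dedicated vector registers and Fortran compilers, nothing like Python): it contrasts applying a multiply-add one element at a time with expressing the same operation over whole arrays, so that the underlying vectorized routines handle many elements per step.

```python
import numpy as np

# A single operation (multiply-add) applied across a large data set.
a = np.arange(100_000, dtype=np.float64)
b = np.full_like(a, 2.5)

# Scalar style: one element at a time, one loop iteration per element.
scalar_result = np.empty_like(a)
for i in range(len(a)):
    scalar_result[i] = a[i] * b[i] + 1.0

# Vector style: the same multiply-add expressed over whole arrays,
# letting vectorized routines process many elements per instruction.
vector_result = a * b + 1.0

assert np.allclose(scalar_result, vector_result)
```

The results are identical; the difference lies entirely in how the work is organized, which is what Cray’s vector hardware exploited to reach its speeds.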

Following the success of the Cray-1, the team at Cray Research, headed by principal designer Steve Chen, developed the Cray X-MP. It was the first Cray Research supercomputer to use multiple processors. The Cray X-MP had a processor speed of 105 MHz and a peak performance of 800 Mflops, and it was the world’s fastest supercomputer from 1983 to 1985.

Seymour Cray, in the meantime, had been working on the Cray-2, which, with a 244 MHz processor and 1.9 gigaflops (Gflops) of system performance, took over the Cray X-MP’s spot as CRI’s fastest supercomputer in 1985. It fell short, however, of Russia’s M13, which ran at 2.4 Gflops and was the first machine to break the gigaflop barrier.[8]

In 1988, Cray Research unveiled the Cray X-MP’s successor, the Cray Y-MP. Another multiprocessor machine and an improvement on the Cray X-MP, it could handle up to eight processors with a maximum speed of 2.667 Gflops, and it had a higher memory bandwidth than the Cray X-MP.[9]

Three years later, Cray Research started releasing the Cray C90 series, another multiprocessor supercomputer with double the capacity and speed of the Cray Y-MP.

The Spin-Off

While the Cray Y-MP was being developed, Seymour Cray was simultaneously working on the Cray-3. Aiming for 12 times the speed of the Cray-2, he explored using gallium arsenide semiconductors for the new machine. With the Cray Y-MP underway, and because the Cray-2’s sales were lower than the Cray X-MP’s, the company decided to put the Cray-3’s development on hold. Undaunted, Cray left CRI and formed another company, Cray Computer Corporation (CCC), in Colorado Springs, Colorado, in 1988 and continued to work on the Cray-3 project. Because it was more ambitious than the Cray-2 and required extensive experimentation, it proved more expensive than any of its predecessors. With numerous supercomputers emerging in the market, the Cray-3 had no launch customer when it was completed in 1993. Its first and only model was instead sent to the National Center for Atmospheric Research (NCAR) for demonstration.[10] With no other sales prospects for the Cray-3, CCC filed for bankruptcy in 1995.

SRC Computers and the Death of Seymour Cray

Cray used what remained of CCC to set up SRC Computers in 1995. With his unwavering passion for supercomputing, he went on to work on the Cray-4, but the work was cut short when he died of injuries sustained in a car accident in 1996. After his death, the Cray-4 was never completed.

CRI went on to release its C90 computer series until 1996, when it was acquired by Silicon Graphics; the Cray business passed to Tera Computer Company in 2000, and in the same year Tera renamed itself Cray Inc.

The Legacy of Cray Supercomputers

Supercomputers play an important role in computational science, in fields ranging from weather forecasting and pharmaceuticals to nuclear research. To meet today’s demand for faster data processing and to lead the race in supercomputing, manufacturers are in constant pursuit of innovation. For decades, Seymour Cray’s brilliance produced a series of supercomputers that became the pillars on which today’s giants stand. His machines may no longer be in use, but he undoubtedly built a legacy in the world of supercomputing.

Sources:

  1. Yevgeniy Sverdlik, June 22, 2020, “The World’s 10 Fastest Supercomputers – in Pictures” https://www.datacenterknowledge.com/supercomputers/world-s-10-fastest-supercomputers-pictures/gallery?slide=1 Accessed 05 Oct 2020
  2. Wikipedia. “CDC 6600”, N.d., https://en.wikipedia.org/wiki/CDC_6600 Accessed 05 Oct 2020
  3. Wikipedia. “Seymour Cray” N.d., https://en.wikipedia.org/wiki/Seymour_Cray Accessed 05 Oct 2020
  4. “Cray Supercomputer”, N.d., https://history-computer.com/ModernComputer/Electronic/Cray.html Accessed 05 Oct 2020
  5. Wikipedia. “History of Supercomputing”, N.d., https://en.wikipedia.org/wiki/History_of_supercomputing Accessed 05 Oct 2020
  6. Wikipedia. “CDC 7600” N.d., https://en.wikipedia.org/wiki/CDC_7600 Accessed 05 Oct 2020
  7. Wikipedia. “Cray 1” N.d., https://en.wikipedia.org/wiki/Cray-1 Accessed 05 Oct 2020
  8. Google Arts and Culture. “The Cray 2 Supercomputer” N.d., https://artsandculture.google.com/asset/the-cray-2-supercomputer-seymour-cray/NQE7aCDl2Zb0dA Accessed 05 Oct 2020
  9. “The Cray Y-MP”. 14 Nov 1995, http://www.netlib.org/benchmark/top500/reports/report94/Architec/node9.html Accessed 05 Oct 2020
  10. Wikipedia. “Cray 3” N.d., https://en.wikipedia.org/wiki/Cray-3 Accessed 05 Oct 2020
History of the Babbage Engine https://linuxhint.com/history_babbage_engine/ Thu, 15 Oct 2020 19:22:39 +0000 https://linuxhint.com/?p=71755

Long before computers became handy and electronic, they were first very mechanical, built from large gears, long rods, columns of discs, levers, springs, and metal frames, and powered by cranking a handle. Widely regarded as “the father of computers,”[1] Charles Babbage, an English mathematical genius and philosopher, invented what is known today as the Babbage Engine, also called the Difference Engine. Built to eliminate errors and to automate and speed up the mathematical computation of polynomial functions, the machine went through three designs, each an enhanced and improved version of its predecessor. Babbage based it on the mathematical method of finite differences, which both powers the machine and gives it its name: once the first few values of a polynomial and their differences are set up, every further value can be produced by repeated addition alone, with no multiplication required.
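To make the method of finite differences concrete, here is a short, purely illustrative Python sketch (the engine itself was mechanical and worked in decimal, so this is an analogy, not a reconstruction). It tabulates an arbitrarily chosen polynomial, p(x) = 2x² + 3x + 1: after the initial column of differences is set up, every new table value comes from additions alone, which is exactly what the engine’s columns of wheels performed.

```python
# Tabulate p(x) = 2x^2 + 3x + 1 for x = 0, 1, 2, ... using the method of
# finite differences: after initialization, every new value is produced
# by additions only, just as in the Difference Engine.

def difference_table(coeffs, steps):
    """coeffs are polynomial coefficients, highest degree first."""
    degree = len(coeffs) - 1

    def p(x):
        return sum(c * x ** (degree - i) for i, c in enumerate(coeffs))

    # Build the initial column: p(0), first difference, second difference, ...
    # For a degree-n polynomial the n-th difference is constant.
    values = [p(x) for x in range(degree + 1)]
    diffs = [values[:]]
    for _ in range(degree):
        prev = diffs[-1]
        diffs.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    column = [d[0] for d in diffs]  # p(0), Δp(0), Δ²p(0), ...

    results = []
    for _ in range(steps):
        results.append(column[0])
        # Add each difference into the value above it -- additions only.
        for i in range(degree):
            column[i] += column[i + 1]
    return results

print(difference_table([2, 3, 1], 6))  # [1, 6, 15, 28, 45, 66]
```

Running the sketch prints the first six table values, 1, 6, 15, 28, 45, 66, each obtained without a single multiplication after the setup step.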

The Birth of Difference Engine

In 1820, the Royal Astronomical Society assigned Babbage and his friend John Herschel the task of improving the numerical tables in the navigational book the Nautical Almanac.[2] After formulating the equations, Babbage and Herschel assigned clerks to perform the computations. To reduce errors, they had a second set of clerks repeat the arithmetic. Despite this, they still found many discrepancies in the results. This spurred Babbage to design a machine that could produce error-free results in less time. He began constructing a small engine, referred to as Difference Engine 0, and completed it in 1822.[3] The machine consisted of 18 wheels and 3 axes and produced accurate results at a rate of 33 digits per minute.[4] Babbage presented the prototype to the Royal Astronomical Society and proposed a larger-scale model that the government could use for nautical and astronomical calculations. Impressed by the accuracy of the engine, the government agreed to fund his project, which gave way to the construction of Difference Engine 1.

The Mishaps of Difference Engine 1

In 1823, the Chancellor of the Exchequer agreed to fund Babbage’s Difference Engine project and granted him £1,700[5] to get started. The engine was designed in two sections, a calculating section and a printing section, with a total of 25,000 parts and dimensions of 260 cm high, 230 cm broad, and 100 cm deep.[6] In 1824, Babbage began constructing the machine in two rooms of his house, but he later realized he needed a bigger space and skilled workers to finish the project. He hired an engineer, Joseph Clement, to take charge of the mechanical work; Clement hired more workers and used his own workshop for the project.

However, the construction took much longer than Babbage and the government had anticipated. By 1830, Clement’s workers had fabricated all the parts, but most of the sections had not yet been assembled. Because the project was taking so long, Babbage and the government decided to move it out of Clement’s workshop. By that time, Babbage had set aside a building on his property for building the Difference Engine. Clement’s resistance, however, made things difficult: he now insisted that the engine belonged to him, based on the trade practices of the time. In 1832, Clement assembled a portion of the calculating mechanism, and Babbage presented it to the government for demonstration.[2] This was only one-seventh of the whole calculating section, but it was a working model. Construction continued, and the calculating section came close to completion, but the printing section was left untouched. Work on the project stopped in 1833, and it was only in 1834, after the two had parted ways, that Clement agreed to transfer the engine to Babbage’s workshop. This prompted the government to stop funding the project, considering that more money would be needed to set the engine up again in Babbage’s workshop. By this time, the government had already spent £17,000.[7]

Because of these unfortunate events, Babbage had lost the motivation to continue with the project. Instead, he shifted his focus to a more ambitious engine, the Analytical Engine, which he believed could do all the things that the Difference Engine could do and much more.

Abandoned but Not Forgotten

Babbage spent most of his later years designing the Analytical Engine after the failure of the Difference Engine, but it was the very development of the Analytical Engine that prompted him to return to the Difference Engine. In 1847, using the arithmetic mechanisms of the Analytical Engine, he redesigned the Difference Engine, refining the original design with simpler mechanisms and fewer parts. The new version, which he called Difference Engine 2, had only a third of the parts of the original[8] and could calculate faster than its predecessor. He completed the design in 1849 and presented it to the British government. Due to the previous failure of Difference Engine 1, the government declined to support the project. Babbage passed the design and the surviving sections of Difference Engine 1 on to his son, Major-General Henry Prevost Babbage, who showed great interest in his father’s work.[2] After his father died in 1871, Henry Babbage continued to work on and publicize the designs. Difference Engine 2, however, was never constructed to completion.

Difference Engine, In Modern Times

In the 1980s, more than 100 years after Babbage’s death, Allan Bromley, an associate professor at the University of Sydney, took an interest in the original drawings of the Babbage Engine held at the Science Museum Library in London. His studies caught the attention of the museum’s then Curator of Computing, Doron Swade, who led the construction of the Difference Engine 2 calculating section from 1985 to 1991. Nathan Myhrvold, former Chief Technology Officer at Microsoft, then commissioned the construction of the engine’s printing section. The first complete Difference Engine 2 was finished in 2002,[9] and it works just as Babbage had designed. After its successful completion, Myhrvold also funded the construction of a clone, which was completed in 2008.

Today, the original Difference Engine 2 is on display at the Science Museum in London, and its clone sits inside Intellectual Ventures in Seattle. Babbage’s pioneering work in automatic computing became the foundation for the computer technologies that followed. He may never have seen his masterpiece in its full glory, but the Babbage Engine is undoubtedly one of the most brilliant and foundational inventions in the history of computer technology.

Sources:

[1] “Charles Babbage”, N.d., https://history-computer.com/People/BabbageBio.html Accessed 29 September 2020
[2] “Differential Engine”, N.d., https://history-computer.com/Babbage/DifferentialEngine.html Accessed 29 September 2020
[3] Wikipedia. “Difference Engine”, N.d., https://en.wikipedia.org/wiki/Difference_engine Accessed 29 September 2020
[4] “Differential Engine”, N.d., https://history-computer.com/Babbage/DifferentialEngine.html Accessed 29 September 2020
[5] Wikipedia. “Difference Engine”, N.d., https://en.wikipedia.org/wiki/Difference_engine Accessed 29 September 2020
[6] “Differential Engine”, N.d., https://history-computer.com/Babbage/DifferentialEngine.html Accessed 29 September 2020
[7] “Differential Engine”, N.d., https://history-computer.com/Babbage/DifferentialEngine.html Accessed 29 September 2020
[8] “The Babbage Engine”, N.d., https://www.computerhistory.org/babbage/ Accessed 29 September 2020
[9] “The Babbage Engine”, N.d., https://www.computerhistory.org/babbage/ Accessed 29 September 2020