Alfred V Aho

Alfred Vaino Aho

United States – 2020

For fundamental algorithms and theory underlying programming language implementation and for synthesizing these results and those of others in their highly influential books, which educated generations of computer scientists.

Jeffrey D Ullman

Jeffrey David Ullman

United States – 2020

For fundamental algorithms and theory underlying programming language implementation and for synthesizing these results and those of others in their highly influential books, which educated generations of computer scientists.


Edwin Catmull

Edwin E. Catmull

United States – 2019

For fundamental contributions to 3D computer graphics, and the impact of computer-generated imagery (CGI) in filmmaking and other applications.

Pat Hanrahan

Patrick M. Hanrahan

United States – 2019

For fundamental contributions to 3D computer graphics, and the impact of computer-generated imagery (CGI) in filmmaking and other applications.


Yoshua Bengio

Canada – 2018

For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.

Yoshua Bengio was born to two college students in Paris, France. His parents had rejected their traditional Moroccan Jewish upbringings to embrace the 1960s counterculture’s focus on personal freedom and social solidarity. He attributes his comfort in following his “scientific intuition” to this upbringing.[1] In search of a more inclusive society, the family moved to Montreal, in the French-speaking Canadian province of Quebec, when Yoshua was twelve years old.

Bengio spent his childhood as a self-described “typical nerd,” bored by high school and reading alone in the library. Like many in his generation he discovered computers during his teenage years, pooling money earned from newspaper delivery with his brother to purchase Atari 800 and Apple II personal computers. This led him to study computer engineering at McGill. Unlike a typical computer science curriculum, this included significant training in physics and continuous mathematics, providing essential mathematical foundations for his later work in machine learning.

After earning his first degree in 1986, Bengio remained at McGill to follow up with a master’s degree in 1988 and a Ph.D. in computer science in 1991. His study was funded by a graduate scholarship from the Canadian government. He was introduced to the idea of neural networks when reading about massively parallel computation and its application to artificial intelligence. Discovering the work of Geoffrey Hinton, his co-awardee, awakened an interest in the question “what is intelligence?” This chimed with his childhood interest in science fiction, in what he called a “watershed moment” for his career. Bengio found a thesis advisor, Renato De Mori, who studied speech recognition and was beginning to transition from classical AI models to statistical approaches.

As a graduate student he was able to attend conferences and workshops to participate in the tight-knit but growing community interested in neural networks, meeting what he called the “French mafia of neural nets,” including co-awardee Yann LeCun. He describes Hinton and LeCun as his most important career mentors, though he did not start working with Hinton until years later. He first did a one-year postdoc at MIT with Michael I. Jordan, which helped him advance his understanding of probabilistic modeling and recurrent neural networks. Then, as a postdoctoral fellow at Bell Labs, he worked with LeCun to apply techniques from his Ph.D. thesis to handwriting analysis. This contributed to a groundbreaking AT&T automatic check processing system, based around an algorithm that read the numbers written by hand on paper checks by combining neural networks with probabilistic models of sequences.

Bengio returned to Montreal in 1993 as a faculty member at its other major university, the University of Montreal. He won rapid promotion, becoming a full professor in 2002. Bengio suggests that Canada’s “socialist” commitment to spreading research funding widely and towards curiosity-driven research explains its willingness to support his work on what was then an unorthodox approach to artificial intelligence. This, he believes, laid the groundwork for Canada’s current strength in machine learning.

In 2000 he made a major contribution to natural language processing with the paper “A Neural Probabilistic Language Model.” Training networks to distinguish meaningful sentences from nonsense was difficult because there are so many different ways to express a single idea, with most combinations of words being meaningless. This causes what the paper calls the “curse of dimensionality,” demanding infeasibly large training sets and producing unworkably complex models. The paper introduced high-dimensional word embeddings as a representation of word meaning, letting networks recognize the similarity between new phrases and those included in their training sets, even when the specific words used are different. The approach has led to a major shift in machine translation and natural language understanding systems over the last decade.
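The intuition behind word embeddings can be illustrated with a toy example. The vectors below are hand-set for illustration only; a real model such as the one in Bengio’s paper learns them from data, so that words used in similar contexts end up close together:

```python
import numpy as np

# Hand-set, illustrative embeddings; real embeddings are learned vectors
# with hundreds of dimensions, not three.
emb = {
    "cat": np.array([0.90, 0.80, 0.10]),
    "dog": np.array([0.85, 0.75, 0.20]),
    "car": np.array([0.10, 0.20, 0.90]),
}

def cosine(u, v):
    # cosine similarity: 1.0 means same direction, 0.0 means unrelated
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A model that has seen "the cat sleeps" can generalize to
# "the dog sleeps" because the two embeddings are nearby.
assert cosine(emb["cat"], emb["dog"]) > cosine(emb["cat"], emb["car"])
```

Because similarity is measured in the continuous embedding space rather than over discrete word identities, the number of distinct sentences the model must memorize no longer explodes combinatorially, which is how the approach sidesteps the curse of dimensionality.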

Bengio’s group further improved the performance of machine translation systems by combining neural word embeddings with attention mechanisms. “Attention” is another term borrowed from human cognition. It helps networks to narrow their focus to only the relevant context at each stage of the translation in ways that reflect the context of words, including, for example, what a pronoun or article is referring to.
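A minimal sketch of one common form of attention (scaled dot-product weighting, with made-up vectors standing in for word representations; the original translation work used a related additive formulation) shows the focusing effect:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(queries, keys, values):
    # each query scores every key; softmax turns the scores into a
    # focus distribution; the output is a weighted mix of the values
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = softmax(scores)
    return weights @ values, weights

rng = np.random.default_rng(0)
d = 8
keys = rng.normal(size=(5, d))    # one vector per source word (illustrative)
values = rng.normal(size=(5, d))
query = keys[2:3] * 3.0           # a query strongly aligned with word 2

out, weights = attention(query, keys, values)
# the mechanism "attends" mostly to the matching source position
assert weights.argmax() == 2
```

In translation, this lets the network look back at just the relevant source words (the antecedent of a pronoun, say) when producing each output word, instead of compressing the whole sentence into a single fixed vector.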

Together with Ian Goodfellow, one of his Ph.D. students, Bengio developed the concept of “generative adversarial networks.” Whereas most networks were designed to recognize patterns, a generative network learns to generate objects that are difficult to distinguish from those in the training set. The technique is “adversarial” because a network learning to generate plausible fakes can be trained against another network learning to identify fakes, allowing for a dynamic learning process inspired by game theory. The process is often used to facilitate unsupervised learning. It has been widely used to generate images, for example to automatically generate highly realistic photographs of non-existent people or objects for use in video games.
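The adversarial objective can be sketched numerically. In this toy fragment the "networks" are stand-ins (a logistic discriminator with invented parameters, and fixed Gaussian samples playing the roles of real data and untrained generator output), so every value here is illustrative rather than a working GAN:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w, b):
    # probability the discriminator assigns to "x is real"
    # (a logistic model standing in for a full network)
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

real = rng.normal(4.0, 1.0, size=256)   # samples from the "training set"
fake = rng.normal(0.0, 1.0, size=256)   # samples from the "generator"

w, b = 1.0, -2.0                        # invented discriminator parameters
d_real = discriminator(real, w, b)
d_fake = discriminator(fake, w, b)

# The discriminator maximizes log D(real) + log(1 - D(fake)); the
# generator pushes the other way, here via the common variant of
# maximizing log D(fake). Training alternates gradient steps on the two.
d_loss = -(np.log(d_real).mean() + np.log(1.0 - d_fake).mean())
g_loss = -np.log(d_fake).mean()
```

At equilibrium the generator's samples are indistinguishable from real ones and the discriminator can do no better than guessing, which is the game-theoretic core of the idea.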

Bengio has been central to the institutional development of machine learning in Canada. In 2004, a program in Neural Computation and Adaptive Perception was funded within the Canadian Institute for Advanced Research (CIFAR). Hinton was its founding director, but Bengio was involved from the beginning as a Fellow of the institute. So was LeCun, with whom Bengio has been codirecting the program (now renamed Learning in Machines and Brains) since 2014. The name reflects its interdisciplinary cognitive science agenda, with a two-way passage of ideas between neuroscience and machine learning.

Thanks in part to Bengio, the Montreal area has become a global hub for work on what Bengio and his co-awardees call “deep learning.” He helped to found Mila, the Montreal Institute for Learning Algorithms (now the Quebec Artificial Intelligence Institute), to bring together researchers from four local institutions. Bengio is its scientific director, overseeing a federally funded center of excellence that co-locates faculty and students from participating institutions on a single campus. It boasts a broad range of partnerships with famous global companies and an increasing number of local machine learning startup firms. As of 2020, Google, Facebook, Microsoft and Samsung had all established satellite labs in Montreal. Bengio himself has co-founded several startup firms, most notably Element AI in 2016 which develops industrial applications for deep learning technology.

Author: Thomas Haigh

[1] Personal details and quotes are from Bengio’s Heidelberg Laureate interview - https://www.youtube.com/watch?v=PHhFI8JexLg.

Geoffrey E Hinton

Canada – 2018

For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.

When Geoffrey Everest Hinton decided to study science he was following in the tradition of ancestors such as George Boole, the Victorian logician whose work underpins the study of computer science and probability. Geoffrey’s great grandfather, the mathematician and bigamist Charles Hinton, coined the word “tesseract” and popularized the idea of higher dimensions, while his father, Howard Everest Hinton, was a distinguished entomologist. Their shared middle name, Everest, celebrates a relative after whom the mountain was also named (to commemorate his service as Surveyor General of India).

Having begun his time at Cambridge University with plans to study physiology and physics, before dabbling in philosophy on his way to receiving a degree in experimental psychology in 1970, Hinton concluded that none of these sciences had yet done much to explain human thought. He made a brief career shift into carpentry, in search of more tangible satisfactions, before being drawn back to academia in 1972 by the promise of artificial intelligence, which he studied at the University of Edinburgh.

By the mid-1970s an “AI winter” of high profile failures had reduced funding and enthusiasm for artificial intelligence research. Hinton was drawn to a particularly unfashionable area: the development of networks of simulated neural nodes to mimic the capabilities of human thought. This willingness to ignore conventional wisdom was to characterize his career. As he put it, “If you think it’s a really good idea and other people tell you it’s complete nonsense then you know you are really onto something.”[1]

The relationship of computers to brains had captivated many computer pioneers of the 1940s, including John von Neumann who used biological terms such as “memory,” “organ” and “neuron” when first describing the crucial architectural concepts of modern computing in the “First Draft of a Report on the EDVAC.” This was influenced by the emerging cybernetics movement, particularly the efforts of Warren McCulloch and Walter Pitts to equate networks of stylized neurons with statements in Boolean logic. That inspired the idea that similar networks might, like human brains, be able to learn to recognize objects or carry out other tasks. Interest in this approach had declined after Turing Award winner Marvin Minsky, working with Seymour Papert, demonstrated that a heavily promoted class of neural networks, in which inputs were connected directly to outputs, had severe limits on its capabilities.

Graduating in 1978, Hinton followed in the footsteps of many of his forebears by seeking opportunities in the United States, joining a group of cognitive psychologists as a Sloan Foundation postdoctoral researcher at the University of California, San Diego. Their work on neural networks drew on a broad shift in the decades after the Second World War towards Bayesian approaches to statistics, which treat probabilities as degrees of belief, updating estimates as data accumulates.

Most work on neural networks relies on what is now called a “supervised learning” approach, exposing an initially random network configuration to a “training set” of labelled input data. The network’s initial responses have no systematic relationship to the features of the input data, but the learning algorithm reconfigures the network as each guess is scored against the labels provided. Thus, for example, a network trained on a large set of photographs of different species of fish might develop a reliable ability to recognize whether a new picture shows a carp or a tuna. This requires a learning algorithm that automatically reconfigures the network to identify “features” in the input data that correlate with correct outputs.

Working with David Rumelhart and Ronald J. Williams, Hinton popularized what they termed a “back-propagation” algorithm in a pair of landmark papers published in 1986. The term reflected a phase in which the algorithm propagated measures of the errors produced by the network’s guesses backwards through its neurons, starting with those directly connected to the outputs. This allowed networks with intermediate “hidden” neurons between input and output layers to learn efficiently, overcoming the limitations noted by Minsky and Papert.
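The idea behind back-propagation can be sketched in a few lines. This toy network (random initialization, sigmoid units, plain gradient descent; a from-scratch illustration, not the 1986 authors' code) learns XOR, a task beyond the single-layer networks Minsky and Papert analyzed:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: output 1 exactly when the two inputs differ
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer (4 neurons)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for step in range(4000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # backward pass: propagate the error from the outputs back
    # through the hidden neurons, then adjust all weights
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

The key step is `d_h = (d_out @ W2.T) * h * (1 - h)`: the output-layer errors are pushed backwards through the weights to assign blame to each hidden neuron, which is exactly what lets the hidden layer learn.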

Their paper describes the use of the technique to perform tasks including logical and arithmetic operations, shape recognition, and sequence generation. Others had worked independently along similar lines, including Paul J. Werbos, without much impact. Hinton attributes the impact of his work with Rumelhart and Williams to the publication of a summary of their work in Nature, and the efforts they made to provide compelling demonstrations of the power of the new approach. Their findings began to revive enthusiasm for the neural network approach, which has increasingly challenged other approaches to AI such as the symbol processing work of Turing Award winners John McCarthy and Marvin Minsky and the rule-based expert systems championed by Edward Feigenbaum.

By the time the papers with Rumelhart and Williams were published, Hinton had begun his first faculty position, in Carnegie-Mellon’s computer science department. This was one of the leading computer science programs, with a particular focus on artificial intelligence going back to the work of Herb Simon and Allen Newell in the 1950s. But after five years there Hinton left the United States in part because of his opposition to the “Star Wars” missile defense initiative. The Defense Advanced Research Projects Agency was a major sponsor of work on AI, including Carnegie-Mellon projects on speech recognition, computer vision, and autonomous vehicles. Hinton first became a fellow of the Canadian Institute for Advanced Research (CIFAR) and moved to the Department of Computer Science at the University of Toronto. He spent three years from 1998 until 2001 setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to Toronto.

Hinton’s research group in Toronto made a string of advances in what came to be known as “deep learning”, named as such because it relied on neural networks with multiple layers of hidden neurons to extract higher level features from input data. Hinton, working with David Ackley and Terry Sejnowski, had previously introduced a class of network known as the Boltzmann machine, which in a restricted form was particularly well-suited to this layered approach. His ongoing work to develop machine learning algorithms spanned a broad range of approaches to improve the power and efficiency of systems for probabilistic inference. In particular, his joint work with Radford Neal and Richard Zemel in the early 1990s introduced variational methods to the machine learning community.

Hinton carried this work out with dozens of Ph.D. students and post-doctoral collaborators, many of whom went on to distinguished careers in their own right. He shared the Turing Award with one of them, Yann LeCun, who spent 1987-88 as a post-doctoral fellow in Toronto after Hinton served as the external examiner on his Ph.D. in Paris. From 2004 until 2013 he was the director of the program on "Neural Computation and Adaptive Perception" funded by the Canadian Institute for Advanced Research. That program included LeCun and his other coawardee, Yoshua Bengio. The three met regularly to share ideas as part of a small group. Hinton has advocated for the importance of senior researchers continuing to do hands-on programming work to effectively supervise student teams.

Hinton has long been recognized as a leading researcher in his field, receiving his first honorary doctorate from the University of Edinburgh in 2001, three years after he became a fellow of the Royal Society. In the 2010s his career began to shift from academia to practice as the group’s breakthroughs underpinned new capabilities for object classification and speech recognition appearing in widely used systems produced by cloud computing companies such as Google and Facebook. Their potential was vividly demonstrated in 2012 when a program developed by Hinton with his students Alex Krizhevsky and Ilya Sutskever greatly outperformed all other entrants to ImageNet, an image recognition competition involving a thousand different object types. It used graphics processor chips to run code combining several of the group’s techniques in a network of “60 million parameters and 650,000 neurons” composed of “five convolutional layers, some of which are followed by max-pooling layers, and three globally-connected layers with a final 1000-way softmax.”[2] The “convolutional layers” were an approach originally conceived of by LeCun, to which Hinton’s team had made substantial improvements.

This success prompted Google to acquire a company, DNNresearch, founded by Hinton and the two students to commercialize their achievements. The system allowed Google to greatly improve its automatic classification of photographs. Following the acquisition, Hinton became a vice president and engineering fellow at Google. In 2014 he retired from teaching at the university to establish a Toronto branch of Google Brain. Since 2017, he has held a volunteer position as chief scientific advisor to Toronto’s Vector Institute for the application of machine learning in Canadian health care and other industries. Hinton thinks that in the future teaching people how to train computers to perform tasks will be at least as important as teaching them how to program computers.

Hinton has been increasingly vocal in advocating for his long-standing belief in the potential of “unsupervised” training systems, in which the learning algorithm attempts to identify features without being provided large numbers of labelled examples. As well as being useful these unsupervised learning methods have, Hinton believes, brought us closer to understanding the learning mechanisms used by human brains.


Yann LeCun

United States – 2018

For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.

Yann LeCun spent his early life in France, growing up in the suburbs of Paris. (His name was originally Le Cun, but he dropped the space after discovering that Americans were confused and treated Le as his middle name). His father was an engineer, whose interests in electronics and mechanics were passed on to Yann during a boyhood of tinkering. As a teenager he enjoyed playing in a band as well as science and engineering. He remained in the region to study, earning the equivalent of a master’s degree from the École Supérieure d'Ingénieurs en Électrotechnique et Électronique, one of France’s network of competitive and specialized non-university schools established to train the country’s future elite. His work there focused on microchip design and automation.

LeCun attributes his longstanding interest in machine intelligence to seeing the murderous mainframe HAL, whom he encountered as a young boy in the movie 2001. He began independent research on machine learning as an undergraduate, making it the centerpiece of his Ph.D. work at the Sorbonne Université (then called Université Pierre et Marie Curie). LeCun’s research closely paralleled discoveries made independently by his co-awardee Geoffrey Hinton. Like Hinton he had been drawn to the then-unfashionable neural network approach to artificial intelligence, and like Hinton he discovered that the well-publicized limitations of simple neural networks could be overcome with what was later called the “back-propagation” algorithm, which can efficiently train “hidden” neurons in intermediate layers between the input and output nodes.

A workshop held in Les Houches in the French Alps in 1985 first brought LeCun into direct contact with the international research community working along these lines. It was there that he met Terry Sejnowski, a close collaborator of Hinton’s whose work on backpropagation was not yet published. A few months later when Hinton was in Paris he introduced himself to LeCun, which led to an invitation to a summer workshop at Carnegie Mellon and a post-doctoral year with Hinton’s new research group in Toronto. This collaboration endured: two decades later, in 2004, he worked with Hinton to establish a program on Neural Computation and Adaptive Perception through the Canadian Institute for Advanced Research (CIFAR). Since 2014 he has co-directed it, now renamed Learning in Machines & Brains, with his co-awardee Yoshua Bengio.

At the conclusion of the fellowship, in 1988, LeCun joined the staff of Bell Labs, a renowned center of computer science research. Its Adaptive Systems Research department, headed by Lawrence D. Jackel, focused on machine learning. Jackel was heavily involved in establishing the Neural Networks for Computing workshop series, later run by LeCun and renamed the “Learning Workshop”. It was held annually from 1986 to 2012 at the Snowbird resort in Utah. The invitation-only event brought together an interdisciplinary group of researchers to exchange ideas on the new techniques and learn how to apply them in their own work.

LeCun’s work at Bell Labs focused on neural network architectures and learning algorithms. His most far-reaching contribution was a new approach, called the “convolutional neural network.” Many networks are designed to recognize visual patterns, but a simple learning model trained to respond to a feature in one location (say the top left of an image) would not respond to the same feature in a different location. The convolutional network is designed so that a filter, or detector, is swept across the grid of input values. As a result, higher-level portions of the network are alerted to the pattern wherever it occurs in the image. This made training faster and reduced the overall size of networks, boosting their performance. This work was an extension of LeCun’s earlier achievements, because convolutional networks rely on backpropagation techniques to train their hidden layers.
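The sweeping-filter idea can be sketched directly. The filter values and images below are invented for illustration; a real convolutional network learns its filters by backpropagation rather than having them set by hand:

```python
import numpy as np

def convolve2d(image, kernel):
    # slide the same small filter across every position of the image
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# a hand-set vertical-edge detector (illustrative, not learned)
kernel = np.array([[1., -1.],
                   [1., -1.]])

img_a = np.zeros((5, 5)); img_a[:, 1] = 1.0   # bright stripe near the left
img_b = np.zeros((5, 5)); img_b[:, 3] = 1.0   # the same stripe, shifted right

resp_a = convolve2d(img_a, kernel)
resp_b = convolve2d(img_b, kernel)

# because one filter is reused everywhere, the detector fires equally
# strongly wherever the feature appears
assert resp_a.max() == resp_b.max()
```

Reusing one small set of weights across all positions is also why convolutional networks need far fewer parameters than fully connected ones of the same input size.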

As well as developing the convolutional approach, LeCun pioneered its application in “graph transformer networks” to recognize printed and handwritten text. This was used in a widely deployed system to read numbers written on checks, produced in the early 1990s in collaboration with Bengio, Leon Bottou and Patrick Haffner. At that time handwriting recognition was enormously challenging, despite an industry-wide push to make it work reliably in “slate” computers (the ancestors of today’s tablet systems). Automated check clearing was an important application, as millions were processed daily. The job required very high accuracy, but unlike general handwriting analysis required only digit recognition, which reduced the number of valid symbols. The technology was licensed by specialist providers of bank systems such as National Cash Register. LeCun suggests that at one point it was reading more than 10% of all the checks written in the US.

Check processing work was carried out in centralized locations, which could be equipped with the powerful computers needed to run neural networks. Increases in computer power made it possible to build more complex networks and deploy convolutional approaches more widely. Today, for example, the technique is used on Android smartphones to power the speech recognition features of the Google Assistant such as real-time transcription, and the camera-based translation features of the translation app.

His other main contribution at Bell Labs was the development of "Optimal Brain Damage" regularization methods. This evocatively named concept identifies ways to simplify neural networks by removing unnecessary connections. Done properly, this “brain damage” could produce simpler, faster networks that performed as well or better than the full-size version.
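The saliency measure at the heart of the method can be sketched as follows. The diagonal Hessian values below are synthetic stand-ins; a real implementation estimates these curvature terms with an extra backprop-like pass over the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Optimal Brain Damage ranks each weight by an estimate of how much
# the training error would grow if that weight were deleted:
#   saliency_i ≈ 0.5 * H_ii * w_i**2
# where H_ii is the i-th diagonal term of the error Hessian.
weights = rng.normal(0, 1, size=20)            # trained weights (synthetic)
hessian_diag = rng.uniform(0.1, 2.0, size=20)  # assumed curvature values

saliency = 0.5 * hessian_diag * weights ** 2

# prune the least salient half of the connections by zeroing them
keep = saliency >= np.median(saliency)
pruned = np.where(keep, weights, 0.0)
```

After pruning, the smaller network is typically retrained briefly; a weight can have a large magnitude yet low saliency if the error surface is flat in its direction, which is why the curvature term matters.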

In 1996 AT&T, which had failed to establish itself in the computer industry, spun off most of Bell Labs and its telecommunications hardware business into a new company, Lucent Technologies. LeCun stayed behind to run an AT&T Labs group focused on image processing research. His primary accomplishment there was the DjVu image compression technology, developed with Léon Bottou, Patrick Haffner, and Paul G. Howard. High speed Internet access was rare, so as a communications company AT&T’s services would be more valuable if large documents could be downloaded more quickly. LeCun’s algorithm compressed files more effectively than Adobe’s Acrobat software, but lacked the latter’s broad support. It was extensively used by the Internet Archive in the early 2000s.

LeCun left industrial research in 2003, for a faculty position as a professor of computer science at New York University’s Courant Institute of Mathematical Sciences, the leading center for applied mathematical research in the US. It has a strong presence in scientific computation and a particular focus on machine learning. He took the opportunity to restore his research focus on neural networks. At NYU LeCun ran the Computational and Biological Learning Lab, which continued his work on algorithms for machine learning and applications for computer vision. He is still at NYU, though as his reputation has grown he has added several new titles and additional appointments. Most notable of these is the Silver endowed professorship, awarded to LeCun in 2008 and funded by a generous bequest from Polaroid co-founder Julius Silver to allow NYU to attract and retain top faculty.

LeCun has retained his love of building things, with hobbies that include constructing airplanes, electronic musical instruments, and robots. At NYU he combined this interest in robotics with his work on convolutional networks for computer vision to participate in DARPA-sponsored projects for autonomous navigation. His most important institutional initiative was work in 2011 to create the NYU Center for Data Science, which he directed until 2014. The center offers undergraduate and graduate degrees and functions as a focal point for data science initiatives across the university.

By the early 2010s the leading technology companies were scrambling to deploy machine learning systems based on neural networks. Like other leading researchers LeCun was courted by the tech giants, and in December 2013 he was hired by Facebook to create FAIR (Facebook AI Research), which he led until 2018 in New York, sharing his time between NYU and FAIR. That made him the public face of AI at Facebook, broadening his role from a researcher famous within several fields to a tech industry leader frequently discussed in newspapers and magazines. In 2018, he stepped down from the director role and became Facebook’s Chief AI Scientist to focus on strategy and scientific leadership.

Author: Thomas Haigh


John L Hennessy

United States – 2017

For pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry.

John L. Hennessy, born in 1952, was raised on Long Island’s north shore in Huntington, New York. His mother was a teacher before retiring to raise six children; his father was an electrical engineer. He was a tinkerer in high school, winning a science fair prize for an automated tic-tac-toe machine. This impressed the mother of his senior prom date, Andrea Berti, a girl he knew from his shelf-stocking job at the local King Kullen grocery store. He enrolled at Villanova University near Philadelphia, earning a bachelor’s degree in electrical engineering (1973). For graduate school, he returned to Long Island, attending Stony Brook University (then S.U.N.Y. Stony Brook), married Andrea in 1974, and garnered a master’s (1975) and Ph.D (1977) in computer science.

Hennessy became an Assistant Professor at Stanford in September, 1977, remaining for virtually his entire career. Coincident with his first major honor, the John J. Gallen Memorial award by Villanova in 1983, he became an Associate Professor at Stanford. In 1986 he became the inaugural holder of the Willard and Inez Kerr Bell endowed chair.

His work centered on computer architecture. In 1980, microcomputers were rapidly advancing in complexity, to challenge the capabilities of minicomputers. The prevailing wisdom was that powerful processors needed very large, very rich instruction sets. As Hennessy observed in his Turing award lecture, “Microcomputers were competing on crazy things like here’s my new instruction to do this kind of thing … rather than saying here’s a set of standard benchmarks, and my machine is faster than your machine….”[1]

Hennessy won fame by challenging this mindset with his work on reduced instruction set computer architectures (RISC), along with David Patterson, a Berkeley professor. They first met at a microprocessor conference in 1980 where each was presenting similar micro-coding concepts. Hennessy recalled that “like Dave at Berkeley, we started with a clean slate with our graduate student class that was sort of a brainstorming class. We had a unique perspective. People were ignoring basic performance implications completely. It was an efficiency argument from the very beginning…. We both built prototypes of our design, and we could see that the advantages were clear. These were academic prototypes built by graduate students.”

Building on the original RISC work of John Cocke at IBM, Hennessy’s Stanford team developed a prototype chip in 1983 named MIPS (Microprocessor without Interlocked Pipeline Stages). The first MIPS chip used 25,000 transistors and ran at a slightly faster clock speed than a similar Berkeley chip called RISC-2 (40,760 transistors).[2] To advance and commercialize this technology he co-founded MIPS Computer Systems in 1984, during a sabbatical from Stanford. He served eight years as its chief scientist, and six more as chief architect. MIPS was later acquired by Silicon Graphics, where its processors, combined with custom graphics developed by James Clark at Stanford, powered the high-performance graphics workstations relied on by Hollywood in the late 1980s and 1990s.

Patterson recalled that: “There is this remarkable point in time when it was clear that a handful of grad students at Berkeley or Stanford could build a microprocessor that was arguably better than what industry could build—faster, cheaper, more efficient…. RISC was very controversial, it was heretical… and John and I were on the RISC side while all the other people were on the CISC side…. We had a hard time convincing people of that.” [3]

While others argued about the relative merits of the Hennessy and Patterson variants of RISC, they recognized that the much larger contest was between RISC ideas embodied in both of their chips versus the CISC (Complex Instruction Set Computing) architectures then used throughout the industry from mainframes to personal computers. The two began a partnership, creating a systematic quantitative approach for designing faster, lower power and reduced complexity microprocessors, co-authoring two books that became landmark textbooks for the discipline. The first, Computer Architecture: A Quantitative Approach, now in its sixth edition, established enduring principles for generations of architects. [4]

Patterson quantified the impact of this work in his Turing lecture, given jointly with Hennessy: “Our colleagues at Intel … had great technology…. They got up to 350 million chips per year, not only dominating the desktop, but servers as well…. But the Post-PC era, starting with the iPhone in 2007 totally changed things… valuing area and energy as much as performance. Last year there were more than 20 billion chips with 32-bit processors in them. [Intel compatible] chips peaked in 2011 with dropping sales of PCs, and there are only 10 million chips in the cloud, so 99% of the processors today are RISC.” [5]

Hennessy’s career at Stanford led him from research to administrative leadership. Within five years of becoming department chair in 1994 he was appointed Provost, working with his former colleague Jim Clark (founder of Silicon Graphics) to arrange a record-setting donation to create a biological engineering and sciences center. Clark said of Hennessy: “The most lasting impression was how good he was with students, how hard he worked and how helpful he was with my project." [6] A year later he rose to the top of a pool of five hundred candidates to become president of Stanford, helped by his exceptional connections to Silicon Valley’s high-tech industry. He had co-founded Atheros as well as MIPS, served many years on the Cisco Systems Board of Directors, and subsequently joined the Google Board, where in 2016 he became chairman of Alphabet, Google’s parent company. Under his leadership Stanford’s fundraising brought in $13 billion, [7] including a five-year campaign from 2007 to 2011 that raised $6.2 billion, 60% more than the previous record for any university.[8]

During sixteen years as president, Hennessy reshaped Stanford’s buildings, the campus, its research profile, and its financial resources. An appreciative article in Stanford Magazine catalogued the accomplishments of his term: “70 building projects,” a cultural shift on campus to “a deep commitment to interdisciplinary collaboration,” and the “deft and decisive handling” of the challenges of a major recession. Maybe most important, and surprising to many, was Hennessy’s devotion to students, to interdisciplinary studies, to the humanities, and to the arts. Hennessy pushed for world-class performance and exhibition spaces, drawing on a comment from Itzhak Perlman that “Mr. President, Stanford is a great university, but you have terrible performance facilities.” Hennessy called this complaint “a gift to a president, because there’s a story I can repeat from an expert.” [9] Since retiring as president in 2016 he has been the inaugural director of the Knight-Hennessy Scholars program.

Fittingly, for the two RISC champions who took on the computer establishment in the 1980s, Hennessy and Patterson have returned to their first love—computing architectures—as they savor their joint selection as the 2017 ACM Turing Award winners. Their Turing address challenged the idea that processor performance has little scope for dramatic improvement of the kind seen in previous decades. Not so: “innovations like domain-specific hardware, enhanced security, open instruction sets, and agile chip development” will multiply current system throughput “tens, hundreds, thousands of times—up to 62,000 times.” Their audience was listening as intently as ever.[10]

Hennessy has received numerous regional, national, and international awards, plus eleven honorary doctorates. In computer architecture he has been elected a Fellow of the IEEE (1991), the American Academy of Arts and Sciences (1995), the ACM (1997), and the UK Royal Academy of Engineering (2017). He received the Seymour Cray Computer Engineering Award in 2001, and was honored with IEEE’s highest award, the Medal of Honor, in 2012, "for pioneering the RISC processor architecture and for leadership in computer engineering and higher education."

Hennessy and Patterson have won a number of joint awards, including the IEEE John von Neumann Medal (2000), the ACM/IEEE Eckert-Mauchly Award (2001), Fellowship of the Computer History Museum (2007), and the ACM Turing Award (2017).

Author: Charles H. House

[1] Hennessy, John L. and David A. Patterson, “A new golden age for computer architecture: domain-specific hardware/software co-design, enhanced security, open instruction sets, and agile chip development,” 2017 ACM A.M.Turing Award lecture, 45th ISCA (International Symposium of Computer Architecture), Los Angeles, June 4, 2018 https://www.acm.org/hennessy-patterson-turing-lecture

[2] Hennessy, John L.; Forest Baskett; et al, “MIPS, A Microprocessor Architecture,” ACM SIGMICRO Newsletter, 13:4; 1983

[3] Patterson, David A., “A New Golden Age for Computer Architecture: History, Challenges, and Opportunities,” UC Berkeley ACM Turing Laureate Colloquium lecture, October 10, 2018; https://eecs.berkeley.edu/turing-colloquium/schedule/patterson

[4] Hennessy, J. L. and Patterson, D. A. Computer Architecture: A Quantitative Approach. 1990. Morgan Kaufmann Publishers, Inc. San Mateo, CA. Second edition 1995, Third edition, 2002. Fourth Edition, 2007, Fifth Edition, 2011, Sixth Edition, 2018. Also Patterson, D.A. and Hennessy, J.L., Computer Organization and Design: The Hardware/Software Interface. 1993. San Mateo, CA: Morgan Kaufmann Publishers. Second Edition, 1998, Third Edition 2005.

[5] Hennessy and Patterson, 2017 ACM A.M.Turing award lecture, op. cit.

[6] Swanson, Doug, “Favorite Son,” Stanford Magazine, May-June 2000; https://stanfordmag.org/contents/favorite-son

[7] Antonucci, Mike, “Where he took us,” Stanford Magazine, May-June 2016; https://stanfordmag.org/contents/where-he-took-us

[8] Kiley, Kevin, “Stanford raises $6.2B in five-year campaign,” Inside Higher Ed, February 8, 2012; http://www.insidehighered.com/quicktakes/2012/02/08/stanford-raises-62-billion-five-year-campaign

[9] Antonucci, Mike, “Where he took us,” op. cit.

[10] Hennessy and Patterson, 2017 ACM A.M.Turing award lecture, op. cit. Also Hennessy, John L. and David A. Patterson, “A New Golden Age for Computer Architecture,” Communications of the ACM (62:2) February 2019, pp. 48-60



David Patterson

David Patterson DL Author Profile link

United States – 2017

For pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry.

Born in Evergreen, Illinois in 1947, David A. Patterson graduated from South High School in Torrance, California, and then enrolled at the University of California, Los Angeles (UCLA). The first person in his family to graduate from college, Patterson received his Bachelor’s (1969) and Master’s (1970) degrees in computer science. Patterson, a wrestler and math major, tried a programming course when his preferred course was cancelled (“even with punch cards, Fortran, line printers, one-day turn-around—I was hooked”).[1]

Patterson married his high school sweetheart Linda (raised near Berkeley, in Albany) and, with two young boys, worked part-time (20-40 hours per week) on airborne computers at Hughes Aircraft for three years while earning a doctoral degree (1976) in computer science at UCLA. The job hooked him on practical engineering results. His thesis advisor was Gerald Estrin (also advisor of Vinton Cerf, 2004 Turing Award).

Patterson was hired into the University of California at Berkeley’s computer science/electrical engineering department upon graduation. His PhD thesis was on writable control store methods for operating systems, so he began his Berkeley career with Carlo Séquin, working on the X-TREE project led by Alvin Despain.[2] Years later, he called this modular multiprocessor system “way too ambitious, no resources, great fun.” [3]

Patterson took a three-month sabbatical at Digital Equipment Corporation (1979), where Joel Emer and Douglas Clark were starting measurements on a VAX minicomputer.   It had a very complex instruction set and hence a very large and complex microprogram.   Patterson worked on reducing micro-coding errors, concluding that simplifying instruction sets would “easily yield reduced errors.” [4]

Back at Berkeley, Patterson and Sequin teamed on a four-course series where they tasked graduate students to investigate these ideas. Patterson coined the acronym RISC (Reduced Instruction Set Computer) to describe a resultant chip, known as RISC-1, with 44,420 transistors.  A good companion computer for Berkeley’s work on UNIX operating systems and C programming techniques, it could handle large amounts of memory, and it used pipelining techniques to handle several instructions simultaneously. [5]  

Instantly popular, the courses led to a Distinguished Teaching Award (1982).  Patterson’s acceptance speech acknowledged why he selected Berkeley: “When I graduated from UCLA, I went around interviewing at a lot of places,….  They really said, ‘….  Teaching is something we don’t care about—the coin of the realm is publication…’.  I was disturbed (because) that meant that I would be spending many hours of my life in front of a bunch of students, and if  I didn’t do a good job, I’d disappoint a lot of students.  If I did do a good job, I’d disappoint the people I worked for.  But when I came to Berkeley, it was great.  The electrical engineering/ computer science department emphasized that they really did care about teaching, ...” [6]

From 1982 to 1983, Séquin led the RISC-II chip project; Patterson managed the collaboration between UC Berkeley and the ARPA VLSI program. This 40,760-transistor chip, three times faster and half the size of RISC-1, became the highly influential foundation of Sun Microsystems’ SPARC micro-architecture.

Patterson first met John Hennessy at a meeting of DARPA-funded VLSI research projects in 1980 or 1981, where each was presenting his ideas. RISC-2 emerged simultaneously with Hennessy’s MIPS (Microprocessor without Interlocked Pipeline Stages) prototype at Stanford in 1983. Arguments over RISC vs. MIPS designs were soon dwarfed by their common thesis against CISC (Complex Instruction Set Computers), then used by the entire industrial computer design community.

Years later, Patterson recalled:  “There is this remarkable point in time when it was clear that a handful of grad students at Berkeley or Stanford could build a microprocessor that was arguably better than what industry could build—faster, cheaper, more efficient….  RISC was very controversial, it was heretical….  We had a hard time convincing people of that.” [7]  

Patterson resolutely resisted the lure of leaving the university to pursue RISC technology in a company. John Markoff, writing in the New York Times, quoted Patterson on the chance to start one: "I made the choice between being happy and being wealthy." [8]

Patterson and Hennessy in 1990 codified their shared insights in a very influential book, Computer Architecture: A Quantitative Approach.  This book, now in its 6th edition, provided a simple, robust, and quantitative framework for evaluating integrated systems. [9]

Sun adopted the Berkeley architecture, while Silicon Graphics bought Hennessy’s MIPS. Joel Birnbaum, John Cocke’s supervisor at IBM, brought RISC ideas to Hewlett-Packard. A number of key micro-coded RISC ideas were incorporated into Intel’s personal computer chips, and then mobile products (e.g. the iPhone) emphasized efficiency, power usage, and die size. In their joint Turing Award speech at ISCA (2018), Patterson and Hennessy noted that an astounding 99% of the more than 20 billion microprocessors now produced annually are RISC processors, found in nearly all smartphones, tablets, and the billions of embedded devices that comprise the Internet of Things (IoT).[10]

Between 1989 and 1993, Patterson led the Redundant Arrays of Inexpensive Disks (RAID) project with Berkeley colleague Randy Katz, vastly improving the speed and reliability of affordable disk systems. Most web servers now use some form of RAID; many compare this work in importance to Patterson’s RISC work. Later, Patterson contributed to implementing complex systems experiments by networking smaller computers together, foretelling the “multi-tier architectures” now used by many Internet companies.

Patterson today is a Distinguished Engineer at Google and serves as Vice Chair of the Board of the RISC-V Foundation.   An eternal optimist, Patterson notes that tuned hardware/software designs can offer dramatic performance improvements for deep learning applications, which he hopes will usher in a ‘new golden age of computing.’ [11]

When lecturing, Patterson frequently mentions his family, and his life-long enthusiasm for several activities, including soccer, wrestling, cycling and weight lifting.  He reminds listeners that teams are better than individual activity, noting that you cannot be a winner on a losing team, while all members of a winning team are winners by definition. He worked with his high school wrestling partner, Rick Byrne, to win the American Power Lifting California championship, setting a new national record for age and weight bench press, dead lift, squat, and all three combined lifts in 2013 at age 66.[12]  Patterson rode in the annual two-day Waves to Wine bike ride through the Bay Area from 2003-2012 and was the top multiple sclerosis research fundraiser for the group for seven straight years.[13]

Patterson was on the ACM Executive Council for six years, serving as ACM President from 2004 to 2006. He took a sabbatical year to do that, explaining that for ‘a big job’ you really need to step up to it. He also chaired the Computing Research Association, and served for two years on PITAC (the President’s Information Technology Advisory Committee). His motto throughout has been, “It’s not how many projects you start, it’s how many you finish…. So, pick one big thing a year, and finish it.” [14]

For many professional occasions in recent years, including the 2018 ACM Annual Awards Dinner, Patterson has proudly worn a Scottish kilt to honor his forebears. In his acceptance speech that evening, as well as in multiple other speeches in recent years, he cited his 50th wedding anniversary with his childhood sweetheart, Linda, who co-founded the East Bay Improv group in Berkeley many years ago. [15]

Patterson, made an ACM Fellow in 1994, is also a Fellow of the AAAS and the IEEE. He has been elected to the National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences. Hennessy and Patterson have won a number of joint awards, including the IEEE John von Neumann Medal (2000), the ACM/IEEE Eckert-Mauchly Award (2001), Fellowship of the Computer History Museum (2007), and the ACM Turing Award (2017).

Author: Charles H. House

[1] Patterson, David, “Closing Remarks,”  40 Years of Patterson Symposium, UC Berkeley EE/CS, May 7, 2016; https://www.youtube.com/watch?v=8X0tsp-FVGI

[2] Carlo H. Séquin, Alvin M. Despain, David A. Patterson:  Communication In X-TREE, A Modular Multiprocessor System. ACM Annual Conference (1) 1978: 194-203

[3] Patterson, David A., “Closing Remarks,” op. cit.   Also see Patterson, David A., “My Last Lecture: How to be a Bad Professor,”  Berkeley EE/CS, May 6, 2016; https://www.youtube.com/watch?v=TK6EPvrmcBk

[4] Patterson, David A., interview with Jim Demmel , EE/CS chair at Berkeley, UC Berkeley  ACM Turing Laureate Colloquium October 10, 2018; https://eecs.berkeley.edu/turing-colloquium/schedule/patterson

[5] Hennessy, John and David Patterson, 2017 ACM A.M. Turing Award lecture, 45th ISCA (International Symposium on Computer Architecture), Los Angeles, June 4, 2018; https://www.acm.org/hennessy-patterson-turing-lecture

[6] Patterson 1982 UC Berkeley Distinguished Teaching Award lecture, published on YouTube later (March 16, 2016);  https://www.youtube.com/watch?v=asKcJyFbRm0   

[7] Patterson, David A.,  “A New Golden Age for Computer Architecture: History, Challenges, and Opportunities,” UC Berkeley  ACM Turing Laureate Colloquium lecture, October 10, 2018; https://eecs.berkeley.edu/turing-colloquium/schedule/patterson

[8] Markoff, John, “Chip Technology’s Friendly Rivals,” New York Times, June 4, 1991; https://www.nytimes.com/1991/06/04/business/chip-technology-s-friendly-rivals.html

[9] J. L. Hennessy and D. A. Patterson, Computer Architecture: A Quantitative Approach, 6th ed., Computer Architecture and Design, Morgan Kaufmann Publishers, 2017

[10] Hennessy and Patterson, 2017 ACM A.M. Turing Award lecture, op. cit.

[11] Patterson, David, “A New Golden Age for Computer Architecture,” Artificial Intelligence Conference, September 12, 2018   https://www.youtube.com/watch?v=c03Z0Ms8pKg

[12] Zarkovich, Baban, “Professor David Patterson sets the APA RAW California State Record,” AMPLab news posting, April 20, 2013; https://amplab.cs.berkeley.edu/news/professor-david-patterson-sets-the-apa-raw-california-state-record/

[14] Patterson, David A., “How to have a bad career in research/academia,” Berkeley, November 2001; https://people.eecs.berkeley.edu/~pattrsn/talks/BadCareer.pdf

[15] Patterson, “A New Golden Age” presentation; op.cit.



Sir Tim Berners-Lee

Sir Tim Berners-Lee DL Author Profile link

United Kingdom – 2016

For inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the Web to scale.

Tim Berners-Lee grew up in London. Both of his parents (Mary Lee Woods and Conway Berners-Lee) were mathematicians, who had worked on the Ferranti Mark 1, a pioneering effort to commercialize the early Manchester computer. He inherited their interests, playing with electronics as a boy, but choosing physics for his university studies. After earning a degree from Queen’s College, Oxford in 1976 he worked on programming problems at several companies, before joining the European physics lab CERN in 1984. His initial job was in the data acquisition and control group, working to capture and process experimental data.

Inventing the Web

CERN’s business was particle smashing, not computer science, but its computing needs were formidable and it employed a large technical staff. Its massive internal network was connected to the Internet. In March 1989 Berners-Lee began to circulate a document headed “Information Management: A Proposal,” which proposed an Internet-based hypertext publishing system. This, he argued, would help CERN manage the huge collections of documents, code, and reports produced by its thousands of workers, many of them temporary visitors.

Berners-Lee later said that he had been dreaming of a networked hypertext system since a short spell consulting at CERN in 1980. Ted Nelson, who coined the phrase “hypertext” back in the 1960s, had imagined an online platform to replace conventional publishers. Authors could create links between documents, and readers would follow them from one document to another. By the late-1980s hypertext was flourishing as a research area, but in practice was used only in closed systems, such as the Microsoft Windows help system and the Macintosh Hypercard electronic document platform.

Mike Sendall, Berners-Lee’s manager, wrote “vague but exciting” on his copy of the proposal. In May 1990, he authorized Berners-Lee to spend some time on his idea, justifying this as a test of the widely hyped NeXT workstation. This was a high-end personal computer with a novel Unix-based operating system that optimized the rapid implementation of graphical applications. Berners-Lee spent the first few months working out specifications and attempting to interest existing hypertext software companies in his ideas. By October 1990, he had begun to code prototype Web browser and server software, finishing in December. On 6 August 1991, after tests and further development inside CERN, he used the Internet to announce the new “World Wide Web” and to distribute the new software.

Elements of the Web

The World Wide Web was ambitious in some ways, as its name reflects, but cautious in others. Berners-Lee’s initial support from CERN did not consist of much more than a temporary release from his other duties. So he leveraged existing technologies and standards everywhere in the design of the WWW. He remembers CERN as “chronically short of manpower for the huge challenges it had taken on.” There was no team of staff coders standing by to implement any grand plans he might come up with.

The Web, like most of the Internet during this era, was intimately tied in with the Unix operating system (for which Dennis M. Ritchie and Ken Thompson won the 1983 Turing Award). For example, the first Web server (and most since) have run as background processes on Unix-derived operating systems. URLs use Unix conventions to specify file paths within a website. To develop his prototype software, Berners-Lee used the NeXT workstation. More fundamentally, Berners-Lee’s whole approach reflected the distinctive Unix philosophy of building new system capabilities by recombining existing tools.

The Web also followed the Internet philosophy of achieving compatibility through communications protocols rather than standard code, hardware, or operating systems. His specifications for the new system led to three new Internet standards.

Web pages displayed text. HTML (Hyper Text Markup Language) specified the way text for a Web page should be tagged, for example as a hyperlink, ordinary paragraph, or level 2 heading. It was an application of SGML (Standard Generalized Markup Language), an existing standard for defining markup languages.
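Because HTML tags are ordinary text, even a tiny parser can recover a page’s structure and its hyperlinks. The sketch below uses Python’s standard-library `html.parser` (a modern tool, not part of the original Web software); the page fragment is invented for illustration.

```python
from html.parser import HTMLParser

# A hand-tagged fragment of the kind the text describes: a level 2
# heading, a paragraph, and a hyperlink. Invented for illustration.
page = (
    '<h2>Hypertext</h2>'
    '<p>See the <a href="/History.html">history</a> of the project.</p>'
)

class LinkCollector(HTMLParser):
    """Collect the targets of <a href=...> hyperlinks in a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag.
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

collector = LinkCollector()
collector.feed(page)
# collector.links is now ["/History.html"]
```

Early browsers did essentially this at larger scale: scan the tags, render the structure, and remember where each link points.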

HTTP (Hyper Text Transfer Protocol) specified the interactions through which Web browsers could request and receive HTML pages from Web servers. HTTP was, in computer science terms, stateless – users did not log into websites and each request for a Web page or other file was treated separately. This made it a file transfer protocol, which was easy to design and implement because existing Internet standards and software, most importantly TCP/IP (for which Vinton Cerf and Robert E. Kahn won the 2004 Turing Award), provided the infrastructure needed to pipe data across the network from one program to another. Berners-Lee later called this use of Internet protocols “politically incorrect,” as European officials at the time were supporting a transition to the rival ISO network protocols. A few years later it was the success of the Web that put the final nail in their coffin.
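The statelessness of that exchange is easiest to see in the wire format itself: each request carries everything the server needs, and each response stands alone. The Python fragment below is an illustrative reconstruction, not Berners-Lee’s code; the host, path, and canned response are examples chosen for the sketch.

```python
def build_get_request(host: str, path: str) -> str:
    """Build a minimal HTTP/1.0-style GET request.

    The request is self-contained (stateless): the server needs no
    memory of earlier requests to answer it.
    """
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"

def parse_response(raw: str) -> tuple:
    """Split a raw HTTP response into (status code, body)."""
    head, _, body = raw.partition("\r\n\r\n")   # blank line ends the headers
    status_line = head.split("\r\n")[0]          # e.g. "HTTP/1.0 200 OK"
    code = int(status_line.split()[1])
    return code, body

# Example request, in the style of an early CERN address:
request = build_get_request("info.cern.ch", "/hypertext/WWW/TheProject.html")

# A canned response, as a server might send it (invented for illustration):
response = "HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<p>Hello</p>"
code, body = parse_response(response)
```

TCP/IP does the work of piping these byte streams between browser and server; HTTP only defines what the bytes mean.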

Consider a Web address like http://amturing.acm.org/award_winners/berners-lee_8087960.cfm. This is a URL or Uniform Resource Locator (Berners-Lee originally called this a Universal Resource Identifier). The “amturing.acm.org” part identified the computer where the resource was found. This was nothing new – Internet sites had been using this Domain Name System since the mid-1980s. The novelty was the “http://” which told Web browsers, and users, to expect a Web server. Information after the first single “/” identified which page on the host computer was being requested. Berners-Lee also specified URL formats for existing Internet resources, including file servers, gopher servers (an earlier kind of Internet hypertext system), and telnet hosts for terminal connections. In 1994, Berners-Lee wrote that “The fact that it is easy to address an object anywhere in the Internet is essential for the system to scale, and for the information space to be independent of the network and server topology.”
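The anatomy of that address can be checked mechanically. Python’s standard `urllib.parse` module (a modern library, not part of the original Web software) splits the example URL from the text into the same parts the paragraph describes:

```python
from urllib.parse import urlsplit

# The example URL from the text above.
url = "http://amturing.acm.org/award_winners/berners-lee_8087960.cfm"
parts = urlsplit(url)

# scheme tells the browser (and user) what kind of server to expect;
# netloc reuses the existing Domain Name System to find the machine;
# path identifies the resource on that machine.
assert parts.scheme == "http"
assert parts.netloc == "amturing.acm.org"
assert parts.path == "/award_winners/berners-lee_8087960.cfm"
```

The same split works for the other schemes Berners-Lee specified (ftp, gopher, telnet), which is what let Web browsers address existing Internet resources from day one.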

The URL was the simplest of the three inventions, but it was crucial to the early spread of the Web because it solved the “chicken and egg” problem facing any new communications system. Why set up a Web page when almost nobody has a Web browser? Why run a Web browser when almost nobody has set up a Web server to visit? The URL system made Web browsers a convenient way to access existing resources, cataloged on Web pages. In 1992, the Whole Internet User’s Guide and Catalog stated that “the World Wide Web hasn’t really been exploited yet… Hypertext is used primarily as a way of organizing resources that already exist.”

The Web Takes Off

CERN found some resources to support the further development of the Web – about 20 person-years of work in total, mostly from interns. More importantly, it made clear that others were free to use the new standards and prototype code to develop new and better software. Robert Cailliau, of the Office Computing Systems group, played an important role as a champion of the project within CERN. In 1991 CERN produced a simple text-based browser that could easily be accessed over the Internet, as well as a Macintosh browser – both essential to the initial spread of the Web, as NeXT workstations remained very rare.

Over the next few years others implemented faster and more robust browsers with new features such as graphics in pages, browser history, and forward and back buttons. Mosaic, released in 1993 by the National Center for Supercomputing Applications at the University of Illinois, brought the Web to millions of users. In April 1994 CERN, which was still trying to maintain a comprehensive list of Web servers, cataloged 829 in its “Geographical Registry.”

Berners-Lee later attributed his success largely to “being in the right place at the right time.” He succeeded where larger and better funded teams had failed, setting the foundation for a global hypertext system that quickly became a universal infrastructure for online communication and the foundation for many new industries. Yet the ACM’s 1991 Hypertext conference had rejected Berners-Lee’s paper describing the World Wide Web. From a research viewpoint, the Web seemed to sidestep many thorny research problems related to capabilities that Ted Nelson thought essential for a public hypertext publication system. If a Web page was moved, then links pointing to it stopped working. If the target page was changed, then it might no longer hold the content the link promised. Links went only one way – one couldn’t see which other pages linked to a document. There was no central, searchable index of websites and their content. Neither did the Web itself provide any way for publishers to get paid when people read their work.

Berners-Lee had only a few months at his disposal, which may have been a hidden blessing: Nelson worked for decades without coming close to finishing his system. Rather than attack intractable problems, Berners-Lee used proven technologies as the building blocks of a system intended to be powerful and immediately useful rather than perfect.

The Web’s reliance on existing technologies was appealing to early users and eased deployment – setting up a Web server on a computer already connected to the Internet just involved downloading and installing a small program. This technological minimalism made the Web easy to scale, with no indexing system or central database to overload. After the Web took off, whole new industries emerged to fill in some of the missing capabilities needed for large scale and commercial use, eventually leading, for example, to the rise of Google as the dominant provider of Internet search.

One crucial feature that Berners-Lee built into his prototype Web software was left out of its successors. His browser allowed users to edit pages, and save the changes back on the server. His 1994 article in Communications of the ACM noted that “The Web does not yet meet its design goal of being a pool of knowledge that is as easy to update as to read.” Editing capabilities were eventually added in other ways – first through separate HTML editing software, and later with the widespread adoption of content management systems where the software used to edit Web pages is itself accessed through a Web browser.

A screenshot of Berners-Lee’s Web browser software running on his NeXT computer. Note the Edit menu to allow changes, and the Style menu which put decisions over fonts and other display details in the hands of the reader rather than Webpage creators. Since 2014 this computer has been exhibited at the Science Museum in London.

Berners-Lee feels that his original design decisions have held up well, with one exception: the “//” in URLs which make addresses longer and harder to type without adding any additional information. “I have to say that now I regret that the syntax is so clumsy” he wrote in 2009.[1]

Standardizing the Web

Mosaic’s successor, the commercial browser Netscape, was used by hundreds of millions and kickstarted the “.com” frenzy for new Internet stocks. By 2000 there were an estimated 17 million websites online, used for commercial transactions such as online shopping and banking as well as for document display. In the process, HTML quickly acquired many clunky and incompatible extensions, so that Web pages could be coded for things like font styles and page layout rather than HTML’s original focus on document structure.

In 1994 Berners-Lee left CERN for a faculty job at MIT. This let him establish the World Wide Web Consortium (W3C), to standardize HTML and other, newer, elements of the Web. Berners-Lee had been frustrated in 1992 in an initial attempt to work with the Internet Engineering Task Force, the group that developed and standardized other Internet protocols. The consortium followed a different model, using corporate memberships to support the work of paid staff members. With its guidance the Web has remained open during its growth, so that users can choose their preferred Web browser while still accessing the full range of functionality found on modern websites. It also played a crucial role in adoption of the XML data description language. As of 2017, his primary appointment remains at MIT where he holds the Founders Chair in the MIT Computer Science and Artificial Intelligence Laboratory and continues to direct W3C.

The Semantic Web

Since the late 1990s Berners-Lee’s primary focus has been on trying to get Web publishers and technology companies to add a set of capabilities he called the “Semantic Web.” Berners-Lee defined his idea as follows: “The Semantic Web is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation."

Document metadata was largely left off the original Web, in contrast to traditional online publishing systems, which made it hard for search engines to determine basic information such as the date on which an article was written or the person who wrote it. The Semantic Web initiative covered a hierarchy of technologies and standards that would let the creators of Web pages tag them to make their conceptual structure explicit, not just for information retrieval but also for machine reasoning.

Legacy and Recognition

The success of the Web drove a massive expansion in Internet access and infrastructure – indeed most Internet users of the late 1990s experienced the Internet primarily through the Web and did not clearly separate the two. Berners-Lee has been widely honored for this work, winning a remarkable array of international prizes. Sir Tim, as he has been known since the Queen knighted him in 2004, has been recognized as one of the public faces of British science and technology. In 2012 he appeared with a NeXT computer during the elaborate opening ceremony of the London Olympic Games.

He has been increasingly willing to use this public influence to impact the ways in which governments and companies are shaping the Web. In 2009 he set up the World Wide Web Foundation, which lobbies for “digital equality” and produces rankings of Web freedom around the world. More recently, Berners-Lee has championed protection for personal data, criticized the increasing dominance of proprietary social media platforms, and bemoaned the prevalence of fake news online.

Author: Thomas Haigh

[1] https://www.w3.org/People/Berners-Lee/FAQ.html#etc