
  • In Praise of

    Computer Architecture: A Quantitative Approach

    Fourth Edition

    The multiprocessor is here and it can no longer be avoided. As we bid farewell to single-core processors and move into the chip multiprocessing age, it is great timing for a new edition of Hennessy and Patterson's classic. Few books have had as significant an impact on the way their discipline is taught, and the current edition will ensure its place at the top for some time to come.

    Luiz André Barroso, Google Inc.

    What do the following have in common: Beatles tunes, HP calculators, chocolate chip cookies, and Computer Architecture? They are all classics that have stood the test of time.

    Robert P. Colwell, Intel lead architect

    Not only does the book provide an authoritative reference on the concepts thatall computer architects should be familiar with, but it is also a good starting pointfor investigations into emerging areas in the field.

    Krisztián Flautner, ARM Ltd.

    The best keeps getting better! This new edition is updated and very relevant to the key issues in computer architecture today. Plus, its new exercise paradigm is much more useful for both students and instructors.

    Norman P. Jouppi, HP Labs

    Computer Architecture builds on fundamentals that yielded the RISC revolution, including the enablers for CISC translation. Now, in this new edition, it clearly explains and gives insight into the latest microarchitecture techniques needed for the new generation of multithreaded multicore processors.

    Marc Tremblay, Fellow & VP, Chief Architect, Sun Microsystems

    This is a great textbook on all key accounts: pedagogically superb in exposing the ideas and techniques that define the art of computer organization and design, stimulating to read, and comprehensive in its coverage of topics. The first edition set a standard of excellence and relevance; this latest edition does it again.

    Milos Ercegovac, UCLA

    They've done it again. Hennessy and Patterson emphatically demonstrate why they are the doyens of this deep and shifting field. Fallacy: Computer architecture isn't an essential subject in the information age. Pitfall: You don't need the 4th edition of Computer Architecture.

    Michael D. Smith, Harvard University

  • Hennessy and Patterson have done it again! The 4th edition is a classic encore that has been adapted beautifully to meet the rapidly changing constraints of late-CMOS-era technology. The detailed case studies of real processor products are especially educational, and the text reads so smoothly that it is difficult to put down. This book is a must-read for students and professionals alike!

    Pradip Bose, IBM

    This latest edition of Computer Architecture is sure to provide students with the architectural framework and foundation they need to become influential architects of the future.

    Ravishankar Iyer, Intel Corp.

    As technology has advanced, and design opportunities and constraints have changed, so has this book. The 4th edition continues the tradition of presenting the latest in innovations with commercial impact, alongside the foundational concepts: advanced processor and memory system design techniques, multithreading and chip multiprocessors, storage systems, virtual machines, and other concepts. This book is an excellent resource for anybody interested in learning the architectural concepts underlying real commercial products.

    Gurindar Sohi, University of Wisconsin–Madison

    I am very happy to have my students study computer architecture using this fantastic book and am a little jealous for not having written it myself.

    Mateo Valero, UPC, Barcelona

    Hennessy and Patterson continue to evolve their teaching methods with the changing landscape of computer system design. Students gain unique insight into the factors influencing the shape of computer architecture design and the potential research directions in the computer systems field.

    Dan Connors, University of Colorado at Boulder

    With this revision, Computer Architecture will remain a must-read for all computer architecture students in the coming decade.

    Wen-mei Hwu, University of Illinois at Urbana–Champaign

    The 4th edition of Computer Architecture continues in the tradition of providing a relevant and cutting-edge approach that appeals to students, researchers, and designers of computer systems. The lessons that this new edition teaches will continue to be as relevant as ever for its readers.

    David Brooks, Harvard University

    With the 4th edition, Hennessy and Patterson have shaped Computer Architecture back to the lean focus that made the 1st edition an instant classic.

    Mark D. Hill, University of Wisconsin–Madison

  • Computer Architecture

    A Quantitative Approach

    Fourth Edition

  • John L. Hennessy is the president of Stanford University, where he has been a member of the faculty since 1977 in the departments of electrical engineering and computer science. Hennessy is a Fellow of the IEEE and ACM, a member of the National Academy of Engineering and the National Academy of Sciences, and a Fellow of the American Academy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award, and the 2000 John von Neumann Award, which he shared with David Patterson. He has also received seven honorary doctorates.

    In 1981, he started the MIPS project at Stanford with a handful of graduate students. After completing the project in 1984, he took a one-year leave from the university to cofound MIPS Computer Systems, which developed one of the first commercial RISC microprocessors. After being acquired by Silicon Graphics in 1991, MIPS Technologies became an independent company in 1998, focusing on microprocessors for the embedded marketplace. As of 2006, over 500 million MIPS microprocessors have been shipped in devices ranging from video games and palmtop computers to laser printers and network switches.

    David A. Patterson has been teaching computer architecture at the University of California, Berkeley, since joining the faculty in 1977, where he holds the Pardee Chair of Computer Science. His teaching has been honored by the Abacus Award from Upsilon Pi Epsilon, the Distinguished Teaching Award from the University of California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Undergraduate Teaching Award from IEEE. Patterson received the IEEE Technical Achievement Award for contributions to RISC and shared the IEEE Johnson Information Storage Award for contributions to RAID. He then shared the IEEE John von Neumann Medal and the C & C Prize with John Hennessy. Like his co-author, Patterson is a Fellow of the American Academy of Arts and Sciences, ACM, and IEEE, and he was elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the Information Technology Advisory Committee to the U.S. President, as chair of the CS division in the Berkeley EECS department, as chair of the Computing Research Association, and as President of ACM. This record led to a Distinguished Service Award from CRA.

    At Berkeley, Patterson led the design and implementation of RISC I, likely the first VLSI reduced instruction set computer. This research became the foundation of the SPARC architecture, currently used by Sun Microsystems, Fujitsu, and others. He was a leader of the Redundant Arrays of Inexpensive Disks (RAID) project, which led to dependable storage systems from many companies. He was also involved in the Network of Workstations (NOW) project, which led to cluster technology used by Internet companies. These projects earned three dissertation awards from the ACM. His current research projects are the RAD Lab, which is inventing technology for reliable, adaptive, distributed Internet services, and the Research Accelerator for Multiple Processors (RAMP) project, which is developing and distributing low-cost, highly scalable, parallel computers based on FPGAs and open-source hardware and software.

  • Computer Architecture

    A Quantitative Approach

    Fourth Edition

    John L. Hennessy

    Stanford University

    David A. Patterson

    University of California at Berkeley

    With Contributions by

    Andrea C. Arpaci-Dusseau

    University of Wisconsin–Madison

    Remzi H. Arpaci-Dusseau

    University of Wisconsin–Madison

    Krste Asanovic

    Massachusetts Institute of Technology

    Robert P. Colwell

    R&E Colwell & Associates, Inc.

    Thomas M. Conte

    North Carolina State University

    José Duato

    Universitat Politècnica de València

    and

    Simula

    Diana Franklin

    California Polytechnic State University, San Luis Obispo

    David Goldberg

    Xerox Palo Alto Research Center

    Wen-mei W. Hwu

    University of Illinois at Urbana–Champaign

    Norman P. Jouppi

    HP Labs

    Timothy M. Pinkston

    University of Southern California

    John W. Sias

    University of Illinois at Urbana–Champaign

    David A. Wood

    University of Wisconsin–Madison

    Amsterdam Boston Heidelberg London New York Oxford Paris San Diego

    San Francisco Singapore Sydney Tokyo

  • Publisher

    Denise E. M. Penrose

    Project Manager

    Dusty Friedman, The Book Company

    In-house Senior Project Manager

    Brandy Lilly

    Developmental Editor

    Nate McFadden

    Editorial Assistant

    Kimberlee Honjo

    Cover Design

    Elisabeth Beller and Ross Carron Design

    Cover Image

    Richard I'Anson's Collection: Lonely Planet Images

    Composition

    Nancy Logan

    Text Design

    Rebecca Evans & Associates

    Technical Illustration

    David Ruppe, Impact Publications

    Copyeditor

    Ken Della Penta

    Proofreader

    Jamie Thaman

    Indexer

    Nancy Ball

    Printer

    Maple-Vail Book Manufacturing Group

    Morgan Kaufmann Publishers is an Imprint of Elsevier
    500 Sansome Street, Suite 400, San Francisco, CA 94111

    This book is printed on acid-free paper.

    © 1990, 1996, 2003, 2007 by Elsevier, Inc. All rights reserved.

    Published 1990. Fourth edition 2007

    Designations used by companies to distinguish their products are often claimed as trademarks or registered trademarks. In all instances in which Morgan Kaufmann Publishers is aware of a claim, the product names appear in initial capital or all capital letters. Readers, however, should contact the appropriate companies for more complete information regarding trademarks and registration.

    Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier Science homepage (http://elsevier.com), by selecting "Customer Support" and then "Obtaining Permissions."

    Library of Congress Cataloging-in-Publication Data

    Hennessy, John L.
    Computer architecture : a quantitative approach / John L. Hennessy, David A. Patterson ; with contributions by Andrea C. Arpaci-Dusseau . . . [et al.]. – 4th ed.
    p. cm.
    Includes bibliographical references and index.
    ISBN 13: 978-0-12-370490-0 (pbk. : alk. paper)
    ISBN 10: 0-12-370490-1 (pbk. : alk. paper)
    1. Computer architecture. I. Patterson, David A. II. Arpaci-Dusseau, Andrea C. III. Title.

    QA76.9.A73P377 2006
    004.2'2–dc22

    2006024358

    For all information on all Morgan Kaufmann publications, visit our website at www.mkp.com or www.books.elsevier.com.

    Printed in the United States of America
    06 07 08 09 10    5 4 3 2 1

  • To Andrea, Linda, and our four sons

  • Foreword

    by Fred Weber, President and CEO of MetaRAM, Inc.

    I am honored and privileged to write the foreword for the fourth edition of this most important book in computer architecture. In the first edition, Gordon Bell, my first industry mentor, predicted the book's central position as the definitive text for computer architecture and design. He was right. I clearly remember the excitement generated by the introduction of this work. Rereading it now, with significant extensions added in the three new editions, has been a pleasure all over again. No other work in computer architecture (frankly, no other work I have read in any field) so quickly and effortlessly takes the reader from ignorance to a breadth and depth of knowledge.

    This book is dense in facts and figures, in rules of thumb and theories, in examples and descriptions. It is stuffed with acronyms, technologies, trends, formulas, illustrations, and tables. And, this is thoroughly appropriate for a work on architecture. The architect's role is not that of a scientist or inventor who will deeply study a particular phenomenon and create new basic materials or techniques. Nor is the architect the craftsman who masters the handling of tools to craft the finest details. The architect's role is to combine a thorough understanding of the state of the art of what is possible, a thorough understanding of the historical and current styles of what is desirable, a sense of design to conceive a harmonious total system, and the confidence and energy to marshal this knowledge and available resources to go out and get something built. To accomplish this, the architect needs a tremendous density of information with an in-depth understanding of the fundamentals and a quantitative approach to ground his thinking. That is exactly what this book delivers.

    As computer architecture has evolved (from a world of mainframes, minicomputers, and microprocessors, to a world dominated by microprocessors, and now into a world where microprocessors themselves are encompassing all the complexity of mainframe computers), Hennessy and Patterson have updated their book appropriately. The first edition showcased the IBM 360, DEC VAX, and Intel 80x86, each the pinnacle of its class of computer, and helped introduce the world to RISC architecture. The later editions focused on the details of the 80x86 and RISC processors, which had come to dominate the landscape. This latest edition expands the coverage of threading and multiprocessing, virtualization and memory hierarchy, and storage systems, giving the reader context appropriate to today's most important directions and setting the stage for the next decade of design. It highlights the AMD Opteron and SUN Niagara as the best examples of the x86 and SPARC (RISC) architectures brought into the new world of multiprocessing and system-on-a-chip architecture, thus grounding the art and science in real-world commercial examples.

    The first chapter, in less than 60 pages, introduces the reader to the taxonomies of computer design and the basic concerns of computer architecture, gives an overview of the technology trends that drive the industry, and lays out a quantitative approach to using all this information in the art of computer design. The next two chapters focus on traditional CPU design and give a strong grounding in the possibilities and limits in this core area. The final three chapters build out an understanding of system issues with multiprocessing, memory hierarchy, and storage. Knowledge of these areas has always been of critical importance to the computer architect. In this era of system-on-a-chip designs, it is essential for every CPU architect. Finally the appendices provide a great depth of understanding by working through specific examples in great detail.

    In design it is important to look at both the forest and the trees and to move easily between these views. As you work through this book you will find plenty of both. The result of great architecture, whether in computer design, building design or textbook design, is to take the customer's requirements and desires and return a design that causes that customer to say, "Wow, I didn't know that was possible." This book succeeds on that measure and will, I hope, give you as much pleasure and value as it has me.

  • Contents

    Foreword ix
    Preface xv
    Acknowledgments xxiii

    Chapter 1 Fundamentals of Computer Design
    1.1 Introduction 2
    1.2 Classes of Computers 4
    1.3 Defining Computer Architecture 8
    1.4 Trends in Technology 14
    1.5 Trends in Power in Integrated Circuits 17
    1.6 Trends in Cost 19
    1.7 Dependability 25
    1.8 Measuring, Reporting, and Summarizing Performance 28
    1.9 Quantitative Principles of Computer Design 37
    1.10 Putting It All Together: Performance and Price-Performance 44
    1.11 Fallacies and Pitfalls 48
    1.12 Concluding Remarks 52
    1.13 Historical Perspectives and References 54
    Case Studies with Exercises by Diana Franklin 55

    Chapter 2 Instruction-Level Parallelism and Its Exploitation
    2.1 Instruction-Level Parallelism: Concepts and Challenges 66
    2.2 Basic Compiler Techniques for Exposing ILP 74
    2.3 Reducing Branch Costs with Prediction 80
    2.4 Overcoming Data Hazards with Dynamic Scheduling 89
    2.5 Dynamic Scheduling: Examples and the Algorithm 97
    2.6 Hardware-Based Speculation 104
    2.7 Exploiting ILP Using Multiple Issue and Static Scheduling 114

    2.8 Exploiting ILP Using Dynamic Scheduling, Multiple Issue, and Speculation 118
    2.9 Advanced Techniques for Instruction Delivery and Speculation 121
    2.10 Putting It All Together: The Intel Pentium 4 131
    2.11 Fallacies and Pitfalls 138
    2.12 Concluding Remarks 140
    2.13 Historical Perspective and References 141
    Case Studies with Exercises by Robert P. Colwell 142

    Chapter 3 Limits on Instruction-Level Parallelism
    3.1 Introduction 154
    3.2 Studies of the Limitations of ILP 154
    3.3 Limitations on ILP for Realizable Processors 165
    3.4 Crosscutting Issues: Hardware versus Software Speculation 170
    3.5 Multithreading: Using ILP Support to Exploit Thread-Level Parallelism 172
    3.6 Putting It All Together: Performance and Efficiency in Advanced Multiple-Issue Processors 179
    3.7 Fallacies and Pitfalls 183
    3.8 Concluding Remarks 184
    3.9 Historical Perspective and References 185
    Case Study with Exercises by Wen-mei W. Hwu and John W. Sias 185

    Chapter 4 Multiprocessors and Thread-Level Parallelism
    4.1 Introduction 196
    4.2 Symmetric Shared-Memory Architectures 205
    4.3 Performance of Symmetric Shared-Memory Multiprocessors 218
    4.4 Distributed Shared Memory and Directory-Based Coherence 230
    4.5 Synchronization: The Basics 237
    4.6 Models of Memory Consistency: An Introduction 243
    4.7 Crosscutting Issues 246
    4.8 Putting It All Together: The Sun T1 Multiprocessor 249
    4.9 Fallacies and Pitfalls 257
    4.10 Concluding Remarks 262
    4.11 Historical Perspective and References 264
    Case Studies with Exercises by David A. Wood 264

    Chapter 5 Memory Hierarchy Design
    5.1 Introduction 288
    5.2 Eleven Advanced Optimizations of Cache Performance 293
    5.3 Memory Technology and Optimizations 310

    5.4 Protection: Virtual Memory and Virtual Machines 315
    5.5 Crosscutting Issues: The Design of Memory Hierarchies 324
    5.6 Putting It All Together: AMD Opteron Memory Hierarchy 326
    5.7 Fallacies and Pitfalls 335
    5.8 Concluding Remarks 341
    5.9 Historical Perspective and References 342
    Case Studies with Exercises by Norman P. Jouppi 342

    Chapter 6 Storage Systems
    6.1 Introduction 358
    6.2 Advanced Topics in Disk Storage 358
    6.3 Definition and Examples of Real Faults and Failures 366
    6.4 I/O Performance, Reliability Measures, and Benchmarks 371
    6.5 A Little Queuing Theory 379
    6.6 Crosscutting Issues 390
    6.7 Designing and Evaluating an I/O System: The Internet Archive Cluster 392
    6.8 Putting It All Together: NetApp FAS6000 Filer 397
    6.9 Fallacies and Pitfalls 399
    6.10 Concluding Remarks 403
    6.11 Historical Perspective and References 404
    Case Studies with Exercises by Andrea C. Arpaci-Dusseau and Remzi H. Arpaci-Dusseau 404

    Appendix A Pipelining: Basic and Intermediate Concepts
    A.1 Introduction A-2
    A.2 The Major Hurdle of Pipelining: Pipeline Hazards A-11
    A.3 How Is Pipelining Implemented? A-26
    A.4 What Makes Pipelining Hard to Implement? A-37
    A.5 Extending the MIPS Pipeline to Handle Multicycle Operations A-47
    A.6 Putting It All Together: The MIPS R4000 Pipeline A-56
    A.7 Crosscutting Issues A-65
    A.8 Fallacies and Pitfalls A-75
    A.9 Concluding Remarks A-76
    A.10 Historical Perspective and References A-77

    Appendix B Instruction Set Principles and Examples
    B.1 Introduction B-2
    B.2 Classifying Instruction Set Architectures B-3
    B.3 Memory Addressing B-7
    B.4 Type and Size of Operands B-13
    B.5 Operations in the Instruction Set B-14

    B.6 Instructions for Control Flow B-16
    B.7 Encoding an Instruction Set B-21
    B.8 Crosscutting Issues: The Role of Compilers B-24
    B.9 Putting It All Together: The MIPS Architecture B-32
    B.10 Fallacies and Pitfalls B-39
    B.11 Concluding Remarks B-45
    B.12 Historical Perspective and References B-47

    Appendix C Review of Memory Hierarchy
    C.1 Introduction C-2
    C.2 Cache Performance C-15
    C.3 Six Basic Cache Optimizations C-22
    C.4 Virtual Memory C-38
    C.5 Protection and Examples of Virtual Memory C-47
    C.6 Fallacies and Pitfalls C-56
    C.7 Concluding Remarks C-57
    C.8 Historical Perspective and References C-58

    Companion CD Appendices
    Appendix D Embedded Systems (updated by Thomas M. Conte)
    Appendix E Interconnection Networks (revised by Timothy M. Pinkston and José Duato)
    Appendix F Vector Processors (revised by Krste Asanovic)
    Appendix G Hardware and Software for VLIW and EPIC
    Appendix H Large-Scale Multiprocessors and Scientific Applications
    Appendix I Computer Arithmetic (by David Goldberg)
    Appendix J Survey of Instruction Set Architectures
    Appendix K Historical Perspectives and References

    Online Appendix (textbooks.elsevier.com/0123704901)
    Appendix L Solutions to Case Study Exercises

    References R-1
    Index I-1

  • Preface

    Why We Wrote This Book

    Through four editions of this book, our goal has been to describe the basic principles underlying what will be tomorrow's technological developments. Our excitement about the opportunities in computer architecture has not abated, and we echo what we said about the field in the first edition: "It is not a dreary science of paper machines that will never work. No! It's a discipline of keen intellectual interest, requiring the balance of marketplace forces to cost-performance-power, leading to glorious failures and some notable successes."

    Our primary objective in writing our first book was to change the way people learn and think about computer architecture. We feel this goal is still valid and important. The field is changing daily and must be studied with real examples and measurements on real computers, rather than simply as a collection of definitions and designs that will never need to be realized. We offer an enthusiastic welcome to anyone who came along with us in the past, as well as to those who are joining us now. Either way, we can promise the same quantitative approach to, and analysis of, real systems.

    As with earlier versions, we have strived to produce a new edition that will continue to be as relevant for professional engineers and architects as it is for those involved in advanced computer architecture and design courses. As much as its predecessors, this edition aims to demystify computer architecture through an emphasis on cost-performance-power trade-offs and good engineering design. We believe that the field has continued to mature and move toward the rigorous quantitative foundation of long-established scientific and engineering disciplines.

    This Edition

    The fourth edition of Computer Architecture: A Quantitative Approach may be the most significant since the first edition. Shortly before we started this revision, Intel announced that it was joining IBM and Sun in relying on multiple processors or cores per chip for high-performance designs. As the first figure in the book documents, after 16 years of doubling performance every 18 months, single-processor performance improvement has dropped to modest annual improvements. This fork in the computer architecture road means that for the first time in history, no one is building a much faster sequential processor. If you want your program to run significantly faster, say, to justify the addition of new features, you're going to have to parallelize your program.

    Hence, after three editions focused primarily on higher performance by exploiting instruction-level parallelism (ILP), an equal focus of this edition is thread-level parallelism (TLP) and data-level parallelism (DLP). While earlier editions had material on TLP and DLP in big multiprocessor servers, now TLP and DLP are relevant for single-chip multicores. This historic shift led us to change the order of the chapters: the chapter on multiple processors was the sixth chapter in the last edition, but is now the fourth chapter of this edition.

    The changing technology has also motivated us to move some of the content from later chapters into the first chapter. Because technologists predict much higher hard and soft error rates as the industry moves to semiconductor processes with feature sizes 65 nm or smaller, we decided to move the basics of dependability from Chapter 7 in the third edition into Chapter 1. As power has become the dominant factor in determining how much you can place on a chip, we also beefed up the coverage of power in Chapter 1. Of course, the content and examples in all chapters were updated, as we discuss below.

    In addition to technological sea changes that have shifted the contents of this edition, we have taken a new approach to the exercises in this edition. It is surprisingly difficult and time-consuming to create interesting, accurate, and unambiguous exercises that evenly test the material throughout a chapter. Alas, the Web has reduced the half-life of exercises to a few months. Rather than working out an assignment, a student can search the Web to find answers not long after a book is published. Hence, a tremendous amount of hard work quickly becomes unusable, and instructors are denied the opportunity to test what students have learned.

    To help mitigate this problem, in this edition we are trying two new ideas. First, we recruited experts from academia and industry on each topic to write the exercises. This means some of the best people in each field are helping us to create interesting ways to explore the key concepts in each chapter and test the reader's understanding of that material. Second, each group of exercises is organized around a set of case studies. Our hope is that the quantitative example in each case study will remain interesting over the years, robust and detailed enough to allow instructors the opportunity to easily create their own new exercises, should they choose to do so. Key, however, is that each year we will continue to release new exercise sets for each of the case studies. These new exercises will have critical changes in some parameters so that answers to old exercises will no longer apply.

    Another significant change is that we followed the lead of the third edition of Computer Organization and Design (COD) by slimming the text to include the material that almost all readers will want to see and moving the appendices that some will see as optional or as reference material onto a companion CD. There were many reasons for this change:

    1. Students complained about the size of the book, which had expanded from 594 pages in the chapters plus 160 pages of appendices in the first edition to 760 chapter pages plus 223 appendix pages in the second edition and then to 883 chapter pages plus 209 pages in the paper appendices and 245 pages in online appendices. At this rate, the fourth edition would have exceeded 1500 pages (both on paper and online)!

    2. Similarly, instructors were concerned about having too much material to cover in a single course.

    3. As was the case for COD, by including a CD with material moved out of the text, readers could have quick access to all the material, regardless of their ability to access Elsevier's Web site. Hence, the current edition's appendices will always be available to the reader even after future editions appear.

    4. This flexibility allowed us to move review material on pipelining, instruction sets, and memory hierarchy from the chapters and into Appendices A, B, and C. The advantage to instructors and readers is that they can go over the review material much more quickly and then spend more time on the advanced topics in Chapters 2, 3, and 5. It also allowed us to move the discussion of some topics that are important but are not core course topics into appendices on the CD. Result: the material is available, but the printed book is shorter. In this edition we have 6 chapters, none of which is longer than 80 pages, while in the last edition we had 8 chapters, with the longest chapter weighing in at 127 pages.

    5. This package of a slimmer core print text plus a CD is far less expensive to manufacture than the previous editions, allowing our publisher to significantly lower the list price of the book. With this pricing scheme, there is no need for a separate international student edition for European readers.

    Yet another major change from the last edition is that we have moved the embedded material introduced in the third edition into its own appendix, Appendix D. We felt that the embedded material didn't always fit with the quantitative evaluation of the rest of the material, plus it extended the length of many chapters that were already running long. We believe there are also pedagogic advantages in having all the embedded information in a single appendix.

    This edition continues the tradition of using real-world examples to demonstrate the ideas, and the "Putting It All Together" sections are brand new; in fact, some were announced after our book was sent to the printer. The "Putting It All Together" sections of this edition include the pipeline organizations and memory hierarchies of the Intel Pentium 4 and AMD Opteron; the Sun T1 (Niagara) 8-processor, 32-thread microprocessor; the latest NetApp Filer; the Internet Archive cluster; and the IBM Blue Gene/L massively parallel processor.


    Topic Selection and Organization

    As before, we have taken a conservative approach to topic selection, for there are many more interesting ideas in the field than can reasonably be covered in a treatment of basic principles. We have steered away from a comprehensive survey of every architecture a reader might encounter. Instead, our presentation focuses on core concepts likely to be found in any new machine. The key criterion remains that of selecting ideas that have been examined and utilized successfully enough to permit their discussion in quantitative terms.

    Our intent has always been to focus on material that is not available in equivalent form from other sources, so we continue to emphasize advanced content wherever possible. Indeed, there are several systems here whose descriptions cannot be found in the literature. (Readers interested strictly in a more basic introduction to computer architecture should read Computer Organization and Design: The Hardware/Software Interface, third edition.)

    An Overview of the Content

    Chapter 1 has been beefed up in this edition. It includes formulas for static power, dynamic power, integrated circuit costs, reliability, and availability. We go into more depth than prior editions on the use of the geometric mean and the geometric standard deviation to capture the variability of the mean. Our hope is that these topics can be used through the rest of the book. In addition to the classic quantitative principles of computer design and performance measurement, the benchmark section has been upgraded to use the new SPEC2006 suite.
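    To make those two statistics concrete (a minimal sketch in Python, which is our choice and not used in the book; the benchmark ratios are made up), the geometric mean is the nth root of the product of the performance ratios, and the geometric standard deviation is the antilog of the standard deviation of their logarithms:

        import math

        def geometric_mean(ratios):
            # nth root of the product, computed in log space for numerical safety
            return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

        def geometric_std(ratios):
            # antilog of the (population) standard deviation of the logs;
            # a multiplicative spread around the geometric mean
            gm = geometric_mean(ratios)
            logs = [math.log(r / gm) for r in ratios]
            return math.exp(math.sqrt(sum(x * x for x in logs) / len(ratios)))

        # hypothetical SPEC ratios of one machine relative to a reference machine
        ratios = [2.0, 8.0]
        print(geometric_mean(ratios))  # 4.0
        print(geometric_std(ratios))   # 2.0: typical spread is 4.0 multiplied or divided by 2.0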

    Our view is that the instruction set architecture is playing less of a role today than in 1990, so we moved this material to Appendix B. It still uses the MIPS64 architecture. For fans of ISAs, Appendix J covers 10 RISC architectures, the 80x86, the DEC VAX, and the IBM 360/370.

    Chapters 2 and 3 cover the exploitation of instruction-level parallelism in high-performance processors, including superscalar execution, branch prediction, speculation, dynamic scheduling, and the relevant compiler technology. As mentioned earlier, Appendix A is a review of pipelining in case you need it. Chapter 3 surveys the limits of ILP. New to this edition is a quantitative evaluation of multithreading. Chapter 3 also includes a head-to-head comparison of the AMD Athlon, Intel Pentium 4, Intel Itanium 2, and IBM Power5, each of which has made separate bets on exploiting ILP and TLP. While the last edition contained a great deal on Itanium, we moved much of this material to Appendix G, indicating our view that this architecture has not lived up to the early claims.

    Given the switch in the field from exploiting only ILP to an equal focus on thread- and data-level parallelism, we moved multiprocessor systems up to Chapter 4, which focuses on shared-memory architectures. The chapter begins with the performance of such an architecture. It then explores symmetric and distributed memory architectures, examining both organizational principles and performance. Topics in synchronization and memory consistency models are next. The example is the Sun T1 (Niagara), a radical design for a commercial product. It reverted to a single-instruction issue, 6-stage pipeline microarchitecture. It put 8 of these on a single chip, and each supports 4 threads. Hence, software sees 32 threads on this single, low-power chip.

    As mentioned earlier, Appendix C contains an introductory review of cache principles, which is available in case you need it. This shift allows Chapter 5 to start with 11 advanced optimizations of caches. The chapter includes a new section on virtual machines, which offers advantages in protection, software management, and hardware management. The example is the AMD Opteron, giving both its cache hierarchy and the virtual memory scheme for its recently expanded 64-bit addresses.

    Chapter 6, "Storage Systems," has an expanded discussion of reliability and availability, a tutorial on RAID with a description of RAID 6 schemes, and rarely found failure statistics of real systems. It continues to provide an introduction to queuing theory and I/O performance benchmarks. Rather than go through a series of steps to build a hypothetical cluster as in the last edition, we evaluate the cost, performance, and reliability of a real cluster: the Internet Archive. The "Putting It All Together" example is the NetApp FAS6000 filer, which is based on the AMD Opteron microprocessor.

    This brings us to Appendices A through L. As mentioned earlier, Appendices A and C are tutorials on basic pipelining and caching concepts. Readers relatively new to pipelining should read Appendix A before Chapters 2 and 3, and those new to caching should read Appendix C before Chapter 5.

    Appendix B covers principles of ISAs, including MIPS64, and Appendix J describes 64-bit versions of Alpha, MIPS, PowerPC, and SPARC and their multimedia extensions. It also includes some classic architectures (80x86, VAX, and IBM 360/370) and popular embedded instruction sets (ARM, Thumb, SuperH, MIPS16, and Mitsubishi M32R). Appendix G is related, in that it covers architectures and compilers for VLIW ISAs.

    Appendix D, updated by Thomas M. Conte, consolidates the embedded material in one place.

    Appendix E, on networks, has been extensively revised by Timothy M. Pinkston and José Duato. Appendix F, updated by Krste Asanovic, includes a description of vector processors. We think these two appendices are some of the best material we know of on each topic.

    Appendix H describes parallel processing applications and coherence protocols for larger-scale, shared-memory multiprocessing. Appendix I, by David Goldberg, describes computer arithmetic.

    Appendix K collects the Historical Perspective and References from each chapter of the third edition into a single appendix. It attempts to give proper credit for the ideas in each chapter and a sense of the history surrounding the inventions. We like to think of this as presenting the human drama of computer design. It also supplies references that the student of architecture may want to pursue. If you have time, we recommend reading some of the classic papers in the field that are mentioned in these sections. It is both enjoyable and educational to hear the ideas directly from the creators. "Historical Perspective" was one of the most popular sections of prior editions.

    Appendix L (available at textbooks.elsevier.com/0123704901) contains solutions to the case study exercises in the book.

    Navigating the Text

    There is no single best order in which to approach these chapters and appendices, except that all readers should start with Chapter 1. If you don't want to read everything, here are some suggested sequences:

    ILP: Appendix A, Chapters 2 and 3, and Appendices F and G

    Memory Hierarchy: Appendix C and Chapters 5 and 6

    Thread- and Data-Level Parallelism: Chapter 4, Appendix H, and Appendix E

    ISA: Appendices B and J

    Appendix D can be read at any time, but it might work best if read after the ISA and cache sequences. Appendix I can be read whenever arithmetic moves you.

    Chapter Structure

    The material we have selected has been stretched upon a consistent framework that is followed in each chapter. We start by explaining the ideas of a chapter. These ideas are followed by a "Crosscutting Issues" section, a feature that shows how the ideas covered in one chapter interact with those given in other chapters. This is followed by a "Putting It All Together" section that ties these ideas together by showing how they are used in a real machine.

    Next in the sequence is "Fallacies and Pitfalls," which lets readers learn from the mistakes of others. We show examples of common misunderstandings and architectural traps that are difficult to avoid even when you know they are lying in wait for you. The "Fallacies and Pitfalls" sections are among the most popular sections of the book. Each chapter ends with a "Concluding Remarks" section.

    Case Studies with Exercises

    Each chapter ends with case studies and accompanying exercises. Authored by experts in industry and academia, the case studies explore key chapter concepts and verify understanding through increasingly challenging exercises. Instructors should find the case studies sufficiently detailed and robust to allow them to create their own additional exercises.

    Brackets for each exercise indicate the text sections of primary relevance to completing the exercise. We hope this helps readers to avoid exercises for which they haven't read the corresponding section, in addition to providing the source for review. Note that we provide solutions to the case study exercises in Appendix L. Exercises are rated, to give the reader a sense of the amount of time required to complete an exercise:

    [10] Less than 5 minutes (to read and understand)

    [15] 5–15 minutes for a full answer

    [20] 15–20 minutes for a full answer

    [25] 1 hour for a full written answer

    [30] Short programming project: less than 1 full day of programming

    [40] Significant programming project: 2 weeks of elapsed time

    [Discussion] Topic for discussion with others

    A second set of alternative case study exercises is available for instructors who register at textbooks.elsevier.com/0123704901. This second set will be revised every summer, so that early every fall, instructors can download a new set of exercises and solutions to accompany the case studies in the book.

    Supplemental Materials

    The accompanying CD contains a variety of resources, including the following:

    Reference appendices (some guest authored by subject experts) covering a range of advanced topics

    "Historical Perspectives" material that explores the development of the key ideas presented in each of the chapters in the text

    Search engine for both the main text and the CD-only content

    Additional resources are available at textbooks.elsevier.com/0123704901. The instructor site (accessible to adopters who register at textbooks.elsevier.com) includes:

    Alternative case study exercises with solutions (updated yearly)

    Instructor slides in PowerPoint

    Figures from the book in JPEG and PPT formats

    The companion site (accessible to all readers) includes:

    Solutions to the case study exercises in the text

    Links to related material on the Web

    List of errata

    New materials and links to other resources available on the Web will be added on a regular basis.


    Helping Improve This Book

    Finally, it is possible to make money while reading this book. (Talk about cost-performance!) If you read the Acknowledgments that follow, you will see that we went to great lengths to correct mistakes. Since a book goes through many printings, we have the opportunity to make even more corrections. If you uncover any remaining resilient bugs, please contact the publisher by electronic mail ([email protected]). The first reader to report an error with a fix that we incorporate in a future printing will be rewarded with a $1.00 bounty. Please check the errata sheet on the home page (textbooks.elsevier.com/0123704901) to see if the bug has already been reported. We process the bugs and send the checks about once a year or so, so please be patient.

    We welcome general comments to the text and invite you to send them to a separate email address at [email protected].

    Concluding Remarks

    Once again this book is a true co-authorship, with each of us writing half the chapters and an equal share of the appendices. We can't imagine how long it would have taken without someone else doing half the work, offering inspiration when the task seemed hopeless, providing the key insight to explain a difficult concept, supplying reviews over the weekend of chapters, and commiserating when the weight of our other obligations made it hard to pick up the pen. (These obligations have escalated exponentially with the number of editions, as one of us was President of Stanford and the other was President of the Association for Computing Machinery.) Thus, once again we share equally the blame for what you are about to read.

    John Hennessy David Patterson

  • Acknowledgments

    Although this is only the fourth edition of this book, we have actually created nine different versions of the text: three versions of the first edition (alpha, beta, and final) and two versions of the second, third, and fourth editions (beta and final). Along the way, we have received help from hundreds of reviewers and users. Each of these people has helped make this book better. Thus, we have chosen to list all of the people who have made contributions to some version of this book.

    Contributors to the Fourth Edition

    Like prior editions, this is a community effort that involves scores of volunteers. Without their help, this edition would not be nearly as polished.

    Reviewers

    Krste Asanovic, Massachusetts Institute of Technology; Mark Brehob, University of Michigan; Sudhanva Gurumurthi, University of Virginia; Mark D. Hill, University of Wisconsin–Madison; Wen-mei Hwu, University of Illinois at Urbana–Champaign; David Kaeli, Northeastern University; Ramadass Nagarajan, University of Texas at Austin; Karthikeyan Sankaralingam, University of Texas at Austin; Mark Smotherman, Clemson University; Gurindar Sohi, University of Wisconsin–Madison; Shyamkumar Thoziyoor, University of Notre Dame, Indiana; Dan Upton, University of Virginia; Sotirios G. Ziavras, New Jersey Institute of Technology

    Focus Group

    Krste Asanovic, Massachusetts Institute of Technology; José Duato, Universitat Politècnica de València and Simula; Antonio González, Intel and Universitat Politècnica de Catalunya; Mark D. Hill, University of Wisconsin–Madison; Lev G. Kirischian, Ryerson University; Timothy M. Pinkston, University of Southern California


    Appendices

    Krste Asanovic, Massachusetts Institute of Technology (Appendix F); Thomas M. Conte, North Carolina State University (Appendix D); José Duato, Universitat Politècnica de València and Simula (Appendix E); David Goldberg, Xerox PARC (Appendix I); Timothy M. Pinkston, University of Southern California (Appendix E)

    Case Studies with Exercises

    Andrea C. Arpaci-Dusseau, University of Wisconsin–Madison (Chapter 6); Remzi H. Arpaci-Dusseau, University of Wisconsin–Madison (Chapter 6); Robert P. Colwell, R&E Colwell & Assoc., Inc. (Chapter 2); Diana Franklin, California Polytechnic State University, San Luis Obispo (Chapter 1); Wen-mei W. Hwu, University of Illinois at Urbana–Champaign (Chapter 3); Norman P. Jouppi, HP Labs (Chapter 5); John W. Sias, University of Illinois at Urbana–Champaign (Chapter 3); David A. Wood, University of Wisconsin–Madison (Chapter 4)

    Additional Material

    John Mashey (geometric means and standard deviations in Chapter 1); Chenming Hu, University of California, Berkeley (wafer costs and yield parameters in Chapter 1); Bill Brantley and Dan Mudgett, AMD (Opteron memory hierarchy evaluation in Chapter 5); Mendel Rosenblum, Stanford and VMware (virtual machines in Chapter 5); Aravind Menon, EPFL Switzerland (Xen measurements in Chapter 5); Bruce Baumgart and Brewster Kahle, Internet Archive (IA cluster in Chapter 6); David Ford, Steve Kleiman, and Steve Miller, Network Appliances (FX6000 information in Chapter 6); Alexander Thomasian, Rutgers (queueing theory in Chapter 6)

    Finally, a special thanks once again to Mark Smotherman of Clemson University, who gave a final technical reading of our manuscript. Mark found numerous bugs and ambiguities, and the book is much cleaner as a result.

    This book could not have been published without a publisher, of course. We wish to thank all the Morgan Kaufmann/Elsevier staff for their efforts and support. For this fourth edition, we particularly want to thank Kimberlee Honjo, who coordinated surveys, focus groups, manuscript reviews and appendices, and Nate McFadden, who coordinated the development and review of the case studies. Our warmest thanks to our editor, Denise Penrose, for her leadership in our continuing writing saga.

    We must also thank our university staff, Margaret Rowland and Cecilia Pracher, for countless express mailings, as well as for holding down the fort at Stanford and Berkeley while we worked on the book.

    Our final thanks go to our wives for their suffering through increasingly early mornings of reading, thinking, and writing.


    Contributors to Previous Editions

    Reviewers

    George Adams, Purdue University; Sarita Adve, University of Illinois at Urbana–Champaign; Jim Archibald, Brigham Young University; Krste Asanovic, Massachusetts Institute of Technology; Jean-Loup Baer, University of Washington; Paul Barr, Northeastern University; Rajendra V. Boppana, University of Texas, San Antonio; Doug Burger, University of Texas, Austin; John Burger, SGI; Michael Butler; Thomas Casavant; Rohit Chandra; Peter Chen, University of Michigan; the classes at SUNY Stony Brook, Carnegie Mellon, Stanford, Clemson, and Wisconsin; Tim Coe, Vitesse Semiconductor; Bob Colwell, Intel; David Cummings; Bill Dally; David Douglas; Anthony Duben, Southeast Missouri State University; Susan Eggers, University of Washington; Joel Emer; Barry Fagin, Dartmouth; Joel Ferguson, University of California, Santa Cruz; Carl Feynman; David Filo; Josh Fisher, Hewlett-Packard Laboratories; Rob Fowler, DIKU; Mark Franklin, Washington University (St. Louis); Kourosh Gharachorloo; Nikolas Gloy, Harvard University; David Goldberg, Xerox Palo Alto Research Center; James Goodman, University of Wisconsin–Madison; David Harris, Harvey Mudd College; John Heinlein; Mark Heinrich, Stanford; Daniel Helman, University of California, Santa Cruz; Mark Hill, University of Wisconsin–Madison; Martin Hopkins, IBM; Jerry Huck, Hewlett-Packard Laboratories; Mary Jane Irwin, Pennsylvania State University; Truman Joe; Norm Jouppi; David Kaeli, Northeastern University; Roger Kieckhafer, University of Nebraska; Earl Killian; Allan Knies, Purdue University; Don Knuth; Jeff Kuskin, Stanford; James R. Larus, Microsoft Research; Corinna Lee, University of Toronto; Hank Levy; Kai Li, Princeton University; Lori Liebrock, University of Alaska, Fairbanks; Mikko Lipasti, University of Wisconsin–Madison; Gyula A. Mago, University of North Carolina, Chapel Hill; Bryan Martin; Norman Matloff; David Meyer; William Michalson, Worcester Polytechnic Institute; James Mooney; Trevor Mudge, University of Michigan; David Nagle, Carnegie Mellon University; Todd Narter; Victor Nelson; Vojin Oklobdzija, University of California, Berkeley; Kunle Olukotun, Stanford University; Bob Owens, Pennsylvania State University; Greg Papadapoulous, Sun; Joseph Pfeiffer; Keshav Pingali, Cornell University; Bruno Preiss, University of Waterloo; Steven Przybylski; Jim Quinlan; Andras Radics; Kishore Ramachandran, Georgia Institute of Technology; Joseph Rameh, University of Texas, Austin; Anthony Reeves, Cornell University; Richard Reid, Michigan State University; Steve Reinhardt, University of Michigan; David Rennels, University of California, Los Angeles; Arnold L. Rosenberg, University of Massachusetts, Amherst; Kaushik Roy, Purdue University; Emilio Salgueiro, Unysis; Peter Schnorf; Margo Seltzer; Behrooz Shirazi, Southern Methodist University; Daniel Siewiorek, Carnegie Mellon University; J. P. Singh, Princeton; Ashok Singhal; Jim Smith, University of Wisconsin–Madison; Mike Smith, Harvard University; Mark Smotherman, Clemson University; Guri Sohi, University of Wisconsin–Madison; Arun Somani, University of Washington; Gene Tagliarin, Clemson University; Evan Tick, University of Oregon; Akhilesh Tyagi, University of North Carolina, Chapel Hill; Mateo Valero, Universidad Politécnica de Cataluña, Barcelona; Anujan Varma, University of California, Santa Cruz; Thorsten von Eicken, Cornell University; Hank Walker, Texas A&M; Roy Want, Xerox Palo Alto Research Center; David Weaver, Sun; Shlomo Weiss, Tel Aviv University; David Wells; Mike Westall, Clemson University; Maurice Wilkes; Eric Williams; Thomas Willis, Purdue University; Malcolm Wing; Larry Wittie, SUNY Stony Brook; Ellen Witte Zegura, Georgia Institute of Technology

    Appendices

    The vector appendix was revised by Krste Asanovic of the Massachusetts Institute of Technology. The floating-point appendix was written originally by David Goldberg of Xerox PARC.

    Exercises

    George Adams, Purdue University; Todd M. Bezenek, University of Wisconsin–Madison (in remembrance of his grandmother Ethel Eshom); Susan Eggers; Anoop Gupta; David Hayes; Mark Hill; Allan Knies; Ethan L. Miller, University of California, Santa Cruz; Parthasarathy Ranganathan, Compaq Western Research Laboratory; Brandon Schwartz, University of Wisconsin–Madison; Michael Scott; Dan Siewiorek; Mike Smith; Mark Smotherman; Evan Tick; Thomas Willis.

    Special Thanks

    Duane Adams, Defense Advanced Research Projects Agency; Tom Adams; Sarita Adve, University of Illinois at Urbana–Champaign; Anant Agarwal; Dave Albonesi, University of Rochester; Mitch Alsup; Howard Alt; Dave Anderson; Peter Ashenden; David Bailey; Bill Bandy, Defense Advanced Research Projects Agency; L. Barroso, Compaq's Western Research Lab; Andy Bechtolsheim; C. Gordon Bell; Fred Berkowitz; John Best, IBM; Dileep Bhandarkar; Jeff Bier, BDTI; Mark Birman; David Black; David Boggs; Jim Brady; Forrest Brewer; Aaron Brown, University of California, Berkeley; E. Bugnion, Compaq's Western Research Lab; Alper Buyuktosunoglu, University of Rochester; Mark Callaghan; Jason F. Cantin; Paul Carrick; Chen-Chung Chang; Lei Chen, University of Rochester; Pete Chen; Nhan Chu; Doug Clark, Princeton University; Bob Cmelik; John Crawford; Zarka Cvetanovic; Mike Dahlin, University of Texas, Austin; Merrick Darley; the staff of the DEC Western Research Laboratory; John DeRosa; Lloyd Dickman; J. Ding; Susan Eggers, University of Washington; Wael El-Essawy, University of Rochester; Patty Enriquez, Mills; Milos Ercegovac; Robert Garner; K. Gharachorloo, Compaq's Western Research Lab; Garth Gibson; Ronald Greenberg; Ben Hao; John Henning, Compaq; Mark Hill, University of Wisconsin–Madison; Danny Hillis; David Hodges; Urs Hoelzle, Google; David Hough; Ed Hudson; Chris Hughes, University of Illinois at Urbana–Champaign; Mark Johnson; Lewis Jordan; Norm Jouppi; William Kahan; Randy Katz; Ed Kelly; Richard Kessler; Les Kohn; John Kowaleski, Compaq Computer Corp; Dan Lambright; Gary Lauterbach, Sun Microsystems; Corinna Lee; Ruby Lee; Don Lewine; Chao-Huang Lin; Paul Losleben, Defense Advanced Research Projects Agency; Yung-Hsiang Lu; Bob Lucas, Defense Advanced Research Projects Agency; Ken Lutz; Alan Mainwaring, Intel Berkeley Research Labs; Al Marston; Rich Martin, Rutgers; John Mashey; Luke McDowell; Sebastian Mirolo, Trimedia Corporation; Ravi Murthy; Biswadeep Nag; Lisa Noordergraaf, Sun Microsystems; Bob Parker, Defense Advanced Research Projects Agency; Vern Paxson, Center for Internet Research; Lawrence Prince; Steven Przybylski; Mark Pullen, Defense Advanced Research Projects Agency; Chris Rowen; Margaret Rowland; Greg Semeraro, University of Rochester; Bill Shannon; Behrooz Shirazi; Robert Shomler; Jim Slager; Mark Smotherman, Clemson University; the SMT research group at the University of Washington; Steve Squires, Defense Advanced Research Projects Agency; Ajay Sreekanth; Darren Staples; Charles Stapper; Jorge Stolfi; Peter Stoll; the students at Stanford and Berkeley who endured our first attempts at creating this book; Bob Supnik; Steve Swanson; Paul Taysom; Shreekant Thakkar; Alexander Thomasian, New Jersey Institute of Technology; John Toole, Defense Advanced Research Projects Agency; Kees A. Vissers, Trimedia Corporation; Willa Walker; David Weaver; Ric Wheeler, EMC; Maurice Wilkes; Richard Zimmerman.

    John Hennessy David Patterson

  • 1.1 Introduction 2
    1.2 Classes of Computers 4
    1.3 Defining Computer Architecture 8
    1.4 Trends in Technology 14
    1.5 Trends in Power in Integrated Circuits 17
    1.6 Trends in Cost 19
    1.7 Dependability 25
    1.8 Measuring, Reporting, and Summarizing Performance 28
    1.9 Quantitative Principles of Computer Design 37
    1.10 Putting It All Together: Performance and Price-Performance 44
    1.11 Fallacies and Pitfalls 48
    1.12 Concluding Remarks 52
    1.13 Historical Perspectives and References 54
    Case Studies with Exercises by Diana Franklin 55

  • 1 Fundamentals of Computer Design

    And now for something completely different.

    Monty Python's Flying Circus

  • 1.1 Introduction

    Computer technology has made incredible progress in the roughly 60 years since the first general-purpose electronic computer was created. Today, less than $500 will purchase a personal computer that has more performance, more main memory, and more disk storage than a computer bought in 1985 for 1 million dollars. This rapid improvement has come both from advances in the technology used to build computers and from innovation in computer design.

    Although technological improvements have been fairly steady, progress arising from better computer architectures has been much less consistent. During the first 25 years of electronic computers, both forces made a major contribution, delivering performance improvement of about 25% per year. The late 1970s saw the emergence of the microprocessor. The ability of the microprocessor to ride the improvements in integrated circuit technology led to a higher rate of improvement: roughly 35% growth per year in performance.

This growth rate, combined with the cost advantages of a mass-produced microprocessor, led to an increasing fraction of the computer business being based on microprocessors. In addition, two significant changes in the computer marketplace made it easier than ever before to be commercially successful with a new architecture. First, the virtual elimination of assembly language programming reduced the need for object-code compatibility. Second, the creation of standardized, vendor-independent operating systems, such as UNIX and its clone, Linux, lowered the cost and risk of bringing out a new architecture.

These changes made it possible to develop successfully a new set of architectures with simpler instructions, called RISC (Reduced Instruction Set Computer) architectures, in the early 1980s. The RISC-based machines focused the attention of designers on two critical performance techniques: the exploitation of instruction-level parallelism (initially through pipelining and later through multiple instruction issue) and the use of caches (initially in simple forms and later using more sophisticated organizations and optimizations).

The RISC-based computers raised the performance bar, forcing prior architectures to keep up or disappear. The Digital Equipment VAX could not, and so it was replaced by a RISC architecture. Intel rose to the challenge, primarily by translating x86 (or IA-32) instructions into RISC-like instructions internally, allowing it to adopt many of the innovations first pioneered in the RISC designs. As transistor counts soared in the late 1990s, the hardware overhead of translating the more complex x86 architecture became negligible.

Figure 1.1 shows that the combination of architectural and organizational enhancements led to 16 years of sustained growth in performance at an annual rate of over 50%, a rate that is unprecedented in the computer industry.

The effect of this dramatic growth rate in the 20th century has been twofold. First, it has significantly enhanced the capability available to computer users. For many applications, the highest-performance microprocessors of today outperform the supercomputer of less than 10 years ago.


Second, this dramatic rate of improvement has led to the dominance of microprocessor-based computers across the entire range of computer design. PCs and workstations have emerged as major products in the computer industry. Minicomputers, which were traditionally made from off-the-shelf logic or from gate arrays, have been replaced by servers made using microprocessors. Mainframes have been almost replaced with multiprocessors consisting of small numbers of off-the-shelf microprocessors. Even high-end supercomputers are being built with collections of microprocessors.

These innovations led to a renaissance in computer design, which emphasized both architectural innovation and efficient use of technology improvements. This rate of growth has compounded so that by 2002, high-performance microprocessors are about seven times faster than what would have been obtained by relying solely on technology, including improved circuit design.

Figure 1.1 Growth in processor performance since the mid-1980s. This chart plots performance relative to the VAX 11/780 as measured by the SPECint benchmarks (see Section 1.8). Prior to the mid-1980s, processor performance growth was largely technology driven and averaged about 25% per year. The increase in growth to about 52% since then is attributable to more advanced architectural and organizational ideas. By 2002, this growth led to a difference in performance of about a factor of seven. Performance for floating-point-oriented calculations has increased even faster. Since 2002, the limits of power, available instruction-level parallelism, and long memory latency have slowed uniprocessor performance growth to about 20% per year. Since SPEC has changed over the years, performance of newer machines is estimated by a scaling factor that relates the performance for two different versions of SPEC (e.g., SPEC92, SPEC95, and SPEC2000).

[Figure 1.1 plot. Vertical axis: performance (vs. VAX-11/780), log scale from 1 to 10,000; horizontal axis: years 1978–2006. Plotted points: VAX-11/780 = 1; VAX-11/785 = 1.5; VAX 8700 = 5; Sun-4/260 = 9; MIPS M/120 = 13; MIPS M2000 = 18; IBM RS6000/540 = 24; HP PA-RISC, 0.05 GHz = 51; Alpha 21064, 0.2 GHz = 80; PowerPC 604, 0.1 GHz = 117; Alpha 21064A, 0.3 GHz = 183; Alpha 21164, 0.3 GHz = 280; Alpha 21164, 0.5 GHz = 481; Alpha 21164, 0.6 GHz = 649; Alpha 21264, 0.6 GHz = 993; Alpha 21264A, 0.7 GHz = 1267; Intel Pentium III, 1.0 GHz = 1779; AMD Athlon, 1.6 GHz = 2584; Intel Pentium 4, 3.0 GHz = 4195; AMD Opteron, 2.2 GHz = 5364; Intel Xeon, 3.6 GHz = 5764; 64-bit Intel Xeon, 3.6 GHz = 6505. Trend lines: 25%/year before the mid-1980s, 52%/year through 2002, 20%/year since.]
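Where does the factor of seven come from? It is compound-growth arithmetic. The following minimal C sketch is ours, not the book's; it takes the 52% annual rate from Figure 1.1 as the architectural-era growth and assumes the roughly 35% annual rate cited earlier is what technology alone would have delivered, compounded over the 16 years from 1986 to 2002:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double arch_rate = 0.52; /* architecture + technology, per Figure 1.1 */
        const double tech_rate = 0.35; /* technology alone (our assumption) */
        const int years = 16;          /* mid-1980s (1986) through 2002 */

        double arch_gain = pow(1.0 + arch_rate, years);
        double tech_gain = pow(1.0 + tech_rate, years);

        printf("52%%/year for %d years: %6.0fx\n", years, arch_gain);
        printf("35%%/year for %d years: %6.0fx\n", years, tech_gain);
        /* Prints roughly 6.7, i.e., "about seven times faster" */
        printf("ratio: %.1fx\n", arch_gain / tech_gain);
        return 0;
    }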


However, Figure 1.1 also shows that this 16-year renaissance is over. Since 2002, processor performance improvement has dropped to about 20% per year due to the triple hurdles of maximum power dissipation of air-cooled chips, little instruction-level parallelism left to exploit efficiently, and almost unchanged memory latency. Indeed, in 2004 Intel canceled its high-performance uniprocessor projects and joined IBM and Sun in declaring that the road to higher performance would be via multiple processors per chip rather than via faster uniprocessors. This signals a historic switch from relying solely on instruction-level parallelism (ILP), the primary focus of the first three editions of this book, to thread-level parallelism (TLP) and data-level parallelism (DLP), which are featured in this edition. Whereas the compiler and hardware conspire to exploit ILP implicitly without the programmer's attention, TLP and DLP are explicitly parallel, requiring the programmer to write parallel code to gain performance.

This text is about the architectural ideas and accompanying compiler improvements that made the incredible growth rate possible in the last century, the reasons for the dramatic change, and the challenges and initial promising approaches to architectural ideas and compilers for the 21st century. At the core is a quantitative approach to computer design and analysis that uses empirical observations of programs, experimentation, and simulation as its tools. It is this style and approach to computer design that is reflected in this text. This book was written not only to explain this design style, but also to stimulate you to contribute to this progress. We believe the approach will work for explicitly parallel computers of the future just as it worked for the implicitly parallel computers of the past.

1.2 Classes of Computers

In the 1960s, the dominant form of computing was on large mainframes, computers costing millions of dollars and stored in computer rooms with multiple operators overseeing their support. Typical applications included business data processing and large-scale scientific computing. The 1970s saw the birth of the minicomputer, a smaller-sized computer initially focused on applications in scientific laboratories, but rapidly branching out with the popularity of time-sharing, in which multiple users share a computer interactively through independent terminals. That decade also saw the emergence of supercomputers, which were high-performance computers for scientific computing. Although few in number, they were important historically because they pioneered innovations that later trickled down to less expensive computer classes. The 1980s saw the rise of the desktop computer based on microprocessors, in the form of both personal computers and workstations. The individually owned desktop computer replaced time-sharing and led to the rise of servers, computers that provided larger-scale services such as reliable, long-term file storage and access, larger memory, and more computing power. The 1990s saw the emergence of the Internet and the World Wide Web, the first successful handheld computing devices (personal digital assistants or PDAs), and the emergence of high-performance digital consumer electronics, from video games to set-top boxes. The extraordinary popularity of cell phones has been obvious since 2000, with rapid improvements in functions and sales that far exceed those of the PC. These more recent applications use embedded computers, where computers are lodged in other devices and their presence is not immediately obvious.

These changes have set the stage for a dramatic change in how we view computing, computing applications, and the computer markets in this new century. Not since the creation of the personal computer more than 20 years ago have we seen such dramatic changes in the way computers appear and in how they are used. These changes in computer use have led to three different computing markets, each characterized by different applications, requirements, and computing technologies. Figure 1.2 summarizes these mainstream classes of computing environments and their important characteristics.

    Desktop Computing

The first, and still the largest market in dollar terms, is desktop computing. Desktop computing spans from low-end systems that sell for under $500 to high-end, heavily configured workstations that may sell for $5000. Throughout this range in price and capability, the desktop market tends to be driven to optimize price-performance. This combination of performance (measured primarily in terms of compute performance and graphics performance) and price of a system is what matters most to customers in this market, and hence to computer designers. As a result, the newest, highest-performance microprocessors and cost-reduced microprocessors often appear first in desktop systems (see Section 1.6 for a discussion of the issues affecting the cost of computers).

Desktop computing also tends to be reasonably well characterized in terms of applications and benchmarking, though the increasing use of Web-centric, interactive applications poses new challenges in performance evaluation.

Feature | Desktop | Server | Embedded
Price of system | $500–$5000 | $5000–$5,000,000 | $10–$100,000 (including network routers at the high end)
Price of microprocessor module | $50–$500 (per processor) | $200–$10,000 (per processor) | $0.01–$100 (per processor)
Critical system design issues | Price-performance, graphics performance | Throughput, availability, scalability | Price, power consumption, application-specific performance

Figure 1.2 A summary of the three mainstream computing classes and their system characteristics. Note the wide range in system price for servers and embedded systems. For servers, this range arises from the need for very large-scale multiprocessor systems for high-end transaction processing and Web server applications. The total number of embedded processors sold in 2005 is estimated to exceed 3 billion if you include 8-bit and 16-bit microprocessors. Perhaps 200 million desktop computers and 10 million servers were sold in 2005.


    Servers

As the shift to desktop computing occurred, the role of servers grew to provide larger-scale and more reliable file and computing services. The World Wide Web accelerated this trend because of the tremendous growth in the demand for and sophistication of Web-based services. Such servers have become the backbone of large-scale enterprise computing, replacing the traditional mainframe.

For servers, different characteristics are important. First, dependability is critical. (We discuss dependability in Section 1.7.) Consider the servers running Google, taking orders for Cisco, or running auctions on eBay. Failure of such server systems is far more catastrophic than failure of a single desktop, since these servers must operate seven days a week, 24 hours a day. Figure 1.3 estimates revenue costs of downtime as of 2000. To bring costs up to date, Amazon.com had $2.98 billion in sales in the fall quarter of 2005. As there were about 2200 hours in that quarter, the average revenue per hour was $1.35 million. During a peak hour for Christmas shopping, the potential loss would be many times higher.

Hence, the estimated costs of an unavailable system are high, yet Figure 1.3 and the Amazon numbers are purely lost revenue and do not account for lost employee productivity or the cost of unhappy customers.

A second key feature of server systems is scalability. Server systems often grow in response to an increasing demand for the services they support or an increase in functional requirements. Thus, the ability to scale up the computing capacity, the memory, the storage, and the I/O bandwidth of a server is crucial.

Lastly, servers are designed for efficient throughput. That is, the overall performance of the server, in terms of transactions per minute or Web pages served per second, is what is crucial.

Application | Cost of downtime per hour (thousands of $) | Annual losses (millions of $) with 1% downtime (87.6 hrs/yr) | 0.5% (43.8 hrs/yr) | 0.1% (8.8 hrs/yr)
Brokerage operations | $6450 | $565 | $283 | $56.5
Credit card authorization | $2600 | $228 | $114 | $22.8
Package shipping services | $150 | $13 | $6.6 | $1.3
Home shopping channel | $113 | $9.9 | $4.9 | $1.0
Catalog sales center | $90 | $7.9 | $3.9 | $0.8
Airline reservation center | $89 | $7.9 | $3.9 | $0.8
Cellular service activation | $41 | $3.6 | $1.8 | $0.4
Online network fees | $25 | $2.2 | $1.1 | $0.2
ATM service fees | $14 | $1.2 | $0.6 | $0.1

Figure 1.3 The cost of an unavailable system is shown by analyzing the cost of downtime (in terms of immediately lost revenue), assuming three different levels of availability, and that downtime is distributed uniformly. These data are from Kembel [2000] and were collected and analyzed by Contingency Planning Research.
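The arithmetic behind Figure 1.3, and the Amazon estimate above, is a single multiplication: an hourly revenue loss times the hours of downtime that an availability level implies. A minimal C sketch, ours rather than the book's, using only the numbers quoted above:

    #include <stdio.h>

    int main(void) {
        const double HOURS_PER_YEAR = 8760.0;
        const double cost_per_hour = 6450e3;  /* brokerage operations, $/hour */
        const double downtime[] = { 0.01, 0.005, 0.001 };

        /* Amazon, fall quarter 2005: $2.98 billion over about 2200 hours */
        printf("Amazon revenue per hour: $%.2f million\n",
               2.98e9 / 2200.0 / 1e6);

        for (int i = 0; i < 3; i++) {
            double hours = HOURS_PER_YEAR * downtime[i];
            double loss = cost_per_hour * hours;  /* lost revenue per year */
            printf("%4.1f%% downtime (%4.1f hrs/yr): $%6.1f million\n",
                   100.0 * downtime[i], hours, loss / 1e6);
        }
        return 0;
    }

The first row of Figure 1.3 falls out directly: $6450 thousand per hour at 1% downtime (87.6 hours) is about $565 million per year.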


Responsiveness to an individual request remains important, but overall efficiency and cost-effectiveness, as determined by how many requests can be handled in a unit time, are the key metrics for most servers. We return to the issue of assessing performance for different types of computing environments in Section 1.8.

A related category is supercomputers. They are the most expensive computers, costing tens of millions of dollars, and they emphasize floating-point performance. Clusters of desktop computers, which are discussed in Appendix H, have largely overtaken this class of computer. As clusters grow in popularity, the number of conventional supercomputers is shrinking, as is the number of companies that make them.

    Embedded Computers

Embedded computers are the fastest growing portion of the computer market. These devices range from everyday machines (most microwaves, most washing machines, most printers, most networking switches, and all cars contain simple embedded microprocessors) to handheld digital devices, such as cell phones and smart cards, to video games and digital set-top boxes.

Embedded computers have the widest spread of processing power and cost. They include 8-bit and 16-bit processors that may cost less than a dime, 32-bit microprocessors that execute 100 million instructions per second and cost under $5, and high-end processors for the newest video games or network switches that cost $100 and can execute a billion instructions per second. Although the range of computing power in the embedded computing market is very large, price is a key factor in the design of computers for this space. Performance requirements do exist, of course, but the primary goal is often meeting the performance need at a minimum price, rather than achieving higher performance at a higher price.

Often, the performance requirement in an embedded application is real-time execution. A real-time performance requirement is when a segment of the application has an absolute maximum execution time. For example, in a digital set-top box, the time to process each video frame is limited, since the processor must accept and process the next frame shortly. In some applications, a more nuanced requirement exists: the average time for a particular task is constrained as well as the number of instances when some maximum time is exceeded. Such approaches, sometimes called soft real-time, arise when it is possible to occasionally miss the time constraint on an event, as long as not too many are missed. Real-time performance tends to be highly application dependent.
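To make the soft real-time idea concrete, here is a minimal sketch, not from the book, of a frame-processing loop that tolerates occasional deadline misses but checks the miss rate against a budget. The 33 ms deadline, 2% budget, and simulated frame times are all invented for illustration:

    #include <stdio.h>

    #define FRAME_DEADLINE_MS 33.0  /* hypothetical per-frame budget (~30 fps) */
    #define MISS_BUDGET       0.02  /* tolerate up to 2% missed deadlines */

    /* Stand-in for a measured per-frame processing time, in milliseconds. */
    static double process_frame(int frame) {
        return (frame % 50 == 0) ? 41.0 : 28.0;  /* every 50th frame runs long */
    }

    int main(void) {
        int frames = 1000, misses = 0;
        for (int f = 0; f < frames; f++) {
            if (process_frame(f) > FRAME_DEADLINE_MS)
                misses++;            /* soft real-time: count, don't abort */
        }
        double miss_rate = (double)misses / frames;
        printf("missed %d of %d deadlines (%.1f%%): %s\n", misses, frames,
               100.0 * miss_rate,
               miss_rate <= MISS_BUDGET ? "within soft real-time budget"
                                        : "budget exceeded");
        return 0;
    }

A hard real-time system would instead treat the first miss as a failure; the soft version only asks that misses stay rare.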

Two other key characteristics exist in many embedded applications: the need to minimize memory and the need to minimize power. In many embedded applications, the memory can be a substantial portion of the system cost, and it is important to optimize memory size in such cases. Sometimes the application is expected to fit totally in the memory on the processor chip; other times the application needs to fit totally in a small off-chip memory. In any event, the importance of memory size translates to an emphasis on code size, since data size is dictated by the application.

Larger memories also mean more power, and optimizing power is often critical in embedded applications. Although the emphasis on low power is frequently driven by the use of batteries, the need to use less expensive packaging (plastic versus ceramic) and the absence of a fan for cooling also limit total power consumption. We examine the issue of power in more detail in Section 1.5.

Most of this book applies to the design, use, and performance of embedded processors, whether they are off-the-shelf microprocessors or microprocessor cores that will be assembled with other special-purpose hardware.

Indeed, the third edition of this book included examples from embedded computing to illustrate the ideas in every chapter. Alas, most readers found these examples unsatisfactory, as the data that drives the quantitative design and evaluation of desktop and server computers has not yet been extended well to embedded computing (see the challenges with EEMBC, for example, in Section 1.8). Hence, we are left for now with qualitative descriptions, which do not fit well with the rest of the book. As a result, in this edition we consolidated the embedded material into a single appendix. We believe this new appendix (Appendix D) improves the flow of ideas in the text while still allowing readers to see how the differing requirements affect embedded computing.

1.3 Defining Computer Architecture

The task the computer designer faces is a complex one: Determine what attributes are important for a new computer, then design a computer to maximize performance while staying within cost, power, and availability constraints. This task has many aspects, including instruction set design, functional organization, logic design, and implementation. The implementation may encompass integrated circuit design, packaging, power, and cooling. Optimizing the design requires familiarity with a very wide range of technologies, from compilers and operating systems to logic design and packaging.

In the past, the term computer architecture often referred only to instruction set design. Other aspects of computer design were called implementation, often insinuating that implementation is uninteresting or less challenging.

We believe this view is incorrect. The architect's or designer's job is much more than instruction set design, and the technical hurdles in the other aspects of the project are likely more challenging than those encountered in instruction set design. We'll quickly review instruction set architecture before describing the larger challenges for the computer architect.

    Instruction Set Architecture

We use the term instruction set architecture (ISA) to refer to the actual programmer-visible instruction set in this book. The ISA serves as the boundary between the software and hardware. This quick review of ISA will use examples from MIPS and 80x86 to illustrate the seven dimensions of an ISA. Appendices B and J give more details on MIPS and the 80x86 ISAs.

1. Class of ISA: Nearly all ISAs today are classified as general-purpose register architectures, where the operands are either registers or memory locations. The 80x86 has 16 general-purpose registers and 16 that can hold floating-point data, while MIPS has 32 general-purpose and 32 floating-point registers (see Figure 1.4). The two popular versions of this class are register-memory ISAs such as the 80x86, which can access memory as part of many instructions, and load-store ISAs such as MIPS, which can access memory only with load or store instructions. All recent ISAs are load-store.

2. Memory addressing: Virtually all desktop and server computers, including the 80x86 and MIPS, use byte addressing to access memory operands. Some architectures, like MIPS, require that objects be aligned. An access to an object of size s bytes at byte address A is aligned if A mod s = 0. (See Figure B.5 on page B-9.) The 80x86 does not require alignment, but accesses are generally faster if operands are aligned. (A short sketch of the alignment test appears after this list.)

3. Addressing modes: In addition to specifying registers and constant operands, addressing modes specify the address of a memory object. MIPS addressing modes are Register, Immediate (for constants), and Displacement, where a constant offset is added to a register to form the memory address. The 80x86 supports those three plus three variations of displacement: no register (absolute), two registers (based indexed with displacement), and two registers where one register is multiplied by the size of the operand in bytes (based with scaled index and displacement). It also has modes like the last three, minus the displacement field: register indirect, indexed, and based with scaled index.

Name | Number | Use | Preserved across a call?
$zero | 0 | The constant value 0 | N.A.
$at | 1 | Assembler temporary | No
$v0–$v1 | 2–3 | Values for function results and expression evaluation | No
$a0–$a3 | 4–7 | Arguments | No
$t0–$t7 | 8–15 | Temporaries | No
$s0–$s7 | 16–23 | Saved temporaries | Yes
$t8–$t9 | 24–25 | Temporaries | No
$k0–$k1 | 26–27 | Reserved for OS kernel | No
$gp | 28 | Global pointer | Yes
$sp | 29 | Stack pointer | Yes
$fp | 30 | Frame pointer | Yes
$ra | 31 | Return address | Yes

Figure 1.4 MIPS registers and usage conventions. In addition to the 32 general-purpose registers (R0–R31), MIPS has 32 floating-point registers (F0–F31) that can hold either a 32-bit single-precision number or a 64-bit double-precision number.


4. Types and sizes of operands: Like most ISAs, MIPS and 80x86 support operand sizes of 8-bit (ASCII character), 16-bit (Unicode character or half word), 32-bit (integer or word), 64-bit (double word or long integer), and IEEE 754 floating point in 32-bit (single precision) and 64-bit (double precision). The 80x86 also supports 80-bit floating point (extended double precision).

5. Operations: The general categories of operations are data transfer, arithmetic logical, control (discussed next), and floating point. MIPS is a simple and easy-to-pipeline instruction set architecture, and it is representative of the RISC architectures being used in 2006. Figure 1.5 summarizes the MIPS ISA. The 80x86 has a much richer and larger set of operations (see Appendix J).

6. Control flow instructions: Virtually all ISAs, including the 80x86 and MIPS, support conditional branches, unconditional jumps, procedure calls, and returns. Both use PC-relative addressing, where the branch address is specified by an address field that is added to the PC. There are some small differences. MIPS conditional branches (BEQ, BNE, etc.) test the contents of registers, while the 80x86 branches (JE, JNE, etc.) test condition code bits set as side effects of arithmetic/logic operations. The MIPS procedure call (JAL) places the return address in a register, while the 80x86 call (CALLF) places the return address on a stack in memory.

7. Encoding an ISA: There are two basic choices on encoding: fixed length and variable length. All MIPS instructions are 32 bits long, which simplifies instruction decoding. Figure 1.6 shows the MIPS instruction formats. The 80x86 encoding is variable length, ranging from 1 to 18 bytes. Variable-length instructions can take less space than fixed-length instructions, so a program compiled for the 80x86 is usually smaller than the same program compiled for MIPS. Note that the choices mentioned above will affect how the instructions are encoded into a binary representation. For example, the number of registers and the number of addressing modes both have a significant impact on the size of instructions, as the register field and addressing mode field can appear many times in a single instruction.
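As promised in item 2, the alignment rule is a one-line modulus test. A minimal C sketch; the address 0x1004 is a made-up value used only for illustration:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* An access of size s bytes at byte address A is aligned if A mod s == 0. */
    static bool is_aligned(uintptr_t addr, size_t size) {
        return addr % size == 0;
    }

    int main(void) {
        uintptr_t a = 0x1004;  /* hypothetical byte address */
        printf("0x%lx as a 4-byte word: %s\n", (unsigned long)a,
               is_aligned(a, 4) ? "aligned" : "misaligned");
        printf("0x%lx as an 8-byte double word: %s\n", (unsigned long)a,
               is_aligned(a, 8) ? "aligned" : "misaligned");
        return 0;
    }

The same address can thus be aligned for one operand size and misaligned for a larger one, which is why MIPS ties the requirement to the access size.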

The other challenges facing the computer architect beyond ISA design are particularly acute at the present, when the differences among instruction sets are small and when there are distinct application areas. Therefore, starting with this edition, the bulk of the instruction set material beyond this quick review is found in the appendices (see Appendices B and J).

    We use a subset of MIPS64 as the example ISA in this book.


Instruction type/opcode | Instruction meaning
Data transfers | Move data between registers and memory, or between the integer and FP or special registers; only memory address mode is 16-bit displacement + contents of a GPR
LB, LBU, SB | Load byte, load byte unsigned, store byte (to/from integer registers)
LH, LHU, SH | Load half word, load half word unsigned, store half word (to/from integer registers)
LW, LWU, SW | Load word, load word unsigned, store word (to/from integer registers)
LD, SD | Load double word, store double word (to/from integer registers)
L.S, L.D, S.S, S.D | Load SP float, load DP float, store SP float, store DP float
MFC0, MTC0 | Copy from/to GPR to/from a special register
MOV.S, MOV.D | Copy one SP or DP FP register to another FP register
MFC1, MTC1 | Copy 32 bits to/from FP registers from/to integer registers
Arithmetic/logical | Operations on integer or logical data in GPRs; signed arithmetic trap on overflow
DADD, DADDI, DADDU, DADDIU | Add, add immediate (all immediates are 16 bits); signed and unsigned
DSUB, DSUBU | Subtract; signed and unsigned
DMUL, DMULU, DDIV, DDIVU, MADD | Multiply and divide, signed and unsigned; multiply-add; all operations take and yield 64-bit values
AND, ANDI | And, and immediate
OR, ORI, XOR, XORI | Or, or immediate, exclusive or, exclusive or immediate
LUI | Load upper immediate; loads bits 32 to 47 of register with immediate, then sign-extends
DSLL, DSRL, DSRA, DSLLV, DSRLV, DSRAV | Shifts: both immediate (DS__) and variable form (DS__V); shifts are shift left logical, right logical, right arithmetic
SLT, SLTI, SLTU, SLTIU | Set less than, set less than immediate; signed and unsigned
Control | Conditional branches and jumps; PC-relative or through register
BEQZ, BNEZ | Branch GPRs equal/not equal to zero; 16-bit offset from PC + 4
BEQ, BNE | Branch GPR equal/not equal; 16-bit offset from PC + 4
BC1T, BC1F | Test comparison bit in the FP status register and branch; 16-bit offset from PC + 4
MOVN, MOVZ | Copy GPR to another GPR if third GPR is negative, zero
J, JR | Jumps: 26-bit offset from PC + 4 (J) or target in register (JR)
JAL, JALR | Jump and link: save PC + 4 in R31, target is PC-relative (JAL) or a register (JALR)
TRAP | Transfer to operating system at a vectored address
ERET | Return to user code from an exception; restore user mode
Floating point | FP operations on DP and SP formats
ADD.D, ADD.S, ADD.PS | Add DP, SP numbers, and pairs of SP numbers
SUB.D, SUB.S, SUB.PS | Subtract DP, SP numbers, and pairs of SP numbers
MUL.D, MUL.S, MUL.PS | Multiply DP, SP floating point, and pairs of SP numbers
MADD.D, MADD.S, MADD.PS | Multiply-add DP, SP numbers, and pairs of SP numbers
DIV.D, DIV.S, DIV.PS | Divide DP, SP floating point, and pairs of SP numbers
CVT._._ | Convert instructions: CVT.x.y converts from type x to type y, where x and y are L (64-bit integer), W (32-bit integer), D (DP), or S (SP). Both operands are FPRs.
C.__.D, C.__.S | DP and SP compares: __ = LT, GT, LE, GE, EQ, NE; sets bit in FP status register

Figure 1.5 Subset of the instructions in MIPS64. SP = single precision; DP = double precision. Appendix B gives much more detail on MIPS64. For data, the most significant bit number is 0; least is 63.


    The Rest of Computer Architecture: Designing the Organization and Hardware to Meet Goals and Functional Requirements

The implementation of a computer has two components: organization and hardware. The term organization includes the high-level aspects of a computer's design, such as the memory system, the memory interconnect, and the design of the internal processor or CPU (central processing unit, where arithmetic, logic, branching, and data transfer are implemented). For example, two processors with the same instruction set architecture but very different organizations are the AMD Opteron 64 and the Intel Pentium 4. Both processors implement the x86 instruction set, but they have very different pipeline and cache organizations.

Hardware refers to the specifics of a computer, including the detailed logic design and the packaging technology of the computer. Often a line of computers contains computers with identical instruction set architectures and nearly identical organizations, but they differ in the detailed hardware implementation. For example, the Pentium 4 and the Mobile Pentium 4 are nearly identical, but offer different clock rates and different memory systems, making the Mobile Pentium 4 more effective for low-end computers.

In this book, the word architecture covers all three aspects of computer design: instruction set architecture, organization, and hardware.

Computer architects must design a computer to meet functional requirements as well as price, power, performance, and availability goals. Figure 1.7 summarizes requirements to consider in designing a new computer. Often, architects

Figure 1.6 MIPS64 instruction set architecture formats. All instructions are 32 bits long. The R format is for integer register-to-register operations, such as DADDU, DSUBU, and so on. The I format is for data transfers, branches, and immediate instructions, such as LD, SD, BEQZ, and DADDIs. The J format is for jumps, the FR format for floating-point operations, and the FI format for floating-point branches.

[Figure 1.6 diagram. Basic instruction formats, each 32 bits wide:
R: opcode (31–26) | rs (25–21) | rt (20–16) | rd (15–11) | shamt (10–6) | funct (5–0)
I: opcode (31–26) | rs (25–21) | rt (20–16) | immediate (15–0)]
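One payoff of a fixed 32-bit encoding is that decoding reduces to shifts and masks. A minimal C sketch, ours rather than the book's, that extracts fields at the bit positions shown in the Figure 1.6 diagram:

    #include <stdint.h>
    #include <stdio.h>

    /* Extract bits hi..lo (inclusive) of a 32-bit instruction word. */
    static uint32_t bits(uint32_t insn, int hi, int lo) {
        return (insn >> lo) & ((1u << (hi - lo + 1)) - 1u);
    }

    int main(void) {
        uint32_t insn = 0x012A4020;  /* encodes the R-type add $t0, $t1, $t2 */

        /* R format: opcode | rs | rt | rd | shamt | funct */
        printf("opcode=%u rs=%u rt=%u rd=%u shamt=%u funct=0x%x\n",
               bits(insn, 31, 26), bits(insn, 25, 21), bits(insn, 20, 16),
               bits(insn, 15, 11), bits(insn, 10, 6), bits(insn, 5, 0));

        /* I format reuses opcode/rs/rt, with a 16-bit immediate in bits 15..0 */
        printf("immediate=%u\n", bits(insn, 15, 0));
        return 0;
    }

A variable-length ISA like the 80x86 cannot use a single routine of this form; the position and presence of each field depend on the bytes already decoded, which is part of the decoding cost the chapter alludes to.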