Pipelining: Hazards, Methods of Optimization, and a Potential Low-Power Alternative

Date
2011
Department
Haverford College. Department of Computer Science
Type
Thesis
Language
eng
Access Restrictions
Open Access
Abstract
This paper surveys methods of microprocessor optimization, particularly pipelining, which is ubiquitous in modern chips. Pipelining executes instructions in stages, so that multiple instructions can be in flight simultaneously, allowing the chip to use its resources more efficiently. This overlap creates hazards, situations that can produce incorrect results: structural hazards (insufficient hardware to process all queued instructions), data hazards (data is read or written out of order, so an instruction uses a stale or overwritten value), and branch hazards (the pipeline does not know whether to load the branch-target or fall-through instructions). These complexities slow the pipeline down, so to recover speed within these constraints, additional hardware, and therefore extra energy and heat, is required to detect and resolve potential hazards. This work informs our study of an architecture, conceived by Dave Wonnacott, that has a more complex and subdivided instruction set. That design shifts much of the complexity from hardware to compiler design, which allows for smaller chips. Smaller chips have lower heat and energy costs, which is itself valuable but also creates the potential for running multiple chips at the same cost as one larger (pipelined) chip.
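
The data-hazard case can be made concrete with a small model. The sketch below is not from the thesis; it is a minimal Python simulation, with invented instruction and register names, of an in-order five-stage pipeline without forwarding, whose hazard-detection step stalls an instruction in decode until every source register it reads has been written back. The bookkeeping it performs (the write_ready table) stands in for the extra detection logic that, in real hardware, costs area, energy, and heat.

# Illustrative sketch (not from the thesis): a toy in-order 5-stage pipeline
# (IF, ID, EX, MEM, WB) that detects read-after-write (RAW) data hazards and
# stalls in decode. Instruction and register names are assumptions.

class Instr:
    def __init__(self, name, dest, srcs):
        self.name, self.dest, self.srcs = name, dest, srcs

def schedule(program):
    """Return (name, stalls, writeback cycle) per instruction, stalling in ID
    whenever a source register is still being produced upstream (no forwarding)."""
    write_ready = {}      # register -> cycle its value is written back
    issue_cycle = 0       # cycle the next instruction enters IF
    timeline = []
    for instr in program:
        # Hazard detection: delay decode until all sources are written back.
        earliest_id = issue_cycle + 1
        for reg in instr.srcs:
            earliest_id = max(earliest_id, write_ready.get(reg, 0))
        stalls = earliest_id - (issue_cycle + 1)
        wb_cycle = earliest_id + 3          # ID -> EX -> MEM -> WB
        write_ready[instr.dest] = wb_cycle
        timeline.append((instr.name, stalls, wb_cycle))
        issue_cycle = earliest_id           # next instruction trails by one stage
    return timeline

prog = [
    Instr("load r1",      "r1", []),
    Instr("add r2,r1,r1", "r2", ["r1"]),        # RAW hazard on r1 -> stalls
    Instr("sub r3,r2,r1", "r3", ["r2", "r1"]),  # RAW hazard on r2 -> stalls
]

for name, stalls, wb in schedule(prog):
    print(f"{name:14s} stalls={stalls} writes back in cycle {wb}")

Running the sketch shows each dependent instruction waiting extra cycles for its operands; forwarding hardware in a real pipeline reduces those stalls, but only by adding yet more detection and bypass logic, which is the trade-off the abstract describes.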