Chapter 3: Computer Hardware

MODERN CHIP COMPONENTS
The four most important concepts in microprocessor architecture are functional units, pipelines, caches, and buses. These factors determine how fast a processor will run and how efficiently it will communicate with the outside world. Functional units are subsystems of logic circuits that carry out a program's instructions, and different types of functional units handle different instructions. Integer units handle fixed-point arithmetic and Boolean logic. Floating-point units handle more complex arithmetic operations involving noninteger values. Load/store units load data from and store data to memory.

Pipelines work like factory assembly lines. Each stage of the pipeline handles one relatively small and simple task that makes up part of the work needed to execute one computer instruction. A simple pipeline has four stages: fetch (retrieve an instruction from cache), decode (figure out what the instruction does), execute (carry out the instruction), and write back (store the result). Each pipeline stage usually takes one clock cycle. Most modern processors have superscalar pipelines, which are two or more pipelines arranged in parallel. That way, the processor can issue and complete multiple instructions in each clock cycle. The processor may also rearrange the instructions at execution time to put them in a more efficient order; this procedure is called dynamic execution.

Modern processors can operate at speeds above 1 GHz. To maintain this rate, the processor must fetch a new instruction every nanosecond. Unfortunately, the main memory system that holds the computer program needs as much as 70 nanoseconds to pull an instruction out of its memory banks and send it to the processor. Caches provide a solution to this speed mismatch. Caches are blocks of fast memory that temporarily hold instructions and data. The processor's fastest cache is called the primary or Level 1 (L1) cache and usually is integrated on the chip. L1 caches usually range from about 4 kilobytes to 128 kilobytes in size. Whenever the processor needs instructions and data, it looks in the L1 cache first. If it cannot find what it needs in the L1 cache, it looks in the secondary or Level 2 (L2) cache. Secondary caches are much larger than primary caches, but not as fast. When a processor cannot find the instructions and data it needs in either the L1 or L2 cache, it looks in main memory (dynamic random access memory).

A system bus (frontside bus) is the chip's interface to external devices. (The L2 cache bus is called the backside bus.) A bus is a physical channel through which data are transmitted from one part of the computer to another. All buses consist of two parts: a data bus, which transfers data, and an address bus, which transfers information about where the data should go. The peak capacity of a bus depends on its width and its clock frequency: a wider bus can transmit more bits per clock cycle, and a higher clock frequency transmits more bits in a given amount of time. A processor's peak bus bandwidth is the product of the width of its bus (measured in bytes) and the frequency at which the bus transfers data (measured in megahertz). For example, Intel's Pentium 4 processor uses a 64-bit bus that runs at 400 MHz, giving the processor a peak bandwidth of 3.2 GB per second.
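The bandwidth formula can be checked with a few lines of arithmetic. Below is a minimal sketch in Python; the 64-bit width and 400 MHz transfer rate for the Pentium 4 front-side bus are taken from the text above, and the function name is simply illustrative.

```python
# Peak bus bandwidth = bus width (in bytes) x transfer frequency (in MHz).
# The Pentium 4 figures come from the text above; this is an illustration,
# not a datasheet calculation.

def peak_bandwidth_mb_per_s(bus_width_bits: int, transfer_mhz: float) -> float:
    """Return peak bandwidth in megabytes per second."""
    bus_width_bytes = bus_width_bits // 8
    return bus_width_bytes * transfer_mhz

if __name__ == "__main__":
    mb_per_s = peak_bandwidth_mb_per_s(bus_width_bits=64, transfer_mhz=400)
    print(f"Peak bandwidth: {mb_per_s:.0f} MB/s = {mb_per_s / 1000:.1f} GB/s")
    # Prints: Peak bandwidth: 3200 MB/s = 3.2 GB/s
```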


CPU OPERATIONS

The execution of a machine-level instruction consists of two phases, the instruction phase and the execution phase, and each phase involves two processes. The time it takes to perform the instruction phase is called the instruction time (I-time), and the time it takes to perform the execution phase is called the execution time (E-time). Each instruction must go through both phases; the two phases, taken together, constitute a machine cycle. The control unit directs the phases of the machine cycle through microcode. Microcode consists of predefined, elementary operations, implemented as circuits, that the processor carries out when it executes an instruction; it is, in effect, software instructions that have been built as hardware circuits.

Instruction Phase
Process 1—Fetch the instruction. The control unit retrieves the instruction to be executed from memory.
Process 2—Decode the instruction. The control unit decodes the instruction so the central processor knows what is to be done; the necessary data are moved from memory to the registers, and the next instruction is identified.

Execution Phase
Process 3—Execute the instruction. The ALU does what it is instructed to do: either a mathematical operation or a logical comparison.
Process 4—Store the results. The results are stored in registers or in memory.
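The four processes of the machine cycle can be pictured as a small loop of code. The following is a toy sketch, not a model of any real processor: the instruction format, the register names, and the use of a Python dictionary as "memory" are all invented for illustration.

```python
# A toy machine cycle: fetch, decode, execute, store.
# The instruction format and register names are invented purely for illustration.

memory = {0: ("ADD", 1, 2), 1: 7, 2: 5, 3: None}   # address 3 will hold the result
registers = {}

def machine_cycle(program_counter: int) -> None:
    # Instruction phase, process 1: fetch the instruction from memory.
    opcode, src_a, src_b = memory[program_counter]
    # Instruction phase, process 2: decode it and move the needed data into registers.
    registers["A"], registers["B"] = memory[src_a], memory[src_b]
    # Execution phase, process 3: the ALU carries out the operation (here, addition).
    if opcode == "ADD":
        result = registers["A"] + registers["B"]
    # Execution phase, process 4: store the result (here, back to memory address 3).
    memory[3] = result

machine_cycle(0)
print(memory[3])   # 12
```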

PROCESSOR BENCHMARKS

The iCOMP (Intel Comparative Microprocessor Performance) Index from Intel provides a simple, relative measure of microprocessor performance. It is not a single benchmark, but a collection of benchmarks that take into account typical processing needs, including the increasing use of 3D, multimedia, and Internet technology and software. The higher the iCOMP rating, the higher the relative performance of a processor. For example, an Intel Pentium III processor running at 933 MHz has an iCOMP rating of 3100, and an Intel Pentium III processor running at 1.0 GHz has an iCOMP rating of 3280. Other chip benchmarks include the CPUmark, WinTune Integer Test, and SPECint. There are also benchmarks for specific applications, such as:
• Productivity software benchmarks (word processing, presentation, personal finance): SYSmark* 2001, SPECint* 2000
• Multimedia benchmarks: Video*2000-Performance, Video*2000-MPEG2 Encoding
• 3D/floating-point benchmarks: 3D WinBench* 2000-Processor Test, SPECfp* 2000
• Internet technology benchmarks: WebMark* 2001
Popular PC magazines (e.g., PC Magazine, PC World, Computer Shopper) provide less technical measures of processor performance, such as price, performance, reliability, services, and other factors.


MICROPROCESSOR ARCHITECTURES
Complex instruction set computing (CISC), the oldest approach to microprocessor design, appears most notably in Intel's x86 series, which first appeared in 1978 with the 8086 chip. CISC architectures attempted to use the smallest possible amount of system memory, which was then both expensive and slow. Mainframes of that era contained a few megabytes of memory, and early desktop systems had only a few kilobytes. Manufacturing processes also severely limited the number of transistors that could fit on a chip. (The 8086 had 29,000 transistors, compared with about 42 million in a Pentium 4.) As a result, Intel made compromises in the 1970s that continue to affect the x86 today: a small set of registers, variable-length instructions, and highly complex instructions that take a long time to execute. These 32-bit CISC architectures can address only 4 GB of main memory, which is already a limitation in high-end computers.

Reduced instruction set computing (RISC) uses simple, fixed-length instructions that execute quickly. Grouped together, several RISC instructions perform the same work as one CISC instruction. RISC processors also include a large number of registers.

Very long instruction word (VLIW) architectures take a different approach to processor organization. CISC and RISC designs use on-chip hardware resources to make complex decisions about scheduling machine-level instructions within a program, whereas VLIW architectures make these decisions when a program is compiled. This approach transfers complexity from hardware to software and makes the processor easier to design and more efficient, because software rather than hardware handles the difficult scheduling tasks. In practice, creating compilers that optimize code performance for VLIW architectures has been difficult. VLIW architectures have proven well suited to multimedia operations and scientific/engineering programs.

Explicitly parallel instruction computing (EPIC), the newest microprocessor architecture, combines the RISC and VLIW concepts. The EPIC approach embeds in the instruction stream explicit information that tells the processor which program instructions to execute in parallel. Intel's Itanium is the first implementation of the EPIC architecture.

ADVANCED CHIP TECHNOLOGIES

The Crusoe Chip by Transmeta
Transmeta designed its Crusoe microprocessor largely for portable devices that will supplement desktop PCs as a common means of connecting to the Internet. The Crusoe chip was built for mobile applications, and Transmeta claims that the processor can efficiently run software designed for larger Pentium chips (e.g., Windows or Linux), while consuming less electricity and generating less heat. In January 2001, Transmeta introduced three types of Crusoe chip: the TM3200, the TM5400, and the TM5600. The smallest (TM3200) is designed for handheld devices such as smart phones and pocket computers. The larger and faster TM5400 and TM5600 are designed to run Linux or Windows on notebook computers.

The Crusoe chip has its own instruction-set architecture, based on Transmeta's proprietary 128-bit VLIW (very long instruction word) design. The Transmeta design does have advantages over traditional microprocessors. The VLIW instruction set is smaller and lacks many of the specialized instructions that Intel has added to the Pentium series to optimize it for specific tasks such as streaming multimedia. That lowers the overall transistor count, reducing power consumption and generating less heat.

A TM5400 chip in an idle personal computer consumes about 16 percent of the power used by an Intel Mobile Pentium III in the same personal computer. For Web browsing and playing DVDs and MP3s, the TM5400 consumes between 20 and 30 percent of the power that the Mobile Pentium III does.

Software written for Intel x86 chips will not run directly on Crusoe chips. Instead, Crusoe chips run one custom software program, written by Transmeta, that dynamically translates x86 instructions into Crusoe instructions and then executes them. Transmeta calls this process "code morphing." The code-morphing software is stored on a reprogrammable flash ROM chip, making it easy to upgrade the software without having to buy replacement hardware. The Crusoe chip executes the entire range of x86 instructions by using its code-morphing software. For example, the code-morphing software might need to translate one complex Pentium instruction into 15 VLIW instructions. However, most of the time there is a one-to-one correspondence between x86 instructions and VLIW instructions. In fact, the Crusoe chip can outperform this one-to-one ratio. With simple x86 instructions, the code-morphing software can pack four of Intel's 32-bit x86 instructions into a single 128-bit VLIW instruction and execute them simultaneously. Therefore, the Crusoe chip might be slower with a few complex instructions, but it makes up for lost time with simpler instructions. (Source: Red Herring, March 2000; Red Herring, October 30, 2000; Red Herring, February 13, 2001; MIT Technology Review, November/December 2000.)
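The packing ratio mentioned above is straightforward arithmetic: four 32-bit x86 instructions fit exactly into one 128-bit VLIW word. The sketch below illustrates only the bundling idea; the instruction names are invented, and Transmeta's actual code-morphing software is of course far more sophisticated than this.

```python
# Toy illustration of packing simple 32-bit x86-style instructions into
# 128-bit VLIW bundles, four at a time. Instruction names are invented.

X86_INSTRUCTION_BITS = 32
VLIW_WORD_BITS = 128
SLOTS_PER_BUNDLE = VLIW_WORD_BITS // X86_INSTRUCTION_BITS   # 4

def bundle(instructions):
    """Group translated instructions into VLIW bundles of four."""
    return [instructions[i:i + SLOTS_PER_BUNDLE]
            for i in range(0, len(instructions), SLOTS_PER_BUNDLE)]

translated = ["mov", "add", "cmp", "jne", "mov", "sub"]
for word in bundle(translated):
    print(word)
# ['mov', 'add', 'cmp', 'jne']
# ['mov', 'sub']
```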

DNA Chips
DNA microarrays, or DNA chips, appeared in 1996 when Affymetrix introduced the first commercial version, which the company called GeneChip. Affymetrix used light-sensitive chemical reactions to grow a grid-like pattern of as many as 400,000 short DNA strands, called probes, on a glass wafer. Since each probe can bind to a different gene sequence in a sample of DNA, the chips allow researchers to perform what once would have been thousands of separate experiments all at the same time. DNA chips open up new possibilities: new understanding of the role genes play in heart disease or antibiotic resistance; tools for prenatal or infection diagnosis that incorporate all the genes of interest on a single chip; and massive-scale automated screening of potential drugs.

Advantages of DNA chips include:
• They work in parallel, processing all possible answers at the same time.
• They are fast.
• They are energy-efficient. Conventional chips waste about a billion times more energy per operation than do DNA chips.
• They have a huge storage capacity. One gram of DNA can hold the data of a trillion CDs.

Example of how DNA chips work:
1. Start with a group of patients, some of whom have one type of cancer and some of whom have another.
2. For each patient, take a sample of cancer cells and isolate all the genes that are active in those cells. Make copies of those genes, incorporating some special nucleotides, or DNA letters (DNA bases), that have a fluorescent dye attached to them.

3. Put the new gene copies onto a DNA microarray, a chip covered with a grid of several thousand probes—short stretches of DNA that each bind to a unique gene sequence.
4. When a probe matches one of the genes that are active in the cancer cells, it binds to the copy of that gene. Once binding takes place, wash away the extra free-floating DNA.
5. Put the DNA chip into the chip scanner. There, a laser shines light on the chip and causes the fluorescent dye to glow, making a pattern of light spots where labeled gene copies are bound to probes and dark spots where probes are unbound. The scanner detects the fluorescence and records an image of the grid of light and dark.
6. Using a computer that has been fed a map of where each probe is on the microarray, you can determine which genes are active in each sample. Careful analysis of these results can allow you to pinpoint small sets of genes that are active in one cancer but not the other. In the future, these genes could become targets for new drugs, or could be the basis for new, highly specific diagnostic tests.
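In software terms, step 6 boils down to mapping bright spots back to gene names through the probe layout. The following is a hypothetical sketch of that final step; the probe map, intensity values, and threshold are invented for illustration.

```python
# Toy version of reading a scanned microarray: a probe map ties each grid
# position to a gene, and bright spots mark genes that were active.
# All names, values, and the threshold are invented for illustration.

probe_map = {(0, 0): "gene_A", (0, 1): "gene_B", (1, 0): "gene_C", (1, 1): "gene_D"}
scan_intensity = {(0, 0): 0.92, (0, 1): 0.08, (1, 0): 0.75, (1, 1): 0.11}
THRESHOLD = 0.5   # spots brighter than this count as "bound"

active_genes = [gene for spot, gene in probe_map.items()
                if scan_intensity[spot] > THRESHOLD]
print(active_genes)   # ['gene_A', 'gene_C']
```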


Optical Computing
In today's chips, data move on pathways made of very thin strands of metal (aluminum or copper). In the near future, the most critical of those pathways could be replaced with fiber circuits carrying tiny pulses of laser light. It may even be possible to dispense with some of the fibers and wires altogether and move laser light through open circuits within the chip. This change could be critical: the speed of computer chips could reach an absolute barrier in the next decade. The exact barrier varies, because how much data a wire can transmit is determined by the ratio of its length to its thickness; if a wire is too long or too thin, its bandwidth will be low. Part of the problem is the physical properties of metal. Metal interconnects can allow data to move only so fast, so optical interconnects may have to take the place of metal ones wherever bottlenecks occur. In effect, researchers want to add optical express lanes to conventional chips.

Electricity moves more slowly through a metal wire than light moves through air or an optical fiber, because electrical properties such as resistance limit the throughput of metal wires and also produce a lot of heat. Light has another key advantage: many different frequencies of light can be sent down the same fiber. Such multiplexing, which is done routinely in telecommunications, could allow several metal wires to be replaced by just one fiber that transmits just as much data.

In optical systems, laser light pulses carry the data. Researchers want to develop a silicon laser that can be integrated within the chip and flicked on and off to produce these pulses. Researchers in Europe and the United States have recently discovered techniques to get silicon to amplify light and then emit it with some efficiency, key steps toward a silicon laser. Another problem is that silicon lasers must be powered by electricity coming from other parts of the chip, a problem that has not been solved. Further, a way to route data within an optical chip is, as yet, unknown. In traditional chips, the flow of current around the circuits is governed by transistors, which are tiny switches. Making analogous optical devices on the tiny scale demanded by chip making is a breakthrough waiting to be achieved.
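The multiplexing argument is simple arithmetic: if one fiber carries many independent wavelengths, its aggregate capacity is the per-wavelength rate multiplied by the number of wavelengths. The sketch below uses purely illustrative numbers; none of the rates or channel counts come from the text.

```python
# Rough comparison of N metal wires against one fiber carrying N wavelengths.
# The data rates and channel count below are illustrative assumptions only.

WIRE_RATE_GBPS = 1.0          # assumed capacity of one on-chip metal link
WAVELENGTH_RATE_GBPS = 1.0    # assumed capacity of one optical wavelength
NUM_CHANNELS = 8              # assumed number of multiplexed wavelengths

wires_needed = NUM_CHANNELS                      # one wire per channel
fiber_capacity = WAVELENGTH_RATE_GBPS * NUM_CHANNELS

print(f"{wires_needed} wires at {WIRE_RATE_GBPS} Gbps  vs  "
      f"1 fiber at {fiber_capacity} Gbps aggregate")
# 8 wires at 1.0 Gbps  vs  1 fiber at 8.0 Gbps aggregate
```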


Reconfigurable Processors
So far, no one has figured out how to produce a chip that meets all the criteria for the ultimate consumer device. Such a chip would have to offer flexibility, high performance, low power consumption, and low cost, and would need to get to market quickly, before the multiple features it supported became outdated. Now a new kind of chip may reshape the semiconductor arena. The chip adapts to any programming task by effectively erasing its hardware design and regenerating new hardware that is perfectly suited to run the software at hand. These chips, referred to as reconfigurable processors, could tilt the balance of power that has preserved a decade-long standoff between programmable chips and hard-wired custom chips. These new chips are able to rewire themselves on the fly to create the exact hardware needed to run a piece of software at the utmost speed. If silicon can become dynamic, then so will the devices. No longer will you have to buy both a camera and a tape recorder; you could buy one device and then download a new function for it when you want to take some pictures or make a recording.

These chips are programmable logic devices with hardware that can be rewritten hundreds of times a second. Each has two parts: one that serves as a quickly accessible library, or cache, of hardware components, and another that is like a blank chalkboard. As needed, the chip takes a hardware component from the library and places it onto the blank chalkboard, where the component executes the software running at the moment. When it is finished, the hardware component is erased, and a new component is placed in to process the next piece of software. It takes complex scheduling to map the right piece of hardware onto the chalkboard at exactly the right time, but the advantages are potentially huge. The chip can be smaller because its chalkboard allows it to fetch hardware components from memory, meaning that it does not use valuable chip space to store the entire library of hardware components, as a microprocessor does. Without such a chalkboard, a microprocessor has the whole library in place and drawing electricity at all times, even though only 1 to 5 percent of it is being used at any given time. By contrast, a reconfigurable chip uses only the piece of hardware that it needs at any one time, and it uses power only for the active function.
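The library-and-chalkboard mechanism can be sketched in a few lines. The following is a hypothetical illustration only: the component names are invented, the "chalkboard" is reduced to a single slot, and real reconfigurable devices manage many regions of logic at once.

```python
# Toy model of a reconfigurable chip: a library (cache) of hardware
# components and a single "chalkboard" slot that is rewritten as needed.
# Component names and behaviour are invented for illustration.

component_library = {
    "image_decoder": lambda data: f"decoded {data}",
    "audio_recorder": lambda data: f"recorded {data}",
}

chalkboard = None   # the currently configured hardware function

def run_task(component_name, data):
    global chalkboard
    # Fetch the needed component from the library and overwrite the chalkboard.
    chalkboard = component_library[component_name]
    return chalkboard(data)

print(run_task("image_decoder", "photo.jpg"))     # decoded photo.jpg
print(run_task("audio_recorder", "voice note"))   # recorded voice note
```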

Molecular Computing
In this new field—which merges the technologies of electrical engineering with the materials of physical chemistry—individual molecules take the place of switches etched on silicon wafers. Because the molecules are roughly one-millionth the size of today's silicon switches, computing could be performed in tiny spaces using far less power. Molecular switches can be grown easily in mass quantities, require very little power, and are relatively cheap. Silicon switches, by comparison, are larger, more expensive, slower to produce, and require much more power.

Intelligent Random Access Memory (IRAM)
This advance in chip design combines a microprocessor and a memory chip on a single silicon wafer. University of California at Berkeley Professor Dave Patterson invented the chip, and IBM will fabricate the prototype. The plan was to begin testing the prototype at the end of 2001 in applications such as multimedia and portable systems. The chip may accelerate the market for a new generation of handheld computers that would combine wireless communications, television, speech recognition, graphics, and video games. One application is to leverage IRAM so that a handheld like the Palm can be used as a tape recorder with speech recognition and file-index capabilities.

IRAM has the potential to remove the bottleneck that has restrained processing speeds in microprocessors. Over the last two decades, the speed of microprocessors has increased more than 100 times. But while memory chips (DRAMs) have kept pace in terms of capacity, their speed has increased by only a factor of ten. As a result, microprocessors spend more time waiting for data and less time doing valuable computations. As the gap between the two speeds widens, methods that help alleviate the problem, such as memory caching, are becoming less useful.
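The widening gap follows directly from the two growth figures just quoted. A minimal check, using the roughly 100-fold and 10-fold factors from the text:

```python
# If processor speed grew ~100x over two decades while DRAM speed grew ~10x,
# the relative processor-memory gap widened by roughly a factor of ten.
# The two factors come from the text; the framing is illustrative.

processor_speedup = 100
dram_speedup = 10

gap_growth = processor_speedup / dram_speedup
print(f"Processor-memory speed gap grew by about {gap_growth:.0f}x")   # 10x
```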


HARD DISK DRIVES
In 1956, IBM released the first disk drive. The 350 RAMAC, a 50-disk storage unit, was a huge device that leased for $3,200 a month and stored 5 megabytes. In 1980, IBM produced the first storage system that could hold 1 billion bytes of data. Known as the 3380 disk drive, it cost $40,000, stood as tall as a refrigerator, and weighed 550 pounds. In 2000, IBM released the Microdrive, with a capacity of 1 gigabyte, a cost of $499, the size of a matchbook, and a weight of less than one ounce. By the end of 2001, the Microdrive will have a capacity of 2 GB—enough capacity for the equivalent of two pickup trucks full of printed material.

Hard disk drive technologies can be distinguished by several characteristics, including the size of the unit, the areal density, and the speed of data access. By the second quarter of 2001, high-end hard disk drives were 3.5-inch fixed head/disk assemblies, 1.6 inches tall (known as half-height), housing 12 platters and 24 read/write heads with a capacity of 180 GB.

The most common metric of storage technology is areal density, generally expressed as megabits per square inch. IBM researchers have exceeded the superparamagnetic limit—a point at which the tiny magnetic areas that store ones and zeros on the rotating platters used in hard disks become unstable. As a result, by the first quarter of 2001, IBM was shipping commercial products with areal densities as high as 25.7 billion bits (3.26 GB) per square inch. Many researchers say that with current technologies, the practical limit to areal (storage) density may be as high as 100 billion bits (12.5 GB) per square inch. It therefore will be possible to build desktop drives capable of storing 400 GB of data and portable drives capable of storing 200 GB of data. Such a portable drive could hold the equivalent of 42 DVDs or 300 CDs' worth of data. Storage density is increasing at nearly 100 percent per year, outpacing the rate that Moore's Law predicts for semiconductors. The higher the density, the greater the capacity of a storage medium.

Recall that, to access a given piece of data, the read/write heads pivot across the rotating disks to locate the right track, calculated from an index table, and the head then waits as the disk rotates until the right sector is underneath it. Disk drives rotate at a constant speed, called the rotational speed. The speed at which hard disks rotate is significant because it determines how long it takes for the requested data to come into position under the read/write head. This delay is referred to as rotational latency, and on average it equals the time of one-half of a disk revolution. In September 2000, the average high-performance disk rotated at 10,000 rpm. Early in 2000, Seagate Technology introduced its Cheetah X15 drive with a rotational speed of 15,000 rpm. Another element that affects a drive's performance is seek time, the time it takes for the read/write heads to move into position to read the data. The final element that affects a drive's performance is the transfer rate, measured in bits per second, which indicates the speed at which the read/write heads can transfer data from the rotating disks.
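Average rotational latency follows directly from the rotational speed: one revolution takes 60/rpm seconds, and the average wait is half of that. A minimal sketch using the two drive speeds mentioned above:

```python
# Average rotational latency = time for half a revolution.
# One revolution takes 60 / rpm seconds.

def avg_rotational_latency_ms(rpm: int) -> float:
    seconds_per_revolution = 60.0 / rpm
    return (seconds_per_revolution / 2.0) * 1000.0   # convert to milliseconds

for rpm in (10_000, 15_000):
    print(f"{rpm} rpm -> {avg_rotational_latency_ms(rpm):.1f} ms average latency")
# 10000 rpm -> 3.0 ms average latency
# 15000 rpm -> 2.0 ms average latency
```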


OTHER INTERFACES TO STORAGE SYSTEMS

Fibre channel is a high-speed interface used to connect storage systems and host servers, and it has become a foundation technology for SANs (discussed in the body of the chapter). Originally, the technology was used exclusively with fiber-optic cable, but fibre channel can now also be used with copper cabling.

FireWire, developed by Apple, is a high-speed interface that supports high-volume data transfers. FireWire is currently in limited use (on digital cameras, set-top boxes, printers, camcorders, and high-end computer systems), but it is expected to become more widely available in consumer electronics devices and PCs sold to consumers.

InfiniBand offers a new high-speed interconnection to storage systems. Intended primarily as a next-generation replacement for communication between processors and storage devices, InfiniBand offers very high-volume interconnection speeds over fiber and copper.

The universal serial bus (USB) was originally developed as a means of replacing the serial and parallel ports that have historically been used to connect low-speed peripheral devices to PCs. (These devices include the keyboard and mouse, printers, scanners, and docking cradles for personal digital assistants.) USB 2.0 supports a broader range of I/O devices, including the full range of disk storage devices as well as digital video. USB 2.0 also has the ability to connect new devices without rebooting the system, a capability called hot plugging.

ADVANCED STORAGE TECHNOLOGIES

Holographic Optical Storage

Three-dimensional holographic storage promises smaller storage devices, higher storage capacities, and faster data transfer rates. IBM labs anticipate that the first generation of these devices could store 125 GB of information on a removable 5.25-inch holographic disk, and the potential exists to store one terabyte of data on a single 5.25-inch disk. In addition, holographic storage has the possibility of reading and writing data one million bits at a time, instead of one bit at a time as with magnetic storage. This means that you could duplicate an entire DVD movie in a few seconds.

Holographic storage relies mainly on laser light and a photosensitive material, usually a crystal or a polymer, to save data. It works by splitting a laser beam in two. One beam contains the data and is referred to as the object beam; the other holds the location of the data and is called the reference beam. The two beams intersect to create an intricate pattern of light and dark bands. A replica of this interference pattern is engraved three-dimensionally into the photosensitive material and becomes the hologram. To retrieve the stored data, the reference beam is shone into the hologram, which diffracts the light to replicate the data beam.
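Reading a million bits per access is what makes the "DVD in a few seconds" claim plausible. The rough estimate below is heavily hedged: only the bits-per-access figure comes from the text, while the page rate and the DVD capacity are assumptions chosen for illustration.

```python
# Back-of-the-envelope: how long to read a DVD-sized file if each optical
# access returns one million bits. The page rate and DVD size are assumptions.

BITS_PER_PAGE = 1_000_000        # from the text: ~1 million bits per access
PAGES_PER_SECOND = 10_000        # assumed page rate, purely illustrative
DVD_BYTES = 4_700_000_000        # assumed single-layer DVD capacity (~4.7 GB)

throughput_bits_per_s = BITS_PER_PAGE * PAGES_PER_SECOND
seconds = (DVD_BYTES * 8) / throughput_bits_per_s
print(f"~{seconds:.0f} seconds to read a 4.7 GB DVD")   # ~4 seconds
```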


ENTERPRISE STORAGE SYSTEMS
Enterprise storage systems are built around a high-speed interconnect (shared bus or switched fabric) that uses technology proprietary to the manufacturer. The system is then linked to the host via a fibre channel interface. Enterprise storage systems are designed to be modular, so the customer can add storage incrementally after purchase. A number of characteristics distinguish enterprise storage from server-based storage:
• Multiple heterogeneous host support. Enterprise storage must allow a common storage resource to be accessed from hosts and servers of various types running various operating systems.
• Data sharing. Enterprise storage must allow multiple servers that handle different parts of the same application to use the same data files. When multiple systems share access to the same files, the data must be protected so that multiple users or applications do not try to update the same data at the same time (a minimal sketch of this idea follows the list).
• High performance. Enterprise storage must be able to deal with the quantity and variety of transactions requested of it without unacceptable degradation of performance. Because storage access requests may come from distant, diverse locations, an additional goal is to provide uniform performance to the remote user population.
• High availability. Enterprise systems are synonymous with worldwide operation and no allowance for downtime. These systems must have integrated fault tolerance and be able to perform both scheduled and unscheduled maintenance without disrupting business.
• Disaster tolerance. Most disasters are unpredictable and unavoidable. Regardless of the cause or location, disasters should not result in an enterprise environment being down for more than a matter of minutes at most. Recovery of critical systems must be part of the business plan, which means having redundant backup hardware, software, and current data files.
• Storage management and optimization. For the enterprise, storage management means centralized management with decentralized access and control.
• Numerous integrated technologies. Enterprise storage is not just disk storage; it also encompasses tape and other removable media types, as well as the libraries that manage them.
• Scalable resources, function, and performance. All systems must be expandable online, and this expansion must be seamless and easy to maintain.
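The concurrent-update protection mentioned under data sharing is the familiar locking problem. The sketch below is only a toy, in-process illustration of the idea; real enterprise storage systems rely on distributed lock managers and file-system mechanisms, not a Python lock.

```python
# Minimal illustration of protecting shared data from concurrent updates.
# A real enterprise storage system would use a distributed lock manager;
# this toy example just shows the idea with an in-process lock.

import threading

balance = 0
balance_lock = threading.Lock()

def apply_update(amount: int) -> None:
    global balance
    # Only one writer at a time may modify the shared record.
    with balance_lock:
        balance += amount

threads = [threading.Thread(target=apply_update, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)   # always 100, because the lock is held during each update
```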

SUPERCOMPUTERS
There are two types of supercomputer architectures: vector processing and massively parallel processing (MPP). Vector processing supercomputers, such as the Cray T90 and NEC's SX-5, use a small number (as few as four) of powerful vector processors. Vector processing performs operations on vectors, which are ordered collections of individual data elements. This approach is in contrast to scalar processing, which processes one data element at a time. Whole vectors are fed into the pipeline (in contrast to the loading of individual elements on a scalar processor), and the same operation is performed on each element of the vector. This approach, referred to as single instruction, multiple data (SIMD) technology, is also being built into microprocessors. Because of the small number of processors, it is easy for the operating system to divide tasks among them, and the overhead associated with interprocessor communications is relatively low.
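The difference between scalar and vector (SIMD) processing can be seen in miniature below. Ordinary Python executes both versions sequentially, so the sketch illustrates only the programming model: one operation issued per element versus one conceptual operation over the whole vector.

```python
# Scalar processing: one element at a time.
# Vector (SIMD) processing: one instruction applied to a whole vector.
# Plain Python is sequential, so this only illustrates the programming model.

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]

# Scalar style: the "add" operation is issued once per element.
scalar_result = []
for i in range(len(a)):
    scalar_result.append(a[i] + b[i])

# Vector style: one conceptual operation over the whole vector.
vector_result = [x + y for x, y in zip(a, b)]

print(scalar_result == vector_result)   # True
```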


A key aspect of vector computing has been the development of memory systems that can stream data to the processor fast enough that operations can be performed without time lost waiting for vectorized data.

A supercomputer is considered a massively parallel processor (MPP) if it operates with at least 64 processors, but MPP supercomputers can operate with thousands of processors. Examples of MPP supercomputers include Cray's T3E (up to 2,048 processors) and IBM's RS/6000 SP. Vendors of MPP systems typically use commodity microprocessors from Intel or one of the RISC vendors rather than the specialized kind found in vector systems. The processors usually are grouped into nodes; each node typically consists of several processors in an SMP arrangement, sharing memory and a memory bus. Each node might consist of 4, 8, or 16 processors that present a single-system image to the application, share a single memory bus, and have their own local shared memory resources. The nodes are then linked via a high-speed interconnect. One benefit of an MPP system is the greatly reduced contention for memory resources compared with a shared-memory SMP system with the same number of processors. Because each node has its own memory and memory bus, contention among nodes for memory resources can be eliminated to the extent that an application can be partitioned and parallelized successfully. Unfortunately, the multiple nodes of an MPP system do not look like a single system to the application. Instead, each node must run its own copy of the program instructions (called the code segment), making memory management complex and resulting in slower access to memory. A significant repercussion of this issue is that applications must be written specifically for, or modified for, an MPP environment, which has greatly slowed the adoption of massively parallel architectures. In addition, a large number of nodes increases the overhead associated with interprocessor communications.

Supercomputing has evolved through three eras. The first era, best represented by the Cray 1, achieved performance by building fast processors. The second era, dominated by SMP and MPP systems, sought performance through thousands of processors. In the third era (just begun), organizations are coupling very large numbers of heterogeneous computers on networks to create virtual supercomputers. This trend builds on the Beowulf Project, which in 1994 began trying to build a supercomputer by linking together multiple commodity systems. There are, however, new systems under development that will redefine the rules of high-capacity computing. Cray currently is developing the Cray SV2 system, which will be the first system to offer both vector and MPP capabilities in a single architecture. (The new systems should ship in the second half of 2002.)

Supercomputers have always been reserved for a small number of select applications, and their greatest impact is probably their use as a proving ground for new technology rather than as a general commercial platform. For example, the U.S. government's $2.5 billion ASCI project is designed to simulate the aging of nuclear weapons beyond their designated shelf life. In December 1999, IBM announced plans to build a 1 petaflops supercomputer.
This system, known as Blue Gene because it will be used to study the structure of proteins, will be based on a highly optimized MPP architecture capable of automatically detecting and overcoming failures of individual processors and computing threads. IBM calls the architecture SMASH, for "Simple, Many, and Self-Healing." Blue Gene will consist of more than 1 million processors, each capable of 1 Gflops performance. The processors themselves will be based on a simplified RISC architecture, with fewer than 60 instructions in its instruction set, compared with 200 for most current RISC architectures.

If successful, the same technology then could be applied to commercial scientific and engineering applications, as well as to memory-intensive applications such as data warehouses.
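The Blue Gene figures quoted above multiply out neatly, as a quick check shows (both numbers come from the text):

```python
# 1 million processors x 1 Gflops each = 1 petaflops.
# Both figures are taken from the text above.

processors = 1_000_000
gflops_per_processor = 1          # 1 Gflops = 1e9 floating-point operations/second

total_flops = processors * gflops_per_processor * 1e9
print(f"{total_flops:.0e} flops = {total_flops / 1e15:.0f} petaflops")
# 1e+15 flops = 1 petaflops
```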

MAINFRAMES
The current version of IBM's mainframe architecture is the zSeries 900, replacing the earlier S/390 line. Most mainframes today run OS/390 or IBM's new 64-bit z/OS, which was introduced in October 2000. Comfortable with mainframe scalability, security, availability, and manageability, companies continue to add significant processing power to S/390 systems and to consider a migration to IBM's zSeries, whose 64-bit addressing provides the necessary capacity for unpredictable workloads and growing enterprise applications. IBM's eServer zSeries 900 was designed with flexibility in mind: the system's Intelligent Resource Director feature automatically reallocates I/O paths and other resources on the fly to varying application workloads as needed. This reallocation enables applications to expand and contract their resource use.

Mainframes can communicate on TCP/IP networks, and modern tools make the mainframe a good platform for building Web-enabled applications. One significant factor in the reemergence of the mainframe has been UNIX compatibility, as well as support for Internet technologies. For example, mainframes can host e-commerce servers and Web application servers. Also of note is increased support for Java and Linux, making mainframes more suitable as application development and integration platforms. (In May 2000, IBM announced mainframe support for Linux.)

In 1994, IBM moved to CMOS (complementary metal-oxide semiconductor) technology in its mainframes. As a result, mainframes use less energy, run cooler, take up less space, and cost less than they did with the previously dominant emitter-coupled logic (ECL) technology. As of October 2000, the zSeries 900 had 250 uniprocessor MIPS, 2,500 total estimated MIPS, and 16 CPU engines. The 16-way model has 20 CPUs (four CPUs are reserved for handling I/O functions). The zSeries can be configured in numerous ways and can operate independently or as part of a Parallel Sysplex 32-way cluster of servers with as many as 640 processors.
