This article is about personal computers in general. For computers generally referred to as "PCs", see IBM PC compatible. For hardware components dealing with personal computers, see Personal computer hardware.
An illustration of a modern desktop computer.
A personal computer (PC) is any general-purpose computer whose size, capabilities, and original sales price make it useful for individuals, and which is intended to be operated directly by an end user with no intervening computer operator. This is in contrast to the batch processing or time-sharing models which allowed large expensive mainframe systems to be used by many people, usually at the same time, or large data processing systems which required a full-time staff to operate efficiently.
A personal computer may be a desktop computer, a laptop, a tablet PC, or a handheld PC (also called a palmtop). The most common microprocessors in personal computers are x86-compatible CPUs. Software applications for personal computers include word processing, spreadsheets, databases, Web browsers and e-mail clients, games, and myriad personal productivity and special-purpose software applications. Modern personal computers often have high-speed or dial-up connections to the Internet allowing access to the World Wide Web and a wide range of other resources.
A PC may be used at home or in an office. Personal computers may be connected to a local area network (LAN), either by a cable or a wireless connection.
While early PC owners usually had to write their own programs to do anything useful with the machines, today's users have access to a wide range of commercial and non-commercial software, which is provided in ready-to-run or ready-to-compile form. Since the 1980s, Microsoft and Intel have dominated much of the personal computer market with the Wintel platform.
Contents
1 History
1.1 Market and sales
1.1.1 Average selling price
2 Types
2.1 Workstation
2.2 Desktop computer
2.2.1 Single unit
2.3 Nettop
2.4 Laptop
2.4.1 Netbook
2.5 Tablet PC
2.6 Ultra-Mobile PC
2.7 Home theater PC
2.8 Pocket PC
3 Hardware
3.1 Computer case
3.2 Central processing unit
3.3 Motherboard
3.4 Main memory
3.5 Hard disk
3.6 Video card
3.7 Visual display unit
3.8 Keyboard
3.9 Mouse
3.10 Other components
4 Software
4.1 Operating system
4.1.1 Microsoft Windows
4.1.2 Mac OS X
4.1.3 Linux
4.2 Applications
5 See also
6 Notes
7 References
8 External links
History
Main article: History of personal computers
In what was later to be called The Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of what would become the staples of daily working life in the 21st century: e-mail, hypertext, word processing, video conferencing, and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time.
By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single person.
The HP 9830 was an early desktop computer with a printer.
In the 1970s Hewlett-Packard introduced computers that were fully programmable in BASIC and fit entirely on top of a desk, including a keyboard, a small one-line display, and a printer. The Wang 2200 of 1973 had a full-size CRT and cassette tape storage. The IBM 5100 in 1975 had a small CRT display and could be programmed in BASIC and APL. These were generally expensive specialized computers sold for business or scientific uses. The introduction of the microprocessor, a single chip with all the circuitry that formerly occupied large cabinets, led to the proliferation of personal computers after 1975.
Early personal computers - generally called microcomputers - were often sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front panel lamps. Practical use required peripherals such as keyboards, computer terminals, disk drives, and printers. The Micral N was the earliest commercial, non-kit "personal" computer based on a microprocessor, the Intel 8008; it was built starting in 1972, and about 90,000 units were sold. In 1976 Steve Jobs and Steve Wozniak sold the Apple I computer circuit board, which, unlike other hobbyist computers of its day sold as electronics kits, was fully assembled and contained about 30 chips. The first complete personal computer was the Commodore PET introduced in January 1977. It was soon followed by the popular Apple II. Mass-market pre-assembled computers allowed a wider range of people to use computers, focusing more on software applications and less on development of the processor hardware.
Through the late 1970s and into the 1980s, computers were developed for household use, with software for personal productivity, programming and games. One such machine, the Commodore 64, totaled 17 million units sold, making it the best-selling single personal computer model of all time.[1] Somewhat larger and more expensive systems (although still low-cost compared with minicomputers and mainframes) were aimed at office and small business use. Workstations are characterized by high-performance processors and graphics displays, large local disk storage, networking capability, and a multitasking operating system. In 1984, Dr. Mark Dean created the ISA system bus, which allows a personal computer to have several devices, such as a printer, scanner or modem, connected to it at once. The ISA bus was widely used for many years, and Dean received the Black Engineer of the Year President's Award in 1997 for his contribution. Expansion card slots, however, had already existed before the IBM PC's ISA bus, for example in the Apple II.
The IBM 5150, introduced in 1981.
Eventually, due to the influence of the IBM PC on the personal computer market, personal computers and home computers lost any technical distinction. Business computers acquired color graphics capability and sound, and users of home computers and game systems used the same processors and operating systems as office workers. Mass-market computers had graphics capabilities and memory comparable to dedicated workstations of a few years before. Even local area networking, originally a way to allow business computers to share expensive mass storage and peripherals, became a standard feature of personal computers used at home.
Market and sales
See also: Market share of leading PC vendors
Personal computers worldwide, in millions, distinguished by developed and developing world.
In 2001, 125 million personal computers were shipped, compared with 48 thousand in 1977. More than 500 million personal computers were in use in 2002, and one billion personal computers had been sold worldwide from the mid-1970s up to that time. Of the latter figure, 75 percent were professional or work related, while the rest were sold for personal or home use. About 81.5 percent of personal computers shipped had been desktop computers, 16.4 percent laptops and 2.1 percent servers. The United States had received 38.8 percent (394 million) of the computers shipped, Europe 25 percent, and 11.7 percent had gone to the Asia-Pacific region, the fastest-growing market as of 2002. The second billion was expected to be sold by 2008.[2] Almost half of all households in Western Europe had a personal computer, and a computer could be found in 40 percent of homes in the United Kingdom, compared with only 13 percent in 1985.[3]
Global personal computer shipments were 264 million units in 2007, according to iSuppli,[4] up 11.2 percent from 239 million in 2006.[5] In 2004, global shipments were 183 million units, an 11.6 percent increase over 2003.[6] In 2003, 152.6 million computers were shipped, at an estimated value of $175 billion.[7] In 2002, 136.7 million PCs were shipped, at an estimated value of $175 billion.[7] In 2000, 140.2 million personal computers were shipped, at an estimated value of $226 billion.[7] Worldwide shipments of personal computers surpassed the 100-million mark in 1999, growing to 113.5 million units from 93.3 million units in 1998.[8] In 1999, Asia had 14.1 million units shipped.[9]
As of June 2008, the number of personal computers in use worldwide hit one billion, while another billion was expected to be reached by 2014. Mature markets like the United States, Western Europe and Japan accounted for 58 percent of the worldwide installed PCs. The emerging markets were expected to double their installed PCs by 2012 and to take 70 percent of the second billion PCs. About 180 million computers (16 percent of the existing installed base) were expected to be replaced and 35 million to be dumped into landfill in 2008. The whole installed base grew 12 percent annually.[10][11]
In the developed world, there has been a vendor tradition of adding functions to keep the prices of personal computers high. However, since the introduction of the One Laptop per Child foundation and its low-cost XO-1 laptop, the computing industry has begun to pursue lower prices as well. Although netbooks had been introduced only a year earlier, 14 million were sold in 2008.[12] Besides the regular computer manufacturers, companies making especially rugged versions of computers have sprung up, offering alternatives for people operating their machines in extreme weather or environments.[13]
Average selling price
For Microsoft Windows systems, the average selling price (ASP) declined in 2008/2009, possibly due to low-cost netbooks, with desktop computers averaging $569 and laptops $689 at U.S. retail in August 2008. In 2009, the ASP had fallen further to $533 for desktops and $602 for notebooks by January, and to $540 and $560 respectively in February.[14] According to research firm NPD, the average selling price of all Windows portable PCs fell from $659 in October 2008 to $519 in October 2009.[15]
Types
Workstation
Sun SPARCstation 1+, with a 25 MHz RISC processor, from the early 1990s.
Main article: Workstation
A workstation is a high-end personal computer designed for technical or scientific applications. Intended primarily to be used by one person at a time, they are commonly connected to a local area network and run multi-user operating systems. Workstations are used for tasks such as computer-aided design, drafting and modelling, computation-intensive scientific and engineering calculations, image processing, architectural modelling, and computer graphics for animation and motion picture visual effects.[16]
Desktop computer
Main article: Desktop computer
Dell OptiPlex desktop computer.
Prior to the widespread use of PCs, a computer that could fit on a desk was considered remarkably small. Today the phrase usually indicates a particular style of computer case. Desktop computers come in a variety of styles ranging from large vertical tower cases to small form factor models that can be tucked behind an LCD monitor. In this sense, the term 'desktop' refers specifically to a horizontally oriented case, usually intended to have the display screen placed on top to save space on the desk top. Most modern desktop computers have separate screens and keyboards.
Single unit
Single unit PCs (also known as all-in-one PCs) are a subtype of desktop computers that combine the monitor and case of the computer within a single unit. The monitor often utilizes a touchscreen as an optional method of user input; however, detached keyboards and mice are normally still included. The inner components of the PC are often located directly behind the monitor, and many are built similarly to laptops.
Nettop
Main article: Nettop
A subtype of desktops, called nettops, was introduced by Intel in February 2008 to describe low-cost, lean-function desktop computers. A similar subtype of laptops (or notebooks) is the netbook (see below). These feature the Intel Atom processor, which enables them to consume less power and to be built into small enclosures.
Laptop
Main article: Laptop
A mid-range HP laptop.
A laptop computer, or simply laptop, also called a notebook computer or sometimes a notebook, is a small personal computer designed for portability. Usually all of the interface hardware needed to operate the laptop, such as USB ports (previously parallel and serial ports), graphics card, sound channel, etc., is built into a single unit. Laptops contain high-capacity batteries that can power the device for extended periods, enhancing portability. Once the battery charge is depleted, it must be recharged through a power outlet. In the interest of saving power, weight and space, they usually share RAM with the video channel, slowing their performance compared to an equivalent desktop machine.
One main drawback of the laptop is that, due to the size and configuration of its components, relatively little can be done to upgrade the overall computer from its original design. Internal upgrades are either not recommended by the manufacturer, can damage the laptop if done without proper care or knowledge, or are in some cases impossible, making the desktop PC more modular. Some internal upgrades, such as memory and hard disk upgrades, are often straightforward; a display or keyboard upgrade is usually impossible. The laptop has the same access as the desktop to a wide variety of devices, such as external displays, mice, cameras, storage devices and keyboards, which may be attached externally through USB ports and other, less common ports such as external video.
Subnotebooks are a subtype of notebooks: computers with most of the features of a standard laptop computer, but smaller. They are larger than hand-held computers, and usually run full versions of desktop/laptop operating systems. Ultra-Mobile PCs (UMPC) are usually considered subnotebooks, or more specifically, subnotebook tablet PCs (see below). Netbooks are sometimes considered in this category, though they are sometimes separated into a category of their own (see below).
Desktop replacements, meanwhile, are large laptops meant to replace a desktop computer while keeping the mobility of a laptop. Entertainment laptops emphasize large, HDTV-resolution screens and video processing capabilities.
Netbook
Main article: Netbook
An HP netbook.
Netbooks (also called mini notebooks or subnotebooks) are a rapidly evolving[17] category of small, light and inexpensive laptop computers suited for general computing and accessing web-based applications; they are often marketed as "companion devices," that is, to augment a user's other computer access.[17] Walt Mossberg called them a "relatively new category of small, light, minimalist and cheap laptops."[18] By August 2009, CNET called netbooks "nothing more than smaller, cheaper notebooks."[17]
At their inception in late 2007, as smaller notebooks optimized for low weight and low cost,[19] netbooks omitted key features (e.g., the optical drive), featured smaller screens and keyboards, and offered reduced specifications and computing power. Over the course of their evolution, netbooks have ranged in size from below 5"[20] to over 13",[21] and typically weigh around 1 kg (2-3 pounds). Often significantly less expensive than other laptops,[22] by mid-2009 netbooks had been offered to users "free of charge", with an extended service contract purchase.[23]
In the short period since their appearance, netbooks have grown in size and features, now converging with new smaller, lighter notebooks. By mid 2009, CNET noted "the specs are so similar that the average shopper would likely be confused as to why one is better than the other," noting "the only conclusion is that there really is no distinction between the devices."[17]
Tablet PC
Main article: Tablet PC
HP Compaq tablet PC with rotating/removable keyboard.
A tablet PC is a notebook- or slate-shaped mobile computer, first introduced by Pen Computing in the early 1990s with their PenGo Tablet Computer and popularized by Microsoft. Its touchscreen or graphics tablet/screen hybrid technology allows the user to operate the computer with a stylus or digital pen, or a fingertip, instead of a keyboard or mouse. The form factor offers a more mobile way to interact with a computer. Tablet PCs are often used where normal notebooks are impractical or unwieldy, or do not provide the needed functionality.
As technology and functionality continue to progress, prototype tablet PCs will continue to emerge. The Microsoft Courier, a personal business device, has two 7" screens that support multi-touch gestures, Wi-Fi capability and a built-in camera. The device appears intended as a replacement for traditional planners while offering what most digital planners cannot: two pages and large writing spaces.[24]
Ultra-Mobile PC
Main article: Ultra-Mobile PC
Samsung Q1 Ultra-Mobile PC.
The ultra-mobile PC (UMPC) is a specification for a small form factor of tablet PCs. It was developed as a joint development exercise by Microsoft, Intel, and Samsung, among others. Current UMPCs typically feature the Windows XP, Windows Vista, Windows 7, or Linux operating system and low-voltage Intel Atom or VIA C7-M processors.
Home theater PC
Main article: Home theater PC
Antec Fusion V2 home theater PC with keyboard on top.
A home theater PC (HTPC) is a convergence device that combines the functions of a personal computer and a digital video recorder. It is connected to a television or a television-sized computer display and is often used as a digital photo viewer, music and video player, TV receiver and digital video recorder. Home theater PCs are also referred to as media center systems or media servers. The general goal of an HTPC is usually to combine many or all components of a home theater setup into one box. They can be purchased pre-configured with the required hardware and software needed to add television programming to the PC, or can be cobbled together out of discrete components, as is commonly done with MythTV, Windows Media Center, GB-PVR, SageTV, Famulent or LinuxMCE.
Pocket PC
Main article: Pocket PC
An O2 pocket PC.
A pocket PC is a hardware specification for a handheld-sized computer (personal digital assistant) that runs the Microsoft Windows Mobile operating system. It may have the capability to run an alternative operating system like NetBSD or Linux. It has many of the capabilities of modern desktop PCs.
Currently there are tens of thousands of applications for handhelds adhering to the Microsoft Pocket PC specification, many of which are freeware. Some of these devices also include mobile phone features. Microsoft compliant Pocket PCs can also be used with many other add-ons like GPS receivers, barcode readers, RFID readers, and cameras. In 2007, with the release of Windows Mobile 6, Microsoft dropped the name Pocket PC in favor of a new naming scheme. Devices without an integrated phone are called Windows Mobile Classic instead of Pocket PC. Devices with an integrated phone and a touch screen are called Windows Mobile Professional.[25]
Hardware
An exploded view of a modern personal computer and peripherals:
1. Scanner
2. CPU (microprocessor)
3. Primary storage (RAM)
4. Expansion cards (graphics cards, etc.)
5. Power supply
6. Optical disc drive
7. Secondary storage (hard disk)
8. Motherboard
9. Speakers
10. Monitor
11. System software
12. Application software
13. Keyboard
14. Mouse
15. External hard disk
16. Printer
Main article: Personal computer hardware
Mass-market consumer computers use highly standardized components and so are simple for an end user to assemble into a working system. A typical desktop computer consists of a computer case which holds the power supply, motherboard, hard disk and often an optical disc drive. External devices such as a video monitor or visual display unit, a keyboard, and a pointing device are usually found with a personal computer.
The motherboard connects the processor, memory and peripheral devices together. The memory card(s), graphics card and processor are mounted directly onto the motherboard. The central processing unit microprocessor chip plugs into a socket. Expansion memory plugs into memory sockets. Some motherboards have the video display adapter, sound and other peripherals integrated onto the motherboard. Others use expansion slots for graphics cards, network cards, or other I/O devices. Disk drives for mass storage are connected to the motherboard with one cable, and to the power supply through another cable. Usually disk drives are mounted in the same case as the motherboard; formerly, expansion chassis were made for additional disk storage.
The graphics and sound card can have a break-out box to keep the analog parts away from the electromagnetic radiation inside the computer case. For very large amounts of data, a tape drive can be used, or extra hard disks can be put together in an external case.
The keyboard and the mouse are external devices plugged into the computer through connectors on an I/O panel on the back of the computer. The monitor is also connected to the I/O panel, either through an onboard port on the motherboard, or a port on the graphics card.
The hardware capabilities of personal computers can sometimes be extended by the addition of expansion cards connected via an expansion bus. Some standard peripheral buses often used for adding expansion cards in personal computers as of 2005 are PCI, AGP (a high-speed PCI bus dedicated to graphics adapters), and PCI Express. Most personal computers as of 2005 have multiple physical PCI expansion slots. Many also include an AGP bus and expansion slot or a PCI Express bus and one or more expansion slots, but few PCs contain both buses.
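On a Linux system, the devices attached to these expansion buses can be listed through the kernel's sysfs interface. The sketch below is a minimal illustration, assuming Python on a Linux machine where /sys/bus/pci/devices is present; the vendor, device and class attribute files it reads are standard sysfs entries, and the output naturally varies from machine to machine.

```python
import os

# Each PCI/PCIe function appears as a directory such as 0000:00:02.0 under
# /sys/bus/pci/devices on Linux; 'vendor', 'device' and 'class' are small
# text files containing hexadecimal identifiers.
SYSFS_PCI = "/sys/bus/pci/devices"

def read_attr(dev, name):
    with open(os.path.join(SYSFS_PCI, dev, name)) as f:
        return f.read().strip()

for dev in sorted(os.listdir(SYSFS_PCI)):
    vendor = read_attr(dev, "vendor")   # e.g. 0x8086 for Intel
    device = read_attr(dev, "device")
    pclass = read_attr(dev, "class")    # 0x030000 is a VGA-compatible display controller
    print(f"{dev}  vendor={vendor}  device={device}  class={pclass}")
```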
Computer case
Main article: Computer case
A stripped ATX case lying on its side.
A computer case is the enclosure that contains the main components of a computer. Cases are usually constructed from steel or aluminium, although other materials such as wood and plastic have been used. Cases can come in many different sizes, or form factors. The size and shape of a computer case is usually determined by the form factor of the motherboard that it is designed to accommodate, since this is the largest and most central component of most computers. Consequently, personal computer form factors typically specify only the internal dimensions and layout of the case. Form factors for rack-mounted and blade servers may include precise external dimensions as well, since these cases must themselves fit in specific enclosures.
Currently, the most popular form factor for desktop computers is ATX, although microATX and small form factors have become very popular for a variety of uses. Companies like Shuttle Inc. and AOpen have popularized small cases, for which FlexATX is the most common motherboard size.
Central processing unit
Main article: Central processing unit
AMD Athlon 64 X2 CPU.
The central processing unit, or CPU, is the part of a computer that executes software program instructions. In older computers this circuitry was spread over several printed circuit boards, but in PCs it is a single integrated circuit. Nearly all PCs contain a type of CPU known as a microprocessor. The microprocessor often plugs into the motherboard using one of many different types of sockets. IBM PC compatible computers use an x86-compatible processor, usually made by Intel, AMD, VIA Technologies or Transmeta. Apple Macintosh computers were initially built with the Motorola 680x0 family of processors, then switched to the PowerPC series (a RISC architecture jointly developed by Apple Computer, IBM and Motorola), but as of 2006, Apple switched again, this time to x86-compatible processors made by Intel. Modern CPUs are equipped with a cooling fan attached to a heat sink.
Motherboard
Main article: Motherboard
Asus motherboard.
The motherboard, also referred to as a system board or mainboard, is the primary circuit board within a personal computer. Many other components connect directly or indirectly to the motherboard. Motherboards usually contain one or more CPUs, supporting circuitry - usually integrated circuits (ICs) - providing the interface between the CPU, memory and input/output peripheral circuits, main memory, and facilities for initial setup of the computer immediately after power-on (often called boot firmware or, in IBM PC compatible computers, a BIOS). In many portable and embedded personal computers, the motherboard houses nearly all of the PC's core components. Often a motherboard will also contain one or more peripheral buses and physical connectors for expansion purposes. Sometimes a secondary daughterboard is connected to the motherboard to provide further expandability or to satisfy space constraints.
Main memory
Main article: Primary storage
1 GB DDR SDRAM PC-3200 module.
A PC's main memory is fast storage that is directly accessible by the CPU, and is used to store the currently executing program and immediately needed data. PCs use semiconductor random access memory (RAM) of various kinds such as DRAM, SDRAM or SRAM as their primary storage. Which exact kind depends on cost/performance issues at any particular time. Main memory is much faster than mass storage devices like hard disks or optical discs, but is usually volatile, meaning it does not retain its contents (instructions or data) in the absence of power, and is much more expensive for a given capacity than is most mass storage. Main memory is generally not suitable for long-term or archival data storage.
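The speed gap between main memory and mass storage can be illustrated with a rough experiment. The following sketch, intended as an illustration rather than a benchmark, uses Python to time a pass over a buffer held in RAM against writing the same buffer to a temporary file and forcing it to disk; the buffer size is arbitrary, and the exact numbers depend on the machine, the drive and operating-system caching.

```python
import os
import tempfile
import time

SIZE = 50 * 1024 * 1024          # 50 MB of zero bytes held in main memory
data = bytes(SIZE)

# Touch the data while it sits in RAM.
start = time.perf_counter()
checksum = sum(data[::4096])     # sample one byte per 4 KB page
ram_seconds = time.perf_counter() - start

# Write the same data to mass storage and force it onto the device.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    start = time.perf_counter()
    f.write(data)
    f.flush()
    os.fsync(f.fileno())
    disk_seconds = time.perf_counter() - start
os.remove(path)

print(f"scan in RAM   : {ram_seconds:.4f} s")
print(f"write to disk : {disk_seconds:.4f} s")
```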
Hard disk
Main article: Hard disk drive
A Western Digital 250 GB hard disk drive.
Mass storage devices store programs and data even when the power is off; they do require power to perform read and write functions during usage. Although flash memory has dropped in cost, the prevailing form of mass storage in personal computers is still the hard disk.
The disk drives use a sealed head/disk assembly (HDA) which was first introduced by IBM's "Winchester" disk system. The use of a sealed assembly allowed the use of positive air pressure to drive out particles from the surface of the disk, which improves reliability.
If the mass storage controller provides for expandability, a PC may also be upgraded by the addition of extra hard disk or optical disc drives. For example, BD-ROMs, DVD-RWs, and various optical disc recorders may all be added by the user to certain PCs. Standard internal storage device connection interfaces are PATA, Serial ATA and SCSI.
Video card
Main article: Video card
ATI Radeon video card.
The video card - otherwise called a graphics card, graphics adapter or video adapter - processes and renders the graphics output from the computer to the computer display, and is an essential part of the modern computer. On older models, and today on budget models, graphics circuitry tended to be integrated with the motherboard, but for modern flexible machines it is supplied on cards in PCI, AGP, or PCI Express format.
When the IBM PC was introduced, most existing business-oriented personal computers used text-only display adapters and had no graphics capability. Home computers at that time had graphics compatible with television signals, but with low resolution by modern standards, owing to the limited memory available to the eight-bit processors of the time.
Visual display unit
Main article: Visual display unit
A flat-panel LCD monitor.
A visual display unit (or monitor) is a piece of electrical equipment, usually separate from the computer case, which displays viewable images generated by a computer without producing a permanent record. The word "monitor" is used in other contexts, in particular in television broadcasting, where a television picture is displayed to a high standard. A computer display device is usually either a cathode ray tube or some form of flat panel such as a TFT LCD. The monitor comprises the display device, circuitry to generate a picture from electronic signals sent by the computer, and an enclosure or case. Within the computer, either as an integral part or a plugged-in expansion card, there is circuitry to convert internal data to a format compatible with a monitor. The images from monitors originally contained only text, but as graphical user interfaces emerged and became common, they began to display more images and multimedia content.
Keyboard
Main article: Keyboard (computing)
A computer keyboard.
In computing, a keyboard is an arrangement of buttons that each correspond to a function, letter, or number. Keyboards are the primary devices for inputting text. In most cases, they contain an array of keys specifically organized with the corresponding letters, numbers, and functions printed or engraved on the buttons. They are generally designed around an operator's language, and many different versions for different languages exist. In English, the most common layout is the QWERTY layout, which was originally used in typewriters. Keyboards have evolved over time, and have been modified for use in computers with the addition of function keys, number keys, arrow keys, and OS-specific keys. Often, specific functions can be achieved by pressing multiple keys at once or in succession, such as inputting characters with accents or opening a task manager. Programs make very different use of keyboard shortcuts, assigning their own combinations to program-specific operations such as refreshing a web page in a web browser or selecting all text in a word processor.
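Because shortcut assignments are defined by each program rather than by the keyboard itself, an application typically keeps its own table mapping key combinations to actions. The sketch below is a hypothetical Python illustration of that idea; the combinations and the two actions are invented for the example and do not come from any particular program.

```python
# A tiny, program-specific shortcut table: the same physical keys could be
# bound to completely different actions in another application.
def select_all():
    print("selecting all text")

def refresh():
    print("reloading the current page")

SHORTCUTS = {
    ("ctrl", "a"): select_all,   # common in word processors
    ("ctrl", "r"): refresh,      # common in web browsers
}

def handle_key(*keys):
    action = SHORTCUTS.get(tuple(keys))
    if action is not None:
        action()
    else:
        print(f"no binding for {'+'.join(keys)}")

handle_key("ctrl", "a")
handle_key("ctrl", "r")
handle_key("ctrl", "q")   # unbound combination
```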
Mouse
Main article: Mouse (computing)
Apple Mighty Mouse, which detects right and left clicks through what appears to be one large button.
A mouse is a small, slidable device that users move around to point at, click on, and sometimes drag objects on screen in a graphical user interface, using a pointer on screen. Almost all personal computers have mice. A mouse may be plugged into a computer's rear mouse socket or into a USB port, or, more recently, may be connected wirelessly via a USB receiver or Bluetooth. In the past, mice had a single button that users could press down to "click" on whatever the pointer on the screen was hovering over. Now, however, many mice have two or three buttons (possibly more): a "right click" button, which performs a secondary action on a selected object, and a scroll wheel, which users can rotate with their fingers to "scroll" up or down. The scroll wheel can also be pressed down, and therefore be used as a third button. Some mouse wheels may be tilted from side to side to allow sideways scrolling. Different programs make use of these functions differently, and may, for example, scroll horizontally by default with the scroll wheel or open different menus with different buttons. These functions may be user-defined through software utilities.
Mice traditionally detected movement and communicated with the computer through an internal "mouse ball", using optical encoders to detect rotation of the ball and tell the computer where the mouse had moved. However, these systems offered limited durability and accuracy and required internal cleaning. Modern mice use optical technology to directly trace movement of the surface under the mouse and are much more accurate, durable and almost maintenance free. They work on a wider variety of surfaces and can even operate on walls, ceilings or other non-horizontal surfaces.
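How a program responds to left clicks, right clicks and wheel movement is likewise decided by the application. The sketch below, assuming Python's standard tkinter toolkit and a graphical session, binds the common mouse events to simple handlers; X11 reports wheel movement as buttons 4 and 5, while Windows and Mac OS X deliver a signed delta instead, so both forms are bound.

```python
import tkinter as tk

root = tk.Tk()
root.title("Mouse event demo")

def left_click(event):
    print(f"left click at ({event.x}, {event.y})")    # primary selection

def right_click(event):
    print(f"right click at ({event.x}, {event.y})")   # would typically open a context menu

def wheel_x11(event):
    print("scroll up" if event.num == 4 else "scroll down")

def wheel_delta(event):
    print("scroll up" if event.delta > 0 else "scroll down")

root.bind("<Button-1>", left_click)      # left button
root.bind("<Button-3>", right_click)     # secondary button on Windows/X11
root.bind("<MouseWheel>", wheel_delta)   # Windows and Mac OS X report a signed delta
root.bind("<Button-4>", wheel_x11)       # X11 reports wheel motion as buttons 4 and 5
root.bind("<Button-5>", wheel_x11)

root.mainloop()
```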
Other components
Proper ergonomic design of the personal computer workplace is necessary to prevent repetitive strain injuries, which can develop over time and can lead to long-term disability.[26]
Mass storage
All computers require either fixed or removable storage for their operating system, programs and user generated material.
Formerly the 5¼ inch and 3½ inch floppy drive were the principal forms of removable storage for backup of user files and distribution of software.
As memory sizes increased, the capacity of the floppy did not keep pace; the Zip drive and other higher-capacity removable media were introduced but never became as prevalent as the floppy drive.
By the late 1990s the optical drive, in CD and later DVD and Blu-ray Disc, became the main method for software distribution, and writeable media provided backup and file interchange. Floppy drives have become uncommon in desktop personal computers since about 2000, and were dropped from many laptop systems even earlier.[27]
Early home computers used compact audio cassettes for file storage; these were at the time a very low cost storage solution, but were displaced by floppy disk drives when manufacturing costs dropped, by the mid 1980s.
A second generation of tape storage arrived when videocassette recorders were pressed into service as backup media for larger disk drives. All these systems were less reliable and slower than purpose-built magnetic tape drives. Such tape drives were uncommon in consumer-type personal computers but were a necessity in business or industrial use.
Interchange of data such as photographs from digital cameras is greatly expedited by installation of a card reader, which often is compatible with several forms of flash memory. It is usually faster and more convenient to move large amounts of data by removing the card from the mobile device, instead of communicating with the mobile device through a USB interface.
A USB flash drive today performs much of the data transfer and backup functions formerly done with floppy drives, Zip disks and other devices. Main-stream current operating systems for personal computers provide standard support for flash drives, allowing interchange even between computers using different processors and operating systems. The compact size and lack of moving parts or dirt-sensitive media, combined with low cost for high capacity, have made flash drives a popular and useful accessory for any personal computer user.
The operating system (e.g. Microsoft Windows, Mac OS, Linux or many others) can be located on any storage device, but is typically installed on a hard disk. A live CD runs an OS directly from a CD. While this is slow compared to storing the OS on a hard drive, it is typically used for installation of operating systems, demonstrations, system recovery, or other special purposes. Large flash memory is currently more expensive than hard drives of similar size (as of mid-2008) but is starting to appear in laptop computers because of its low weight, small size and low power requirements.
Computer communications
Internal modem card
Modem
Network adapter card
Router
Common peripherals and adapter cards
Headset
Joystick
Microphone
Printer
Scanner
Sound adapter card as a separate card rather than located on the motherboard
Speakers
Webcam
Software
Main article: Computer software
A screenshot of the OpenOffice.org Writer software.
Computer software is a general term used to describe a collection of computer programs, procedures and documentation that perform some tasks on a computer system.[28] The term includes application software such as word processors which perform productive tasks for users, system software such as operating systems, which interface with hardware to provide the necessary services for application software, and middleware which controls and co-ordinates distributed systems.
Software applications for word processing, Internet browsing, Internet faxing, e-mail and other digital messaging, multimedia playback, computer game play and computer programming are common. The user of a modern personal computer may have significant knowledge of the operating environment and application programs, but is not necessarily interested in programming, nor even able to write programs for the computer. Therefore, most software written primarily for personal computers tends to be designed with simplicity of use, or "user-friendliness", in mind. However, the software industry continuously provides a wide range of new products for use in personal computers, targeted at both the expert and the non-expert user.
Operating system
Main article: Operating system
An operating system (OS) manages computer resources and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system. An operating system performs basic tasks such as controlling and allocating memory, prioritizing system requests, controlling input and output devices, facilitating computer networking and managing files.
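From a program's point of view these services are reached through system calls, usually wrapped by a language's standard library. The short sketch below, written in Python purely as an illustration, asks the operating system to create and list a file and to start another process; equivalent requests go through the corresponding interfaces on any of the operating systems discussed below.

```python
import os
import subprocess
import sys
import tempfile

# File management: the OS creates the directory entry, allocates storage
# and enforces access permissions on our behalf.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "example.txt")
with open(path, "w") as f:
    f.write("hello, operating system\n")
print("directory listing:", os.listdir(workdir))

# Process management: the OS loads and schedules another program.
result = subprocess.run([sys.executable, "--version"], capture_output=True, text=True)
print("child process said:", (result.stdout or result.stderr).strip())

# Cleanup, again performed by the OS on request.
os.remove(path)
os.rmdir(workdir)
```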
Common contemporary desktop OSes are Microsoft Windows (92.77% market share), Mac OS X (5.12%), Linux (0.95%),[29] Solaris and FreeBSD. Windows, Mac OS X and Linux all have server and personal variants. With the exception of Microsoft Windows, the designs of each of the aforementioned OSes were inspired by, or directly inherited from, the Unix operating system. Unix was developed at Bell Labs beginning in the late 1960s and spawned the development of numerous free and proprietary operating systems.
Microsoft Windows
Main article: Microsoft Windows
Windows 7, the latest client version in the Microsoft Windows line.
Microsoft Windows is the collective brand name of several software operating systems by Microsoft. Microsoft first introduced an operating environment named Windows in November 1985 as an add-on to MS-DOS in response to the growing interest in graphical user interfaces (GUIs).[30][31] The most recent versions of Windows are Windows 7 and Windows Server 2008 R2, which became available at retail on October 22, 2009.
Mac OS X
Main article: Mac OS X
Mac OS X Snow Leopard desktop.
Mac OS X is a line of graphical operating systems developed, marketed, and sold by Apple Inc. Mac OS X is the successor to the original Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessors, Mac OS X is a Unix-based graphical operating system. The most recent version of Mac OS X is Mac OS X 10.6 "Snow Leopard", and the current server version is Mac OS X Server 10.6.
Linux
Main article: Linux
A Linux distribution running the KDE 4 desktop environment.
Linux is a family of Unix-like computer operating systems. Linux is one of the most prominent examples of free software and open source development: typically all underlying source code can be freely modified, used, and redistributed by anyone.[32] The name "Linux" comes from the Linux kernel, started in 1991 by Linus Torvalds. The system's utilities and libraries usually come from the GNU operating system, announced in 1983 by Richard Stallman. The GNU contribution is the basis for the alternative name GNU/Linux.[33]
Known for its use in servers as part of the LAMP application stack, Linux is supported by corporations such as Dell, Hewlett-Packard, IBM, Novell, Oracle Corporation, Red Hat, Canonical Ltd. and Sun Microsystems. It is used as an operating system for a wide variety of computer hardware, including desktop computers, netbooks, supercomputers,[34] video game systems, such as the PlayStation 3, several arcade games, and embedded devices such as mobile phones, portable media players, routers, and stage lighting systems.
Applications
Main article: Application software
GIMP raster graphics editor.
A computer user will apply application software to carry out a specific task. System software supports applications and provides common services such as memory management, network connectivity, and device drivers, all of which may be used by applications but which are not directly of interest to the end user. A simple, if imperfect, analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system): the power plant merely generates electricity, which is of no real use until harnessed to an application like the electric light that performs a service benefiting the user.
Typical examples of software applications are word processors, spreadsheets, and media players. Multiple applications bundled together as a package are sometimes referred to as an application suite. Microsoft Office and OpenOffice.org, which bundle together a word processor, a spreadsheet, and several other discrete applications, are typical examples. The separate applications in a suite usually have a user interface that has some commonality making it easier for the user to learn and use each application. And often they may have some capability to interact with each other in ways beneficial to the user. For example, a spreadsheet might be able to be embedded in a word processor document even though it had been created in the separate spreadsheet application.
End-user development tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, and graphics and animation scripts. Even e-mail filters are a kind of user software. Users create this software themselves and often overlook how important it is.
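An e-mail filter of the kind mentioned above can be only a few lines of user-written script. The sketch below is a hypothetical, self-contained Python example; the messages and the filtering rule are invented for illustration and are not tied to any real mail client's interface.

```python
# Hypothetical inbox: in a real mail client the messages would come from
# the program's own filtering interface rather than a hard-coded list.
messages = [
    {"sender": "newsletter@example.com", "subject": "This week's deals"},
    {"sender": "colleague@example.com",  "subject": "Meeting notes"},
    {"sender": "promo@example.com",      "subject": "Limited-time offer"},
]

def is_unwanted(message):
    subject = message["subject"].lower()
    return message["sender"].startswith(("newsletter", "promo")) or "offer" in subject

kept    = [m for m in messages if not is_unwanted(m)]
removed = [m for m in messages if is_unwanted(m)]

print("kept    :", [m["subject"] for m in kept])
print("filtered:", [m["subject"] for m in removed])
```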
See also
Computer Science portal
Electronics portal
Desktop computer
Desktop replacement computer
e-waste
Gaming PC
Information and communication technologies for development
List of computer system manufacturers
Market share of leading PC vendors
Personal Computer Museum
Public computer
Quiet PC
Supercomputer
Notes
Accidental Empires: How the boys of Silicon Valley make their millions, battle foreign competition, and still can't get a date, Robert X. Cringely, Addison-Wesley Publishing, (1992), ISBN 0-201-57032-7
References
1.^ Reimer, Jeremy. "Personal Computer Market Share: 1975–2004". http://www.jeremyreimer.com/total_share.html. Retrieved 2009-07-17.
2.^ personal computers: More than 1 billion served
3.^ Computers reach one billion mark
4.^ ISuppli Raises 2007 Computer Sales Forecast, pcworld.com, accessed at 13 January 2009
5.^ iSuppli raises 2007 computer sales forecast, macworld.co.uk, accessed at 13 January 2009
6.^ Global PC Sales Leveling Off, newsfactor.com, accessed at 13 January 2009
7.^ a b c HP back on top of PC market, accessed at 13 January 2009
8.^ Dell Passes Compaq as Top PC Seller in U.S, latimes.com, accessed at 13 January 2009
9.^ Economic recovery bumps AP 1999 PC shipments to record high, zdnetasia.com, accessed at 13 January 2009
10.^ Gartner Says More than 1 Billion PCs In Use Worldwide and Headed to 2 Billion Units by 2014
11.^ Computers in use pass 1 billion mark: Gartner
12.^ http://www.olpcnews.com/use_cases/technology/4p_computing_olpc_impact.html
13.^ Rugged PC leaders
14.^ http://www.eweek.com/c/a/Windows/Netbooks-Are-Destroying-the-Laptop-Market-and-Microsoft-Needs-to-Act-Now-863307/
15.^ http://www.cio.com/article/509556/Falling_PC_Prices_Pit_Microsoft_Against_PC_Makers
16.^ Ralston, Anthony; Reilly, Edwin (1993). "Workstation". Encyclopedia of Computer Science (Third Edition ed.). New York: Van Nostrand Reinhold. ISBN 0442276796.
17.^ a b c d "Time to drop the Netbook label". CNN, Erica Ogg, August 20, 2009. http://www.cnn.com/2009/TECH/ptech/08/20/cnet.drop.netbook.label/index.html.
18.^ "New Netbook Offers Long Battery Life and Room to Type". The Wall Street Journal Online, Personal Technology, Walt Mossberg, August 6, 2009. http://online.wsj.com/article/SB10001424052970203674704574332522805119180.html.
19.^ "Cheap PCs Weigh on Microsoft". Business Technologies, The Wall Street Journal, December 8, 2008. http://blogs.wsj.com/biztech/2008/12/08/cheap-pcs-weigh-on-microsoft/.
20.^ UMID Netbook Only 4.8″
21.^ CES 2009 - MSI Unveils the X320 “MacBook Air Clone” Netbook
22.^ Netbook Trends and Solid-State Technology Forecast. pricegrabber.com. pp. 7. https://mr.pricegrabber.com/Netbook_Trends_and_SolidState_Technology_January_2009_CBR.pdf. Retrieved 2009-01-28.
23.^ "Light and Cheap, Netbooks Are Poised to Reshape PC Industry" The New York Times - April 1, 2009: "AT&T announced on Tuesday that customers in Atlanta could get a type of compact PC called a netbook for just 50 US$ if they signed up for an Internet service plan..." - “The era of a perfect Internet computer for 99 US$ is coming this year,” said Jen-Hsun Huang, the chief executive of Nvidia, a maker of PC graphics chips that is trying to adapt to the new technological order.
24.^ http://www.pcworld.com/article/172444/microsoft_courier_heats_up_tablet_sector.html
25.^ New Windows Mobile 6 Devices :: Jun/Jul 2007
26.^ Berkeley Lab. Integrated Safety Management: Ergonomics. Website. Retrieved 9 July 2008.
27.^ The NeXT computer introduced in 1988 did not include a floppy drive, which at the time was unusual.
28.^ "Wordreference.com: WordNet 2.0". Princeton University, Princeton, NJ. http://www.wordreference.com/definition/software. Retrieved 2007-08-19.
29.^ [1]- Marketshare.com
30.^ "http://inventors.about.com/od/mstartinventions/a/Windows.htm?rd=1". http://inventors.about.com/od/mstartinventions/a/Windows.htm?rd=1. Retrieved 2007-04-22.
31.^ IDC: Consolidation to Windows won't happen
32.^ "Linux Online ─ About the Linux Operating System". Linux.org. http://www.linux.org/info/index.html. Retrieved 2007-07-06.
33.^ Weeks, Alex (2004). "1.1". Linux System Administrator's Guide (version 0.9 ed.). http://www.tldp.org/LDP/sag/html/sag.html#GNU-OR-NOT. Retrieved 2007-01-18.
34.^ Lyons, Daniel. "Linux rules supercomputers". http://www.forbes.com/home/enterprisetech/2005/03/15/cz_dl_0315linux.html. Retrieved
Differentiate a workstation from a personal computer
A workstation is a high-end microcomputer designed for technical or scientific applications. Intended primarily to be used by one person at a time, they are commonly connected to a local area network and run multi-user operating systems. The term workstation has also been used to refer to a mainframe computer terminal or a PC connected to a network.
Historically, workstations had offered higher performance than personal computers, especially with respect to CPU and graphics, memory capacity and multitasking capability. They are optimized for the visualization and manipulation of different types of complex data such as 3D mechanical design, engineering simulation (e.g. computational fluid dynamics), animation and rendering of images, and mathematical plots. Consoles consist of a high resolution display, a keyboard and a mouse at a minimum, but also offer multiple displays, graphics tablets, 3D mice (devices for manipulating and navigating 3D objects and scenes), etc. Workstations are the first segment of the computer market to present advanced accessories and collaboration tools.
Presently, the workstation market is highly commoditized and is dominated by large PC vendors, such as Dell and HP, selling Microsoft Windows/Linux running on Intel Xeon/AMD Opteron. Alternative UNIX based platforms are provided by Apple Inc., Sun Microsystems, and SGI.
Contents
1 Workstations in particular
2 History
2.1 Workstation class PCs
3 Current workstation market
4 List of manufacturers
4.1 Current
4.2 Defunct
5 See also
6 References
Workstations in particular
Today, consumer products such as PCs (and even game consoles) use components that provide a reasonable cost for tasks that do not require heavy and sustained processing power. However, for timely engineering, medical, and graphics production tasks the workstation is hard to beat.
In the early 1980s, a high-end workstation had to meet the three Ms, the so-called "3M computer" had a Megabyte of memory, a Megapixel display (roughly 1000x1000), and a "MegaFLOPS" compute performance (at least one million floating point operations per second).[1] As limited as this seems today, it was at least an order of magnitude beyond the capacity of the personal computer of the time; the original 1981 IBM PC had 16 KB memory, a text-only display, and floating-point performance around 1 kiloFLOPS (30 kiloFLOPS with the optional 8087 math coprocessor). Other desirable features not found in desktop computers at that time included networking, graphics acceleration, and high-speed internal and peripheral data buses.
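What a "MegaFLOPS" means can be made concrete with a crude measurement. The Python sketch below times a loop of floating-point operations and divides the operation count by the elapsed time; because an interpreted language adds large overhead per operation, the figure mainly illustrates the unit and greatly understates the raw capability of any modern CPU.

```python
import time

N = 1_000_000
x = 1.0000001
acc = 0.0

start = time.perf_counter()
for _ in range(N):
    acc += x * x          # one multiply and one add per iteration
elapsed = time.perf_counter() - start

flops = 2 * N / elapsed   # floating-point operations per second
print(f"about {flops / 1e6:.1f} MFLOPS (interpreter overhead included), acc={acc:.3f}")
```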
Another goal was to bring the price for such a system down under a "Megapenny", that is, less than $10,000; this was not achieved until the late 1980s, although many workstations, particularly mid-range or high-end models, still cost anywhere from $15,000 to over $100,000 throughout the early to mid 1990s.
The more widespread adoption of these technologies into mainstream PCs was a direct factor in the decline of the workstation as a separate market segment:
High performance CPUs: while RISC in its early days (early 1980s) offered roughly an order-of-magnitude performance improvement over CISC processors of comparable cost, one particular family of CISC processors, Intel's x86, always had the edge in market share and the economies of scale that this implied. By the mid-1990s, some x86 CPUs had achieved performance on a parity with RISC in some areas, such as integer performance (albeit at a cost of greater chip complexity), relegating the latter to even more high-end markets for the most part.
Hardware support for floating-point operations: optional on the original IBM PC; remained on a separate chip for Intel systems until the 80486DX processor. Even then, x86 floating-point performance continued to lag behind other processors due to limitations in its architecture. Today even low-price PCs now have performance in the gigaFLOPS range, but higher-end systems are preferred for floating-point intensive tasks.
Large memory configurations: PCs (i.e. IBM-compatibles) were originally limited to a 640 KB memory capacity (not counting bank-switched "expanded memory") until the 1982 introduction of the 80286 processor; early workstations provided access to several megabytes of memory. Even after PCs broke the 640 KB limit with the 80286, special programming techniques were required to address significant amounts of memory until the 80386, as opposed to other 32-bit processors such as SPARC which provided straightforward access to nearly their entire 4 GB memory address range. 64-bit workstations and servers supporting an address range far beyond 4 GB have been available since the early 1990s, a technology just beginning to appear in the PC desktop and server market in the mid-2000s (a short calculation after this list illustrates these address-space sizes).
Operating system: early workstations ran the Unix operating system (OS) or a Unix-like variant or equivalent such as VMS. The PC CPUs of the time had limitations in memory capacity and memory access protection, making them unsuitable to run OSes of this sophistication, but this, too, began to change in the late 1980s as PCs with the 32-bit 80386 with integrated paged MMUs became widely affordable.
High-speed networking (10 Mbit/s or better): 10 Mbit/s network interfaces were commonly available for PCs by the early 1990s, although by that time workstations were pursuing even higher networking speeds, moving to 100 Mbit/s, 1 Gbit/s, and 10 Gbit/s. However, economies of scale and the demand for high speed networking in even non-technical areas has dramatically decreased the time it takes for newer networking technologies to reach commodity price points.
Large displays (17" to 21") with high resolutions and high refresh rates were common among PCs by the late 1990s, although in the late 1980s and early 1990s this was rare.
High-performance 3D graphics hardware: this started to become increasingly popular in the PC market around the mid-to-late 1990s, mostly driven by computer gaming, although workstations featured better quality, sometimes sacrificing performance.
High performance/high capacity data storage: early workstations tended to use proprietary disk interfaces until the emergence of the SCSI standard in the mid-1980s. Although SCSI interfaces soon became available for PCs, they were comparatively expensive and tended to be limited by the speed of the PC's ISA peripheral bus (although SCSI did become standard on the Apple Macintosh). SCSI is an advanced controller interface which is particularly good where the disk has to cope with multiple requests at once. This makes it suited for use in servers, but its benefits to desktop PCs which mostly run single-user operating systems are less clear. These days, with desktop systems acquiring more multi-user capabilities (and the increasing popularity of Linux), the new disk interface of choice is Serial ATA, which has throughput comparable to SCSI but at a lower cost.
Extremely reliable components: together with multiple CPUs with greater cache and error-correcting memory, this may remain the distinguishing feature of a workstation today. Although most technologies implemented in modern workstations are also available at lower cost for the consumer market, finding good components and making sure they work compatibly with each other is a great challenge in workstation building. Because workstations are designed for high-end tasks such as weather forecasting, video rendering, and game design, it is taken for granted that these systems must run under full load, non-stop, for several hours or even days without issue. Any off-the-shelf components can be used to build a workstation, but the lifespans of such components under such rigorous conditions are questionable. For this reason, almost no workstations are built by customers themselves; rather, they are purchased from a vendor such as Hewlett-Packard, IBM, Sun Microsystems, SGI, Apple, or Dell.
Tight integration between the OS and the hardware: Workstation vendors both design the hardware and maintain the Unix operating system variant that runs on it. This allows for much more rigorous testing than is possible with an operating system such as Windows. Windows requires that 3rd party hardware vendors write compliant hardware drivers that are stable and reliable. Also, minor variation in hardware quality such as timing or build quality can affect the reliability of the overall machine. Workstation vendors are able to ensure both the quality of the hardware, and the stability of the operating system drivers by validating these things in-house, and this leads to a generally much more reliable and stable machine.
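The memory limits mentioned in the list above, the 640 KB barrier, the 4 GB ceiling of a 32-bit address space and the far larger 64-bit range, follow directly from the width of the address in bits, as the short calculation below shows.

```python
KB, MB, GB = 2**10, 2**20, 2**30

print(f"20-bit addressing (8086 real mode): {2**20 // KB} KB total, "
      f"of which 640 KB was left for programs on the IBM PC")
print(f"32-bit addressing: {2**32 // GB} GB")
print(f"64-bit addressing: {2**64 // GB:,} GB (about {2**64 // 2**60} EiB)")
```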
Sun SPARCstation 1+, with a 25 MHz RISC processor, from the early 1990s.
These days, workstations have changed greatly. Since many of the components are now the same as those used in the consumer market, the price differential between the lower end workstation and consumer PCs may be narrower than it once was. For example, some low-end workstations use CISC based processors like the Intel Pentium 4 or AMD Athlon 64 as their CPUs. Higher-end workstations still use more sophisticated CPUs such as the Intel Xeon, AMD Opteron, IBM POWER, or Sun's UltraSPARC, and run a variant of Unix, delivering a truly reliable workhorse for computing-intensive tasks.
Indeed, it is perhaps in the area of the more sophisticated CPU where the true workstation may be found. Although both the consumer desktop and the workstation benefit from CPUs designed around the multicore concept (essentially, multiple processors on a die, of which the POWER4 was a pioneer), modern (as of 2008) workstations use multiple multicore CPUs, error-correcting memory and much larger on-die caches. Such power and reliability are not normally required on a general desktop computer. IBM's POWER-based processor boards and the workstation-level Intel-based Xeon processor boards, for example, have multiple CPUs, more on-die cache and ECC memory, which are features more suited to demanding content-creation, engineering and scientific work than to general desktop computing.
Some workstations are designed for use with only one specific application, such as AutoCAD, Avid Xpress Studio HD, or 3D Studio Max. To ensure compatibility with the software, purchasers usually ask for certification from the software vendor. Certification raises the workstation's price considerably, but for professional purposes reliability matters more than the initial purchase cost.
[edit] History
The Xerox Alto workstation, the first computer to use a graphical user interface with a mouse, and the origin of Ethernet.
Perhaps the first computer that might qualify as a "workstation" was the IBM 1620, a small scientific computer designed to be used interactively by a single person sitting at the console. It was introduced in 1959. One peculiar feature of the machine was that it lacked any actual arithmetic circuitry: to perform addition, it consulted a memory-resident table of decimal addition rules. This saved on the cost of logic circuitry, enabling IBM to make it inexpensive. The machine was code-named CADET, which some people waggishly claimed meant "Can't Add, Doesn't Even Try". Nonetheless, it initially rented for $1,000 a month.
In 1965, IBM introduced the IBM 1130 scientific computer, which was meant as the successor to the 1620. Both of these systems came with the ability to run programs written in Fortran and other languages. Both the 1620 and the 1130 were built into roughly desk-sized cabinets. Both were available with add-on disk drives, printers, and both paper-tape and punched-card I/O. A console typewriter for direct interaction was standard on each.
Early examples of workstations were generally dedicated minicomputers; a system designed to support a number of users would instead be reserved exclusively for one person. A notable example was the PDP-8 from Digital Equipment Corporation, generally regarded as the first commercially successful minicomputer.
The Lisp machines developed at MIT in the early 1970s pioneered some of the principles of the workstation, as they were high-performance, networked, single-user systems intended for heavily interactive use. Lisp machines were commercialized beginning in 1980 by companies such as Symbolics, Lisp Machines, Texas Instruments (the TI Explorer), and Xerox (the Interlisp-D workstations). The first computer designed for a single user, with high-resolution graphics facilities (and so a workstation in the modern sense of the term), was the Xerox Alto, developed at Xerox PARC in 1973. Other early workstations include the Three Rivers PERQ (1979) and the later Xerox Star (1981).
In the early 1980s, with the advent of 32-bit microprocessors such as the Motorola 68000, a number of new participants appeared in this field, including Apollo Computer and Sun Microsystems, who built Unix-based workstations around this processor. Meanwhile, DARPA's VLSI Project produced several spinoff graphics products as well, notably the SGI 3130 and Silicon Graphics' subsequent range of machines. It was not uncommon to differentiate the target markets for these products, with Sun and Apollo considered network workstations and the SGI machines graphics workstations. As RISC microprocessors became available in the mid-1980s, they were adopted by many workstation vendors.
Workstations tended to be very expensive, typically several times the cost of a standard PC and sometimes costing as much as a new car. However, minicomputers sometimes cost as much as a house. The high expense usually came from using costlier components that ran faster than those found at the local computer store, as well as the inclusion of features not found in PCs of the time, such as high-speed networking and sophisticated graphics. Workstation manufacturers also tend to take a "balanced" approach to system design, making certain to avoid bottlenecks so that data can flow unimpeded between the many different subsystems within a computer. Additionally, workstations, given their more specialized nature, tend to have higher profit margins than commodity-driven PCs.
The systems that come from workstation companies often feature SCSI or Fibre Channel disk storage, high-end 3D accelerators, single or multiple 64-bit processors, large amounts of RAM, and well-designed cooling. The companies that make them also tend to have very good repair and replacement plans. However, the line between workstation and PC is increasingly blurred as fast processors, networking, and graphics have become commonplace in the consumer world, allowing workstation manufacturers to use "off the shelf" PC components and graphics solutions rather than proprietary in-house technology. Some "low-cost" workstations are still expensive by PC standards but offer binary compatibility with higher-end workstations and servers from the same vendor, allowing software development to take place on desktop machines that are cheap relative to the server.
There have been several attempts to produce a workstation-like machine aimed at the lowest possible price point rather than performance. One approach is to remove local storage and reduce the machine to the processor, keyboard, mouse, and screen. In some cases, these diskless nodes still run a traditional OS and perform computations locally, with storage on a remote server. Such approaches are intended not just to reduce the initial purchase cost but to lower the total cost of ownership by reducing the amount of administration required per user.
This approach was actually first attempted as a replacement for PCs in office productivity applications, with the 3Station by 3Com as an early example; in the 1990s, X terminals filled a similar role for technical computing. Sun has also introduced "thin clients", most notably its Sun Ray product line. However, traditional workstations and PCs continue to drop in price, which tends to undercut the market for products of this type.
[edit] Workstation class PCs
A significant segment of the desktop market consists of computers expected to perform as workstations but using PC operating systems and components. PC component manufacturers often segment their product lines, marketing premium components that are functionally similar to the cheaper "consumer" models but offer a higher level of robustness and/or performance. Notable examples are the AMD Opteron and Intel Xeon processors and the ATI FireGL and Nvidia Quadro graphics processors.
A workstation class PC may have some of the following features:
support for ECC memory (a quick way to check for this is sketched after this list)
a larger number of memory sockets, using registered (buffered) modules
multiple processor sockets and more powerful CPUs (for Intel, server-derived Xeon processors rather than the Core line typical of consumer PCs)
multiple displays
a reliable operating system with advanced features
a high-performance graphics card
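As an illustration of the ECC item above, the following Python sketch reads the corrected- and uncorrected-error counters that the Linux EDAC (Error Detection and Correction) subsystem exposes under sysfs. It is only a minimal sketch: it assumes a Linux machine with an EDAC driver loaded and ECC-capable memory, and the helper names are invented for the example.

# Minimal sketch: report ECC error counters via the Linux EDAC sysfs
# interface. Requires a kernel with an EDAC driver loaded and ECC-capable
# memory; on other machines it simply reports that nothing was found.
import glob
import os

EDAC_ROOT = "/sys/devices/system/edac/mc"   # standard EDAC sysfs location

def read_counter(path):
    """Return the integer stored in a sysfs counter file, or None if unreadable."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None

def ecc_status():
    """Yield (memory controller name, corrected errors, uncorrected errors)."""
    for mc in sorted(glob.glob(os.path.join(EDAC_ROOT, "mc[0-9]*"))):
        yield (os.path.basename(mc),
               read_counter(os.path.join(mc, "ce_count")),
               read_counter(os.path.join(mc, "ue_count")))

if __name__ == "__main__":
    controllers = list(ecc_status())
    if not controllers:
        print("No EDAC memory controllers found; ECC may be absent or unreported.")
    for name, ce, ue in controllers:
        print(f"{name}: corrected={ce} uncorrected={ue}")

A steadily rising corrected-error count is usually the first visible sign that ECC memory is doing its job; on a non-ECC consumer board these counters typically do not exist at all.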
[edit] Current workstation market
Of the historic Unix workstation manufacturers, only Sun Microsystems continues to offer a workstation product line. As of January 2009, all RISC-based workstation product lines had been discontinued, with IBM retiring its IntelliStation line at that date.[2]
The current workstation market is organized around x86-64 microprocessors. Operating systems available for these platforms include Windows, the various Linux distributions, Mac OS X, and Solaris 10.
Three types of products are marketed under the workstation umbrella:
Workstation blade systems (such as the IBM HC10 or Hewlett-Packard xw460c; the Sun Visualization System is akin to these solutions)
Ultra high-end workstations (such as the SGI Virtu VS3xx)
High-end deskside systems: two-way-capable (dual-socket) x64 systems
Some vendors also market commodity single-socket systems as workstations.
[edit] List of manufacturers
[edit] Current
Acer
Alienware
Apple Inc.
AVADirect
BOXX Technologies
Dell
Fujitsu-Siemens Computers
Hewlett-Packard
Lenovo
MAINGEAR
Silicon Graphics
Sun Microsystems
Workstation Specialists
[edit] Defunct
Apollo Computer
Ardent Computer
Callan Data Systems
Computervision
Digital Equipment Corporation
Evans & Sutherland (operating, but no longer manufactures workstations)
Intergraph
InterPro
MIPS Computer Systems
NeXT
Stardent Inc.
Three Rivers Computer Corporation
Torch Computers
Xworks Interactive
[edit] See also
Music workstation
[edit] References
1.^ RFC 782 defined the workstation environment more generally as hardware and software dedicated to serving a single user, while also providing for the use of additional shared resources.
2.^ Official IBM Hardware Withdrawal Announcement of IntelliStation POWER 185 and 285.
This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.
Mainframe computer
For other uses, see Mainframe (disambiguation).
This article has been nominated to be checked for its neutrality. Discussion of this nomination can be found on the talk page. (July 2009)
This article contains weasel words, vague phrasing that often accompanies biased or unverifiable information. Such statements should be clarified or removed. (January 2010)
An IBM 704 mainframe.
Mainframes (often colloquially referred to as Big Iron[1]) are powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.
The term originally referred to the large cabinets that housed the central processing unit and main memory of early computers.[2][3] Later the term was used to distinguish high-end commercial machines from less powerful units.
Most large-scale computer system architectures were firmly established in the 1960s, and most large computers were based on architectures from that era until the advent of Web servers in the 1990s. (The first Web server outside Switzerland ran on an IBM mainframe at Stanford University as early as 1991; see History of the World Wide Web for details.)
There were several minicomputer operating systems and architectures that arose in the 1970s and 1980s, but minicomputers are generally not considered mainframes. (UNIX arose as a minicomputer operating system; Unix has scaled up over the years to acquire some mainframe characteristics.)
Many defining characteristics of "mainframe" were established in the 1960s, but those characteristics continue to expand and evolve to the present day.
Contents [hide]
1 Description
2 Characteristics
3 Market
4 History
5 Differences from supercomputers
6 See also
7 References
8 External links
[edit] Description
Modern mainframe computers have abilities not so much defined by their single task computational speed (usually defined as MIPS — Millions of Instructions Per Second) as by their redundant internal engineering and resulting high reliability and security, extensive input-output facilities, strict backward compatibility with older software, and high utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and hardware upgrades taking place during normal operation.
Software upgrades are non-disruptive only when using facilities such as IBM's z/OS and Parallel Sysplex, whose workload sharing lets one system take over another's applications while it is being refreshed. Several IBM mainframe installations have, as of 2007, delivered over a decade of continuous business service, with hardware upgrades not interrupting service.[citation needed] Mainframes are defined by high availability, one of the main reasons for their longevity, because they are typically used in applications where downtime would be costly or catastrophic. The term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers, though proper planning and implementation are required to exploit these features.
In the 1960s, most mainframes had no interactive interface. They accepted sets of punched cards, paper tape, and/or magnetic tape and operated solely in batch mode to support back-office functions such as customer billing. Teletype devices were also common, at least for system operators. By the early 1970s, many mainframes had acquired interactive user interfaces and operated as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal-emulation software. By the 1980s many mainframes supported graphical terminals (and terminal emulation) but not graphical user interfaces; this form of end-user computing was largely made obsolete in the 1990s by the personal computer. Nowadays most mainframes have partially or entirely phased out classic terminal access for end users in favor of Web user interfaces. Developers and operational staff typically continue to use terminals or terminal emulators.[citation needed]
Historically, mainframes acquired their name in part because of their substantial size, and because of requirements for specialized heating, ventilation, and air conditioning (HVAC), and electrical power. Those requirements ended by the mid-1990s with CMOS mainframe designs replacing the older bipolar technology. IBM claims its newer mainframes can reduce data center energy costs for power and cooling, and that they can reduce physical space requirements compared to server farms.[4]
[edit] Characteristics
Nearly all mainframes have the ability to run (or host) multiple operating systems, and thereby operate not as a single computer but as a number of virtual machines. In this role, a single mainframe can replace dozens or even hundreds of smaller servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication.
Mainframes can add or hot-swap system capacity non-disruptively and granularly, to a level of sophistication usually not found on most servers. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs, via the PR/SM facility) and virtual machines (via the z/VM operating system). Some IBM mainframe customers run no more than two machines[citation needed]: one in their primary data center and one in their backup data center (fully active, partially active, or on standby) in case there is a catastrophe affecting the first building. Test, development, training, and production workloads for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages. In practice most customers use multiple mainframes linked by Parallel Sysplex and shared DASD.
Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the mid-1960s, mainframe designs have included several subsidiary computers (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Giga-record or tera-record files are not unusual.[5] Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it much faster.[citation needed] Other server families also offload I/O processing and emphasize throughput computing.
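As a loose software analogy for the channel idea described above (real channels are dedicated I/O processors, not threads), the Python sketch below hands records to a worker thread that stands in for a channel, so the main thread can keep computing while the slow "device" is serviced. The queue, the worker, and the simulated device delay are all invented for the illustration.

# Loose analogy only: a worker thread stands in for an I/O channel so the
# main thread (the "CPU") keeps computing while slow I/O proceeds.
import queue
import threading
import time

def io_channel(requests):
    """Drain I/O requests independently of the main thread."""
    while True:
        record = requests.get()
        if record is None:          # sentinel: no more work
            break
        time.sleep(0.01)            # stand-in for a slow device write

requests = queue.Queue()
channel = threading.Thread(target=io_channel, args=(requests,), daemon=True)
channel.start()

total = 0
for i in range(100):
    requests.put(f"record {i}")     # hand the record to the "channel"
    total += i * i                  # the CPU keeps doing useful arithmetic

requests.put(None)                  # tell the channel we are done
channel.join()
print("computed", total, "while I/O proceeded in the background")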
Mainframe return on investment (ROI), like that of any other computing platform, depends on the platform's ability to scale, support mixed workloads, reduce labor costs, and deliver uninterrupted service for critical business applications, along with several other risk-adjusted cost factors. Some argue that the modern mainframe is not cost-effective. Hewlett-Packard and Dell unsurprisingly take that view at least at times, and so do some independent analysts. Sun Microsystems also takes that view, but beginning in 2007 promoted a partnership with IBM which largely focused on IBM support for Solaris on its System x and BladeCenter products (and was therefore unrelated to mainframes), but also included positive comments about the company's OpenSolaris operating system being ported to IBM mainframes as a way of enlarging the Solaris community. Some analysts (such as Gartner[citation needed]) claim that the modern mainframe often has unique value and superior cost-effectiveness, especially for large-scale enterprise computing. In fact, Hewlett-Packard continues to manufacture what is arguably its own mainframe, the NonStop system originally created by Tandem. Logical partitioning is now found in many UNIX-based servers, and many vendors are promoting virtualization technologies, in many ways validating the mainframe's design accomplishments while blurring the differences between the various approaches to enterprise computing.
Mainframes also have execution integrity characteristics for fault tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare results, arbitrate between any differences (through instruction retry and failure isolation), then shift workloads "in flight" to functioning processors, including spares, without any impact to operating systems, applications, or users. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.
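The following toy Python model illustrates the lock-stepping idea in the paragraph above: the same operation is executed twice, the results are compared, a mismatch triggers a retry, and persistent disagreement shifts the work to a spare unit. The "flaky" and "spare" ALUs, the error rate, and the retry count are all invented for the illustration and say nothing about how System z or NonStop hardware actually implements this in silicon.

# Toy model of lock-stepped execution with retry and fail-over to a spare.
import random

def flaky_alu(x, y, error_rate=0.05):
    """Adder that occasionally returns a corrupted result (simulated fault)."""
    result = x + y
    if random.random() < error_rate:
        result ^= 1                 # flip the low bit to model a transient error
    return result

def spare_alu(x, y):
    """A spare execution unit assumed to be healthy."""
    return x + y

def lockstep_add(x, y, retries=3):
    """Execute twice, compare, retry on mismatch, then fail over to the spare."""
    for _ in range(retries):
        a, b = flaky_alu(x, y), flaky_alu(x, y)
        if a == b:
            return a
    return spare_alu(x, y)          # persistent disagreement: use the spare

if __name__ == "__main__":
    # Almost always 999000; an identical simultaneous fault in both runs
    # would go undetected, just as in real lock-stepped hardware.
    print(sum(lockstep_add(i, i) for i in range(1000)))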
[edit] Market
IBM mainframes dominate the mainframe market at well over 90% market share.[6] Unisys manufactures ClearPath mainframes, based on earlier Sperry and Burroughs product lines. In 2002, Hitachi co-developed the zSeries z800 with IBM to share expenses, but subsequently the two companies have not collaborated on new Hitachi models. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's DPS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe. Fujitsu, Hitachi, and NEC (the "JCMs") still maintain nominal mainframe hardware businesses in their home Japanese market, although they have been slow to introduce new hardware models in recent years.
The amount of vendor investment in mainframe development varies with market share. Unisys, HP, Groupe Bull, Fujitsu, Hitachi, and NEC now rely primarily on commodity Intel CPUs rather than custom processors in order to reduce their development expenses, and they have also cut back their mainframe software development. (However, Unisys still maintains its own unique CMOS processor design development for certain high-end ClearPath models but contracts chip manufacturing to IBM.) In stark contrast, IBM continues to pursue a business strategy of mainframe investment and growth. IBM has its own large research and development organization designing new, homegrown CPUs, including mainframe processors such as 2008's 4.4 GHz quad-core z10 mainframe microprocessor. IBM is rapidly expanding its software business, including its mainframe software portfolio, to seek additional revenue and profits.[7][8]
[edit] History
This section does not cite any references or sources.
Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (July 2008)
Several manufacturers produced mainframe computers from the late 1950s through the 1970s. The group was first known as "IBM and the Seven Dwarfs": IBM, Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric, and RCA. After General Electric and RCA left the business, the shrunken group was referred to as IBM and the BUNCH. IBM's dominance grew out of its 700/7000 series and, later, the development of the System/360 series of mainframes. The latter architecture has continued to evolve into IBM's current zSeries mainframes which, along with the Burroughs (now Unisys) MCP-based mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. That said, while they can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the USA were Siemens and Telefunken in Germany, ICL in the United Kingdom, Olivetti in Italy, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War, while the BESM series and Strela are examples of independently designed Soviet computers.
Shrinking demand and tough competition started a shakeout in the market in the early 1970s: RCA sold out to UNIVAC and GE also left; in the 1980s Honeywell was bought out by Bull, and UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986. In 1991, AT&T briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offered local users much greater control over their own systems, given the IT policies and practices of that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market, as mainframe platforms were increasingly replaced by personal computer networks. InfoWorld's Stewart Alsop famously predicted that the last mainframe would be unplugged in 1996.
That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software, as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large-scale batch computing. Another factor currently increasing mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999 and is typically run in scores or hundreds of virtual machines on a single mainframe. Linux allows users to take advantage of open-source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly the People's Republic of China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high-volume online transaction processing databases for one billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.). In late 2000 IBM introduced the 64-bit z/Architecture, acquired numerous software companies such as Cognos, and introduced those software products to the mainframe. IBM's quarterly and annual reports in the 2000s usually reported increasing mainframe revenues and capacity shipments. However, IBM's mainframe hardware business has not been immune to the recent overall downturn in the server hardware market or to model cycle effects. For example, in the fourth quarter of 2009, IBM's System z hardware revenues decreased by 27% year over year, although MIPS shipments (a measure of mainframe capacity) increased 4% per year over the past two years.[9]
[edit] Differences from supercomputers
A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers are used for scientific and engineering problems (Grand Challenge problems) which are limited by processing speed and memory size, while mainframes are used for problems which are limited by data movement in input/output devices, reliability, and for handling multiple business transactions concurrently. The differences are as follows:
Mainframe performance is traditionally measured in millions of instructions per second (MIPS), assuming typical instructions are integer operations, whereas supercomputers are measured in floating-point operations per second (FLOPS). Examples of integer operations include adjusting inventory counts, matching names, indexing tables of data, and making routine yes-or-no decisions. Floating-point operations are mostly addition, subtraction, and multiplication with enough digits of precision to model continuous phenomena such as weather. In terms of raw computational ability, supercomputers are more powerful.[10]
Mainframes are built to be reliable for transaction processing as it is commonly understood in the business world: a commercial exchange of goods, services, or money. A typical transaction, as defined by the Transaction Processing Performance Council,[11] would include updating a database system for such things as inventory control (goods), airline reservations (services), or banking (money). A transaction could refer to a set of operations including disk reads and writes, operating system calls, or some form of data transfer from one subsystem to another (a minimal example follows this list).
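To make the notion of a transaction concrete, the minimal Python sketch below uses SQLite (from the standard library) as a stand-in database for the inventory-control case: the stock adjustment either completes fully or is rolled back. The table, column, and function names are invented for the example and are not taken from any particular mainframe workload or TPC benchmark.

# Minimal sketch of an inventory-control transaction using SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, quantity INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 100)")
conn.commit()

def ship_items(item, count):
    """Decrement stock atomically; roll back if stock would go negative."""
    try:
        with conn:                   # the connection wraps this block in a transaction
            cur = conn.execute(
                "UPDATE inventory SET quantity = quantity - ? "
                "WHERE item = ? AND quantity >= ?", (count, item, count))
            if cur.rowcount == 0:
                raise ValueError("insufficient stock")
    except ValueError:
        return False                 # the transaction was rolled back
    return True                      # the transaction was committed

print(ship_items("widget", 30))      # True: quantity drops to 70
print(ship_items("widget", 500))     # False: update rolled back
print(conn.execute("SELECT quantity FROM inventory").fetchone())  # (70,)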
[edit] See also
Computer types
[edit] References
1.^ "IBM preps big iron fiesta". The Register. July 20, 2005. http://www.theregister.co.uk/2005/07/20/ibm_mainframe_refresh/.
2.^ Oxford English Dictionary, on-line edition, "mainframe, n."
3.^ Ebbers, Mike; O’Brien, W.; Ogden, B. (2006). "Introduction to the New Mainframe: z/OS Basics" (pdf). IBM International Technical Support Organization. http://publibz.boulder.ibm.com/zoslib/pdf/zosbasic.pdf. Retrieved 2007-06-01.
4.^ "Get the facts on IBM vs the Competition- The facts about IBM System z "mainframe"". IBM. http://www-03.ibm.com/systems/migratetoibm/getthefacts/mainframe.html#4. Retrieved December 28, 2009.
5.^ "Largest Commercial Database in Winter Corp. TopTen Survey Tops One Hundred Terabytes". Press release. http://www.wintercorp.com/PressReleases/ttp2005_pressrelease_091405.htm. Retrieved 2008-05-16.
6.^ "IBM Tightens Stranglehold Over Mainframe Market; Gets Hit with Antitrust Complaint in Europe". CCIA. 2008-07-02. http://openmainframe.org/news/ibm-tightens-stranglehold-over-mainframe-market-gets-hit-wit.html. Retrieved 2008-07-09.
7.^ "IBM Opens Latin America's First Mainframe Software Center". Enterprise Networks and Servers. August 2007. http://www.enterprisenetworksandservers.com/monthly/art.php?3306.
8.^ "IBM Helps Clients Modernize Applications on the Mainframe". IBM. November 7, 2007. http://www-03.ibm.com/press/us/en/pressrelease/22556.wss.
9.^ "IBM 4Q2009 Financial Report: CFO's Prepared Remarks". IBM. January 19, 2010. http://www.ibm.com/investor/4q09/presentation/4q09prepared.pdf.
10.^ World's Top Supercomputer. Retrieved December 25, 2009.
11.^ Transaction Processing Performance Council. Retrieved December 25, 2009.
[edit] External links
Wikimedia Commons has media related to: Mainframe computers
IBM eServer zSeries mainframe servers
Univac 9400, a mainframe from the 1960s, still in use in a German computer museum
Lectures in the History of Computing: Mainframes
From Wikipedia, the free encyclopediaJump to: navigation, search
For other uses, see Mainframe (disambiguation).
This article has been nominated to be checked for its neutrality. Discussion of this nomination can be found on the talk page. (July 2009)
This article contains weasel words, vague phrasing that often accompanies biased or unverifiable information. Such statements should be clarified or removed. (January 2010)
An IBM 704 mainframeMainframes (often colloquially referred to as Big Iron[1]) are powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.
The term originally referred to the large cabinets that housed the central processing unit and main memory of early computers.[2][3] Later the term was used to distinguish high-end commercial machines from less powerful units.
Most large-scale computer system architectures were firmly established in the 1960s and most large computers were based on architecture established during that era up until the advent of Web servers in the 1990s. (The first Web server running anywhere outside Switzerland ran on an IBM mainframe at Stanford University as early as 1991. See History of the World Wide Web for details.)
There were several minicomputer operating systems and architectures that arose in the 1970s and 1980s, but minicomputers are generally not considered mainframes. (UNIX arose as a minicomputer operating system; Unix has scaled up over the years to acquire some mainframe characteristics.)
Many defining characteristics of "mainframe" were established in the 1960s, but those characteristics continue to expand and evolve to the present day.
Contents [hide]
1 Description
2 Characteristics
3 Market
4 History
5 Differences from supercomputers
6 See also
7 References
8 External links
[edit] Description
Modern mainframe computers have abilities not so much defined by their single task computational speed (usually defined as MIPS — Millions of Instructions Per Second) as by their redundant internal engineering and resulting high reliability and security, extensive input-output facilities, strict backward compatibility with older software, and high utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and hardware upgrades taking place during normal operation.
Software upgrades are only non-disruptive when using facilities such as IBM's z/OS and Parallel Sysplex, with workload sharing so one system can take over another's application while it is being refreshed. More recently, there are several IBM mainframe installations that have delivered over a decade of continuous business service as of 2007, with hardware upgrades not interrupting service.[citation needed] Mainframes are defined by high availability, one of the main reasons for their longevity, because they are typically used in applications where downtime would be costly or catastrophic. The term Reliability, Availability and Serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning (and implementation) is required to exploit these features.
In the 1960s, most mainframes had no interactive interface. They accepted sets of punched cards, paper tape, and/or magnetic tape and operated solely in batch mode to support back office functions, such as customer billing. Teletype devices were also common, at least for system operators. By the early 1970s, many mainframes acquired interactive user interfaces and operated as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. Many mainframes supported graphical terminals (and terminal emulation) but not graphical user interfaces by the 1980s, but end user computing was largely obsoleted in the 1990s by the personal computer. Nowadays most mainframes have partially or entirely phased out classic terminal access for end-users in favor of Web user interfaces. Developers and operational staff typically continue to use terminals or terminal emulators.[citation needed]
Historically, mainframes acquired their name in part because of their substantial size, and because of requirements for specialized heating, ventilation, and air conditioning (HVAC), and electrical power. Those requirements ended by the mid-1990s with CMOS mainframe designs replacing the older bipolar technology. IBM claims its newer mainframes can reduce data center energy costs for power and cooling, and that they can reduce physical space requirements compared to server farms.[4]
[edit] Characteristics
Nearly all mainframes have the ability to run (or host) multiple operating systems, and thereby operate not as a single computer but as a number of virtual machines. In this role, a single mainframe can replace dozens or even hundreds of smaller servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication.
Mainframes can add or hot swap system capacity non disruptively and granularly, to a level of sophistication usually not found on most servers. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs, via the PR/SM facility) and virtual machines (via the z/VM operating system). Some IBM mainframe customers run no more than two machines[citation needed]: one in their primary data center, and one in their backup data center—fully active, partially active, or on standby—in case there is a catastrophe affecting the first building. Test, development, training, and production workload for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages. In practice most customers use multiple mainframes linked by Parallel Sysplex and shared DASD.
Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the mid-1960s, mainframe designs have included several subsidiary computers (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Giga-record or tera-record files are not unusual.[5] Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it much faster.[citation needed] Other server families also offload I/O processing and emphasize throughput computing.
Mainframe return on investment (ROI), like any other computing platform, is dependent on its ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and several other risk-adjusted cost factors. Some argue that the modern mainframe is not cost-effective. Hewlett-Packard and Dell unsurprisingly take that view at least at times, and so do some independent analysts. Sun Microsystems also takes that view, but beginning in 2007 promoted a partnership with IBM which largely focused on IBM support for Solaris on its System x and BladeCenter products (and therefore unrelated to mainframes), but also included positive comments for the company's OpenSolaris operating system being ported to IBM mainframes as part of increasing the Solaris community. Some analysts (such as Gartner[citation needed]) claim that the modern mainframe often has unique value and superior cost-effectiveness, especially for large scale enterprise computing. In fact, Hewlett-Packard also continues to manufacture its own mainframe (arguably), the NonStop system originally created by Tandem. Logical partitioning is now found in many UNIX-based servers, and many vendors are promoting virtualization technologies, in many ways validating the mainframe's design accomplishments while blurring the differences between the different approaches to enterprise computing.
Mainframes also have execution integrity characteristics for fault tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare results, arbitrate between any differences (through instruction retry and failure isolation), then shift workloads "in flight" to functioning processors, including spares, without any impact to operating systems, applications, or users. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.
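As a loose software analogy only (the hardware implements this in circuitry, not in code), the hypothetical Python sketch below illustrates the idea behind duplicated execution with comparison, retry, and failover to a spare unit; all names and figures here are invented for illustration.

def lockstep(work, args, spare, retries=1):
    """Toy model of lock-stepped execution: run the same computation twice,
    accept the result only if both copies agree, retry on mismatch,
    and fall back to a spare unit if the mismatch persists."""
    for _ in range(retries + 1):
        a = work(*args)
        b = work(*args)
        if a == b:
            return a          # results agree: commit
    return spare(*args)       # persistent mismatch: shift the work to a spare

def debit(balance, amount):
    return balance - amount

if __name__ == "__main__":
    print(lockstep(debit, (1000, 250), spare=debit))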
[edit] Market
IBM mainframes dominate the mainframe market at well over 90% market share.[6] Unisys manufactures ClearPath mainframes, based on earlier Sperry and Burroughs product lines. In 2002, Hitachi co-developed the zSeries z800 with IBM to share expenses, but subsequently the two companies have not collaborated on new Hitachi models. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's DPS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe. Fujitsu, Hitachi, and NEC (the "JCMs") still maintain nominal mainframe hardware businesses in their home Japanese market, although they have been slow to introduce new hardware models in recent years.
The amount of vendor investment in mainframe development varies with market share. Unisys, HP, Groupe Bull, Fujitsu, Hitachi, and NEC now rely primarily on commodity Intel CPUs rather than custom processors in order to reduce their development expenses, and they have also cut back their mainframe software development. (Unisys, however, still maintains its own CMOS processor design for certain high-end ClearPath models, although it contracts chip manufacturing to IBM.) In contrast, IBM continues to pursue a business strategy of mainframe investment and growth: it has a large research and development organization designing new, homegrown CPUs, including mainframe processors such as 2008's 4.4 GHz quad-core z10 mainframe microprocessor, and it is rapidly expanding its software business, including its mainframe software portfolio, to seek additional revenue and profits.[7][8]
[edit] History
This section does not cite any references or sources.
Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (July 2008)
Several manufacturers produced mainframe computers from the late 1950s through the 1970s. The group was first known as "IBM and the Seven Dwarfs": IBM, Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric, and RCA. Later, after GE and RCA left the business, the shrinking group was referred to as IBM and the BUNCH (Burroughs, UNIVAC, NCR, Control Data, and Honeywell). IBM's dominance grew out of its 700/7000 series and, later, the development of the System/360 series of mainframes. The latter architecture has continued to evolve into IBM's current zSeries mainframes which, along with the Burroughs (now Unisys) MCP-based mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. That said, while they can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the USA were Siemens and Telefunken in Germany, ICL in the United Kingdom, Olivetti in Italy, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the BESM series and Strela are examples of independently designed Soviet computers.
Shrinking demand and tough competition started a shakeout in the market in the early 1970s: RCA sold its computer business to UNIVAC and GE sold its computer division to Honeywell; in the 1980s Honeywell was bought out by Bull, and UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986. In 1991, AT&T briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems given the IT policies and practices of that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market as mainframe platforms were increasingly replaced by personal computer networks. InfoWorld's Stewart Alsop famously predicted that the last mainframe would be unplugged in 1996.
That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large-scale batch computing. Another factor currently increasing mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999 and is typically run in scores or hundreds of virtual machines on a single mainframe. Linux allows users to take advantage of open-source software combined with mainframe hardware RAS (reliability, availability, and serviceability). Rapid expansion and development in emerging markets, particularly the People's Republic of China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high-volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.). In late 2000 IBM introduced the 64-bit z/Architecture, acquired numerous software companies such as Cognos, and introduced those software products to the mainframe. IBM's quarterly and annual reports in the 2000s usually reported increasing mainframe revenues and capacity shipments. However, IBM's mainframe hardware business has not been immune to the recent overall downturn in the server hardware market or to model cycle effects. For example, in the fourth quarter of 2009, IBM's System z hardware revenues decreased by 27% year over year, while MIPS shipments (a measure of mainframe capacity) increased 4% per year over the past two years.[9]
[edit] Differences from supercomputers
A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers are used for scientific and engineering problems (Grand Challenge problems) which are limited by processing speed and memory size, while mainframes are used for problems which are limited by data movement in input/output devices, reliability, and for handling multiple business transactions concurrently. The differences are as follows:
Mainframe performance is traditionally measured in millions of instructions per second (MIPS), where the typical instructions are integer operations, while supercomputer performance is measured in floating-point operations per second (FLOPS). Examples of integer operations include adjusting inventory counts, matching names, indexing tables of data, and making routine yes-or-no decisions. Floating-point operations are mostly addition, subtraction, and multiplication with enough digits of precision to model continuous phenomena such as weather. In terms of raw computational ability, supercomputers are more powerful.[10]
Mainframes are built to be reliable for transaction processing as it is commonly understood in the business world: a commercial exchange of goods, services, or money. A typical transaction, as defined by the Transaction Processing Performance Council,[11] would include updating a database system for such things as inventory control (goods), airline reservations (services), or banking (money). A transaction could refer to a set of operations including disk reads and writes, operating system calls, or some form of data transfer from one subsystem to another.
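As a minimal sketch of what "a transaction" means at the software level (using Python's built-in sqlite3 module purely for illustration; the table and amounts are invented, and a real mainframe workload would run against DB2, IMS, or a similar system), the transfer below either commits as a unit or rolls back entirely:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 500), (2, 300)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
except sqlite3.Error:
    pass  # the transfer is all-or-nothing: a failure leaves both rows unchanged

print(conn.execute("SELECT id, balance FROM accounts").fetchall())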
[edit] See also
Computer types
[edit] References
1.^ "IBM preps big iron fiesta". The Register. July 20, 2005. http://www.theregister.co.uk/2005/07/20/ibm_mainframe_refresh/.
2.^ Oxford English Dictionary, on-line edition, mainframe, n
3.^ Ebbers, Mike; O’Brien, W.; Ogden, B. (2006). "Introduction to the New Mainframe: z/OS Basics" (pdf). IBM International Technical Support Organization. http://publibz.boulder.ibm.com/zoslib/pdf/zosbasic.pdf. Retrieved 2007-06-01.
4.^ "Get the facts on IBM vs the Competition- The facts about IBM System z "mainframe"". IBM. http://www-03.ibm.com/systems/migratetoibm/getthefacts/mainframe.html#4. Retrieved December 28, 2009.
5.^ "Largest Commercial Database in Winter Corp. TopTen Survey Tops One Hundred Terabytes". Press release. http://www.wintercorp.com/PressReleases/ttp2005_pressrelease_091405.htm. Retrieved 2008-05-16.
6.^ "IBM Tightens Stranglehold Over Mainframe Market; Gets Hit with Antitrust Complaint in Europe". CCIA. 2008-07-02. http://openmainframe.org/news/ibm-tightens-stranglehold-over-mainframe-market-gets-hit-wit.html. Retrieved 2008-07-09.
7.^ "IBM Opens Latin America's First Mainframe Software Center". Enterprise Networks and Servers. August 2007. http://www.enterprisenetworksandservers.com/monthly/art.php?3306.
8.^ "IBM Helps Clients Modernize Applications on the Mainframe". IBM. November 7, 2007. http://www-03.ibm.com/press/us/en/pressrelease/22556.wss.
9.^ "IBM 4Q2009 Financial Report: CFO's Prepared Remarks". IBM. January 19, 2010. http://www.ibm.com/investor/4q09/presentation/4q09prepared.pdf.
10.^ World's Top Supercomputer. Retrieved on December 25, 2009.
11.^ Transaction Processing Performance Council. Retrieved on December 25, 2009.
[edit] External links
Wikimedia Commons has media related to: Mainframe computers
IBM eServer zSeries mainframe servers
Univac 9400, a mainframe from the 1960s, still in use in a German computer museum
Lectures in the History of Computing: Mainframes
Identify two unique features of supercomputers
Supercomputer
For other uses, see Supercomputer (disambiguation).
The Columbia Supercomputer, located at the NASA Ames Research Center.
A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".
Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, which had purchased many of the 1980s companies to gain their experience. As of May 2010, the Cray Jaguar is the fastest supercomputer in the world.
The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard, with typical processor counts in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some of them off-the-shelf units and others custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and on coprocessors such as NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell processors, and FPGAs. Most modern supercomputers are highly tuned computer clusters that combine commodity processors with custom interconnects.
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of airplanes in wind tunnels, simulations of the detonation of nuclear weapons, and research into nuclear fusion). A particular class of problems, known as Grand Challenge problems, consists of problems whose full solution requires computing resources far beyond what is currently practical to provide.
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.
Contents [hide]
1 Hardware and software design
1.1 Supercomputer challenges, technologies
1.2 Processing techniques
1.3 Operating systems
1.4 Programming
1.5 Software tools
2 Modern supercomputer architecture
3 Special-purpose supercomputers
4 The fastest supercomputers today
4.1 Measuring supercomputer speed
4.2 The TOP500 list
4.3 Current fastest supercomputer system
4.4 Quasi-supercomputing
5 Research and development
6 Timeline of supercomputers
7 See also
8 Notes
9 External links
[edit] Hardware and software design
This section does not cite any references or sources.
Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (July 2008)
Processor board of a CRAY Y-MP vector computer
Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
[edit] Supercomputer challenges, technologies
A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many metres across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.
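For example (a rough illustration with made-up numbers, not a measurement of any particular machine), the one-way, speed-of-light lower bound on latency across a machine 10 m wide is

t = \frac{d}{c} = \frac{10\,\mathrm{m}}{3 \times 10^{8}\,\mathrm{m/s}} \approx 33\,\mathrm{ns},

before any switching, serialization, or protocol overhead is added.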
Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
Technologies developed for supercomputers include:
Vector processing
Liquid cooling
Non-Uniform Memory Access (NUMA)
Striped disks (the first instance of what was later called RAID)
Parallel filesystems
[edit] Processing techniques
Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing instructions for general-purpose computers.
Modern video game consoles in particular use SIMD extensively and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several TeraFLOPS. The applications to which this power can be applied was limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).
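As an everyday software analogue of this data-parallel style (NumPy is used purely for illustration; it dispatches to vectorized library kernels rather than to a supercomputer's vector units), one whole-array operation replaces an explicit element-by-element loop:

import numpy as np

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# Scalar style: one multiply-add per loop iteration.
c_loop = np.empty_like(a)
for i in range(a.size):
    c_loop[i] = 2.0 * a[i] + b[i]

# Vector/SIMD style: the same operation expressed over whole arrays at once.
c_vec = 2.0 * a + b

assert np.allclose(c_loop, c_vec)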
The current TOP500 list (from May 2010) includes three supercomputers based on GPGPUs; in particular, the number 2 supercomputer is Nebulae, built by Dawning in China.[1]
[edit] Operating systems
Supercomputers today predominantly run variants of Linux.[2]
Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems; the Cray-1 alone had at least six different proprietary operating systems largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing Fortran compilers existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of computer systems such as Cray's Unicos, or Linux.
[edit] Programming
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI are used for loosely connected clusters and OpenMP for tightly coordinated shared-memory machines. Significant effort is required to optimize a program for the interconnect characteristics of the machine it will run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. The newer massively parallel GPGPUs have hundreds of processor cores and are programmed using models such as CUDA and OpenCL.
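A minimal sketch of the message-passing style in Python, assuming the third-party mpi4py package and an MPI runtime such as Open MPI are available (launched with something like "mpirun -n 4 python pi_mpi.py"); the script, its file name, and its numbers are illustrative only:

# pi_mpi.py - each rank integrates a slice of 4/(1+x^2), then the partial sums are reduced.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 10_000_000                      # total number of rectangles
h = 1.0 / n
local_sum = 0.0
for i in range(rank, n, size):      # each rank takes every size-th rectangle
    x = (i + 0.5) * h
    local_sum += 4.0 / (1.0 + x * x)

pi = comm.reduce(local_sum * h, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~ {pi}")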
[edit] Software tools
Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source-based software solutions such as Beowulf, WareWulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy programming language for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community that often creates disruptive technology.
[edit] Modern supercomputer architecture
This section does not cite any references or sources.
Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (July 2008)
IBM Roadrunner - LANL
The CPU architecture share of TOP500 rankings between 1993 and 2009.
Supercomputers today often have a similar top-level architecture consisting of a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy:
A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an Operating System (OS).
A multiprocessing computer is a computer, operating under a single OS and using more than one CPU, wherein the application-level software is indifferent to the number of processors. The processors share tasks using Symmetric multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
A SIMD processor executes the same instruction on more than one set of data at the same time. The processor may be a general-purpose commodity processor or a special-purpose vector processor, and it may be a high-performance or a low-power design. As of 2007, such processors execute several SIMD instructions per nanosecond.
As of November 2009, the fastest supercomputer in the world is the Cray XT5 Jaguar system at the National Center for Computational Sciences, with more than 19,000 compute nodes and 224,000 processing elements, based on standard AMD processors.
The second-fastest supercomputer, and the fastest heterogeneous (or hybrid) machine, is Dawning's Nebulae in China. This machine is a cluster of 4,640 blade servers, each with one NVIDIA Tesla C2050 (Fermi) GPGPU and two Intel Westmere CPUs. The Tesla GPUs deliver most of the Linpack performance, since each Tesla C2050 has a peak double-precision performance of 515 gigaflops. The most remarkable aspect of hybrid supercomputers such as Nebulae and the IBM Roadrunner (which uses the IBM Cell as a coprocessor) is their comparatively low power consumption: Nebulae, for example, draws 2.55 megawatts and delivers 1.271 petaflops, whereas the number 1 supercomputer, Jaguar (built with AMD Opteron CPUs), consumes 7 megawatts and delivers 1.759 petaflops. This gives Nebulae roughly twice the performance per watt of Jaguar.
In February 2009, IBM also announced work on "Sequoia," planned as a 20-petaflops supercomputer, roughly equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It is slated for deployment in late 2011.[3] Sequoia will be powered by 1.6 million cores (specific 45-nanometer chips in development) and 1.6 petabytes of memory, housed in 96 refrigerator-sized racks spanning roughly 3,000 square feet.[4]
Moore's Law and economies of scale are the dominant factors in supercomputer design. The design concepts that allowed past supercomputers to outperform the desktop machines of their time tended to be gradually incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production. A current-model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; most workloads that required such a supercomputer in the 1990s can now be done on workstations costing less than 4,000 US dollars as of 2010. Supercomputing density is also increasing: desktop supercomputers are becoming available, offering in less than a desktop footprint the computing power that required a large room in 1998.
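As a rough, back-of-the-envelope illustration of why this is plausible (the four-FLOPs-per-cycle figure below is an assumption about such a chip, not a vendor specification), the theoretical peak of a 2.66 GHz quad-core workstation processor can be estimated as

\text{peak} \approx \text{cores} \times \text{clock} \times \frac{\text{FLOPs}}{\text{cycle}} = 4 \times (2.66 \times 10^{9}) \times 4 \approx 42.6\ \text{GFLOPS},

which is comparable to the aggregate peak of an early-1990s vector supercomputer, although sustained performance on real workloads also depends heavily on memory bandwidth and the interconnect.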
In addition, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, in particular, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design, which can be programmed to act as one large computer.
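A toy illustration of this coarse-grained splitting on a single commodity machine, using only Python's standard library (the workload and chunk sizes are arbitrary and chosen only to be CPU-bound):

import math
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division - deliberately compute-heavy."""
    lo, hi = bounds
    return sum(
        1
        for n in range(max(lo, 2), hi)
        if all(n % d for d in range(2, math.isqrt(n) + 1))
    )

if __name__ == "__main__":
    # Split one big range into independent chunks; almost no data moves between workers.
    chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool() as pool:
        print(sum(pool.map(count_primes, chunks)))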
[edit] Special-purpose supercomputers
This section does not cite any references or sources.
Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (July 2008)
Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer, by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular special set of problems.
Examples of special-purpose supercomputers:
Belle, Deep Blue, and Hydra, for playing chess
Reconfigurable computing machines or parts of machines
GRAPE, for astrophysics and molecular dynamics
Deep Crack, for breaking the DES cipher
MDGRAPE-3, for protein structure computation
D. E. Shaw Research Anton, for simulating molecular dynamics [5]
[edit] The fastest supercomputers today
[edit] Measuring supercomputer speed
14 countries account for the vast majority of the world's 500 fastest supercomputers, with over half located in the United States.
In general, the speed of a supercomputer is measured in FLOPS (floating-point operations per second), commonly with an SI prefix such as tera-, combined into the shorthand TFLOPS (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand PFLOPS (10^15 FLOPS, pronounced petaflops). This measurement is based on a particular benchmark, which performs LU decomposition of a large matrix. This mimics a class of real-world problems but is significantly easier to compute than a majority of actual real-world problems.
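The idea behind this kind of measurement can be sketched in a few lines of Python, assuming NumPy and SciPy are installed (this only illustrates the principle and is not the actual HPL/LINPACK benchmark): factor a random n-by-n matrix and divide the roughly 2n^3/3 floating-point operations an LU factorization needs by the elapsed time.

import time
import numpy as np
from scipy.linalg import lu_factor

n = 2000
a = np.random.rand(n, n)

t0 = time.perf_counter()
lu_factor(a)                      # LU decomposition with partial pivoting
elapsed = time.perf_counter() - t0

flops = (2.0 / 3.0) * n ** 3      # approximate operation count for LU
print(f"~{flops / elapsed / 1e9:.1f} GFLOPS sustained on this one factorization")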
"Petascale" supercomputers can process one quadrillion (1015) (1000 trillion) FLOPS. Exascale is computing performance in the exaflops range. An exaflop is one quintillion (1018) FLOPS (one million teraflops).
[edit] The TOP500 list
Main article: TOP500
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
[edit] Current fastest supercomputer system
In November 2009, the AMD Opteron-based Cray XT5 Jaguar at the Oak Ridge National Laboratory was announced as the fastest operational supercomputer, with a sustained processing rate of 1.759 PFLOPS.[6] [7]
[edit] Quasi-supercomputing
A Blue Gene/P node card
Some types of large-scale distributed computing for embarrassingly parallel problems take the clustered supercomputing concept to an extreme.
The fastest cluster, Folding@home, reported over 7.8 petaflops of processing power as of December 2009. Of this, 2.3 petaflops is contributed by clients running on NVIDIA GeForce GPUs, AMD GPUs, and PlayStation 3 systems, and another 5.1 petaflops is contributed by the newly released GPU2 client.[8]
Another distributed computing project is the BOINC platform, which hosts a number of distributed computing projects. As of April 2010, BOINC recorded a processing power of over 5 petaflops through over 580,000 active computers on the network.[9] The most active project (measured by computational power), MilkyWay@home, reports processing power of over 1.4 petaflops through over 30,000 active computers.[10]
As of April 2010, GIMPS's distributed Mersenne prime search achieves about 45 teraflops.[11]
Google's search engine system is also a kind of "quasi-supercomputer", with an estimated total processing power of between 126 and 316 teraflops as of April 2004.[12] In June 2006 the New York Times estimated that the Googleplex and its server farms contain 450,000 servers.[13] According to more recent estimates, the processing power of Google's cluster might reach 20 to 100 petaflops.[14]
The PlayStation 3 Gravity Grid uses a network of 16 machines and exploits the Cell processor for its intended application, binary black hole coalescence using perturbation theory.[15][16] The Cell processor has a main CPU and six floating-point vector processors, giving the cluster a net of 16 general-purpose processors and 96 vector processors. The machine has a one-time cost of $9,000 to build and is adequate for black-hole simulations, which would otherwise cost $6,000 per run on a conventional supercomputer. The black-hole calculations are not memory-intensive and are highly localized, and so are well suited to this architecture.
Other notable computer clusters are the flash mob cluster and the Beowulf cluster. The flash mob cluster allows the use of any computer in the network, while the Beowulf cluster still requires uniform architecture.
[edit] Research and development
IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip".
Other PFLOPS projects include one by Narendra Karmarkar in India,[17] a CDAC effort targeted for 2010,[18] and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be completed by 2011).[19]
In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 petaflops computer, Pleiades, in 2009, scaling up to 10 PFLOPs by 2012.[20] Meanwhile, IBM is constructing a 20 PFLOPs supercomputer at Lawrence Livermore National Laboratory, named Sequoia, which is scheduled to go online in 2011.
Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10^18, or one quintillion FLOPS) in 2019.[21]
Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21, or one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[22] Such systems might be built around 2030.[23]
[edit] Timeline of supercomputers
This is a list of the record-holders for fastest general-purpose supercomputer in the world, and the year each one set the record. For entries prior to 1993, this list refers to various sources[24][citation needed]. From 1993 to present, the list reflects the Top500 listing[25], and the "Peak speed" is given as the "Rmax" rating.
Year  Supercomputer  Peak speed (Rmax)  Location
1938 Zuse Z1 1 OPS Konrad Zuse, Berlin, Germany
1941 Zuse Z3 20 OPS Konrad Zuse, Berlin, Germany
1943 Colossus 1 5 kOPS Post Office Research Station, Bletchley Park, UK
1944 Colossus 2 (Single Processor) 25 kOPS Post Office Research Station, Bletchley Park, UK
1946 Colossus 2 (Parallel Processor) 50 kOPS Post Office Research Station, Bletchley Park, UK
1946 UPenn ENIAC (before 1948+ modifications) 5 kOPS Department of War
Aberdeen Proving Ground, Maryland, USA
1954 IBM NORC 67 kOPS Department of Defense
U.S. Naval Proving Ground, Dahlgren, Virginia, USA
1956 MIT TX-0 83 kOPS Massachusetts Inst. of Technology, Lexington, Massachusetts, USA
1958 IBM AN/FSQ-7 400 kOPS 25 U.S. Air Force sites across the continental USA and 1 site in Canada (52 computers)
1960 UNIVAC LARC 250 kFLOPS Atomic Energy Commission (AEC)
Lawrence Livermore National Laboratory, California, USA
1961 IBM 7030 "Stretch" 1.2 MFLOPS AEC-Los Alamos National Laboratory, New Mexico, USA
1964 CDC 6600 3 MFLOPS AEC-Lawrence Livermore National Laboratory, California, USA
1969 CDC 7600 36 MFLOPS
1974 CDC STAR-100 100 MFLOPS
1975 Burroughs ILLIAC IV 150 MFLOPS NASA Ames Research Center, California, USA
1976 Cray-1 250 MFLOPS Energy Research and Development Administration (ERDA)
Los Alamos National Laboratory, New Mexico, USA (80+ sold worldwide)
1981 CDC Cyber 205 400 MFLOPS (~40 systems worldwide)
1983 Cray X-MP/4 941 MFLOPS U.S. Department of Energy (DoE)
Los Alamos National Laboratory; Lawrence Livermore National Laboratory; Battelle; Boeing
1984 M-13 2.4 GFLOPS Scientific Research Institute of Computer Complexes, Moscow, USSR
1985 Cray-2/8 3.9 GFLOPS DoE-Lawrence Livermore National Laboratory, California, USA
1989 ETA10-G/8 10.3 GFLOPS Florida State University, Florida, USA
1990 NEC SX-3/44R 23.2 GFLOPS NEC Fuchu Plant, Fuchū, Tokyo, Japan
1993 Thinking Machines CM-5/1024 59.7 GFLOPS DoE-Los Alamos National Laboratory; National Security Agency
Fujitsu Numerical Wind Tunnel 124.50 GFLOPS National Aerospace Laboratory, Tokyo, Japan
Intel Paragon XP/S 140 143.40 GFLOPS DoE-Sandia National Laboratories, New Mexico, USA
1994 Fujitsu Numerical Wind Tunnel 170.40 GFLOPS National Aerospace Laboratory, Tokyo, Japan
1996 Hitachi SR2201/1024 220.4 GFLOPS University of Tokyo, Japan
Hitachi/Tsukuba CP-PACS/2048 368.2 GFLOPS Center for Computational Physics, University of Tsukuba, Tsukuba, Japan
1997 Intel ASCI Red/9152 1.338 TFLOPS DoE-Sandia National Laboratories, New Mexico, USA
1999 Intel ASCI Red/9632 2.3796 TFLOPS
2000 IBM ASCI White 7.226 TFLOPS DoE-Lawrence Livermore National Laboratory, California, USA
2002 NEC Earth Simulator 35.86 TFLOPS Earth Simulator Center, Yokohama, Japan
2004 IBM Blue Gene/L 70.72 TFLOPS DoE/IBM Rochester, Minnesota, USA
2005 IBM Blue Gene/L 136.8 TFLOPS DoE/U.S. National Nuclear Security Administration,
Lawrence Livermore National Laboratory, California, USA
2005 IBM Blue Gene/L 280.6 TFLOPS DoE-Lawrence Livermore National Laboratory, California, USA
2007 IBM Blue Gene/L 478.2 TFLOPS DoE-Lawrence Livermore National Laboratory, California, USA
2008 IBM Roadrunner 1.026 PFLOPS DoE-Los Alamos National Laboratory, New Mexico, USA
2008 IBM Roadrunner 1.105 PFLOPS DoE-Los Alamos National Laboratory, New Mexico, USA
2009 Cray Jaguar 1.759 PFLOPS DoE-Oak Ridge National Laboratory, Tennessee, USA
[edit] See also
The Journal of Supercomputing
[edit] Notes
1.^ Nebulae #2 Supercomputer built with NVIDIA Tesla GPGPUs
2.^ a b Top500 OS chart
3.^ IBM to build new monster supercomputer By Tom Jowitt , TechWorld , 02/04/2009
4.^ www-03.ibm.com/press/us/en/pressrelease/26599.wss
5.^ D.E. Shaw Research Anton
6.^ "Jaguar supercomputer races past Roadrunner in Top500". cnet.com. 15. http://news.cnet.com/8301-31021_3-10397627-260.html.
7.^ "Oak Ridge 'Jaguar' Supercomputer Is World's Fastest". sciencedaily.com. 17. http://www.sciencedaily.com/releases/2009/11/091116204229.htm.
8.^ Folding@home: OS Statistics, Stanford University, http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats, retrieved 2009-12-06
9.^ BOINCstats: BOINC Combined, BOINC, http://www.boincstats.com/stats/project_graph.php?pr=bo, retrieved 2010-04-13. Note: this link gives current statistics, not those on the date last accessed.
10.^ BOINCstats: MilkyWay@home, BOINC, http://boincstats.com/stats/project_graph.php?pr=milkyway, retrieved 2010-03-05. Note: this link gives current statistics, not those on the date last accessed.
11.^ PrimeNet 5.0, http://mersenne.org/primenet, retrieved 2010-04-13
12.^ How many Google machines, April 30, 2004
13.^ Markoff, John; Hensell, Saul (June 14, 2006). "Hiding in Plain Sight, Google Seeks More Power". New York Times. http://www.nytimes.com/2006/06/14/technology/14search.html. Retrieved 2008-03-16.
14.^ Google Surpasses Supercomputer Community, Unnoticed?, May 20, 2008.
15.^ "PlayStation 3 tackles black hole vibrations", by Tariq Malik, January 28, 2009, MSNBC
16.^ PlayStation3 Gravity Grid
17.^ Athley, Gouri Agtey; Rajeshwari Adappa (30 October, 2006). ""Tatas get Karmakar to make super comp"". The Economic Times. http://economictimes.indiatimes.com/articleshow/msid-225517,curpg-2.cms. Retrieved 2008-03-16.
18.^ C-DAC's Param programme sets to touch 10 teraflops by late 2007 and a petaflops by 2010.[dead link]
19.^ ""National Science Board Approves Funds for Petascale Computing Systems"". U.S. National Science Foundation. August 10, 2007. http://www.nsf.gov/news/news_summ.jsp?cntn_id=109850. Retrieved 2008-03-16.
20.^ "NASA collaborates with Intel and SGI on forthcoming petaflops super computers". Heise online. 2008-05-09. http://www.heise.de/english/newsticker/news/107683.
21.^ Thibodeau, Patrick (2008-06-10). "IBM breaks petaflop barrier". InfoWorld. http://www.infoworld.com/article/08/06/10/IBM_breaks_petaflop_barrier_1.html.
22.^ DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing". Proceedings of the 2nd conference on Computing frontiers. pp. 391–402. ISBN 1595930191. http://portal.acm.org/citation.cfm?id=1062325.
23.^ "IDF: Intel says Moore's Law holds until 2029". Heise Online. 2008-04-04. http://www.heise.de/english/newsticker/news/106017.
24.^ CDC timeline at Computer History Museum
25.^ Directory page for Top500 lists. Result for each list since June 1993
[edit] External links
From Wikipedia, the free encyclopediaJump to: navigation, search
For other uses, see Supercomputer (disambiguation).
The Columbia Supercomputer, located at the NASA Ames Research Center.A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".
Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience. As of May 2010[update], the Cray Jaguar is the fastest supercomputer in the world.
The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel to become the standard. Typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massive parallel processing systems with thousands of "ordinary" CPUs, some being off the shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, FPGAs. Most modern supercomputers are now highly-tuned computer clusters using commodity processors combined with custom interconnects.
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion). A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires semi-infinite computing resources.
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.
Contents [hide]
1 Hardware and software design
1.1 Supercomputer challenges, technologies
1.2 Processing techniques
1.3 Operating systems
1.4 Programming
1.5 Software tools
2 Modern supercomputer architecture
3 Special-purpose supercomputers
4 The fastest supercomputers today
4.1 Measuring supercomputer speed
4.2 The TOP500 list
4.3 Current fastest supercomputer system
4.4 Quasi-supercomputing
5 Research and development
6 Timeline of supercomputers
7 See also
8 Notes
9 External links
[edit] Hardware and software design
This section does not cite any references or sources.
Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (July 2008)
Processor board of a CRAY YMP vector computerSupercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
[edit] Supercomputer challenges, technologies
A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many metres across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.
Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
Technologies developed for supercomputers include:
Vector processing
Liquid cooling
Non-Uniform Memory Access (NUMA)
Striped disks (the first instance of what was later called RAID)
Parallel filesystems
[edit] Processing techniques
Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing instructions for general-purpose computers.
Modern video game consoles in particular use SIMD extensively and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several TeraFLOPS. The applications to which this power can be applied was limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).
The current Top500 list (from May 2010) has 3 supercomputers based on GPGPUs. In particular, the number 2 supercomputer is Nebulae built by Dawning in China[1].
[edit] Operating systems
Supercomputers predominantly run a variant of Linux.[2]Supercomputers today most often use variants of Linux[2].
Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. In similar manner, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of computer systems such as Cray's Unicos, or Linux.
[edit] Programming
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize a problem for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. The new massively parallel GPGPUs have 100s of processor cores and are programmed using programming models such as CUDA and OpenCL.
[edit] Software tools
Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source-based software solutions such as Beowulf, WareWulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy programming language for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community that often creates disruptive technology.
[edit] Modern supercomputer architecture
This section does not cite any references or sources.
Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (July 2008)
IBM Roadrunner - LANL
The CPU Architecture Share of Top500 Rankings between 1993 and 2009.Supercomputers today often have a similar top-level architecture consisting of a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:
A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an Operating System (OS).
A multiprocessing computer is a computer, operating under a single OS and using more than one CPU, wherein the application-level software is indifferent to the number of processors. The processors share tasks using Symmetric multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
A SIMD processor executes the same instruction on more than one set of data at the same time. The processor could be a general purpose commodity processor or special-purpose vector processor. It could also be high-performance processor or a low power processor. As of 2007, the processor executes several SIMD instructions per nanosecond.
As of November 2009 the fastest supercomputer in the world is the Cray XT5 Jaguar system at National Center for Computational Sciences with more than 19000 computers and 224,000 processing elements, based on standard AMD processors.
The second fastest supercomputer and the fastest heterogeneous (or hybrid) machine is Dawning Nebulae in China. This machine is a cluster of 4640 blade servers, each with 1 NVIDIA Tesla C2050 (Fermi) GPGPU and 2 Intel Westmere CPUs. The Tesla GPUs deliver most of the Linpack performance, since each Tesla C2050 GPU has 515 Gigaflops peak double precision performance. The most remarkable thing about the hybrid supercomputers like Nebulae and the IBM Roadrunner (uses IBM Cell as coprocessor) is the low power of these systems. Nebulae for example is 2.55 Megawatts power and delivers 1.271 Petaflops/s compared to the number 1 supercomputer Jaguar (made using AMD Opteron CPUs) that consumes 7 Megawatt power and delivers 1.759 Petaflops/s. This makes Nebulae two times higher performance per watt compared to Jaguar.
In February 2009, IBM also announced work on "Sequoia," which appears to be a 20 petaflops supercomputer. This will be equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It is slated for deployment in late 2011. [3] The Sequoia will be powered by 1.6 million cores (specific 45-nanometer chips in development) and 1.6 petabytes of memory. It will be housed in 96 refrigerators spanning roughly 3,000 square feet [4] .
Moore's Law and economies of scale are the dominant factors in supercomputer design. The design concepts that allowed past supercomputers to out-perform desktop machines of the time tended to be gradually incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production. A current model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion dollar Cray C90 supercomputer used in the early 1990s; most workloads requiring such a supercomputer in the 1990s can be done on workstations costing less than 4,000 US dollars as of 2010. Supercomputing is taking a step of increasing density, allowing for desktop supercomputers to become available, offering the computer power that in 1998 required a large room to require less than a desktop footprint.
In addition, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, in particular, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design, which can be programmed to act as one large computer.
[edit] Special-purpose supercomputers
This section does not cite any references or sources.
Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (July 2008)
Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer, by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular special set of problems.
Examples of special-purpose supercomputers:
Belle, Deep Blue, and Hydra, for playing chess
Reconfigurable computing machines or parts of machines
GRAPE, for astrophysics and molecular dynamics
Deep Crack, for breaking the DES cipher
MDGRAPE-3, for protein structure computation
D. E. Shaw Research Anton, for simulating molecular dynamics [5]
[edit] The fastest supercomputers today
[edit] Measuring supercomputer speed
14 countries account for the vast majority of the world's 500 fastest supercomputers, with over half being located in the United States.In general, the speed of a supercomputer is measured in "FLOPS" (FLoating Point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (1012 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (1015 FLOPS, pronounced petaflops.) This measurement is based on a particular benchmark, which does LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.
"Petascale" supercomputers can process one quadrillion (1015) (1000 trillion) FLOPS. Exascale is computing performance in the exaflops range. An exaflop is one quintillion (1018) FLOPS (one million teraflops).
[edit] The TOP500 list
Main article: TOP500
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
[edit] Current fastest supercomputer system
In November 2009, the AMD Opteron-based Cray XT5 Jaguar at the Oak Ridge National Laboratory was announced as the fastest operational supercomputer, with a sustained processing rate of 1.759 PFLOPS.[6] [7]
[edit] Quasi-supercomputing
A Blue Gene/P node cardSome types of large-scale distributed computing for embarrassingly parallel problems take the clustered supercomputing concept to an extreme.
The fastest cluster, Folding@home, reported over 7.8 petaflops of processing power as of December 2009. Of this, 2.3 petaflops of this processing power is contributed by clients running on NVIDIA GeForce GPUs, AMD GPUs, PlayStation 3 systems and another 5.1 petaflops is contributed by their newly released GPU2 client.[8]
Another distributed computing project is the BOINC platform, which hosts a number of distributed computing projects. As of April 2010[update], BOINC recorded a processing power of over 5 petaflops through over 580,000 active computers on the network.[9] The most active project (measured by computational power), MilkyWay@home, reports processing power of over 1.4 petaflops through over 30,000 active computers.[10]
As of April 2010, GIMPS's distributed Mersenne prime search achieves about 45 teraflops.[11]
Google's search engine system is also sometimes described as a "quasi-supercomputer", with an estimated total processing power of between 126 and 316 teraflops as of April 2004.[12] In June 2006 the New York Times estimated that the Googleplex and its server farms contain 450,000 servers.[13] According to more recent estimates, the processing power of Google's cluster might reach 20 to 100 petaflops.[14]
The PlayStation 3 Gravity Grid uses a network of 16 machines and exploits the Cell processor for its intended application, binary black hole coalescence using perturbation theory.[15][16] Each Cell processor has a main CPU and six floating-point vector processors, giving the machine a net of 16 general-purpose processors and 96 vector processors. The machine cost a one-time $9,000 to build and is adequate for black-hole simulations that would otherwise cost about $6,000 per run on a conventional supercomputer. The black hole calculations are not memory-intensive and are highly localized, so they are well suited to this architecture.
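The processor counts quoted above follow directly from the Cell layout; a one-line check in Python (the variable names are just for illustration):

machines = 16
vector_units_per_cell = 6   # floating-point vector processors per Cell chip
print(machines, "general-purpose CPUs,", machines * vector_units_per_cell, "vector processors")
# prints: 16 general-purpose CPUs, 96 vector processors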
Other notable computer clusters are the flash mob cluster and the Beowulf cluster. The flash mob cluster allows the use of any computer in the network, while the Beowulf cluster requires a uniform architecture.
[edit] Research and development
IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip".
Other PFLOPS projects include one by Narendra Karmarkar in India,[17] a CDAC effort targeted for 2010,[18] and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be completed by 2011).[19]
In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 petaflops computer, Pleiades, in 2009, scaling up to 10 PFLOPS by 2012.[20] Meanwhile, IBM is constructing a 20 PFLOPS supercomputer at Lawrence Livermore National Laboratory, named Sequoia, which is scheduled to go online in 2011.
Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10^18, one quintillion FLOPS) in 2019.[21]
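As a rough sanity check on that projection (a back-of-the-envelope sketch, not a figure taken from this article), one can extrapolate from the 1.759 PFLOPS Jaguar result of November 2009, assuming the roughly 13-month performance-doubling time historically observed in the TOP500 list; the doubling time is an assumption here:

import math

current = 1.759e15          # FLOPS, Cray Jaguar, November 2009
target = 1e18               # 1 exaflops
doubling_years = 13 / 12    # assumed doubling time (TOP500 trend, approximate)

doublings = math.log2(target / current)   # about 9.2 doublings needed
years = doublings * doubling_years
print(f"{doublings:.1f} doublings -> about {years:.0f} years, i.e. around {2009 + round(years)}")
# prints roughly: 9.2 doublings -> about 10 years, i.e. around 2019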
Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21, one sextillion FLOPS) computer is required to accomplish full weather modeling that could accurately cover a two-week time span.[22] Such systems might be built around 2030.[23]
[edit] Timeline of supercomputers
This is a list of the record-holders for fastest general-purpose supercomputer in the world, and the year each one set the record. For entries prior to 1993, the list draws on various sources[24][citation needed]. From 1993 to the present, it reflects the TOP500 listing,[25] and the "Peak speed" is given as the "Rmax" rating.
Year | Supercomputer | Peak speed (Rmax) | Location
1938 | Zuse Z1 | 1 OPS | Konrad Zuse, Berlin, Germany
1941 | Zuse Z3 | 20 OPS | Konrad Zuse, Berlin, Germany
1943 | Colossus 1 | 5 kOPS | Post Office Research Station, Bletchley Park, UK
1944 | Colossus 2 (Single Processor) | 25 kOPS | Post Office Research Station, Bletchley Park, UK
1946 | Colossus 2 (Parallel Processor) | 50 kOPS | Post Office Research Station, Bletchley Park, UK
1946 | UPenn ENIAC (before 1948+ modifications) | 5 kOPS | Department of War, Aberdeen Proving Ground, Maryland, USA
1954 | IBM NORC | 67 kOPS | Department of Defense, U.S. Naval Proving Ground, Dahlgren, Virginia, USA
1956 | MIT TX-0 | 83 kOPS | Massachusetts Inst. of Technology, Lexington, Massachusetts, USA
1958 | IBM AN/FSQ-7 | 400 kOPS | 25 U.S. Air Force sites across the continental USA and 1 site in Canada (52 computers)
1960 | UNIVAC LARC | 250 kFLOPS | Atomic Energy Commission (AEC), Lawrence Livermore National Laboratory, California, USA
1961 | IBM 7030 "Stretch" | 1.2 MFLOPS | AEC-Los Alamos National Laboratory, New Mexico, USA
1964 | CDC 6600 | 3 MFLOPS | AEC-Lawrence Livermore National Laboratory, California, USA
1969 | CDC 7600 | 36 MFLOPS | AEC-Lawrence Livermore National Laboratory, California, USA
1974 | CDC STAR-100 | 100 MFLOPS | AEC-Lawrence Livermore National Laboratory, California, USA
1975 | Burroughs ILLIAC IV | 150 MFLOPS | NASA Ames Research Center, California, USA
1976 | Cray-1 | 250 MFLOPS | Energy Research and Development Administration (ERDA), Los Alamos National Laboratory, New Mexico, USA (80+ sold worldwide)
1981 | CDC Cyber 205 | 400 MFLOPS | (~40 systems worldwide)
1983 | Cray X-MP/4 | 941 MFLOPS | U.S. Department of Energy (DoE): Los Alamos National Laboratory; Lawrence Livermore National Laboratory; Battelle; Boeing
1984 | M-13 | 2.4 GFLOPS | Scientific Research Institute of Computer Complexes, Moscow, USSR
1985 | Cray-2/8 | 3.9 GFLOPS | DoE-Lawrence Livermore National Laboratory, California, USA
1989 | ETA10-G/8 | 10.3 GFLOPS | Florida State University, Florida, USA
1990 | NEC SX-3/44R | 23.2 GFLOPS | NEC Fuchu Plant, Fuchū, Tokyo, Japan
1993 | Thinking Machines CM-5/1024 | 59.7 GFLOPS | DoE-Los Alamos National Laboratory; National Security Agency
1993 | Fujitsu Numerical Wind Tunnel | 124.50 GFLOPS | National Aerospace Laboratory, Tokyo, Japan
1993 | Intel Paragon XP/S 140 | 143.40 GFLOPS | DoE-Sandia National Laboratories, New Mexico, USA
1994 | Fujitsu Numerical Wind Tunnel | 170.40 GFLOPS | National Aerospace Laboratory, Tokyo, Japan
1996 | Hitachi SR2201/1024 | 220.4 GFLOPS | University of Tokyo, Japan
1996 | Hitachi/Tsukuba CP-PACS/2048 | 368.2 GFLOPS | Center for Computational Physics, University of Tsukuba, Tsukuba, Japan
1997 | Intel ASCI Red/9152 | 1.338 TFLOPS | DoE-Sandia National Laboratories, New Mexico, USA
1999 | Intel ASCI Red/9632 | 2.3796 TFLOPS | DoE-Sandia National Laboratories, New Mexico, USA
2000 | IBM ASCI White | 7.226 TFLOPS | DoE-Lawrence Livermore National Laboratory, California, USA
2002 | NEC Earth Simulator | 35.86 TFLOPS | Earth Simulator Center, Yokohama, Japan
2004 | IBM Blue Gene/L | 70.72 TFLOPS | DoE/IBM Rochester, Minnesota, USA
2005 | IBM Blue Gene/L | 136.8 TFLOPS | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2005 | IBM Blue Gene/L | 280.6 TFLOPS | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2007 | IBM Blue Gene/L | 478.2 TFLOPS | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2008 | IBM Roadrunner | 1.026 PFLOPS | DoE-Los Alamos National Laboratory, New Mexico, USA
2008 | IBM Roadrunner | 1.105 PFLOPS | DoE-Los Alamos National Laboratory, New Mexico, USA
2009 | Cray Jaguar | 1.759 PFLOPS | DoE-Oak Ridge National Laboratory, Tennessee, USA
[edit] See also
The Journal of Supercomputing
[edit] Notes
1.^ Nebulae #2 Supercomputer built with NVIDIA Tesla GPGPUs
2.^ Top500 OS chart
3.^ "IBM to build new monster supercomputer", by Tom Jowitt, TechWorld, 02/04/2009
4.^ www-03.ibm.com/press/us/en/pressrelease/26599.wss
5.^ D.E. Shaw Research Anton
6.^ "Jaguar supercomputer races past Roadrunner in Top500". cnet.com. 15. http://news.cnet.com/8301-31021_3-10397627-260.html.
7.^ "Oak Ridge 'Jaguar' Supercomputer Is World's Fastest". sciencedaily.com. 17. http://www.sciencedaily.com/releases/2009/11/091116204229.htm.
8.^ Folding@home: OS Statistics, Stanford University, http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats, retrieved 2009-12-06
9.^ BOINCstats: BOINC Combined, BOINC, http://www.boincstats.com/stats/project_graph.php?pr=bo, retrieved 2010-04-13. Note: this link gives current statistics, not those on the date last accessed.
10.^ BOINCstats: MilkyWay@home, BOINC, http://boincstats.com/stats/project_graph.php?pr=milkyway, retrieved 2010-03-05. Note: this link gives current statistics, not those on the date last accessed.
11.^ PrimeNet 5.0, http://mersenne.org/primenet, retrieved 2010-04-13
12.^ How many Google machines, April 30, 2004
13.^ Markoff, John; Hensell, Saul (June 14, 2006). "Hiding in Plain Sight, Google Seeks More Power". New York Times. http://www.nytimes.com/2006/06/14/technology/14search.html. Retrieved 2008-03-16.
14.^ Google Surpasses Supercomputer Community, Unnoticed?, May 20, 2008.
15.^ "PlayStation 3 tackles black hole vibrations", by Tariq Malik, January 28, 2009, MSNBC
16.^ PlayStation3 Gravity Grid
17.^ Athley, Gouri Agtey; Rajeshwari Adappa (30 October 2006). "Tatas get Karmakar to make super comp". The Economic Times. http://economictimes.indiatimes.com/articleshow/msid-225517,curpg-2.cms. Retrieved 2008-03-16.
18.^ C-DAC's Param programme sets to touch 10 teraflops by late 2007 and a petaflops by 2010.[dead link]
19.^ ""National Science Board Approves Funds for Petascale Computing Systems"". U.S. National Science Foundation. August 10, 2007. http://www.nsf.gov/news/news_summ.jsp?cntn_id=109850. Retrieved 2008-03-16.
20.^ "NASA collaborates with Intel and SGI on forthcoming petaflops super computers". Heise online. 2008-05-09. http://www.heise.de/english/newsticker/news/107683.
21.^ Thibodeau, Patrick (2008-06-10). "IBM breaks petaflop barrier". InfoWorld. http://www.infoworld.com/article/08/06/10/IBM_breaks_petaflop_barrier_1.html.
22.^ DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing". Proceedings of the 2nd conference on Computing frontiers. pp. 391–402. ISBN 1595930191. http://portal.acm.org/citation.cfm?id=1062325.
23.^ "IDF: Intel says Moore's Law holds until 2029". Heise Online. 2008-04-04. http://www.heise.de/english/newsticker/news/106017.
24.^ CDC timeline at Computer History Museum
25.^ Directory page for Top500 lists. Result for each list since June 1993
[edit] External links