Campbell: Supercomputers as servers

To the average computer user, the supercomputer is the stuff of Hollywood dreams.

Supercomputers are giant room-sized devices that perform fantastical feats of processing.

By comparison, the average server is just a beige or black box that sits and provides web pages and is all-around uninteresting.

If Advanced Micro Devices has anything to do with it, the average person will consider that beige box a supercomputer by around 2012.

AMD, which owns the graphics card producer ATI, is continuing its push to bring Graphics Processing Units, or GPUs, into everyday computing.

GPUs are designed to tackle large amounts of mathematical processing in parallel; it takes quite a bit of parallel mathematical calculation to render complex graphics in today’s most demanding games.
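To see why graphics work suits this design, consider that each pixel on the screen can usually be computed without looking at any other pixel. The sketch below, in plain Python rather than actual GPU code, shows that kind of workload; the function names are illustrative, not any real graphics API.

```python
# A toy illustration of data-parallel work, the kind GPUs excel at.
# Each output pixel depends only on its own input, so in principle
# all of them could be computed simultaneously.

def shade_pixel(value):
    """Per-pixel brightness boost -- independent of every other pixel."""
    return min(255, int(value * 1.2))

def shade_serial(pixels):
    # A CPU-style loop handles one pixel after another; a GPU would
    # hand each pixel to its own processing element instead.
    return [shade_pixel(p) for p in pixels]

print(shade_serial([0, 100, 200, 250]))
```

Because no call to `shade_pixel` depends on any other, the loop's iterations can be spread across thousands of GPU cores without changing the result.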

The CPU, or Central Processing Unit, with which most people are more comfortable, is designed to perform generic computing, allowing users to process information in unpredictable ways by responding to their inputs.

The Intel CPU in the computer used to type this article must handle the relatively calm process of entering text into Microsoft Word, but it must also be ready to perform the many iterative calculations involved in running tax software; all it takes is a double-click on an icon.

CPUs have progressively become more parallel, with both AMD and Intel exploiting many techniques to increase the efficiency, and thereby the speed, of their chips. There is a limit, however, and AMD sees an opportunity to squeeze more performance out of the system as a whole by shifting some of the CPU’s responsibilities to a series of parallel GPU chips.

While other companies have had this idea — most notably Nvidia, which has produced multi-core graphics processors — AMD appears to have somewhat bigger plans.

AMD hopes to begin adding GPU parallel cores to CPU-centric systems within the next two years. Eventually, according to Agam Shah, the article’s author, AMD will attempt to “de-emphasize” the CPU core, removing it from many responsibilities and allowing the GPUs to take over the lion’s share of the data processing.

AMD has also embraced OpenCL, an open standard for parallel programming, supplying tools that help developers take advantage of the parallel nature of a system with multiple GPU cores.
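The core idea in OpenCL is that the programmer writes a small "kernel" that runs once per work-item, and the runtime decides which items execute in parallel on the GPU. The following is a rough emulation of that model in plain Python; real OpenCL kernels are written in a C dialect and launched through its API, so the names here are stand-ins, not OpenCL calls.

```python
# Rough emulation of OpenCL's execution model in plain Python.
# The "kernel" is written from the point of view of a single work-item,
# identified by its global index.

def vector_add_kernel(global_id, a, b, out):
    # Body of the kernel: each work-item handles exactly one index.
    out[global_id] = a[global_id] + b[global_id]

def launch(kernel, global_size, *args):
    # Stand-in for the runtime dispatching work-items. On a GPU these
    # iterations would run concurrently; here we just loop.
    for gid in range(global_size):
        kernel(gid, *args)

a = [1.0, 2.0, 3.0]
b = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(vector_add_kernel, 3, a, b, out)
print(out)
```

Writing code in this per-work-item style is what lets the runtime scale the same program from one core to thousands without the programmer managing threads by hand.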

While general-purpose computers, like the one sitting on your desk, may eventually benefit from this, it is important to note that AMD seems to have its eye on the server market.

Servers do tend to have a bit more predictability in their program execution, as they are commonly designed and set up to serve a specific purpose. They are not fully at the whim of a user with some icons on the desktop.

Many servers, with the exception of web and file servers, spend a good amount of their time doing mathematical calculations. They are perfect candidates for GPU conversion.

AMD will have to make a concerted effort, however, to prevent its efforts in GPU conversion from being relegated to niche markets.

GPUs are excellent at speeding up programs that are designed to utilize them. Programs written in a strictly linear manner, without many steps that could be executed in parallel, will not gain efficiency from parallel GPUs. If developers fail to adopt OpenCL, AMD’s efforts could prove superfluous: programs written for AMD’s systems would execute in much the same way as if they were run on a simple CPU.
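The difference between code that can and cannot exploit parallel hardware often comes down to whether each step depends on the one before it. A short Python sketch, with illustrative names, makes the contrast concrete:

```python
# Why strictly serial code gains nothing from parallel hardware.

def running_total(xs):
    # Loop-carried dependency: each iteration needs the result of the
    # previous one, so the steps cannot run simultaneously.
    total, out = 0, []
    for x in xs:
        total += x
        out.append(total)
    return out

def squares(xs):
    # Every element is independent of the others, so a GPU could
    # compute all of them at once.
    return [x * x for x in xs]

print(running_total([1, 2, 3, 4]))  # serial by nature
print(squares([1, 2, 3, 4]))        # trivially parallel
```

A program dominated by dependencies like the running total will run at essentially CPU speed no matter how many GPU cores sit idle beside it.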

Many applications, including file and web servers, involve little of the heavy parallel arithmetic that GPUs accelerate and would be a poor test bench for GPU performance.

Scientists will most likely be the first to benefit, as scientific computing is more often than not highly mathematical and, if written correctly, highly parallel. If AMD’s efforts successfully influence the development of “standard servers” rather than just supercomputers, scientists will undoubtedly be able to afford far more computing power.

Keeping AMD’s efforts from becoming a scientific niche will require tireless promotion of good parallel programming practices among non-scientific programmers. AMD will have to continually push its OpenCL tools for developing non-scientific servers and, perhaps eventually, desktop applications.

Until the day GPUs conquer the desktop, take comfort that your computer has already begun to do some of what AMD and others have envisioned. When you watch a movie, much of the rendering work is done on your graphics card, leaving your general-purpose CPU to wait for your next double-click.

Pitt News Staff
