The International Supercomputing Conference (ISC) 2011 kicks off today in Hamburg, where parallel systems and programming are among the major topics.
Parallel programming lets software developers exploit the ever-increasing number of processor cores in modern chip architectures to speed up program execution.
Software developers can then address problems requiring high-performance computing (HPC), such as biological simulations of large molecules important to drug design.
Processor vendors including AMD, IBM, Intel and Sun all offer multi-core processor architectures, ranging from dual-core parts up to the 16 cores of Sun's UltraSPARC T3 processors.
Last week Microsoft unveiled new C++ compiler features called C++ Accelerated Massive Parallelism (C++ AMP) at the AMD Fusion Developer Summit in Washington, aiming to simplify parallel programming for developers.
Sessions at ISC 2011 covering parallel computing systems and software include: parallelisation of ISVs' software packages; how to teach parallel programming for millions of cores; many-core computing — challenges and opportunities for HPC; and parallel filesystems.
Other presentations at the conference will address broader aspects of supercomputing: its impact on science and society, how cloud computing fits with HPC, and how the EU will standardise on HPC systems.