David Yip – New Technologies Business Development Manager, OCF plc
Incorporating Graphics Processing Units (GPUs) into the HPC system environment delivers a rare opportunity: lower costs and higher compute performance from a commodity product.
With data that naturally lends itself to GPU processing, and a long-standing belief in the benefits of HPC systems, organisations in the finance and oil & gas industries are the early pioneers. They are already investigating GPU processors and proactively approaching HPC system integrators such as OCF (www.ocf.co.uk) to test them.
However, a word of warning for those pioneering organisations: despite GPUs having their roots in games consoles, working with GPUs in an HPC system environment is not entirely child's play. Adding the physical hardware – the GPU itself – is relatively easy, but the difficulties arise when organisations want to put the GPU to work. Organisations should carefully consider the following steps:
- Take a Staged Approach
Firstly, organisations considering GPUs should start by buying a single GPU (available from any PC World) and using and testing it. If they find they are getting good performance from a single GPU and their software applications require more memory, they should purchase a professional computational GPU – essentially the same as a consumer GPU but without the graphics output – available from integrators like OCF. If they still require more performance, they should then consider building a full cluster of GPUs (an HPC system), in partnership with an experienced HPC system integrator.
- Look for Applications that can use the GPU as an Accelerator
More and more software application vendors have come to realise that GPUs provide an extra, powerful resource, and have written extensions or plug-ins that let their existing applications take advantage of GPUs where possible. For ease and simplicity, organisations should in the first instance look for and use these applications. Examples already on the market include Adobe Photoshop CS4 and MATLAB from The MathWorks.
- Be Prepared To Fine Tune Applications
GPU processors are not designed for general-purpose use, so organisations will never be able to run, say, a Microsoft Excel spreadsheet on one straight off. Without extensions or plug-ins, most organisations will need to fine-tune or, more likely, re-architect existing software applications to make use of GPUs. Unfortunately, this can be complex.
Organisations must examine applications and see where they can gain the best performance from the GPU – this can require a huge amount of time and investment from the organisation.
Organisations must look at the algorithms in the application and see where they can offload the calculations onto the GPU.
As organisations demand that applications run faster and faster, they must equally invest more time and resources into programming them.
Organisations should work closely with software application developers to achieve this – the people that write the application are the people best placed to fine tune it.
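To make the idea of offloading a calculation concrete, here is a minimal sketch using NVIDIA's CUDA (one free GPU programming interface; the array contents, the scaling operation and the sizes are purely illustrative, not taken from any real application). The CPU loop that would have scaled the array becomes a grid of GPU threads, one per element:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Each thread handles one array element -- the "offloaded" calculation.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;                 // illustrative problem size
    size_t bytes = n * sizeof(float);

    float *h_data = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i)
        h_data[i] = 1.0f;

    float *d_data;
    cudaMalloc(&d_data, bytes);
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

    // The CPU loop "for (i = 0; i < n; ++i) data[i] *= factor;"
    // becomes a single kernel launch across (n/256) blocks of 256 threads.
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);

    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
    printf("data[0] = %f\n", h_data[0]);

    cudaFree(d_data);
    free(h_data);
    return 0;
}
```

Even in this toy case, note that two host-to-device transfers bracket the kernel – the cost of those transfers is exactly the kind of thing the fine-tuning work above must weigh against the speed-up of the calculation itself.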
- Carefully Consider APIs
On the flip side, free Application Programming Interfaces (APIs) are available from graphics card manufacturers keen for their cards to be used as a compute resource, and these make it much easier to integrate software application code with GPUs. But that is only a first step.
However, organisations must still know the GPU hardware very well (its memory architecture, for example) and understand how their applications map onto it to gain the best performance.
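As a small example of what "knowing the hardware" means in practice, the CUDA runtime API can report the properties – memory sizes, thread limits, processor counts – that an application must be tuned around (this is a generic query, not tied to any particular card):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query the first GPU in the system

    printf("Device:            %s\n",  prop.name);
    printf("Global memory:     %zu MB\n", (size_t)(prop.totalGlobalMem >> 20));
    printf("Shared mem/block:  %zu KB\n", (size_t)(prop.sharedMemPerBlock >> 10));
    printf("Threads per block: %d\n",  prop.maxThreadsPerBlock);
    printf("Multiprocessors:   %d\n",  prop.multiProcessorCount);
    return 0;
}
```

Figures such as the shared memory per block and maximum threads per block directly constrain how an application's algorithms can be mapped onto the GPU.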
- Remember Memory
To gain the best performance from a software application on GPUs, memory management is imperative – in particular, coalescing memory accesses, managing memory usage on the device, and managing memory transfers from the host system.
Organisations must take advantage of the faster memory available on a GPU all of the time. They do not want to be moving data from slow memory to fast memory; they must arrange application data in a 'GPU memory friendly' way so that it runs in the GPU's faster memory from the start.
Again, this is another task where software application developers are best placed to support organisations.
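A common illustration of 'GPU memory friendly' data is the choice between an array-of-structs and a struct-of-arrays layout. In the CUDA sketch below (the `Tick` record and its fields are purely hypothetical), the second kernel lets neighbouring threads read neighbouring words, so a warp's reads coalesce into far fewer memory transactions:

```cuda
// Hypothetical record, e.g. one market-data tick per element.
struct Tick { float price, volume, bid, ask; };

// Array-of-structs: thread i reads ticks[i].price, but consecutive
// threads' reads are 16 bytes apart, so a warp's accesses are strided
// and span several memory transactions.
__global__ void sum_aos(const Tick *ticks, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = ticks[i].price + ticks[i].volume;
}

// Struct-of-arrays: the same data split into separate arrays.
// Neighbouring threads now read neighbouring words, which the
// hardware coalesces into a single wide transaction per warp.
__global__ void sum_soa(const float *price, const float *volume,
                        float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = price[i] + volume[i];
}
```

The two kernels compute the same result; only the data layout differs, which is why this restructuring work is best done with the application's own developers.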
- Expect to Restructure Data
In the future, as organisations move from single GPUs to regular use of GPU clusters within HPC systems, they will need to restructure data from a 'coarse-grained' MPI-style data decomposition across nodes to a 'fine-grained' decomposition on each GPU, in order to run across multiple GPUs in the cluster nodes.
Currently, no one has really come up with a system of managing the data decomposition – this could potentially slow the adoption of GPU-based HPC systems.
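The two levels of decomposition can be sketched as follows – a hedged, illustrative fragment only (the domain size, kernel and halo handling are all placeholders), combining MPI across nodes with CUDA within each node:

```cuda
#include <mpi.h>
#include <cuda_runtime.h>

// Fine grain: within a rank's slab, one GPU thread per grid point.
// (Two buffers avoid a read/write race between neighbouring points.)
__global__ void relax(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        out[i] = 0.5f * (in[i - 1] + in[i + 1]);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Coarse grain: each MPI rank owns one contiguous slab of the domain.
    const int global_n = 1 << 24;          // illustrative domain size
    int local_n = global_n / nranks;

    float *d_in, *d_out;
    cudaMalloc(&d_in,  local_n * sizeof(float));
    cudaMalloc(&d_out, local_n * sizeof(float));
    cudaMemset(d_in, 0, local_n * sizeof(float));

    relax<<<(local_n + 255) / 256, 256>>>(d_in, d_out, local_n);
    cudaDeviceSynchronize();

    // (Halo exchange of slab boundaries between ranks, e.g. via
    //  MPI_Sendrecv, omitted for brevity -- managing exactly this
    //  boundary data is the unsolved bookkeeping described above.)

    cudaFree(d_in);
    cudaFree(d_out);
    MPI_Finalize();
    return 0;
}
```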
- Keep it Busy
Importantly, organisations must keep GPUs busy at all times, both with data in memory and with work to process. The processor will simply not deliver its best performance if it is left idle for periods of time.
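One common way to avoid idle time – sketched below in CUDA with illustrative sizes and a placeholder kernel – is to split the work into chunks and ping-pong them between two streams, so the GPU can compute on one chunk while the next is still being copied in:

```cuda
#include <cuda_runtime.h>

// Placeholder per-element computation.
__global__ void work(float *chunk, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        chunk[i] = chunk[i] * chunk[i] + 1.0f;
}

int main(void)
{
    const int chunks = 8, n = 1 << 20;     // illustrative sizes
    size_t bytes = n * sizeof(float);

    float *h_buf;                          // pinned memory allows async copies
    cudaMallocHost(&h_buf, chunks * bytes);

    float *d_buf[2];
    cudaStream_t stream[2];
    for (int s = 0; s < 2; ++s) {
        cudaMalloc(&d_buf[s], bytes);
        cudaStreamCreate(&stream[s]);
    }

    for (int c = 0; c < chunks; ++c) {
        int s = c % 2;                     // ping-pong between the two streams
        float *h_chunk = h_buf + (size_t)c * n;
        cudaMemcpyAsync(d_buf[s], h_chunk, bytes,
                        cudaMemcpyHostToDevice, stream[s]);
        work<<<(n + 255) / 256, 256, 0, stream[s]>>>(d_buf[s], n);
        cudaMemcpyAsync(h_chunk, d_buf[s], bytes,
                        cudaMemcpyDeviceToHost, stream[s]);
        // While one stream is copying, the other can be computing.
    }
    cudaDeviceSynchronize();

    for (int s = 0; s < 2; ++s) {
        cudaFree(d_buf[s]);
        cudaStreamDestroy(stream[s]);
    }
    cudaFreeHost(h_buf);
    return 0;
}
```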
Conclusion
Organisations in the finance and oil & gas industries are rightfully investigating GPUs for their HPC system environments, and they should continue to do so. However, they must be prepared to invest as much time in making software applications work effectively on GPUs as in the GPU and HPC system hardware itself. They must also be prepared to work with partners where necessary: HPC integrators can provide the HPC system hardware and software, and application developers will ensure applications run effectively on GPUs.
