More and more CFD users find themselves needing serious processing power. As simulations grow in complexity and resolution, extra compute is essential to keep completion times sensible. Mesh sizes are growing rapidly: we have seen engineers generate meshes of three billion cells, which is definitely not a job for a workstation!
How does a company meet the computing requirements for such a wide variety of workload sizes? Fortunately, CFD applications scale well on HPC, but is the answer an in-house HPC cluster? Building one can be very costly and demands serious cost-benefit analysis. Sizing your own system is a critical decision, because the cost of resources sitting unused at quiet times quickly becomes prohibitive. Ideally, a firm should purchase an in-house HPC system sized to handle perhaps 80 per cent of its workload, with the largest 20 per cent of jobs burst to an external HPC-on-demand service. This gives the flexibility to meet peaks in workload with pay-per-use services.
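The sizing rule above can be sketched as a toy calculation. The weekly demand figures below are invented for illustration; the idea is simply to choose in-house capacity at a chosen percentile of observed demand and treat anything above it as burst work for an on-demand service.

```python
# Toy capacity-planning sketch for the "own ~80 per cent, burst the rest"
# approach. All demand figures are hypothetical.

# Hypothetical weekly peak core demand over twelve weeks (cores).
demand = [200, 250, 220, 300, 3000, 260, 240, 280, 500, 230, 210, 2500]

def size_for_percentile(demand, pct):
    """Pick an in-house capacity at roughly the `pct`th percentile
    of observed peak demand."""
    ordered = sorted(demand)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[idx]

capacity = size_for_percentile(demand, 80)      # in-house cluster size
burst_weeks = [d for d in demand if d > capacity]  # weeks needing on-demand

print(capacity, len(burst_weeks))  # 500 cores in-house; 2 weeks burst
```

With this (invented) profile, a 500-core cluster covers ten of the twelve weeks, and only the two extreme peaks are sent out to a pay-per-use service rather than justifying a far larger, mostly idle machine.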
Francisco Campos, Director of Operations at CAE software developer ENGYS, recently told me his reasons for using an on-demand service: “We have our own cluster – in fact hosted by OCF – which is suitable most of the time for the majority of our jobs. We don’t have a massive cluster because the total cost of ownership can be prohibitively expensive. Secondly, it simply doesn’t make sense for us to have a larger, massively powerful cluster available for just one or two significant 500 million-cell jobs that we work on each year, nor for the rare occasions when multiple urgent jobs arrive unplanned at once. If we invested in hardware to manage these odd jobs, most of the time the system would sit idle. That is why we sometimes rely on OCF’s supporting HPC-on-demand service, enCORE.”
In my experience, CFD users either run a general-purpose CFD application or have compiled their own variant with additional modules. Almost without exception, they are technically competent and more than happy managing jobs from the command line. Even so, they still want the option of working within a graphical user interface from their desktops.
Francisco Campos agrees: “Along with using an on-demand service ourselves, we have made HELYX, our own CFD software solution based on open source technologies, available as a service to our customers. We wanted to give our customers a choice; they can use our product to create a 100 million-cell case and process it locally in-house, or they can use it to access significant processing power on-demand without any solver license limitations. However, the experience needs to be the same. Our customers want to sit at their local machine, set up a case using the GUI, process it locally or remotely and get their results quickly. We’re giving users a choice with HELYX.”
On the same point, effective remote visualisation is also a vital component of an on-demand service, enabling users to manipulate large data sets and create complex visualisations remotely. Leading efforts here, OCF is working with DragonHPC and others to refine this aspect of our own service.
Licensing in an on-demand model is a critical, and potentially very costly, factor. Open source codes such as OpenFOAM and Code_Saturne are free to use and therefore ideal for on-demand computing. On our own service, for example, both applications are loaded and ready to use. By contrast, licensing Fluent to run on many cores is costly even for large businesses.
In my view, traditional license models are in fact the major limiting factor for CFD users’ adoption of on-demand access to HPC. We are, however, seeing the early stages of a move toward more flexible license models.
The XFlow CFD application from Next Limit, re-sold in the UK by FlowHD, now has a token-based license model ideally suited to on-demand computing. Matt Hieatt, commercial director at FlowHD, explains: “Our users can now buy a pool of tokens, which reside on a server. Users can spend those tokens flexibly to pay for use of XFlow, either on their local 16-core machine or via an 8,000-core on-demand service. They can dip into the pool as they require. Users can top up and receive a statement of account online. Pricing is geared towards encouraging greater use of cores, with the cost of using XFlow on 1,000 cores for 40 hours being less than 40 cores for 1,000 hours.”
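The pricing behaviour Matt Hieatt describes can be sketched in code. The tier boundaries and token rates below are invented purely for illustration (FlowHD's actual pricing is not given here); the point is that a per-core-hour rate which falls as the core count rises makes a wide, short job cheaper than a narrow, long job with the same total core-hours.

```python
# Illustrative sketch of a token-based licence pool, in the spirit of the
# XFlow model described above. All tier boundaries and rates are hypothetical.

def tokens_required(cores: int, hours: float) -> float:
    """Tokens charged for a job; the per-core-hour rate falls as
    the core count rises, encouraging wider jobs."""
    # (minimum cores for tier, tokens per core-hour) - invented values
    tiers = [(1024, 0.5), (256, 0.75), (64, 0.9), (1, 1.0)]
    for min_cores, rate in tiers:
        if cores >= min_cores:
            return cores * hours * rate
    return cores * hours  # unreachable with the tiers above

class TokenPool:
    """A shared pool of pre-purchased tokens held on a licence server."""
    def __init__(self, balance: float):
        self.balance = balance

    def charge(self, cores: int, hours: float) -> float:
        cost = tokens_required(cores, hours)
        if cost > self.balance:
            raise RuntimeError("insufficient tokens - top up the pool")
        self.balance -= cost
        return cost

pool = TokenPool(balance=50_000)
wide_short = pool.charge(cores=1000, hours=40)        # 40,000 core-hours
narrow_long = tokens_required(cores=40, hours=1000)   # also 40,000 core-hours
print(wide_short, narrow_long)  # the wide job costs fewer tokens
```

Both jobs consume exactly 40,000 core-hours, but under this tiered rate the 1,000-core run draws fewer tokens from the pool, mirroring the incentive in the quoted pricing.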
Users should also carefully evaluate service and system performance. Check the total cost of using the service, not just the headline pence per core-hour: hidden costs, pricing thresholds and system performance all need to be taken into account. Users of the service also need visibility of their spend at all times.
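As a toy illustration of why the headline rate can mislead, here is a hypothetical comparison with all figures invented: a service with the cheaper pence-per-core-hour rate can still work out dearer once data-transfer charges and minimum-billing thresholds are included.

```python
# Hypothetical total-cost comparison of two on-demand services.
# Every rate and threshold below is invented for illustration only.

def total_cost_pence(core_hours, rate_pence, data_gb=0, data_rate=0.0,
                     min_bill=0):
    """Total bill in pence: compute plus data-transfer charges,
    subject to a minimum-billing floor."""
    compute = core_hours * rate_pence
    extras = data_gb * data_rate
    return max(compute + extras, min_bill)

# Service A: cheap headline rate, but data charges and a billing floor.
a = total_cost_pence(10_000, rate_pence=3, data_gb=500, data_rate=30,
                     min_bill=25_000)
# Service B: dearer per core-hour, with transfer bundled in.
b = total_cost_pence(10_000, rate_pence=4)

print(a, b)  # the "cheaper" service A ends up costing more
```

Here service A's 3p rate beats B's 4p on paper, yet once 500 GB of result data is moved off the system A's total exceeds B's, which is exactly the kind of gap a headline-rate comparison hides.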
As “cloud” adoption accelerates, the key barrier to CFD users adopting HPC clouds remains the traditional license model. With new offerings from companies like ENGYS and FlowHD taking this issue by the scruff of the neck, the future looks far brighter for CFD users.