Cummins’ View on Speed and Agility Enabled by High Performance Computing
View this recorded webinar to learn how you can reduce simulation time on increasingly complex engineering challenges in today’s fast-paced globalized economy by incorporating high-performance computing (HPC) into your virtual product development.
Guest speaker Mike Hughes, program director of digital engineering/transformation at Cummins Inc., outlines a credible path to the HPC power needed for ever-expanding engineering simulation demands. The webinar also presents the views of solution provider Dell Technologies, which, along with X-ISS, provides an end-to-end HPC solution for ANSYS workloads.
To download a copy of the presentation, visit the ANSYS Resource Library.
Cummins – Another Satisfied Customer
Cummins recently released the following video, which highlights its use of high-performance computing to develop new products more efficiently and explains how the X-ISS ManagedHPC service is critical to reaching those objectives.
For more about Cummins visit them at www.cummins.com.
For more information on DellEMC High Performance Computing solutions go to www.dellemc.com/solutions/high-performance-computing/.
Let us help you reach your full HPC potential. See our offerings at x-iss.com.
X-ISS Tweaks Cluster Configuration during Setup to Speed Pump Design Simulations
Personnel at a prominent engineering company in Houston, Texas, no longer have to stay up all night to make sure their pump design simulations finish on time and produce the desired results. Thanks to a Dell HPC cluster deployed and optimized by X-ISS Inc., the company is running simulations 18 times faster than was possible on their computer workstations.
Much of what the company’s Houston office does is destined for the oil industry. One type of time-critical project for the division’s engineers is figuring out why a pump from an oil rig or pipeline has failed and coming up with a solution to the failure.
Usually when a pump breaks down in the oil patch, the flow of petroleum products ceases until the faulty equipment can be repaired or replaced. Every minute that oil isn’t being pumped costs the operator money. It’s up to the company’s engineers to examine the defective hardware and develop a better design that can be rushed to manufacturing as quickly as possible. For the company’s dedicated personnel, that often meant keeping watch over computerized design simulations late into the night or early morning.
“The client uses the ANSYS Fluent software to simulate and test various pump designs,” explained X-ISS CEO Deepak Khosla. “As the simulations became more complex, they simply took too long, even on high-end computer workstations.”
By nature, design simulations are an iterative, trial-and-error process. Engineers have to choose just the right level of detail, or granularity, in the simulation to produce workable design alternatives. Using their workstations, the engineers sometimes had to run a simulation for several hours before seeing that the results would be inadequate. After tweaking the inputs, they then restarted the simulation from the beginning.
In some situations requiring high detail, the workstation was overwhelmed by the data volume and the simulation crashed before completion. In either case, engineers often faced late nights at the office tending the computer rather than risking coming in the next morning only to find poor results or a stalled simulation.
The company contracted Dell to build a Windows Server 2012 HPC cluster to speed up the simulations in ANSYS Fluent, which scales extremely well in an HPC environment. Among the many advantages, faster simulations meant the engineers could identify problems with their designs and correct them in minutes rather than hours, ultimately delivering workable solutions to their oil field customers more quickly.
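To make the parallel setup concrete, a distributed Fluent batch run is typically launched with a host file listing the cluster's compute nodes. The sketch below is illustrative only; the file names, node names, and core count are hypothetical rather than taken from the client's actual configuration, and it is shown here as a configuration fragment rather than a runnable script.

```shell
# hosts.txt - hypothetical file listing the cluster's compute nodes:
#   node01
#   node02
#   ...

# Launch a 3D double-precision Fluent solve across 40 parallel processes,
# distributed over the nodes in hosts.txt, with no GUI (-g), driven by a
# batch journal file (-i).
fluent 3ddp -t40 -cnf=hosts.txt -g -i pump_redesign.jou > run.log
```

Running in batch mode from a journal file is what frees engineers from babysitting the simulation interactively.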
Tweaking the Deployment
X-ISS worked closely with Dell and ANSYS to design and build the HPC cluster. The system comprises 10 mid-range Dell servers and Cisco 10 Gb Ethernet networking equipment. X-ISS personnel deployed, set up, and configured the cluster onsite at the company. As is standard procedure, the X-ISS team ensured the Fluent application ran well, fine-tuning the cluster in the process so the client would get maximum speed from the new system.
A critical step in configuring the cluster was designing the networks. X-ISS created three separate networks for data transmission so that large volumes of data could move in many directions at once. The primary network carried the data needed for the nodes to run the Fluent simulations. A second was set up for cluster managers to monitor overall system operations. The third network gave the engineers access to the cluster for launching and running their simulations.
“Three networks maintain system speed and throughput,” said Khosla. “If we had set up just one network for the cluster, it would have had to connect with the company LAN, and the large volume of data would have slowed both the LAN and the cluster.”
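As an illustration of the split described above, the three networks might map onto separate interfaces and subnets roughly as follows. The addresses, interface names, and LAN arrangement are hypothetical, since the article does not disclose the actual scheme.

```shell
# Hypothetical addressing plan for the three cluster networks.
# eth0  10.0.1.0/24  simulation network: solver/MPI traffic between Fluent nodes
# eth1  10.0.2.0/24  management network: cluster monitoring and administration
# eth2  10.0.3.0/24  submission network: engineers' access for launching jobs
#
# In a layout like this, only the submission network would touch the company
# LAN, so bulk solver traffic never competes with ordinary office traffic.
```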
A second recommendation made by X-ISS during configuration also helped keep the cluster running fast. The cluster itself had 9 TB of data storage available in the head node. X-ISS suggested the company’s engineers move their pump design data to the cluster and keep it there, rather than shuttling data back and forth from workstations or remote nodes, which would have slowed jobs.
This configuration concept required buy-in from the company’s IT department because they were the ones responsible for backing up massive volumes of data from the cluster on a regular basis. These backups were required for archiving purposes in case the engineers later had to revisit one of their design simulations. Fortunately, the IT staff agreed with the suggestion, and data is stored locally on the cluster.
To further maximize the power, speed and efficiency of the new HPC cluster, X-ISS ran several validation tests on it, adjusting power and BIOS settings as needed. At the client’s request, the team also set up redundant hardware connections so the cluster could be maintained while remaining in operation.
“The company’s engineers can now run a simulation in five minutes that once took 90 minutes,” said Khosla. “The engineers are elated because the simulations no longer cut into their personal time, and the pump re-design process is faster than ever.”
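The quoted timings are self-consistent with the 18-times speedup cited at the start of this story, as a quick check shows:

```python
# Quick consistency check on the quoted timings.
workstation_minutes = 90   # simulation runtime on a single workstation
cluster_minutes = 5        # same simulation on the Dell HPC cluster
speedup = workstation_minutes / cluster_minutes
print(f"Speedup: {speedup:.0f}x")  # Speedup: 18x
```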
Download this case study: SpeedBoostII.CaseStudy8