High Performance Computing
Add a Supercomputer to Your Cloud
Transform data into breakthroughs. Scale out your compute resources by adding a high density server or cluster. A single HPC server can replace 50 CPU nodes.
High Performance Compute for IaaS and Hybrid Cloud
The performance and scalability of a world-class supercomputing center is now available to everyone, on demand in your cloud. Argon Systems offers high density servers for compute-intensive data analytics, deep learning and scientific applications.
High Performance Computing (HPC) is a critical tool for advancing the pace of discovery and providing a competitive edge in scientific and industrial research.
Conventional architectures can no longer meet the growing demands of HPC. The industry has already begun migrating from the traditional CPU to the next generation of HPC hardware. The GPU was designed for the fastest computational speeds and highest data throughput, delivering up to 100x more computing power on a single chip and roughly quadrupling the rate of performance improvement predicted by Moore's law.
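The shift from CPU to GPU computing comes down to data parallelism: applying the same operation across many elements at once. As a rough, CPU-side illustration (a NumPy sketch, not vendor code or a GPU benchmark), compare a scalar loop with the equivalent single data-parallel operation:

```python
import numpy as np

# Illustrative only: the same element-wise computation written as a
# scalar loop vs. a single data-parallel (vectorized) operation --
# the execution model that GPUs scale out to thousands of cores.
a = np.arange(100_000, dtype=np.float64)

# Scalar style: one multiply-add per iteration
looped = np.empty_like(a)
for i in range(len(a)):
    looped[i] = 2.0 * a[i] + 1.0

# Data-parallel style: the whole array in one operation
vectorized = 2.0 * a + 1.0

assert np.allclose(looped, vectorized)
```

On a GPU, the vectorized form maps naturally onto thousands of concurrent threads, which is where the order-of-magnitude throughput gains come from.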
HPC isn’t just for engineering and graphics; it is increasingly used as a big data analysis tool. Big data applications include text analysis: analyze billions of tweets, blog posts, articles and books to uncover patterns and linkages, or determine differences in meaning and interpretation across languages.
- Up to 30 GPUs in a single server!
- Argon servers are optimized for NVIDIA Tesla V100 data center GPUs
- A single NVIDIA Tesla V100 offers the performance of 100 CPUs
- NVIDIA Tesla V100 GPU accelerators with maximum GPU-to-GPU bandwidth
- NVIDIA NVLink GPU interconnect technology with over five times the bandwidth of PCIe 3.0
- GPU-to-GPU and GPU-to-CPU data transfers up to 12 times faster than PCIe (80 GB/s between two cards)
- Application performance with NVLink can be up to twice as fast as with PCIe
- GPUs communicate efficiently, minimizing latency and maximizing throughput (as measured by the NCCL P2PBandwidthTest)
- Applications depending on Fast Fourier Transform (FFT) algorithms can perform over twice as fast
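To see why FFT performance matters in practice, consider convolution, a core operation in signal processing and deep learning. A minimal CPU-side sketch in NumPy (illustrative only, not a GPU benchmark) shows the O(n²) direct form and the O(n log n) FFT-based form producing the same result; GPU libraries such as cuFFT accelerate exactly this pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)  # signal
h = rng.standard_normal(256)  # filter

# Direct (time-domain) linear convolution: O(n^2) multiply-adds
direct = np.convolve(x, h)

# FFT-based linear convolution: O(n log n) via the convolution theorem
# (zero-pad both inputs to the full output length to avoid wraparound)
n = len(x) + len(h) - 1
fft_based = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

assert np.allclose(direct, fft_based)
```

The asymptotic gap widens quickly with input size, which is why FFT-bound applications benefit so directly from faster FFT execution.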
- Supercomputing with dramatically higher throughput and minimal power consumption
- Comprehensive set of HPC tools for deployment, administration, job scheduling and monitoring of your Windows and Linux HPC cluster environment
- Schedule jobs on Linux VMs with a single job scheduling solution for your Linux and Windows HPC applications.
- Deploy a private cloud compute cluster and dynamically extend to Azure when you need additional capacity.
- Scale up and down based on what you need and pay only for what you use
- Wide range of compute options, including memory- or compute-intensive instances, without compromising performance
- RDMA technology, enabling scientists and engineers to solve complex problems using many popular industry-standard applications for Linux.
- Business Intelligence: high frequency trading, fraud analysis, big data analytics, pricing/financial models
- Science and engineering: 3D modeling, structural analysis, mechanical modeling, molecular dynamics, aerodynamics, computational fluid dynamics, astrophysics, nuclear interactions, quantum mechanics, crash modeling
- Energy and Earth Science: oil and gas research, seismic modeling, facility power scheduling and management, climate forecasting
- Medical research: genomic sequencing, molecular modeling, drug screening, bioinformatics, protein modeling
- Image processing: digital content creation, texture mapping, visual effects and advanced engineering concepts
- Artificial Intelligence (AI): machine learning algorithms, deep learning applications, autonomous vehicle systems