Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them
HPC customers regularly tell us about their excitement when they’re starting to use the cloud for the first time. In conversations, we always want to dig a bit deeper to find out how we can improve those initial experiences and deliver on the potential they see. Most often they’ve told us they need a simpler way to get started with migrating and bursting their workloads into the cloud.
Today we’re introducing AWS HPC Connector, a new feature in NICE EnginFrame that lets customers use managed HPC resources on AWS. With this release, EnginFrame provides a unified interface through which administrators can make both on-premises and AWS-based HPC resources available. It means highly specialized users like scientists and engineers can use EnginFrame’s portal interface to run their important workflows without having to understand the detailed operation of the infrastructure underneath. HPC is, after all, a tool used by humans. Their productivity is the real measure of success, and we think AWS HPC Connector will make a big difference to them.
In this post, we’ll provide some context around EnginFrame’s typical use cases, and show how you can use AWS HPC Connector to stand up HPC compute resources on AWS.
NICE EnginFrame is an installable server-side application that provides a user-friendly application portal for HPC job submission, control, and monitoring. It includes sophisticated data management for every stage of a job’s lifetime, and integrates with HPC job schedulers and middleware tools to submit, monitor, and manage those jobs. The modular EnginFrame system allows extensive customization to add new functionality (application integrations, authentication sources, license monitoring, and more) via the web portal.
End users’ favorite feature is EnginFrame’s web portal, which provides a consistent, easy-to-understand user interface. The underlying HPC compute and storage can be used without fluency in either command line interfaces (CLIs) or script writing. This frees you to scale your HPC systems underneath, and to make them available to non-IT audiences who are focused on curing cancer or designing a better wind turbine.
Behind the scenes, EnginFrame “spools” a management process for each submitted job. This spooler runs in the background to manage data movement and job placement on the selected computational resource, and returns the results when the job finishes. This is transparent to the end user. As the administrator, you provide the configuration needed to set up an application: app-specific parameters, the location of data, where to run the analysis, and who can submit jobs. The admin portal also shows health and state information for the registered HPC systems, as shown in Figure 1.
Prior to this release, EnginFrame treated all registered HPC clusters the same, whether they were static on-premises resources or elastic clusters in the cloud. Specific to AWS, EnginFrame left all the decisions about your AWS infrastructure to you, including network layouts, security posture, and scaling. Quite often customers used AWS ParallelCluster (our cluster-management tool that makes it easy to deploy and manage HPC clusters on AWS) to stand up clusters within an AWS Region. They’d then manually install EnginFrame on their head node and integrate the two. While this approach worked, we knew the experience could be better.
In September, we introduced new API capabilities in ParallelCluster 3 in preparation for this release, so you can now access all the functionality of ParallelCluster from within EnginFrame, with a single administration, management, and deployment path for hybrid HPC.
AWS HPC Connector begins by letting you register ParallelCluster 3 configuration files in the EnginFrame admin portal. The ParallelCluster configuration file is designed as a simple YAML text file for describing the resources needed for your HPC applications and automating their provisioning in a secure manner. Once a ParallelCluster configuration is registered within EnginFrame, you can start and stop clusters as necessary. The cluster will scale the compute resources based on the number of submitted jobs, according to your defined scaling criteria and node types, up to the limits you set for running instances. Once the submitted jobs are complete, ParallelCluster is designed to automatically stop the compute instances it created, by scaling down to the minimum number of instances you defined, which is usually zero. At that point, only the head node remains running – ready to receive new jobs. Figure 2 has a high-level architecture diagram showing AWS HPC Connector in EnginFrame working in concert with ParallelCluster to stand up resources on AWS.
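To give a sense of what gets registered, here is a minimal sketch of a ParallelCluster 3 configuration file of the kind described above. The specific instance types, subnet ID, and key pair name are illustrative placeholders, not values from this post; note `MinCount: 0` on the compute resource, which is what allows the cluster to scale down to just the head node when no jobs are queued.

```yaml
# Illustrative ParallelCluster 3 configuration (placeholder values).
Region: us-east-1
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5.xlarge
  Networking:
    SubnetId: subnet-0123456789abcdef0   # placeholder subnet
  Ssh:
    KeyName: my-keypair                  # placeholder key pair
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: compute
      ComputeResources:
        - Name: c5-nodes
          InstanceType: c5.2xlarge
          MinCount: 0    # scale down to zero compute nodes when idle
          MaxCount: 16   # upper limit on running compute instances
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0     # placeholder subnet
```

Once a file like this is registered in the EnginFrame admin portal, AWS HPC Connector handles starting and stopping the cluster for you; the same file could also be used directly with the ParallelCluster CLI.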
Read the full blog to learn more about using the NICE EnginFrame AWS HPC Connector to manage your workflows across on-premises and AWS environments.
Reminder: You can learn a lot from AWS HPC engineers by subscribing to the HPC Tech Short YouTube channel, and following the AWS HPC Blog channel.
© 2022 HPCwire. All Rights Reserved. A Tabor Communications Publication
HPCwire is a registered trademark of Tabor Communications, Inc. Use of this site is governed by our Terms of Use and Privacy Policy.
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.