Description
Key Features
HPC/AI Applications are Data Rich, and Data Must be Protected
HPE Data Management Framework 7 creates immutable versions of files and takes snapshots of the file system namespace. Through administrator-defined policies, recovery can be tuned for the lowest RTO (from disk), the lowest cost (from tape), and/or recovery from remote locations via S3/cloud.
Loss of the file system due to failure has a catastrophic impact on the availability of the high performance compute cluster. Even when a file system provides repair tools, the complexity and time required to repair a broken file system can extend the compute outage beyond acceptable SLAs.
Until now, protecting file systems and data has been a costly investment with serious drawbacks, including the lack of backup windows, backup utilities that aren't optimized for PB-sized parallel file systems, and the negative performance impact of scanning file system metadata.
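To make the recovery trade-off concrete, here is a minimal sketch in Python of how a policy might choose among the sources described above. None of the names, numbers, or the pick_source helper are DMF7 syntax; they are hypothetical illustrations of the RTO-versus-cost decision.

    # Hypothetical sketch of policy-driven recovery source selection.
    # Nothing here is DMF7 syntax; the sources and figures only
    # illustrate the trade-off: lowest RTO from disk, lowest cost
    # from tape, or recovery from a remote S3/cloud location.

    from dataclasses import dataclass

    @dataclass
    class RecoverySource:
        name: str
        restore_seconds_per_tb: float  # proxy for RTO
        cost_per_tb: float             # proxy for storage cost

    SOURCES = [
        RecoverySource("disk", restore_seconds_per_tb=600,  cost_per_tb=25.0),
        RecoverySource("tape", restore_seconds_per_tb=7200, cost_per_tb=2.0),
        RecoverySource("s3",   restore_seconds_per_tb=3600, cost_per_tb=8.0),
    ]

    def pick_source(policy: str) -> RecoverySource:
        """Choose a recovery source according to an administrator policy."""
        if policy == "lowest_rto":
            return min(SOURCES, key=lambda s: s.restore_seconds_per_tb)
        if policy == "lowest_cost":
            return min(SOURCES, key=lambda s: s.cost_per_tb)
        if policy == "remote":
            return next(s for s in SOURCES if s.name == "s3")
        raise ValueError(f"unknown policy: {policy}")

    print(pick_source("lowest_rto").name)   # disk
    print(pick_source("lowest_cost").name)  # tape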
The Emergence of Exascale Computing is Challenging the Scaling Limits of Legacy HPC Storage
HPE Data Management Framework 7 manages free space by automatically moving 'stale' files out of high performance storage, creating a storage space that is, in effect, bigger on the inside. Administrators manage this through simple policy settings, eliminating the need for brute-force actions.
Perhaps the challenge is felt most acutely by storage administrators, who struggle to maintain enough free space in costly high performance storage while users independently flood the file system with new files. Neither increasing the storage budget nor deleting user files is a practical remedy.
The volume and diversity of data demanded by HPC/AI applications has fueled the growth of the "storage beast" that feeds on HPC budgets. At the same time, traditional parallel file system architectures are struggling under the weight of relentless growth in the number of files and inodes.
Eventually, administrators need to work with users to prune unused files from the file system to ensure metadata performance isn't undermined. When old files are marked for removal, no data has to be moved since HPE DMF7 already preserves files and metadata in less costly back end storage.
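A minimal sketch of the watermark idea behind this kind of free-space management, assuming a policy that releases least recently accessed files first. The watermark values, the /scratch path, and all helper names are hypothetical stand-ins, not DMF7 syntax.

    # Hypothetical watermark sketch (not DMF7 syntax): when the high
    # performance tier crosses a high watermark, the least recently
    # accessed ("stale") files are released from disk. Their data
    # already resides in back end storage, so no copy is needed at
    # release time.

    import os
    import shutil

    HIGH_WATERMARK = 0.90  # start releasing above 90% full
    LOW_WATERMARK  = 0.80  # stop once usage falls below 80%

    def usage(path: str) -> float:
        total, used, _free = shutil.disk_usage(path)
        return used / total

    def stale_files_first(root: str):
        """Return files under root, least recently accessed first."""
        files = (os.path.join(d, f) for d, _, fs in os.walk(root) for f in fs)
        return sorted(files, key=lambda p: os.stat(p).st_atime)

    def release_until_low(root: str):
        for path in stale_files_first(root):
            if usage(root) <= LOW_WATERMARK:
                break
            # In a real HSM the data is already safely migrated, so a
            # "release" just frees the blocks and leaves a stub behind.
            # This sketch only prints the candidate instead of deleting.
            print("would release:", path)

    if os.path.isdir("/scratch"):  # "/scratch" is an illustrative path
        if usage("/scratch") >= HIGH_WATERMARK:
            release_until_low("/scratch")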
HPC/AI Storage Environments are Diverse and Data has to be Portable
HPE Data Management Framework 7 automatically migrates files down the storage system hierarchy without administrator interaction and recalls them up to high performance storage on demand. It uses parallel data movers and the high speed network to move files faster than standard desktop utilities.
Managing HPC/AI data movement is an intimidating task. Tools aren't easy to use, they don't scale well, network bandwidth is limited, and users may not have the needed skills. When data cannot be moved easily and the motivation to move it is low, the default choice is to leave it in place.
Storage systems are tiered for performance, capacity, and cost, and data is always in flight between these tiers. Application workflows demand that data follow the user and the application, and administrators are continually pressured to manage storage costs and push data down the hierarchy.
Technology migration is a common driver of data movement, and HPE DMF7 future-proofs against this risk. It automates the migration of back end objects from older, less efficient generations of HDD/tape technology onto generations with the highest density, reliability, and performance.
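A minimal sketch of the parallel mover idea described in this section, using a plain Python thread pool in place of DMF7's data movers. The tier paths and function names are hypothetical illustrations, not the product's API.

    # Hypothetical sketch (not the DMF7 mover API): instead of copying
    # files one at a time the way a desktop utility would, a pool of
    # workers streams many files to the destination tier concurrently.

    import shutil
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    def migrate(src: Path, dst_dir: Path) -> Path:
        """Copy one file to the destination tier and return its new path."""
        dst_dir.mkdir(parents=True, exist_ok=True)
        return Path(shutil.copy2(src, dst_dir / src.name))

    def parallel_migrate(files, dst_dir: Path, movers: int = 8):
        """Fan the copy list out across a pool of data movers."""
        with ThreadPoolExecutor(max_workers=movers) as pool:
            return list(pool.map(lambda f: migrate(f, dst_dir), files))

    # Example: drain *.dat files from a fast tier to a capacity tier.
    # Both paths are illustrative stand-ins.
    fast = Path("/mnt/fast_tier")
    capacity = Path("/mnt/capacity_tier")
    if fast.exists():
        parallel_migrate(sorted(fast.glob("*.dat")), capacity)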