ANN ARBOR, Mich., Aug. 16, 2022 — Verge.io, the company with a simpler way to virtualize data centers, has added significant new features to its Verge-OS software to give users the performance of GPUs as virtualized, shared resources. This creates a cost-effective, simple and flexible way to perform GPU-based machine learning, remote desktop and other compute-intensive workloads within an agile, scalable, secure Verge-OS virtual data center.
Verge-OS abstracts compute, network, and storage from commodity servers into pools of raw resources that are simple to run and manage. It powers feature-rich infrastructures for environments and workloads such as clustered HPC in universities; ultra-converged and hyperconverged enterprises; DevOps and test/dev; compliant medical and healthcare; remote and edge computing, including VDI; and xSPs offering hosted services such as private clouds.
Current methods for deploying GPUs systemwide are complex and expensive, especially for remote users. Rather than supplying GPUs throughout the organization, Verge.io allows users and applications with access to a virtual data center to share the computing resources of a single GPU-equipped server. Administrators can "pass through" an installed GPU to a virtual data center simply by creating a virtual machine with access to that GPU and its resources.
Alternatively, Verge.io can virtualize the GPU itself and serve up vGPUs to virtual data centers, letting organizations manage vGPUs on the same platform as all other shared resources.
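Verge.io has not published the configuration syntax behind these two modes, but both are established virtualization patterns. As a point of comparison only, on a generic KVM/libvirt stack whole-GPU passthrough and a mediated vGPU are expressed as host-device entries in the guest definition (the PCI address and device UUID below are placeholder values, not anything specific to Verge-OS):

```xml
<!-- Illustrative libvirt guest XML, not Verge-OS configuration. -->

<!-- Mode 1: pass an entire physical GPU through to one VM.
     The PCI address (bus 0x65 here) is a placeholder example. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x65' slot='0x00' function='0x0'/>
  </source>
</hostdev>

<!-- Mode 2: attach one mediated vGPU slice of a shared physical GPU.
     The UUID identifies a pre-created mdev instance (placeholder). -->
<hostdev mode='subsystem' type='mdev' model='vfio-pci'>
  <source>
    <address uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'/>
  </source>
</hostdev>
```

In the first mode the guest owns the whole card; in the second, multiple guests each receive a slice of one card, which is the trade-off the passage above describes.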
According to Darren Pulsipher, Chief Solution Architect of Public Sector at Intel, “The market is looking for simplicity, and Verge-OS is like an ‘Easy Button’ for creating a virtual cloud that is so much faster and easier to set up than a private cloud. With Verge-OS, my customers can migrate and manage their data centers anywhere and upgrade their hardware with zero downtime.”
“The ability to deploy GPU in a virtualized, converged environment, and access that performance as needed, even remotely, radically reduces the investment in hardware while simplifying management,” said Verge.io CEO Yan Ness. “Our users are increasingly needing GPU performance, from scientific research to machine learning, so vGPU and GPU Passthrough are simple ways to share and pool GPU resources as they do with the rest of their processing capabilities.”
Verge-OS is ultra-thin software (fewer than 300,000 lines of code) that is easy to install and scale on low-cost commodity hardware and that self-manages using AI/ML. A single license replaces separate hypervisor, networking, storage, data protection, and management tools to simplify operations and downsize complex technology stacks.
Secure virtual data centers based on Verge-OS include all enterprise data services like global deduplication, disaster recovery, continuous data protection, snapshots, long-distance sync, and auto-failover. They are ideal for creating honeypots, sandboxes, cyber ranges, air-gapped computing, and secure compliance enclaves to meet regulations such as HIPAA, CUI, SOX, NIST, and PCI. Nested multi-tenancy gives service providers, departmental enterprises, and campuses the ability to assign resources and services to groups and sub-groups.
Verge.io currently supports NVIDIA Tesla and Ampere cards; additional licenses must be purchased for vGPU capability.
For a complete list of enhancements please visit https://updates.verge.io/release.html.
Verge.io provides a simpler way to virtualize data centers and end IT infrastructure complexity. The company's Verge-OS software is the first and only fully integrated virtual cloud software stack to build, deploy, and manage virtual data centers. Verge-OS delivers significant capital savings, increased operational efficiencies, reduced risk, and rapid scalability.