Aethir Expands Enterprise AI Infrastructure with Next-Generation B300 GPUs

Learn how Aethir is expanding its enterprise AI infrastructure with next-generation NVIDIA B300 GPUs for advanced AI use cases.

Featured | Community | March 19, 2026

Key Takeaways

  1. Aethir is the first decentralized GPU cloud to deploy NVIDIA B300 GPUs across multiple regions at production scale, available now for enterprise AI training 
  2. Aethir is in advanced discussions with multiple enterprise partners for long-term GPU infrastructure deployments, with further announcements expected throughout 2026
  3. Organizations including KAUST, JobTalk, and GAIB are already leveraging Aethir's network for advanced AI workloads
  4. Aethir is introducing a managed Kubernetes layer to support enterprise AI deployments

Expanding Infrastructure for Next-Generation AI Workloads

Aethir's decentralized enterprise GPU cloud continues expanding its global compute network to support the rapidly growing demand for large-scale AI workloads.

As part of its enterprise infrastructure roadmap, Aethir is introducing NVIDIA B300 GPU clusters designed to support advanced AI training, multimodal models, and emerging AI agent workloads.

Aethir is the first decentralized AI infrastructure network to deploy B300 GPUs across multiple regions. This gives organizations access to next-generation GPU compute for the most demanding AI workloads: available now, at scale, across multiple locations, and at competitive pricing.

Aethir's decentralized infrastructure enables distributed GPU deployment across a global network of Cloud Hosts, helping organizations access scalable compute resources without relying on a single centralized data center. With a global fleet of high-performance GPU compute containers supporting a wide range of AI workloads, Aethir continues expanding its infrastructure to meet the increasing compute demands of AI builders and enterprises worldwide.

Supporting the Next Phase of Enterprise AI Growth

Demand for AI compute continues to accelerate as organizations integrate machine learning, generative AI, and advanced analytics into their operations. Applications such as AI co-pilots, recommendation systems, robotics, multimodal model training, and large-scale simulation require significant GPU resources to support increasingly complex models.

To address these needs, Aethir is expanding its infrastructure roadmap with next-generation GPU clusters, including B300 deployments designed for large-scale AI training workloads.

Aethir is currently in advanced discussions with multiple enterprise partners regarding long-term GPU infrastructure deployments for AI training and advanced research workloads. These engagements reflect the growing demand for dedicated AI training infrastructure, beyond inference workloads. Further announcements regarding enterprise deployments are expected throughout 2026.

Global Availability of B300 GPU Clusters

Aethir's B300 GPU clusters are being deployed across a growing number of global regions, including the United States, Canada, the United Kingdom, France, Norway, Korea, Japan, Thailand, and Malaysia.

These deployments enable organizations to access high-performance AI compute closer to their operational regions while benefiting from flexible cluster configurations and scalable infrastructure.

Aethir is also preparing additional B300 infrastructure deployments in the United States to support large-scale enterprise AI training workloads, with capacity ready for enterprise partners at competitive pricing and a full reference architecture backed by enterprise SLAs.

These clusters are designed to support a wide range of configurations, allowing enterprises and AI builders to deploy workloads ranging from research experiments to large-scale model training environments.

Enterprise and Research Organizations Building on Aethir

A growing number of enterprises, research institutions, and AI-native organizations are leveraging Aethir's decentralized GPU infrastructure to support demanding AI workloads.

Several organizations currently deploying workloads on Aethir include:

JobTalk AI

JobTalk Inc. is an agentic AI recruiter platform that uses intelligent voice agents to automate candidate phone screens, interview scheduling, and follow-ups for staffing firms and in-house recruiting teams. By integrating directly with a team's applicant tracking system (ATS), it runs structured, 24/7 outreach and screening across voice, SMS, and email, then delivers scored summaries, transcripts, and sentiment analysis so recruiters can focus on closing top talent instead of repetitive phone work.

GAIB 

GAIB has deployed H200-based tokenized compute infrastructure on Aethir's network to support large-scale open-source model workloads. The project focuses on building an economic layer for AI infrastructure by tokenizing enterprise-grade GPUs and their associated future cash flows into a yield-bearing on-chain asset, AID (AI dollar). Users earn real AI yields by staking AID to receive sAID (staked AID), which is backed by a diversified portfolio of US T-bills and real-world AI infrastructure financing deals (peak TVL: over $200 million), enabling broader participation in AI infrastructure investment. GAIB is backed by Hack VC, Faction, Amber Group, and Antalpha.

KAUST (King Abdullah University of Science and Technology) 

KAUST is leveraging NVIDIA H200 GPUs on Aethir for short-term research projects at highly competitive pricing. As one of the most well-funded research universities in the Middle East, KAUST launched a dedicated Center of Excellence for Generative AI in 2024, helping position Saudi Arabia as a rapidly growing hub for AI research under the country's Vision 2030 initiative.

These deployments highlight the diversity of workloads running on Aethir's network — spanning academic research, frontier AI model development, and new economic models for AI infrastructure.

Managed Kubernetes for Enterprise AI Workloads

In addition to expanding its GPU infrastructure, Aethir has introduced a managed Kubernetes layer designed specifically for enterprise AI workloads.

This layer sits on top of Aethir's bare-metal GPU infrastructure and allows customers to manage compute resources using familiar container-based workflows.

The managed Kubernetes environment provides several capabilities for organizations deploying large-scale AI systems, including:

  • Containerized AI workloads
  • Multi-tenant resource sharing across teams
  • GPU partitioning and orchestration
  • Simplified deployment of distributed training pipelines

This managed option improves efficiency and operational flexibility for enterprises, universities, research institutions, and startups running GPU-intensive workloads.
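As a rough illustration of the container-based workflow described above, a workload on a managed Kubernetes layer is typically declared as a pod manifest that requests GPU resources; the scheduler then places the pod on a node with enough free GPUs. The sketch below builds such a manifest in Python. All names (image, command, pod name) are hypothetical placeholders, not Aethir's actual API, and the `nvidia.com/gpu` extended resource shown is the standard Kubernetes convention for NVIDIA device plugins rather than anything Aethir-specific.

```python
import json

def gpu_pod_spec(name: str, image: str, gpus: int, command: list) -> dict:
    """Build a minimal Kubernetes Pod manifest that requests NVIDIA GPUs.

    GPU orchestration on Kubernetes is commonly driven by an extended
    resource such as 'nvidia.com/gpu' in the container's resource limits;
    the scheduler places the pod on a node with enough free GPUs.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [
                {
                    "name": name,
                    "image": image,
                    "command": command,
                    # Request GPUs via the device-plugin extended resource.
                    "resources": {"limits": {"nvidia.com/gpu": gpus}},
                }
            ],
        },
    }

# Hypothetical 8-GPU training job; image and command are placeholders.
manifest = gpu_pod_spec(
    name="llm-train",
    image="registry.example.com/team/trainer:latest",
    gpus=8,
    command=["python", "train.py", "--distributed"],
)
print(json.dumps(manifest, indent=2))
```

In practice such a manifest would be serialized to YAML and applied with `kubectl apply`, or submitted through the managed platform's own tooling; multi-node distributed training would use higher-level objects (e.g. a Job or an operator-managed custom resource) built on the same GPU resource request.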

Building Infrastructure for the Future of AI

As demand for AI compute continues to grow globally, scalable GPU infrastructure is becoming a critical foundation for innovation. Through the continued expansion of its decentralized GPU cloud, Aethir aims to provide AI builders, enterprises, and research institutions with the infrastructure needed to train, deploy, and operate advanced AI systems at scale.

For enterprise compute inquiries, contact us at sales@axecompute.com 
