In a significant move to challenge Nvidia’s dominance in the AI accelerator market, Japan’s SoftBank Corp. has announced a collaboration with AMD. The two companies will jointly validate the use of AMD Instinct™ GPUs for SoftBank’s next-generation artificial intelligence infrastructure, aiming to create a more efficient and flexible system for managing demanding AI workloads.
The Core of the Partnership: Tackling AI’s Resource Challenge
As the demand for generative AI and large language models (LLMs) surges, so does the need for immense computing power. However, the resources required by these models vary significantly based on their size and the number of concurrent tasks. Uniformly allocating powerful GPU resources often leads to inefficiency, with accelerators being either underutilized or creating bottlenecks. SoftBank’s initiative directly addresses this problem by focusing on the intelligent partitioning and allocation of GPU resources to precisely match the needs of each AI application.
Enter the Orchestrator: SoftBank’s Brainchild
At the heart of this collaboration is a tool developed by SoftBank called the Orchestrator. This system is designed to manage computing resources and optimally distribute AI applications. By enhancing the Orchestrator to work with AMD’s hardware, SoftBank aims to create a next-generation AI infrastructure that can flexibly control computing power. The goal is to run multiple AI applications efficiently on a single GPU, minimizing wasted resources and maximizing utilization. This is part of SoftBank’s broader strategy to build a robust AI infrastructure, which includes significant investments in data centers and related technologies.
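SoftBank has not published the Orchestrator’s internals, but the “right-sizing” idea described above can be sketched in a few lines: map each model’s memory footprint to the smallest available GPU partition that fits it, instead of handing every workload a whole accelerator. Everything below is hypothetical — the function name and the partition sizes are illustrative, not actual MI300X profiles or SoftBank code.

```python
# Hypothetical sketch of "right-sizing" GPU allocation: choose the
# smallest partition profile that satisfies a model's memory need.
# Partition sizes are illustrative, not real hardware profiles.
PARTITION_PROFILES_GB = [24, 48, 96, 192]  # ordered smallest to largest

def right_size(model_mem_gb: float) -> int:
    """Return the smallest partition size (GB) that fits the model."""
    for size in PARTITION_PROFILES_GB:
        if model_mem_gb <= size:
            return size
    raise ValueError(f"model needs {model_mem_gb} GB, larger than any partition")

# Example: a model needing ~30 GB would get a 48 GB slice, leaving the
# rest of the physical GPU free for other workloads.
print(right_size(30))  # -> 48
```

The point of a policy like this is the one the article makes: small inference jobs stop monopolizing large accelerators, so overall utilization rises.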

Why AMD Instinct? The Technical Edge
The choice of AMD Instinct accelerators is crucial. These GPUs feature advanced hardware-level partitioning capabilities, allowing a single physical GPU to be divided into multiple isolated, logical devices. This technology, which is key to the joint validation, enables the Orchestrator to allocate specific compute and memory resources to different AI tasks simultaneously. For example, an AMD Instinct MI300X GPU can be subdivided into several smaller virtual GPUs, allowing cloud providers and enterprises to run multiple, less-demanding inference jobs concurrently instead of dedicating an entire expensive accelerator to a single task.
This approach is AMD’s answer to Nvidia’s Multi-Instance GPU (MIG) technology, which similarly allows GPUs like the H100 to be partitioned into as many as seven independent instances. By providing hardware-level isolation, both technologies ensure that workloads running on one partition do not interfere with others, guaranteeing predictable performance.
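Neither company has detailed its scheduling policy, but the consequence of hardware partitioning can be illustrated with a minimal admission sketch: greedily place independent inference jobs into isolated partitions of one accelerator until memory or the partition count runs out. The 7-partition cap below mirrors the MIG limit mentioned above purely for illustration (AMD’s partition counts differ by mode), and the 192 GB capacity and function name are assumptions, not either vendor’s API.

```python
# Hypothetical admission sketch: give each job its own isolated
# partition of a single accelerator. The 192 GB capacity and the
# 7-partition cap (mirroring MIG's limit) are illustrative only.
def plan_partitions(jobs_gb, total_gb=192, max_parts=7):
    """Admit jobs smallest-first, one partition each, until memory or
    the partition cap is exhausted. Returns (admitted, rejected)."""
    admitted, rejected = [], []
    remaining = total_gb
    for job in sorted(jobs_gb):
        if len(admitted) < max_parts and job <= remaining:
            admitted.append(job)
            remaining -= job
        else:
            rejected.append(job)
    return admitted, rejected

# Five jobs share one GPU; the 100 GB job no longer fits after the
# smaller four are placed (192 - 10 - 20 - 40 - 80 = 42 GB left).
admitted, rejected = plan_partitions([20, 40, 10, 80, 100])
print(admitted, rejected)  # -> [10, 20, 40, 80] [100]
```

Because each partition is hardware-isolated, the four admitted jobs run concurrently with predictable performance — the property both AMD’s partitioning and Nvidia’s MIG are designed to guarantee.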

Market Implications: A Calculated Move Against Nvidia’s Dominance
While AMD has been making significant strides, Nvidia still holds a commanding share of the AI accelerator market, estimated to be around 80-85%. This partnership provides a major validation for AMD from a global technology giant and opens a path to challenge Nvidia’s entrenched position. For SoftBank, collaborating with AMD offers a powerful alternative to Nvidia, potentially reducing costs and avoiding vendor lock-in. Previously, SoftBank had developed and run its Orchestrator system on Nvidia-accelerated platforms, making this move a clear signal of its intent to diversify its hardware options.
“Through our collaboration, SoftBank is right-sizing GPU resource allocation to match model requirements and help enable flexible inference platforms that support a wide range of real-world AI services,” said Kumaran Siva, Corporate Vice President at AMD. Ryuji Wakikawa, who heads the Research Institute of Advanced Technology at SoftBank, added, “This enables more efficient operation of multiple AI applications on a single GPU. SoftBank will continue to improve the efficiency of computing resource utilization.”
Looking Ahead: The MWC 2026 Showcase and Beyond
The results of this joint effort are not far from public view. The companies plan to showcase a demonstration of the technology at the AMD booth during MWC Barcelona 2026. This will be a critical opportunity for AMD and SoftBank to prove the viability and efficiency of their combined solution to the global tech industry.
In the long term, this partnership could influence how cloud providers and large enterprises build their AI infrastructure. As AI models become more diverse, the ability to dynamically and efficiently partition GPU resources will be paramount. Success in this venture could accelerate AMD’s market share growth and establish a new blueprint for flexible, cost-effective AI computing at scale, marking a new chapter in the competitive AI hardware landscape.