Demonstration of Remote Distributed AI Infrastructure Between Tokyo and Fukuoka Using 'IOWN APN' Confirms Practical Performance Based on Workload Characteristics
Practicality of remote distributed AI infrastructure between Tokyo and Fukuoka demonstrated using IOWN APN.
GMO Internet, Inc. (Headquarters: Shibuya-ku, Tokyo; President and CEO: Masashi Ito; hereinafter 'GMO Internet'), NTT East Corporation (Headquarters: Shinjuku-ku, Tokyo; President: Naoki Shibuya; hereinafter 'NTT East'), NTT West Corporation (Headquarters: Osaka-shi, Osaka; President: Ryota Kitamura; hereinafter 'NTT West'), and QTnet, Inc. (Headquarters: Fukuoka-shi, Fukuoka; President: Yoshio Ogura; hereinafter 'QTnet') have completed a technical demonstration of a remote distributed AI infrastructure between Tokyo and Fukuoka utilizing the 'APN (All-Photonics Network)' of the 'IOWN (Innovative Optical and Wireless Network)'.
In this demonstration, conducted from November 2025 to February 2026, an IOWN APN dedicated line was established between Tokyo (storage) and Fukuoka (GPU) to measure and evaluate AI workload performance on an AI development platform connecting GPUs from 'GMO GPU Cloud' with high-capacity storage. The results confirmed that for Large Language Model (LLM) training, performance degradation was only about 0.5% compared with a local environment, indicating a negligible impact. For image classification tasks that involve data loading, processing at a practical level was confirmed to be possible even in a remote environment through optimizations such as refining the training data. This demonstrates that practical AI development in a remote distributed environment is achievable through design tailored to workload characteristics.
Prior to this, in July 2025 the four companies conducted a preliminary demonstration (Phase 1), running performance tests in a simulated remote environment that assumed the approximately 1,000 km distance between Tokyo and Fukuoka; the details were published in a technical report.
Press Release: https://internet.gmo/news/article/88/
Technical Report: https://internet.gmo/news/article/87/
Based on the results of this demonstration, the four companies will continue to advance initiatives toward the practical application of remote distributed AI infrastructure to meet customer needs.
[Background and Objectives]
With the recent spread of generative AI and LLMs, demand for AI development platforms is expanding rapidly. Conventionally, it has been considered essential for GPUs and high-capacity storage to be physically adjacent. However, to address data center space constraints and the need for companies to manage data at their own facilities, there is a demand for distributed AI development platforms that transcend geographical limitations. The four companies have been examining the technical feasibility of connecting remote GPUs and storage using the high-speed, high-capacity, and low-latency features of IOWN APN.
[Overview and Results of Preliminary Demonstration (Phase 1)]
In July 2025, a delay adjustment device, 'OTN Anywhere', was installed in a data center in Fukuoka, and two test tasks, image recognition (ResNet) and LLM training (Llama2 70B), were executed on GMO GPU Cloud. Under simulated delay conditions equivalent to the Tokyo-Fukuoka distance (15 ms), the decline in the ResNet benchmark score was confirmed to be around 12%, which was judged to be within a commercially viable range, leading to the current demonstration.
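As a rough plausibility check, not part of the published results, the 15 ms delay setting is consistent with round-trip propagation over a fiber route of roughly 1,000 km plus equipment and routing overhead. A minimal sketch, assuming light travels at about 200,000 km/s in fiber (refractive index of roughly 1.5) and that the actual fiber route is somewhat longer than the straight-line distance:

```python
# Rough estimate of round-trip propagation delay over a Tokyo-Fukuoka fiber route.
# Assumptions (not from the press release): a one-way fiber route of ~1,200 km,
# somewhat longer than the ~1,000 km straight-line distance, and a refractive index of ~1.5.
SPEED_OF_LIGHT_KM_S = 300_000            # speed of light in vacuum, km/s
FIBER_REFRACTIVE_INDEX = 1.5
route_km = 1_200                          # assumed one-way fiber route length

speed_in_fiber = SPEED_OF_LIGHT_KM_S / FIBER_REFRACTIVE_INDEX   # ~200,000 km/s
one_way_ms = route_km / speed_in_fiber * 1_000                  # ~6 ms
round_trip_ms = 2 * one_way_ms                                  # ~12 ms

print(f"one-way ~{one_way_ms:.1f} ms, round trip ~{round_trip_ms:.1f} ms")
# Equipment and extra routing overhead bring this toward the 15 ms value used in Phase 1.
```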
[Overview and Results of Current Demonstration (Phase 2)]
In this demonstration, the second headquarters of GMO Internet Group (Shibuya-ku, Tokyo) and the QTnet data center (Fukuoka-shi, Fukuoka) were connected via IOWN APN (100GbE). A GPU server ('NVIDIA HGX H100') was placed in Fukuoka and high-speed storage ('DDN AI400X2') in Shibuya, and AI training performance using the remote storage was measured (an illustrative sketch of such a measurement follows the overview below).
- Demonstration Period: November 2025 – February 2026
- Connection Section: Shibuya-ku, Tokyo (GMO Internet) – Fukuoka-shi, Fukuoka (QTnet)
- Demonstration Content: Measurement of training time for image classification tasks (ResNet) and LLM training tasks (Llama2 70B)
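For illustration only, since the actual measurement procedure and storage paths have not been published, the following sketch shows how such a comparison could be structured: the same training job is timed twice, once with the dataset on storage in the same data center and once with the dataset on storage mounted across the IOWN APN link. The mount points, dataset layout, and hyperparameters below are hypothetical placeholders.

```python
# Illustrative sketch only: timing an identical training run against a local dataset
# path and a dataset path mounted across the IOWN APN link. The mount points, dataset
# layout, and hyperparameters are hypothetical, not those used in the demonstration.
import time

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def timed_training_run(data_root: str, epochs: int = 1) -> float:
    """Train ResNet-50 on images under data_root and return elapsed wall-clock minutes."""
    dataset = datasets.ImageFolder(
        data_root,
        transform=transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ]),
    )
    loader = DataLoader(dataset, batch_size=256, shuffle=True, num_workers=16)
    model = models.resnet50(num_classes=len(dataset.classes)).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()

    start = time.perf_counter()
    for _ in range(epochs):
        for images, labels in loader:          # data loading is where storage latency shows up
            images, labels = images.cuda(), labels.cuda()
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return (time.perf_counter() - start) / 60

# Hypothetical mount points: NVMe in the same data center vs. DDN storage over the APN link.
local_minutes = timed_training_run("/mnt/local-nvme/train")
remote_minutes = timed_training_run("/mnt/apn-ddn/train")
print(f"local: {local_minutes:.2f} min, remote: {remote_minutes:.2f} min")
```

In a configuration like this, only the data-loading path crosses the APN link while computation stays on the local GPUs, which is consistent with the finding below that compute-heavy LLM training is affected far less than data-loading-heavy image classification.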
[Demonstration Results]
The results confirmed that even in a remote distributed environment via IOWN APN, performance comparable to a local environment (within the same data center) can be achieved.
◾️ Large Language Model (Llama2 70B) Training Task
- Local Environment: 24.87 minutes
- Remote Environment (via IOWN APN): 24.99 minutes
- It was demonstrated that for LLM training, which is compute-bound, the impact of latency is extremely limited (a difference of approximately 0.5%).
◾️ Image Classification (ResNet) Task
- Local Environment: 13.72 minutes
- Remote Environment (via IOWN APN): 14.38 minutes
- It was confirmed that even for tasks involving heavy data loading, processing at a practical level is possible in a remote environment through appropriate preparation of the training data (see the calculation after the note below).
*The results of this demonstration have not been officially verified or approved by the MLCommons Association.
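As a simple check derived purely from the published training times (arithmetic only, not an additional measurement), the relative slowdown of the remote environment can be computed as follows:

```python
# Relative slowdown of the remote (IOWN APN) environment versus the local environment,
# computed directly from the training times published above.
published_minutes = {
    "Llama2 70B training": (24.87, 24.99),            # (local, remote)
    "ResNet image classification": (13.72, 14.38),
}

for task, (local, remote) in published_minutes.items():
    slowdown_pct = (remote - local) / local * 100
    print(f"{task}: {slowdown_pct:.1f}% slower in the remote environment")

# Expected output:
#   Llama2 70B training: 0.5% slower in the remote environment
#   ResNet image classification: 4.8% slower in the remote environment
```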
[Transformation Brought by This Demonstration]
The success of this demonstration marks a major turning point in overcoming the challenge posed by the physical separation of computing resources and data. This model, in which data is not moved and computing resources access it remotely, offers a new option for fields with strict data sovereignty and security requirements, and is expected to contribute significantly to the realization of 'sovereign clouds' in sectors such as finance, healthcare, defense, and government, where internal controls and cross-border data regulations are stringent.
[Expected Use Cases]
- AI training while maintaining large-scale or confidential data under in-house management.
- Hybrid utilization with existing on-premises environments to supplement GPU resources from the cloud.
- Business continuity (BCP) measures through geographically distributed placement of computing resources and storage.