ASUS GX10 (DGX Spark) – Initial Workloads and Benchmarks

Introduction

The ASUS GX10 is ASUS's take on the NVIDIA DGX Spark, NVIDIA's newest entry in AI-focused workstations, built around the GB10 Grace Blackwell superchip. In this post I walk through the initial setup (including a Docker hiccup), early power-draw measurements, and some baseline CPU benchmark numbers.


Docker Setup – Fixing a BuildKit Crash

Out of the box Docker was installed, but it failed to start with the following error:

error initializing buildkit: error creating buildkit instance: invalid database

A quick search of the NVIDIA developer forums turned up a fix:

Corrupt Docker BuildKit: https://forums.developer.nvidia.com/t/corrupt-docker-buildkit/348996

Run the commands below to reset BuildKit:

sudo systemctl stop docker.service
sudo -s
# move the corrupt BuildKit state aside; Docker recreates it on next start
mv /var/lib/docker/buildkit /var/lib/docker/buildkit-bad
exit
sudo systemctl start docker.service

After the steps above Docker started cleanly, and I was ready to fire up my first AI workload.
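To double-check that the reset took, a couple of standard commands will confirm the daemon is healthy and the BuildKit state was recreated:

sudo systemctl status docker.service
docker info
sudo ls /var/lib/docker/buildkit   # Docker recreates this directory on startup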


First Workload – Ollama Inference

To get something running quickly I used Ollama and loaded the gpt-oss:120b model with a 128k context window. This gave me an immediate feel for both performance and power draw.
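For anyone reproducing this, here is a minimal sketch of the setup, assuming a recent Ollama release (num_ctx is Ollama's context-length parameter; 131072 tokens is the 128k window):

ollama pull gpt-oss:120b
ollama run gpt-oss:120b
# inside the interactive session, raise the context window to 128k:
/set parameter num_ctx 131072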


Power Consumption

Measurements were taken at the wall outlet using a Kill‑a‑Watt power meter.

Scenario                                            Power (W)
Plugged in, powered off                             1
Idle (no workload)                                  38
Idle with gpt-oss:120b loaded (128k context)        47
Active inference (single response), gpt-oss:120b    125–135

These numbers show a modest idle baseline and a roughly 3–3.5x jump when the GPU is doing heavy tensor work.
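A wall meter captures whole-system draw; for just the GPU's self-reported number, nvidia-smi can poll its power sensor, assuming the GB10 driver exposes the power.draw field (on some integrated parts it reads N/A):

nvidia-smi --query-gpu=power.draw --format=csv --loop=1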


CPU Benchmarks

I ran Geekbench 6 on the GX10’s CPU and compared the scores to a few other reference systems.

Device                     Single-Core Score    Multi-Core Score
ASUS GX10                  3,102                18,943
Apple M1 MacBook Air       2,416                8,778
AMD Ryzen 7 9800X3D        3,306                17,712

I was very surprised to see the GX10's 20-core Arm processor (10 Cortex-X925 + 10 Cortex-A725) rank competitively with the 9800X3D and land substantially ahead of the Apple M1, especially on multi-core workloads.
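If you want to run the same comparison, Geekbench 6 ships a Linux Arm preview build as a tarball from https://www.geekbench.com/download/linux/ (the exact version in the filename will differ; this is a sketch, not verbatim):

tar xf Geekbench-6.*-LinuxARMPreview.tar.gz
cd Geekbench-6.*-LinuxARMPreview
./geekbench6   # CLI runner; prints a results URL when the run completes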

Simple Ollama Benchmark

I have a much bigger write‑up on LLM benchmarks on the way, but initial results look promising. On average I am seeing 38–42 tokens per second with gpt-oss:120b.
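One easy way to sanity-check throughput is Ollama's built-in timing output: passing --verbose to ollama run prints prompt and generation statistics after each response, including an eval rate in tokens per second (the prompt below is just an example):

ollama run gpt-oss:120b --verbose "Explain KV caching in two sentences."
# the timing summary after the response reports generation speed as "eval rate"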