DGX Spark for Local LLM Development

This content originally appeared on DEV Community and was authored by Aleksandr Pimenov

Hi everyone!

DGX Spark

Recently, I got a DGX Spark running Linux (I'm a Linux user myself, so I was really glad to put it through its paces).

Among other things, I work on AI development, and I regularly face the question of which hardware to choose for running LLMs locally.

ollama run qwen2.5:14b --verbose "Why is the sky blue?"

To properly compare all the hardware I have access to, I chose the qwen2.5:14b model.

Why this one? Because the quantized weights take only 9.0 GB, so the model fits comfortably in the video memory of every machine I tested.
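The arithmetic behind that 9.0 GB figure can be sketched roughly. The parameter count and bits-per-weight below are my assumptions (Qwen2.5-14B is around 14.8B parameters, and Ollama's default Q4_K_M quantization averages roughly 4.9 bits per weight), not numbers from the benchmark:

```python
# Back-of-the-envelope VRAM estimate for a 4-bit-quantized 14B model.
# Assumptions (mine, not from the article): ~14.8e9 parameters for
# Qwen2.5-14B and ~4.9 effective bits per weight for Q4_K_M.
params = 14.8e9
bits_per_weight = 4.9
weights_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weights_gb:.1f} GB of weights")  # close to the 9.0 GB download
```

On top of the weights, the KV cache and runtime buffers add another gigabyte or so, which is why a 12 GB card still runs this model comfortably.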

I measured the results with llm-benchmark; the table below summarizes them.

[Benchmark comparison table]
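The per-machine numbers come from the timing summary that `ollama run --verbose` prints after each response. A minimal sketch of pulling the generation speed out of that summary (the field names are an assumption based on current Ollama releases, not a stable API, and the sample numbers are made up for illustration):

```python
import re

# Sample timing summary in the shape `ollama run --verbose` prints.
# Field names are an assumption, not a stable API; numbers are invented.
sample = """\
total duration:       12.4s
load duration:        1.1s
prompt eval rate:     310.0 tokens/s
eval count:           256 token(s)
eval rate:            20.6 tokens/s
"""

def eval_rate(verbose_output: str) -> float:
    """Extract generation speed (tokens/s) from the verbose summary."""
    # Anchor at line start so "prompt eval rate" is not matched.
    m = re.search(r"^eval rate:\s+([\d.]+) tokens/s",
                  verbose_output, re.MULTILINE)
    if m is None:
        raise ValueError("no 'eval rate' line found")
    return float(m.group(1))

print(eval_rate(sample))
```

The "eval rate" line (generation speed) is the figure worth comparing across machines; "prompt eval rate" measures prompt processing and is usually much higher.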

Pros

  • It's great that the OS is Linux: everything works exactly as it does in production.
  • It’s cool that the devices can be stacked (although I didn’t manage to test this mode).
  • A generous memory reserve: 128 GB of unified RAM.
  • Silent: even at 100% load, I hear no noise from 30 cm away.
  • Excellent software support, thanks to the Ubuntu/Linux community and the DGX developers themselves.
  • Low power consumption.

Cons

  • Expensive, especially considering the chip inside. On smaller models, my RTX 3060 16 GB performs just as well.

Conclusions

Without a doubt, I will find a use for it as a local hub for running large LLMs, taking load off my main machine.

But I cannot recommend it.

For production and data-center deployment it might make sense, since it saves on electricity.

For example, I wasn’t able to fit more than eight machines with 4× RTX 6000 96 GB each into one rack without additional power.

But I can't imagine how to mount a DGX Spark in a rack.

For home use, in my opinion, it's easier to get a Mac Studio with 96 GB of memory: it costs $1000 more but delivers more tokens per second.

Or, if you don’t want to compromise, get an RTX 6000 96 GB to get maximum performance.

Or buy two RTX 5090 32 GB cards for the same money and get 64 GB of VRAM.

If you have a more convincing explanation of what the DGX Spark is really for, or if you want me to run some tests, I'll be glad to discuss it in the comments.

Wishing everyone good vibes and good luck choosing your hardware!



