A custom-built system designed to run large language models locally, combining repurposed hardware with modern AI acceleration capabilities.
This project involved modifying an older AM4 desktop computer to create a capable local LLM inference machine. By leveraging an AMD Radeon AI PRO R9700 graphics card with 32GB of GDDR6 memory, the system can run large language models locally using LM Studio on Ubuntu 24.04. This setup provides privacy, full control over AI workloads, and the ability to experiment with various open-source models without relying on cloud services.
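To get a feel for what "large models locally" means on 32GB of VRAM, a rough weight-footprint estimate helps. The sketch below uses a common rule of thumb (weight bytes ≈ parameters × bits-per-weight ÷ 8); the effective bits-per-weight figures for the GGUF quantization levels are approximations, not exact values.

```python
# Rough VRAM-footprint estimate for quantized LLM weights.
# Rule of thumb (an approximation, not a measured figure):
#   weight bytes ≈ parameter_count * bits_per_weight / 8,
# on top of which the runtime also needs room for the KV cache.

def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights in gigabytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Common GGUF quantization levels (approximate effective bits per weight):
for name, bits in [("Q8_0", 8.5), ("Q5_K_M", 5.7), ("Q4_K_M", 4.8)]:
    for size_b in (8, 14, 32, 70):
        gb = weight_gb(size_b, bits)
        verdict = "fits in 32GB" if gb <= 32 else "exceeds 32GB"
        print(f"{size_b:>3}B @ {name:7}: ~{gb:5.1f} GB ({verdict})")
```

By this estimate, models up to roughly the 30B-parameter class fit comfortably in 32GB at 4-bit quantization, while 70B-class models generally do not without heavier compression or partial CPU offload.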
AMD Ryzen 5 2600: 6-core/12-thread CPU
AMD Radeon AI PRO R9700: 32GB GDDR6 VRAM
32GB DDR4 RAM: dual-channel configuration
512GB NVMe SSD: OS and model storage
Ubuntu 24.04 LTS: optimized for AI workloads
LM Studio: local LLM inference platform
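Beyond its desktop interface, LM Studio can expose loaded models through a local, OpenAI-compatible HTTP server (by default on port 1234 once the server is started in the app). A minimal stdlib-only client might look like this; the model name is just a placeholder, since LM Studio routes requests to whichever model is currently loaded.

```python
# Minimal client for LM Studio's local server (an OpenAI-compatible
# HTTP API, listening by default on http://localhost:1234/v1 once
# the server is started inside LM Studio).
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default port

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,  # placeholder; LM Studio uses the loaded model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def chat(prompt: str) -> str:
    """Send a prompt to the locally running model and return its reply."""
    payload = build_chat_request(prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the LM Studio server to be running):
# print(chat("Summarize the benefits of local LLM inference."))
```

Because the API shape matches OpenAI's, existing OpenAI client libraries can also be pointed at the local endpoint by overriding their base URL.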
All AI processing happens locally, with no data sent to external servers.
32GB of VRAM is enough to hold roughly a 30B-parameter model entirely in GPU memory at 4-bit quantization, with headroom for context.
Repurposing an existing AM4 platform reduced overall project cost.
A flexible platform for experimenting with a wide range of open-source LLM models.
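The throughput benefit of keeping a model fully in VRAM can be sanity-checked with a back-of-the-envelope estimate: token generation is typically memory-bandwidth-bound, since each decoded token streams the full set of weights from VRAM. The bandwidth figure and efficiency factor below are assumptions for illustration, not measured values for the R9700.

```python
# Back-of-the-envelope decode throughput for a memory-bandwidth-bound
# workload: tokens/s ≈ bandwidth / weight size, scaled down by a
# real-world efficiency factor. Both constants are assumptions.
BANDWIDTH_GBPS = 640.0  # assumed GDDR6 bandwidth in GB/s, for illustration

def decode_tokens_per_sec(weights_gb: float, efficiency: float = 0.6) -> float:
    """Estimated token rate: bandwidth-bound upper limit times efficiency."""
    return BANDWIDTH_GBPS / weights_gb * efficiency

# A ~32B model quantized to roughly 19 GB of weights:
print(f"~{decode_tokens_per_sec(19.0):.0f} tok/s (rough estimate)")
```

Even as a crude model, this shows why fitting the whole model in GDDR6 matters: spilling weights to system DDR4 drops the effective bandwidth by an order of magnitude, and token rate falls with it.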