AI - Building my own budget local LLM computer

My Struggle:
With the increasing demand for working with LLMs, instead of paying for API keys, I am thinking about building my own machine to run LLMs locally. However, I am a bit new to computer hardware, so I plan to do some research first and sort out what to buy at the end. My budget is around 1,000 - 3,000 USD.
Step 1: Understanding what is inside the computer (Week of 20 Mar 2025)
First, I tried to build a basic understanding of the things I need and to explore the different options.
Basic Components required:
- CPU
- GPU (for higher performance)
- RAM
- SSD
- Power supply (PSU)
- Motherboard
- Cooler / Fan
- The box (case)
Since there are so many components, there are a lot of possible combinations. Let's see how to find a balance between price and performance for my use case :)!
At the same time, as I am located in Hong Kong, I realized that GPU prices, especially for the 50 series, are pretty expensive (at least 50% more than the price stated on the official website), and the cards are also out of stock. I wonder what card I will end up with ;,)
Step 2: Understanding the relationship between hardware and performance
For CPU selection: I understand that we can follow a benchmark hierarchy to search for the desired processor model.
CPU Benchmark: https://www.tomshardware.com/reviews/cpu-hierarchy,4312.html
GPU: There are a few criteria to consider when selecting a GPU. Based on my understanding, the most popular options are Nvidia, AMD, and Apple (Mac).
- VRAM
- VRAM, or Video Random Access Memory, is specialized memory used by graphics processing units (GPUs) to store and process data for rendering images, videos, and other visual computations. In the context of large language models (LLMs), VRAM plays a critical role when these models are run on GPUs, which is common for both training and inference due to the parallel processing power GPUs provide. In practice, the model's weights (plus some working memory) have to fit in VRAM, so it is usually the first spec to check (see the rough sizing sketch below).
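To make this concrete, here is a minimal back-of-envelope sketch in Python (my own rough rule of thumb, not an official formula): the weights take roughly parameter count times bytes per parameter, and I assume an extra ~20% of overhead for the KV cache and activations.

```python
# Rough VRAM estimate for running an LLM locally.
# Assumption (mine): weights + ~20% overhead for KV cache / activations.

def estimate_vram_gb(params_billion: float, bytes_per_param: float,
                     overhead_factor: float = 1.2) -> float:
    """Estimate VRAM (GB) needed to load a model for inference.

    bytes_per_param: 2 for FP16/BF16, 1 for 8-bit, 0.5 for 4-bit quantization.
    overhead_factor: rough multiplier for KV cache / activations (assumed).
    """
    weights_gb = params_billion * 1e9 * bytes_per_param / (1024 ** 3)
    return weights_gb * overhead_factor

if __name__ == "__main__":
    # Example: a 7B-parameter model at different quantization levels
    for label, bpp in [("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
        print(f"7B model @ {label}: ~{estimate_vram_gb(7, bpp):.1f} GB VRAM")
```

Running this gives roughly 15.6 GB for FP16, 7.8 GB for 8-bit, and 3.9 GB for 4-bit, which explains why a 16 GB+ card (or aggressive quantization) matters so much for local LLMs.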
TBC