Building Your First AI GPU Rig - BA.net

Last Modified: Wednesday, 14-Jan-2026 12:09:34 EST

By now you’ve heard about the advantages of private AI. Building an AI GPU server is a complex task.

Sifting through all of the computer parts can be a daunting task, especially if you’re new to this. So, we’ve laid out below everything you need to know in order to build a successful general-purpose AI rig.

We will be building this rig with gpt-oss 20B and gemma3 12B in mind, as they are currently among the most popular models for private AI.
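As a sanity check on what these models demand from the hardware below, here is a back-of-the-envelope VRAM estimate. The bytes-per-weight and overhead figures are rough assumptions for a typical 4-bit quantization, not official numbers:

```python
# Back-of-the-envelope VRAM estimate for running a quantized model.
# NOTE: bytes_per_weight and overhead_gb are rough assumptions for a
# typical 4-bit quantization, not vendor figures.

def vram_estimate_gb(params_billions, bytes_per_weight=0.55, overhead_gb=2.0):
    """Approximate VRAM (GB) needed to serve a quantized model.

    bytes_per_weight ~0.55 corresponds to ~4-bit weights plus scales;
    overhead_gb covers the KV cache, CUDA context, and activations.
    """
    return params_billions * bytes_per_weight + overhead_gb

for name, params_b in [("gpt-oss 20B", 20), ("gemma3 12B", 12)]:
    print(f"{name}: ~{vram_estimate_gb(params_b):.1f} GB")
```

Under these assumptions the 20B model wants roughly 13 GB of VRAM and the 12B model roughly 9 GB, which is worth keeping in mind when choosing GPUs.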

The Motherboard (MOBO)

The Motherboard is the big flat piece of your computer “guts” that EVERYTHING plugs into. Every part that you choose needs to be compatible with your MOBO, so keep that in mind. The motherboard is going to dictate what kind of CPU you have, how many GPUs you can hold, how much memory you can have, etc. All of your other parts rely on your motherboard, so choose wisely.

The main compatibility issue will arise from your MOBO/CPU combo, as the motherboard has to have a matching CPU socket. An easy way to check whether your parts are compatible is to use the PC Part Picker tool.

Now, most MOBOs have multiple slots for multiple GPUs. Right now, a standard AI inference rig MOBO will have the capacity for 6 GPUs, and some motherboard makers are starting to add slots for even more.

Anyway, we’re going to build a standard 3 or 6 GPU rig, so anything bigger would be overkill. This MSI Pro Solution Intel Z170A LGA 1151 DDR4 USB 3.1 ATX Motherboard (Z170A SLI Plus) should work perfectly! It has 6 GPU slots, which is exactly what we’re looking for!

Other recommended MOBOs are the Asus Prime Z270, Asus Prime H270 Plus, and BIOSTAR TB250-BTC.

More motherboard options

Central Processing Unit (CPU)

The CPU in an AI inference rig isn’t all that critical. It matters, but you don’t need a high-end chip: an i7 would be huge overkill, and even an i5 would probably be a bit much. Since the model runs on the GPUs, the CPU does one thing and one thing only: feed them for inference.

It appears that everyone gravitates towards the Intel Celeron series. They are powerful enough to get the job done, yet inexpensive to help keep the costs of your rig down.

The Intel Celeron G3900 Dual-Core (2 Core) 2.80 GHz Processor is the most common CPU found in AI inference rigs, and it will work for just about anything else, too! It is powerful enough for this job, yet it is only $41.00. Definitely a good buy while they’re readily available!

Graphics Processing Unit (GPU)

This is probably the most important part of an AI inference rig. Since the model runs almost entirely on the GPU, you’ll need a capable one to get the highest token throughput possible. Choosing a GPU can be a daunting task because there are so many, with different specs, RAM, chipsets, and a whole lot more that affects your inference speed.

Today NVIDIA reigns supreme! Their RTX 3070, RTX 3070 Ti, and RTX 3080 have been killing it! NVIDIA’s CUDA ecosystem is far better supported by inference software than AMD’s, and NVIDIA products generally hold their value better than AMD. So, if you ever wanted out of the AI game, you could salvage your parts for a decent amount.

There are two main things that you’ll need to look out for when purchasing a GPU:

  1. GPU RAM
  2. AI Inference Efficiency

GPU RAM

GPU RAM is important because without enough of it, you won’t be able to fit your AI model on the GPU.

You need enough GPU RAM to run ollama and hold the large AI model files. So, you’ll need AT A BARE MINIMUM 8GB of RAM on your GPU; anything less and you may not be able to load the models this guide targets.

More is better, so if you can afford a GPU with more RAM, GET IT! It will ensure that you are able to keep running even as AI model files get bigger (which they continually do). Again… more is better!
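A minimal sketch of the fit check, assuming an on-disk model size of about 8 GB for a 12B 4-bit quant (an assumption; check `ollama list` for the real number on your machine):

```python
# Quick check: will a downloaded model file fit on a given GPU?
# The 8.1 GB model size below is an assumed figure for a 12B 4-bit
# quant, not an official number.

def fits_in_vram(model_file_gb, vram_gb, headroom_gb=1.5):
    """Leave headroom for the CUDA context and KV cache."""
    return model_file_gb + headroom_gb <= vram_gb

print(fits_in_vram(8.1, 8))   # an 8GB card is too tight
print(fits_in_vram(8.1, 12))  # a 12GB card fits comfortably
```

This is why 8GB is the bare minimum rather than a comfortable target: the model file alone can eat the whole card.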

Nvidia GPU Comparison Table


GPU Efficiency for AI

| Model | VRAM | TDP (W) | New Price (USD) | Used Price (USD) |
| --- | --- | --- | --- | --- |
| BA GPU Hosting | | | | |
| BA 40 Tflops | 40G RAM | 500 | $199.00 /month (product link) | - |
| BA 50 Tflops | 50G RAM | 600 | $249.00 /month (product link) | - |
| BA On-Prem | any RAM | any | $1190.00 /year (product link) | - |
| Blackwell Series | | | | |
| RTX Pro 4000 Blackwell | 24 GB GDDR7 | 140 | ~$1,650 | ~$1,200 (estimate, limited availability) |
| RTX Pro 6000 Blackwell | 96 GB GDDR7 | 600 | ~$8,500 | ~$7,000 (estimate, limited availability) |
| A100 | 80 GB HBM2e | 300 (PCIe) | ~$12,000 | ~$7,000 |
| H100 | 80 GB HBM3 | 350-400 (PCIe) | ~$30,000 | ~$20,000 |
| H200 | 141 GB HBM3e | 350-700 (PCIe) | ~$35,000 | ~$25,000 |
| RTX 50 Series | | | | |
| RTX 5090 | 32 GB GDDR7 | 575 | ~$3,500 (AIB models; MSRP $1,999 but inflated by hikes) | N/A (recent launch) |
| RTX 5080 | 16 GB GDDR7 | 360 | ~$1,500 (post-hike estimate; MSRP ~$999) | N/A (recent launch) |
| RTX 5070 Ti | 16 GB GDDR7 | 300 | ~$600 (estimated MSRP) | N/A (recent launch) |
| RTX 5070 | 12 GB GDDR7 | 250 | $400 (discounted from $470) | N/A (recent launch) |
| RTX 5060 Ti | 8 GB GDDR7 | 180 | $400 (discounted from $470) | N/A (recent launch) |
| RTX 5060 | 8 GB GDDR7 | 145 | $250-$280 | N/A (recent launch) |
| RTX 5050 | 8 GB GDDR6 | 130 | ~$200 | N/A (recent launch) |
| RTX 40 Series | | | | |
| RTX 4090 | 24 GB GDDR6X | 450 | $2,755 | $2,199 |
| RTX 4080 Super | 16 GB GDDR6X | 320 | $999 (MSRP) | ~$800 (estimated from market trends) |
| RTX 4080 | 16 GB GDDR6X | 320 | $1,199 (original MSRP; discounted now) | ~$700 (estimated) |
| RTX 4070 Ti Super | 16 GB GDDR6X | 285 | $799 (MSRP) | ~$600 (estimated) |
| RTX 4070 Ti | 12 GB GDDR6X | 285 | $799 (original MSRP; superseded) | ~$550 (estimated) |
| RTX 4070 Super | 12 GB GDDR6X | 220 | $599 (MSRP) | ~$450 (estimated) |
| RTX 4070 | 12 GB GDDR6X | 200 | $549 (MSRP) | ~$400 (estimated) |
| RTX 4060 Ti (16GB) | 16 GB GDDR6 | 160 | $499 (MSRP) | ~$350 (estimated) |
| RTX 4060 Ti (8GB) | 8 GB GDDR6 | 160 | $399 (MSRP) | ~$300 (estimated) |
| RTX 4060 | 8 GB GDDR6 | 115 | $299 (MSRP) | ~$200 (estimated) |
| RTX 30 Series | | | | |
| RTX 3090 Ti | 24 GB GDDR6X | 450 | N/A (discontinued) | ~$800 (estimated) |
| RTX 3090 | 24 GB GDDR6X | 350 | N/A (discontinued) | ~$700 (estimated) |
| RTX 3080 Ti | 12 GB GDDR6X | 350 | N/A (discontinued) | ~$500 (estimated) |
| RTX 3080 (10GB) | 10 GB GDDR6X | 320 | N/A (discontinued) | $370-$400 |
| RTX 3070 Ti | 8 GB GDDR6X | 290 | N/A (discontinued) | ~$300 (estimated) |
| RTX 3070 | 8 GB GDDR6 | 220 | N/A (discontinued) | ~$250 (estimated) |
| RTX 3060 Ti | 8 GB GDDR6 | 200 | N/A (discontinued) | ~$200 (estimated) |
| RTX 3060 (12GB) | 12 GB GDDR6 | 170 | ~$200 (reintroduced) | ~$150 (estimated) |
| RTX 3050 (8GB) | 8 GB GDDR6 | 130 | N/A (discontinued) | ~$100 (estimated) |

Notes:
- Specs are for desktop variants; VRAM and TDP reflect the primary configurations (excluding mobile ranges).
- Prices reflect January 2026 market conditions, including ongoing hikes due to VRAM shortages and AI demand. New prices are approximate retail/MSRP where available; used prices are eBay/secondary-market estimates. Actual prices may vary by region and vendor.
- For the recently launched RTX 50 series, used markets are limited. Discontinued models like the RTX 30 series are primarily available used.
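One practical way to use the table is to shortlist cards that meet your VRAM floor and sort by price. The prices hard-coded below are the approximate used/street figures quoted in the table, not live quotes, and the list is a small illustrative subset:

```python
# Shortlist GPUs from the table that meet a VRAM floor, cheapest first.
# Prices are the approximate used figures quoted in the table above
# (assumptions, not live quotes); this is a small illustrative subset.

gpus = [
    # (model, VRAM in GB, approx. used price in USD)
    ("RTX 3060 12GB",    12,  150),
    ("RTX 3080 10GB",    10,  385),
    ("RTX 3090",         24,  700),
    ("RTX 4060 Ti 16GB", 16,  350),
    ("RTX 4090",         24, 2199),
]

def shortlist(cards, min_vram_gb):
    """Cards with enough VRAM, sorted from cheapest to priciest."""
    fits = [c for c in cards if c[1] >= min_vram_gb]
    return sorted(fits, key=lambda c: c[2])

for model, vram, price in shortlist(gpus, min_vram_gb=12):
    print(f"{model}: {vram} GB, ~${price} used")
```

For a gemma3 12B rig, for example, a 12 GB floor keeps the used RTX 3060 12GB on the list as the budget pick.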


MoBo RAM (Random Access Memory)

At least 8 GB RAM is the minimum you need, though we recommend builds of 16 GB and above. Any cheap vendor is fine, as long as it is desktop memory of the type your motherboard supports (DDR4 for the Z170A board recommended above).

Power Supply Unit (PSU)

A 1300W Power Supply Unit should do the trick for most 6 GPU rigs (without Nvidia 3080s). You can calculate how many watts you’ll need by looking at the power requirements for each component of your computer and adding them together. No PSU is 100% efficient, so take that calculated number and go with a PSU rated a bit higher.
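The wattage calculation described above can be sketched like this; the per-component wattages are illustrative assumptions, not measurements of any specific build:

```python
# Add up component power draw and size the PSU with ~20% headroom.
# Per-component wattages below are illustrative assumptions.

parts_w = {
    "6x RTX 3060 Ti (200 W each)": 6 * 200,
    "CPU (Celeron G3900)":         51,
    "motherboard + RAM + SSD":     60,
    "risers + fans":               40,
}

total_w = sum(parts_w.values())
psu_w = total_w * 1.2  # ~20% headroom for efficiency losses and spikes
print(f"estimated load: {total_w} W -> pick a PSU of at least {psu_w:.0f} W")
```

With these example numbers, the rig draws about 1,350 W and you would want a PSU in the 1,600 W class or split the load across two supplies.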

It is always better to have a PSU that can supply more than you need than not enough. Too little power and you’ll overwork it and could start an electrical fire, which is bad in most opinions.

Look for Gold or Platinum rated (as opposed to silver or bronze). This is the industry standard for rating the efficiency of a computer’s power supply unit (PSU).

Also check the number of Molex, 6-pin, and 8-pin connector cables, to make sure you can connect all your GPUs.

Electrical circuit

It is recommended never to go above 80% capacity on an electrical circuit:

20 amps * 120 volts * 0.8 = 1,920 watts safely
20 amps * 240 volts * 0.8 = 3,840 watts safely
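The 80% rule above can be wrapped in a small helper, useful when planning how many rigs a given circuit can carry:

```python
# Safe continuous load for a household circuit, using the 80% rule
# described in the text.

def safe_circuit_watts(amps, volts, derate=0.8):
    """Watts you can safely draw continuously from one circuit."""
    return round(amps * volts * derate)

print(safe_circuit_watts(20, 120))  # 1920
print(safe_circuit_watts(20, 240))  # 3840
```

So a single 20 A / 120 V circuit cannot safely feed two fully loaded 1,300 W rigs; you would need a second circuit or a 240 V line.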

Risers

To connect more than one GPU to your motherboard you will need PCIe riser adapters.

Example Risers at amazon

Molex to 6 Pin Cables

You will need some Molex to 6 Pin power cables, depending on your power supply. Read the specs of your PSU, and order the number of missing cables.

Important: Never use SATA to 6-pin adapters! They cannot carry enough watts to power GPU risers safely.

Example cable at amazon

Case and Power Button

An open air case is recommended due to the heat generated by the GPUs. For up to 3 GPUs you can simply screw your components on a shelf-like piece of wood.

To start your computer you will need a power button like this

Storage

You need a hard drive, preferably an SSD. You can boot your AI inference OS from an 8GB pendrive. Use our turnkey free banet nvidia GPT OS to test the hardware build, and run the final software installer.


Operating System (OS)

It is argued that Linux is best for AI inference. If you choose Nvidia GPUs you can use the free banet nvidia GPT OS. Nvidia will generally be more power efficient than AMD and optimal for AI inference.

Ultimately, it is all up to user preference.


Remote AC Power Outlets aka Smart Plugs

Bottom of the Nice-But-Not-Required build list is a way to remotely power cycle your rig.

The Wemo Insight is preferred because it supports IFTTT and monitors watt usage. link

Alternatively, the Etekcity Smart Plugs do not work with IFTTT but are much cheaper. link

A Kill-a-Watt is useful for monitoring wattage use, but doesn't have remote power cycling capabilities. link

Note: Smart plugs are typically rated for no more than 1200-1500W total power.


Alternatively Relay Buttons
A WiFi or Ethernet relay board can be used to remotely operate the power or reset buttons on your motherboard. amazon


More info. mobo recommendations and mobo bios config settings mobo bios config help

Ethernet Internet or Wifi

An Ethernet internet connection is preferred, but WiFi can be used. Make sure to have a USB extender cable, since rigs create a lot of electrical interference around them and WiFi almost never works within a radius of 0.5-1m. Example for ba.net nvidia gpt os: amazon


Conclusion

You can make just about any rig into an AI Inference rig. Only a few pieces of hardware really make a difference. The biggest difference maker in the AI Inference world is your choice of GPU.

Today Nvidia reigns supreme in bang for buck, and in power efficiency as well.

We want to make AI Inference available for everyone. It all starts with building your rig, so let’s get building!


More details: Specification List - What Hardware Should I Buy? PSU Options | PSU Cables

