
Mac + External GPU AI

Apple Silicon Macs paired with external NVIDIA/AMD GPUs for AI compute workflows. For AI compute only — not gaming or macOS graphics acceleration.

Important warning

External GPUs on Apple Silicon Macs are for AI compute workflows only. They do not accelerate macOS graphics, gaming, displays, Final Cut, or Blender viewport rendering.

Apple's official eGPU support is limited to Intel Macs. Apple Silicon support depends on third-party TinyGPU/tinygrad-style AI compute drivers.

Advanced

Mac Studio M4 Max + RTX 6000 Ada eGPU AI Compute

Mac: Mac Studio M4 Max 128GB / 2TB (128GB unified)

eGPU: OWC Helios FX 850W

GPU: NVIDIA RTX 6000 Ada (48GB VRAM, Ada Lovelace)

70B is possible, with serious memory tradeoffs

Whether a 70B-class model fits depends heavily on available VRAM/RAM, quantization level, and context length.
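
As a rough rule of thumb, weight memory scales with parameter count times quantization width, and KV-cache memory scales with context length. The sketch below uses illustrative Llama-70B-like figures (80 layers, 8 KV heads, head dim 128), not measurements from this hardware:

  # Rough memory estimate for local LLM inference (illustrative only;
  # real usage varies by runtime, overhead, and quantization format).

  def weight_gb(params_b: float, bits: int) -> float:
      # params_b: parameters in billions; bits: quantization width
      return params_b * 1e9 * bits / 8 / 1e9

  def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                  context: int, bytes_per_elem: int = 2) -> float:
      # 2x for key and value tensors, batch size 1, fp16 cache
      return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

  print(f"70B @ 4-bit weights: ~{weight_gb(70, 4):.0f} GB")
  print(f"KV cache at 8k context: ~{kv_cache_gb(80, 8, 128, 8192):.1f} GB")

At 4-bit, the weights alone (~35 GB) land within the eGPU's 48GB VRAM; longer contexts add KV-cache memory on top.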

Supported:

  • Large-scale local inference (70B+)
  • CUDA/tinygrad research
  • Parallel model serving
  • Advanced AI development

Not supported:

  • Gaming acceleration
  • macOS display acceleration
  • Final Cut acceleration
  • Blender viewport rendering

For AI compute only. Depends on third-party TinyGPU/tinygrad driver support.

Mac: €3599

Enclosure: €449

Maximum combined memory: 128GB unified + 48GB VRAM (176GB total). The Mac Studio handles native MLX/Ollama workloads; the eGPU handles CUDA-specific tasks.
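
A minimal sketch of that division of labor, assuming tinygrad's standard device selection and that the experimental driver exposes the eGPU as a CUDA device (not guaranteed):

  # Native path runs on Apple Silicon via MLX; eGPU path via tinygrad.
  import mlx.core as mx
  from tinygrad import Tensor, Device

  # MLX uses the Mac's own GPU and unified memory.
  a = mx.random.normal((1024, 1024))
  native = (a @ a).sum()
  mx.eval(native)                 # MLX is lazy; force evaluation

  # Pin tinygrad to the external card's backend.
  Device.DEFAULT = "CUDA"         # assumption: eGPU visible as CUDA
  x = Tensor.rand(1024, 1024)
  egpu = (x @ x).sum()
  print(native.item(), egpu.item())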

Advanced

Mac Studio M2 Max Native MLX Workstation (Optional eGPU Path)

Mac: Mac Studio M2 Max 64GB / 1TB (64GB unified)

eGPU: Open-frame PCIe Riser (ATX PSU)

GPU: NVIDIA RTX 4090 (24GB VRAM, Ada Lovelace)

Better for 30B-class models

Stronger fit for larger quantized models; actual fit depends on runtime and settings.

Supported:

  • Native MLX/Ollama local inference
  • macOS AI development
  • LLM experimentation

Not supported:

  • CUDA-based training
  • Multi-GPU workloads
  • High-VRAM model serving

Primary workloads run natively on Apple Silicon via MLX/Ollama. External GPU compute is an optional experimental path; the GPU itself is not included in the listed prices.

Mac: €2599

Enclosure: €49

Mac Studio M2 Max with 64GB unified memory for native macOS AI, plus an optional RTX 4090/eGPU path for CUDA/tinygrad experimentation.
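
A minimal native-inference sketch with the mlx-lm package (pip install mlx-lm); the model name is an example from the mlx-community hub, and no eGPU is involved:

  from mlx_lm import load, generate

  # Downloads and runs a 4-bit MLX-converted model in unified memory.
  model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
  text = generate(model, tokenizer,
                  prompt="Explain KV caching in one paragraph.",
                  max_tokens=200)
  print(text)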

Experimental

Mac mini M4 + RTX 6000 Ada eGPU AI Compute

Mac: Mac mini M4 24GB / 512GB (24GB unified)

eGPU: Sonnet Breakaway Box 750ex

GPU: NVIDIA RTX 6000 Ada (48GB VRAM, Ada Lovelace)

Best for 7B/8B models

Good starting point for chat and coding assistants; larger models need more memory.
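
A minimal sketch of that starting point using Ollama's Python client (pip install ollama); it assumes the Ollama server is running and an 8B model has already been pulled, e.g. with ollama pull llama3.1:8b:

  import ollama

  # Chat with a local 8B model served by Ollama on the Mac itself.
  reply = ollama.chat(
      model="llama3.1:8b",
      messages=[{"role": "user",
                 "content": "Write a Python one-liner to reverse a list."}],
  )
  print(reply["message"]["content"])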

Supported:

  • Local LLM inference
  • tinygrad experiments
  • CUDA-based AI workloads
  • High-VRAM AI testing

Not supported:

  • Gaming acceleration
  • macOS display acceleration
  • Final Cut acceleration
  • Blender viewport rendering

For AI compute only. Depends on third-party TinyGPU/tinygrad driver support. External GPUs on Apple Silicon Macs do not accelerate macOS graphics, gaming, or displays.

Mac: €1199

Enclosure: €349

Mac mini M4 + external RTX 6000 Ada (48GB VRAM) via TinyGPU/tinygrad for CUDA AI compute. Uses a 2-slot workstation GPU that fits the listed enclosure.

Experimental

Mac mini M4 Pro + RTX 6000 Ada eGPU AI Compute

Mac: Mac mini M4 Pro 48GB / 1TB (48GB unified)

eGPU: Sonnet Breakaway Box 750ex

GPU: NVIDIA RTX 6000 Ada (48GB VRAM, Ada Lovelace)

Good for 13B-class models

Strong everyday local LLM tier; 30B-class models may need more memory or heavier quantization.

Supported:

  • Local LLM inference (up to 70B q4)
  • tinygrad/CUDA experiments
  • High-VRAM model testing

Not supported:

  • Gaming acceleration
  • macOS display acceleration
  • Final Cut acceleration
  • Blender viewport rendering

For AI compute only. Depends on third-party TinyGPU/tinygrad driver support.

Mac: €1999

Enclosure: €349

Mac mini M4 Pro with 48GB unified memory + external RTX 6000 Ada via Thunderbolt 5. The combined memory pools enable workloads that do not fit in either pool alone.
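
A toy illustration of the combined-pool idea: assign transformer layers to whichever pool still has room. The per-layer figure is an assumption for a 70B q4 model, not a measurement; real runtimes (for example llama.cpp's --n-gpu-layers option) handle this split themselves:

  UNIFIED_GB, VRAM_GB = 48, 48            # this configuration
  layers, per_layer_gb = 80, 0.9          # illustrative 70B q4 figures

  used = {"vram": 0.0, "unified": 0.0}
  plan = []
  for i in range(layers):
      # Fill VRAM first, then spill remaining layers to unified memory.
      pool = "vram" if used["vram"] + per_layer_gb <= VRAM_GB else "unified"
      used[pool] += per_layer_gb
      plan.append((i, pool))

  assert used["unified"] <= UNIFIED_GB    # everything must fit somewhere
  print(f"eGPU layers: {sum(p == 'vram' for _, p in plan)}, "
        f"Mac layers: {sum(p == 'unified' for _, p in plan)}")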

Experimental

Mac mini M4 + Radeon PRO W7900 eGPU (ROCm)

Mac: Mac mini M4 24GB / 512GB (24GB unified)

eGPU: Sonnet Breakaway Box 750ex

GPU: AMD Radeon PRO W7900 (48GB VRAM, RDNA 3)

Best for 7B/8B models

Good starting point for chat and coding assistants; larger models need more memory.

Supported:

  • Local LLM inference via ROCm
  • tinygrad experiments
  • AMD RDNA3+ AI workloads

Not supported:

  • Gaming acceleration
  • macOS display acceleration
  • CUDA workloads
  • Final Cut acceleration

For AI compute only via ROCm/tinygrad. AMD eGPU support is less mature than the NVIDIA/CUDA path.

Mac: €1199

Enclosure: €349

AMD workstation path using the Radeon PRO W7900 (48GB VRAM). Uses the dual-slot variant of the W7900 so it fits the listed enclosure; ROCm on an external GPU is experimental.
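
A minimal tinygrad sketch for the AMD path; the backend name follows tinygrad's convention, and whether the external W7900 actually enumerates depends on the experimental driver stack:

  from tinygrad import Tensor, Device

  Device.DEFAULT = "AMD"          # assumption: eGPU exposed to tinygrad
  x = Tensor.rand(512, 512)
  y = (x @ x).relu().mean()       # small op to confirm the backend works
  print(y.item())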