Your AI Engineering Workstation
Before building LLM applications, you need a properly configured development environment. This lesson walks you through setting up everything from scratch — Python, API keys, GPU access, and essential libraries.
Step 1: Python Environment with uv
We use uv, the modern Python package manager from Astral that is 10-100x faster than pip:
```bash
# Install uv (macOS/Linux)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create project directory
mkdir ai-engineer-core && cd ai-engineer-core

# Initialize a Python 3.12 project and activate its virtual environment
uv init --python 3.12
uv venv
source .venv/bin/activate
```
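Once the environment is active, you can confirm the interpreter matches the version uv pinned. A quick sketch; the `version_ok` helper is ours, not part of any library:

```python
# Confirm the active interpreter matches the pinned version (3.12 here).
import sys

def version_ok(required=(3, 12)):
    """Return True if the running interpreter is at least `required`."""
    return sys.version_info[:2] >= required

print(sys.version.split()[0], "OK" if version_ok() else "too old")
```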
Step 2: Install Core Libraries
```bash
# LLM APIs
uv add openai anthropic google-generativeai

# AI/ML frameworks
uv add langchain langchain-openai langchain-community langgraph
uv add transformers datasets accelerate peft bitsandbytes
uv add sentence-transformers chromadb faiss-cpu

# Web and UI
uv add gradio streamlit fastapi uvicorn

# Utilities
uv add python-dotenv requests beautifulsoup4 pandas
```
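Before moving on, it is worth confirming the packages actually resolved. A minimal sketch (the `check_packages` helper is ours, not from any library; adjust the list to what you installed):

```python
# Check that the core packages are importable from the active .venv.
import importlib.util

def check_packages(names):
    """Map each package name to whether it is importable."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

core = ["openai", "anthropic", "langchain", "transformers", "chromadb", "gradio"]
for name, ok in check_packages(core).items():
    print(f"{'OK     ' if ok else 'MISSING'} {name}")
```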
Step 3: API Keys Configuration
Create a `.env` file in the project root; never commit it to Git (add `.env` to your `.gitignore`):
```
OPENAI_API_KEY=sk-proj-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
HUGGINGFACE_TOKEN=hf_your-token-here
```
Load them safely in Python:
```python
from dotenv import load_dotenv
import os

load_dotenv()

# Verify keys are loaded
assert os.getenv("OPENAI_API_KEY"), "Missing OpenAI key"
assert os.getenv("ANTHROPIC_API_KEY"), "Missing Anthropic key"
print("All API keys loaded successfully!")
```
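If you want to confirm which key is loaded without ever printing the secret itself, a small masking helper works. This is a hypothetical helper of ours, not part of `python-dotenv`:

```python
import os

def mask_key(value, show=6):
    """Show only the first few characters of a secret, or a placeholder."""
    return value[:show] + "…" if value else "(not set)"

print("OpenAI key:", mask_key(os.getenv("OPENAI_API_KEY")))
```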
Step 4: GPU Access Options
| Option | Approx. Cost | Best For |
|---|---|---|
| Google Colab Free | Free | Quick experiments, small models |
| Google Colab Pro | ~$10/month | Fine-tuning 7B models |
| Kaggle Notebooks | Free (30 h/week GPU quota) | Training with T4 GPUs |
| Lambda Labs | From ~$0.50/hr | Production fine-tuning |
| RunPod | From ~$0.40/hr | Custom Docker + A100 GPUs |

Prices and quotas change often; check each provider's current pricing page before committing.
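Before paying for a cloud GPU, check whether a usable accelerator is already visible locally. A sketch that assumes PyTorch (pulled in by `accelerate` above) and degrades gracefully if it is missing:

```python
def detect_device():
    """Return the best available compute device as a short label."""
    try:
        import torch
    except ImportError:
        return "cpu (torch not installed)"
    if torch.cuda.is_available():
        return f"cuda ({torch.cuda.get_device_name(0)})"
    # MPS backend exists on Apple Silicon builds of recent PyTorch versions
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps (Apple Silicon)"
    return "cpu"

print("Compute device:", detect_device())
```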
Step 5: Verify Everything Works
```python
from openai import OpenAI
from anthropic import Anthropic

# Test OpenAI (reads OPENAI_API_KEY from the environment)
openai_client = OpenAI()
response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one word"}],
    max_tokens=5,
)
print(f"OpenAI: {response.choices[0].message.content}")

# Test Anthropic (reads ANTHROPIC_API_KEY from the environment)
anthropic_client = Anthropic()
response = anthropic_client.messages.create(
    model="claude-haiku-4-5-20251001",
    max_tokens=5,
    messages=[{"role": "user", "content": "Say hello in one word"}],
)
print(f"Anthropic: {response.content[0].text}")

print("Environment setup complete!")
```
Key Takeaway
A clean, reproducible environment is the foundation of every AI project. Use uv for speed, .env files for security, and cloud GPUs for fine-tuning. You are now ready to build.