Before touching AI, you need one mental model: every piece of software is just computers talking to other computers over wires.
A server is just a computer that is always on and waits for requests. When you open Instagram, your phone sends a request to Meta's servers. The server finds your photos and sends them back. That's it.
An API (Application Programming Interface) is a defined way for programs to communicate. Instead of a human visiting a website, your code sends a structured request and gets data back.
```python
# Calling the Claude API from Python — this is how you "talk to AI" from code
import anthropic

client = anthropic.Anthropic(api_key="your-key-here")

message = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this email for me"}]
)

print(message.content[0].text)
# → "The email is about a meeting scheduled for Friday at 3pm..."
```
| Term | What it means | Example |
|---|---|---|
| HTTP/HTTPS | Language computers use to communicate over the web | Opening any website |
| API | A contract for how software talks to other software | Claude API, Gmail API |
| JSON | Data format used in APIs — like a structured dictionary | {"name": "Alice", "age": 30} |
| Endpoint | A specific URL where an API receives requests | api.anthropic.com/v1/messages |
| API Key | Your password to use an API service | sk-ant-api03-... |
| Webhook | A URL that receives data when something happens | Slack notifies your app on new message |
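To make JSON concrete: it is just structured text. The sketch below builds the kind of request body the Claude endpoint in the table expects and round-trips it through Python's standard `json` module (the body shape follows the earlier API example; an actual call would also send your API key in an HTTP header).

```python
import json

# A request body like the one sent to the api.anthropic.com/v1/messages
# endpoint. JSON maps directly onto Python dicts and lists.
body = {
    "model": "claude-sonnet-4-5",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}],
}

# Serialize the dict to a JSON string, then parse it back.
payload = json.dumps(body)
parsed = json.loads(payload)
print(parsed["messages"][0]["content"])  # → "Hello"
```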
You don't need to master every language. For AI and automation, a focused stack beats broad shallow knowledge every time.
```python
# Variables & data types
name = "Claude"
temperature = 0.7
messages = []                  # list
config = {"model": "sonnet"}   # dictionary — used EVERYWHERE in APIs

# Functions — reusable blocks of logic
def ask_claude(question):
    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=1024,       # required by the Messages API
        messages=[{"role": "user", "content": question}]
    )
    return response.content[0].text

# Calling your function
answer = ask_claude("What is machine learning?")
print(answer)
```
Most people confuse these terms. Here are clear definitions that won't mislead you.
| Term | What it is | Real-world example |
|---|---|---|
| AI | Any machine that mimics intelligent behavior | Chess computers, spam filters, Siri |
| ML | AI that learns from data instead of hardcoded rules | Netflix recommendations, fraud detection |
| Deep Learning | ML using neural networks (brain-inspired layers) | Image recognition, speech-to-text |
| LLM | Deep learning model trained on text — predicts next words | Claude, GPT-4, Gemini, Llama |
| Generative AI | AI that creates new content (text, images, code) | Claude, DALL-E, Midjourney, Suno |
| Concept | Plain English |
|---|---|
| Token | Chunk of text (~4 chars). "Hello world" = 2 tokens. Models charge and limit by tokens. |
| Context Window | How much text the model can "see" at once. Claude Sonnet = 200k tokens ≈ 150,000 words. |
| Temperature | Randomness (0=deterministic, 1=creative). Use 0 for coding, 0.7–1 for creative writing. |
| Embedding | Converting text into numbers that capture meaning. Used for search, similarity, RAG. |
| Fine-tuning | Training an existing model more on your specific data to specialize it. |
| RAG | Retrieval-Augmented Generation — give the model your documents as context at query time. |
| Inference | Running a trained model to get predictions. What you do when you call the Claude API. |
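The "embedding" row deserves a concrete picture. Below is a toy sketch: the three vectors are made up for illustration (real models produce hundreds or thousands of dimensions), but the cosine-similarity math is exactly what search and RAG systems run.

```python
import math

# Toy 3-dimensional "embeddings" — invented values for illustration only.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
car = [0.0, 0.2, 0.95]

def cosine_similarity(a, b):
    """Similarity of direction: near 1.0 = similar meaning, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(cat, kitten))  # high — similar meaning
print(cosine_similarity(cat, car))     # low — different meaning
```

This is why embeddings power semantic search: you compare meanings by comparing numbers.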
Anthropic's Claude is a frontier AI model family built with a strong focus on safety. There are three model tiers, and choosing the right one saves cost and latency.
| Model | Speed | Intelligence | Cost | Best for |
|---|---|---|---|---|
| claude-haiku-4-5 | ⚡⚡⚡ Fastest | ★★★☆☆ | Lowest | High-volume tasks, classification, simple responses, chatbots |
| claude-sonnet-4-5 | ⚡⚡ Fast | ★★★★☆ | Mid | Most production tasks, coding, analysis, agents — the default choice |
| claude-opus-4-5 | ⚡ Slower | ★★★★★ | Highest | Complex reasoning, research, highest-stakes decisions |
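In code, tier selection often becomes a small routing function. This is a hypothetical helper (the function name and task categories are my own, chosen to mirror the table above), not an Anthropic API.

```python
# Hypothetical model router: map a task type to the cheapest adequate tier.
def pick_model(task: str) -> str:
    simple_tasks = {"classification", "chatbot", "triage"}
    complex_tasks = {"research", "legal-review"}
    if task in simple_tasks:
        return "claude-haiku-4-5"    # fastest, cheapest
    if task in complex_tasks:
        return "claude-opus-4-5"     # deepest reasoning
    return "claude-sonnet-4-5"       # sensible default for most work

print(pick_model("classification"))  # → claude-haiku-4-5
print(pick_model("coding"))          # → claude-sonnet-4-5
```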
The difference between a mediocre AI output and an exceptional one is almost always the prompt. Here's what actually works:
```python
# BAD prompt — vague, no context
"Summarize this"

# GOOD prompt — role, task, format, constraints
system_prompt = """You are an expert business analyst.
Summarize the following email in exactly 3 bullet points.
Focus on: action items, deadlines, and decisions made.
Be concise — each bullet should be under 20 words."""

user_message = "[paste email here]"

response = client.messages.create(
    model="claude-sonnet-4-5",
    system=system_prompt,
    messages=[{"role": "user", "content": user_message}],
    max_tokens=500
)
```
| Principle | What to do |
|---|---|
| Give it a role | "You are a senior Python developer with 10 years of experience…" |
| Be specific | Say exactly what format, length, tone you want |
| Show examples | Give 2–3 input/output pairs before your actual request (few-shot) |
| Chain of thought | Add "Think step by step before answering" for reasoning tasks |
| Constrain the output | "Return only valid JSON. No explanation." prevents bloat |
| Iterate | Treat prompts like code — test, measure, refine |
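The "show examples" (few-shot) principle looks like this in practice: you prepend a couple of invented input/output pairs to the message list so the model imitates the pattern. Only the message structure is shown here; the `client.messages.create` call is the same as in the earlier examples.

```python
# Few-shot sketch: two example pairs, then the real request.
few_shot_messages = [
    {"role": "user", "content": "Classify sentiment: 'I love this product!'"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Classify sentiment: 'Worst purchase ever.'"},
    {"role": "assistant", "content": "negative"},
    # The real request — the model now knows to answer with one word.
    {"role": "user", "content": "Classify sentiment: 'It arrived on time.'"},
]
print(len(few_shot_messages))  # → 5
```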
Claude can call external tools (functions) when it needs to fetch data, run code, or interact with systems. This is the foundation of agents.
```python
# Define a tool Claude can use
tools = [{
    "name": "get_weather",
    "description": "Gets current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"}
        },
        "required": ["city"]
    }
}]

# Claude decides WHEN to use the tool based on the conversation
response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Jaipur?"}]
)

# Claude responds with a tool_use block:
# {name: "get_weather", input: {city: "Jaipur"}}
```
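What happens next is your code's job: run the tool Claude asked for and send the result back. The sketch below simulates that dispatch step locally; `get_weather` is a stub returning a made-up string, and `run_tool` and `TOOL_HANDLERS` are hypothetical helper names (in a real loop you would append the resulting `tool_result` block to the message list and call the API again).

```python
def get_weather(city: str) -> str:
    return f"Sunny, 31°C in {city}"  # stub — a real version calls a weather API

TOOL_HANDLERS = {"get_weather": get_weather}

def run_tool(tool_use: dict) -> dict:
    """Execute the tool Claude asked for and build the tool_result message."""
    handler = TOOL_HANDLERS[tool_use["name"]]
    output = handler(**tool_use["input"])
    return {
        "type": "tool_result",
        "tool_use_id": tool_use["id"],
        "content": output,
    }

# Simulating the block Claude returned in the example above:
block = {"id": "toolu_01", "name": "get_weather", "input": {"city": "Jaipur"}}
print(run_tool(block)["content"])
```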
MCP (Model Context Protocol) is like a USB standard for AI. It lets any AI model connect to any data source or tool through a single, consistent interface.
Before MCP, every AI integration was custom code. You'd write a different connector for Gmail, Slack, your database, GitHub — all differently. MCP standardizes this: write one server, any MCP-compatible AI can use it.
| Type | What it is | Example |
|---|---|---|
| Tools | Functions Claude can call (actions) | Send email, create file, query DB |
| Resources | Data Claude can read (context) | Your calendar, codebase, documents |
| Prompts | Pre-built prompt templates | "Summarize this repo's README" |
```python
# Simple MCP server in Python (FastMCP library)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My Company Tools")

@mcp.tool()
def get_customer_count(date: str) -> str:
    """Get number of new customers on a date"""
    # Connect to your database here — use a parameterized query,
    # never an f-string, to avoid SQL injection
    result = db.query("SELECT COUNT(*) FROM customers WHERE date = ?", (date,))
    return f"New customers on {date}: {result}"

# Now Claude can answer: "How many customers joined last Monday?"
```
An agent is an AI that doesn't just answer questions — it plans, takes actions, observes results, and keeps going until a goal is achieved.
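That plan-act-observe cycle is just a loop. Here it is in miniature; every piece is a toy stand-in (no real model calls), chosen only to make the control flow visible. `agent_loop`, `plan_next_action`, and `execute` are illustrative names, not framework APIs.

```python
# The agent loop in miniature: plan → act → observe → repeat until done.
def agent_loop(goal: str, max_steps: int = 5) -> list:
    observations = []
    for _ in range(max_steps):
        action = plan_next_action(goal, observations)  # stand-in for an LLM call
        if action == "DONE":
            break
        observations.append(execute(action))           # stand-in for a tool call
    return observations

def plan_next_action(goal, observations):
    # Toy "planner": search once, then decide the goal is achieved.
    return "DONE" if observations else f"search: {goal}"

def execute(action):
    return f"result of {action!r}"  # toy "tool"

print(agent_loop("find Python tutorials"))
```

Real agent frameworks wrap this same loop around the tool-use API from the previous section, adding error handling and memory.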
Complex tasks benefit from multiple specialized agents working together — like a team rather than one person doing everything.
| Pattern | How it works | Example |
|---|---|---|
| Orchestrator + Workers | One agent coordinates, specialized agents execute | Manager agent assigns research/writing/review to sub-agents |
| Pipeline | Output of one agent feeds into next | Scraper → Analyzer → Writer → Publisher |
| Parallel | Multiple agents work simultaneously, results merged | 3 agents research different markets simultaneously |
| Debate | Agents critique each other to improve quality | Writer + Critic + Editor agents on a document |
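The Pipeline row is the easiest pattern to see in code. In this sketch each "agent" is a plain function returning a string; in a real system each stage would be its own Claude call with its own prompt. All names here are illustrative.

```python
# Pipeline pattern: each agent's output feeds the next.
def scraper(topic):
    return f"raw notes on {topic}"

def analyzer(notes):
    return f"key points from: {notes}"

def writer(points):
    return f"article based on: {points}"

def pipeline(data, stages):
    for stage in stages:
        data = stage(data)  # hand the result down the chain
    return data

print(pipeline("solar energy", [scraper, analyzer, writer]))
```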
Automation means getting computers to do repetitive work so humans can focus on thinking. Claude is the brain; workflow tools are the skeleton.
| Layer | Tool | What it does |
|---|---|---|
| Trigger | Webhook, schedule, email, form | Something happens that starts the workflow |
| Orchestration | n8n, Make, Zapier, Temporal | Connects apps and manages flow |
| AI Brain | Claude API | Understands, decides, generates content |
| Actions | Gmail, Slack, Notion, GitHub | Where the output goes |
| Storage | PostgreSQL, Airtable, Google Sheets | Stores data and results |
n8n is open-source, self-hostable, and has a visual workflow builder. It connects 400+ apps and has a built-in Claude/OpenAI node. You can start free at n8n.io.
Cloud = someone else's computers you rent by the hour. AWS, Google Cloud, and Azure are the three giants. You only pay for what you use.
| Service Type | AWS Name | GCP Name | Use For |
|---|---|---|---|
| Compute | EC2 | Compute Engine | Run your Python agent/server 24/7 |
| Serverless | Lambda | Cloud Functions | Run code triggered by events, no server management |
| Database | RDS | Cloud SQL | PostgreSQL/MySQL managed — no DBA needed |
| Storage | S3 | Cloud Storage | Store files, documents, model outputs |
| Container | ECS/Fargate | Cloud Run | Run Docker containers — easiest production deploy |
| Secrets | Secrets Manager | Secret Manager | Store API keys securely (never in your code!) |
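The Secrets row deserves emphasis: code should read keys from the environment, never hardcode them. Locally you export the variable in your shell; in the cloud, the secret manager injects it. A minimal sketch:

```python
import os

# Read the API key from the environment — never commit it to code.
api_key = os.environ.get("ANTHROPIC_API_KEY", "")
if not api_key:
    print("ANTHROPIC_API_KEY is not set — the agent cannot call Claude")

# Then construct the client with it:
# client = anthropic.Anthropic(api_key=api_key)
```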
Docker packages your application with all its dependencies into a portable container. Works on your laptop, works in the cloud, works everywhere exactly the same.
```dockerfile
# Dockerfile — how to package your Claude agent
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "agent.py"]
```

```shell
# Deploy to Google Cloud Run (simplest production option)
gcloud run deploy my-claude-agent \
  --source . \
  --region us-central1 \
  --set-env-vars ANTHROPIC_API_KEY="your-key"
```
The roadmap is concrete and project-based: every month ends with something real you've built and can show to anyone.
| Week | Focus | Project to Build |
|---|---|---|
| Week 1 | Python basics: variables, functions, loops, dicts | Calculator script that runs in terminal |
| Week 2 | HTTP, APIs, JSON — make your first API call | Weather app using a free weather API |
| Week 3 | Claude API — prompts, system messages, tool use | Personal Q&A chatbot about a topic you know |
| Week 4 | File handling, reading PDFs, working with text | Document summarizer: input any PDF, get a summary |
| Week | Focus | Project to Build |
|---|---|---|
| Week 5 | Agents: planning, tool loops, error handling | Research agent: give a topic, get a structured report |
| Week 6 | Memory: databases (SQLite), storing state | Persistent chatbot that remembers past conversations |
| Week 7 | MCP: install existing servers, build a simple one | Claude connected to your local filesystem + GitHub |
| Week 8 | n8n: visual workflows, Gmail + Slack integration | Email classifier that auto-labels and drafts replies |
| Week | Focus | Project to Build |
|---|---|---|
| Week 9 | Docker: containerize your agent | Package your Month 2 agent into a Docker container |
| Week 10 | Cloud Run/AWS Lambda: deploy to cloud | Live URL for your agent — shareable with anyone |
| Week 11 | Multi-agent: orchestrator + specialized workers | Content factory: input brief → research + write + review |
| Week 12 | Polish, monitoring, cost optimization | Capstone: Full automation solving a real problem you have |
```shell
npm install -g @anthropic-ai/claude-code
```