When an AI agent is helping you debug a feature, the most valuable question is usually "what does the data actually look like?" — and the answer lives in your database. Without access to it, the agent has to guess at schema, hallucinate column names, or keep asking you to paste in query results.
Give your AI agent safe access to your database and the debugging experience transforms. Instead of guessing, it runs a discovery query, checks a row count, inspects a specific record, and forms a real hypothesis — the same way a senior developer would.
An AI agent investigating a bug without DB access is like someone debugging over the phone — they can describe what they think is happening, but they can't look. With direct DB access, the agent can look for itself. Compare:
User: The orders page is blank for customer ALFKI. Can you check what's going on?
Agent: I can't see your database, but possibilities are:
- CustomerId is wrong

Can you run `SELECT * FROM Orders WHERE CustomerId = 'ALFKI'` and paste the result?
❌ Figure: Bad example - The agent can't investigate on its own; you ferry data back and forth
User: The orders page is blank for customer ALFKI. Can you check what's going on?
Agent: (queries Sales.Orders directly)
Found it — 6 orders exist for ALFKI but all have IsDeleted = 1. The page filter correctly excludes soft-deleted rows, but the archive view should include them. Investigating OrdersController now.
✅ Figure: Good example - The agent queries the DB directly, diagnoses, and moves to the fix
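The investigation above boils down to a couple of queries. A sketch only — the table and column names are assumed from the dialogue, not from a real schema:

```sql
-- Hypothetical queries behind the diagnosis above
SELECT COUNT(*) AS Total
FROM Sales.Orders
WHERE CustomerId = 'ALFKI';            -- do any rows exist at all?

SELECT TOP 5 OrderId, OrderDate, IsDeleted
FROM Sales.Orders
WHERE CustomerId = 'ALFKI';            -- inspect the soft-delete flag
```

A count plus a sample of the flag column is enough to separate "no data" from "data filtered out" — which is exactly the distinction the agent made.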
Obviously, don't give the AI access to a production database.
A Model Context Protocol server exposes a curated set of DB tools (query, list-tables, describe-table) that the agent can call directly. Good for standard databases where you want minimal setup.
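In Claude Code, for instance, MCP servers are registered in a `.mcp.json` file at the project root. The entry below is a sketch only — the server name, package name, and connection string are placeholders, not a real published package:

```json
{
  "mcpServers": {
    "local-db": {
      "command": "npx",
      "args": [
        "-y",
        "@example/mcp-server-mssql",
        "--connection-string",
        "Server=localhost,1433;Database=Sales;User Id=sa;Password=<dev-password>;TrustServerCertificate=True"
      ]
    }
  }
}
```

Once registered, the agent sees the server's query and schema-inspection tools alongside its built-in ones.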
For examples, see Do you give your AI agents context with MCP servers and skills?.
The AI development tool will probably attempt this anyway. A skill file guides it with project-specific context and keeps it from getting caught on the gotchas.
For a real project with multiple databases, Aspire, containerized SQL, or non-standard conventions, a skill file usually beats a generic MCP server. A skill teaches the agent how to connect, which database owns which module, and which queries to start with — context a vanilla MCP server doesn't know.
Example skill for a multi-database Northwind project (.claude/skills/northwind-db-query/SKILL.md):
```markdown
---
name: northwind-db-query
description: Connect to the local Northwind SQL Server, identify the correct database and schema for a module, and run ad hoc SQL queries safely.
---

## Northwind Local Database Layout

- SQL Server runs in Docker on localhost:1433
- Main databases: Sales, Customers, Inventory, Identity
- Orders.* → Sales
- Invoicing.* → Sales
- Customers.* → Customers
- Products.* → Inventory

## Safety Rules

- Default to read-only queries
- Do not modify data unless the user explicitly asks
- Never run destructive SQL on shared or production environments

## Discovery queries

List databases → list tables → inspect columns → row counts

## Known Pitfalls

- Host sqlcmd breaks on `!` in passwords — pipe queries instead of -Q
- SmartEnum columns store GUIDs, not ints — check *Enum.cs files
- Schemas match module names (Orders.Order, not dbo.Orders)
```
✅ Figure: Good example - A skill teaches the agent your DB layout, connection details, safety rules, and starter queries
A good DB skill should cover:
- The database layout — which database and schema owns which module
- How to connect — e.g. `docker exec` or host `sqlcmd`
- Safety rules — read-only by default
- Starter discovery queries
- Known pitfalls

See Do you use skills to standardize your AI workflows? for more on writing skills.
Skills can also live globally in ~/.claude/skills/ if you want them available across projects. Nothing stops you from combining both — an MCP server for the raw connection plus a skill that layers project-specific knowledge on top.
Tip: Once the agent has DB access, ask it to start every investigation with a discovery query — row counts and a sample row — before forming any hypothesis. It prevents confident-sounding answers based on a hallucinated schema.
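That discovery opener might look like this — a sketch assuming the Northwind layout from the skill above (database Sales, schema Orders, table [Order]):

```sql
-- Hypothetical discovery ladder: structure first, hypotheses second
SELECT name FROM sys.databases;                           -- 1. which databases exist

SELECT s.name AS SchemaName, t.name AS TableName
FROM Sales.sys.tables AS t
JOIN Sales.sys.schemas AS s ON t.schema_id = s.schema_id; -- 2. tables in the Sales database

SELECT COUNT(*) AS OrderCount FROM Sales.Orders.[Order];  -- 3. row count
SELECT TOP 1 * FROM Sales.Orders.[Order];                 -- 4. one sample row, to confirm real column names
```

Only after step 4 does the agent know the actual columns — which is what stops it reasoning about a schema it invented.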