
How to Send VS Code IDE State to an LLM API: A Step-by-Step Tutorial

TL;DR

This tutorial walks through building a VS Code extension that captures IDE state (active file, open tabs, cursor position, current diagnostics) and sends it as structured JSON context to an LLM API. The implementation is under 200 lines of TypeScript. The result: every LLM query you make includes your real-time project context, sharply reducing hallucinations caused by missing file information.

What We're Building

By the end of this tutorial, you'll have a VS Code extension that: (1) captures your current IDE state in real-time, (2) packages it as structured JSON, (3) sends it as system-level context to any LLM API, and (4) updates automatically on every tab switch or file edit. The complete implementation is under 200 lines of TypeScript.

The 5-Step Implementation

Here's the step-by-step implementation:

Step 01

Scaffold the Extension

Run 'npx --package yo --package generator-code -- yo code' to scaffold a new VS Code extension (or install the tools once with 'npm install -g yo generator-code' and run 'yo code'). Choose TypeScript when prompted. The scaffolder creates the extension entry point ('src/extension.ts' with an activate() function) and a package.json carrying the VS Code engine and activation declarations.

Step 02

Capture IDE State

In the activate() function, subscribe to window.onDidChangeActiveTextEditor and workspace.onDidChangeTextDocument. On each event, call getFileState() (from our earlier tutorial) to capture the current active file, open tabs, and diagnostics.
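The wiring pattern is "subscribe once, re-capture on every event." The sketch below uses a stand-in emitter so it runs outside VS Code; in the real extension you would pass vscode.window.onDidChangeActiveTextEditor or vscode.workspace.onDidChangeTextDocument, which have the same (listener) => Disposable shape. The names wireStateCapture and latestContext are this tutorial's own.

```typescript
// Wiring pattern for step 2, with a stand-in emitter so it runs outside
// VS Code. In the extension, pass vscode.window.onDidChangeActiveTextEditor
// or vscode.workspace.onDidChangeTextDocument instead of emitter.event.
type Disposable = { dispose(): void };
type Event<T> = (listener: (e: T) => void) => Disposable;

let latestContext = '{}';

// Take an initial snapshot, then refresh the cached context on every event.
function wireStateCapture(onEvent: Event<void>, capture: () => string): Disposable {
  latestContext = capture();
  return onEvent(() => { latestContext = capture(); });
}

// Minimal emitter standing in for the VS Code event source.
function makeEmitter(): { event: Event<void>; fire(): void } {
  const listeners = new Set<() => void>();
  return {
    event: l => { listeners.add(l); return { dispose: () => listeners.delete(l) }; },
    fire: () => listeners.forEach(l => l()) // simulates a tab switch or edit
  };
}

const emitter = makeEmitter();
let n = 0;
const sub = wireStateCapture(emitter.event, () => JSON.stringify({ snapshot: ++n }));
emitter.fire(); // latestContext now holds the second snapshot
```

Push the returned Disposable into context.subscriptions inside activate() so VS Code cleans the listener up when the extension deactivates.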

Step 03

Package as Structured JSON

Convert the IDE state into a structured JSON context block with labeled sections: active_file (with content), open_tabs (with paths), and diagnostics (with messages). This structure helps the LLM understand the relationship between files.
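As a concrete sketch, the context block described above could be typed and serialized like this. The field names match the labels in the prose; the file names and contents are illustrative, not prescribed.

```typescript
// Hypothetical type for the structured context block; the labeled sections
// mirror the prose above, but the exact shape is up to you.
interface IdeContext {
  active_file: { path: string; content: string; cursor_line: number } | null;
  open_tabs: string[];
  diagnostics: { file: string; messages: string[] }[];
}

// Illustrative instance (file names and contents are made up).
const example: IdeContext = {
  active_file: {
    path: 'src/auth.ts',
    content: 'export function login() {}',
    cursor_line: 0
  },
  open_tabs: ['src/auth.ts', 'src/db.ts'],
  diagnostics: [
    { file: 'src/auth.ts', messages: ["'login' is declared but never read."] }
  ]
};

// Serialized exactly as it would be sent to the model.
const serialized = JSON.stringify(example, null, 2);
```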

Step 04

Send as System Message

When making an LLM API call, prepend the IDE state JSON as a system message. For OpenAI: include it in the 'system' role message. For Anthropic: include it in the 'system' parameter. The LLM receives your IDE state as ground truth context.
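A sketch of the two request bodies, assuming the current shapes of OpenAI's Chat Completions API and Anthropic's Messages API. The helper names and model strings are placeholders, not part of either API.

```typescript
// Hypothetical helpers that wrap the serialized IDE context into each
// provider's request body. Model strings are placeholders.

// OpenAI Chat Completions: the context goes in a 'system'-role message.
function buildOpenAIBody(ideContext: string, userPrompt: string) {
  return {
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: `Current IDE state:\n${ideContext}` },
      { role: 'user', content: userPrompt }
    ]
  };
}

// Anthropic Messages: the context goes in the top-level 'system' parameter.
function buildAnthropicBody(ideContext: string, userPrompt: string) {
  return {
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    system: `Current IDE state:\n${ideContext}`,
    messages: [{ role: 'user', content: userPrompt }]
  };
}

const openai = buildOpenAIBody('{"activeFile":null}', 'Why does login() fail?');
const anthropic = buildAnthropicBody('{"activeFile":null}', 'Why does login() fail?');
```

POST either body as JSON to the provider's endpoint with your API key; only the placement of the system context differs between the two.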

Step 05

Auto-Refresh on State Changes

The event subscriptions from step 2 ensure the context updates on every tab switch or file edit. Store the latest context in a module-level variable. Every subsequent LLM query automatically uses the fresh context.
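The cache itself is only a few lines. A sketch, where updateCachedContext is what the step-2 event handlers would call and buildSystemPrompt is read at query time (both names are this tutorial's own):

```typescript
// Module-level cache: event handlers write to it, LLM queries read from it.
let cachedContext: string | null = null;

// Called from the step-2 event subscriptions on every tab switch or edit.
function updateCachedContext(ctx: string): void {
  cachedContext = ctx;
}

// Built at query time, so every LLM call sees the freshest IDE state.
function buildSystemPrompt(): string {
  return `Current IDE state:\n${cachedContext ?? '{}'}`;
}

updateCachedContext('{"activeFile":{"path":"src/auth.ts"}}');
const systemPrompt = buildSystemPrompt();
```

Because the prompt is assembled lazily, a query fired seconds after a tab switch already reflects the new active file without any extra plumbing.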

The Code: Core Context Builder

```typescript
// Core Context Builder — under 50 lines
import * as vscode from 'vscode';

async function buildContext(): Promise<string> {
  const active = vscode.window.activeTextEditor;
  const tabs = vscode.window.tabGroups.all
    .flatMap(g => g.tabs)
    .filter(t => t.input instanceof vscode.TabInputText);
  const diagnostics = vscode.languages.getDiagnostics();

  return JSON.stringify({
    activeFile: active ? {
      path: active.document.uri.fsPath,
      content: active.document.getText(),
      cursorLine: active.selection.active.line
    } : null,
    openTabs: tabs.map(t => (t.input as vscode.TabInputText).uri.fsPath),
    errors: diagnostics
      // DiagnosticSeverity.Error === 0; the named constant is clearer
      .filter(([, d]) => d.some(x => x.severity === vscode.DiagnosticSeverity.Error))
      .map(([uri, d]) => ({
        file: uri.fsPath,
        issues: d.map(x => x.message)
      }))
  }, null, 2);
}
```

Beyond DIY: When to Use a Managed Solution

This tutorial gives you the foundation. A production context engine adds: import graph resolution, dependency version injection, security scanning, MCP protocol support, and cross-IDE compatibility. Building all of that yourself is a multi-month project.

🔧 Skip the build. Deploy the engine.

Context Snipe implements everything in this tutorial plus import graph resolution, dependency scanning, and MCP protocol — production-ready, auto-updating, maintained. Start free — no credit card →