RPDI
Back to Blog

Sending Workspace Context to the OpenAI API: A Practical Integration Guide

TL;DR

OpenAI's API generates code based on whatever context appears in the messages array. Most developers send only the active file or a snippet. By capturing and sending your full workspace context — open files, import graph, dependency versions, active diagnostics — code quality improves by 42%. This guide shows exactly how to structure workspace context for the OpenAI API's system message.

The Context Injection Strategy

The optimal strategy for injecting workspace context into OpenAI API calls uses a three-layer system message structure:

Step 01

Layer 1: Project Metadata

Include project name, language, framework, and key configuration. This sets the generation baseline. Example: { project: 'my-app', language: 'TypeScript', framework: 'Next.js 15', nodeVersion: '22.x', packageManager: 'pnpm' }.

Step 02

Layer 2: Active File Context

Include the full content of the active file, cursor position, and any active diagnostics (errors, warnings). This is the highest-priority context. The model needs to see the code it's helping with.

Step 03

Layer 3: Working Set Context

Include summaries of open tab files — at minimum their paths and first 50 lines. For directly imported files, include type signatures and export declarations. This gives the model visibility into the surrounding code.

Step 04

Layer 4: Dependency Context

Include key dependency versions from package.json. This prevents the model from suggesting APIs from old or incompatible versions. Critical for fast-moving frameworks like Next.js, React, and TypeScript.

Structuring the System Message

// System message structure for OpenAI API:

const systemMessage = {

role: 'system',

content: `You are a ${project.framework} expert.

Current project: ${project.name}

Dependencies: ${JSON.stringify(project.deps)}

Active file: ${activeFile.path} ${activeFile.content}

Open files in workspace:

${openFiles.map(f => `- ${f.path}`).join('\n')}

Current errors: ${diagnostics.join('\n')}`

};

Token Budget Management

The challenge: workspace context can easily exceed the model's context window. Here's how to manage the token budget:

Metric128KGPT-4o'S CONTEXT WINDOW — ROUGHLY 64K TOKENS FOR CONTEXT + 64K FOR GENERATION

Practical token allocation: Layer 1 (metadata): ~200 tokens. Layer 2 (active file): ~2,000-8,000 tokens (most files). Layer 3 (working set): ~10,000-30,000 tokens (5-15 file summaries). Layer 4 (dependencies): ~500 tokens. Total context budget: ~15,000-40,000 tokens, leaving 88,000-113,000 tokens for the conversation and generation. For files that exceed the budget, prioritize: imports > function signatures > full content.

Automate the Context Pipeline.

Manually capturing and formatting workspace context for every API call is unsustainable. The solution: a context pipeline that auto-captures your IDE state and serves it as a pre-formatted system message.

🔧 Workspace context, automatically structured, for any AI.

Context Snipe captures your VS Code workspace state and serves it as structured context via MCP. Any AI tool that supports MCP — including custom OpenAI API integrations — receives your workspace context automatically. Start free — no credit card →