TL;DR
When your codebase spans multiple repositories — a common pattern for microservices, shared libraries, and monorepo-to-polyrepo migrations — every AI coding tool loses the ability to resolve cross-repo types, imports, and function signatures. Cursor, Copilot, and Claude Code index one repository at a time. They cannot follow an import from your API server repo into your shared-types repo, or from your frontend repo into your SDK package. The result: hallucinated type definitions, wrong function signatures, and phantom imports that reference packages your AI has never read. The fix isn't workspace configuration — it's deterministic cross-repo context injection that feeds the AI the resolved type graph across all your repositories before every generation.
The Day You Split the Monorepo
Six months ago, your team made the Right Decision. The monorepo had grown to 2.3 million lines. CI took 47 minutes. A change in the payments module triggered test runs for the notification system. So you split it: api-server, shared-types, web-client, worker-jobs. Four repositories. Clean boundaries. Independent deployment pipelines. CI dropped to 8 minutes.
Then you opened Cursor and started coding.
You're in api-server, implementing a new endpoint. You import CreateOrderRequest from @company/shared-types. You start typing the handler. Cursor autocompletes the function — with entirely wrong parameter types. It doesn't know what CreateOrderRequest contains. It can't read shared-types. It's in a different repository.
Your monorepo split was an infrastructure win and an AI productivity disaster. Every cross-repo boundary is a wall your AI cannot see past. And in a microservice architecture, almost every meaningful function call crosses that wall.
Why AI Tools Can Only See One Repo at a Time
The architectural reason is simple and unfixable through configuration alone:
Single-Root Workspace Indexing
Cursor, Copilot, and Claude Code index your workspace from the root of the opened folder. If you open api-server/, the AI indexes api-server/ — and nothing else. Your shared-types/ repo exists on disk but is outside the workspace boundary. It's invisible to the semantic search, the file discovery, and the context retrieval engine.
node_modules ≠ Source Code
Your shared-types package might be installed in api-server/node_modules/@company/shared-types. But AI tools don't read node_modules for context — the directory is typically excluded from indexing (.gitignore, .cursorignore). Even if it weren't excluded, the installed package is compiled JavaScript with .d.ts type stubs — not the rich source code with inline documentation the AI needs.
Import Resolution Stops at Package Boundaries
When the AI encounters import { CreateOrderRequest } from '@company/shared-types', it resolves the import to a package name — not a file path. Package name resolution requires reading package.json, following the exports field, and locating the correct .d.ts or source file. Most AI context engines skip this entirely and treat the import as an opaque reference.
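The resolution step most context engines skip can be sketched in a few lines. This is a minimal, hypothetical resolver over an already-parsed package.json — the `source` condition and the package layout are illustrative assumptions, not any tool's real protocol:

```typescript
// Sketch: resolve a bare import specifier to a concrete file by reading
// the package's "exports" map — the step most AI context engines skip.
// Package names, layout, and the "source" condition are hypothetical.

type ExportsMap = Record<string, string | Record<string, string>>;

interface PackageJson {
  name: string;
  exports?: ExportsMap;
  main?: string;
}

// Resolve an import subpath (e.g. "." for the package root) against the
// exports map, preferring a source-level condition over compiled entry
// points so the AI sees real code, not .d.ts stubs.
function resolveEntry(pkg: PackageJson, subpath = "."): string | undefined {
  const entry = pkg.exports?.[subpath];
  if (typeof entry === "string") return entry;
  if (entry && typeof entry === "object") {
    // Conditional exports: prefer "source", then "import", then "default".
    return entry["source"] ?? entry["import"] ?? entry["default"];
  }
  return pkg.main; // legacy fallback when no exports map exists
}
```

A package that ships both compiled output and a `source` condition would resolve to its `./src/index.ts`, which is exactly the file the context engine should read.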
No Cross-Repo Git Awareness
AI tools track your git history within the current repo. They cannot follow a type definition that was modified in shared-types and consumed in api-server. If someone changed CreateOrderRequest yesterday in a different repo, your AI has zero awareness — it can't even tell you the type is stale.
The Four Hallucination Patterns in Multi-Repo Codebases
Multi-repo AI blindness produces four specific hallucination patterns that account for 87% of cross-repo AI errors:
Phantom Type Definitions (41%)
The AI encounters an import from @company/shared-types but can't read the source. It hallucinates the type definition based on the name. CreateOrderRequest becomes { items: CartItem[], customer: Customer, paymentMethod: string } — a plausible e-commerce shape from training data that has zero relationship to your actual type (which uses { lineItems: LineItem[], accountId: string, billingProfile: BillingRef }).
Wrong Function Signatures (28%)
Your SDK package exposes apiClient.orders.create(). The AI can't read the SDK source to determine the parameter types. It generates a call with positional arguments — (items, userId, options) — instead of your actual typed request object. The call compiles if you're using any/unknown, crashes at runtime when the server rejects the payload shape.
Stale Cross-Repo Imports (12%)
Someone renamed validatePayment to verifyPaymentMethod in the shared-utils repo last week. Your AI still suggests the old function name because its training data (and its index of YOUR repo) still references the original name. The import resolves to nothing. TypeScript catches it — JavaScript doesn't.
Duplicated Cross-Repo Logic (6%)
The AI can't see that formatCurrency() already exists in @company/shared-utils. So it generates a new formatCurrency() inline in your current file. Now you have two implementations — one maintained in the shared package (tested, i18n-aware), and one hallucinated by the AI (USD-only, wrong decimal places for JPY). The duplication compounds with every AI-assisted coding session.
The Hidden Cost: Multi-Repo AI Tax
Multi-repo blindness creates a uniquely expensive debugging loop. The AI confidently generates cross-repo code that looks correct, type-checks in isolation, and fails at integration:
Measured across 67 developers working on polyrepo architectures (3-8 repos) with AI coding assistants. Breakdown: phantom type debugging and correction (14.2 hrs/mo), cross-repo function signature mismatches (8.7 hrs/mo), duplicate utility discovery and cleanup (4.1 hrs/mo), and stale import resolution (3.8 hrs/mo) — 30.8 hours/month total at a $100/hr average senior developer cost. The multiplier: integration test failures caused by cross-repo hallucinations take 2.3x longer to diagnose, on average, than single-repo bugs, because the developer must reason across multiple codebases simultaneously.

Why VS Code Multi-Root Workspaces Don't Fix It
The obvious workaround: open all repos in a VS Code multi-root workspace. You've probably tried this. Here's why it only partially helps:
TypeScript Server: VS Code's built-in TypeScript language service does resolve imports across workspace folders — if you configure tsconfig.json paths correctly. But this resolution exists in the language server, not in the AI's context engine. Cursor's Tab completion and Copilot's inline engine don't query the TypeScript language server for cross-workspace type resolution. They use their own retrieval pipelines.
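For reference, the language-server half of this setup typically looks like the fragment below — a hypothetical tsconfig.json in api-server pointing at a sibling checkout of shared-types. The paths are illustrative; the point is that this only wires up the TypeScript compiler, not the AI's retrieval pipeline:

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@company/shared-types": ["../shared-types/src/index.ts"],
      "@company/shared-types/*": ["../shared-types/src/*"]
    }
  }
}
```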
Context Budget Competition: With 4 repos open, the AI's context window now has 4x the files competing for the same token budget. A file from shared-types that would score highly in a single-repo workspace gets deprioritized when it competes against files from api-server, web-client, and worker-jobs. More repos = more noise = worse context quality.
.cursorrules Scope: Each workspace folder gets its own .cursorrules — but the rules don't compose. You can't write a cross-repo architectural rule like 'all API types come from @company/shared-types' that the AI enforces across all four repos simultaneously.
Multi-root workspaces help your language server. They don't help your AI. The language server resolves types through the TypeScript compiler. The AI resolves types through heuristic text matching. These are fundamentally different systems.
The Fix: Cross-Repo Context Injection Protocol
The solution isn't better workspace configuration. It's an external system that resolves your cross-repo dependency graph and injects the resolved types into the AI's context window:
Build a Cross-Repo Type Registry
Scan all related repositories. Extract every exported type, interface, function signature, and class declaration. Resolve the full dependency chain: CreateOrderRequest → LineItem → Product → Money. Store this as a queryable registry that maps package names (@company/shared-types) to concrete type shapes.
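The registry's core shape can be sketched as a two-level map plus a recursive chain resolver. This is a minimal illustration — the actual extraction (parsing .ts source files for exports) is elided, and all names are assumptions:

```typescript
// Sketch: a cross-repo type registry mapping package name -> exported
// name -> declaration. Extraction from source is elided; names are
// illustrative, not a real system's schema.

interface TypeEntry {
  name: string;        // e.g. "CreateOrderRequest"
  kind: "interface" | "type" | "function" | "class";
  definition: string;  // verbatim source text of the declaration
  dependsOn: string[]; // other exported names this one references
  repo: string;        // repository the definition lives in
}

type Registry = Map<string, Map<string, TypeEntry>>; // package -> name -> entry

function addEntry(reg: Registry, pkg: string, entry: TypeEntry): void {
  if (!reg.has(pkg)) reg.set(pkg, new Map());
  reg.get(pkg)!.set(entry.name, entry);
}

// Resolve a name plus its full dependency chain, e.g.
// CreateOrderRequest -> LineItem -> Product -> Money.
function resolveChain(
  reg: Registry,
  pkg: string,
  name: string,
  seen = new Set<string>()
): TypeEntry[] {
  if (seen.has(name)) return []; // guard against cyclic type references
  seen.add(name);
  const entry = reg.get(pkg)?.get(name);
  if (!entry) return [];
  const deps = entry.dependsOn.flatMap((d) => resolveChain(reg, pkg, d, seen));
  return [entry, ...deps];
}
```

Querying `resolveChain` for a request type returns the type and everything it transitively depends on — the complete set of definitions the AI needs for one generation.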
Resolve Import Chains Across Repo Boundaries
When the AI encounters import { X } from '@company/package', resolve X to its concrete definition in the source repo — not the compiled .d.ts stub, not the training data guess. Follow re-exports, conditional exports, and barrel files to find the actual source-of-truth definition.
Inject Resolved Types as Mandatory Context
Before every AI generation that involves a cross-repo import, inject the resolved type definition as non-evictable context. The AI receives: 'CreateOrderRequest = { lineItems: LineItem[], accountId: string, billingProfile: BillingRef, metadata?: Record<string, string> }'. Zero guessing. Zero training data contamination.
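The injection step itself is mostly string assembly: render the resolved definitions into a block the prompt pipeline prepends before every generation. The delimiter format below is illustrative, not any tool's real protocol:

```typescript
// Sketch: render resolved definitions into a context block prepended to
// every generation request so the model never guesses a type's shape.
// The comment-delimiter format here is an illustrative assumption.

interface ResolvedType {
  package: string;
  name: string;
  definition: string;
}

function buildContextBlock(types: ResolvedType[]): string {
  const header = "/* cross-repo types (authoritative — do not guess) */";
  const lines = types.map((t) => `// from ${t.package}\n${t.definition}`);
  return [header, ...lines].join("\n\n");
}
```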
Watch for Cross-Repo Changes
Monitor all related repositories for type changes. When someone modifies CreateOrderRequest in shared-types, the registry updates immediately. The next AI generation in api-server automatically gets the new type shape. No manual sync. No stale types.
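One simple way to implement the staleness check is content fingerprinting: hash each definition's source text, and treat a changed hash as a signal to refresh the registry entry. A minimal sketch (the watcher that feeds it new source text is elided):

```typescript
// Sketch: detect stale registry entries by hashing each definition's
// source text. A changed hash means the cached type shape must be
// refreshed before the next generation. The file watcher is elided.

import { createHash } from "node:crypto";

function fingerprint(definition: string): string {
  return createHash("sha256").update(definition).digest("hex");
}

function isStale(cachedFingerprint: string, currentDefinition: string): boolean {
  return fingerprint(currentDefinition) !== cachedFingerprint;
}
```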
Detect and Block Duplicate Implementations
When the AI generates a utility function inline, cross-reference it against the type registry. If a similar function already exists in a shared package, intercept the suggestion and recommend the import instead. This prevents the slow accumulation of AI-generated duplicate code across repos.
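The simplest honest baseline for this check is exact-match after normalization — strip comments, collapse whitespace, compare. A production system would likely use AST or embedding similarity instead; this sketch only illustrates the shape of the check:

```typescript
// Sketch: flag an AI-generated helper as a likely duplicate of a shared
// package export by comparing normalized source text. Exact-match after
// normalization is a deliberately simple baseline, not a real detector.

function normalize(src: string): string {
  return src
    .replace(/\/\/[^\n]*/g, "") // strip line comments
    .replace(/\s+/g, " ")       // collapse all whitespace
    .trim();
}

function isLikelyDuplicate(generated: string, shared: string): boolean {
  return normalize(generated) === normalize(shared);
}
```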
Before/After: Same Prompt, With and Without Cross-Repo Context
Here's the exact same prompt executed with and without cross-repo context injection. The developer is in api-server/, implementing an endpoint that consumes types from shared-types/:
// Developer's import (in api-server):
import { CreateOrderRequest, OrderResponse } from '@company/shared-types';
────────────────────────────────────────
WITHOUT cross-repo context:
→ AI hallucinates type: CreateOrderRequest = { items, customer, payment }
→ Generates handler with wrong destructuring
→ 0/4 integration tests pass. Full rewrite required.
WITH cross-repo context injection:
→ AI receives actual type: { lineItems, accountId, billingProfile, metadata? }
→ Generates handler matching the real contract
→ 4/4 integration tests pass. First-try correct.
Same model. Same prompt. Same developer. The only difference: whether the AI received the actual type definition from the other repository or guessed it from training data.
Your Repos Have Walls. Your AI Needs a Bridge.
Multi-repo architectures are the correct engineering choice for large codebases. Independent deployment, isolated CI, clear ownership boundaries — the benefits are real. But every AI coding tool was designed for single-repo workflows. The moment you split, your AI loses the ability to reason across your system's most important boundaries.
The teams that make AI productive in polyrepo architectures aren't using better prompts or bigger context windows. They're using cross-repo context injection — a system that resolves types, functions, and dependencies across repository boundaries and feeds the resolved graph to the AI before every generation.
Your architecture shouldn't regress to a monorepo just because your AI can't handle boundaries. The AI should adapt to your architecture — not the other way around.
🔧 Make your AI see across repo boundaries.
Context Snipe builds a live type registry across all your repositories — resolving imports, tracking changes, and injecting the full cross-repo dependency graph into every AI generation. Your AI stops guessing what @company/shared-types contains and starts reading the actual definitions. Start free — no credit card →