Integration Patterns
This guide covers common integration patterns for adding TypeGraph to existing applications, from simple setups to production deployment strategies.
Direct Drizzle Integration (Shared Database)
If you’re already using Drizzle ORM, TypeGraph can share your existing database connection. TypeGraph tables coexist alongside your application tables.
```ts
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
import {
  createPostgresBackend,
  generatePostgresMigrationSQL,
} from "@nicia-ai/typegraph/postgres";
import { createStore } from "@nicia-ai/typegraph";

// Your existing Drizzle setup
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = drizzle(pool);

// Add TypeGraph tables to your existing database
await pool.query(generatePostgresMigrationSQL());

// Create TypeGraph backend using the same connection
const backend = createPostgresBackend(db);
const store = createStore(graph, backend);

// For pure TypeGraph operations, use store.transaction()
await store.transaction(async (tx) => {
  const person = await tx.nodes.Person.create({ name: "Alice" });
  const company = await tx.nodes.Company.create({ name: "Acme" });
  await tx.edges.worksAt.create(person, company, { role: "Engineer" });
});
```

Mixed Drizzle + TypeGraph Transactions
When combining TypeGraph operations with direct Drizzle queries in the same atomic transaction, create a temporary backend from the Drizzle transaction:
```ts
await db.transaction(async (tx) => {
  // Direct Drizzle operations
  await tx.insert(auditLog).values({ action: "user_created" });

  // TypeGraph operations in the same transaction
  const txBackend = createPostgresBackend(tx);
  const txStore = createStore(graph, txBackend);
  await txStore.nodes.Person.create({ name: "Alice" });
});
```

This pattern is only needed when you must combine both in one atomic transaction.
When to use:
- You want a single database to manage
- Your graph data relates to existing tables
- You need cross-cutting transactions
Considerations:
- TypeGraph tables use the `typegraph_` prefix to avoid collisions
- Run TypeGraph migrations alongside your application migrations
- Connection pool is shared, so size accordingly
Drizzle-Kit Managed Migrations (Recommended)
If you use drizzle-kit to manage migrations, you can import TypeGraph’s table definitions directly into your schema file. This lets drizzle-kit generate migrations for all tables, both yours and TypeGraph’s, in one place.
1. Import TypeGraph tables into your schema:
```ts
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";

// Import TypeGraph tables (these are standard Drizzle table definitions)
export * from "@nicia-ai/typegraph/sqlite";
// Or for PostgreSQL:
// export * from "@nicia-ai/typegraph/postgres";

// Your application tables
export const users = sqliteTable("users", {
  id: text("id").primaryKey(),
  name: text("name").notNull(),
  email: text("email").notNull(),
});
```

2. Generate migrations normally:
```sh
npx drizzle-kit generate
```

Drizzle-kit will now see all tables, TypeGraph’s and yours, and generate migrations for them.
3. Apply migrations:
```sh
npx drizzle-kit migrate
# Or for Cloudflare D1:
wrangler d1 migrations apply your-database
```

4. Create the backend:
```ts
import { drizzle } from "drizzle-orm/better-sqlite3";
import Database from "better-sqlite3";
import { createSqliteBackend, tables } from "@nicia-ai/typegraph/sqlite";
import { createStore } from "@nicia-ai/typegraph";

const sqlite = new Database("app.db");
const db = drizzle(sqlite);

// Use the same tables that drizzle-kit manages
const backend = createSqliteBackend(db, { tables });
const store = createStore(graph, backend);
```

Custom Table Names
To avoid conflicts or match your naming conventions, use the factory function:
```ts
import { createSqliteTables } from "@nicia-ai/typegraph/sqlite";

// Create tables with custom names
export const typegraphTables = createSqliteTables({
  nodes: "myapp_graph_nodes",
  edges: "myapp_graph_edges",
  uniques: "myapp_graph_uniques",
  schemaVersions: "myapp_graph_schema_versions",
  embeddings: "myapp_graph_embeddings",
});

// Export individual tables for drizzle-kit
export const {
  nodes: myappGraphNodes,
  edges: myappGraphEdges,
  uniques: myappGraphUniques,
  schemaVersions: myappGraphSchemaVersions,
  embeddings: myappGraphEmbeddings,
} = typegraphTables;
```

Then pass the same tables to the backend:
```ts
import { createSqliteBackend } from "@nicia-ai/typegraph/sqlite";
import { typegraphTables } from "./schema";

const backend = createSqliteBackend(db, { tables: typegraphTables });
```

Adding TypeGraph Indexes
The table factory functions also accept indexes, which drizzle-kit will include in migrations:
```ts
import { createSqliteTables } from "@nicia-ai/typegraph/sqlite";
import { defineNodeIndex } from "@nicia-ai/typegraph/indexes";
import { Person } from "./graph";

const personEmail = defineNodeIndex(Person, { fields: ["email"] });

export const typegraphTables = createSqliteTables({}, { indexes: [personEmail] });
```

For PostgreSQL, use `createPostgresTables` from `@nicia-ai/typegraph/postgres`.
See Indexes for covering fields, partial indexes, and profiler integration.
If you only need PostgreSQL adapter exports, import from `@nicia-ai/typegraph/postgres`:

```ts
import { createPostgresBackend, tables } from "@nicia-ai/typegraph/postgres";
```

PostgreSQL with pgvector
For PostgreSQL with vector search, ensure the pgvector extension is enabled before running migrations:

```sql
CREATE EXTENSION IF NOT EXISTS vector;
```

Then in your schema:

```ts
export * from "@nicia-ai/typegraph/postgres";

export const users = pgTable("users", { ... });
```

When to use:
- You already use drizzle-kit for migrations
- You want a single migration workflow for all tables
- You need Cloudflare D1 or other platforms that require drizzle-kit migrations
Advantages over raw SQL migrations:
- Single source of truth for schema
- Type-safe schema in TypeScript
- Drizzle-kit handles migration diffs automatically
- Works with all drizzle-kit supported platforms
Separate Database
Use a dedicated database when you want isolation between your application data and graph data.
```ts
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import {
  createPostgresBackend,
  generatePostgresMigrationSQL,
} from "@nicia-ai/typegraph/postgres";
import { createStore } from "@nicia-ai/typegraph";

// Application database (your existing setup)
const appPool = new Pool({ connectionString: process.env.APP_DATABASE_URL });
const appDb = drizzle(appPool);

// Dedicated TypeGraph database
const graphPool = new Pool({ connectionString: process.env.GRAPH_DATABASE_URL });
const graphDb = drizzle(graphPool);

await graphPool.query(generatePostgresMigrationSQL());
const backend = createPostgresBackend(graphDb);
const store = createStore(graph, backend);
```

When to use:
- Your primary database doesn’t support required features (e.g., pgvector)
- You want independent scaling for graph operations
- Compliance requires data separation
- You’re adding graph capabilities to a legacy system
Considerations:
- No cross-database transactions (use eventual consistency patterns)
- Sync data between databases via application logic or events
- Separate backup/restore procedures
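Without cross-database transactions, consistency between the two databases has to be maintained by application logic. One common shape is a small event queue that replays app-database changes into the graph database, retrying on failure. A minimal sketch, assuming hypothetical names throughout: the `GraphWriter` interface, the event kinds, and the payload shapes are illustrative and not part of TypeGraph’s API.

```ts
// Eventual-consistency sync: app-DB writes enqueue events, a subscriber
// replays them into the graph DB. Failed events stay queued for retry.
type SyncEvent =
  | { kind: "user.created"; id: string; name: string }
  | { kind: "user.deleted"; id: string };

interface GraphWriter {
  upsertUser(id: string, name: string): Promise<void>;
  deleteUser(id: string): Promise<void>;
}

class GraphSyncSubscriber {
  private queue: SyncEvent[] = [];

  constructor(private writer: GraphWriter) {}

  // Called from application code (or an outbox poller) after each app-DB commit
  enqueue(event: SyncEvent): void {
    this.queue.push(event);
  }

  // Drain pending events in order; stop at the first failure so the
  // remaining events are retried on the next flush
  async flush(): Promise<number> {
    let applied = 0;
    while (this.queue.length > 0) {
      const event = this.queue[0];
      try {
        if (event.kind === "user.created") {
          await this.writer.upsertUser(event.id, event.name);
        } else {
          await this.writer.deleteUser(event.id);
        }
        this.queue.shift();
        applied++;
      } catch {
        break;
      }
    }
    return applied;
  }
}
```

In production the queue would live in a durable job system rather than in memory; the point is that graph writes are ordered, idempotent replays of app-database changes.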
In-Memory (Ephemeral Graphs)
Use in-memory SQLite for temporary graphs, caching, or computation.
```ts
import { createLocalSqliteBackend } from "@nicia-ai/typegraph/sqlite/local";
import { createStore } from "@nicia-ai/typegraph";

function createEphemeralStore(graph: GraphDef) {
  const { backend } = createLocalSqliteBackend();
  return createStore(graph, backend);
}

// Use case: Build a temporary graph for computation
async function computeRecommendations(userId: string): Promise<Recommendation[]> {
  const tempStore = createEphemeralStore(recommendationGraph);

  // Load relevant data into temporary graph
  const userData = await fetchUserData(userId);
  await populateGraph(tempStore, userData);

  // Run graph algorithms
  const results = await tempStore
    .query()
    .from("User", "u")
    .whereNode("u", (u) => u.id.eq(userId))
    .traverse("similar", "s")
    .to("Product", "p")
    .select((ctx) => ctx.p)
    .execute();

  return results;
}
```

When to use:
- Temporary computation graphs
- Request-scoped graph state
- Graph-based caching with expiration
- Isolated test fixtures
Considerations:
- Data lost on process termination
- Memory usage scales with graph size
- No persistence—rebuild on restart
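For the caching use case, expiration can be handled by discarding an ephemeral store once its TTL elapses and rebuilding it on the next access. A minimal sketch, under stated assumptions: `makeStore` is any factory (for instance one wrapping `createEphemeralStore` above), and the class name and injected clock are illustrative, not a TypeGraph feature.

```ts
// TTL cache over ephemeral stores: expired entries are dropped and
// rebuilt from scratch, which is cheap because the data is ephemeral.
class EphemeralStoreCache<S> {
  private entries = new Map<string, { store: S; expiresAt: number }>();

  constructor(
    private makeStore: () => S,
    private ttlMs: number,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  get(key: string): S {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > this.now()) return hit.store;

    // Expired or missing: rebuild and reset the expiry window
    const store = this.makeStore();
    this.entries.set(key, { store, expiresAt: this.now() + this.ttlMs });
    return store;
  }
}
```

A request-scoped variant is the same idea with the key being a request or session id and a short TTL.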
Hybrid Overlay (Graph on Existing Data)
Add graph relationships on top of existing relational data without migrating your data model. Your existing tables remain the source of truth; TypeGraph stores only the relationships and graph-specific metadata.
Use the externalRef() helper to create type-safe references to external tables:
```ts
import {
  createExternalRef,
  defineEdge,
  defineGraph,
  defineNode,
  embedding,
  externalRef,
} from "@nicia-ai/typegraph";
import { z } from "zod";

// Define nodes that reference your existing tables
const User = defineNode("User", {
  schema: z.object({
    // Type-safe reference to your existing users table
    source: externalRef("users"),
    // Denormalized fields for graph queries (optional)
    displayName: z.string().optional(),
  }),
});

const Document = defineNode("Document", {
  schema: z.object({
    source: externalRef("documents"),
    embedding: embedding(1536).optional(),
  }),
});

// Graph-only relationships not in your relational schema
const relatedTo = defineEdge("relatedTo", {
  schema: z.object({
    relationship: z.enum(["cites", "extends", "contradicts"]),
    confidence: z.number().min(0).max(1),
  }),
});

const authored = defineEdge("authored");

const graph = defineGraph({
  id: "document_graph",
  nodes: { User, Document },
  edges: {
    relatedTo: { type: relatedTo, from: [Document], to: [Document] },
    authored: { type: authored, from: [User], to: [Document] },
  },
});
```

The `externalRef()` helper validates that references include both the table name and ID, catching errors at insert time:
```ts
// Valid: includes table and id
await store.nodes.Document.create({
  source: { table: "documents", id: "doc_123" },
});

// Error: wrong table name (caught by TypeScript and runtime validation)
await store.nodes.Document.create({
  source: { table: "users", id: "doc_123" }, // Type error!
});

// Use createExternalRef() for a cleaner API
const docRef = createExternalRef("documents");
await store.nodes.Document.create({
  source: docRef("doc_456"),
});
```

Syncing with external data:
```ts
// Sync helper: Create or update graph node from app data
async function syncDocument(store: Store, appDocument: AppDocument) {
  const existing = await store
    .query()
    .from("Document", "d")
    .whereNode("d", (d) => d.source.get("id").eq(appDocument.id))
    .select((ctx) => ctx.d)
    .first();

  if (existing) {
    await store.nodes.Document.update(existing.id, {
      embedding: await generateEmbedding(appDocument.content),
    });
    return existing;
  }

  return store.nodes.Document.create({
    source: { table: "documents", id: appDocument.id },
    embedding: await generateEmbedding(appDocument.content),
  });
}

// Query combining graph traversal with app data hydration
async function findRelatedDocuments(documentId: string) {
  // Get graph relationships
  const related = await store
    .query()
    .from("Document", "d")
    .whereNode("d", (d) => d.source.get("id").eq(documentId))
    .traverse("relatedTo", "r")
    .to("Document", "related")
    .select((ctx) => ({
      source: ctx.related.source,
      relationship: ctx.r.relationship,
      confidence: ctx.r.confidence,
    }))
    .execute();

  // Hydrate with full data from app database
  const externalIds = related.map((r) => r.source.id);
  const fullDocuments = await appDb
    .select()
    .from(documents)
    .where(inArray(documents.id, externalIds));

  return related.map((r) => ({
    ...r,
    document: fullDocuments.find((d) => d.id === r.source.id),
  }));
}
```

When to use:
- Adding graph capabilities to an existing application
- Semantic search over existing content
- Relationship discovery without schema changes
- Gradual migration from relational to graph thinking
Considerations:
- Maintain sync between app data and graph nodes
- Decide what to denormalize (tradeoff: query speed vs. sync complexity)
- The `table` field in `externalRef` enables referencing multiple external sources
Background Embedding Workers
Decouple embedding generation from request handling using background jobs.
```ts
// job-queue.ts - Define the embedding job
interface EmbeddingJob {
  nodeType: string;
  nodeId: string;
  content: string;
}

// worker.ts - Process embedding jobs
import { createStore } from "@nicia-ai/typegraph";

async function processEmbeddingJob(job: EmbeddingJob) {
  const { nodeType, nodeId, content } = job;

  // Generate embedding (expensive operation)
  const embedding = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: content,
  });

  // Update the node
  const collection = store.nodes[nodeType as keyof typeof store.nodes];
  await collection.update(nodeId, {
    embedding: embedding.data[0].embedding,
  });
}

// api-handler.ts - Enqueue jobs on create/update
async function createDocument(data: DocumentInput) {
  // Create node without embedding (fast)
  const doc = await store.nodes.Document.create({
    title: data.title,
    content: data.content,
    // embedding: undefined - will be populated by worker
  });

  // Enqueue embedding job (non-blocking)
  await jobQueue.add("generate-embedding", {
    nodeType: "Document",
    nodeId: doc.id,
    content: data.content,
  });

  return doc;
}
```

Batch processing for bulk imports:
```ts
async function backfillEmbeddings(batchSize = 100) {
  let processed = 0;

  while (true) {
    // Find nodes missing embeddings
    const nodes = await store
      .query()
      .from("Document", "d")
      .whereNode("d", (d) => d.embedding.isNull())
      .select((ctx) => ({
        id: ctx.d.id,
        content: ctx.d.content,
      }))
      .limit(batchSize)
      .execute();

    if (nodes.length === 0) break;

    // Batch embed
    const embeddings = await openai.embeddings.create({
      model: "text-embedding-ada-002",
      input: nodes.map((n) => n.content),
    });

    // Batch update
    await store.transaction(async (tx) => {
      for (const [i, node] of nodes.entries()) {
        await tx.nodes.Document.update(node.id, {
          embedding: embeddings.data[i].embedding,
        });
      }
    });

    processed += nodes.length;
    console.log(`Processed ${processed} documents`);
  }
}
```

When to use:
- Embedding generation is slow (100-500ms per call)
- You want fast API response times
- Bulk importing existing content
- Retry logic for API failures
Considerations:
- Handle job failures and retries
- Consider rate limits on embedding APIs
- Queries on `embedding` should handle null values during population
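The failure-handling consideration above can be covered with a small retry wrapper around the job handler. This is a generic sketch, not a TypeGraph utility: the attempt count, delays, and injectable `sleep` are illustrative, and `fn` could be a call to a handler like `processEmbeddingJob`.

```ts
// Retry with exponential backoff for flaky async work (e.g. embedding API calls).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200,
  // Injectable sleep so tests can observe delays without waiting
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        // Backoff doubles each attempt: base, 2*base, 4*base, ...
        await sleep(baseDelayMs * 2 ** attempt);
      }
    }
  }
  throw lastError;
}
```

A job queue like BullMQ or Cloudflare Queues provides the same behavior declaratively; the wrapper is useful when you run the worker loop yourself.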
Testing
For test setup patterns, seed data strategies, and profiler-based index coverage checks, see the dedicated Testing guide.
Deployment Patterns
Edge and Serverless
Deploy TypeGraph at the edge using SQLite-compatible runtimes.
Note: Edge environments cannot use `@nicia-ai/typegraph/sqlite/local` because it depends on `better-sqlite3`, a native Node.js addon. Instead, use `@nicia-ai/typegraph/sqlite`, which is driver-agnostic.
Cloudflare Workers with D1:
```ts
import { drizzle } from "drizzle-orm/d1";
import { createSqliteBackend } from "@nicia-ai/typegraph/sqlite";
import { createStore } from "@nicia-ai/typegraph";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const db = drizzle(env.DB);
    const backend = createSqliteBackend(db);
    const store = createStore(graph, backend);

    // Handle request with graph queries
    const results = await store
      .query()
      .from("Document", "d")
      .whereNode("d", (d) => d.embedding.similarTo(queryEmbedding, 5))
      .select((ctx) => ctx.d)
      .execute();

    return Response.json(results);
  },
};
```

Turso (libSQL):
```ts
import { createClient } from "@libsql/client";
import { drizzle } from "drizzle-orm/libsql";
import { createSqliteBackend } from "@nicia-ai/typegraph/sqlite";
import { createStore } from "@nicia-ai/typegraph";

const client = createClient({
  url: process.env.TURSO_DATABASE_URL!,
  authToken: process.env.TURSO_AUTH_TOKEN,
});

const db = drizzle(client);
const backend = createSqliteBackend(db);
const store = createStore(graph, backend);
```

For Turso and D1, use drizzle-kit managed migrations to set up the schema.
Bun with built-in SQLite:
Bun runs locally, so you can use the Node.js-compatible path with better-sqlite3, or use bun:sqlite with drizzle-kit managed migrations:
```ts
import { Database } from "bun:sqlite";
import { drizzle } from "drizzle-orm/bun-sqlite";
import { createSqliteBackend } from "@nicia-ai/typegraph/sqlite";
import { createStore } from "@nicia-ai/typegraph";

const sqlite = new Database("app.db");
const db = drizzle(sqlite);
const backend = createSqliteBackend(db);
const store = createStore(graph, backend);
```

Use drizzle-kit managed migrations to set up the schema with bun:sqlite.
When to use:
- Low-latency requirements (data close to users)
- Serverless functions with graph queries
- Read-heavy workloads
Considerations:
- SQLite limitations (single-writer, no pgvector)
- Cold start times include DB initialization
- sqlite-vec for vector search (cosine/L2 only)
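One way to reduce cold-start cost is to build the store once per warm instance and reuse it across invocations. A minimal sketch, with illustrative names: `buildStore` stands in for whichever setup (D1, Turso, Bun) is shown above, and whether the environment object is stable across invocations depends on the platform.

```ts
// Memoize store construction so initialization runs once per cold start,
// not once per request, in a warm serverless instance.
function memoizeStore<E, S>(buildStore: (env: E) => S) {
  let cached: S | undefined;
  return (env: E): S => {
    if (cached === undefined) {
      cached = buildStore(env); // runs only on the first call
    }
    return cached;
  };
}
```

In a Cloudflare Worker this would be declared at module scope and called from `fetch`, so repeated requests to the same isolate skip backend setup.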
Read Replica Separation
Route heavy graph queries to read replicas while writes go to primary.
```ts
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { createPostgresBackend } from "@nicia-ai/typegraph/postgres";
import { createStore } from "@nicia-ai/typegraph";

// Primary for writes
const primaryPool = new Pool({
  connectionString: process.env.PRIMARY_DATABASE_URL,
  max: 10,
});
const primaryDb = drizzle(primaryPool);
const primaryBackend = createPostgresBackend(primaryDb);
const primaryStore = createStore(graph, primaryBackend);

// Replica for reads
const replicaPool = new Pool({
  connectionString: process.env.REPLICA_DATABASE_URL,
  max: 50, // Higher pool for read-heavy workloads
});
const replicaDb = drizzle(replicaPool);
const replicaBackend = createPostgresBackend(replicaDb);
const replicaStore = createStore(graph, replicaBackend);

// Route based on operation
export const stores = {
  write: primaryStore,
  read: replicaStore,
};

// Usage
async function searchDocuments(query: string) {
  // Read from replica
  return stores.read
    .query()
    .from("Document", "d")
    .whereNode("d", (d) => d.embedding.similarTo(queryEmbedding, 10))
    .select((ctx) => ctx.d)
    .execute();
}

async function createDocument(data: DocumentInput) {
  // Write to primary
  return stores.write.nodes.Document.create(data);
}
```

When to use:
- Heavy read workloads (semantic search, graph traversals)
- Write/read ratio is heavily skewed toward reads
- Need to scale read capacity independently
Considerations:
- Replication lag means reads may be slightly stale
- Don’t use replica for read-after-write scenarios
- Monitor replication lag in production
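For the read-after-write case, a small router can pin reads to the primary for a short window after each write, falling back to the replica otherwise. This is a generic sketch, not a TypeGraph feature: the window length, the per-key scheme, and the injectable clock are all illustrative.

```ts
// Route reads to the primary shortly after a write to that key,
// so a lagging replica never serves the just-written data.
class ReadAfterWriteRouter<S> {
  private lastWrite = new Map<string, number>();

  constructor(
    private primary: S,
    private replica: S,
    private windowMs = 2000,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  // Record that `key` (e.g. a user or tenant id) was just written
  markWrite(key: string): void {
    this.lastWrite.set(key, this.now());
  }

  // Pick a store for a read touching `key`
  forRead(key: string): S {
    const t = this.lastWrite.get(key);
    return t !== undefined && this.now() - t < this.windowMs
      ? this.primary
      : this.replica;
  }
}
```

Size the window to comfortably exceed your observed replication lag; some platforms can instead return a log position with each write and wait for the replica to catch up.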
Multi-Tenant Architecture
Three approaches for multi-tenant deployments, each with different tradeoffs.
Option 1: Shared tables with tenant isolation (simplest)
```ts
import { defineNode, defineGraph } from "@nicia-ai/typegraph";
import { z } from "zod";

// Include tenantId in your node schemas
const Document = defineNode("Document", {
  schema: z.object({
    tenantId: z.string(),
    title: z.string(),
    content: z.string(),
  }),
});

// Always filter by tenant in queries
function createTenantQuery(store: Store, tenantId: string) {
  return {
    searchDocuments: (query: string) =>
      store
        .query()
        .from("Document", "d")
        .whereNode("d", (d) =>
          d.tenantId.eq(tenantId).and(d.embedding.similarTo(queryEmbedding, 10))
        )
        .select((ctx) => ctx.d)
        .execute(),

    createDocument: (data: Omit<DocumentInput, "tenantId">) =>
      store.nodes.Document.create({ ...data, tenantId }),
  };
}

// Middleware extracts tenant and creates scoped API
function withTenant(req: Request) {
  const tenantId = req.headers.get("x-tenant-id")!;
  return createTenantQuery(store, tenantId);
}
```

Option 2: Schema per tenant (PostgreSQL)
```ts
async function createTenantStore(tenantId: string) {
  // Validate or allowlist tenantId before interpolating it into SQL
  const schemaName = `tenant_${tenantId}`;

  // Create schema if not exists
  await pool.query(`CREATE SCHEMA IF NOT EXISTS ${schemaName}`);

  // Run migrations in tenant schema
  // (SET search_path is per-connection; with a shared pool, run these
  // on a dedicated client so concurrent queries aren't affected)
  await pool.query(`SET search_path TO ${schemaName}`);
  await pool.query(generatePostgresMigrationSQL());
  await pool.query(`SET search_path TO public`);

  // Create Drizzle instance for the tenant schema
  const db = drizzle(pool, { schema: { schemaName } });
  const backend = createPostgresBackend(db);
  return createStore(graph, backend);
}

// Cache tenant stores
const tenantStores = new Map<string, Store>();

async function getTenantStore(tenantId: string): Promise<Store> {
  if (!tenantStores.has(tenantId)) {
    tenantStores.set(tenantId, await createTenantStore(tenantId));
  }
  return tenantStores.get(tenantId)!;
}
```

Option 3: Database per tenant (strongest isolation)
```ts
interface TenantConfig {
  id: string;
  databaseUrl: string;
}

async function createTenantStore(config: TenantConfig) {
  const pool = new Pool({ connectionString: config.databaseUrl });
  await pool.query(generatePostgresMigrationSQL());

  const db = drizzle(pool);
  const backend = createPostgresBackend(db);
  return {
    store: createStore(graph, backend),
    close: () => pool.end(),
  };
}

// Connection manager with LRU eviction
class TenantConnectionManager {
  private stores = new Map<string, { store: Store; close: () => Promise<void> }>();
  private maxConnections = 100;

  async getStore(tenantId: string): Promise<Store> {
    if (!this.stores.has(tenantId)) {
      if (this.stores.size >= this.maxConnections) {
        await this.evictOldest();
      }
      const config = await fetchTenantConfig(tenantId);
      this.stores.set(tenantId, await createTenantStore(config));
    }
    return this.stores.get(tenantId)!.store;
  }

  private async evictOldest() {
    // Maps iterate in insertion order, so the first entry is the oldest
    const [oldestId, oldest] = this.stores.entries().next().value!;
    await oldest.close();
    this.stores.delete(oldestId);
  }
}
```

Comparison:
| Approach | Isolation | Complexity | Scaling | Cost |
|---|---|---|---|---|
| Shared tables | Low (row-level) | Low | Single DB | Lowest |
| Schema per tenant | Medium | Medium | Single DB, separate schemas | Low |
| Database per tenant | High | High | Independent DBs | Highest |
When to use each:
- Shared tables: SaaS with many small tenants, cost-sensitive
- Schema per tenant: Moderate isolation needs, PostgreSQL only
- Database per tenant: Enterprise customers requiring data isolation, compliance requirements
Next Steps
- Quick Start - Basic setup and first graph
- Semantic Search - Vector embeddings and similarity
- Performance - Optimization strategies