This is the abridged developer documentation for TypeGraph
# What is TypeGraph?
> A TypeScript-first embedded knowledge graph library
TypeGraph is a **TypeScript-first, embedded knowledge graph library** that brings property graph semantics and
ontological reasoning to applications using standard relational databases. Rather than introducing a separate graph
database, TypeGraph lives inside your application as a library, storing graph data in your existing SQLite or
PostgreSQL database.
## Core Capabilities
### 1. Type-Driven Schema Definition
Zod schemas are the single source of truth. From one schema definition, TypeGraph derives:
- Runtime validation rules
- TypeScript types (inferred, not duplicated)
- Database storage requirements
- Query builder type constraints
```typescript
const Person = defineNode("Person", {
  schema: z.object({
    fullName: z.string().min(1),
    email: z.string().email().optional(),
    dateOfBirth: z.date().optional(),
  }),
});
```
### 2. Semantic Layer with Ontological Reasoning
Type-level relationships enable sophisticated inference:
| Relationship | Meaning | Use Case |
| -------------- | ----------------------------------------- | ---------------------- |
| `subClassOf` | Instance inheritance (Podcast IS-A Media) | Query expansion |
| `broader` | Hierarchical concept (ML broader than DL) | Topic navigation |
| `equivalentTo` | Same concept, different name | Cross-system mapping |
| `disjointWith` | Cannot be both (Person ≠ Organization) | Constraint validation |
| `implies` | Edge entailment (marriedTo implies knows) | Relationship inference |
| `inverseOf` | Edge pairs (manages/managedBy) | Bidirectional queries |
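To make the table concrete, here is a hedged sketch of declaring these relationships in an ontology. `subClassOf` and `disjointWith` appear elsewhere in these docs; the `implies` and `inverseOf` helper names, and the `Person`/`Media`/`Podcast` node types and `marriedTo`/`knows`/`manages`/`managedBy` edge types, are illustrative assumptions:

```typescript
import { defineGraph, subClassOf, disjointWith, implies, inverseOf } from "@nicia-ai/typegraph";

const graph = defineGraph({
  id: "media_kb",
  nodes: { Person: { type: Person }, Media: { type: Media }, Podcast: { type: Podcast } },
  edges: {
    marriedTo: { type: marriedTo, from: [Person], to: [Person] },
    knows: { type: knows, from: [Person], to: [Person] },
    manages: { type: manages, from: [Person], to: [Person] },
  },
  ontology: [
    subClassOf(Podcast, Media),    // queries over Media also match Podcast
    disjointWith(Person, Media),   // a node cannot be both
    implies(marriedTo, knows),     // marriedTo entails knows (assumed helper name)
    inverseOf(manages, managedBy), // bidirectional pair (assumed helper name)
  ],
});
```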
### 3. Self-Describing Schema (Homoiconic)
The schema and ontology are stored in the database as data, enabling:
- Runtime schema introspection
- Versioned schema history
- Self-describing exports and backups
- Migration tooling
### 4. Type-Safe Query Compilation
Queries compile to an AST before targeting SQL:
- Consistent semantics across SQLite and PostgreSQL
- Type-checked at compile time
- Query results have inferred types
## Design Philosophy
### Embedded, Not External
TypeGraph is a library dependency, not a networked service. It initializes with your application, uses your
database connection, and requires no separate deployment.
### Schema-First, Type-Driven
Define your schemas once with Zod, and TypeGraph handles validation, type inference, and storage.
No duplicate type definitions or manual synchronization.
### Explicit Over Implicit
TypeGraph favors explicit declarations:
- Relationships are declared, not inferred from foreign keys
- Semantic relationships are explicit in the ontology
- Cascade behavior is configured, not assumed
### Portable Abstractions
The query builder generates portable ASTs that can target different SQL dialects.
The same query code works with SQLite and PostgreSQL.
## What TypeGraph Is Not
TypeGraph deliberately excludes:
- **Graph algorithms**: No built-in shortest path, PageRank, or community detection
- **Distributed storage**: Single-database deployment only
These exclusions keep TypeGraph focused and maintainable.
Note: TypeGraph **does support** semantic search via database vector extensions
(pgvector for PostgreSQL, sqlite-vec for SQLite). See [Semantic Search](/semantic-search) for details.
Note: TypeGraph does support **variable-length paths** via `.recursive()` with
configurable depth limits, optional path/depth projection, and explicit cycle
policy. Cycle prevention is the default.
See [Recursive Traversals](/queries/recursive) for details.
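As a rough sketch of what such a traversal might look like (the `.recursive()` option names below are illustrative assumptions, not confirmed API; the Recursive Traversals page documents the real signature):

```typescript
// Hypothetical sketch: find topics reachable via "broader", up to 3 hops.
// maxDepth / includeDepth are assumed option names for illustration only.
const relatedTopics = await store
  .query()
  .from("Topic", "t")
  .whereNode("t", (t) => t.name.eq("Machine Learning"))
  .traverse("broader", "e")
  .recursive({ maxDepth: 3, includeDepth: true })
  .to("Topic", "parent")
  .select((ctx) => ({ topic: ctx.parent.name }))
  .execute();
```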
## Why TypeGraph?
### Compared to Graph Databases (Neo4j, Amazon Neptune)
Graph databases are powerful but come with operational overhead:
| Aspect | Graph Database | TypeGraph |
|--------|---------------|-----------|
| **Deployment** | Separate service to manage, scale, and monitor | Library in your app, uses existing database |
| **Network** | Additional latency for every query | In-process, no network hop |
| **Transactions** | Separate transaction scope from your SQL data | Same ACID transaction as your other data |
| **Learning curve** | New query language (Cypher, Gremlin) | TypeScript you already know |
| **Graph algorithms** | Built-in (PageRank, shortest path) | Not included |
| **Scale** | Optimized for billions of nodes | Best for thousands to millions |
**Choose TypeGraph** when your graph is part of your application domain (knowledge bases, org
charts, content relationships) rather than a standalone analytical system.
### Compared to ORMs (Prisma, Drizzle, TypeORM)
ORMs model relations through foreign keys, which works well for simple associations but lacks graph semantics:
| Aspect | Traditional ORM | TypeGraph |
|--------|----------------|-----------|
| **Relationships** | Foreign keys, eager/lazy loading | First-class edges with properties |
| **Traversals** | Manual joins or N+1 queries | Fluent traversal API, compiled to efficient SQL |
| **Inheritance** | Table-per-class or single-table | Semantic `subClassOf` with query expansion |
| **Constraints** | Foreign key constraints | Disjointness, cardinality, implications |
| **Schema** | Migrations alter tables | Schema versioning, JSON properties |
**Choose TypeGraph** when you need to traverse relationships, model type hierarchies, or enforce
semantic constraints beyond what foreign keys provide.
### Compared to Triple Stores (RDF, SPARQL)
Triple stores and RDF provide rich ontological modeling but have practical challenges:
| Aspect | Triple Store | TypeGraph |
|--------|-------------|-----------|
| **Type safety** | Runtime validation, stringly-typed | Full TypeScript inference |
| **Query language** | SPARQL (powerful but verbose) | TypeScript fluent API |
| **Schema** | OWL/RDFS (complex specification) | Zod schemas (familiar, composable) |
| **Integration** | Separate system, data sync required | Embedded in your app |
| **Inference** | Full reasoning engines available | Precomputed closures, practical subset |
**Choose TypeGraph** when you want ontological concepts (subclass, disjoint, implies) without the
complexity of a full semantic web stack.
### The TypeGraph Sweet Spot
TypeGraph is designed for applications where:
1. **The graph is your domain model** — not a separate analytical system
2. **You already use SQL** — and don't want another database to manage
3. **Type safety matters** — you want compile-time checking, not runtime surprises
4. **Semantic relationships help** — inheritance, implications, constraints add value
5. **Scale is moderate** — thousands to millions of nodes, not billions
## When to Use TypeGraph
TypeGraph is ideal for:
- **Knowledge bases** with typed entities and relationships
- **Organizational structures** with hierarchies and roles
- **Content graphs** with topics, articles, and references
- **Domain models** requiring semantic constraints
- **RAG applications** combining graph traversal with vector search
TypeGraph is not ideal for:
- Large-scale graph analytics requiring distributed processing
- Social networks with billions of edges
- Real-time streaming graph data
- Applications requiring graph algorithms (use Neo4j or a graph library)
# Quick Start
> Set up TypeGraph and build your first knowledge graph
Get TypeGraph running in your project with this minimal example.
## 1. Install
```bash
npm install @nicia-ai/typegraph zod drizzle-orm better-sqlite3
npm install -D @types/better-sqlite3
```
> **Edge environments (Cloudflare Workers, etc.):** Skip `better-sqlite3` and use
> `@nicia-ai/typegraph/sqlite` with your edge-compatible driver (D1, libsql).
> See [Edge and Serverless](/integration#edge-and-serverless).
## 2. Create Your First Graph
```typescript
import { z } from "zod";
import { defineNode, defineEdge, defineGraph, createStore } from "@nicia-ai/typegraph";
import { createLocalSqliteBackend } from "@nicia-ai/typegraph/sqlite/local";

// Create an in-memory SQLite backend
const { backend } = createLocalSqliteBackend();

// Define your schema
const Person = defineNode("Person", {
  schema: z.object({ name: z.string(), role: z.string().optional() }),
});
const Project = defineNode("Project", {
  schema: z.object({ name: z.string(), status: z.enum(["active", "done"]) }),
});
const worksOn = defineEdge("worksOn");

const graph = defineGraph({
  id: "my_app",
  nodes: { Person: { type: Person }, Project: { type: Project } },
  edges: { worksOn: { type: worksOn, from: [Person], to: [Project] } },
});

// Create the store
const store = createStore(graph, backend);

// Use it!
const alice = await store.nodes.Person.create({ name: "Alice", role: "Engineer" });
const project = await store.nodes.Project.create({ name: "Website", status: "active" });
await store.edges.worksOn.create(alice, project, {});

// Query with full type safety
const results = await store
  .query()
  .from("Person", "p")
  .traverse("worksOn", "e")
  .to("Project", "proj")
  .select((ctx) => ({ person: ctx.p.name, project: ctx.proj.name }))
  .execute();

console.log(results); // [{ person: "Alice", project: "Website" }]
```
That's it! You have a working knowledge graph. Read on for the complete setup guide.
---
## Complete Setup Guide
This section covers production setup with SQLite and PostgreSQL in detail.
### Installation
```bash
npm install @nicia-ai/typegraph zod drizzle-orm better-sqlite3
npm install -D @types/better-sqlite3
```
> `better-sqlite3` is optional. For edge environments, use `@nicia-ai/typegraph/sqlite`
> with D1, libsql, or bun:sqlite instead.
### SQLite Setup
TypeGraph provides two ways to set up SQLite:
#### Quick Setup (Recommended for Development)
The simplest way to get started. Handles database creation and schema setup automatically.
> **Note:** `createLocalSqliteBackend` requires `better-sqlite3` and only works in Node.js.
> For edge environments, see [Manual Setup](#manual-setup-full-control) with `/sqlite`.
```typescript
import { createLocalSqliteBackend } from "@nicia-ai/typegraph/sqlite/local";
// In-memory database (data lost on restart)
const { backend } = createLocalSqliteBackend();
// File-based database (persistent)
const { backend: fileBackend, db } = createLocalSqliteBackend({ path: "./my-app.db" });
```
The function returns both the `backend` (for use with `createStore`) and `db`
(the underlying Drizzle instance for direct SQL access if needed).
#### Manual Setup (Full Control)
For production deployments or when you need full control over the database configuration:
```typescript
import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import { createSqliteBackend, generateSqliteMigrationSQL } from "@nicia-ai/typegraph/sqlite";
// Create database connection
const sqlite = new Database("my-app.db");
// Run TypeGraph migrations (creates required tables)
sqlite.exec(generateSqliteMigrationSQL());
// Create Drizzle instance
const db = drizzle(sqlite);
// Create the backend
const backend = createSqliteBackend(db);
```
#### Edge-Compatible Setup (D1, libsql, bun:sqlite)
For Cloudflare Workers, Turso, or other edge environments, use the driver-agnostic backend:
```typescript
import { drizzle } from "drizzle-orm/d1"; // or libsql, bun-sqlite
import { createSqliteBackend } from "@nicia-ai/typegraph/sqlite";
// D1 example
const db = drizzle(env.DB);
const backend = createSqliteBackend(db);
```
Use [drizzle-kit managed migrations](/integration#drizzle-kit-managed-migrations-recommended)
to set up the schema.
#### Drizzle-Kit Managed Migrations
If you already use `drizzle-kit` for migrations, see [Drizzle-Kit Managed Migrations](/integration#drizzle-kit-managed-migrations-recommended)
for how to import TypeGraph's schema into your `schema.ts` file.
## Defining Your Schema
### Step 1: Define Node Types
Nodes represent entities in your graph. Each node type has a name and a Zod schema:
```typescript
import { z } from "zod";
import { defineNode } from "@nicia-ai/typegraph";

const Person = defineNode("Person", {
  schema: z.object({
    name: z.string().min(1),
    email: z.string().email().optional(),
    bio: z.string().optional(),
  }),
});

const Project = defineNode("Project", {
  schema: z.object({
    name: z.string(),
    description: z.string().optional(),
    status: z.enum(["planning", "active", "completed"]),
  }),
});

const Task = defineNode("Task", {
  schema: z.object({
    title: z.string(),
    priority: z.enum(["low", "medium", "high"]),
    completed: z.boolean().default(false),
  }),
});
```
### Step 2: Define Edge Types
Edges represent relationships between nodes:
```typescript
import { defineEdge } from "@nicia-ai/typegraph";

const worksOn = defineEdge("worksOn", {
  schema: z.object({
    role: z.string().optional(),
    since: z.string().optional(),
  }),
});

const hasTask = defineEdge("hasTask", {
  schema: z.object({}),
});

const assignedTo = defineEdge("assignedTo", {
  schema: z.object({
    assignedAt: z.string().optional(),
  }),
});

// Unconstrained edge — connects any node to any node
const related = defineEdge("related");
```
### Step 3: Create the Graph Definition
Combine nodes, edges, and ontology into a graph:
```typescript
import { defineGraph, disjointWith } from "@nicia-ai/typegraph";

const graph = defineGraph({
  id: "project_management",
  nodes: {
    Person: { type: Person },
    Project: { type: Project },
    Task: { type: Task },
  },
  edges: {
    worksOn: { type: worksOn, from: [Person], to: [Project] },
    hasTask: { type: hasTask, from: [Project], to: [Task] },
    assignedTo: { type: assignedTo, from: [Task], to: [Person] },
    related, // any→any
  },
  ontology: [
    // A Person cannot be a Project or Task
    disjointWith(Person, Project),
    disjointWith(Person, Task),
    disjointWith(Project, Task),
  ],
});
```
### Step 4: Create the Store
The store connects your graph definition to the database:
```typescript
import { createStore } from "@nicia-ai/typegraph";
const store = createStore(graph, backend);
```
#### Store Creation: Which Function to Use
| Function | Schema Handling | Use Case |
|----------|-----------------|----------|
| `createLocalSqliteBackend` | Automatic | Quick start, development, tests |
| `createStore` + manual migration | None | When you manage migrations externally |
| `createStoreWithSchema` | Validates & auto-migrates | **Recommended for production** |
For production, use `createStoreWithSchema` to validate and auto-apply safe schema changes:
```typescript
import { createStoreWithSchema } from "@nicia-ai/typegraph";

const [store, result] = await createStoreWithSchema(graph, backend);

if (result.status === "initialized") {
  console.log("Schema initialized at version", result.version);
} else if (result.status === "migrated") {
  console.log(`Migrated from v${result.fromVersion} to v${result.toVersion}`);
}
// Other statuses: "unchanged", "pending", "breaking"
// See Schema Migrations for full details
```
#### Graph ID
Every graph has a unique `id` that scopes its data:
```typescript
const graph = defineGraph({
  id: "my_app", // Scopes all nodes/edges to this graph
  // ...
});
```
**Key behaviors:**
- All nodes and edges are stored with this `graph_id` in the database
- Multiple graphs can share the same database tables (isolated by `graph_id`)
- Changing the ID creates a new, empty graph (existing data is orphaned)
See [Multiple Graphs](/multiple-graphs) for multi-graph deployments.
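For example, two graph definitions with different `id`s can share one backend and never see each other's rows. A minimal sketch using the APIs shown above (the `Person` and `Event` node definitions are assumed):

```typescript
import { defineGraph, createStore } from "@nicia-ai/typegraph";
import { createLocalSqliteBackend } from "@nicia-ai/typegraph/sqlite/local";

const { backend } = createLocalSqliteBackend({ path: "./app.db" });

// Same tables, two isolated graphs
const mainGraph = defineGraph({ id: "main", nodes: { Person: { type: Person } }, edges: {} });
const auditGraph = defineGraph({ id: "audit", nodes: { Event: { type: Event } }, edges: {} });

const mainStore = createStore(mainGraph, backend);
const auditStore = createStore(auditGraph, backend);
// Rows are written with graph_id "main" or "audit"; each store only sees its own.
```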
## Working with Data
### Creating Nodes
```typescript
const alice = await store.nodes.Person.create({
  name: "Alice Smith",
  email: "alice@example.com",
});

const project = await store.nodes.Project.create({
  name: "Website Redesign",
  status: "active",
});

const task = await store.nodes.Task.create({
  title: "Design mockups",
  priority: "high",
});
```
### Creating Edges
Pass node objects directly to create edges:
```typescript
await store.edges.worksOn.create(alice, project, { role: "Lead Designer" });
await store.edges.hasTask.create(project, task, {});
await store.edges.assignedTo.create(task, alice, { assignedAt: new Date().toISOString() });
```
### Retrieving Nodes
```typescript
const person = await store.nodes.Person.getById(alice.id);
console.log(person?.name); // "Alice Smith"
```
### Updating Nodes
```typescript
const updated = await store.nodes.Task.update(task.id, { completed: true });
```
### Deleting Nodes
```typescript
await store.nodes.Task.delete(task.id);
```
## Querying Data
TypeGraph provides a fluent query builder:
```typescript
// Find all active projects
const activeProjects = await store
  .query()
  .from("Project", "p")
  .whereNode("p", (p) => p.status.eq("active"))
  .select((ctx) => ctx.p)
  .execute();

// Find people working on a project
const teamMembers = await store
  .query()
  .from("Project", "p")
  .traverse("worksOn", "e", { direction: "in" })
  .to("Person", "person")
  .select((ctx) => ({
    project: ctx.p.name,
    person: ctx.person.name,
  }))
  .execute();

// Multi-hop traversal: find tasks for a person
const myTasks = await store
  .query()
  .from("Person", "person")
  .whereNode("person", (p) => p.name.eq("Alice Smith"))
  .traverse("worksOn", "e1")
  .to("Project", "project")
  .traverse("hasTask", "e2")
  .to("Task", "task")
  .select((ctx) => ({
    project: ctx.project.name,
    task: ctx.task.title,
    priority: ctx.task.priority,
  }))
  .execute();
```
## Transactions
Group operations in transactions for atomicity:
```typescript
await store.transaction(async (tx) => {
  const project = await tx.nodes.Project.create({
    name: "New Feature",
    status: "planning",
  });
  const task1 = await tx.nodes.Task.create({
    title: "Research",
    priority: "high",
  });
  const task2 = await tx.nodes.Task.create({
    title: "Implementation",
    priority: "medium",
  });
  await tx.edges.hasTask.create(project, task1, {});
  await tx.edges.hasTask.create(project, task2, {});
});
```
## Error Handling
TypeGraph provides specific error types:
```typescript
import { ValidationError, NodeNotFoundError, DisjointError, RestrictedDeleteError } from "@nicia-ai/typegraph";

try {
  await store.nodes.Person.create({ name: "" }); // Invalid: empty name
} catch (error) {
  if (error instanceof ValidationError) {
    console.log("Validation failed:", error.message);
  }
}

try {
  await store.nodes.Project.delete(project.id);
} catch (error) {
  if (error instanceof RestrictedDeleteError) {
    console.log("Cannot delete: edges exist");
  }
}
```
## PostgreSQL Setup
TypeGraph also supports PostgreSQL for production deployments with better concurrency and JSON support.
### Installation
```bash
npm install @nicia-ai/typegraph zod drizzle-orm pg
npm install -D @types/pg
```
### Database Setup
```typescript
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { createPostgresBackend, generatePostgresMigrationSQL } from "@nicia-ai/typegraph/postgres";

// Create connection pool
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20, // Connection pool size
});

// Run TypeGraph migrations
await pool.query(generatePostgresMigrationSQL());

// Create Drizzle instance and backend
const db = drizzle(pool);
const backend = createPostgresBackend(db);
```
If you use `drizzle-kit` for migrations, see [Drizzle-Kit Managed Migrations](/integration#drizzle-kit-managed-migrations-recommended).
### PostgreSQL Advantages
- **JSONB**: Native JSON type with efficient indexing
- **Connection pooling**: Better concurrency handling
- **Partial indexes**: More efficient uniqueness constraints
- **Full transactions**: ACID guarantees across operations
### Using with Connection Pools
For production, always use connection pooling:
```typescript
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
});

// Graceful shutdown
process.on("SIGTERM", async () => {
  await pool.end();
});
```
## Next Steps
- [Project Structure](/project-structure) - Organize your graph definitions as your project grows
- [Schemas & Types](/core-concepts) - Deep dive into nodes, edges, and schemas
- [Ontology](/ontology) - Learn about semantic relationships
- [Query Builder](/queries/overview) - Query patterns and traversals
- [Schemas & Stores](/schemas-stores) - Complete API documentation
# Schemas & Types
> Defining nodes, edges, and leveraging TypeScript inference
TypeGraph's power comes from its type system. Define your schema once with Zod, and get:
- **Runtime validation** on every create and update
- **TypeScript types** inferred automatically (no duplication)
- **Query builder constraints** that prevent invalid queries at compile time
## Contents
- [Nodes](#nodes) — Entities with properties and metadata
- [Defining Node Types](#defining-node-types)
- [Schema Features](#schema-features)
- [Node Operations](#node-operations)
- [Edges](#edges) — Relationships between nodes
- [Defining Edge Types](#defining-edge-types) (domain/range constraints)
- [Edge Constraints](#edge-constraints) (cardinality)
- [Edge Operations](#edge-operations)
- [Graph Definition](#graph-definition) — Combining nodes, edges, and ontology
- [Delete Behaviors](#delete-behaviors) — Restrict, cascade, disconnect
- [Uniqueness Constraints](#uniqueness-constraints) — Enforcing unique values
- [Type Inference](#type-inference) — Extracting TypeScript types from schemas
## Nodes
Nodes represent entities in your graph. Each node has:
- **Type**: The type of node (e.g., "Person", "Company")
- **ID**: A unique identifier within the graph
- **Props**: Properties defined by a Zod schema
- **Metadata**: Version, timestamps, and soft-delete state
### Defining Node Types
```typescript
import { z } from "zod";
import { defineNode } from "@nicia-ai/typegraph";

const Person = defineNode("Person", {
  schema: z.object({
    fullName: z.string().min(1),
    email: z.string().email().optional(),
    dateOfBirth: z.string().optional(),
    tags: z.array(z.string()).default([]),
  }),
  description: "A person in the system", // Optional
});
```
### Schema Features
TypeGraph supports all Zod validation features:
```typescript
const Product = defineNode("Product", {
  schema: z.object({
    // Required string
    name: z.string().min(1).max(200),
    // Optional with default
    status: z.enum(["draft", "active", "archived"]).default("draft"),
    // Number with constraints
    price: z.number().positive(),
    // Array with items validation
    categories: z.array(z.string()).min(1),
    // Regex pattern
    sku: z.string().regex(/^[A-Z]{2,4}-\d{4,8}$/),
    // Nullable field
    description: z.string().nullable(),
    // Transform on validation
    slug: z.string().transform((s) => s.toLowerCase().replace(/\s+/g, "-")),
  }),
});
```
### Node Operations
```typescript
// Create with auto-generated ID
const node = await store.nodes.Person.create({ fullName: "Alice Smith" });

// Create with specific ID
const aliceNode = await store.nodes.Person.create({ fullName: "Alice Smith" }, { id: "person-alice" });

// Retrieve
const person = await store.nodes.Person.getById("person-alice");

// Update (partial)
const updated = await store.nodes.Person.update("person-alice", {
  email: "alice@example.com",
});

// Delete (soft delete by default)
await store.nodes.Person.delete("person-alice");

// Hard delete (permanent removal) - use carefully!
await store.nodes.Person.hardDelete("person-alice");
```
### Node Object Shape
A node returned from the store has this structure:
```typescript
const alice = await store.nodes.Person.create({ fullName: "Alice", email: "a@example.com" });

// alice = {
//   id: "01HX...",            // Generated ULID (or your custom ID)
//   kind: "Person",           // The node type name
//   fullName: "Alice",        // Schema property (flattened to top level)
//   email: "a@example.com",   // Schema property
//   meta: {
//     version: 1,
//     createdAt: "2024-01-15T10:30:00.000Z",
//     updatedAt: "2024-01-15T10:30:00.000Z",
//     deletedAt: undefined,
//     validFrom: undefined,
//     validTo: undefined,
//   }
// }
```
Schema properties are flattened to the top level for ergonomic access (`alice.fullName` instead of
`alice.props.fullName`). System metadata lives under `meta`.
### Soft Delete vs Hard Delete
By default, `delete()` performs a **soft delete**—it sets the `deletedAt` timestamp but preserves the record:
```typescript
await store.nodes.Person.delete(alice.id); // Sets deletedAt, keeps the record
```
For permanent removal, use `hardDelete()`:
```typescript
await store.nodes.Person.hardDelete(alice.id); // Removes from database
```
**When to use each:**
| Method | Use Case |
|--------|----------|
| `delete()` | Standard deletions, audit trails, undo capability |
| `hardDelete()` | GDPR erasure, storage cleanup, removing test data |
**Warning:** `hardDelete()` is irreversible. It also removes associated uniqueness entries and
embeddings. Consider using soft delete for most use cases.
## Edges
Edges represent relationships between nodes. Each edge has:
- **Type**: The type of relationship (e.g., "worksAt", "knows")
- **ID**: A unique identifier
- **From**: Source node (type + ID)
- **To**: Target node (type + ID)
- **Props**: Properties defined by a Zod schema
### Defining Edge Types
```typescript
import { defineEdge } from "@nicia-ai/typegraph";

// Edge with properties
const worksAt = defineEdge("worksAt", {
  schema: z.object({
    role: z.string(),
    startDate: z.string().optional(),
    isPrimary: z.boolean().default(true),
  }),
});

// Edge without properties
const knows = defineEdge("knows");
// Equivalent to: defineEdge("knows", { schema: z.object({}) })
```
#### Unconstrained Edges
Edges defined without `from` and `to` are **unconstrained** — they can connect any
node type to any node type. When used directly in `defineGraph`, they are automatically
allowed for all node types in the graph:
```typescript
const sameAs = defineEdge("sameAs");
const related = defineEdge("related", {
  schema: z.object({ reason: z.string() }),
});

const graph = defineGraph({
  id: "my_graph",
  nodes: {
    Person: { type: Person },
    Company: { type: Company },
  },
  edges: {
    sameAs, // any→any (Person↔Person, Person↔Company, Company↔Company)
    related, // any→any, with properties
    worksAt: { type: worksAt, from: [Person], to: [Company] }, // constrained
  },
});

// All of these work:
await store.edges.sameAs.create(alice, bob, {}); // Person→Person
await store.edges.sameAs.create(alice, acme, {}); // Person→Company
await store.edges.sameAs.create(acme, alice, {}); // Company→Person
```
This is useful for semantic relationships like `sameAs`, `seeAlso`, `related`, or
`tagged` that apply broadly across node types.
#### Domain and Range Constraints
Edges can include built-in domain (source types) and range (target types) constraints
directly in their definition. This makes edge definitions self-contained and reusable:
```typescript
// Edge with built-in domain/range constraints
const worksAt = defineEdge("worksAt", {
  schema: z.object({
    role: z.string(),
    startDate: z.string().optional(),
  }),
  from: [Person], // Domain: only Person can be the source
  to: [Company], // Range: only Company can be the target
});

// Edge connecting multiple types
const mentions = defineEdge("mentions", {
  from: [Article, Comment],
  to: [Person, Company, Topic],
});
```
Any edge type can be used directly in `defineGraph` without an `EdgeRegistration`
wrapper. Constrained edges use their built-in `from`/`to`; unconstrained edges
allow all node types:
```typescript
const graph = defineGraph({
  nodes: { Person: { type: Person }, Company: { type: Company } },
  edges: {
    worksAt, // Constrained - uses built-in from/to
    sameAs, // Unconstrained - connects any node to any node
  },
});
```
You can still use `EdgeRegistration` to narrow (but not widen) the constraints:
```typescript
const worksAt = defineEdge("worksAt", {
  from: [Person],
  to: [Company, Subsidiary], // Allows both Company and Subsidiary
});

const graph = defineGraph({
  edges: {
    // Narrow to only Subsidiary targets in this graph
    worksAt: { type: worksAt, from: [Person], to: [Subsidiary] },
  },
});
```
Attempting to widen beyond the edge's built-in constraints throws a `ValidationError`:
```typescript
const worksAt = defineEdge("worksAt", {
  from: [Person],
  to: [Company],
});

// This throws ValidationError - OtherEntity is not in the edge's range
defineGraph({
  edges: {
    worksAt: { type: worksAt, from: [Person], to: [OtherEntity] },
  },
});
```
### Edge Constraints
#### Cardinality
Control how many edges can exist:
```typescript
const graph = defineGraph({
  edges: {
    // Default: no limit
    knows: { type: knows, from: [Person], to: [Person], cardinality: "many" },

    // At most one edge of this type from any source node
    currentEmployer: {
      type: currentEmployer,
      from: [Person],
      to: [Company],
      cardinality: "one",
    },

    // At most one edge between any (source, target) pair
    rated: { type: rated, from: [Person], to: [Product], cardinality: "unique" },

    // At most one active edge (valid_to IS NULL) from any source
    currentRole: {
      type: currentRole,
      from: [Person],
      to: [Company],
      cardinality: "oneActive",
    },
  },
});
```
| Cardinality | Description |
|-------------|-------------|
| `"many"` | No limit (default) |
| `"one"` | At most one edge of this type from any source node |
| `"unique"` | At most one edge between any (source, target) pair |
| `"oneActive"` | At most one edge with `valid_to IS NULL` from any source |
#### Enforcement Timing
Cardinality constraints are checked at edge **creation time**, before the insert:
```typescript
// With cardinality: "one" on currentEmployer:
await store.edges.currentEmployer.create(alice, acme, {}); // OK
await store.edges.currentEmployer.create(alice, other, {}); // Throws CardinalityError
```
The check queries existing edges and throws `CardinalityError` if violated.
For `oneActive`, only edges with `validTo` unset count toward the limit.
### Edge Operations
```typescript
// Create edge - pass nodes directly
const edge = await store.edges.worksAt.create(alice, acme, { role: "Engineer" });
// Retrieve edge
const e = await store.edges.worksAt.getById(edge.id);
// Delete edge
await store.edges.worksAt.delete(edge.id);
```
## Graph Definition
The graph definition combines all components:
```typescript
import { defineGraph } from "@nicia-ai/typegraph";

const graph = defineGraph({
  // Unique identifier for this graph
  id: "my_application",

  // Node registrations
  nodes: {
    Person: {
      type: Person,
      onDelete: "restrict", // Default behavior
    },
    Company: {
      type: Company,
      onDelete: "cascade",
    },
    Employment: {
      type: Employment,
      onDelete: "disconnect",
    },
  },

  // Edge registrations
  edges: {
    worksAt: {
      type: worksAt,
      from: [Person],
      to: [Company],
      cardinality: "many",
    },
    employedAt: {
      type: employedAt,
      from: [Company],
      to: [Employment],
      cardinality: "many",
    },
  },

  // Semantic relationships
  ontology: [subClassOf(Company, Organization), disjointWith(Person, Company)],
});
```
## Delete Behaviors
Control what happens when nodes are deleted:
### Restrict (Default)
Blocks deletion if any edges are connected:
```typescript
nodes: {
  Author: { type: Author }, // onDelete defaults to "restrict"
}

// This throws RestrictedDeleteError if Author has edges
await store.nodes.Author.delete(authorId);
```
### Cascade
Automatically deletes all connected edges:
```typescript
nodes: {
  Book: { type: Book, onDelete: "cascade" },
}

// Deletes the book and all edges connected to it
await store.nodes.Book.delete(bookId);
```
### Disconnect
Soft-deletes edges (preserves history):
```typescript
nodes: {
  Review: { type: Review, onDelete: "disconnect" },
}

// Marks connected edges as deleted (deleted_at is set)
await store.nodes.Review.delete(reviewId);
```
## Uniqueness Constraints
Ensure unique values within node types:
```typescript
const graph = defineGraph({
  nodes: {
    Person: {
      type: Person,
      unique: [
        {
          name: "person_email",
          fields: ["email"],
          where: (props) => props.email.isNotNull(),
          scope: "kind",
          collation: "caseInsensitive",
        },
      ],
    },
    Company: {
      type: Company,
      unique: [
        {
          name: "company_ticker",
          fields: ["ticker"],
          scope: "kind",
          collation: "binary",
        },
      ],
    },
  },
});
```
### Scope Options
- `"kind"`: Unique within this exact type only
- `"kindWithSubClasses"`: Unique across this type and all subclasses
### Collation Options
- `"binary"`: Case-sensitive comparison
- `"caseInsensitive"`: Case-insensitive comparison
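To illustrate the `caseInsensitive` collation from the `person_email` constraint above (the specific error class thrown on a duplicate is not documented here, so this sketch catches generically):

```typescript
await store.nodes.Person.create({ fullName: "Alice", email: "Alice@example.com" });

try {
  // Differs only by case — rejected under collation: "caseInsensitive"
  await store.nodes.Person.create({ fullName: "Alia", email: "alice@example.com" });
} catch (error) {
  console.log("Uniqueness violation:", (error as Error).message);
}
```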
## Type Inference
TypeGraph infers TypeScript types from Zod schemas—you never duplicate type definitions.
### Extracting Types from Definitions
```typescript
import { z } from "zod";
import { defineNode, type Node, type NodeProps, type NodeId } from "@nicia-ai/typegraph";
const Person = defineNode("Person", {
schema: z.object({
name: z.string(),
email: z.string().email().optional(),
age: z.number().optional(),
}),
});
// For functions that work with full nodes (id, kind, metadata, props):
type PersonNode = Node<typeof Person>;
// { id: NodeId<typeof Person>; kind: "Person"; name: string; email?: string; version: number; createdAt: Date; ... }
// For functions that only need the property data:
type PersonProps = NodeProps<typeof Person>;
// { name: string; email?: string; age?: number }
// For type-safe node IDs (prevents mixing IDs from different node types):
type PersonId = NodeId<typeof Person>;
// string & { readonly [__nodeId]: typeof Person }
```
Use `Node` when your function needs the full node with metadata.
Use `NodeProps` when you only care about the schema properties (e.g., for form validation or API payloads).
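Branded IDs are plain strings at runtime with a phantom type attached at compile time. A minimal, self-contained sketch of the branding pattern (the `brand` symbol and `BrandedId` helper are illustrative, not TypeGraph's internals):

```typescript
declare const brand: unique symbol;

// A string that carries a compile-time tag for its node kind.
type BrandedId<Kind extends string> = string & { readonly [brand]: Kind };

type PersonId = BrandedId<"Person">;
type CompanyId = BrandedId<"Company">;

// At runtime this is just a cast; the safety is purely compile-time.
const personId = "person-1" as PersonId;

function findPerson(id: PersonId): string {
  return `looking up ${id}`;
}

findPerson(personId); // OK
// findPerson("company-1" as CompanyId); // Type error: CompanyId is not assignable to PersonId
```

Because the brand exists only in the type system, branded IDs serialize, log, and compare exactly like ordinary strings.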
### Typed Store Operations
```typescript
// Create returns a fully typed Node
const alice: Node<typeof Person> = await store.nodes.Person.create({
name: "Alice",
email: "alice@example.com",
});
// TypeScript knows the structure
alice.id; // NodeId<typeof Person> - branded string
alice.name; // string
alice.email; // string | undefined
alice.age; // number | undefined
alice.version; // number
alice.createdAt; // Date
// Type errors caught at compile time
await store.nodes.Person.create({
name: 123, // Error: Type 'number' is not assignable to type 'string'
invalid: "field", // Error: Object literal may only specify known properties
});
```
### Typed Query Results
```typescript
// Result type is inferred from your select projection
const results = await store
.query()
.from("Person", "p")
.select((ctx) => ({
name: ctx.p.name, // TypeScript knows: string
email: ctx.p.email, // TypeScript knows: string | undefined
id: ctx.p.id, // TypeScript knows: NodeId
}))
.execute();
// results: Array<{ name: string; email: string | undefined; id: NodeId }>
// Invalid property access is caught
.select((ctx) => ({
invalid: ctx.p.nonexistent, // TypeScript error!
}))
```
### Typed Edge Operations
Edge endpoints are constrained to valid node types:
```typescript
// Edge definition: worksAt goes from Person → Company
const graph = defineGraph({
// ...
edges: {
worksAt: { type: worksAt, from: [Person], to: [Company] },
},
});
// TypeScript enforces valid endpoints
await store.edges.worksAt.create(alice, acmeCorp, { role: "Engineer" }); // OK
await store.edges.worksAt.create(acmeCorp, alice, { role: "Engineer" });
// Error: Argument of type 'Node<typeof Company>' is not assignable to parameter of type 'Node<typeof Person>'
```
# Backend Setup
> Configure SQLite and PostgreSQL backends for TypeGraph
TypeGraph stores graph data in your existing relational database using Drizzle ORM adapters.
This guide covers setting up SQLite and PostgreSQL backends.
:::note[Custom indexes]
TypeGraph migrations create the core tables and built-in indexes. For application-specific indexes
on JSON properties (and Drizzle/drizzle-kit integration), see [Indexes](/performance/indexes).
:::
## SQLite
SQLite is ideal for development, testing, single-server deployments, and embedded applications.
### Quick Setup
For development and testing, use the convenience function that handles everything:
```typescript
import { createLocalSqliteBackend } from "@nicia-ai/typegraph/sqlite/local";
import { createStore } from "@nicia-ai/typegraph";
// In-memory database (resets on restart)
const { backend } = createLocalSqliteBackend();
const store = createStore(graph, backend);
// File-based database (persisted)
const { backend: fileBackend, db } = createLocalSqliteBackend({ path: "./app.db" });
const fileStore = createStore(graph, fileBackend);
```
### Manual Setup
For full control over the database connection:
```typescript
import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import { createSqliteBackend, generateSqliteMigrationSQL } from "@nicia-ai/typegraph/sqlite";
import { createStore } from "@nicia-ai/typegraph";
// Create and configure the database
const sqlite = new Database("app.db");
sqlite.pragma("journal_mode = WAL"); // Recommended for performance
sqlite.pragma("foreign_keys = ON");
// Run TypeGraph migrations
sqlite.exec(generateSqliteMigrationSQL());
// Create Drizzle instance and backend
const db = drizzle(sqlite);
const backend = createSqliteBackend(db);
const store = createStore(graph, backend);
// Clean up when done
process.on("exit", () => sqlite.close());
```
### SQLite with Vector Search
For semantic search, use sqlite-vec:
```typescript
import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import { createSqliteBackend, generateSqliteMigrationSQL } from "@nicia-ai/typegraph/sqlite";
const sqlite = new Database("app.db");
// Load sqlite-vec extension
sqlite.loadExtension("vec0");
// Run migrations (includes vector index setup)
sqlite.exec(generateSqliteMigrationSQL());
const db = drizzle(sqlite);
const backend = createSqliteBackend(db);
```
See [Semantic Search](/semantic-search) for query examples.
### API Reference
#### `createLocalSqliteBackend(options?)`
Creates a SQLite backend with automatic database and schema setup.
```typescript
function createLocalSqliteBackend(options?: {
path?: string; // Database path, defaults to ":memory:"
tables?: SqliteTables;
}): { backend: GraphBackend; db: BetterSQLite3Database };
```
#### `createSqliteBackend(db, options?)`
Creates a SQLite backend from an existing Drizzle database instance.
```typescript
function createSqliteBackend(
db: BetterSQLite3Database,
options?: { tables?: SqliteTables }
): GraphBackend;
```
#### `generateSqliteMigrationSQL()`
Returns SQL for creating TypeGraph tables in SQLite.
```typescript
function generateSqliteMigrationSQL(): string;
```
## PostgreSQL
PostgreSQL is recommended for production deployments that need concurrent access, large datasets,
or advanced features like pgvector.
### Basic Setup
```typescript
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { createPostgresBackend, generatePostgresMigrationSQL } from "@nicia-ai/typegraph/postgres";
import { createStore } from "@nicia-ai/typegraph";
// Create connection pool
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 20, // Maximum connections
});
// Run TypeGraph migrations
await pool.query(generatePostgresMigrationSQL());
// Create Drizzle instance and backend
const db = drizzle(pool);
const backend = createPostgresBackend(db);
const store = createStore(graph, backend);
```
### PostgreSQL with Vector Search
For semantic search, enable pgvector:
```typescript
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { createPostgresBackend, generatePostgresMigrationSQL } from "@nicia-ai/typegraph/postgres";
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
// Migration SQL includes pgvector extension
await pool.query(generatePostgresMigrationSQL());
// Runs: CREATE EXTENSION IF NOT EXISTS vector;
const db = drizzle(pool);
const backend = createPostgresBackend(db);
```
See [Semantic Search](/semantic-search) for query examples.
### Connection Pooling
For production, always use connection pooling:
```typescript
import { Pool } from "pg";
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 20, // Maximum pool size
idleTimeoutMillis: 30000, // Close idle connections after 30s
connectionTimeoutMillis: 2000, // Timeout for new connections
});
// Handle pool errors
pool.on("error", (err) => {
console.error("Unexpected pool error", err);
});
// Graceful shutdown
process.on("SIGTERM", async () => {
await pool.end();
process.exit(0);
});
```
### API Reference
#### `createPostgresBackend(db, options?)`
Creates a PostgreSQL backend adapter.
```typescript
function createPostgresBackend(
db: NodePgDatabase,
options?: { tables?: PostgresTables }
): GraphBackend;
```
#### `generatePostgresMigrationSQL()`
Returns SQL for creating TypeGraph tables in PostgreSQL, including the pgvector extension.
```typescript
function generatePostgresMigrationSQL(): string;
```
#### `generatePostgresDDL(tables?)`
Returns individual DDL statements (CREATE TABLE, CREATE INDEX) as an array. Useful when you
need per-statement control, for example to execute them in separate transactions or log them
individually.
```typescript
function generatePostgresDDL(tables?: PostgresTables): string[];
```
## Drizzle Entrypoints
TypeGraph exposes Drizzle adapters through two public entrypoints:
- `@nicia-ai/typegraph/sqlite` - SQLite adapter exports
- `@nicia-ai/typegraph/postgres` - PostgreSQL adapter exports
Import from the entrypoint matching your database:
```typescript
import { createSqliteBackend, tables } from "@nicia-ai/typegraph/sqlite";
```
```typescript
import { createPostgresBackend, tables } from "@nicia-ai/typegraph/postgres";
```
## Cloudflare D1
TypeGraph supports Cloudflare D1 for edge deployments, with some limitations.
```typescript
import { drizzle } from "drizzle-orm/d1";
import { createStore } from "@nicia-ai/typegraph";
import { createSqliteBackend } from "@nicia-ai/typegraph/sqlite";
export default {
async fetch(request: Request, env: Env) {
const db = drizzle(env.DB);
const backend = createSqliteBackend(db);
const store = createStore(graph, backend);
// Use store...
},
};
```
**Important:** D1 does not support transactions. See [Limitations](/limitations) for details.
## Backend Capabilities
Check what features a backend supports:
```typescript
const backend = createSqliteBackend(db);
if (backend.capabilities.transactions) {
await store.transaction(async (tx) => { /* ... */ });
} else {
// Handle non-transactional execution
}
if (backend.capabilities.vectorSearch) {
// Vector similarity queries available
}
```
## Connection Management
TypeGraph does not manage database connections. You are responsible for:
1. **Creating connections** with appropriate configuration
2. **Connection pooling** for production use
3. **Closing connections** on shutdown
```typescript
// You create the connection
const sqlite = new Database("app.db");
const db = drizzle(sqlite);
const backend = createSqliteBackend(db);
const store = createStore(graph, backend);
// You close the connection
process.on("exit", () => {
sqlite.close();
});
```
The `store.close()` method is a no-op—cleanup is your responsibility.
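Since cleanup is your responsibility, a try/finally wrapper keeps connection lifetime in one place. A minimal sketch using a stand-in `Closable` interface (the `withConnection` helper is illustrative, not part of TypeGraph; a `better-sqlite3` database satisfies the shape):

```typescript
// Anything with a synchronous close(), e.g. a better-sqlite3 Database.
interface Closable {
  close(): void;
}

// Run work against a resource and always close it, even if work throws.
async function withConnection<R>(conn: Closable, work: () => Promise<R>): Promise<R> {
  try {
    return await work();
  } finally {
    conn.close();
  }
}

// Usage with a stub connection that records its lifecycle:
const events: string[] = [];
const conn: Closable = { close: () => void events.push("closed") };
await withConnection(conn, async () => void events.push("worked"));
// events is now ["worked", "closed"]
```

In an application you would build the store inside `work` and let the wrapper close the underlying connection on the way out.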
## Environment-Specific Setup
### Development
```typescript
// In-memory for fast tests
const { backend } = createLocalSqliteBackend();
// Or file-based for persistence during development
const { backend: devBackend } = createLocalSqliteBackend({ path: "./dev.db" });
```
### Testing
```typescript
// Fresh in-memory database per test
beforeEach(() => {
const { backend } = createLocalSqliteBackend();
store = createStore(graph, backend);
});
```
### Production
```typescript
// PostgreSQL with pooling
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 20,
  ssl: { rejectUnauthorized: false }, // Some managed providers require this; prefer verified TLS where possible
});
await pool.query(generatePostgresMigrationSQL());
const db = drizzle(pool);
const backend = createPostgresBackend(db);
const [store] = await createStoreWithSchema(graph, backend);
```
## Next Steps
- [Schemas & Types](/core-concepts) - Define your graph schema
- [Semantic Search](/semantic-search) - Vector embeddings and similarity search
- [Limitations](/limitations) - Backend-specific constraints
# Query Builder Overview
> A fluent, type-safe API for querying your graph
TypeGraph provides a fluent, type-safe query builder for traversing and filtering your graph. This
page introduces the query categories and how they compose together.
## Query Categories
Every query builder method falls into one of these categories:
| Category | Purpose | Key Methods |
|----------|---------|-------------|
| [Source](/queries/source) | Entry point - where to start | `from()` |
| [Filter](/queries/filter) | Reduce the result set | `whereNode()`, `whereEdge()` |
| [Traverse](/queries/traverse) | Navigate relationships | `traverse()`, `optionalTraverse()`, `to()` |
| [Recursive](/queries/recursive) | Variable-length paths | `recursive()` |
| [Shape](/queries/shape) | Transform output structure | `select()`, `aggregate()` |
| [Aggregate](/queries/aggregate) | Summarize data | `groupBy()`, `count()`, `sum()`, `avg()` |
| [Order](/queries/order) | Control result ordering/size | `orderBy()`, `limit()`, `offset()` |
| [Temporal](/queries/temporal) | Time-based queries | `temporal()` |
| [Compose](/queries/compose) | Reusable query parts | `pipe()`, `createFragment()` |
| [Combine](/queries/combine) | Set operations | `union()`, `intersect()`, `except()` |
| [Execute](/queries/execute) | Run and retrieve | `execute()`, `first()`, `count()`, `exists()`, `paginate()`, `stream()` |
## Query Flow
A typical query follows this flow:
```text
Source → Filter → Traverse → Filter → Shape → Order → Execute
↑__________________|
(repeat as needed)
```
Each step is optional except Source and Execute. You can filter, traverse, and filter again as many
times as needed before shaping and executing.
## Basic Example
```typescript
const results = await store
.query()
.from("Person", "p") // Source
.whereNode("p", (p) => p.status.eq("active")) // Filter
.traverse("worksAt", "e") // Traverse
.to("Company", "c") // Traverse (target)
.whereNode("c", (c) => c.industry.eq("Tech")) // Filter
.select((ctx) => ({ // Shape
person: ctx.p.name,
company: ctx.c.name,
role: ctx.e.role,
}))
.orderBy("p", "name", "asc") // Order
.limit(50) // Order
.execute(); // Execute
```
## Type Safety
The query builder is fully typed. TypeScript infers result types based on your schema and selection:
```typescript
// TypeScript infers: Array<{ name: string; email: string | undefined }>
const results = await store
.query()
.from("Person", "p")
.select((ctx) => ({
name: ctx.p.name, // string (required in schema)
email: ctx.p.email, // string | undefined (optional in schema)
}))
.execute();
// Invalid property access is caught at compile time:
.select((ctx) => ({
invalid: ctx.p.nonexistent, // TypeScript error!
}))
```
## When to Use Queries vs Store API
**Use the query builder** when you need:
- Filtering based on node properties
- Traversing relationships between nodes
- Aggregating data across multiple nodes
- Complex predicates with AND/OR logic
**Use the [Store API](/schemas-stores#store-api)** for simple operations:
- Get a node by ID
- Create a new node
- Update a node's properties
- Delete a node
## Predicates Reference
Predicates are the building blocks for filtering. Each data type has its own set of predicates:
| Type | Documentation |
|------|--------------|
| String | [String Predicates](/queries/predicates/#string) |
| Number | [Number Predicates](/queries/predicates/#number) |
| Date | [Date Predicates](/queries/predicates/#date) |
| Array | [Array Predicates](/queries/predicates/#array) |
| Object | [Object Predicates](/queries/predicates/#object) |
| Embedding | [Embedding Predicates](/queries/predicates/#embedding) |
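As noted earlier, queries compile to an AST before targeting SQL, and predicates are the leaves of that AST: each call like `p.status.eq("active")` builds a data structure rather than SQL text, which is what lets the same query run on both SQLite and PostgreSQL. A minimal, self-contained sketch of the idea (the node shapes and `toSql` compiler here are illustrative, not TypeGraph's actual AST):

```typescript
// One illustrative AST node per operation.
type Predicate =
  | { op: "eq"; field: string; value: unknown }
  | { op: "and"; left: Predicate; right: Predicate };

const eq = (field: string, value: unknown): Predicate => ({ op: "eq", field, value });
const and = (left: Predicate, right: Predicate): Predicate => ({ op: "and", left, right });

// A trivial compiler: walk the AST and emit parameterized SQL.
// A real compiler would also collect bind values and handle dialect differences.
function toSql(p: Predicate): string {
  switch (p.op) {
    case "eq":
      return `${p.field} = ?`;
    case "and":
      return `(${toSql(p.left)} AND ${toSql(p.right)})`;
  }
}

toSql(and(eq("status", "active"), eq("industry", "Tech")));
// "(status = ? AND industry = ?)"
```

Emitting placeholders instead of interpolated values is also what keeps compiled queries safe from SQL injection.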
## Performance Tips
### Filter Early
Apply predicates as early as possible to reduce the working set:
```typescript
// Good: Filter at source
store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.active.eq(true))
.traverse("worksAt", "e")
.to("Company", "c");
// Less efficient: Filter after traversal
store
.query()
.from("Person", "p")
.traverse("worksAt", "e")
.to("Company", "c")
.whereNode("p", (p) => p.active.eq(true));
```
### Be Specific with Kinds
Unless you need subclass expansion, use exact kinds:
```typescript
// More efficient: Exact kind
.from("Podcast", "p")
// Less efficient: Includes all subclasses
.from("Media", "m", { includeSubClasses: true })
```
### Always Paginate Large Results
```typescript
const page = await store
.query()
.from("Event", "e")
.orderBy("e", "date", "desc")
.limit(100)
.execute();
```
## Next Steps
Start with the fundamentals:
1. [Source](/queries/source) - Starting queries with `from()`
2. [Filter](/queries/filter) - Reducing results with predicates
3. [Traverse](/queries/traverse) - Navigating relationships
4. [Shape](/queries/shape) - Transforming output with `select()`
# Troubleshooting
> Solutions to common issues and frequently asked questions
This guide covers common issues and their solutions when working with TypeGraph.
## Installation Issues
### "Cannot find module '@nicia-ai/typegraph'"
**Cause:** Package not installed or using wrong package name.
**Solution:**
```bash
npm install @nicia-ai/typegraph zod drizzle-orm
```
### "better-sqlite3 compilation failed"
**Cause:** Native module compilation requires build tools.
**Solutions:**
**macOS:**
```bash
xcode-select --install
```
**Ubuntu/Debian:**
```bash
sudo apt-get install build-essential python3
```
**Windows:**
```bash
npm install --global windows-build-tools
```
**Alternative:** Use `sql.js` for pure JavaScript SQLite (no compilation needed).
### "Module not found: drizzle-orm/better-sqlite3"
**Cause:** Drizzle ORM subpath exports require specific import syntax.
**Solution:** Ensure correct imports:
```typescript
// Correct
import { drizzle } from "drizzle-orm/better-sqlite3";
// Incorrect
import { drizzle } from "drizzle-orm";
```
## Schema Definition Errors
### "Node schema contains reserved property names"
**Cause:** Using reserved keys (`id`, `kind`, `meta`) in your Zod schema.
**Solution:** Rename your properties:
```typescript
// Bad - 'id' is reserved
const User = defineNode("User", {
schema: z.object({
id: z.string(), // Error!
name: z.string(),
}),
});
// Good - use a different name
const User = defineNode("User", {
schema: z.object({
externalId: z.string(),
name: z.string(),
}),
});
```
TypeGraph automatically provides `id`, `kind`, and `meta` on all nodes.
### "Edge type already has constraints defined"
**Cause:** Defining `from`/`to` constraints on both the edge type and graph registration.
**Solution:** Define constraints in one place only:
```typescript
// Option 1: On the edge type (reusable across graphs)
const worksAt = defineEdge("worksAt", {
from: [Person],
to: [Company],
});
const graph = defineGraph({
edges: {
worksAt: { type: worksAt }, // No from/to here
},
});
// Option 2: On the graph (flexible per-graph)
const worksAt = defineEdge("worksAt");
const graph = defineGraph({
edges: {
worksAt: { type: worksAt, from: [Person], to: [Company] },
},
});
```
## Runtime Errors
### ValidationError: "Invalid input"
**Cause:** Data doesn't match the Zod schema.
**Solution:** Check the error details for specific issues:
```typescript
try {
await store.nodes.Person.create({ name: "" });
} catch (error) {
if (error instanceof ValidationError) {
console.log(error.details.issues); // Zod issues array
}
}
```
### NodeNotFoundError
**Cause:** Attempting to read/update/delete a non-existent node.
**Solution:** Check if the node exists first or handle the error:
```typescript
const node = await store.nodes.Person.getById(someId);
if (!node) {
// Handle missing node
}
// Or use error handling
try {
await store.nodes.Person.update(someId, { name: "New" });
} catch (error) {
if (error instanceof NodeNotFoundError) {
console.log(`Node ${error.details.id} not found`);
}
}
```
### RestrictedDeleteError
**Cause:** Attempting to delete a node that has edges, with `onDelete: "restrict"` (the default).
**Solution:** Either delete the edges first or use a different delete behavior:
```typescript
// Option 1: Delete edges first
const edges = await store.edges.worksAt.findFrom(person);
for (const edge of edges) {
await store.edges.worksAt.delete(edge.id);
}
await store.nodes.Person.delete(person.id);
// Option 2: Use cascade delete in schema
const graph = defineGraph({
nodes: {
Person: { type: Person, onDelete: "cascade" },
},
});
```
### DisjointError
**Cause:** Creating a node with an ID that's already used by a disjoint type.
**Solution:** Ensure IDs are unique across disjoint types or don't use explicit IDs:
```typescript
// If Person and Organization are disjoint:
// Bad - same ID for different types
await store.nodes.Person.create({ name: "Alice" }, { id: "entity-1" });
await store.nodes.Organization.create({ name: "Acme" }, { id: "entity-1" }); // Error!
// Good - let TypeGraph generate unique IDs
await store.nodes.Person.create({ name: "Alice" });
await store.nodes.Organization.create({ name: "Acme" });
```
## Query Issues
### "Alias 'x' is already in use"
**Cause:** Using the same alias twice in a query.
**Solution:** Use unique aliases:
```typescript
// Bad
store.query()
.from("Person", "p")
.traverse("knows", "e")
.to("Person", "p") // Error! 'p' already used
// Good
store.query()
.from("Person", "p1")
.traverse("knows", "e")
.to("Person", "p2")
```
### Empty results when expecting data
**Causes and solutions:**
1. **Type mismatch:** Ensure you're querying the correct node type
```typescript
// Check the node type name matches exactly
.from("Person", "p") // Must match defineNode("Person", ...)
```
2. **Missing includeSubClasses:** When querying a superclass
```typescript
.from("Content", "c", { includeSubClasses: true })
```
3. **Strict predicate:** Check your filters aren't too restrictive
```typescript
// Debug by removing filters temporarily
const all = await store.query().from("Person", "p").select((c) => c.p).execute();
console.log(all.length); // How many total?
```
### Slow queries
**Solutions:**
1. **Use the query profiler:**
```typescript
import { QueryProfiler } from "@nicia-ai/typegraph/profiler";
const profiler = new QueryProfiler();
profiler.attachToStore(store);
// Run your queries...
const report = profiler.getReport();
console.log(report.recommendations);
```
2. **Add indexes** based on profiler recommendations:
```typescript
import { defineNodeIndex } from "@nicia-ai/typegraph/indexes";
const nameIndex = defineNodeIndex("Person", ["name"]);
```
3. **Limit results:**
```typescript
.limit(100)
// Or use pagination
.paginate({ first: 20 })
```
## Database Connection Issues
### "Database is locked" (SQLite)
**Cause:** Multiple processes accessing the same SQLite file without WAL mode.
**Solution:** Enable WAL mode:
```typescript
const sqlite = new Database("myapp.db");
sqlite.pragma("journal_mode = WAL");
```
### Connection pool exhausted (PostgreSQL)
**Cause:** Too many concurrent connections.
**Solution:** Configure pool limits:
```typescript
import { Pool } from "pg";
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 20, // Adjust based on your needs
idleTimeoutMillis: 30000,
});
```
### "relation 'typegraph_nodes' does not exist"
**Cause:** Migration not run.
**Solution:** Run the migration SQL:
```typescript
// PostgreSQL
import { generatePostgresMigrationSQL } from "@nicia-ai/typegraph/postgres";
await pool.query(generatePostgresMigrationSQL());
// SQLite
import { generateSqliteMigrationSQL } from "@nicia-ai/typegraph/sqlite";
sqlite.exec(generateSqliteMigrationSQL());
```
## Semantic Search Issues
### "Extension not found" / "vector type not available"
**Cause:** Vector extension not installed.
**PostgreSQL:**
```sql
CREATE EXTENSION IF NOT EXISTS vector;
```
**SQLite:**
```typescript
import * as sqliteVec from "sqlite-vec";
sqliteVec.load(sqlite); // Must be called before creating backend
```
### "Dimension mismatch"
**Cause:** Query embedding has different dimension than stored embeddings.
**Solution:** Use consistent embedding dimensions:
```typescript
// Schema defines 1536 dimensions
const Document = defineNode("Document", {
schema: z.object({
embedding: embedding(1536),
}),
});
// Query embedding must also be 1536
const queryEmbedding = await generateEmbedding(text);
console.log(queryEmbedding.length); // Should be 1536
```
### "Inner product not supported" (SQLite)
**Cause:** sqlite-vec doesn't support inner product metric.
**Solution:** Use cosine or L2:
```typescript
// Instead of:
d.embedding.similarTo(query, 10, { metric: "inner_product" })
// Use:
d.embedding.similarTo(query, 10, { metric: "cosine" })
```
## TypeScript Issues
### "Property 'x' does not exist on type"
**Cause:** Accessing a property not defined in your schema.
**Solution:** Ensure the property is in your Zod schema:
```typescript
const Person = defineNode("Person", {
schema: z.object({
name: z.string(),
email: z.string().optional(),
}),
});
// Now both properties are available with correct types
const person = await store.nodes.Person.getById(id);
person?.name; // string
person?.email; // string | undefined
```
### Type inference not working in select
**Cause:** Complex generic inference limitations.
**Solution:** Use explicit typing or simplify:
```typescript
// If inference fails, be explicit
.select((ctx) => ({
name: ctx.p.name as string,
company: ctx.c.name as string,
}))
```
## Still Having Issues?
1. **Check the [Limitations](/limitations)** page for known constraints
2. **Review [Architecture](/architecture)** to understand how TypeGraph works
3. **Search [GitHub Issues](https://github.com/nicia-ai/typegraph/issues)** for similar problems
4. **Open a new issue** with a minimal reproduction case
# Errors
> Error types and handling in TypeGraph
TypeGraph uses typed errors to communicate specific failure conditions. All errors extend the base
`TypeGraphError` class and include categorization, contextual details, and actionable suggestions.
## Error Categories
Every error is categorized to help determine the appropriate response:
| Category | Description | Typical Response |
|----------|-------------|------------------|
| `user` | Invalid input or misuse of API | Fix the input and retry |
| `constraint` | Graph constraint violated | Handle as business logic violation |
| `system` | Internal or infrastructure error | Log, alert, potentially retry |
```typescript
import { isUserRecoverable, isConstraintError, isSystemError } from "@nicia-ai/typegraph";
try {
await store.nodes.Person.create(data);
} catch (error) {
if (isUserRecoverable(error)) {
// Show validation errors to user
return { error: error.toUserMessage() };
}
if (isConstraintError(error)) {
// Handle business rule violation
return { error: "This operation violates a constraint" };
}
if (isSystemError(error)) {
// Log and alert
console.error(error.toLogString());
throw error;
}
}
```
## Base Error
### `TypeGraphError`
Base error class for all TypeGraph errors.
```typescript
class TypeGraphError extends Error {
readonly code: string;
readonly category: ErrorCategory;
  readonly details: Readonly<Record<string, unknown>>;
readonly suggestion?: string;
// Format error for end users (includes suggestion if available)
toUserMessage(): string;
// Format error for logging (includes code, category, and details)
toLogString(): string;
}
type ErrorCategory = "user" | "constraint" | "system";
```
**Properties:**
| Property | Type | Description |
|----------|------|-------------|
| `code` | `string` | Machine-readable error code |
| `category` | `ErrorCategory` | Error classification for handling |
| `details` | `Readonly<Record<string, unknown>>` | Additional context about the error |
| `suggestion` | `string \| undefined` | Actionable guidance for resolution |
**Methods:**
| Method | Returns | Description |
|--------|---------|-------------|
| `toUserMessage()` | `string` | Human-readable message with suggestion |
| `toLogString()` | `string` | Detailed string for logging/debugging |
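The two formatting methods combine those properties for different audiences. A minimal sketch of how such an error class could implement them (the `SketchError` class and its exact output format are illustrative, not TypeGraph's source):

```typescript
type ErrorCategory = "user" | "constraint" | "system";

class SketchError extends Error {
  constructor(
    message: string,
    readonly code: string,
    readonly category: ErrorCategory,
    readonly details: Readonly<Record<string, unknown>> = {},
    readonly suggestion?: string,
  ) {
    super(message);
  }

  // End-user message: human-readable text plus the suggestion, if any.
  toUserMessage(): string {
    return this.suggestion ? `${this.message}\n\nSuggestion: ${this.suggestion}` : this.message;
  }

  // Log line: machine-readable code, category, and serialized details.
  toLogString(): string {
    return `[${this.code}] (${this.category}) ${this.message} ${JSON.stringify(this.details)}`;
  }
}

const err = new SketchError("Node not found", "NODE_NOT_FOUND", "user", { id: "p-1" });
err.toLogString(); // '[NODE_NOT_FOUND] (user) Node not found {"id":"p-1"}'
```

The split matters in practice: `toUserMessage()` output is safe to surface in a UI, while `toLogString()` keeps the structured context your monitoring needs.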
## Validation Errors
### `ValidationError`
Thrown when schema validation fails during node or edge creation/update. Includes structured issue
details with context about which entity failed.
```typescript
interface ValidationErrorDetails {
readonly issues: readonly ValidationIssue[];
readonly entityType?: "node" | "edge";
readonly kind?: string;
readonly operation?: "create" | "update";
readonly id?: string;
}
interface ValidationIssue {
readonly path: string;
readonly message: string;
readonly code?: string;
}
```
**Example:**
```typescript
try {
await store.nodes.Person.create({ name: "" }); // Empty name fails min(1)
} catch (error) {
if (error instanceof ValidationError) {
console.log(error.category); // "user"
console.log(error.details.kind); // "Person"
console.log(error.details.operation); // "create"
console.log(error.details.issues);
// [{ path: "name", message: "String must contain at least 1 character(s)" }]
console.log(error.toUserMessage());
// "Validation failed for Person create: name - String must contain at least 1 character(s)
//
// Suggestion: Check the data you're providing matches the schema..."
}
}
```
### `DisjointError`
Thrown when attempting to create a node that violates a disjointness constraint.
```typescript
// If Person and Organization are disjoint:
await store.nodes.Person.create({ name: "Alice" }, { id: "entity-1" });
try {
// Same ID, different disjoint type
await store.nodes.Organization.create({ name: "Acme" }, { id: "entity-1" });
} catch (error) {
if (error instanceof DisjointError) {
console.log(error.category); // "constraint"
console.log(error.details);
// { existingType: "Person", attemptedType: "Organization" }
console.log(error.suggestion);
// "Use a different ID for the new node, or delete the existing node first..."
}
}
```
### `EndpointError`
Thrown when an edge is created with invalid endpoint types.
```typescript
// If worksAt only allows Person -> Company:
try {
await store.edges.worksAt.create(company, person, {}); // Wrong direction
} catch (error) {
if (error instanceof EndpointError) {
console.log(error.category); // "user"
console.log(error.suggestion);
// "Check the edge definition to see which node types are allowed..."
}
}
```
### `CardinalityError`
Thrown when a cardinality constraint is violated.
```typescript
// If worksAt has cardinality: "one" (person can only work at one company):
await store.edges.worksAt.create(alice, acme, { role: "Engineer" });
try {
await store.edges.worksAt.create(alice, otherCompany, { role: "Consultant" });
} catch (error) {
if (error instanceof CardinalityError) {
console.log(error.category); // "constraint"
console.log(error.details); // { edge: "worksAt", cardinality: "one" }
console.log(error.suggestion);
// "Remove the existing edge before creating a new one, or update the existing edge..."
}
}
```
### `UniquenessError`
Thrown when a uniqueness constraint is violated.
```typescript
// If email has a unique constraint:
await store.nodes.Person.create({ name: "Alice", email: "alice@example.com" });
try {
await store.nodes.Person.create({ name: "Bob", email: "alice@example.com" });
} catch (error) {
if (error instanceof UniquenessError) {
console.log(error.category); // "constraint"
console.log(error.details); // { field: "email", value: "alice@example.com" }
console.log(error.suggestion);
// "Use a different value for the unique field, or update the existing record..."
}
}
```
## Not Found Errors
### `NodeNotFoundError`
Thrown when a referenced node does not exist.
```typescript
try {
await store.nodes.Person.update("nonexistent-id", { name: "New Name" });
} catch (error) {
if (error instanceof NodeNotFoundError) {
console.log(error.category); // "user"
console.log(error.details); // { id: "nonexistent-id", type: "Person" }
console.log(error.suggestion);
// "Verify the node ID is correct and the node hasn't been deleted..."
}
}
```
### `EdgeNotFoundError`
Thrown when a referenced edge does not exist.
```typescript
try {
await store.edges.worksAt.update("nonexistent-edge", { role: "Manager" });
} catch (error) {
if (error instanceof EdgeNotFoundError) {
console.log(error.category); // "user"
console.log(error.details); // { id: "nonexistent-edge" }
console.log(error.suggestion);
// "Verify the edge ID is correct and the edge hasn't been deleted..."
}
}
```
### `KindNotFoundError`
Thrown when referencing a node or edge type that doesn't exist in the graph definition.
```typescript
try {
await store.query().from("NonExistentType", "n").execute();
} catch (error) {
if (error instanceof KindNotFoundError) {
console.log(error.category); // "user"
console.log(error.details); // { kind: "NonExistentType" }
console.log(error.suggestion);
// "Check the graph definition to see which node and edge types are available..."
}
}
```
### `EndpointNotFoundError`
Thrown when an edge references a node that doesn't exist.
```typescript
try {
await store.edges.worksAt.create(
{ kind: "Person", id: "nonexistent" },
company,
{ role: "Engineer" }
);
} catch (error) {
if (error instanceof EndpointNotFoundError) {
console.log(error.category); // "user"
console.log(error.details); // { kind: "Person", id: "nonexistent" }
console.log(error.suggestion);
// "Create the referenced node first, or verify the node ID is correct..."
}
}
```
## Delete Errors
### `RestrictedDeleteError`
Thrown when delete is blocked due to existing edges (when `onDelete: "restrict"`).
```typescript
// If Person has edges and onDelete is "restrict":
try {
await store.nodes.Person.delete(alice.id);
} catch (error) {
if (error instanceof RestrictedDeleteError) {
console.log(error.category); // "constraint"
console.log(error.details); // { nodeId: "...", edgeCount: 3 }
console.log(error.suggestion);
// "Delete all edges connected to this node first, or change the delete behavior..."
}
}
```
## Configuration Errors
### `ConfigurationError`
Thrown when the store, backend, or schema definition is misconfigured.
```typescript
// Using transactions on D1 (which doesn't support them):
try {
await store.transaction(async (tx) => {
// ...
});
} catch (error) {
if (error instanceof ConfigurationError) {
console.log(error.category); // "system"
console.log(error.suggestion);
// "Check the backend documentation for supported features..."
}
}
```
### `SchemaMismatchError`
Thrown when the database schema doesn't match the expected graph definition.
```typescript
try {
const [store] = await createStoreWithSchema(graph, backend);
} catch (error) {
if (error instanceof SchemaMismatchError) {
console.log(error.category); // "system"
console.log(error.details); // { expected: "...", actual: "..." }
console.log(error.suggestion);
// "Run migrations to update the database schema..."
}
}
```
### `MigrationError`
Thrown when schema migration fails due to breaking changes that require manual intervention.
```typescript
try {
const [store] = await createStoreWithSchema(graph, backend);
} catch (error) {
if (error instanceof MigrationError) {
console.log(error.category); // "system"
console.log(error.details.breakingChanges);
// ["Removed required field 'email' from Person"]
console.log(error.suggestion);
// "Review the breaking changes and perform manual migration if needed..."
}
}
```
## Query Errors
### `UnsupportedPredicateError`
Thrown when using a query predicate that isn't supported by the current backend.
```typescript
// Using vector similarity on a backend without vector support:
try {
await store
.query()
.from("Document", "d")
.whereNode("d", (d) => d.embedding.similarTo(queryVector, 10))
.execute();
} catch (error) {
if (error instanceof UnsupportedPredicateError) {
console.log(error.category); // "system"
console.log(error.suggestion);
// "Use a backend that supports this predicate, or rewrite the query..."
}
}
```
## Error Handling Patterns
### Using Error Utilities
TypeGraph provides utility functions for common error handling patterns:
```typescript
import {
isTypeGraphError,
isUserRecoverable,
isConstraintError,
isSystemError,
getErrorSuggestion,
} from "@nicia-ai/typegraph";
try {
await store.nodes.Person.create(data);
} catch (error) {
if (!isTypeGraphError(error)) {
// Not a TypeGraph error, handle differently
throw error;
}
// Get suggestion regardless of error type
const suggestion = getErrorSuggestion(error);
if (isUserRecoverable(error)) {
// User can fix this by providing different input
return {
error: error.toUserMessage(),
suggestion,
};
}
if (isConstraintError(error)) {
// Business rule violation
return {
error: "This operation violates a constraint",
details: error.details,
};
}
if (isSystemError(error)) {
// Infrastructure/configuration issue
console.error(error.toLogString());
throw error;
}
}
```
### Catch Specific Errors
```typescript
import {
ValidationError,
NodeNotFoundError,
DisjointError,
} from "@nicia-ai/typegraph";
try {
await store.nodes.Person.create(data);
} catch (error) {
if (error instanceof ValidationError) {
// Handle validation failure with contextual details
return {
error: "Invalid data",
issues: error.details.issues,
entity: error.details.kind,
};
}
if (error instanceof DisjointError) {
// Handle constraint violation
return { error: "ID already used by different type" };
}
throw error; // Re-throw unexpected errors
}
```
### Check Error Codes
```typescript
try {
await store.nodes.Person.update(id, data);
} catch (error) {
if (error instanceof TypeGraphError) {
switch (error.code) {
case "NODE_NOT_FOUND":
return { error: "Person not found" };
case "VALIDATION_ERROR":
return { error: "Invalid data", issues: error.details.issues };
default:
throw error;
}
}
throw error;
}
```
### Transaction Error Handling
```typescript
try {
await store.transaction(async (tx) => {
const person = await tx.nodes.Person.create({ name: "Alice" });
const company = await tx.nodes.Company.create({ name: "Acme" });
await tx.edges.worksAt.create(person, company, { role: "Engineer" });
});
} catch (error) {
// Transaction is automatically rolled back on any error
if (error instanceof ValidationError) {
console.log("Validation failed, transaction rolled back");
console.log("Failed on:", error.details.kind, error.details.operation);
}
throw error;
}
```
## Contextual Validation Utilities
For library authors or advanced use cases, validation utilities are available from the schema sub-export:
```typescript
import {
validateNodeProps,
validateEdgeProps,
wrapZodError,
createValidationError,
} from "@nicia-ai/typegraph/schema";
// Validate node properties with full context
const validated = validateNodeProps(PersonSchema, inputData, {
kind: "Person",
operation: "create",
});
// Wrap a Zod error with TypeGraph context
try {
schema.parse(data);
} catch (zodError) {
throw wrapZodError(zodError, {
entityType: "node",
kind: "Person",
operation: "update",
id: "person-123",
});
}
```
## Error Codes Reference
| Code | Error Class | Category | Description |
|------|-------------|----------|-------------|
| `VALIDATION_ERROR` | `ValidationError` | user | Schema validation failed |
| `DISJOINT_ERROR` | `DisjointError` | constraint | Disjointness constraint violated |
| `ENDPOINT_ERROR` | `EndpointError` | user | Invalid edge endpoint types |
| `CARDINALITY_ERROR` | `CardinalityError` | constraint | Cardinality constraint violated |
| `UNIQUENESS_ERROR` | `UniquenessError` | constraint | Uniqueness constraint violated |
| `NODE_NOT_FOUND` | `NodeNotFoundError` | user | Referenced node doesn't exist |
| `EDGE_NOT_FOUND` | `EdgeNotFoundError` | user | Referenced edge doesn't exist |
| `KIND_NOT_FOUND` | `KindNotFoundError` | user | Unknown node/edge type |
| `ENDPOINT_NOT_FOUND` | `EndpointNotFoundError` | user | Edge endpoint node doesn't exist |
| `RESTRICTED_DELETE` | `RestrictedDeleteError` | constraint | Delete blocked by existing edges |
| `CONFIGURATION_ERROR` | `ConfigurationError` | system | Invalid configuration |
| `SCHEMA_MISMATCH` | `SchemaMismatchError` | system | Database schema mismatch |
| `MIGRATION_ERROR` | `MigrationError` | system | Migration failed |
| `UNSUPPORTED_PREDICATE` | `UnsupportedPredicateError` | system | Predicate not supported |
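When surfacing these errors from an HTTP API, the `category` field maps naturally onto status codes. A minimal sketch — the mapping itself is an application-level choice, not something TypeGraph prescribes:

```typescript
// Hypothetical mapping from TypeGraph error categories to HTTP statuses.
// "user" errors are client mistakes, "constraint" violations conflict with
// existing state, and "system" errors are server-side problems.
function statusForCategory(category: string): number {
  switch (category) {
    case "user":
      return 400; // Bad Request: the caller can fix the input
    case "constraint":
      return 409; // Conflict: violates a business rule or existing state
    case "system":
      return 500; // Internal Server Error: configuration/infrastructure
    default:
      return 500; // Unknown categories are treated as server errors
  }
}
```

Combined with `isTypeGraphError` and `getErrorSuggestion` from the patterns above, this gives a single error-to-response translator for request handlers.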
# Integration Patterns
> Strategies for integrating TypeGraph into your application architecture
This guide covers common integration patterns for adding TypeGraph to existing
applications, from simple setups to production deployment strategies.
## Direct Drizzle Integration (Shared Database)
If you're already using Drizzle ORM, TypeGraph can share your existing database
connection. TypeGraph tables coexist alongside your application tables.
```typescript
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
import { createPostgresBackend, generatePostgresMigrationSQL } from "@nicia-ai/typegraph/postgres";
import { createStore } from "@nicia-ai/typegraph";
// Your existing Drizzle setup
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = drizzle(pool);
// Add TypeGraph tables to your existing database
await pool.query(generatePostgresMigrationSQL());
// Create TypeGraph backend using the same connection
const backend = createPostgresBackend(db);
const store = createStore(graph, backend);
// For pure TypeGraph operations, use store.transaction()
await store.transaction(async (tx) => {
const person = await tx.nodes.Person.create({ name: "Alice" });
const company = await tx.nodes.Company.create({ name: "Acme" });
await tx.edges.worksAt.create(person, company, { role: "Engineer" });
});
```
### Mixed Drizzle + TypeGraph Transactions
When combining TypeGraph operations with direct Drizzle queries in the same atomic transaction,
create a temporary backend from the Drizzle transaction:
```typescript
await db.transaction(async (tx) => {
// Direct Drizzle operations
await tx.insert(auditLog).values({ action: "user_created" });
// TypeGraph operations in the same transaction
const txBackend = createPostgresBackend(tx);
const txStore = createStore(graph, txBackend);
await txStore.nodes.Person.create({ name: "Alice" });
});
```
This pattern is only needed when you must combine both in one atomic transaction.
**When to use:**
- You want a single database to manage
- Your graph data relates to existing tables
- You need cross-cutting transactions
**Considerations:**
- TypeGraph tables use the `typegraph_` prefix to avoid collisions
- Run TypeGraph migrations alongside your application migrations
- Connection pool is shared, so size accordingly
## Drizzle-Kit Managed Migrations (Recommended)
If you use `drizzle-kit` to manage migrations, you can import TypeGraph's table
definitions directly into your schema file. This lets drizzle-kit generate
migrations for all tables—both yours and TypeGraph's—in one place.
### Setup
**1. Import TypeGraph tables into your schema:**
```typescript
// schema.ts
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";
// Import TypeGraph tables (these are standard Drizzle table definitions)
export * from "@nicia-ai/typegraph/sqlite";
// Or for PostgreSQL:
// export * from "@nicia-ai/typegraph/postgres";
// Your application tables
export const users = sqliteTable("users", {
id: text("id").primaryKey(),
name: text("name").notNull(),
email: text("email").notNull(),
});
```
**2. Generate migrations normally:**
```bash
npx drizzle-kit generate
```
Drizzle-kit will now see all tables—TypeGraph's and yours—and generate migrations
for them.
**3. Apply migrations:**
```bash
npx drizzle-kit migrate
# Or for Cloudflare D1:
wrangler d1 migrations apply your-database
```
**4. Create the backend:**
```typescript
import { drizzle } from "drizzle-orm/better-sqlite3";
import Database from "better-sqlite3";
import { createSqliteBackend, tables } from "@nicia-ai/typegraph/sqlite";
import { createStore } from "@nicia-ai/typegraph";
const sqlite = new Database("app.db");
const db = drizzle(sqlite);
// Use the same tables that drizzle-kit manages
const backend = createSqliteBackend(db, { tables });
const store = createStore(graph, backend);
```
### Custom Table Names
To avoid conflicts or match your naming conventions, use the factory function:
```typescript
// schema.ts
import { createSqliteTables } from "@nicia-ai/typegraph/sqlite";
// Create tables with custom names
export const typegraphTables = createSqliteTables({
nodes: "myapp_graph_nodes",
edges: "myapp_graph_edges",
uniques: "myapp_graph_uniques",
schemaVersions: "myapp_graph_schema_versions",
embeddings: "myapp_graph_embeddings",
});
// Export individual tables for drizzle-kit
export const {
nodes: myappGraphNodes,
edges: myappGraphEdges,
uniques: myappGraphUniques,
schemaVersions: myappGraphSchemaVersions,
embeddings: myappGraphEmbeddings,
} = typegraphTables;
```
Then pass the same tables to the backend:
```typescript
import { createSqliteBackend } from "@nicia-ai/typegraph/sqlite";
import { typegraphTables } from "./schema";
const backend = createSqliteBackend(db, { tables: typegraphTables });
```
### Adding TypeGraph Indexes
The table factory functions also accept `indexes`, which drizzle-kit will include in migrations:
```ts
// schema.ts
import { createSqliteTables } from "@nicia-ai/typegraph/sqlite";
import { defineNodeIndex } from "@nicia-ai/typegraph/indexes";
import { Person } from "./graph";
const personEmail = defineNodeIndex(Person, { fields: ["email"] });
export const typegraphTables = createSqliteTables({}, { indexes: [personEmail] });
```
For PostgreSQL, use `createPostgresTables` from `@nicia-ai/typegraph/postgres`.
See [Indexes](/performance/indexes) for covering fields, partial indexes, and profiler integration.
If you only need PostgreSQL adapter exports, import from `@nicia-ai/typegraph/postgres`:
```typescript
import { createPostgresBackend, tables } from "@nicia-ai/typegraph/postgres";
```
### PostgreSQL with pgvector
For PostgreSQL with vector search, ensure the pgvector extension is enabled
before running migrations:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
```
Then in your schema:
```typescript
// schema.ts
export * from "@nicia-ai/typegraph/postgres";
export const users = pgTable("users", { ... });
```
**When to use:**
- You already use drizzle-kit for migrations
- You want a single migration workflow for all tables
- You need Cloudflare D1 or other platforms that require drizzle-kit migrations
**Advantages over raw SQL migrations:**
- Single source of truth for schema
- Type-safe schema in TypeScript
- Drizzle-kit handles migration diffs automatically
- Works with all drizzle-kit supported platforms
## Separate Database
Use a dedicated database when you want isolation between your application data
and graph data.
```typescript
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { createPostgresBackend, generatePostgresMigrationSQL } from "@nicia-ai/typegraph/postgres";
// Application database (your existing setup)
const appPool = new Pool({ connectionString: process.env.APP_DATABASE_URL });
const appDb = drizzle(appPool);
// Dedicated TypeGraph database
const graphPool = new Pool({ connectionString: process.env.GRAPH_DATABASE_URL });
const graphDb = drizzle(graphPool);
await graphPool.query(generatePostgresMigrationSQL());
const backend = createPostgresBackend(graphDb);
const store = createStore(graph, backend);
```
**When to use:**
- Your primary database doesn't support required features (e.g., pgvector)
- You want independent scaling for graph operations
- Compliance requires data separation
- You're adding graph capabilities to a legacy system
**Considerations:**
- No cross-database transactions (use eventual consistency patterns)
- Sync data between databases via application logic or events
- Separate backup/restore procedures
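The event-based sync mentioned above can be sketched as follows. The event shape and the `upsertUser` callback are hypothetical stand-ins for your own pipeline; the important property is that the consumer is idempotent, because without cross-database transactions an event may be delivered more than once:

```typescript
// Illustrative event type emitted after the app database commits.
type UserEvent = { type: "user.created"; id: string; name: string };

// Consumer factory: takes an idempotent upsert into the graph store and
// returns an event handler. `upsertUser` stands in for logic like "query
// the graph by external id, create the node if missing, else update it".
function createGraphSyncConsumer(
  upsertUser: (id: string, name: string) => Promise<void>,
) {
  return async (event: UserEvent): Promise<void> => {
    if (event.type === "user.created") {
      // Reapplying the same event must be a no-op
      await upsertUser(event.id, event.name);
    }
  };
}
```

The app writes to its own database first, emits the event after commit, and the consumer mirrors the change into the graph database; a redelivered event simply reapplies the same upsert.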
## In-Memory (Ephemeral Graphs)
Use in-memory SQLite for temporary graphs, caching, or computation.
```typescript
import { createLocalSqliteBackend } from "@nicia-ai/typegraph/sqlite/local";
function createEphemeralStore(graph: GraphDef) {
const { backend } = createLocalSqliteBackend();
return createStore(graph, backend);
}
// Use case: Build a temporary graph for computation
async function computeRecommendations(userId: string) {
const tempStore = createEphemeralStore(recommendationGraph);
// Load relevant data into temporary graph
const userData = await fetchUserData(userId);
await populateGraph(tempStore, userData);
// Run graph algorithms
const results = await tempStore
.query()
.from("User", "u")
.whereNode("u", (u) => u.id.eq(userId))
.traverse("similar", "s")
.to("Product", "p")
.select((ctx) => ctx.p)
.execute();
return results;
}
```
**When to use:**
- Temporary computation graphs
- Request-scoped graph state
- Graph-based caching with expiration
- Isolated test fixtures
**Considerations:**
- Data lost on process termination
- Memory usage scales with graph size
- No persistence—rebuild on restart
## Hybrid Overlay (Graph on Existing Data)
Add graph relationships on top of existing relational data without migrating
your data model. Your existing tables remain the source of truth; TypeGraph
stores only the relationships and graph-specific metadata.
Use the `externalRef()` helper to create type-safe references to external tables:
```typescript
import {
createExternalRef,
defineEdge,
defineGraph,
defineNode,
embedding,
externalRef,
} from "@nicia-ai/typegraph";
import { z } from "zod";
// Define nodes that reference your existing tables
const User = defineNode("User", {
schema: z.object({
// Type-safe reference to your existing users table
source: externalRef("users"),
// Denormalized fields for graph queries (optional)
displayName: z.string().optional(),
}),
});
const Document = defineNode("Document", {
schema: z.object({
source: externalRef("documents"),
embedding: embedding(1536).optional(),
}),
});
// Graph-only relationships not in your relational schema
const relatedTo = defineEdge("relatedTo", {
schema: z.object({
relationship: z.enum(["cites", "extends", "contradicts"]),
confidence: z.number().min(0).max(1),
}),
});
const authored = defineEdge("authored");
const graph = defineGraph({
id: "document_graph",
nodes: { User, Document },
edges: {
relatedTo: { type: relatedTo, from: [Document], to: [Document] },
authored: { type: authored, from: [User], to: [Document] },
},
});
```
The `externalRef()` helper validates that references include both the table name
and ID, catching errors at insert time:
```typescript
// Valid: includes table and id
await store.nodes.Document.create({
source: { table: "documents", id: "doc_123" },
});
// Error: wrong table name (caught by TypeScript and runtime validation)
await store.nodes.Document.create({
source: { table: "users", id: "doc_123" }, // Type error!
});
// Use createExternalRef() for a cleaner API
const docRef = createExternalRef("documents");
await store.nodes.Document.create({
source: docRef("doc_456"),
});
```
**Syncing with external data:**
```typescript
// Sync helper: Create or update graph node from app data
async function syncDocument(store: Store, appDocument: AppDocument) {
const existing = await store
.query()
.from("Document", "d")
.whereNode("d", (d) => d.source.get("id").eq(appDocument.id))
.select((ctx) => ctx.d)
.first();
if (existing) {
await store.nodes.Document.update(existing.id, {
embedding: await generateEmbedding(appDocument.content),
});
return existing;
}
return store.nodes.Document.create({
source: { table: "documents", id: appDocument.id },
embedding: await generateEmbedding(appDocument.content),
});
}
// Query combining graph traversal with app data hydration
async function findRelatedDocuments(documentId: string) {
// Get graph relationships
const related = await store
.query()
.from("Document", "d")
.whereNode("d", (d) => d.source.get("id").eq(documentId))
.traverse("relatedTo", "r")
.to("Document", "related")
.select((ctx) => ({
source: ctx.related.source,
relationship: ctx.r.relationship,
confidence: ctx.r.confidence,
}))
.execute();
// Hydrate with full data from app database
const externalIds = related.map((r) => r.source.id);
const fullDocuments = await appDb
.select()
.from(documents)
.where(inArray(documents.id, externalIds));
return related.map((r) => ({
...r,
document: fullDocuments.find((d) => d.id === r.source.id),
}));
}
```
**When to use:**
- Adding graph capabilities to an existing application
- Semantic search over existing content
- Relationship discovery without schema changes
- Gradual migration from relational to graph thinking
**Considerations:**
- Maintain sync between app data and graph nodes
- Decide what to denormalize (tradeoff: query speed vs. sync complexity)
- The `table` field in `externalRef` enables referencing multiple external sources
## Background Embedding Workers
Decouple embedding generation from request handling using background jobs.
```typescript
// job-queue.ts - Define the embedding job
interface EmbeddingJob {
nodeType: string;
nodeId: string;
content: string;
}
// worker.ts - Process embedding jobs
import { createStore } from "@nicia-ai/typegraph";
async function processEmbeddingJob(job: EmbeddingJob) {
const { nodeType, nodeId, content } = job;
// Generate embedding (expensive operation)
const embedding = await openai.embeddings.create({
model: "text-embedding-ada-002",
input: content,
});
// Update the node
const collection = store.nodes[nodeType as keyof typeof store.nodes];
await collection.update(nodeId, {
embedding: embedding.data[0].embedding,
});
}
// api-handler.ts - Enqueue jobs on create/update
async function createDocument(data: DocumentInput) {
// Create node without embedding (fast)
const doc = await store.nodes.Document.create({
title: data.title,
content: data.content,
// embedding: undefined - will be populated by worker
});
// Enqueue embedding job (non-blocking)
await jobQueue.add("generate-embedding", {
nodeType: "Document",
nodeId: doc.id,
content: data.content,
});
return doc;
}
```
**Batch processing for bulk imports:**
```typescript
async function backfillEmbeddings(batchSize = 100) {
let processed = 0;
while (true) {
// Find nodes missing embeddings
const nodes = await store
.query()
.from("Document", "d")
.whereNode("d", (d) => d.embedding.isNull())
.select((ctx) => ({
id: ctx.d.id,
content: ctx.d.content,
}))
.limit(batchSize)
.execute();
if (nodes.length === 0) break;
// Batch embed
const embeddings = await openai.embeddings.create({
model: "text-embedding-ada-002",
input: nodes.map((n) => n.content),
});
// Batch update
await store.transaction(async (tx) => {
for (const [i, node] of nodes.entries()) {
await tx.nodes.Document.update(node.id, {
embedding: embeddings.data[i].embedding,
});
}
});
processed += nodes.length;
console.log(`Processed ${processed} documents`);
}
}
```
**When to use:**
- Embedding generation is slow (100-500ms per call)
- You want fast API response times
- Bulk importing existing content
- Retry logic for API failures
**Considerations:**
- Handle job failures and retries
- Consider rate limits on embedding APIs
- Queries on `embedding` should handle null values during population
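A retry wrapper for the embedding call might look like the sketch below. `withRetry`, the attempt count, and the delays are illustrative, not TypeGraph APIs:

```typescript
// Hypothetical retry helper: wraps a flaky async call (e.g. an embedding
// API request) with exponential backoff before giving up.
async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, baseDelayMs = 250 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // Exponential backoff: 250ms, 500ms, 1000ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  // All attempts failed: surface the last error to the job queue's
  // own retry/dead-letter handling
  throw lastError;
}
```

In `processEmbeddingJob`, you would wrap the expensive call, e.g. `await withRetry(() => openai.embeddings.create({ model, input: content }))`, and let the queue handle anything that still fails.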
## Testing
For test setup patterns, seed data strategies, and profiler-based index coverage checks,
see the dedicated [Testing](/testing) guide.
## Deployment Patterns
### Edge and Serverless
Deploy TypeGraph at the edge using SQLite-compatible runtimes.
> **Note:** Edge environments cannot use `@nicia-ai/typegraph/sqlite/local` because it
> depends on `better-sqlite3`, a native Node.js addon. Instead, use
> `@nicia-ai/typegraph/sqlite`, which is driver-agnostic.
**Cloudflare Workers with D1:**
```typescript
// worker.ts
import { drizzle } from "drizzle-orm/d1";
import { createSqliteBackend } from "@nicia-ai/typegraph/sqlite";
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const db = drizzle(env.DB);
const backend = createSqliteBackend(db);
const store = createStore(graph, backend);
// Handle request with graph queries
const results = await store
.query()
.from("Document", "d")
.whereNode("d", (d) => d.embedding.similarTo(queryEmbedding, 5))
.select((ctx) => ctx.d)
.execute();
return Response.json(results);
},
};
```
**Turso (libSQL):**
```typescript
import { createClient } from "@libsql/client";
import { drizzle } from "drizzle-orm/libsql";
import { createSqliteBackend } from "@nicia-ai/typegraph/sqlite";
const client = createClient({
url: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN,
});
const db = drizzle(client);
const backend = createSqliteBackend(db);
const store = createStore(graph, backend);
```
> For Turso and D1, use [drizzle-kit managed migrations](#drizzle-kit-managed-migrations-recommended)
> to set up the schema.
**Bun with built-in SQLite:**
Bun runs in a local (non-edge) environment, so you can either use the Node.js-compatible
path with `better-sqlite3`, or use `bun:sqlite` with drizzle-kit managed migrations:
```typescript
import { Database } from "bun:sqlite";
import { drizzle } from "drizzle-orm/bun-sqlite";
import { createSqliteBackend } from "@nicia-ai/typegraph/sqlite";
const sqlite = new Database("app.db");
const db = drizzle(sqlite);
const backend = createSqliteBackend(db);
const store = createStore(graph, backend);
```
> Use [drizzle-kit managed migrations](#drizzle-kit-managed-migrations-recommended)
> to set up the schema with bun:sqlite.
**When to use:**
- Low-latency requirements (data close to users)
- Serverless functions with graph queries
- Read-heavy workloads
**Considerations:**
- SQLite limitations (single-writer, no pgvector)
- Cold start times include DB initialization
- sqlite-vec for vector search (cosine/L2 only)
### Read Replica Separation
Route heavy graph queries to read replicas while writes go to primary.
```typescript
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { createPostgresBackend } from "@nicia-ai/typegraph/postgres";
// Primary for writes
const primaryPool = new Pool({
connectionString: process.env.PRIMARY_DATABASE_URL,
max: 10,
});
const primaryDb = drizzle(primaryPool);
const primaryBackend = createPostgresBackend(primaryDb);
const primaryStore = createStore(graph, primaryBackend);
// Replica for reads
const replicaPool = new Pool({
connectionString: process.env.REPLICA_DATABASE_URL,
max: 50, // Higher pool for read-heavy workloads
});
const replicaDb = drizzle(replicaPool);
const replicaBackend = createPostgresBackend(replicaDb);
const replicaStore = createStore(graph, replicaBackend);
// Route based on operation
export const stores = {
write: primaryStore,
read: replicaStore,
};
// Usage
async function searchDocuments(query: string) {
// Read from replica
return stores.read
.query()
.from("Document", "d")
.whereNode("d", (d) => d.embedding.similarTo(queryEmbedding, 10))
.select((ctx) => ctx.d)
.execute();
}
async function createDocument(data: DocumentInput) {
// Write to primary
return stores.write.nodes.Document.create(data);
}
```
**When to use:**
- Heavy read workloads (semantic search, graph traversals)
- Write/read ratio is heavily skewed toward reads
- Need to scale read capacity independently
**Considerations:**
- Replication lag means reads may be slightly stale
- Don't use replica for read-after-write scenarios
- Monitor replication lag in production
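One way to handle the read-after-write caveat is to pin reads to the primary for a short window after each write. The router below is an illustrative sketch, not a TypeGraph API; the window length is an assumption and should reflect your observed replication lag (and in a multi-user server you would scope one router per user or session, not process-wide):

```typescript
// Routes reads to the replica, except during a short window after a
// write, when they go to the primary so the caller sees its own writes.
class ReadWriteRouter<S> {
  private lastWriteAt = 0;
  constructor(
    private primary: S,
    private replica: S,
    private windowMs = 2000, // assumed window; tune to replication lag
  ) {}
  forWrite(): S {
    this.lastWriteAt = Date.now();
    return this.primary;
  }
  forRead(): S {
    // Recent write: read from primary to avoid stale results
    const recent = Date.now() - this.lastWriteAt < this.windowMs;
    return recent ? this.primary : this.replica;
  }
}
```

Usage mirrors the `stores` object above: `const router = new ReadWriteRouter(primaryStore, replicaStore)`, then call `router.forWrite()` before mutations and `router.forRead()` before queries.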
### Multi-Tenant Architecture
Three approaches for multi-tenant deployments, each with different tradeoffs.
#### Option 1: Shared tables with tenant isolation (simplest)
```typescript
import { defineNode, defineGraph } from "@nicia-ai/typegraph";
// Include tenantId in your node schemas
const Document = defineNode("Document", {
schema: z.object({
tenantId: z.string(),
title: z.string(),
content: z.string(),
}),
});
// Always filter by tenant in queries
function createTenantQuery(store: Store, tenantId: string) {
return {
searchDocuments: (query: string) =>
store
.query()
.from("Document", "d")
.whereNode("d", (d) =>
d.tenantId.eq(tenantId).and(
d.embedding.similarTo(queryEmbedding, 10)
)
)
.select((ctx) => ctx.d)
.execute(),
createDocument: (data: Omit<DocumentInput, "tenantId">) =>
store.nodes.Document.create({ ...data, tenantId }),
};
}
// Middleware extracts tenant and creates scoped API
function withTenant(req: Request) {
const tenantId = req.headers.get("x-tenant-id")!;
return createTenantQuery(store, tenantId);
}
```
#### Option 2: Schema per tenant (PostgreSQL)
```typescript
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
async function createTenantStore(tenantId: string) {
  const schemaName = `tenant_${tenantId}`;
  // Run schema creation and migrations on a single connection so the
  // search_path change actually applies to the migration statements
  // (pool.query may use a different connection per call)
  const client = await pool.connect();
  try {
    await client.query(`CREATE SCHEMA IF NOT EXISTS ${schemaName}`);
    await client.query(`SET search_path TO ${schemaName}`);
    await client.query(generatePostgresMigrationSQL());
    await client.query(`RESET search_path`);
  } finally {
    client.release();
  }
  // Dedicated pool whose connections default to the tenant schema
  const tenantPool = new Pool({
    connectionString: process.env.DATABASE_URL,
    options: `-c search_path=${schemaName}`,
  });
  const db = drizzle(tenantPool);
  const backend = createPostgresBackend(db);
  return createStore(graph, backend);
}
```
// Cache tenant stores
const tenantStores = new Map<string, Store>();
async function getTenantStore(tenantId: string): Promise<Store> {
if (!tenantStores.has(tenantId)) {
tenantStores.set(tenantId, await createTenantStore(tenantId));
}
return tenantStores.get(tenantId)!;
}
```
#### Option 3: Database per tenant (strongest isolation)
```typescript
interface TenantConfig {
id: string;
databaseUrl: string;
}
async function createTenantStore(config: TenantConfig) {
const pool = new Pool({ connectionString: config.databaseUrl });
await pool.query(generatePostgresMigrationSQL());
const db = drizzle(pool);
const backend = createPostgresBackend(db);
return {
store: createStore(graph, backend),
close: () => pool.end(),
};
}
// Connection manager with LRU eviction
class TenantConnectionManager {
private stores = new Map<string, { store: Store; close: () => Promise<void> }>();
private maxConnections = 100;
async getStore(tenantId: string): Promise<Store> {
if (!this.stores.has(tenantId)) {
if (this.stores.size >= this.maxConnections) {
await this.evictOldest();
}
const config = await fetchTenantConfig(tenantId);
this.stores.set(tenantId, await createTenantStore(config));
}
return this.stores.get(tenantId)!.store;
}
private async evictOldest() {
const [oldestId, oldest] = this.stores.entries().next().value!;
await oldest.close();
this.stores.delete(oldestId);
}
}
```
**Comparison:**
| Approach | Isolation | Complexity | Scaling | Cost |
|----------|-----------|------------|---------|------|
| Shared tables | Low (row-level) | Low | Single DB | Lowest |
| Schema per tenant | Medium | Medium | Single DB, separate schemas | Low |
| Database per tenant | High | High | Independent DBs | Highest |
**When to use each:**
- **Shared tables**: SaaS with many small tenants, cost-sensitive
- **Schema per tenant**: Moderate isolation needs, PostgreSQL only
- **Database per tenant**: Enterprise customers requiring data isolation, compliance requirements
## Next Steps
- [Quick Start](/getting-started) - Basic setup and first graph
- [Semantic Search](/semantic-search) - Vector embeddings and similarity
- [Performance](/performance/overview) - Optimization strategies
# Limitations
> Known constraints and backend-specific limitations
This page documents TypeGraph's known limitations and constraints.
## Cloudflare D1 Transactions
Cloudflare D1 does not support atomic transactions. When using D1, calling
`store.transaction()` will throw a `ConfigurationError`.
```typescript
// Throws ConfigurationError on D1
await store.transaction(async (tx) => {
// ...
});
```
**Workaround:** Execute operations directly without transaction wrapper. Operations
execute sequentially but without atomicity guarantees.
```typescript
// Alternative: execute operations directly (not atomic)
const person = await store.nodes.Person.create({ name: "Alice" });
const company = await store.nodes.Company.create({ name: "Acme" });
await store.edges.worksAt.create(person, company, { role: "Engineer" });
```
**Check support programmatically:**
```typescript
if (backend.capabilities.transactions) {
await store.transaction(async (tx) => {
/* ... */
});
} else {
// Handle non-transactional execution
}
```
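The capability check can be wrapped in a small helper so call sites don't branch. `runAtomicIfSupported` is a hypothetical name; it assumes only the `store.transaction()` shape shown above:

```typescript
// Minimal store shape assumed for this sketch.
interface TxCapableStore {
  transaction(fn: (tx: TxCapableStore) => Promise<void>): Promise<void>;
}

// Run a unit of work atomically when the backend supports transactions,
// otherwise run it directly against the store. In the fallback path,
// partial failures are NOT rolled back.
async function runAtomicIfSupported(
  store: TxCapableStore,
  supportsTransactions: boolean,
  work: (s: TxCapableStore) => Promise<void>,
): Promise<void> {
  if (supportsTransactions) {
    return store.transaction(work);
  }
  return work(store);
}
```

Call it with `backend.capabilities.transactions` as the flag; on D1 the work runs sequentially, and on other backends it gets a real transaction.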
## Recursive Traversal Depth
Variable-length traversals are bounded by two depth caps, plus cycle prevention:
1. Unbounded traversals (no `maxHops` option) are capped at 100 hops.
2. Explicit `maxHops` values are honored up to 1000 hops; values above 1000 throw.
3. Cycle prevention is on by default. To skip cycle checks for speed, opt into
`cyclePolicy: "allow"` (which may revisit nodes across hops).
This prevents runaway queries while still supporting deep, intentionally bounded traversals.
```typescript
// Implicitly limited to 100 hops
store
.query()
.from("Person", "p")
.traverse("reportsTo", "e")
.recursive()
.to("Person", "manager");
// Explicit limits up to 1000 are honored
store
.query()
.from("Person", "p")
.traverse("reportsTo", "e")
.recursive({ maxHops: 200 }) // honored
.to("Person", "manager");
// Explicit limits above 1000 throw
store
.query()
.from("Person", "p")
.traverse("reportsTo", "e")
.recursive({ maxHops: 2000 }) // throws
.to("Person", "manager");
```
The unbounded-traversal limit is defined as `MAX_RECURSIVE_DEPTH`:
```typescript
import { MAX_RECURSIVE_DEPTH } from "@nicia-ai/typegraph";
// MAX_RECURSIVE_DEPTH = 100
```
## Connection Management
TypeGraph does not manage database connections. You are responsible for:
1. **Creating and configuring** the database connection
2. **Implementing connection pooling** for production use
3. **Closing connections** when done
```typescript
import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import { createStore } from "@nicia-ai/typegraph";
import { createSqliteBackend, generateSqliteMigrationSQL } from "@nicia-ai/typegraph/sqlite";
// You manage the connection
const sqlite = new Database("app.db");
sqlite.exec(generateSqliteMigrationSQL());
const db = drizzle(sqlite);
const backend = createSqliteBackend(db);
const store = createStore(graph, backend);
// You close the connection
sqlite.close();
```
For production deployments, use connection pooling:
```typescript
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { createPostgresBackend } from "@nicia-ai/typegraph/postgres";
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 20, // Maximum connections
});
const db = drizzle(pool);
const backend = createPostgresBackend(db);
```
The `store.close()` method is a no-op. Connection cleanup is your responsibility.
## Predicate Serialization
`where` predicates in unique constraints cannot be serialized. If you use schema
serialization for versioning or migration, predicates are stored as the placeholder
`"[predicate]"` and cannot be reconstructed.
```typescript
// This predicate works at runtime...
unique({
name: "email_unique_when_active",
fields: ["email"],
where: (props) => props.status.isNotNull(),
});
// ...but serializes as:
// { "where": "[predicate]" }
```
**Workaround:** For full schema serialization support, avoid predicates in unique constraints.
Use application-level validation instead.
## Vector Search Backend Requirements
Vector similarity search requires specific database extensions:
| Backend | Requirement |
|---------|-------------|
| PostgreSQL | pgvector extension |
| SQLite | sqlite-vec extension |
| D1 | Not supported |
Using vector predicates on unsupported backends throws `UnsupportedPredicateError`:
```typescript
try {
await store
.query()
.from("Document", "d")
.whereNode("d", (d) => d.embedding.similarTo(queryVector, 10))
.execute();
} catch (error) {
if (error instanceof UnsupportedPredicateError) {
// Vector search not available on this backend
}
}
```
## Query Builder Type Inference
Complex query chains may occasionally require explicit type annotations when TypeScript cannot
infer the full type. This is rare but can occur with deeply nested selects or unions.
```typescript
// If type inference fails, add explicit type
const results = await store
.query()
.from("Person", "p")
.select((ctx) => ({
name: ctx.p.name as string, // Explicit annotation
}))
.execute();
```
## Bulk Operation Limits
Bulk operations (`bulkCreate`, `bulkInsert`, `bulkUpsertById`, `bulkDelete`) have practical limits based on your database:
| Database | Recommended Batch Size |
|----------|----------------------|
| SQLite | 500-1000 items |
| PostgreSQL | 1000-5000 items |
For larger datasets, batch your operations:
```typescript
const BATCH_SIZE = 1000;
for (let i = 0; i < items.length; i += BATCH_SIZE) {
const batch = items.slice(i, i + BATCH_SIZE);
await store.nodes.Person.bulkCreate(batch);
}
```
## No Built-in Graph Algorithms
TypeGraph deliberately excludes graph algorithms. The following are **not** provided:
- Shortest path (Dijkstra, A*)
- PageRank
- Community detection
- Centrality measures
- Graph partitioning
For these use cases, export your data to a specialized graph processing library or database.
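If you only need something lightweight, you can export your edges into plain data structures and run the algorithm in application code. A minimal sketch (the `Edge` shape and graph contents are illustrative; in practice you would populate the list via `store.query().stream()`):

```typescript
// Export edges into an adjacency list, then run BFS shortest path in
// application code. The Edge shape here is illustrative.
type Edge = { fromId: string; toId: string };

function buildAdjacency(edges: Edge[]): Map<string, string[]> {
  const adj = new Map<string, string[]>();
  for (const { fromId, toId } of edges) {
    if (!adj.has(fromId)) adj.set(fromId, []);
    adj.get(fromId)!.push(toId);
  }
  return adj;
}

// Unweighted shortest-path length, or undefined if the goal is unreachable.
function bfsDistance(
  adj: Map<string, string[]>,
  start: string,
  goal: string,
): number | undefined {
  const seen = new Set([start]);
  let frontier = [start];
  let depth = 0;
  while (frontier.length > 0) {
    if (frontier.includes(goal)) return depth;
    const next: string[] = [];
    for (const node of frontier) {
      for (const neighbor of adj.get(node) ?? []) {
        if (!seen.has(neighbor)) {
          seen.add(neighbor);
          next.push(neighbor);
        }
      }
    }
    frontier = next;
    depth++;
  }
  return undefined;
}
```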
## Single Database Deployment
TypeGraph is designed for single-database deployments. It does not support:
- Distributed storage across multiple databases
- Sharding
- Cross-database queries
- Replication coordination
For distributed graph workloads, consider a dedicated graph database.
## Temporal Query Limitations
Temporal queries (`asOf`, `includeEnded`) work correctly but have some constraints:
- Point-in-time queries cannot be combined with streaming (`.stream()`)
- Historical data is only available if temporal fields (`validFrom`, `validTo`) were populated at creation time
- Clock skew between application servers can affect temporal accuracy
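As a mental model, the point-in-time check can be sketched in plain TypeScript. This is an illustrative helper, not part of the TypeGraph API, and it assumes half-open validity intervals over ISO-8601 strings in a single timezone; confirm both assumptions against your data:

```typescript
// Illustrative only: a record is visible at time t when
// validFrom <= t and (validTo is unset or t < validTo).
// ISO-8601 strings in the same timezone compare correctly as strings.
function isValidAt(
  validFrom: string | undefined,
  validTo: string | undefined,
  asOf: string,
): boolean {
  if (validFrom !== undefined && asOf < validFrom) return false;
  if (validTo !== undefined && asOf >= validTo) return false;
  return true;
}
```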
## Schema Migration Constraints
Automatic migrations (`createStoreWithSchema`) only handle additive changes:
| Change Type | Auto-Migrated |
|-------------|---------------|
| Add new node type | Yes |
| Add new edge type | Yes |
| Add optional property | Yes |
| Add required property | No |
| Remove property | No |
| Rename type | No |
| Change property type | No |
Breaking changes throw `MigrationError` and require manual migration.
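The additive-only rule in the table can be pictured as a plain classifier. This is an illustration of the policy, not the library's internals, and the `SchemaChange` type is hypothetical:

```typescript
// Hypothetical change descriptor mirroring the table above.
type SchemaChange =
  | { kind: "addNodeType" }
  | { kind: "addEdgeType" }
  | { kind: "addProperty"; required: boolean }
  | { kind: "removeProperty" }
  | { kind: "renameType" }
  | { kind: "changePropertyType" };

// Only additive changes are auto-migrated; everything else is breaking.
function isAutoMigratable(change: SchemaChange): boolean {
  switch (change.kind) {
    case "addNodeType":
    case "addEdgeType":
      return true;
    case "addProperty":
      return !change.required; // required properties break existing rows
    default:
      return false;
  }
}
```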
# Multiple Graphs
> Using separate graph definitions for different domains in the same application
TypeGraph supports multiple graphs for applications whose distinct data domains benefit from separate graph definitions.
## When to Use Multiple Graphs
Use separate graphs when you have:
- **Distinct domains**: A RAG system for documents and a business network for suppliers have different node types,
edge semantics, and query patterns
- **Independent lifecycles**: One graph might evolve rapidly while another is stable
- **Team ownership**: Different teams own different graphs, with separate schema review processes
- **Different retention policies**: Document chunks might be ephemeral while business relationships are long-lived
**Don't use multiple graphs** when:
- You need cross-graph queries or traversals (use a single graph with ontology relations instead)
- The domains are closely related (e.g., Users and Documents that Users author)
- You're trying to solve multi-tenancy (use tenant isolation patterns instead)
## Example: Documents and Business Network
A company needs two graphs:
1. **Documents graph**: Powers semantic search over internal documents
2. **Organization graph**: Tracks suppliers, partners, and contracts
### Defining the Graphs
```typescript
// graphs/documents.ts
import { z } from "zod";
import { defineNode, defineEdge, defineGraph, embedding } from "@nicia-ai/typegraph";
const Document = defineNode("Document", {
schema: z.object({
title: z.string(),
source: z.string(),
createdAt: z.string().datetime(),
}),
});
const Chunk = defineNode("Chunk", {
schema: z.object({
content: z.string(),
embedding: embedding(1536),
position: z.number().int(),
}),
});
const hasChunk = defineEdge("hasChunk");
export const documentsGraph = defineGraph({
id: "documents",
nodes: {
Document: { type: Document },
Chunk: { type: Chunk },
},
edges: {
hasChunk: { type: hasChunk, from: [Document], to: [Chunk] },
},
});
```
```typescript
// graphs/organization.ts
import { z } from "zod";
import { defineNode, defineEdge, defineGraph, subClassOf } from "@nicia-ai/typegraph";
const Organization = defineNode("Organization", {
schema: z.object({
name: z.string(),
domain: z.string().optional(),
}),
});
const Supplier = defineNode("Supplier", {
schema: z.object({
name: z.string(),
domain: z.string().optional(),
category: z.enum(["materials", "services", "logistics"]),
}),
});
const Partner = defineNode("Partner", {
schema: z.object({
name: z.string(),
domain: z.string().optional(),
partnershipLevel: z.enum(["bronze", "silver", "gold"]),
}),
});
const Contract = defineNode("Contract", {
schema: z.object({
title: z.string(),
value: z.number(),
startDate: z.string().datetime(),
endDate: z.string().datetime().optional(),
status: z.enum(["draft", "active", "expired"]).default("draft"),
}),
});
const supplies = defineEdge("supplies");
const hasContract = defineEdge("hasContract");
export const organizationGraph = defineGraph({
id: "organization",
nodes: {
Organization: { type: Organization },
Supplier: { type: Supplier },
Partner: { type: Partner },
Contract: { type: Contract },
},
edges: {
supplies: { type: supplies, from: [Supplier], to: [Organization] },
hasContract: { type: hasContract, from: [Organization], to: [Contract] },
},
ontology: [
subClassOf(Supplier, Organization),
subClassOf(Partner, Organization),
],
});
```
### Creating Stores
Both graphs can share the same database backend. Each graph's data is isolated by its `id`.
```typescript
// stores.ts
import { createStore } from "@nicia-ai/typegraph";
import { createPostgresBackend } from "@nicia-ai/typegraph/postgres";
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
import { documentsGraph } from "./graphs/documents";
import { organizationGraph } from "./graphs/organization";
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = drizzle(pool);
const backend = createPostgresBackend(db);
// Same backend, different stores
export const documentsStore = createStore(documentsGraph, backend);
export const organizationStore = createStore(organizationGraph, backend);
```
### Using the Stores
Each store is fully independent with its own typed API:
```typescript
// Semantic search in documents
async function searchDocuments(query: string, embedding: number[]) {
return documentsStore
.query()
.from("Chunk", "c")
.whereNode("c", (c) => c.embedding.similarTo(embedding, 10))
.select((ctx) => ({
content: ctx.c.content,
position: ctx.c.position,
}))
.execute();
}
// Business queries in organization
async function getActiveSuppliers(category: string) {
return organizationStore
.query()
.from("Supplier", "s")
.whereNode("s", (s) => s.category.eq(category))
.traverse("hasContract", "e")
.to("Contract", "c")
.whereNode("c", (c) => c.status.eq("active"))
.select((ctx) => ({
supplier: ctx.s.name,
contract: ctx.c.title,
value: ctx.c.value,
}))
.execute();
}
```
## Coordinating Across Graphs
Since cross-graph queries aren't supported, coordinate at the application level.
### Shared Identifiers
Use consistent IDs when entities relate across graphs:
```typescript
// When ingesting a supplier's documents, use the supplier ID as a reference
async function ingestSupplierDocument(
supplierId: string,
title: string,
content: string,
embedding: number[]
) {
// Store document with supplier reference in metadata
const doc = await documentsStore.nodes.Document.create({
title,
source: `supplier:${supplierId}`,
createdAt: new Date().toISOString(),
});
const chunk = await documentsStore.nodes.Chunk.create({
content,
embedding,
position: 0,
});
await documentsStore.edges.hasChunk.create(doc, chunk, {});
return doc;
}
// Later, find documents for a supplier
async function getSupplierDocuments(supplierId: string) {
return documentsStore
.query()
.from("Document", "d")
.whereNode("d", (d) => d.source.eq(`supplier:${supplierId}`))
.select((ctx) => ctx.d)
.execute();
}
```
### Application-Level Joins
Combine results from multiple graphs in your application:
```typescript
interface SupplierWithDocuments {
supplier: { name: string; category: string };
documents: Array<{ title: string }>;
}
async function getSupplierOverview(
supplierId: string
): Promise<SupplierWithDocuments> {
// Parallel queries to both graphs
const [supplier, documents] = await Promise.all([
organizationStore.nodes.Supplier.getById(supplierId),
getSupplierDocuments(supplierId),
]);
return {
supplier: {
name: supplier.name,
category: supplier.category,
},
documents: documents.map((d) => ({ title: d.title })),
};
}
```
### Event-Driven Sync
For loose coupling, use events to keep graphs in sync:
```typescript
// When a supplier is created, set up document ingestion
eventBus.on("supplier.created", async (event) => {
const { supplierId, name } = event.payload;
// Create a placeholder document node for future ingestion
await documentsStore.nodes.Document.create({
title: `${name} - Supplier Profile`,
source: `supplier:${supplierId}`,
createdAt: new Date().toISOString(),
});
});
// When a supplier is deleted, clean up related documents
eventBus.on("supplier.deleted", async (event) => {
const { supplierId } = event.payload;
const docs = await documentsStore
.query()
.from("Document", "d")
.whereNode("d", (d) => d.source.eq(`supplier:${supplierId}`))
.select((ctx) => ctx.d.id)
.execute();
for (const docId of docs) {
await documentsStore.nodes.Document.delete(docId);
}
});
```
## Separate Backends
For stronger isolation, use separate database connections:
```typescript
// Documents in PostgreSQL with pgvector for embeddings
const documentsPool = new Pool({
connectionString: process.env.DOCUMENTS_DATABASE_URL,
});
const documentsBackend = createPostgresBackend(drizzle(documentsPool));
export const documentsStore = createStore(documentsGraph, documentsBackend);
// Organization data in a separate database
const orgPool = new Pool({
connectionString: process.env.ORG_DATABASE_URL,
});
const orgBackend = createPostgresBackend(drizzle(orgPool));
export const organizationStore = createStore(organizationGraph, orgBackend);
```
**When to separate backends:**
- Different performance profiles (vector search vs. relational queries)
- Compliance requirements (PII in one database, analytics in another)
- Independent scaling needs
- Different backup/retention policies
## Schema Management
Each graph has independent schema versioning:
```typescript
import { createStoreWithSchema } from "@nicia-ai/typegraph";
// Each graph tracks its own schema version
const [documentsStore, docsSchemaResult] = await createStoreWithSchema(
documentsGraph,
backend
);
const [orgStore, orgSchemaResult] = await createStoreWithSchema(
organizationGraph,
backend
);
// Check migration status independently
if (docsSchemaResult.status === "migrated") {
console.log("Documents schema was migrated");
}
if (orgSchemaResult.status === "migrated") {
console.log("Organization schema was migrated");
}
```
## Caveats
**No cross-graph queries**: You cannot traverse from a node in one graph to a node in another. If you need this, consider:
- Merging the graphs into one with clear ontology separation
- Using application-level joins as shown above
**Separate ontology closures**: Each graph computes its own ontology closures (`subClassOf`, `implies`, and so on).
Ontology relations don't span graphs.
**Independent transactions**: A transaction in one store doesn't include the other. For cross-graph consistency, use
sagas or eventual consistency patterns.
**Shared tables**: When using the same backend, both graphs write to the same `typegraph_nodes` and `typegraph_edges`
tables, differentiated by `graph_id`. This is fine for most cases but means a database-level issue affects both
graphs.
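One way to approximate the cross-graph consistency mentioned above, without transactions, is a small compensation step: perform the second write, and undo the first if it fails. A hedged sketch, with hypothetical callbacks standing in for the actual store operations:

```typescript
// Compensation pattern for writes spanning two stores (names are
// hypothetical): if the second write fails, roll back the first.
async function createSupplierWithProfile(
  createSupplier: () => Promise<string>, // returns new supplier id
  createProfileDoc: (supplierId: string) => Promise<void>,
  deleteSupplier: (supplierId: string) => Promise<void>,
): Promise<string> {
  const supplierId = await createSupplier();
  try {
    await createProfileDoc(supplierId);
  } catch (error) {
    await deleteSupplier(supplierId); // compensate: undo the first write
    throw error;
  }
  return supplierId;
}
```

This gives eventual consistency rather than atomicity: a crash between the two writes still leaves an orphan, so pair it with a periodic cleanup job if that matters for your data.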
## Next Steps
- [Multi-Tenant SaaS](./examples/multi-tenant) - Isolating data by tenant within a single graph
- [Schema Migrations](./schema-management) - Versioning and migrations
- [Integration Patterns](./integration) - More deployment strategies
# Execute
> Running queries with execute(), paginate(), and stream()
Execute operations run your query and retrieve results. Use `execute()` for simple queries,
`paginate()` for cursor-based pagination, and `stream()` for processing large datasets.
## execute()
Run the query and return all results:
```typescript
const results = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.status.eq("active"))
.select((ctx) => ctx.p)
.execute();
// results: readonly Person[]
```
### Return Type
Returns a readonly array of the selected type:
```typescript
// TypeScript infers the shape from your selection
const results = await store
.query()
.from("Person", "p")
.select((ctx) => ({
name: ctx.p.name,
email: ctx.p.email,
}))
.execute();
// results: readonly { name: string; email: string | undefined }[]
```
## first()
Get the first result or `undefined`:
```typescript
const alice = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.email.eq("alice@example.com"))
.select((ctx) => ctx.p)
.first();
if (alice) {
console.log(alice.name);
}
```
## count()
Count matching results without fetching data:
```typescript
const activeCount = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.status.eq("active"))
.count();
// activeCount: number
```
## exists()
Check if any results exist:
```typescript
const hasActiveUsers = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.status.eq("active"))
.exists();
// hasActiveUsers: boolean
```
## Cursor Pagination
For large datasets, cursor-based pagination is more efficient than `limit`/`offset`: it uses
keyset pagination, whose performance doesn't degrade as you page deeper.
### paginate()
```typescript
const firstPage = await store
.query()
.from("Person", "p")
.select((ctx) => ({
id: ctx.p.id,
name: ctx.p.name,
}))
.orderBy("p", "name", "asc") // ORDER BY required
.paginate({ first: 20 });
```
### Pagination Result Shape
```typescript
{
data: readonly T[], // The actual results
hasNextPage: boolean, // More results available forward
hasPrevPage: boolean, // More results available backward
nextCursor: string | undefined, // Opaque cursor for next page
prevCursor: string | undefined, // Opaque cursor for previous page
}
```
### Forward Pagination
Use `first` and `after` to paginate forward:
```typescript
// Get first page
const page1 = await query.paginate({ first: 20 });
// Get next page using the cursor
if (page1.hasNextPage && page1.nextCursor) {
const page2 = await query.paginate({
first: 20,
after: page1.nextCursor,
});
}
```
### Backward Pagination
Use `last` and `before` to paginate backward:
```typescript
// Get last page
const lastPage = await query.paginate({ last: 20 });
// Get previous page
if (lastPage.hasPrevPage && lastPage.prevCursor) {
const prevPage = await query.paginate({
last: 20,
before: lastPage.prevCursor,
});
}
```
### Pagination Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `first` | `number` | Number of results from the start |
| `after` | `string` | Cursor to start after (forward pagination) |
| `last` | `number` | Number of results from the end |
| `before` | `string` | Cursor to start before (backward pagination) |
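To drain every page, loop on `nextCursor` until `hasNextPage` is false. A generic sketch, where the `paginate` callback stands in for `query.paginate()` and `Page<T>` mirrors the result shape above:

```typescript
// Minimal mirror of the pagination result shape documented above.
interface Page<T> {
  data: readonly T[];
  hasNextPage: boolean;
  nextCursor: string | undefined;
}

// Repeatedly fetch forward pages until the cursor is exhausted.
async function collectAllPages<T>(
  paginate: (args: { first: number; after?: string }) => Promise<Page<T>>,
  pageSize = 20,
): Promise<T[]> {
  const all: T[] = [];
  let after: string | undefined;
  for (;;) {
    const page = await paginate(
      after === undefined ? { first: pageSize } : { first: pageSize, after },
    );
    all.push(...page.data);
    if (!page.hasNextPage || page.nextCursor === undefined) return all;
    after = page.nextCursor;
  }
}
```

For truly large result sets, prefer `stream()` over draining pages into memory.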
### Pagination with Traversals
Pagination works with graph traversals:
```typescript
const employeesPage = await store
.query()
.from("Company", "c")
.whereNode("c", (c) => c.name.eq("Acme Corp"))
.traverse("worksAt", "e", { direction: "in" })
.to("Person", "p")
.select((ctx) => ({
id: ctx.p.id,
name: ctx.p.name,
role: ctx.e.role,
}))
.orderBy("p", "name", "asc")
.paginate({ first: 50 });
```
## Streaming
For very large datasets, use streaming to process results without loading everything into memory.
### stream()
```typescript
const stream = store
.query()
.from("Event", "e")
.select((ctx) => ctx.e)
.orderBy("e", "createdAt", "desc") // ORDER BY required
.stream({ batchSize: 1000 });
// Process results as they arrive
for await (const event of stream) {
console.log(event.title);
await processEvent(event);
}
```
### Batch Size
The `batchSize` option controls how many records are fetched per database query:
```typescript
// Smaller batches: Lower memory usage, more database queries
.stream({ batchSize: 100 })
// Larger batches: Higher memory usage, fewer database queries
.stream({ batchSize: 5000 })
// Default is 1000
.stream()
```
### Streaming with Processing
```typescript
async function exportAllUsers(): Promise<void> {
const stream = store
.query()
.from("User", "u")
.whereNode("u", (u) => u.status.eq("active"))
.select((ctx) => ({
id: ctx.u.id,
email: ctx.u.email,
name: ctx.u.name,
}))
.orderBy("u", "id", "asc")
.stream({ batchSize: 500 });
let count = 0;
for await (const user of stream) {
await exportToExternalSystem(user);
count++;
if (count % 1000 === 0) {
console.log(`Exported ${count} users...`);
}
}
console.log(`Export complete: ${count} users`);
}
```
## Prepared Queries
Prepared queries let you compile a query once and execute it many times with different parameter
values. This eliminates recompilation overhead for repeated query shapes.
### `param(name)`
Use `param()` to declare a named placeholder inside any predicate position:
```typescript
import { param } from "@nicia-ai/typegraph";

// Declares a named placeholder; the value is supplied at execute() time:
// p.name.eq(param("name"))
```
### `prepare()`
Call `.prepare()` on an executable query to pre-compile the AST and SQL. Returns a `PreparedQuery`
that can be executed with different bindings.
```typescript
const findByName = store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.name.eq(param("name")))
.select((ctx) => ctx.p)
.prepare();
// Execute with different bindings — no recompilation
const alices = await findByName.execute({ name: "Alice" });
const bobs = await findByName.execute({ name: "Bob" });
```
### Parameterized Bounds
Parameters work anywhere a scalar value is accepted:
```typescript
const findByAge = store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.age.between(param("minAge"), param("maxAge")))
.select((ctx) => ctx.p)
.prepare();
const youngAdults = await findByAge.execute({ minAge: 18, maxAge: 25 });
const seniors = await findByAge.execute({ minAge: 65, maxAge: 120 });
```
`prepared.execute(bindings)` validates bindings strictly: all declared parameters must be
provided, and unknown binding keys are rejected.
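The strict check can be pictured as follows; this is an illustrative sketch of the described behavior, not the library's actual implementation:

```typescript
// Returns a list of binding problems: every declared parameter must be
// provided, and no extra keys are allowed.
function validateBindings(
  declared: ReadonlySet<string>,
  bindings: Record<string, unknown>,
): string[] {
  const errors: string[] = [];
  for (const name of declared) {
    if (!(name in bindings)) errors.push(`missing parameter: ${name}`);
  }
  for (const key of Object.keys(bindings)) {
    if (!declared.has(key)) errors.push(`unknown binding: ${key}`);
  }
  return errors;
}
```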
### Supported Positions
`param()` works with any scalar predicate:
| Predicate | Example |
|-----------|---------|
| `eq` / `neq` | `p.name.eq(param("name"))` |
| `gt` / `gte` / `lt` / `lte` | `p.age.gt(param("minAge"))` |
| `between` | `p.age.between(param("lo"), param("hi"))` |
| `contains` | `p.name.contains(param("substr"))` |
| `startsWith` / `endsWith` | `p.name.startsWith(param("prefix"))` |
| `like` / `ilike` | `p.email.like(param("pattern"))` |
:::caution
`param()` is **not** supported in `in()` / `notIn()` — the array length must be known at compile time.
:::
### Performance
When the backend supports `executeRaw` (both SQLite and PostgreSQL backends do), the pre-compiled
SQL text is sent directly to the database driver with substituted parameter values — zero
recompilation overhead. When `executeRaw` is unavailable, the prepared query substitutes parameters
into the AST and recompiles.
## Query Debugging
### toAst()
Get the query AST for inspection:
```typescript
const builder = store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.status.eq("active"))
.select((ctx) => ctx.p);
const ast = builder.toAst();
console.log(JSON.stringify(ast, null, 2));
```
### compile()
Compile to SQL without executing:
```typescript
const compiled = builder.compile();
console.log("SQL:", compiled.sql);
console.log("Parameters:", compiled.params);
```
Useful for:
- Debugging query behavior
- Understanding performance characteristics
- Building custom query executors
## Ordering Requirements
Both `paginate()` and `stream()` require an `orderBy()` clause:
```typescript
// Required for pagination
.orderBy("p", "name", "asc")
.paginate({ first: 20 });
// Required for streaming
.orderBy("e", "createdAt", "desc")
.stream();
```
### Stable Ordering
For deterministic pagination, include a unique field in your ordering:
```typescript
.orderBy("p", "name", "asc")
.orderBy("p", "id", "asc") // Ensures stable ordering
```
## Real-World Examples
### Paginated API Endpoint
```typescript
async function listUsers(cursor?: string, limit = 20) {
const query = store
.query()
.from("User", "u")
.whereNode("u", (u) => u.status.eq("active"))
.select((ctx) => ({
id: ctx.u.id,
name: ctx.u.name,
email: ctx.u.email,
}))
.orderBy("u", "createdAt", "desc")
.orderBy("u", "id", "desc");
const result = cursor
? await query.paginate({ first: limit, after: cursor })
: await query.paginate({ first: limit });
return {
users: result.data,
nextCursor: result.nextCursor,
hasMore: result.hasNextPage,
};
}
```
### Batch Processing
```typescript
async function processAllOrders() {
const stream = store
.query()
.from("Order", "o")
.whereNode("o", (o) => o.status.eq("pending"))
.select((ctx) => ctx.o)
.orderBy("o", "createdAt", "asc")
.stream({ batchSize: 100 });
for await (const order of stream) {
try {
await fulfillOrder(order);
await store.update("Order", order.id, { status: "fulfilled" });
} catch (error) {
console.error(`Failed to process order ${order.id}:`, error);
}
}
}
```
### Infinite Scroll
```typescript
function useInfiniteUsers() {
const [users, setUsers] = useState<UserRow[]>([]); // UserRow: your selected row type
const [cursor, setCursor] = useState<string | undefined>();
const [hasMore, setHasMore] = useState(true);
async function loadMore() {
const result = await store
.query()
.from("User", "u")
.select((ctx) => ctx.u)
.orderBy("u", "name", "asc")
.paginate({ first: 20, after: cursor });
setUsers((prev) => [...prev, ...result.data]);
setCursor(result.nextCursor);
setHasMore(result.hasNextPage);
}
return { users, loadMore, hasMore };
}
```
## Next Steps
- [Order](/queries/order) - Ordering and limiting results
- [Shape](/queries/shape) - Output transformation
- [Overview](/queries/overview) - Query categories reference
# Filter
> Reducing results with whereNode() and whereEdge()
Filter operations reduce the result set based on property values. TypeGraph provides `whereNode()`
for filtering nodes and `whereEdge()` for filtering edges during traversals.
## whereNode()
Filter nodes based on their properties:
```typescript
const engineers = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.role.eq("Engineer"))
.select((ctx) => ctx.p)
.execute();
```
### Parameters
```typescript
.whereNode(alias, predicateFunction)
```
| Parameter | Type | Description |
|-----------|------|-------------|
| `alias` | `string` | The node alias to filter (must exist in query) |
| `predicateFunction` | `(accessor) => Predicate` | Function that returns a predicate |
The predicate function receives a typed accessor for the node's properties.
## whereEdge()
Filter based on edge properties during traversals:
```typescript
const highPaying = await store
.query()
.from("Person", "p")
.traverse("worksAt", "e")
.whereEdge("e", (e) => e.salary.gte(100000))
.to("Company", "c")
.select((ctx) => ({
person: ctx.p.name,
company: ctx.c.name,
salary: ctx.e.salary,
}))
.execute();
```
### Parameters
```typescript
.whereEdge(alias, predicateFunction)
```
| Parameter | Type | Description |
|-----------|------|-------------|
| `alias` | `string` | The edge alias to filter (must exist in query) |
| `predicateFunction` | `(accessor) => Predicate` | Function that returns a predicate |
## Combining Predicates
### AND
Both conditions must be true:
```typescript
.whereNode("p", (p) =>
p.status.eq("active").and(p.role.eq("admin"))
)
```
### OR
Either condition can be true:
```typescript
.whereNode("p", (p) =>
p.role.eq("admin").or(p.role.eq("moderator"))
)
```
### NOT
Negate a condition:
```typescript
.whereNode("p", (p) =>
p.status.eq("deleted").not()
)
```
### Complex Combinations
Build complex logic with parenthetical grouping:
```typescript
.whereNode("p", (p) =>
p.status
.eq("active")
.and(p.role.eq("admin").or(p.role.eq("moderator")))
)
```
This evaluates as: `status = 'active' AND (role = 'admin' OR role = 'moderator')`
## Multiple Filters
Chain multiple `whereNode()` calls for AND logic:
```typescript
const activeManagers = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.status.eq("active"))
.whereNode("p", (p) => p.role.eq("Manager"))
.select((ctx) => ctx.p)
.execute();
```
This is equivalent to:
```typescript
.whereNode("p", (p) =>
p.status.eq("active").and(p.role.eq("Manager"))
)
```
## Filtering After Traversal
Filter nodes at any point in the query:
```typescript
const techCompanyEngineers = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.role.eq("Engineer"))
.traverse("worksAt", "e")
.to("Company", "c")
.whereNode("c", (c) => c.industry.eq("Technology"))
.select((ctx) => ({
person: ctx.p.name,
company: ctx.c.name,
}))
.execute();
```
## Common Predicates
Here are the most commonly used predicates. For complete reference, see [Predicates](/queries/predicates/).
### Equality
```typescript
p.name.eq("Alice") // equals
p.name.neq("Bob") // not equals
```
### Comparison
```typescript
p.age.gt(21) // greater than
p.age.gte(21) // greater than or equal
p.age.lt(65) // less than
p.age.lte(65) // less than or equal
p.age.between(18, 65) // inclusive range
```
### String Matching
```typescript
p.name.contains("ali") // substring match
p.name.startsWith("A") // prefix match
p.name.endsWith("ice") // suffix match
p.email.like("%@example.com") // SQL LIKE pattern
p.name.ilike("alice") // case-insensitive LIKE
```
### Null Checks
```typescript
p.deletedAt.isNull() // is null/undefined
p.email.isNotNull() // is not null
```
### List Membership
```typescript
p.status.in(["active", "pending"])
p.status.notIn(["archived", "deleted"])
```
### Array Operations
```typescript
p.tags.contains("typescript")
p.tags.containsAll(["typescript", "nodejs"])
p.tags.containsAny(["typescript", "rust", "go"])
p.tags.isEmpty()
p.tags.isNotEmpty()
```
## Predicate Types by Field
The available predicates depend on the field type:
| Field Type | Key Predicates |
|------------|----------------|
| String | `eq`, `contains`, `startsWith`, `like`, `ilike` |
| Number | `eq`, `gt`, `gte`, `lt`, `lte`, `between` |
| Date | `eq`, `gt`, `gte`, `lt`, `lte`, `between` |
| Array | `contains`, `containsAll`, `containsAny`, `isEmpty` |
| Object | `get()`, `hasKey`, `pathEquals` |
| Embedding | `similarTo()` |
See [Predicates](/queries/predicates/) for complete documentation.
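The object predicates can be read roughly as follows — an illustrative sketch of their approximate semantics in plain TypeScript (the library's actual path syntax and comparison rules may differ):

```typescript
// hasKey: does the object have this top-level key?
function hasKey(obj: Record<string, unknown>, key: string): boolean {
  return key in obj;
}

// pathEquals: walk a dot-separated path and compare the leaf value.
function pathEquals(obj: unknown, path: string, expected: unknown): boolean {
  let current: unknown = obj;
  for (const segment of path.split(".")) {
    if (typeof current !== "object" || current === null) return false;
    current = (current as Record<string, unknown>)[segment];
  }
  return current === expected;
}
```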
## Count and Existence Helpers
### Count Results
```typescript
const count: number = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.status.eq("active"))
.count();
```
### Check Existence
```typescript
const exists: boolean = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.email.eq("alice@example.com"))
.exists();
```
### Get First Result
```typescript
const alice = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.email.eq("alice@example.com"))
.select((ctx) => ctx.p)
.first();
if (alice) {
console.log(alice.name);
}
```
## Next Steps
- [Predicates](/queries/predicates/) - Complete predicate reference
- [Traverse](/queries/traverse) - Navigate relationships
- [Advanced](/queries/advanced) - Subqueries with `exists()` and `inSubquery()`
# Traverse
> Navigate relationships with traverse() and optionalTraverse()
Traversals let you navigate relationships in your graph. Instead of writing complex SQL joins,
describe the path you want to follow.
## Single-Hop Traversal
Follow one edge from a node to connected nodes:
```typescript
const employments = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.id.eq("alice-123"))
.traverse("worksAt", "e") // Follow worksAt edges
.to("Company", "c") // Arrive at Company nodes
.select((ctx) => ({
person: ctx.p.name,
company: ctx.c.name,
role: ctx.e.role, // Edge properties are accessible
}))
.execute();
```
## Parameters
### traverse()
```typescript
.traverse(edgeKind, edgeAlias, options?)
```
| Parameter | Type | Description |
|-----------|------|-------------|
| `edgeKind` | `string` | The edge kind to traverse |
| `edgeAlias` | `string` | Unique alias for referencing this edge |
| `options.direction` | `"out" \| "in"` | Traversal direction (default: `"out"`) |
| `options.expand` | `"none" \| "implying" \| "inverse" \| "all"` | Ontology edge expansion mode (default: `"inverse"`) |
| `options.from` | `string` | Fan-out from a different node alias |
### optionalTraverse()
```typescript
.optionalTraverse(edgeKind, edgeAlias, options?)
```
Uses the same options as `traverse()`, but returns optional edge/node values in the result context.
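Because the optional side may be absent, downstream code should handle `undefined`. A sketch: the query shape follows the examples in this guide, but the optional-chained `ctx.c?.name` access and the helper below are illustrative, not confirmed API:

```typescript
// Company is undefined when a person has no worksAt edge.
function describeEmployment(person: string, company: string | undefined): string {
  return company === undefined
    ? `${person} (no employer)`
    : `${person} works at ${company}`;
}

// Usage against the store (sketch):
// const rows = await store
//   .query()
//   .from("Person", "p")
//   .optionalTraverse("worksAt", "e")
//   .to("Company", "c")
//   .select((ctx) => ({ person: ctx.p.name, company: ctx.c?.name }))
//   .execute();
// rows.map((r) => describeEmployment(r.person, r.company));
```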
### to()
```typescript
.to(nodeKind, nodeAlias, options?)
```
| Parameter | Type | Description |
|-----------|------|-------------|
| `nodeKind` | `string` | The target node kind |
| `nodeAlias` | `string` | Unique alias for referencing this node |
| `options.includeSubClasses` | `boolean` | Include subclass kinds (default: `false`) |
## Direction
By default, traversals follow edges in their defined direction (from → to). Use `direction: "in"` to traverse backwards:
```typescript
// Edge definition: worksAt goes from Person → Company
// Forward: Find companies where Alice works
.from("Person", "p")
.traverse("worksAt", "e") // Person → Company
.to("Company", "c")
// Backward: Find people who work at Acme
.from("Company", "c")
.whereNode("c", (c) => c.name.eq("Acme"))
.traverse("worksAt", "e", { direction: "in" }) // Company ← Person
.to("Person", "p")
```
## Edge Properties
Edges can carry properties. Access them through the edge alias:
```typescript
const employments = await store
.query()
.from("Person", "p")
.traverse("worksAt", "e")
.to("Company", "c")
.select((ctx) => ({
person: ctx.p.name,
company: ctx.c.name,
role: ctx.e.role, // Edge property
salary: ctx.e.salary, // Edge property
startDate: ctx.e.startDate, // Edge property
}))
.execute();
```
### Edge Object Structure
Each edge provides these fields:
| Property | Type | Description |
|----------|------|-------------|
| `id` | `string` | Unique edge identifier |
| `kind` | `string` | Edge type name |
| `fromId` | `string` | ID of the source node |
| `toId` | `string` | ID of the target node |
| `meta.createdAt` | `string` | When the edge was created |
| `meta.updatedAt` | `string` | When the edge was last updated |
| `meta.deletedAt` | `string \| undefined` | Soft delete timestamp |
| `meta.validFrom` | `string \| undefined` | Temporal validity start |
| `meta.validTo` | `string \| undefined` | Temporal validity end |
| *schema props* | varies | Properties defined in edge schema |
### Filtering on Edge Properties
Use `whereEdge()` to filter based on edge values:
```typescript
const highPaying = await store
.query()
.from("Person", "p")
.traverse("worksAt", "e")
.whereEdge("e", (e) => e.salary.gte(100000))
.to("Company", "c")
.select((ctx) => ({
person: ctx.p.name,
company: ctx.c.name,
salary: ctx.e.salary,
}))
.execute();
```
## Multi-Hop Traversals
Chain traversals to follow multiple relationships:
```typescript
const projectTasks = await store
.query()
.from("Person", "person")
.whereNode("person", (p) => p.name.eq("Alice"))
.traverse("worksOn", "e1")
.to("Project", "project")
.traverse("hasTask", "e2")
.to("Task", "task")
.select((ctx) => ({
person: ctx.person.name,
project: ctx.project.name,
task: ctx.task.title,
}))
.execute();
```
Each hop starts from the previous node set and arrives at new nodes.
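By default each `traverse()` continues from the most recent node alias. The `from` option (listed in the `traverse()` parameters above) lets a hop fan out from an earlier alias instead. A sketch, assuming Person has both `worksAt` and `livesIn` edges — the `livesIn` edge and `City` kind are illustrative:

```typescript
// Two traversals fanning out from the same person
const profile = await store
  .query()
  .from("Person", "person")
  .traverse("worksAt", "e1")
  .to("Company", "company")
  .traverse("livesIn", "e2", { from: "person" }) // fan out from "person", not "company"
  .to("City", "city")
  .select((ctx) => ({
    person: ctx.person.name,
    company: ctx.company.name,
    city: ctx.city.name,
  }))
  .execute();
```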
### Mixed Directions
Combine forward and backward traversals:
```typescript
const teamStructure = await store
.query()
.from("Person", "p")
.traverse("worksAt", "e1") // Forward: Person → Company
.to("Company", "c")
.traverse("manages", "e2", { direction: "in" }) // Backward: Person ← manages
.to("Person", "manager")
.select((ctx) => ({
employee: ctx.p.name,
company: ctx.c.name,
manager: ctx.manager.name,
}))
.execute();
```
## Optional Traversals
Use `optionalTraverse()` for LEFT JOIN semantics—include results even when the traversal has no matches:
```typescript
const peopleWithOptionalEmployer = await store
.query()
.from("Person", "p")
.optionalTraverse("worksAt", "e")
.to("Company", "c")
.select((ctx) => ({
person: ctx.p.name,
company: ctx.c?.name, // May be undefined if no employer
}))
.execute();
// Includes all people, even those without a worksAt edge
```
### Mixing Required and Optional
```typescript
const employeesWithOptionalManager = await store
.query()
.from("Person", "p")
.traverse("worksAt", "e1") // Required: must work at a company
.to("Company", "c")
.optionalTraverse("reportsTo", "e2") // Optional: might not have manager
.to("Person", "manager")
.select((ctx) => ({
employee: ctx.p.name,
company: ctx.c.name,
manager: ctx.manager?.name, // undefined for top-level employees
}))
.execute();
```
### Optional Edge Access
With optional traversals, the edge may be `undefined`:
```typescript
.select((ctx) => ({
person: ctx.p.name,
company: ctx.c?.name, // Node may be undefined
role: ctx.e?.role, // Edge may be undefined
salary: ctx.e?.salary,
}))
```
## Ontology-Aware Traversals
If your ontology defines edge implications, expand queries to include implying edges:
```typescript
// Ontology: implies(marriedTo, knows), implies(bestFriends, knows)
const connections = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.name.eq("Alice"))
.traverse("knows", "e", { expand: "implying" })
.to("Person", "other")
.select((ctx) => ctx.other.name)
.execute();
// Returns people connected via "knows", "marriedTo", or "bestFriends"
```
If your ontology defines inverse edge kinds, you can expand traversals to include inverse edges:
```typescript
// Ontology: inverseOf(manages, managedBy)
const relationships = await store
.query()
.from("Person", "p")
.whereNode("p", (p) => p.name.eq("Alice"))
.traverse("manages", "e", { expand: "inverse" })
.to("Person", "other")
.select((ctx) => ({
name: ctx.other.name,
viaEdgeKind: ctx.e.kind,
}))
.execute();
// Traverses both "manages" and "managedBy"
```
You can combine both options:
```typescript
.traverse("knows", "e", { expand: "all" })
```
:::note[Default expansion mode]
The default expansion mode is `"inverse"`, meaning traversals automatically include inverse edge kinds
from your ontology. To opt out for a single traversal, pass `expand: "none"`. To change the default
for all traversals, set `queryDefaults.traversalExpansion` in `createStore` options.
:::
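Changing the store-wide default looks roughly like this (the option name comes from the note above; treat the exact shape as an assumption if your version differs):

```typescript
import { createStore } from "@nicia-ai/typegraph";

// Disable automatic inverse expansion for every traversal in this store
const store = createStore(graph, backend, {
  queryDefaults: { traversalExpansion: "none" },
});
```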
## Real-World Examples
### Organizational Hierarchy
```typescript
const teamMembers = await store
.query()
.from("Person", "manager")
.whereNode("manager", (p) => p.name.eq("VP Engineering"))
.traverse("manages", "e")
.to("Person", "report")
.select((ctx) => ({
manager: ctx.manager.name,
report: ctx.report.name,
department: ctx.report.department,
}))
.execute();
```
### Social Graph
```typescript
const friends = await store
.query()
.from("Person", "me")
.whereNode("me", (p) => p.id.eq(currentUserId))
.traverse("follows", "e")
.to("Person", "friend")
.select((ctx) => ({
id: ctx.friend.id,
name: ctx.friend.name,
followedAt: ctx.e.createdAt,
}))
.orderBy("e", "createdAt", "desc")
.limit(50)
.execute();
```
### E-Commerce
```typescript
const orderDetails = await store
.query()
.from("Order", "o")
.whereNode("o", (o) => o.id.eq(orderId))
.traverse("contains", "e")
.to("Product", "p")
.select((ctx) => ({
product: ctx.p.name,
quantity: ctx.e.quantity,
unitPrice: ctx.e.unitPrice,
}))
.execute();
```
## Next Steps
- [Recursive](/queries/recursive) - Variable-length paths with `recursive()`
- [Filter](/queries/filter) - Filter nodes and edges with predicates
- [Shape](/queries/shape) - Transform output with `select()`
# Evolving Schemas in Production
> Step-by-step guide for safely evolving your graph schema across deployments
Your graph schema will change as your application grows. This guide covers how
to make those changes safely — from adding a field to renaming a node type.
For API reference, see [Schema Migrations](/schema-management).
## How Schema Evolution Works
When you call `createStoreWithSchema()`, TypeGraph:
1. Serializes your current graph definition
2. Compares it against the stored schema (by hash, then by diff)
3. **Safe changes** — auto-migrates and bumps the version
4. **Breaking changes** — throws `MigrationError` (or returns `status: "breaking"`)
The key insight: TypeGraph manages **schema metadata**, not data migration. When
you add an optional field, TypeGraph records that the schema now includes it. It
does not alter existing rows — Zod defaults handle that at read time.
## Safe Changes
These changes are backwards compatible and auto-migrate without intervention:
- Adding new node types
- Adding new edge types
- Adding optional properties (with defaults)
- Adding ontology relations
### Adding an Optional Property
```typescript
// Version 1
const Person = defineNode("Person", {
schema: z.object({
name: z.string(),
}),
});
// Version 2 — safe, auto-migrates
const Person = defineNode("Person", {
schema: z.object({
name: z.string(),
email: z.string().optional(),
}),
});
```
On startup, `createStoreWithSchema()` returns `status: "migrated"`. Existing
Person nodes return `email: undefined` — no data transformation needed.
### Adding a Node Type with Edges
```typescript
// Version 2 — add Company and worksAt in one deploy
const Company = defineNode("Company", {
schema: z.object({ name: z.string() }),
});
const worksAt = defineEdge("worksAt", {
schema: z.object({ role: z.string() }),
});
const graph = defineGraph({
id: "my_app",
nodes: {
Person: { type: Person },
Company: { type: Company },
},
edges: {
worksAt: { type: worksAt, from: [Person], to: [Company] },
},
});
```
This is a single safe migration. New node and edge types don't affect existing
data.
## Breaking Changes
These require explicit handling:
- Removing node or edge types
- Removing properties
- Adding required properties (no default)
- Renaming types or properties
TypeGraph will throw `MigrationError` by default. You have two options: fix
the schema to be backwards compatible, or use the expand-contract pattern.
## The Expand-Contract Pattern
For breaking changes, use a multi-deploy strategy. This is the same pattern
used in relational database migrations — deploy in phases so there's never a
moment where running code is incompatible with the schema.
### Renaming a Property
Rename `name` to `fullName` on Person in three deploys:
#### Deploy 1 — Expand: add the new property
```typescript
const Person = defineNode("Person", {
schema: z.object({
name: z.string(),
fullName: z.string().optional(), // New property, optional for now
}),
});
```
Safe migration. Then backfill existing data:
```typescript
const [store] = await createStoreWithSchema(graph, backend);
const people = await store.query(Person).execute();
for (const person of people) {
if (!person.properties.fullName) {
await store.nodes.Person.update(person.id, {
fullName: person.properties.name,
});
}
}
```
#### Deploy 2 — Switch: use the new property everywhere
Update all application code to read/write `fullName` instead of `name`. Both
properties still exist, so this deploy is safe.
#### Deploy 3 — Contract: remove the old property
```typescript
const Person = defineNode("Person", {
schema: z.object({
fullName: z.string(),
}),
});
```
This is a breaking change (removing `name`). Use `migrateSchema()` to force it:
```typescript
import { migrateSchema } from "@nicia-ai/typegraph/schema";
const [store, result] = await createStoreWithSchema(graph, backend, {
throwOnBreaking: false,
});
if (result.status === "breaking") {
// We've already backfilled — safe to force migrate
const activeSchema = await backend.getActiveSchema(graph.id);
await migrateSchema(backend, graph, activeSchema!.version);
}
```
### Removing a Node Type
#### Deploy 1 — Stop creating new instances
Update application code to stop creating the deprecated node type. Existing data
remains.
#### Deploy 2 — Clean up references
Delete edges that reference the deprecated node type, then delete the nodes
themselves:
```typescript
// Then delete the deprecated nodes themselves
const deprecated = await store.query(OldNode).execute();
for (const node of deprecated) {
await store.nodes.OldNode.delete(node.id);
}
```
#### Deploy 3 — Remove from schema
Remove the node type from `defineGraph()` and force migrate.
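The force-migrate step mirrors the property-rename example above. A sketch, assuming the edge and node cleanup from Deploy 2 has already run:

```typescript
import { createStoreWithSchema } from "@nicia-ai/typegraph";
import { migrateSchema } from "@nicia-ai/typegraph/schema";

// graph no longer lists OldNode in defineGraph()
const [store, result] = await createStoreWithSchema(graph, backend, {
  throwOnBreaking: false,
});
if (result.status === "breaking") {
  // Data is already cleaned up — safe to force migrate
  const active = await backend.getActiveSchema(graph.id);
  await migrateSchema(backend, graph, active!.version);
}
```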
### Changing a Property Type
Change `age` from `z.string()` to `z.number()`:
#### Deploy 1 — Add the new property
```typescript
const Person = defineNode("Person", {
schema: z.object({
age: z.string(),
ageNumeric: z.number().optional(),
}),
});
```
#### Deploy 2 — Backfill and switch
```typescript
const people = await store.query(Person).execute();
for (const person of people) {
if (person.properties.ageNumeric === undefined) {
await store.nodes.Person.update(person.id, {
ageNumeric: parseInt(person.properties.age, 10),
});
}
}
```
#### Deploy 3 — Contract
Remove `age`, rename `ageNumeric` to `age` with the new type, and force migrate.
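The schema after the contract phase would then read as follows (a force migrate is required, as in the rename example above):

```typescript
// Deploy 3 — final schema: age is now numeric
const Person = defineNode("Person", {
  schema: z.object({
    age: z.number(),
  }),
});
```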
## Pre-Deploy Schema Checks
Use `getSchemaChanges()` in CI to catch breaking changes before they reach
production.
### CI/CD Script
```typescript
import { getSchemaChanges } from "@nicia-ai/typegraph/schema";
async function checkSchema(backend: GraphBackend, graph: GraphDef) {
const diff = await getSchemaChanges(backend, graph);
if (!diff) {
console.log("No existing schema — first deploy");
return;
}
if (!diff.hasChanges) {
console.log("Schema unchanged");
return;
}
console.log("Schema changes detected:");
console.log(diff.summary);
for (const change of [...diff.nodes, ...diff.edges]) {
const icon =
change.severity === "safe"
? "[safe]"
: change.severity === "warning"
? "[warn]"
: "[BREAKING]";
console.log(` ${icon} ${change.details}`);
}
if (diff.hasBreakingChanges) {
console.error("Breaking changes require migration before deploy.");
process.exit(1);
}
}
```
### Staging Validation
Before deploying to production, run against a staging database that mirrors
production schema state:
```typescript
const [store, result] = await createStoreWithSchema(graph, stagingBackend);
switch (result.status) {
case "initialized":
console.log("Staging DB was empty — initialized");
break;
case "migrated":
console.log(
`Auto-migrated v${result.fromVersion} → v${result.toVersion}`,
);
console.log("Changes:", result.diff.summary);
break;
case "breaking":
console.error("Would break in production. Fix before deploying.");
process.exit(1);
break;
}
```
## Testing Schema Changes
### Unit Testing Migrations
Test that your migration code handles existing data correctly:
```typescript
import { createStoreWithSchema, defineGraph, defineNode } from "@nicia-ai/typegraph";
import { createTestBackend } from "./test-utils";
it("migrates name to fullName", async () => {
const backend = createTestBackend();
// Set up v1 with data
const graphV1 = defineGraph({
id: "test",
nodes: { Person: { type: PersonV1 } },
edges: {},
});
const [storeV1] = await createStoreWithSchema(graphV1, backend);
await storeV1.nodes.Person.create({ name: "Alice" });
// Migrate to v2 (expand phase)
const graphV2 = defineGraph({
id: "test",
nodes: { Person: { type: PersonV2WithBothFields } },
edges: {},
});
const [storeV2, result] = await createStoreWithSchema(graphV2, backend);
expect(result.status).toBe("migrated");
// Run backfill
const people = await storeV2.query(PersonV2WithBothFields).execute();
for (const person of people) {
await storeV2.nodes.Person.update(person.id, {
fullName: person.properties.name,
});
}
// Verify
const updated = await storeV2.query(PersonV2WithBothFields).execute();
expect(updated[0].properties.fullName).toBe("Alice");
});
```
### Previewing Changes Without Applying
Use `getSchemaChanges()` to see what would change without modifying the database:
```typescript
import { getSchemaChanges } from "@nicia-ai/typegraph/schema";
const diff = await getSchemaChanges(backend, newGraph);
if (diff?.hasChanges) {
console.log("Pending changes:", diff.summary);
console.log("Breaking:", diff.hasBreakingChanges);
for (const change of diff.nodes) {
console.log(` ${change.severity}: ${change.details}`);
}
}
```
## Version History
TypeGraph preserves all schema versions in the `typegraph_schema_versions`
table. Only one version is active at a time.
```text
typegraph_schema_versions
├── version 1 (initial) ← inactive
├── version 2 (added email) ← inactive
└── version 3 (added Company) ← active
```
Access version history through the backend:
```typescript
// Get a specific version
const v1 = await backend.getSchemaVersion("my_app", 1);
console.log("V1 created at:", v1?.created_at);
// Get the active version
const active = await backend.getActiveSchema("my_app");
console.log("Current version:", active?.version);
```
## Summary: Change Classification
| Change | Classification | Auto-Migrated? |
| ------------------------------ | -------------- | -------------- |
| Add node type | Safe | Yes |
| Add edge type | Safe | Yes |
| Add optional property | Safe | Yes |
| Add ontology relation | Safe | Yes |
| Add required property | Breaking | No |
| Remove property | Breaking | No |
| Remove node/edge type | Breaking | No |
| Rename node/edge type | Breaking | No |
| Change property type | Breaking | No |
| Change onDelete behavior | Warning | Yes |
| Change unique constraints | Warning | Yes |
| Change edge cardinality | Warning | Yes |
| Change edge endpoint kinds | Warning | Yes |
## Rollback
If a deployment goes wrong, you can switch back to a previous schema version.
Version history is always preserved — `rollbackSchema()` simply changes which
version is active.
```typescript
import { rollbackSchema } from "@nicia-ai/typegraph/schema";
// Roll back to version 2
await rollbackSchema(backend, "my_app", 2);
```
This does not delete newer versions. You can migrate forward again later.
## Migration Hooks
Use `onBeforeMigrate` and `onAfterMigrate` for observability — logging,
metrics, and alerts during schema migrations:
```typescript
const [store, result] = await createStoreWithSchema(graph, backend, {
onBeforeMigrate: (context) => {
console.log(`Migrating ${context.graphId} v${context.fromVersion} → v${context.toVersion}`);
console.log("Changes:", context.diff.summary);
},
onAfterMigrate: (context) => {
console.log(`Migration complete: v${context.toVersion}`);
metrics.increment("schema_migrations_total");
},
});
```
For data transformations (backfill scripts), run them explicitly after store
creation rather than inside hooks. This gives you control over retries and
error handling:
```typescript
const [store, result] = await createStoreWithSchema(graph, backend);
if (result.status === "migrated" && result.toVersion === 3) {
// Backfill fullName from name for the expand phase
const people = await store.query(Person).execute();
for (const person of people) {
if (!person.properties.fullName) {
await store.nodes.Person.update(person.id, {
fullName: person.properties.name,
});
}
}
}
```
## Current Limitations
- **No automatic data transformation.** TypeGraph tracks schema metadata
changes but does not transform existing rows. Use backfill scripts (or
`onAfterMigrate` hooks) for data migration.
- **No rename detection.** Renaming a property looks like a removal + addition.
Use the expand-contract pattern instead.
- **Schema-level only.** Migrations operate on the graph definition, not on
underlying database tables. TypeGraph's storage tables are
schema-agnostic (nodes and edges are stored as JSON properties), so
"schema migration" means updating the metadata that TypeGraph tracks, not
running `ALTER TABLE`.
# Schema Migrations
> Schema versioning, migration, and lifecycle management
For a practical guide on evolving schemas across deployments, see
[Evolving Schemas in Production](/schema-evolution).
## When Do You Need Schema Management?
As your application evolves, your graph schema changes:
- **Adding features**: New node types, new properties, new relationships
- **Refactoring**: Renaming types, changing property formats
- **Deploying safely**: Ensuring schema changes don't break running applications
Without schema management, you'd face:
- No way to know if the database matches your code
- Silent failures when property names change
- Manual migration scripts for every deployment
TypeGraph's schema management:
1. **Stores the schema in the database** alongside your data
2. **Detects changes** between your code and the stored schema
3. **Auto-migrates safe changes** (adding types, optional properties)
4. **Blocks breaking changes** until you handle them explicitly
## How It Works
TypeGraph stores your graph schema in the database, enabling version tracking,
safe migrations, and runtime introspection.
When you create a store with `createStoreWithSchema()`, TypeGraph:
1. Serializes your graph definition to JSON
2. Compares it with the stored schema (if any)
3. Returns the result so you can act on it
## Schema Lifecycle
When you create a store, TypeGraph can automatically manage schema versions:
```typescript
import { createStoreWithSchema } from "@nicia-ai/typegraph";
const [store, result] = await createStoreWithSchema(graph, backend);
switch (result.status) {
case "initialized":
console.log(`Schema initialized at version ${result.version}`);
break;
case "unchanged":
console.log(`Schema unchanged at version ${result.version}`);
break;
case "migrated":
console.log(`Migrated from v${result.fromVersion} to v${result.toVersion}`);
break;
case "pending":
console.log(`Safe changes pending at version ${result.version}`);
break;
case "breaking":
console.log("Breaking changes detected:", result.actions);
break;
}
```
## Basic vs Managed Store
TypeGraph provides two ways to create a store:
### Basic Store (No Schema Management)
Use `createStore()` when you manage schema versions yourself:
```typescript
import { createStore } from "@nicia-ai/typegraph";
const store = createStore(graph, backend);
// No schema versioning - you handle migrations manually
```
### Managed Store (Automatic Schema Management)
Use `createStoreWithSchema()` for automatic version tracking:
```typescript
import { createStoreWithSchema } from "@nicia-ai/typegraph";
const [store, result] = await createStoreWithSchema(graph, backend, {
autoMigrate: true, // Auto-apply safe changes (default: true)
throwOnBreaking: true, // Throw on breaking changes (default: true)
onBeforeMigrate: (context) => {
console.log(`Migrating ${context.graphId} from v${context.fromVersion} to v${context.toVersion}`);
},
onAfterMigrate: (context) => {
console.log(`Migration complete: v${context.toVersion}`);
},
});
```
## Schema Validation Results
The validation result indicates what happened during store initialization:
| Status | Meaning |
| ------------- | -------------------------------------------------- |
| `initialized` | First run - schema version 1 was created |
| `unchanged` | Schema matches stored version - no changes |
| `migrated` | Safe changes auto-applied, new version created |
| `pending` | Safe changes detected but `autoMigrate` is `false` |
| `breaking` | Breaking changes detected, action required |
## Safe vs Breaking Changes
### Safe Changes (Auto-Migrated)
These changes are backwards compatible and can be auto-migrated:
- Adding new node types
- Adding new edge types
- Adding optional properties with defaults
- Adding new ontology relations
### Breaking Changes (Require Manual Action)
These changes require manual migration:
- Removing node or edge types
- Renaming node or edge types
- Changing property types
- Removing properties
- Changing cardinality constraints to be more restrictive
## Handling Breaking Changes
When breaking changes are detected:
```typescript
const [store, result] = await createStoreWithSchema(graph, backend, {
throwOnBreaking: false, // Don't throw, inspect instead
});
if (result.status === "breaking") {
console.log("Breaking changes detected:");
console.log("Summary:", result.diff.summary);
console.log("Required actions:");
for (const action of result.actions) {
console.log(` - ${action}`);
}
// Option 1: Fix your schema to be backwards compatible
// Option 2: Force migration (data loss possible!)
// import { migrateSchema } from "@nicia-ai/typegraph/schema";
// await migrateSchema(backend, graph, currentVersion);
}
```
## Schema Introspection
Query the stored schema at runtime:
```typescript
import { getActiveSchema, isSchemaInitialized, getSchemaChanges } from "@nicia-ai/typegraph/schema";
// Check if schema exists
const initialized = await isSchemaInitialized(backend, "my_graph");
// Get the current schema
const schema = await getActiveSchema(backend, "my_graph");
if (schema) {
console.log("Graph ID:", schema.graphId);
console.log("Version:", schema.version);
console.log("Nodes:", Object.keys(schema.nodes));
console.log("Edges:", Object.keys(schema.edges));
}
// Preview changes without applying
const diff = await getSchemaChanges(backend, graph);
if (diff?.hasChanges) {
console.log("Pending changes:", diff.summary);
console.log("Is backwards compatible:", !diff.hasBreakingChanges);
}
```
## Manual Migration
For full control over migrations:
```typescript
import {
initializeSchema,
migrateSchema,
rollbackSchema,
ensureSchema,
} from "@nicia-ai/typegraph/schema";
// Initialize schema (first run only)
const row = await initializeSchema(backend, graph);
console.log("Created version:", row.version);
// Migrate to new version
const newVersion = await migrateSchema(backend, graph, currentVersion);
console.log("Migrated to version:", newVersion);
// Rollback to a previous version
await rollbackSchema(backend, "my_graph", 1);
console.log("Rolled back to version 1");
// Or use ensureSchema for automatic handling
const result = await ensureSchema(backend, graph, {
autoMigrate: true,
throwOnBreaking: true,
});
```
## Schema Serialization
Schemas are stored as JSON documents with computed hashes for fast comparison:
```typescript
import { serializeSchema, computeSchemaHash } from "@nicia-ai/typegraph/schema";
// Serialize a graph definition
const serialized = serializeSchema(graph, 1);
// Compute hash for comparison
const hash = computeSchemaHash(serialized);
```
The serialized schema includes:
- Graph ID and version
- All node types with their Zod schemas (as JSON Schema)
- All edge types with endpoints and constraints
- Complete ontology relations
- Uniqueness constraints and delete behaviors
## Version History
TypeGraph maintains a history of all schema versions:
```text
typegraph_schema_versions
├── version 1 (initial)
├── version 2 (added User node)
├── version 3 (added email property) ← active
└── ...
```
Only one version is marked as "active" at a time. Previous versions are
preserved for auditing and potential rollback.
## Best Practices
### 1. Use Managed Stores in Production
```typescript
// Production: Use schema management
const [store, result] = await createStoreWithSchema(graph, backend);
// Development: Basic store is fine for rapid iteration
const store = createStore(graph, backend);
```
### 2. Check Migration Status on Startup
```typescript
async function initializeApp() {
const [store, result] = await createStoreWithSchema(graph, backend);
if (result.status === "breaking") {
console.error("Database schema incompatible with application!");
console.error("Run migrations before deploying this version.");
process.exit(1);
}
if (result.status === "migrated") {
console.log(`Schema auto-migrated to v${result.toVersion}`);
}
return store;
}
```
### 3. Preview Changes Before Deployment
```typescript
import { getSchemaChanges } from "@nicia-ai/typegraph/schema";
// In your CI/CD pipeline or migration script
const diff = await getSchemaChanges(backend, graph);
if (diff?.hasChanges) {
console.log("Schema changes detected:");
console.log(diff.summary);
  if (diff.hasBreakingChanges) {
console.error("Breaking changes require manual migration!");
process.exit(1);
}
}
```
### 4. Add Properties with Defaults
When adding new properties, always provide defaults to ensure backwards
compatibility:
```typescript
// Good: Optional with default
const User = defineNode("User", {
schema: z.object({
name: z.string(),
// New property with default - safe migration
status: z.enum(["active", "inactive"]).default("active"),
}),
});
// Bad: Required without default - breaking change
const User = defineNode("User", {
schema: z.object({
name: z.string(),
status: z.enum(["active", "inactive"]), // No default!
}),
});
```
# Semantic Search
> Vector embeddings and similarity search for AI-powered retrieval
TypeGraph supports semantic search using vector embeddings, enabling you to find
semantically similar content using embedding models like OpenAI, Sentence Transformers,
CLIP, or any model that produces fixed-dimension vectors.
## Overview
Traditional search relies on exact keyword matching. Semantic search understands
meaning—"machine learning" matches documents about "neural networks" and "AI algorithms"
even without those exact words.
**Key capabilities:**
- Store embeddings as node properties alongside your graph data
- Find the k most similar nodes using cosine, L2, or inner product distance
- Combine semantic similarity with graph traversals and standard predicates
- Automatic vector indexing for fast approximate nearest neighbor search
## Use Cases
### Retrieval-Augmented Generation (RAG)
Build context-aware AI applications by retrieving relevant documents before
generating responses:
```typescript
async function ragQuery(question: string): Promise<string> {
const questionEmbedding = await embed(question);
const context = await store
.query()
.from("Document", "d")
.whereNode("d", (d) =>
d.embedding.similarTo(questionEmbedding, 5, {
metric: "cosine",
minScore: 0.7,
})
)
.select((ctx) => ({
title: ctx.d.title,
content: ctx.d.content,
}))
.execute();
return await llm.chat({
messages: [
{
role: "system",
content: `Answer based on this context:\n${context.map((d) => d.content).join("\n\n")}`,
},
{ role: "user", content: question },
],
});
}
```
### Semantic Document Search
Find documents by meaning rather than keywords:
```typescript
const results = await store
.query()
.from("Article", "a")
.whereNode("a", (a) =>
a.embedding
.similarTo(queryEmbedding, 20)
.and(a.category.eq("technology"))
)
.select((ctx) => ctx.a)
.execute();
```
### Image Similarity
Use CLIP or similar vision models for image search:
```typescript
const similarImages = await store
.query()
.from("Image", "i")
.whereNode("i", (i) => i.clipEmbedding.similarTo(queryImageEmbedding, 10))
.select((ctx) => ({
url: ctx.i.url,
caption: ctx.i.caption,
}))
.execute();
```
### Product Recommendations
Recommend products based on embedding similarity:
```typescript
const recommendations = await store
.query()
.from("Product", "p")
.whereNode("p", (p) =>
p.embedding
.similarTo(referenceProductEmbedding, 10)
.and(p.inStock.eq(true))
)
.select((ctx) => ctx.p)
.execute();
```
## Database Setup
Vector search requires database-specific extensions for storing and querying
high-dimensional vectors efficiently.
### PostgreSQL with pgvector
[pgvector](https://github.com/pgvector/pgvector) is the recommended extension
for PostgreSQL. It provides:
- Native `vector` column type
- HNSW and IVFFlat indexes for fast approximate nearest neighbor search
- Support for cosine, L2, and inner product distance
**Installation:**
```sql
-- Install the extension (requires superuser or database owner)
CREATE EXTENSION vector;
```
**Docker setup:**
```yaml
services:
postgres:
image: pgvector/pgvector:pg16
environment:
POSTGRES_PASSWORD: password
POSTGRES_DB: myapp
ports:
- "5432:5432"
```
**TypeGraph migration includes vector support:**
```typescript
import { generatePostgresMigrationSQL } from "@nicia-ai/typegraph/postgres";
// Generates DDL including:
// - CREATE EXTENSION IF NOT EXISTS vector;
// - typegraph_embeddings table with vector column
const migrationSQL = generatePostgresMigrationSQL();
```
### SQLite with sqlite-vec
[sqlite-vec](https://github.com/asg017/sqlite-vec) provides vector search
for SQLite. It offers:
- `vec_f32` type for 32-bit float vectors
- Cosine and L2 distance functions
- Works with any SQLite database
**Installation:**
```bash
npm install sqlite-vec
```
**Loading the extension:**
```typescript
import Database from "better-sqlite3";
import * as sqliteVec from "sqlite-vec";
const sqlite = new Database("myapp.db");
sqliteVec.load(sqlite);
```
**Limitations:**
- sqlite-vec does not support inner product distance
- Use `cosine` or `l2` metrics only
### Supported Distance Metrics
| Metric | PostgreSQL | SQLite | Description |
|--------|------------|--------|-------------|
| `cosine` | `<=>` | `vec_distance_cosine` | Cosine distance (1 - similarity). Best for normalized embeddings. |
| `l2` | `<->` | `vec_distance_l2` | Euclidean distance. Good for unnormalized vectors. |
| `inner_product` | `<#>` | Not supported | Negative inner product. For maximum inner product search (MIPS). |
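To interpret the numbers the `cosine` metric produces: cosine distance is 1 minus cosine similarity, so vectors pointing the same direction score 0 and orthogonal vectors score 1. A plain-TypeScript reference implementation, independent of either database extension:

```typescript
// Reference cosine distance: 1 - (a·b) / (|a| * |b|)
function cosineDistance(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

cosineDistance([1, 0], [1, 0]); // identical direction → 0
cosineDistance([1, 0], [0, 1]); // orthogonal → 1
```

Note that the distance is direction-based only: `[1, 2]` and `[2, 4]` also have distance 0, which is why cosine works well for normalized embeddings.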
## Schema Design
### Defining Embedding Properties
Use the `embedding()` function to define vector properties with a specific dimension:
```typescript
import { defineNode, embedding } from "@nicia-ai/typegraph";
import { z } from "zod";
const Document = defineNode("Document", {
schema: z.object({
title: z.string(),
content: z.string(),
embedding: embedding(1536), // OpenAI ada-002 dimension
}),
});
const Image = defineNode("Image", {
schema: z.object({
url: z.string(),
caption: z.string().optional(),
clipEmbedding: embedding(512), // CLIP ViT-B/32 dimension
}),
});
```
### Common Embedding Dimensions
| Model | Dimensions | Use Case |
|-------|------------|----------|
| all-MiniLM-L6-v2 | 384 | Fast, lightweight text embeddings |
| CLIP ViT-B/32 | 512 | Image-text multimodal |
| BERT base | 768 | General text embeddings |
| OpenAI ada-002 | 1536 | High-quality text embeddings |
| OpenAI text-embedding-3-small | 1536 | Efficient, high-quality |
| OpenAI text-embedding-3-large | 3072 | Maximum quality |
| Cohere embed-v3 | 1024 | Multilingual support |
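Because the schema dimension must match whatever model you call at runtime, it can help to derive both from one constant. A minimal sketch (the `MODEL_DIMENSIONS` map and model names are illustrative, not part of TypeGraph):

```typescript
// Map each model to its output dimension so schema and API calls stay in sync.
// These entries mirror the table above; adjust to the models you actually use.
const MODEL_DIMENSIONS = {
  "all-MiniLM-L6-v2": 384,
  "text-embedding-ada-002": 1536,
  "text-embedding-3-large": 3072,
} as const;

type EmbeddingModel = keyof typeof MODEL_DIMENSIONS;

// Pick one model for the whole application.
const MODEL: EmbeddingModel = "text-embedding-ada-002";
const DIMENSION = MODEL_DIMENSIONS[MODEL]; // 1536
```

The schema can then declare `embedding(DIMENSION)`, so switching models touches a single line.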
### Optional Embeddings
Embedding properties can be optional for gradual population:
```typescript
const Article = defineNode("Article", {
schema: z.object({
title: z.string(),
content: z.string(),
embedding: embedding(1536).optional(),
}),
});
// Create without embedding
const article = await store.nodes.Article.create({
title: "Draft Article",
content: "...",
});
// Add embedding later via background job
await store.nodes.Article.update(article.id, {
embedding: await generateEmbedding(article.content),
});
```
### Multiple Embeddings per Node
Nodes can have multiple embedding fields for different purposes:
```typescript
const Product = defineNode("Product", {
schema: z.object({
name: z.string(),
description: z.string(),
imageUrl: z.string(),
// Text embedding for description search
textEmbedding: embedding(1536).optional(),
// Image embedding for visual similarity
imageEmbedding: embedding(512).optional(),
}),
});
```
## Storing Embeddings
Embeddings are stored when creating or updating nodes:
```typescript
// Using OpenAI
import OpenAI from "openai";
const openai = new OpenAI();
async function generateEmbedding(text: string): Promise<number[]> {
const response = await openai.embeddings.create({
model: "text-embedding-ada-002",
input: text,
});
return response.data[0].embedding;
}
// Store with embedding
const embedding = await generateEmbedding("Machine learning fundamentals");
await store.nodes.Document.create({
title: "ML Guide",
content: "Machine learning fundamentals...",
embedding: embedding,
});
```
### Batch Embedding
For bulk operations, batch your embedding API calls:
```typescript
async function batchEmbed(texts: string[]): Promise<number[][]> {
const response = await openai.embeddings.create({
model: "text-embedding-ada-002",
input: texts,
});
return response.data.map((d) => d.embedding);
}
// Process in batches
const documents = await fetchDocumentsWithoutEmbeddings();
const batchSize = 100;
for (let i = 0; i < documents.length; i += batchSize) {
const batch = documents.slice(i, i + batchSize);
const embeddings = await batchEmbed(batch.map((d) => d.content));
await store.transaction(async (tx) => {
for (const [index, doc] of batch.entries()) {
await tx.nodes.Document.update(doc.id, {
embedding: embeddings[index],
});
}
});
}
```
## Querying
### Basic Similarity Search
Use `.similarTo()` to find the k most similar nodes:
```typescript
const queryEmbedding = await generateEmbedding("neural networks");
const similar = await store
.query()
.from("Document", "d")
.whereNode("d", (d) =>
d.embedding.similarTo(queryEmbedding, 10) // Top 10 most similar
)
.select((ctx) => ({
title: ctx.d.title,
content: ctx.d.content,
}))
.execute();
```
### Choosing a Distance Metric
```typescript
// Cosine similarity (default) - best for normalized embeddings
d.embedding.similarTo(queryEmbedding, 10, { metric: "cosine" })
// L2 (Euclidean) distance - for unnormalized embeddings
d.embedding.similarTo(queryEmbedding, 10, { metric: "l2" })
// Inner product - for maximum inner product search (PostgreSQL only)
d.embedding.similarTo(queryEmbedding, 10, { metric: "inner_product" })
```
**When to use each:**
- **Cosine**: Most common choice. Works well with normalized embeddings
(OpenAI, Sentence Transformers). Focuses on direction, not magnitude.
- **L2**: Use when vector magnitude matters. Good for detecting exact
duplicates.
- **Inner product**: For MIPS (maximum inner product search). Useful when
embeddings encode both relevance and importance in magnitude.
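The difference is easy to see on small vectors. A plain-TypeScript sketch, independent of TypeGraph, shows that cosine ignores magnitude while L2 does not:

```typescript
const dot = (a: number[], b: number[]) =>
  a.reduce((sum, v, i) => sum + v * b[i], 0);

const norm = (a: number[]) => Math.sqrt(dot(a, a));

// Cosine distance: 1 - cos(theta). Direction only, length ignored.
const cosineDistance = (a: number[], b: number[]) =>
  1 - dot(a, b) / (norm(a) * norm(b));

// L2 (Euclidean) distance: sensitive to length.
const l2Distance = (a: number[], b: number[]) =>
  Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));

const a = [1, 0];
const scaled = [3, 0]; // same direction, 3x the magnitude

cosineDistance(a, scaled); // 0 — identical direction
l2Distance(a, scaled);     // 2 — the magnitude difference shows up
```

Scaling a vector leaves its cosine distance unchanged but moves its L2 distance, which is why cosine suits normalized embeddings.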
### Minimum Score Filtering
Filter results below a similarity threshold:
```typescript
const highQualityMatches = await store
.query()
.from("Document", "d")
.whereNode("d", (d) =>
d.embedding.similarTo(queryEmbedding, 100, {
metric: "cosine",
minScore: 0.8, // Only results with similarity >= 0.8
})
)
.select((ctx) => ctx.d)
.execute();
```
The `minScore` parameter filters results using **similarity** (not distance):
- **Cosine**: 1.0 = identical, 0.0 = orthogonal. Typical thresholds: 0.7-0.9
- **L2**: Maximum distance to include (lower = more similar)
- **Inner product**: Minimum inner product value
:::note[Similarity vs Distance]
While the underlying database operators use distance (where 0 = identical for cosine),
`minScore` uses similarity semantics for intuitive usage. TypeGraph converts internally:
`distance_threshold = 1 - minScore` for cosine.
:::
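For the cosine case, the conversion can be sketched with two hypothetical helpers (not part of TypeGraph's public API):

```typescript
// A pgvector `<=>` result is a cosine *distance*; minScore is a *similarity*.
const toCosineDistance = (similarity: number): number => 1 - similarity;
const toCosineSimilarity = (distance: number): number => 1 - distance;

// minScore: 0.75 keeps rows whose cosine distance is at most 0.25.
toCosineDistance(0.75); // 0.25
```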
### Combining with Predicates
Semantic search integrates with all standard query predicates:
```typescript
const filteredSearch = await store
.query()
.from("Document", "d")
.whereNode("d", (d) =>
d.embedding
.similarTo(queryEmbedding, 20)
.and(d.category.eq("technology"))
.and(d.publishedAt.gte("2024-01-01"))
.and(d.status.eq("published"))
)
.select((ctx) => ctx.d)
.execute();
```
### Combining with Graph Traversals
Search within graph relationships:
```typescript
// Find similar documents by authors I follow
const personalizedSearch = await store
.query()
.from("Person", "me")
.whereNode("me", (p) => p.id.eq(currentUserId))
.traverse("follows", "f")
.to("Person", "author")
.traverse("authored", "a", { direction: "in" })
.to("Document", "d")
.whereNode("d", (d) =>
d.embedding.similarTo(queryEmbedding, 10)
)
.select((ctx) => ({
title: ctx.d.title,
author: ctx.author.name,
}))
.execute();
```
## Best Practices
### Normalize Your Embeddings
Most embedding models produce normalized vectors (unit length). If yours doesn't,
normalize before storing:
```typescript
function normalize(vector: number[]): number[] {
const magnitude = Math.sqrt(vector.reduce((sum, v) => sum + v * v, 0));
return vector.map((v) => v / magnitude);
}
await store.nodes.Document.create({
title: "Example",
content: "...",
embedding: normalize(rawEmbedding),
});
```
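Normalization also makes the metric choice less critical: for unit vectors, squared L2 distance is exactly twice the cosine distance, so the two metrics rank neighbors identically. A quick check in plain TypeScript:

```typescript
const dot = (a: number[], b: number[]) =>
  a.reduce((sum, v, i) => sum + v * b[i], 0);

const a = [0.6, 0.8]; // unit length: 0.36 + 0.64 = 1
const b = [1, 0];     // unit length

// |a - b|^2 = 2 - 2(a·b) = 2 * (1 - cos) for unit vectors.
const cosDist = 1 - dot(a, b);
const l2Squared = a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0);
// l2Squared equals 2 * cosDist (up to floating-point rounding)
```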
### Use Consistent Embedding Models
Always use the same model for both storing and querying:
```typescript
// Bad: Mixing models
const docEmbedding = await embed("text-embedding-ada-002", content);
const queryEmbedding = await embed("text-embedding-3-small", query); // Different!
// Good: Same model throughout
const MODEL = "text-embedding-ada-002";
const docEmbedding = await embed(MODEL, content);
const queryEmbedding = await embed(MODEL, query);
```
### Handle Missing Embeddings
Not every node will have an embedding, especially when embedding fields are optional. Handle missing values gracefully:
```typescript
// Only search nodes with embeddings
const results = await store
.query()
.from("Document", "d")
.whereNode("d", (d) =>
d.embedding
.isNotNull()
.and(d.embedding.similarTo(queryEmbedding, 10))
)
.select((ctx) => ctx.d)
.execute();
```
### Choose Appropriate k Values
The `k` parameter (number of results to return) affects both query cost and result quality:
```typescript
// For RAG: Small k (3-10) for focused context
d.embedding.similarTo(query, 5)
// For exploration: Larger k with pagination
d.embedding.similarTo(query, 100)
```
### Index Considerations
Vector indexes (HNSW, IVFFlat) trade accuracy for speed:
- **Small datasets (< 10K)**: Exact search is fast enough
- **Medium datasets (10K-1M)**: HNSW provides good recall with fast queries
- **Large datasets (> 1M)**: Consider IVFFlat with appropriate parameters
TypeGraph creates HNSW indexes by default, balancing recall against query speed.
## Troubleshooting
### "Extension not found" errors
**PostgreSQL:**
```sql
-- Check if pgvector is installed
SELECT * FROM pg_extension WHERE extname = 'vector';
-- Install it
CREATE EXTENSION vector;
```
**SQLite:**
```typescript
// Ensure sqlite-vec is loaded before queries
import * as sqliteVec from "sqlite-vec";
sqliteVec.load(sqlite);
```
### "Inner product not supported" (SQLite)
sqlite-vec only supports `cosine` and `l2` metrics. Use one of those instead:
```typescript
// Instead of:
d.embedding.similarTo(query, 10, { metric: "inner_product" })
// Use:
d.embedding.similarTo(query, 10, { metric: "cosine" })
```
### Dimension mismatch errors
Ensure query embedding has the same dimension as stored embeddings:
```typescript
const Document = defineNode("Document", {
schema: z.object({
embedding: embedding(1536), // 1536 dimensions
}),
});
// Query embedding must also be 1536 dimensions
const queryEmbedding = await embed(text); // Verify this returns 1536-dim vector
```
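A cheap guard before writes catches mismatches early. The `assertDimension` helper below is illustrative, not a TypeGraph export:

```typescript
// Fail fast if an embedding API returns the wrong model's output size.
function assertDimension(vector: number[], expected: number): number[] {
  if (vector.length !== expected) {
    throw new Error(
      `Embedding has ${vector.length} dimensions, expected ${expected}`
    );
  }
  return vector;
}
```

Wrapping each embedding in `assertDimension(vec, 1536)` before `create` or `update` turns a silent dimension drift into an immediate, descriptive error.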
### Slow queries
1. **Check index creation**: Vector indexes may not exist
2. **Reduce k**: Smaller k = faster queries
3. **Add filters**: Pre-filter with standard predicates before similarity search
4. **Consider approximate search**: HNSW indexes sacrifice some accuracy for speed
## API Reference
See the [Predicates documentation](/queries/predicates#embedding) for
complete API reference of the `similarTo()` predicate and related options.