Schemas & Stores

This reference documents the schema definition functions and store API for TypeGraph.

defineNode(name, options)

Creates a node type definition.

import { defineNode } from "@nicia-ai/typegraph";

function defineNode<K extends string, S extends z.ZodObject<any>>(
  name: K,
  options: {
    schema: S;
    description?: string;
  },
): NodeType<K, S>;

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| name | string | Unique name for this node type |
| options.schema | z.ZodObject | Zod object schema for node properties |
| options.description | string | Optional description |

Example:

const Person = defineNode("Person", {
  schema: z.object({
    name: z.string(),
    email: z.string().email().optional(),
  }),
  description: "A person in the system",
});

defineEdge(name, options?)

Creates an edge type definition.

import { defineEdge } from "@nicia-ai/typegraph";

function defineEdge<K extends string, S extends z.ZodObject<any>>(
  name: K,
  options?: {
    schema?: S;
    description?: string;
    from?: NodeType[];
    to?: NodeType[];
  },
): EdgeType<K, S>;

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| name | string | Unique name for this edge type |
| options.schema | z.ZodObject | Optional Zod object schema (defaults to empty object) |
| options.description | string | Optional description |
| options.from | NodeType[] | Optional domain constraint (valid source node types) |
| options.to | NodeType[] | Optional range constraint (valid target node types) |

Example:

const worksAt = defineEdge("worksAt", {
  schema: z.object({
    role: z.string(),
    startDate: z.string().optional(),
  }),
});

const knows = defineEdge("knows"); // No schema needed

With Domain/Range Constraints:

When from and to are specified, the edge carries its endpoint constraints intrinsically:

const worksAt = defineEdge("worksAt", {
  schema: z.object({
    role: z.string(),
    startDate: z.string().optional(),
  }),
  from: [Person], // Domain: only Person can be the source
  to: [Company], // Range: only Company can be the target
});

Unconstrained Edges:

Edges without from/to are unconstrained — they can connect any node type to any node type:

const sameAs = defineEdge("sameAs");
const related = defineEdge("related", {
schema: z.object({ reason: z.string() }),
});

Direct use in defineGraph:

Any edge type can be used directly in defineGraph without an EdgeRegistration wrapper:

const graph = defineGraph({
  id: "my_graph",
  nodes: { Person: { type: Person }, Company: { type: Company } },
  edges: {
    worksAt, // Constrained — uses built-in from/to
    sameAs, // Unconstrained — connects any node to any node
  },
});

See Core Concepts for detailed documentation on domain/range constraints.

embedding(dimensions)

Creates a Zod schema for vector embeddings with dimension validation.

import { embedding } from "@nicia-ai/typegraph";
function embedding<D extends number>(dimensions: D): EmbeddingSchema<D>;

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| dimensions | number | The number of dimensions (e.g., 384, 512, 768, 1536, 3072) |

Example:

const Document = defineNode("Document", {
  schema: z.object({
    title: z.string(),
    content: z.string(),
    embedding: embedding(1536), // OpenAI ada-002
  }),
});

// Optional embeddings
const Article = defineNode("Article", {
  schema: z.object({
    content: z.string(),
    embedding: embedding(1536).optional(),
  }),
});

See Semantic Search for query usage.

externalRef(table)

Creates a Zod schema for referencing external data sources. Use this for hybrid overlay patterns where TypeGraph stores relationships while your existing tables remain the source of truth.

import { externalRef } from "@nicia-ai/typegraph";
function externalRef<T extends string>(table: T): ExternalRefSchema<T>;

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| table | string | Identifier for the external table (e.g., "users", "documents") |

Example:

const Document = defineNode("Document", {
  schema: z.object({
    source: externalRef("documents"),
    embedding: embedding(1536).optional(),
  }),
});

// Create with explicit table reference
await store.nodes.Document.create({
  source: { table: "documents", id: "doc_123" },
});

// Query the external reference
const results = await store
  .query()
  .from("Document", "d")
  .select((ctx) => ctx.d.source)
  .execute();
// results[0].source = { table: "documents", id: "doc_123" }

createExternalRef(table)

Factory helper to create external reference values without repeating the table name.

import { createExternalRef } from "@nicia-ai/typegraph";

function createExternalRef<T extends string>(
  table: T,
): (id: string) => ExternalRefValue<T>;

Example:

const docRef = createExternalRef("documents");

await store.nodes.Document.create({
  source: docRef("doc_123"), // { table: "documents", id: "doc_123" }
});

defineGraph(config)

Creates a graph definition combining nodes, edges, and ontology.

import { defineGraph } from "@nicia-ai/typegraph";

function defineGraph<G extends GraphDef>(config: {
  id: string;
  nodes: Record<string, NodeRegistration>;
  edges: Record<string, EdgeRegistration | EdgeType>;
  ontology?: OntologyRelation[];
}): G;

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| id | string | Unique identifier for this graph |
| nodes | Record<string, NodeRegistration> | Node type registrations |
| edges | Record<string, EdgeRegistration \| EdgeType> | Edge registrations or edge types directly |
| ontology | OntologyRelation[] | Optional semantic relationships |

Edge entries can be:

  • EdgeRegistration — explicit { type, from, to } with optional cardinality
  • EdgeType with from/to — uses built-in constraints
  • EdgeType without from/to — unconstrained, connects any node to any node

Example:

const graph = defineGraph({
  id: "my_graph",
  nodes: {
    Person: { type: Person },
    Company: { type: Company, onDelete: "cascade" },
  },
  edges: {
    worksAt: {
      type: worksAt,
      from: [Person],
      to: [Company],
      cardinality: "many",
    },
    sameAs, // Unconstrained — any→any
  },
  ontology: [disjointWith(Person, Company)],
});

createStore(graph, backend, options?)

Creates a store instance for a graph definition.

import { createStore } from "@nicia-ai/typegraph";

function createStore<G extends GraphDef>(
  graph: G,
  backend: GraphBackend,
  options?: StoreOptions,
): Store<G>;

Options:

| Option | Type | Description |
| --- | --- | --- |
| hooks | StoreHooks | Observability hooks for monitoring operations |
| schema | SqlSchema | Custom table name configuration |
| queryDefaults.traversalExpansion | TraversalExpansion | Default ontology expansion mode for traversals (default: "inverse") |

Example:

const store = createStore(graph, backend);

Override the default traversal expansion:

const store = createStore(graph, backend, {
  queryDefaults: { traversalExpansion: "none" },
});

createStoreWithSchema(graph, backend, options?)


Creates a store and ensures the database schema is initialized or migrated. This is the recommended factory for production use.

import { createStoreWithSchema } from "@nicia-ai/typegraph";
function createStoreWithSchema<G extends GraphDef>(
  graph: G,
  backend: GraphBackend,
  options?: StoreOptions & SchemaManagerOptions,
): Promise<[Store<G>, SchemaValidationResult]>;

Returns: A tuple of [store, validationResult]

The validation result indicates what happened:

  • status: "initialized" - Schema created for the first time
  • status: "unchanged" - Schema matches, no changes needed
  • status: "migrated" - Safe changes auto-applied (additive only)
  • status: "pending" - Safe changes detected but autoMigrate is false
  • status: "breaking" - Breaking changes detected, action required

Example:

const [store, result] = await createStoreWithSchema(graph, backend);

if (result.status === "initialized") {
  console.log("Schema initialized at version", result.version);
} else if (result.status === "migrated") {
  console.log(`Migrated from v${result.fromVersion} to v${result.toVersion}`);
} else if (result.status === "pending") {
  console.log(`Safe changes pending at version ${result.version}`);
}

Throws: MigrationError if breaking changes are detected and throwOnBreaking is true (the default).

StoreProjection

A type-level utility that projects a store’s collection surface onto a subset of node and edge keys. Use this to type reusable helpers that work with any store containing a shared subgraph.

import type { StoreProjection } from "@nicia-ai/typegraph";

type CoreStore = StoreProjection<
  typeof myGraph,
  "Document" | "Chunk",
  "hasChunk"
>;

async function ingestChunk(
  store: CoreStore,
  document: Node<typeof Document>,
  text: string,
) {
  const chunk = await store.nodes.Chunk.create({ text });
  await store.edges.hasChunk.create(document, chunk);
  return chunk;
}

Both Store<G> and TransactionContext<G> are structurally assignable to a StoreProjection whose keys are a subset of G. Node constraint names are erased so the projection works across graphs that register the same node types with different unique constraints.

See Shared Subgraph Helpers for a full example with multiple graphs.

The store provides typed node and edge collections via store.nodes.* and store.edges.*.

Node Collections

Each node type has a collection with these methods:

Method names follow the identifier used to match an existing record:

| If you have… | Read-only | Get-or-create |
| --- | --- | --- |
| ID | getById | upsertById |
| Unique constraint name + props | findByConstraint | getOrCreateByConstraint |
| Edge endpoints (from, to) + optional matchOn | findByEndpoints | getOrCreateByEndpoints |

create(props, options?)

Creates a new node.

store.nodes.Person.create(
  props: { name: string; email?: string },
  options?: { id?: string; validFrom?: string; validTo?: string }
): Promise<Node<Person>>;

getById(id)

Retrieves a node by ID.

store.nodes.Person.getById(id: NodeId<Person>): Promise<Node<Person> | undefined>;

getByIds(ids, options?)

Retrieves multiple nodes by ID in a single query. Returns results in input order, with undefined for missing IDs.

store.nodes.Person.getByIds(
  ids: readonly NodeId<Person>[],
  options?: QueryOptions
): Promise<readonly (Node<Person> | undefined)[]>;

When the backend supports batch lookups (getNodes), this executes a single SELECT ... WHERE id IN (...) query. Otherwise it falls back to sequential lookups.

const [alice, bob, unknown] = await store.nodes.Person.getByIds([
  aliceId,
  bobId,
  "nonexistent",
]);
// alice: Node<Person>
// bob: Node<Person>
// unknown: undefined
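The order-preserving contract can be pictured as one batched fetch followed by realignment to the input order. The helper below is an illustrative sketch, not the library's implementation:

```typescript
// Illustrative sketch of the getByIds ordering contract: fetch rows in one
// batch, then realign them to the input order, with undefined for misses.
// alignByIds is a hypothetical helper, not part of TypeGraph.
function alignByIds<T extends { id: string }>(
  ids: readonly string[],
  rows: readonly T[],
): readonly (T | undefined)[] {
  const byId = new Map(rows.map((row) => [row.id, row] as const));
  return ids.map((id) => byId.get(id));
}
```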

update(id, props)

Updates node properties.

store.nodes.Person.update(
  id: NodeId<Person>,
  props: Partial<{ name: string; email?: string }>
): Promise<Node<Person>>;

delete(id)

Soft-deletes a node.

store.nodes.Person.delete(id: NodeId<Person>): Promise<void>;

hardDelete(id)

Permanently deletes a node. This is irreversible and should be used carefully.

store.nodes.Person.hardDelete(id: NodeId<Person>): Promise<void>;

find(options?)

Finds nodes of this kind with optional filtering and pagination.

store.nodes.Person.find(options?: {
  where?: (accessor) => Predicate;
  limit?: number;
  offset?: number;
}): Promise<Node<Person>[]>;

The optional where predicate uses the same accessor API as whereNode() in the query builder:

const activeUsers = await store.nodes.Person.find({
  where: (p) => p.status.eq("active"),
  limit: 50,
});

count()

Counts nodes of this kind (excluding soft-deleted nodes).

store.nodes.Person.count(): Promise<number>;

createFromRecord(data, options?)

Creates a node from untyped data, relying on runtime Zod validation. Use this for dynamic dispatch (changesets, migrations, imports) where the data shape is determined at runtime, not compile time. The return type is fully typed — only the input gate is relaxed.

store.nodes.Person.createFromRecord(
  data: Record<string, unknown>,
  options?: { id?: string; validFrom?: string; validTo?: string }
): Promise<Node<Person>>;

// Data arrives from an external source at runtime
const importedRow: Record<string, unknown> = JSON.parse(line);
const person = await store.nodes.Person.createFromRecord(importedRow);
// person is fully typed as Node<Person>

upsertById(id, props, options?)

Creates or updates a node by ID.

store.nodes.Person.upsertById(
  id: string,
  props: { name: string; email?: string },
  options?: { validFrom?: string; validTo?: string }
): Promise<Node<Person>>;

Behavior:

  • Creates a new node if no node with the ID exists
  • Updates the existing node if one exists
  • Un-deletes soft-deleted nodes (clears deletedAt)
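The three behaviors above boil down to a small decision on the existing row. This is a sketch with a hypothetical Row type, not the actual implementation:

```typescript
// Sketch of the upsertById decision described above (Row is hypothetical):
// no row → create; soft-deleted row → resurrect (clear deletedAt); else update.
type Row = { id: string; deletedAt?: string };

function upsertAction(existing: Row | undefined): "create" | "resurrect" | "update" {
  if (existing === undefined) return "create";
  if (existing.deletedAt !== undefined) return "resurrect";
  return "update";
}
```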

upsertByIdFromRecord(id, data, options?)

Upserts a node from untyped data, relying on runtime Zod validation. Same behavior as upsertById but accepts Record<string, unknown> instead of the typed schema input.

store.nodes.Person.upsertByIdFromRecord(
  id: string,
  data: Record<string, unknown>,
  options?: { validFrom?: string; validTo?: string }
): Promise<Node<Person>>;

// Pre-seeded ID with dynamic data from a changeset
const run = await store.nodes.Run.upsertByIdFromRecord(prepared.runId, {
  status: "running",
  ...dynamicConfig,
});

bulkCreate(items)

Creates multiple nodes efficiently. Uses a single multi-row INSERT when the backend supports it.

store.nodes.Person.bulkCreate(
  items: readonly {
    props: { name: string; email?: string };
    id?: string;
    validFrom?: string;
    validTo?: string;
  }[]
): Promise<Node<Person>[]>;

Use bulkInsert when you don’t need the created nodes back:

await store.nodes.Person.bulkInsert(batch);

bulkInsert(items)

Inserts multiple nodes without returning results. This is the dedicated fast path for bulk ingestion — wrapped in a transaction when the backend supports it.

store.nodes.Person.bulkInsert(
  items: readonly {
    props: { name: string; email?: string };
    id?: string;
    validFrom?: string;
    validTo?: string;
  }[]
): Promise<void>;

bulkUpsertById(items)

Creates or updates multiple nodes by ID.

store.nodes.Person.bulkUpsertById(
  items: readonly {
    id: string;
    props: { name: string; email?: string };
    validFrom?: string;
    validTo?: string;
  }[]
): Promise<Node<Person>[]>;

bulkDelete(ids)

Soft-deletes multiple nodes.

store.nodes.Person.bulkDelete(
  ids: readonly NodeId<Person>[]
): Promise<void>;

getOrCreateByConstraint(constraintName, props, options?)


Looks up an existing node by a named uniqueness constraint. Returns the match if found, or creates a new node if not.

store.nodes.Person.getOrCreateByConstraint(
  constraintName: string,
  props: { name: string; email?: string },
  options?: { ifExists?: "return" | "update" } // Default: "return"
): Promise<{
  node: Node<Person>;
  action: "created" | "found" | "updated" | "resurrected";
}>;

bulkGetOrCreateByConstraint(constraintName, items, options?)


Batch version of getOrCreateByConstraint. Returns results in input order.

store.nodes.Person.bulkGetOrCreateByConstraint(
  constraintName: string,
  items: readonly {
    props: { name: string; email?: string };
  }[],
  options?: { ifExists?: "return" | "update" }
): Promise<
  {
    node: Node<Person>;
    action: "created" | "found" | "updated" | "resurrected";
  }[]
>;

findByConstraint(constraintName, props)

Looks up a node by a named uniqueness constraint without creating. Returns the matching node or undefined. Soft-deleted nodes are excluded.

store.nodes.Person.findByConstraint(
  constraintName: string,
  props: { name: string; email?: string }
): Promise<Node<Person> | undefined>;

const alice = await store.nodes.Person.findByConstraint("email", {
  email: "alice@example.com",
  name: "Alice",
});

if (alice) {
  console.log(alice.id, alice.name);
}

Throws NodeConstraintNotFoundError if the constraint name is not defined on the node type.

bulkFindByConstraint(constraintName, items)


Batch version of findByConstraint. Returns results in input order, with undefined for non-matches. Deduplicates within-batch lookups automatically.

store.nodes.Person.bulkFindByConstraint(
  constraintName: string,
  items: readonly { props: { name: string; email?: string } }[]
): Promise<(Node<Person> | undefined)[]>;

const results = await store.nodes.Person.bulkFindByConstraint("email", [
  { props: { email: "alice@example.com", name: "Alice" } },
  { props: { email: "nobody@example.com", name: "Nobody" } },
  { props: { email: "bob@example.com", name: "Bob" } },
]);
// results[0]: Node<Person> (Alice)
// results[1]: undefined
// results[2]: Node<Person> (Bob)
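The within-batch deduplication can be pictured as mapping each input to a position in a distinct-key list, so each unique key is looked up once while results still come back in input order. An illustrative sketch (the library's actual key derivation may differ):

```typescript
// Illustrative within-batch dedup: each input records an index into the
// distinct-key list, so one backend lookup per unique key suffices and
// results can be fanned back out in input order.
function dedupeLookups(keys: readonly string[]): {
  unique: string[]; // distinct keys, first-seen order
  positions: number[]; // for each input, index into `unique`
} {
  const seen = new Map<string, number>();
  const unique: string[] = [];
  const positions: number[] = [];
  for (const key of keys) {
    let at = seen.get(key);
    if (at === undefined) {
      at = unique.length;
      seen.set(key, at);
      unique.push(key);
    }
    positions.push(at);
  }
  return { unique, positions };
}
```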

Edge Collections

Each edge type has a type-safe collection. The from and to parameters are constrained to only accept node types declared in the edge registration.

create(from, to, props)

Creates an edge. TypeScript enforces valid endpoint types.

// Given: worksAt: { type: worksAt, from: [Person], to: [Company] }
store.edges.worksAt.create(
  from: NodeRef<Person>,
  to: NodeRef<Company>,
  props: { role: string }
): Promise<Edge<worksAt>>;

// Preferred: pass node objects directly
await store.edges.worksAt.create(alice, acme, { role: "Engineer" });

// Compile error - Company is not a valid 'from' type
await store.edges.worksAt.create(acme, alice, { role: "Engineer" });

Both forms are exactly equivalent—TypeGraph extracts kind and id from either:

// Full node object (preferred - cleaner syntax)
await store.edges.worksAt.create(alice, acme, { role: "Engineer" });

// Explicit reference (useful when you only have IDs)
await store.edges.worksAt.create(
  { kind: "Person", id: aliceId },
  { kind: "Company", id: acmeId },
  { role: "Engineer" },
);

Use the explicit { kind, id } form when you have IDs but not the full node objects (e.g., from a previous query or external input).

getById(id)

Retrieves an edge by ID.

store.edges.worksAt.getById(id: EdgeId<worksAt>): Promise<Edge<worksAt> | undefined>;

getByIds(ids, options?)

Retrieves multiple edges by ID in a single query. Returns results in input order, with undefined for missing IDs.

store.edges.worksAt.getByIds(
  ids: readonly EdgeId<worksAt>[],
  options?: QueryOptions
): Promise<readonly (Edge<worksAt> | undefined)[]>;

const [edge1, edge2] = await store.edges.worksAt.getByIds([id1, id2]);

update(id, props, options?)

Updates edge properties.

store.edges.worksAt.update(
  id: EdgeId<worksAt>,
  props: Partial<{ role: string }>,
  options?: { validTo?: string }
): Promise<Edge<worksAt>>;

findFrom(from)

Finds edges from a node.

store.edges.worksAt.findFrom(
  from: NodeRef<Person>
): Promise<Edge<worksAt>[]>;

findTo(to)

Finds edges to a node.

store.edges.worksAt.findTo(
  to: NodeRef<Company>
): Promise<Edge<worksAt>[]>;

batchFindFrom(from) / batchFindTo(to) / batchFindByEndpoints(from, to, options?)


Deferred variants of findFrom, findTo, and findByEndpoints for use with store.batch(). These return a BatchableQuery instead of executing immediately.

store.edges.worksAt.batchFindFrom(from: NodeRef<Person>): BatchableQuery<Edge<worksAt>>;
store.edges.worksAt.batchFindTo(to: NodeRef<Company>): BatchableQuery<Edge<worksAt>>;
store.edges.worksAt.batchFindByEndpoints(
  from: NodeRef<Person>,
  to: NodeRef<Company>,
  options?: { matchOn?: readonly string[]; props?: Partial<{ role: string }> }
): BatchableQuery<Edge<worksAt>>;

// Execute multiple edge lookups over a single connection
const [skills, employer] = await store.batch(
  store.edges.hasSkill.batchFindFrom(alice),
  store.edges.worksAt.batchFindFrom(alice),
);

batchFindByEndpoints returns a 0-or-1 element array (matching the at-most-one semantics of findByEndpoints).

find(options?)

Finds edges with endpoint filtering.

store.edges.worksAt.find(options?: {
  from?: NodeRef<Person>;
  to?: NodeRef<Company>;
  limit?: number;
  offset?: number;
}): Promise<Edge<worksAt>[]>;

For edge property filters, use the query builder with whereEdge(...).

count(options?)

Counts edges matching filters.

store.edges.worksAt.count(options?: {
  from?: NodeRef<Person>;
  to?: NodeRef<Company>;
}): Promise<number>;

delete(id)

Soft-deletes an edge.

store.edges.worksAt.delete(id: EdgeId<worksAt>): Promise<void>;

hardDelete(id)

Permanently deletes an edge. This is irreversible and should be used carefully.

store.edges.worksAt.hardDelete(id: EdgeId<worksAt>): Promise<void>;

bulkCreate(items)

Creates multiple edges efficiently. Uses a single multi-row INSERT when the backend supports it.

store.edges.worksAt.bulkCreate(
  items: readonly {
    from: NodeRef<Person>;
    to: NodeRef<Company>;
    props?: { role: string };
    id?: string;
    validFrom?: string;
    validTo?: string;
  }[]
): Promise<Edge<worksAt>[]>;

Use bulkInsert for high-volume edge ingestion when you do not need returned payloads:

await store.edges.worksAt.bulkInsert(edgeBatch);

bulkInsert(items)

Inserts multiple edges without returning results. This is the dedicated fast path for bulk ingestion — wrapped in a transaction when the backend supports it.

store.edges.worksAt.bulkInsert(
  items: readonly {
    from: NodeRef<Person>;
    to: NodeRef<Company>;
    props?: { role: string };
    id?: string;
    validFrom?: string;
    validTo?: string;
  }[]
): Promise<void>;

bulkDelete(ids)

Soft-deletes multiple edges.

store.edges.worksAt.bulkDelete(
  ids: readonly EdgeId<worksAt>[]
): Promise<void>;

bulkUpsertById(items)

Creates or updates multiple edges by ID.

store.edges.worksAt.bulkUpsertById(
  items: readonly {
    id: EdgeId<worksAt>;
    from: NodeRef<Person>;
    to: NodeRef<Company>;
    props?: { role: string };
    validFrom?: string;
    validTo?: string;
  }[]
): Promise<Edge<worksAt>[]>;

getOrCreateByEndpoints(from, to, props, options?)


Looks up an existing edge by endpoints (and optionally by property fields via matchOn). Returns the match if found, or creates a new edge if not.

store.edges.worksAt.getOrCreateByEndpoints(
  from: NodeRef<Person>,
  to: NodeRef<Company>,
  props: { role: string },
  options?: {
    matchOn?: readonly ("role")[]; // Default: []
    ifExists?: "return" | "update"; // Default: "return"
  }
): Promise<{
  edge: Edge<worksAt>;
  action: "created" | "found" | "updated" | "resurrected";
}>;

bulkGetOrCreateByEndpoints(items, options?)


Batch version of getOrCreateByEndpoints. Returns results in input order.

store.edges.worksAt.bulkGetOrCreateByEndpoints(
  items: readonly {
    from: NodeRef<Person>;
    to: NodeRef<Company>;
    props: { role: string };
  }[],
  options?: {
    matchOn?: readonly ("role")[];
    ifExists?: "return" | "update";
  }
): Promise<
  {
    edge: Edge<worksAt>;
    action: "created" | "found" | "updated" | "resurrected";
  }[]
>;

findByEndpoints(from, to, options?)

Looks up an edge by its endpoints without creating. Returns the matching edge or undefined. Soft-deleted edges are excluded.

When matchOn is omitted, returns the first live edge between the two endpoints. When matchOn is provided, filters by the specified property fields.

store.edges.knows.findByEndpoints(
  from: NodeRef<Person>,
  to: NodeRef<Person>,
  options?: {
    matchOn?: readonly ("relationship" | "since")[];
    props?: Partial<{ relationship: string; since: string }>;
  }
): Promise<Edge<knows> | undefined>;

// Find any edge between Alice and Bob
const edge = await store.edges.knows.findByEndpoints(alice, bob);

// Find the specific "colleague" edge between Alice and Bob
const colleague = await store.edges.knows.findByEndpoints(alice, bob, {
  matchOn: ["relationship"],
  props: { relationship: "colleague" },
});

transaction(callback)

Executes a callback within an atomic transaction. All operations succeed together or are rolled back together. The transaction context (tx) provides the same nodes.* and edges.* collection API as the store itself.

await store.transaction(async (tx) => {
  const person = await tx.nodes.Person.create({ name: "Alice" });
  const company = await tx.nodes.Company.create({ name: "Acme" });
  await tx.edges.worksAt.create(person, company, { role: "Engineer" });
});

The callback’s return value is forwarded to the caller:

const personId = await store.transaction(async (tx) => {
  const person = await tx.nodes.Person.create({ name: "Alice" });
  return person.id;
});
// personId is available here

If the callback throws, the transaction is rolled back and the error re-throws to the caller. No partial writes are persisted.

try {
  await store.transaction(async (tx) => {
    await tx.nodes.Person.create({ name: "Alice" });
    throw new Error("something went wrong");
    // Alice is NOT persisted — the entire transaction is rolled back
  });
} catch (error) {
  // error.message === "something went wrong"
}

Transactions do not nest. The transaction context intentionally omits the transaction() method, so attempting to start a transaction inside another transaction is a compile-time error. If you need to compose transactional operations, pass the tx context through your call chain.
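For example, a helper can accept the transaction context as a parameter. TxLike below is a minimal hypothetical structural stand-in for the context type, used to keep the sketch self-contained:

```typescript
// Composing transactional work by passing the context through the call
// chain. TxLike is a hypothetical structural subset of the tx context,
// not a real TypeGraph type.
type TxLike = {
  nodes: {
    Person: { create(props: { name: string }): Promise<{ id: string }> };
  };
};

async function createPerson(tx: TxLike, name: string): Promise<string> {
  const person = await tx.nodes.Person.create({ name });
  return person.id;
}

// Inside a transaction: await store.transaction((tx) => createPerson(tx, "Alice"));
```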

Not all backends support atomic transactions. Cloudflare D1, for example, does not — calling store.transaction() on a D1-backed store throws a ConfigurationError. Check support at runtime with:

if (backend.capabilities.transactions) {
  await store.transaction(async (tx) => { /* ... */ });
} else {
  // Fall back to individual operations with manual error handling
}

clear()

Hard-deletes all data for the current graph: nodes, edges, uniqueness entries, embeddings, and schema versions. Resets collection caches so the store is immediately reusable.

store.clear(): Promise<void>;

Wrapped in a transaction when the backend supports it. Does not affect other graphs sharing the same backend.

// Wipe all data and start fresh
await store.clear();

// Store is immediately reusable
const person = await store.nodes.Person.create({ name: "Alice" });

batch(...queries)

Executes multiple independent queries over a single connection with snapshot consistency. Accepts two or more queries (from .select(), set operations, or edge collection batchFind* methods) and returns a typed tuple of results preserving input order.

All queries run within an implicit transaction — they see the same database snapshot. This avoids connection pool pressure from Promise.all patterns (N connections → 1) while giving each query independent projection, filtering, sorting, and pagination.

store.batch<R1, R2, ...Rn>(
  q1: BatchableQuery<R1>,
  q2: BatchableQuery<R2>,
  ...qn: BatchableQuery<Rn>,
): Promise<readonly [readonly R1[], readonly R2[], ...readonly Rn[]]>;

Example:

const [people, companies] = await store.batch(
  store
    .query()
    .from("Person", "p")
    .whereNode("p", (p) => p.status.eq("active"))
    .select((ctx) => ({ id: ctx.p.id, name: ctx.p.name })),
  store
    .query()
    .from("Company", "c")
    .select((ctx) => ({ id: ctx.c.id, name: ctx.c.name }))
    .orderBy("c", "name", "asc")
    .limit(5),
);
// people: readonly { id: string; name: string }[]
// companies: readonly { id: string; name: string }[]

With traversals and mixed projections:

const [skills, artifacts, recentGoals] = await store.batch(
  store
    .query()
    .from("Agent", "a")
    .whereNode("a", (a) => a.id.eq(agentId))
    .traverse("has_skill", "e")
    .to("Skill", "s")
    .select((ctx) => ({ id: ctx.s.id, name: ctx.s.name })),
  store
    .query()
    .from("Agent", "a")
    .whereNode("a", (a) => a.id.eq(agentId))
    .traverse("references", "ref")
    .to("Artifact", "art")
    .select((ctx) => ({
      id: ctx.art.id,
      title: ctx.art.title,
      pin: ctx.ref.activeVersionId,
    })),
  store
    .query()
    .from("Agent", "a")
    .whereNode("a", (a) => a.id.eq(agentId))
    .traverse("has_goal", "e")
    .to("Goal", "g")
    .select((ctx) => ({ id: ctx.g.id, name: ctx.g.name }))
    .orderBy("g", "name", "asc")
    .limit(10),
);

Set operations work too:

const [combined, separate] = await store.batch(
  store
    .query()
    .from("Person", "p")
    .whereNode("p", (p) => p.role.eq("admin"))
    .select((ctx) => ({ id: ctx.p.id, name: ctx.p.name }))
    .union(
      store
        .query()
        .from("Person", "p")
        .whereNode("p", (p) => p.role.eq("owner"))
        .select((ctx) => ({ id: ctx.p.id, name: ctx.p.name })),
    ),
  store
    .query()
    .from("Company", "c")
    .select((ctx) => ({ id: ctx.c.id, name: ctx.c.name })),
);

Edge collection lookups:

// Edge batchFind* methods return BatchableQuery — mix freely with fluent queries
const [skills, employer, colleague] = await store.batch(
  store.edges.hasSkill.batchFindFrom(alice),
  store.edges.worksAt.batchFindFrom(alice),
  store.edges.knows.batchFindByEndpoints(alice, bob),
);

| Pattern | Use |
| --- | --- |
| Multiple queries with different shapes/filters | store.batch() |
| Load entity with all relationships (uniform) | store.subgraph() |
| Single query | .execute() directly |
| Writes interleaved with reads | store.transaction() |
| Same-shape queries merged into one result | .union() / .intersect() / .except() |

subgraph(rootId, options)

Extracts a typed subgraph by performing a BFS traversal from a root node, following the specified edge kinds. Returns an indexed result with adjacency maps for immediate traversal.

Under the hood, this compiles to a single WITH RECURSIVE CTE — the traversal, filtering, and hydration all happen in the database.

store.subgraph<EK, NK>(
  rootId: NodeId<AllNodeTypes<G>>,
  options: SubgraphOptions<G, EK, NK>,
): Promise<SubgraphResult<G, NK, EK>>;

Options:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| edges | readonly EK[] | (required) | Edge kinds to follow during traversal |
| maxDepth | number | 10 | Maximum traversal depth from root (capped at MAX_RECURSIVE_DEPTH) |
| includeKinds | readonly NK[] | all kinds | Node kinds to include in the result. Other kinds are traversed through but omitted from output |
| excludeRoot | boolean | false | Exclude the root node from the result |
| direction | "out" \| "both" | "out" | "out" follows edges in their defined direction; "both" treats edges as undirected |
| cyclePolicy | "prevent" \| "allow" | "prevent" | Whether to detect and skip cycles during traversal |
| project | { nodes?, edges? } | (none) | Per-kind field projection — see Projection below |

Result:

type SubgraphResult<G, NK, EK> = Readonly<{
  root: SubgraphNodeResult<G, NK> | undefined;
  nodes: ReadonlyMap<string, SubgraphNodeResult<G, NK>>;
  adjacency: ReadonlyMap<string, ReadonlyMap<EK, readonly SubgraphEdgeResult<G, EK>[]>>;
  reverseAdjacency: ReadonlyMap<string, ReadonlyMap<EK, readonly SubgraphEdgeResult<G, EK>[]>>;
}>;

| Field | Description |
| --- | --- |
| root | The root node, or undefined if it was not found or excludeRoot is set |
| nodes | All reachable nodes keyed by string ID |
| adjacency | Forward adjacency: fromId → edgeKind → edges[] |
| reverseAdjacency | Reverse adjacency: toId → edgeKind → edges[] |

Edges are only included when both endpoints appear in the result set. Soft-deleted nodes and edges are automatically excluded. Duplicate nodes (reachable via multiple paths) are deduplicated.
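The traversal semantics (depth-limited BFS, each node visited once, only the listed edge kinds followed) can be sketched over plain in-memory data, setting the SQL implementation aside:

```typescript
// In-memory sketch of subgraph()'s reachability semantics: breadth-first,
// capped at maxDepth, deduplicated, following only the `follow` edge kinds.
// Illustrative only — the real implementation runs as a recursive CTE.
type SimpleEdge = { kind: string; from: string; to: string };

function reachable(
  root: string,
  edges: readonly SimpleEdge[],
  follow: readonly string[],
  maxDepth: number,
): Set<string> {
  const seen = new Set<string>([root]);
  let frontier = [root];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const id of frontier) {
      for (const edge of edges) {
        if (edge.from === id && follow.includes(edge.kind) && !seen.has(edge.to)) {
          seen.add(edge.to);
          next.push(edge.to);
        }
      }
    }
    frontier = next;
  }
  return seen;
}
```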

Example:

const sg = await store.subgraph(run.id, {
  edges: ["has_task", "runs_agent", "uses_skill"],
  maxDepth: 4,
});

// Root node (the traversal starting point)
console.log(sg.root?.kind);

// Lookup by ID
const task = sg.nodes.get(taskId);

// Forward adjacency: edges of a kind from a node
const taskEdges = sg.adjacency.get(String(run.id))?.get("has_task") ?? [];
const tasks = taskEdges.map((edge) => sg.nodes.get(String(edge.toId)));

// Reverse adjacency: edges of a kind pointing to a node
const parentEdges = sg.reverseAdjacency.get(taskId)?.get("has_task") ?? [];

// Narrow by kind with a switch
for (const node of sg.nodes.values()) {
  switch (node.kind) {
    case "Task": {
      console.log(node.title, node.status);
      break;
    }
    case "Agent": {
      console.log(node.model);
      break;
    }
  }
}

Filtering to specific node kinds:

const tasksOnly = await store.subgraph(run.id, {
  edges: ["has_task", "depends_on"],
  includeKinds: ["Task"],
  excludeRoot: true,
});
// tasksOnly.nodes values are typed as Node<typeof Task>

Bidirectional traversal:

// Find all nodes connected to a skill, regardless of edge direction
const neighborhood = await store.subgraph(skill.id, {
edges: ["uses_skill", "has_task"],
direction: "both",
maxDepth: 3,
});

Projection

By default, subgraph() returns fully hydrated nodes and edges. The project option lets you specify which properties to keep per kind, reducing payload size and enabling SQL-level field extraction via json_extract() / JSONB paths.

const result = await store.subgraph(rootId, {
  edges: ["has_task", "uses_skill"],
  maxDepth: 2,
  project: {
    nodes: {
      Task: ["title", "meta"],
      Skill: ["name"],
    },
    edges: {
      uses_skill: ["priority"],
    },
  },
});
// Task → { kind, id, title, meta } — status omitted, compile-time error to access
// Skill → { kind, id, name }
// uses_skill → { id, kind, fromKind, fromId, toKind, toId, priority }

Projection rules:

  • Projected nodes always retain kind and id; projected edges always retain structural fields (id, kind, fromKind, fromId, toKind, toId).
  • Kinds omitted from project remain fully hydrated.
  • Include "meta" in the field list for the full metadata object, or omit it entirely. No partial metadata selection — the struct is small enough that subsetting adds complexity without savings.
  • Node projection keys must exist in includeKinds (or be any node kind when includeKinds is omitted). Edge projection keys must be in edges. Out-of-scope keys are a compile-time error.

Type narrowing:

Result types narrow per-kind based on the projection. Accessing an omitted field is a compile-time error:

for (const node of result.nodes.values()) {
  if (node.kind === "Task") {
    console.log(node.title); // OK
    console.log(node.status); // TypeScript error — status was not projected
  }
}

When storing a projection config in a variable, TypeScript widens field arrays to string[], defeating compile-time narrowing. Use defineSubgraphProject() to preserve literal types:

import { defineSubgraphProject } from "@nicia-ai/typegraph";

const agentProjection = defineSubgraphProject<typeof graph>()({
  nodes: {
    Task: ["title", "status"],
    Skill: ["name"],
  },
  edges: {
    uses_skill: ["priority"],
  },
});

// Reuse across calls — types are preserved
const result = await store.subgraph(rootId, {
  edges: ["has_task", "uses_skill"],
  project: agentProjection,
});
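The widening itself is standard TypeScript behavior, not anything TypeGraph-specific. A minimal sketch of what goes wrong and how it is avoided — `as const` stands in here for what an inference-preserving helper like defineSubgraphProject() accomplishes:

```typescript
// Plain TypeScript widening, independent of TypeGraph: an array literal
// assigned to an ordinary variable widens its members to string.
const widened = ["title", "status"]; // inferred as string[]

// `as const` (or an inference-preserving helper) keeps the literal tuple type.
const preserved = ["title", "status"] as const; // readonly ["title", "status"]

// The difference is purely at the type level; the runtime values are identical.
console.log(widened.length === preserved.length); // true
```

With `string[]`, the type system can no longer tell which fields were projected, so per-kind narrowing falls back to the widest possible shape.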

TypeGraph offers several ways to load related data. The right choice depends on your access pattern:

| Pattern | Best strategy | Why |
| --- | --- | --- |
| Load entity with all relationships | subgraph(maxDepth: 1) | Single SQL round trip — fans out across all edge types in one recursive CTE |
| Load entity with deep chain | subgraph(maxDepth: N) | Recursive CTE handles multi-hop in one query |
| Filter/sort within a relationship | .query().traverse() | Fluent query supports WHERE/ORDER/LIMIT on target nodes |
| Multiple independent queries with per-query control | store.batch() | Single connection, snapshot consistency, typed tuple results |
| Check if an edge exists | edges.X.findFrom() | Lightweight — no node resolution needed |
| Traverse + resolve one edge type | edges.X.findFrom() + nodes.X.getByIds() | Two queries, simple and explicit |

Key insight: subgraph() issues a single SQL statement regardless of how many edge types it traverses. Parallel findFrom calls scale linearly in round trips — one per edge type, plus additional queries for node resolution. The gap widens as relationship count grows.

For the common “load an entity and everything it touches” pattern (detail pages, config hydration, template instantiation), subgraph() with maxDepth: 1 is the fastest approach. When you need per-query filtering, sorting, or pagination across multiple independent queries, use store.batch() to run them over a single connection with snapshot consistency. Reserve individual fluent queries for one-off operations.
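The "traverse + resolve one edge type" pattern from the table can be sketched end to end. The store calls are mocked with in-memory data so the block is self-contained; in real code the two steps would be an edge query like store.edges.worksAt.findFrom() followed by a batch node load like store.nodes.Company.getByIds() (the worksAt/Company names are borrowed from earlier examples, not required by the API):

```typescript
// Sketch of the two-query pattern. The data below mocks query results;
// a real store would answer each step from SQL.
type WorksAtEdge = { fromId: string; toId: string };
type Company = { id: string; name: string };

// Mocked result of query 1 (edges only — no node resolution)
const worksAtEdges: WorksAtEdge[] = [
  { fromId: "person-1", toId: "co-1" },
  { fromId: "person-1", toId: "co-2" },
];

// Mocked backing data for query 2 (batch node load by id)
const companiesById = new Map<string, Company>([
  ["co-1", { id: "co-1", name: "Acme" }],
  ["co-2", { id: "co-2", name: "Globex" }],
]);

// Query 2: resolve all target ids in one call, then drop any misses.
const employers = worksAtEdges
  .map((e) => companiesById.get(e.toId))
  .filter((c): c is Company => c !== undefined);

console.log(employers.map((c) => c.name)); // [ 'Acme', 'Globex' ]
```

Note how this costs two round trips for one edge type; each additional edge type adds more, which is the linear scaling the paragraph above describes.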

Creates a query builder. See Query Builder for full documentation.

const results = await store
  .query()
  .from("Person", "p")
  .whereNode("p", (p) => p.name.startsWith("A"))
  .select((ctx) => ctx.p)
  .execute();

Execution methods (see Execute for details):

| Method | Returns | Description |
| --- | --- | --- |
| execute() | Promise<readonly T[]> | Run query, return all results |
| first() | Promise<T \| undefined> | Return first result or undefined |
| count() | Promise<number> | Count matching results |
| exists() | Promise<boolean> | Check if any results exist |
| paginate(options) | Promise<PaginatedResult<T>> | Cursor-based pagination |
| stream(options?) | AsyncIterable<T> | Stream results in batches |
| prepare() | PreparedQuery<T> | Pre-compile query for repeated execution with parameters |
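The paginate() drain loop can be sketched with a mocked page function. The real PaginatedResult shape and option names may differ (see the Execute docs); what the sketch shows is the general cursor pattern — request a page, keep its items, and repeat with the returned cursor until none remains:

```typescript
// Mock of cursor-based pagination over an in-memory array. The shape
// { items, nextCursor } is illustrative, not TypeGraph's actual type.
type Page<T> = { items: T[]; nextCursor?: number };

function paginateMock<T>(all: readonly T[], opts: { cursor?: number; limit: number }): Page<T> {
  const start = opts.cursor ?? 0;
  const items = all.slice(start, start + opts.limit);
  const next = start + opts.limit;
  return { items, nextCursor: next < all.length ? next : undefined };
}

const names = ["Alice", "Ann", "Amy", "Abe", "Ada"];
const seen: string[] = [];
let cursor: number | undefined;

// Drain until the page comes back without a cursor.
do {
  const page = paginateMock(names, { cursor, limit: 2 });
  seen.push(...page.items);
  cursor = page.nextCursor;
} while (cursor !== undefined);

console.log(seen.length); // 5
```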

Execute multiple queries over a single connection. See Batch Query Execution.

The typed store.nodes.* and store.edges.* accessors require the kind name at compile time. When the kind is determined at runtime — iterating all kinds, resolving a node from edge metadata, building admin UIs or snapshot tools — use getNodeCollection and getEdgeCollection instead.

Returns the DynamicNodeCollection for the given kind, or undefined if the kind is not registered in this graph.

import { getNodeKinds } from "@nicia-ai/typegraph";

// Count every node kind
const counts: Record<string, number> = {};
for (const kind of getNodeKinds(graph)) {
  const collection = store.getNodeCollection(kind);
  if (collection) {
    counts[kind] = await collection.count();
  }
}

// Resolve a node from edge metadata
const collection = store.getNodeCollection(edge.fromKind);
const node = await collection?.getById(edge.fromId);

Returns the DynamicEdgeCollection for the given kind, or undefined if the kind is not registered in this graph.

import { getEdgeKinds } from "@nicia-ai/typegraph";

// Snapshot all edges
for (const kind of getEdgeKinds(graph)) {
  const collection = store.getEdgeCollection(kind);
  if (collection) {
    const edges = await collection.find({ limit: 10_000 });
    snapshot.push(...edges);
  }
}

The returned collections expose the full API (create, getById, find, count, createFromRecord, etc.) with widened generics — see DynamicNodeCollection and DynamicEdgeCollection.

Access to the type registry for ontology lookups. The registry is an internal type; use store.registry directly without importing its type.

See Ontology for registry methods.

TypeGraph supports observability hooks for monitoring and logging store operations.

Configuration for observability callbacks:

import type {
  HookContext,
  QueryHookContext,
  OperationHookContext,
  StoreHooks,
} from "@nicia-ai/typegraph";

type StoreHooks = Readonly<{
  onQueryStart?: (ctx: QueryHookContext) => void;
  onQueryEnd?: (ctx: QueryHookContext, result: { rowCount: number; durationMs: number }) => void;
  onOperationStart?: (ctx: OperationHookContext) => void;
  onOperationEnd?: (ctx: OperationHookContext, result: { durationMs: number }) => void;
  onError?: (ctx: HookContext, error: Error) => void;
}>;

type HookContext = Readonly<{
  operationId: string;
  graphId: string;
  startedAt: Date;
}>;

type QueryHookContext = HookContext &
  Readonly<{
    sql: string;
    params: readonly unknown[];
  }>;

type OperationHookContext = HookContext &
  Readonly<{
    operation: "create" | "update" | "delete";
    entity: "node" | "edge";
    kind: string;
    id: string;
  }>;

Note: Batch operations (bulkCreate, bulkInsert, bulkUpsertById) skip per-item operation hooks for throughput. Query hooks still fire normally.

Example:

import { createStore, type StoreHooks } from "@nicia-ai/typegraph";

const hooks: StoreHooks = {
  onQueryStart: (ctx) => {
    console.log(`[${ctx.operationId}] SQL: ${ctx.sql}`);
  },
  onQueryEnd: (ctx, result) => {
    console.log(`[${ctx.operationId}] ${result.rowCount} rows in ${result.durationMs}ms`);
  },
  onOperationStart: (ctx) => {
    console.log(`[${ctx.operationId}] ${ctx.operation} ${ctx.entity}:${ctx.kind}`);
  },
  onOperationEnd: (ctx, result) => {
    console.log(`[${ctx.operationId}] Completed in ${result.durationMs}ms`);
  },
  onError: (ctx, error) => {
    console.error(`[${ctx.operationId}] Error:`, error.message);
  },
};

const store = createStore(graph, backend, { hooks });

// Operations now trigger hooks
await store.nodes.Person.create({ name: "Alice" });
// Logs:
// [op-abc123] create node:Person
// [op-abc123] SQL: INSERT INTO ...
// [op-abc123] 1 rows in 2ms
// [op-abc123] Completed in 5ms
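Because hooks are plain callbacks, they compose naturally with metrics collection rather than just logging. A sketch using the onQueryEnd signature from the StoreHooks type above — the hook calls are simulated here so the block stands alone, and the median calculation is an illustrative stand-in for a real metrics pipeline:

```typescript
// Collect per-query durations via an onQueryEnd-shaped callback and report
// a rough median. Only the callback signature mirrors StoreHooks; the rest
// is sketch.
const durationsMs: number[] = [];

const metricsHooks = {
  onQueryEnd: (
    _ctx: { operationId: string },
    result: { rowCount: number; durationMs: number },
  ): void => {
    durationsMs.push(result.durationMs);
  },
};

// Simulate three completed queries (a real store would invoke the hook).
metricsHooks.onQueryEnd({ operationId: "op-1" }, { rowCount: 1, durationMs: 2 });
metricsHooks.onQueryEnd({ operationId: "op-2" }, { rowCount: 5, durationMs: 8 });
metricsHooks.onQueryEnd({ operationId: "op-3" }, { rowCount: 0, durationMs: 4 });

const sorted = [...durationsMs].sort((a, b) => a - b);
const median = sorted[Math.floor(sorted.length / 2)];
console.log(median); // 4
```

Keep hook bodies cheap: they run synchronously on every query, so heavy work belongs in a buffer that is flushed elsewhere.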