
Backend Setup

TypeGraph stores graph data in your existing relational database using Drizzle ORM adapters. This guide covers setting up SQLite and PostgreSQL backends.

SQLite is ideal for development, testing, single-server deployments, and embedded applications.

For development and testing, use the convenience function that handles everything:

import { createLocalSqliteBackend } from "@nicia-ai/typegraph/sqlite/local";
import { createStore } from "@nicia-ai/typegraph";
// In-memory database (resets on restart)
const { backend } = createLocalSqliteBackend();
const store = createStore(graph, backend);

// Or a file-based database (persisted); `db` is the underlying Drizzle instance
const { backend, db } = createLocalSqliteBackend({ path: "./app.db" });
const store = createStore(graph, backend);

For full control over the database connection:

import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import { createSqliteBackend, generateSqliteMigrationSQL } from "@nicia-ai/typegraph/sqlite";
import { createStoreWithSchema } from "@nicia-ai/typegraph";
// Create and configure the database
const sqlite = new Database("app.db");
sqlite.pragma("journal_mode = WAL"); // Recommended for performance
sqlite.pragma("foreign_keys = ON");
// Create Drizzle instance and backend
const db = drizzle(sqlite);
const backend = createSqliteBackend(db);
// createStoreWithSchema auto-creates tables on first run
const [store] = await createStoreWithSchema(graph, backend);
// Clean up when done
process.on("exit", () => sqlite.close());

If you need to run DDL yourself (e.g. via a migration tool), use generateSqliteMigrationSQL() with createStore() instead:

sqlite.exec(generateSqliteMigrationSQL());
const store = createStore(graph, backend);

For semantic search, use sqlite-vec:

import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import { createSqliteBackend, generateSqliteMigrationSQL } from "@nicia-ai/typegraph/sqlite";
const sqlite = new Database("app.db");
// Load sqlite-vec extension
sqlite.loadExtension("vec0");
// Run migrations (includes vector index setup)
sqlite.exec(generateSqliteMigrationSQL());
const db = drizzle(sqlite);
const backend = createSqliteBackend(db);

See Semantic Search for query examples.

For edge deployments, shared-driver setups, or Turso cloud databases, use the first-class libsql backend:

npm install @libsql/client
import { createClient } from "@libsql/client";
import { createLibsqlBackend } from "@nicia-ai/typegraph/sqlite/libsql";
import { createStore } from "@nicia-ai/typegraph";
// Local file
const client = createClient({ url: "file:app.db" });
// Or remote Turso database
// const client = createClient({ url: "libsql://my-db.turso.io", authToken: "..." });
const { backend, db } = await createLibsqlBackend(client);
const store = createStore(graph, backend);

createLibsqlBackend handles DDL execution and configures the correct async execution profile automatically. It returns both the backend and the underlying Drizzle db instance for direct SQL access. The caller retains ownership of the client and is responsible for closing it when done — this allows sharing a single client across TypeGraph and other libraries.

createLocalSqliteBackend creates a SQLite backend with automatic database and schema setup.

function createLocalSqliteBackend(options?: {
  path?: string; // Database path; defaults to ":memory:"
  tables?: SqliteTables;
}): { backend: GraphBackend; db: BetterSQLite3Database };

createSqliteBackend creates a SQLite backend from an existing Drizzle database instance.

function createSqliteBackend(
  db: BetterSQLite3Database,
  options?: { tables?: SqliteTables }
): GraphBackend;

generateSqliteMigrationSQL returns the SQL for creating TypeGraph's tables in SQLite.

function generateSqliteMigrationSQL(): string;

createLibsqlBackend creates a SQLite backend from a @libsql/client instance and runs DDL automatically. The caller retains ownership of the client and is responsible for closing it.

async function createLibsqlBackend(
  client: Client,
  options?: { tables?: SqliteTables }
): Promise<{ backend: GraphBackend; db: LibSQLDatabase }>;

PostgreSQL is recommended for production deployments with concurrent access, large datasets, or when you need advanced features like pgvector.

import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { createPostgresBackend } from "@nicia-ai/typegraph/postgres";
import { createStoreWithSchema } from "@nicia-ai/typegraph";
// Create connection pool
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20, // Maximum connections
});
// Create Drizzle instance and backend
const db = drizzle(pool);
const backend = createPostgresBackend(db);
// createStoreWithSchema auto-creates tables on first run
const [store] = await createStoreWithSchema(graph, backend);

If you manage DDL externally, use generatePostgresMigrationSQL() with createStore():

import { generatePostgresMigrationSQL } from "@nicia-ai/typegraph/postgres";
await pool.query(generatePostgresMigrationSQL());
const store = createStore(graph, backend);

For semantic search, enable pgvector:

import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { createPostgresBackend, generatePostgresMigrationSQL } from "@nicia-ai/typegraph/postgres";
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
// Migration SQL includes pgvector extension
await pool.query(generatePostgresMigrationSQL());
// Runs: CREATE EXTENSION IF NOT EXISTS vector;
const db = drizzle(pool);
const backend = createPostgresBackend(db);

See Semantic Search for query examples.

For production, always use connection pooling:

import { Pool } from "pg";
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20, // Maximum pool size
  idleTimeoutMillis: 30000, // Close idle connections after 30s
  connectionTimeoutMillis: 2000, // Timeout for new connections
});
// Handle pool errors
pool.on("error", (err) => {
  console.error("Unexpected pool error", err);
});
// Graceful shutdown
process.on("SIGTERM", async () => {
  await pool.end();
  process.exit(0);
});

createPostgresBackend creates a PostgreSQL backend adapter.

function createPostgresBackend(
  db: NodePgDatabase,
  options?: { tables?: PostgresTables }
): GraphBackend;

generatePostgresMigrationSQL returns the SQL for creating TypeGraph's tables in PostgreSQL, including the pgvector extension.

function generatePostgresMigrationSQL(): string;

generatePostgresDDL returns the individual DDL statements (CREATE TABLE, CREATE INDEX) as an array. This is useful when you need per-statement control, for example to execute statements in separate transactions or log them individually.

function generatePostgresDDL(tables?: PostgresTables): string[];
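Since the statements are plain SQL strings, per-statement control is just a loop. The sketch below stubs out both the DDL list and the executor so the control flow stands alone; in a real setup the statements would come from generatePostgresDDL() and `exec` would be something like `(sql) => pool.query(sql)`. `runDDLStatements` and `fakeExec` are illustrative names, not TypeGraph APIs.

```typescript
// Sketch: apply an array of DDL statements one at a time. Each call to
// `exec` could be wrapped in its own transaction or logged individually.
type ExecFn = (sql: string) => Promise<void>;

async function runDDLStatements(statements: string[], exec: ExecFn): Promise<number> {
  let count = 0;
  for (const stmt of statements) {
    await exec(stmt); // per-statement: transaction, logging, retry, etc.
    count += 1;
  }
  return count;
}

// Stubbed usage with a recording executor:
const executed: string[] = [];
const fakeExec: ExecFn = async (sql) => {
  executed.push(sql);
};
const ddl = [
  "CREATE TABLE IF NOT EXISTS nodes (id TEXT PRIMARY KEY)",
  "CREATE INDEX IF NOT EXISTS nodes_id_idx ON nodes (id)",
];
const applied = await runDDLStatements(ddl, fakeExec);
```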

TypeGraph exposes Drizzle adapters through public entrypoints:

  • @nicia-ai/typegraph/sqlite — Generic SQLite adapter (any Drizzle SQLite driver)
  • @nicia-ai/typegraph/sqlite/local — Batteries-included better-sqlite3 wrapper (Node.js only)
  • @nicia-ai/typegraph/sqlite/libsql — Batteries-included libsql wrapper (Node.js, Workers, browser)
  • @nicia-ai/typegraph/postgres — PostgreSQL adapter

Import from the entrypoint matching your database:

import { createSqliteBackend, tables } from "@nicia-ai/typegraph/sqlite";
import { createLocalSqliteBackend } from "@nicia-ai/typegraph/sqlite/local";
import { createLibsqlBackend } from "@nicia-ai/typegraph/sqlite/libsql";
import { createPostgresBackend, tables } from "@nicia-ai/typegraph/postgres";

TypeGraph supports Cloudflare D1 for edge deployments, with some limitations.

Use createStoreWithSchema() to automatically create tables on a fresh D1 database and manage schema versions across deployments:

import { drizzle } from "drizzle-orm/d1";
import { createStoreWithSchema } from "@nicia-ai/typegraph";
import { createSqliteBackend } from "@nicia-ai/typegraph/sqlite";
export default {
  async fetch(request: Request, env: Env) {
    const db = drizzle(env.DB);
    const backend = createSqliteBackend(db);
    const [store] = await createStoreWithSchema(graph, backend);
    // Use store..., then return a Response
    return new Response("OK");
  },
};

If you prefer to manage DDL yourself, use createStore() with manual migrations instead.

Important: D1 does not support transactions. See Limitations for details.

Check what features a backend supports:

const backend = createSqliteBackend(db);
if (backend.capabilities.transactions) {
  await store.transaction(async (tx) => { /* ... */ });
} else {
  // Handle non-transactional execution
}
if (backend.capabilities.vectorSearch) {
  // Vector similarity queries available
}

TypeGraph does not manage database connections. You are responsible for:

  1. Creating connections with appropriate configuration
  2. Connection pooling for production use
  3. Closing connections on shutdown
// You create the connection
const sqlite = new Database("app.db");
const db = drizzle(sqlite);
const backend = createSqliteBackend(db);
const store = createStore(graph, backend);
// You close the connection
process.on("exit", () => {
  sqlite.close();
});

The store.close() method is a no-op—cleanup is your responsibility.
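One way to keep that responsibility manageable is to register cleanup callbacks in one place as resources are opened. This is a generic pattern, not a TypeGraph API; `onShutdown` and `runShutdown` are illustrative names.

```typescript
// Sketch: collect cleanup callbacks, then run them in reverse order on
// shutdown so dependents close before their dependencies.
const cleanups: Array<() => void> = [];

function onShutdown(fn: () => void): void {
  cleanups.push(fn);
}

function runShutdown(): number {
  let ran = 0;
  for (const fn of [...cleanups].reverse()) {
    fn();
    ran += 1;
  }
  return ran;
}

// Stubbed usage: in a real app these would be sqlite.close(), pool.end(), etc.
const closed: string[] = [];
onShutdown(() => closed.push("sqlite"));
onShutdown(() => closed.push("vector-index"));
const ranCount = runShutdown();
```

Wiring `runShutdown` into `process.on("exit", ...)` or a SIGTERM handler then covers every connection the application opened.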

Common configurations for development, testing, and production:

// In-memory for fast tests
const { backend } = createLocalSqliteBackend();
// Or file-based for persistence during development
const { backend } = createLocalSqliteBackend({ path: "./dev.db" });
// Fresh in-memory database per test
beforeEach(() => {
  const { backend } = createLocalSqliteBackend();
  store = createStore(graph, backend);
});
// PostgreSQL with pooling
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,
  // Some managed databases require this; prefer supplying the
  // provider's CA certificate over disabling verification
  ssl: { rejectUnauthorized: false },
});
await pool.query(generatePostgresMigrationSQL());
const db = drizzle(pool);
const backend = createPostgresBackend(db);
const [store] = await createStoreWithSchema(graph, backend);