# Database
DealDome stores everything in a single PostgreSQL 16 database with the pgvector extension enabled. The Go backend connects via pgx — a high-performance PostgreSQL driver with built-in connection pooling — so every query goes through a managed pool without extra middleware.
## Overview
PostgreSQL is the single source of truth for the entire platform: user accounts, Shopify store configs, AI agent memories, conversation history, financial data, and product analytics all live here. The pgvector extension adds native support for vector columns, making it possible to store OpenAI-generated embeddings and run similarity searches directly in SQL.
## Key tables
Here is a rundown of the main tables in the database and what they are responsible for.
| Name | Type | Description |
| --- | --- | --- |
| `users` | table | Login accounts for team members who access the dashboard and API. |
| `api_tokens` | table | Service tokens with scoped permissions for programmatic API access. |
| `settings` | table | Key-value configuration store. Holds API keys, Shopify store JSON configs, feature flags, and other runtime settings. |
| `videos` | table | Knowledge base source material: video metadata (title, URL, duration) used by the Elisa agent. |
| `chunks` | table | Transcript chunks split from videos, each with a text column and a `vector(1536)` embedding column for similarity search. |
| `memories` | table | Agent long-term memories: facts, decisions, and context that Alfred remembers across conversations. Each memory has a `vector(1536)` embedding used for deduplication and retrieval. |
| `conversations` | table | Chat sessions between users and agents, linked to a Telegram chat or API session. |
| `messages` | table | Individual messages within a conversation, including role (user, assistant, tool) and content. |
| `scheduled_tasks` | table | Scheduled reminders and recurring checks. Supports cron expressions for flexible scheduling. |
| `products` | table | Winning products tracked by the team, including cost price for margin calculations. |
| `fixed_costs` | table | Monthly fixed costs (subscriptions, tools, salaries) used in P&L calculations. |
| `daily_financials` | table | Daily profit-and-loss records per Shopify store: revenue, COGS, ad spend, and net profit. |
| `ad_spend_entries` | table | Per-day ad spend tracking across platforms (Meta, TikTok, etc.) linked to stores. |
| `product_performance` | table | Product-level ROAS (Return on Ad Spend) and performance metrics, aggregated per day. |
## pgvector and embeddings
Two tables use vector columns: `chunks` and `memories`. When a new knowledge chunk or memory is created, the backend calls the OpenAI embeddings API to generate a 1536-dimensional vector and stores it alongside the text content.
Similarity search uses cosine distance (the `<=>` operator) to find the most relevant results. This powers two core features:
- Knowledge retrieval — Elisa searches the `chunks` table to find transcript segments that match a user's question.
- Memory deduplication and recall — Alfred searches the `memories` table to load relevant context at the start of each conversation, and to check whether a new memory already exists before inserting it.
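As an illustrative sketch of the deduplication side (the distance threshold mentioned in the comment is an assumption, not a value from the source), the check can be a nearest-neighbor query against `memories`:

```sql
-- $1 is the embedding of the candidate memory.
-- <=> is pgvector's cosine-distance operator: 0 means identical direction.
SELECT id, content, embedding <=> $1 AS distance
FROM memories
ORDER BY embedding <=> $1
LIMIT 1;
-- If the returned distance is below some chosen threshold (e.g. 0.1),
-- treat the candidate as a duplicate and skip the insert.
```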
### How it works
1. Text comes in (a chunk of transcript or a new memory).
2. The backend sends it to OpenAI's `text-embedding-3-small` model.
3. The returned 1536-dimensional float array is stored in a `vector(1536)` column.
4. At query time, the same embedding model converts the search query to a vector.
5. PostgreSQL uses cosine distance to rank results by similarity.
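Put together, the final ranking step over `chunks` might look like this sketch (the `LIMIT 5` cutoff is an illustrative choice):

```sql
-- $1 is the query embedding produced by text-embedding-3-small.
SELECT content, embedding <=> $1 AS distance
FROM chunks
ORDER BY embedding <=> $1
LIMIT 5;
```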
### SQL examples
```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE chunks (
    id         SERIAL PRIMARY KEY,
    video_id   INTEGER REFERENCES videos(id),
    content    TEXT NOT NULL,
    embedding  vector(1536),
    created_at TIMESTAMPTZ DEFAULT now()
);

CREATE INDEX ON chunks
    USING ivfflat (embedding vector_cosine_ops)
    WITH (lists = 100);
```
The `ivfflat` index speeds up approximate nearest-neighbor searches significantly. Without it, PostgreSQL would do a full sequential scan on every query: fine for small tables, but slow once you have thousands of embeddings.
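Because `ivfflat` is approximate, pgvector exposes an `ivfflat.probes` setting that controls how many index lists are scanned per query; raising it trades speed for recall. A quick sketch:

```sql
-- Default is 1; higher values scan more lists and improve recall
-- at the cost of latency.
SET ivfflat.probes = 10;

SELECT content
FROM chunks
ORDER BY embedding <=> $1
LIMIT 5;
```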
## Migrations
Database migrations are managed with versioned SQL files. Each migration file is named with a sequential number and a short description (e.g., `001_create_users.sql`, `002_add_memories.sql`). On startup, the Go backend checks which migrations have already been applied and runs any new ones in order.
### Migration example
```sql
CREATE TABLE scheduled_tasks (
    id          SERIAL PRIMARY KEY,
    name        TEXT NOT NULL,
    description TEXT,
    cron        TEXT,        -- e.g. '0 9 * * *' for daily at 09:00
    next_run_at TIMESTAMPTZ,
    enabled     BOOLEAN DEFAULT true,
    created_at  TIMESTAMPTZ DEFAULT now()
);
```
Migrations run inside a transaction, so if anything fails, the database stays in its previous state. Applied migrations are tracked in a `schema_migrations` table to prevent re-running.
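As a sketch of how a runner could apply one migration transactionally (the exact columns of `schema_migrations` are an assumption; the source only names the table):

```sql
CREATE TABLE IF NOT EXISTS schema_migrations (
    version    INTEGER PRIMARY KEY,
    applied_at TIMESTAMPTZ DEFAULT now()
);

BEGIN;
-- ... statements from 002_add_memories.sql ...
INSERT INTO schema_migrations (version) VALUES (2);
COMMIT;  -- on any error, a ROLLBACK leaves the schema untouched
```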