Supabase provides comprehensive AI and vector capabilities through pgvector, specialized storage buckets, and integration with popular AI frameworks. Store embeddings, perform similarity search, and build AI-powered applications at scale.

What are Vectors?

Vectors are arrays of numbers that represent the “meaning” or “features” of data. In AI applications, vectors (embeddings) capture semantic relationships:
// Text embeddings (1536 dimensions from OpenAI)
const embedding = [0.123, -0.456, 0.789, ...] // 1536 numbers

// Each dimension captures different semantic features
// Similar texts have similar vectors

Vector Similarity

Vectors enable semantic search by measuring similarity:
Query: "cat chases mouse"
Embedding: [0.2, 0.8, -0.1, ...]

Document 1: "kitten hunts rodent"  
Embedding: [0.21, 0.79, -0.09, ...]  → High similarity ✓

Document 2: "weather forecast today"
Embedding: [-0.5, 0.1, 0.9, ...]   → Low similarity ✗
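
To make this concrete, here is a minimal TypeScript sketch of cosine similarity over toy three-dimensional vectors (real embeddings have hundreds or thousands of dimensions; the numbers below are illustrative only):

// Cosine similarity: dot(a, b) / (|a| * |b|), ranging from -1 to 1
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

cosineSimilarity([0.2, 0.8, -0.1], [0.21, 0.79, -0.09]) // ≈ 1.00 (similar)
cosineSimilarity([0.2, 0.8, -0.1], [-0.5, 0.1, 0.9])    // ≈ -0.13 (dissimilar)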

Architecture

Supabase offers multiple ways to work with vectors:

1. pgvector Extension

Store vectors in Postgres tables with full SQL support:
create extension vector;

create table documents (
  id bigint primary key,
  content text,
  embedding vector(1536)
);

2. Vector Buckets

Specialized Storage buckets optimized for vector operations:
// Create a vector bucket
const { data } = await supabase.storage.createBucket('embeddings', {
  public: false,
  bucketType: 'vector'
})

3. Analytics Buckets

Store large-scale vector datasets using Apache Iceberg:
-- Query vectors in analytics buckets via SQL
select * from iceberg.embeddings
where category = 'documents'
limit 1000;

Working with pgvector

Enable the Extension

  1. Go to Database → Extensions
  2. Search for “vector”
  3. Enable the extension

Create a Vector Table

create table documents (
  id bigint generated by default as identity primary key,
  title text not null,
  content text not null,
  embedding vector(1536), -- OpenAI text-embedding-3-small
  created_at timestamptz default now()
);

-- Create an index for fast similarity search
create index on documents 
using hnsw (embedding vector_cosine_ops);

Generating Embeddings

Use popular embedding models:
// Using OpenAI
import OpenAI from 'openai'

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

const response = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'Your text here'
})

const embedding = response.data[0].embedding // [0.123, -0.456, ...]

// Using Transformers.js (local, no API needed)
import { pipeline } from '@huggingface/transformers'

// Note: gte-small produces 384-dimension embeddings,
// so the table column would be vector(384), not vector(1536)
const generateEmbedding = await pipeline(
  'feature-extraction',
  'Supabase/gte-small'
)

const output = await generateEmbedding('Your text here', {
  pooling: 'mean',
  normalize: true
})

const embedding = Array.from(output.data)

Storing Vectors

// Store document with embedding
const { data, error } = await supabase
  .from('documents')
  .insert({
    title: 'Introduction to AI',
    content: 'Artificial intelligence is...',
    embedding: embedding
  })

Similarity Search

Find similar documents using vector operators:
-- Create a similarity search function
create or replace function match_documents(
  query_embedding vector(1536),
  match_threshold float,
  match_count int
)
returns table (
  id bigint,
  title text,
  content text,
  similarity float
)
language sql stable
as $$
  select
    documents.id,
    documents.title,
    documents.content,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where 1 - (documents.embedding <=> query_embedding) > match_threshold
  order by documents.embedding <=> query_embedding
  limit match_count;
$$;
Call from your application:
// Generate an embedding for the search query using the same
// model (and dimension count) as the stored document embeddings
const response = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'machine learning basics'
})
const queryEmbedding = response.data[0].embedding

// Search for similar documents
const { data: documents } = await supabase
  .rpc('match_documents', {
    query_embedding: queryEmbedding,
    match_threshold: 0.7,
    match_count: 10
  })

console.log(documents)
// [
//   { id: 42, title: 'ML Fundamentals', similarity: 0.92 },
//   { id: 17, title: 'Intro to Neural Networks', similarity: 0.85 },
//   ...
// ]

Distance Metrics

pgvector supports several distance metrics; the three most common are:

Cosine Distance (most common)

Measures angle between vectors (0 = identical, 2 = opposite):
select embedding <=> query_embedding as distance
from documents
order by embedding <=> query_embedding
limit 5;

Euclidean Distance

Measures straight-line distance:
select embedding <-> query_embedding as distance
from documents
order by embedding <-> query_embedding;

Inner Product

Measures dot product (use with normalized vectors):
select (embedding <#> query_embedding) * -1 as similarity
from documents
order by embedding <#> query_embedding
limit 5;

Indexing Strategies

HNSW (Hierarchical Navigable Small World)

Best for most use cases - fast and accurate:
-- Create HNSW index
create index on documents
using hnsw (embedding vector_cosine_ops)
with (m = 16, ef_construction = 64);

-- Parameters:
-- m: max connections per layer (higher = better recall, more memory)
-- ef_construction: search quality during build (higher = better index)
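-- At query time, hnsw.ef_search controls the speed/recall
-- trade-off (pgvector's default is 40)
set hnsw.ef_search = 100;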

IVFFlat (Inverted File Flat)

Faster to build and lighter on memory than HNSW, at some cost in query speed and recall:
-- Create IVFFlat index
create index on documents
using ivfflat (embedding vector_cosine_ops)
with (lists = 100);

-- Adjust lists based on row count:
-- lists = rows / 1000 for < 1M rows
-- lists = sqrt(rows) for >= 1M rows

-- Set probes at query time
set ivfflat.probes = 10;
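
The lists rule of thumb can be expressed as a small helper (illustrative only, not part of any API):

// Recommended ivfflat lists count per the guidance above
function recommendedLists(rows: number): number {
  return rows < 1_000_000
    ? Math.max(1, Math.round(rows / 1000))
    : Math.round(Math.sqrt(rows))
}

recommendedLists(50_000)    // 50
recommendedLists(4_000_000) // 2000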

No Index (Exact Search)

Query without an index for exact results - accurate, but slow on large tables:
-- Just query directly (useful for small datasets)
select * from documents
order by embedding <=> query_embedding
limit 10;

RAG (Retrieval Augmented Generation)

Build AI applications that combine search with LLMs:
async function answerQuestion(question: string) {
  // 1. Generate embedding for the question
  const questionEmbedding = await generateEmbedding(question)
  
  // 2. Find relevant documents
  const { data: documents } = await supabase.rpc('match_documents', {
    query_embedding: questionEmbedding,
    match_threshold: 0.7,
    match_count: 5
  })
  
  // 3. Build context from retrieved documents
  const context = documents
    .map(doc => doc.content)
    .join('\n\n')
  
  // 4. Generate answer using LLM
  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      {
        role: 'system',
        content: 'Answer questions based only on the provided context.'
      },
      {
        role: 'user',
        content: `Context:\n${context}\n\nQuestion: ${question}`
      }
    ]
  })
  
  return completion.choices[0].message.content
}

// Usage
const answer = await answerQuestion('What is machine learning?')
console.log(answer)

Hybrid Search

Combine vector similarity with full-text keyword search:
create or replace function hybrid_search(
  query_embedding vector(1536),
  query_text text,
  match_count int
)
returns table (
  id bigint,
  title text,
  content text,
  similarity float
)
language sql stable
as $$
  select
    documents.id,
    documents.title,
    documents.content,
    -- Combine vector similarity with text search
    (1 - (documents.embedding <=> query_embedding)) * 0.7 +
    ts_rank(to_tsvector('english', documents.content), plainto_tsquery('english', query_text)) * 0.3 as similarity
  from documents
  where to_tsvector('english', documents.content) @@ plainto_tsquery('english', query_text)
  order by similarity desc
  limit match_count;
$$;
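
Calling it from supabase-js is a one-liner (a sketch, assuming the hybrid_search function above exists and queryEmbedding comes from the same embedding model as the stored documents):

const { data: results, error } = await supabase.rpc('hybrid_search', {
  query_embedding: queryEmbedding,
  query_text: 'postgres performance tuning',
  match_count: 10
})
// results are ordered by the blended vector + keyword score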

Metadata Filtering

Filter by metadata before similarity search:
create table documents (
  id bigint primary key,
  content text,
  embedding vector(1536),
  category text,
  author text,
  published_at timestamptz
);

create or replace function search_documents(
  query_embedding vector(1536),
  filter_category text,
  match_count int
)
returns table (
  id bigint,
  content text,
  similarity float
)
language sql stable
as $$
  select
    id,
    content,
    1 - (embedding <=> query_embedding) as similarity
  from documents
  where category = filter_category  -- Filter first
  order by embedding <=> query_embedding
  limit match_count;
$$;
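
And from the client (a sketch, assuming the search_documents function above):

const { data: docs } = await supabase.rpc('search_documents', {
  query_embedding: queryEmbedding,
  filter_category: 'AI',
  match_count: 10
})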

Vector Buckets

For specialized vector storage:
// Create a vector bucket
const { data: bucket, error: bucketError } = await supabase.storage.createBucket('embeddings', {
  public: false,
  bucketType: 'vector'
})

// Store vectors with metadata
const { data: uploaded, error: uploadError } = await supabase.storage
  .from('embeddings')
  .upload('documents/doc-1.vector', {
    vector: embedding,
    metadata: {
      title: 'Document Title',
      category: 'AI'
    }
  })

// Query similar vectors
const { data: matches, error: searchError } = await supabase.storage
  .from('embeddings')
  .search({
    vector: queryEmbedding,
    limit: 10,
    distance: 'cosine',
    filter: { category: 'AI' }
  })

Performance Optimization

Choose the Right Dimensions

// Fewer dimensions = faster, less storage
// OpenAI text-embedding-3-small: 1536 dimensions
// OpenAI text-embedding-3-large: 3072 dimensions

// You can reduce dimensions:
const response = await openai.embeddings.create({
  model: 'text-embedding-3-large',
  input: 'Text',
  dimensions: 256 // Reduce from 3072 to 256
})

Batch Operations

// Generate embeddings in batches
const texts = ['text1', 'text2', 'text3', ...]

for (let i = 0; i < texts.length; i += 100) {
  const batch = texts.slice(i, i + 100)
  
  const embeddings = await Promise.all(
    batch.map(text => generateEmbedding(text))
  )
  
  await supabase.from('documents').insert(
    batch.map((text, idx) => ({
      content: text,
      embedding: embeddings[idx]
    }))
  )
}
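
Since the OpenAI embeddings endpoint accepts an array of inputs, each batch can also be a single API call instead of one request per text (a sketch reusing the openai client from earlier):

for (let i = 0; i < texts.length; i += 100) {
  const batch = texts.slice(i, i + 100)

  // One request per batch; results come back in input order
  const response = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: batch
  })

  await supabase.from('documents').insert(
    batch.map((text, idx) => ({
      content: text,
      embedding: response.data[idx].embedding
    }))
  )
}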

Partial Indexes

-- Index only a subset of rows (partial index)
-- Note: index predicates must use immutable expressions,
-- so use a literal cutoff (or a flag column) rather than now()
create index on documents
using hnsw (embedding vector_cosine_ops)
where created_at > '2025-01-01';

AI Frameworks Integration

LangChain

import { SupabaseVectorStore } from '@langchain/community/vectorstores/supabase'
import { OpenAIEmbeddings } from '@langchain/openai'

const vectorStore = await SupabaseVectorStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  {
    client: supabase,
    tableName: 'documents',
    queryName: 'match_documents'
  }
)

// Add documents
await vectorStore.addDocuments([
  { pageContent: 'Text 1', metadata: { source: 'doc1' } },
  { pageContent: 'Text 2', metadata: { source: 'doc2' } }
])

// Search
const results = await vectorStore.similaritySearch('query', 5)

LlamaIndex

import os

from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.supabase import SupabaseVectorStore

vector_store = SupabaseVectorStore(
    postgres_connection_string=os.environ["DATABASE_URL"],
    collection_name="documents"
)

index = VectorStoreIndex.from_vector_store(vector_store)

# Query
query_engine = index.as_query_engine()
response = query_engine.query("What is AI?")

Best Practices

Normalize Vectors

Normalize embeddings to unit length when using inner product; cosine distance ignores magnitude, but normalized vectors let you use the faster inner product and get the same ranking.
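
A minimal L2-normalization sketch:

// Scale a vector to unit length (L2 norm = 1)
function normalize(v: number[]): number[] {
  const norm = Math.sqrt(v.reduce((sum, x) => sum + x * x, 0))
  return v.map(x => x / norm)
}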

Index Strategy

Use HNSW for most cases; consider IVFFlat when index build time or memory is the constraint.

Batch Processing

Generate and store embeddings in batches for better performance.

Monitor Quality

Track similarity thresholds and adjust based on results.

Next Steps

Vector Columns

Learn about storing vectors in Postgres

Vector Indexes

Optimize similarity search with indexes

Python Clients

Use Python for AI applications

Examples

Explore AI example applications