
Embeddings are the foundation of vector search, allowing us to represent meaning-rich content like documents or queries as numerical vectors. But to use them effectively, it’s essential to understand what’s actually being embedded—whether that’s individual words, full sentences, or larger chunks of text.
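To make that concrete, here is a minimal sketch of embedding content at different granularities. It assumes the sentence-transformers library and the all-MiniLM-L6-v2 model, which are illustrative choices rather than anything this article prescribes; any embedding model behaves the same way.

```python
# Sketch: embedding a word, a sentence, and a larger chunk of text.
# Assumes the sentence-transformers package and the all-MiniLM-L6-v2 model
# (illustrative choices; swap in whatever model you actually use).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

texts = [
    "database",                                         # a single word
    "PostgreSQL stores relational data.",               # a full sentence
    "pgvector adds a vector column type to PostgreSQL, "
    "letting you store embeddings alongside your rows "
    "and query them with distance operators.",          # a larger chunk
]

embeddings = model.encode(texts)

# Every input, regardless of length, becomes a fixed-size numerical vector.
for text, emb in zip(texts, embeddings):
    print(f"{len(emb)}-dimensional vector for: {text[:40]!r}")
```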

As vector search becomes a foundational feature in modern applications—from semantic search and recommendation engines to AI-driven insights—developers are increasingly adopting PostgreSQL with the pgvector extension. However, one concept often creates confusion: the difference between similarity and distance.
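As a quick illustration of that distinction, the sketch below computes cosine similarity and cosine distance for two small vectors using numpy (used here purely for illustration). pgvector's `<=>` operator returns the cosine distance, which is simply 1 minus the cosine similarity, so a higher similarity corresponds to a lower distance.

```python
# Sketch: cosine similarity vs. cosine distance (numpy used for illustration).
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.5])

# Cosine similarity: 1.0 means identical direction, 0.0 means orthogonal.
cosine_similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Cosine distance, as returned by pgvector's <=> operator: lower means closer.
cosine_distance = 1.0 - cosine_similarity

print(f"similarity: {cosine_similarity:.4f}")  # high, the vectors point the same way
print(f"distance:   {cosine_distance:.4f}")    # correspondingly low
```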