AI Factory terminology

AI Factory terminology defines key concepts and technologies used across EDB Postgres® AI (EDB PG AI), Hybrid Manager AI Factory, and related components.

This page complements the conceptual explanations found elsewhere in the AI Factory documentation.

Understanding these terms helps you build trusted, governed, and scalable AI solutions with EDB PG AI.


Core AI concepts

Machine learning (ML)

Machine learning uses algorithms that improve through data exposure to perform tasks without explicit programming. It powers predictive analytics, automation, and decision making in AI Factory through Model Serving, Pipelines, and Assistants.


Deep learning (DL)

Deep learning leverages multi-layer neural networks to model complex data patterns. It supports advanced applications such as language understanding and image recognition within AI Factory, enabled through Model Serving with GPU acceleration.


Natural language processing (NLP)

Natural language processing (NLP) enables computers to understand and generate human language. It underpins semantic search and conversational AI in AI Factory via Knowledge Bases, Retrievers, and Assistants.


Large language models (LLMs)

LLMs are large deep learning models trained on massive text corpora. In AI Factory, they drive Assistants, Retrieval-Augmented Generation (RAG), and various model pipelines, deployed using Model Serving.


Embeddings

Embeddings are vector representations of data that capture semantic meaning. AI Factory Pipelines create embeddings used in Knowledge Bases and served through the Vector Engine to enable semantic search and RAG.

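For intuition, cosine similarity between two embedding vectors measures how semantically close their source texts are. A minimal Python sketch (the toy four-dimensional vectors are illustrative; real embedding models produce hundreds or thousands of dimensions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 = same direction (semantically close), ~0.0 = unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings; a production model would emit, say, 384 or 1536 dimensions.
query_vec = np.array([0.12, 0.87, 0.33, 0.05])
doc_vec = np.array([0.10, 0.80, 0.40, 0.02])

print(f"similarity = {cosine_similarity(query_vec, doc_vec):.3f}")
```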

Vector databases

Vector databases store embeddings and enable fast similarity search. AI Factory provides this through the Vector Engine, built on the open-source pgvector extension, integrated directly with Postgres.
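
As a minimal sketch of the storage side, this is how a table of embeddings might be declared with pgvector. The connection string, table name, and 1536-dimension size are illustrative assumptions, not AI Factory defaults:

```python
import psycopg  # psycopg 3; the connection string below is a placeholder

with psycopg.connect("postgresql://localhost/appdb") as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS documents (
            id        bigserial PRIMARY KEY,
            content   text,
            embedding vector(1536)  -- must match your embedding model's output size
        );
    """)
    # An HNSW index (pgvector 0.5+) speeds up approximate nearest-neighbor search.
    conn.execute("""
        CREATE INDEX IF NOT EXISTS documents_embedding_idx
        ON documents USING hnsw (embedding vector_cosine_ops);
    """)
```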

Retrieval-augmented generation (RAG)

RAG combines vector search with LLM generation to ground model responses in relevant documents. In AI Factory, it is implemented through Knowledge Bases, Retrievers, and Model Serving.

See: Intro to RAG
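
The flow can be sketched in a few lines. Here `embed`, `search`, and `generate` are placeholder callables standing in for your embedding model, retriever, and LLM; they are not AI Factory APIs:

```python
def answer_with_rag(question: str, embed, search, generate) -> str:
    """Ground an LLM answer in retrieved documents (schematic only).

    embed(text)        -> embedding vector for the question
    search(vector, k)  -> the k most similar passages (e.g. via pgvector)
    generate(prompt)   -> LLM completion
    """
    passages = search(embed(question), k=5)
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```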


AI for databases

Intelligent database management

Intelligent database management applies AI to optimize Postgres performance and operations. AI Factory extends this with intelligent retrieval and search using Vector Engine and Pipelines.

In-database machine learning (In-DB ML)

In-DB ML enables running vector search and ML pipelines inside Postgres, reducing data movement and latency. AI Factory implements this through Vector Engine and Pipelines.

Vector search in Postgres

Vector search allows you to query embeddings directly within Postgres. AI Factory uses pgvector to power this capability through the Vector Engine, supporting Knowledge Bases and RAG.
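
A minimal similarity query using pgvector's cosine-distance operator `<=>`, where a smaller distance means a closer semantic match. The connection string and table layout are assumptions matching the storage sketch above:

```python
import psycopg

# In practice this comes from your embedding model; its dimension must
# match the table's vector column.
query_embedding = "[0.12, 0.87, 0.33, 0.05]"

with psycopg.connect("postgresql://localhost/appdb") as conn:
    rows = conn.execute(
        """
        SELECT id, content, embedding <=> %s::vector AS distance
        FROM documents
        ORDER BY distance
        LIMIT 5
        """,
        (query_embedding,),
    ).fetchall()

for doc_id, content, distance in rows:
    print(f"{doc_id}: {distance:.4f}  {content[:60]}")
```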

AIDB

AIDB (AI-in-Database) brings vector search, embedding pipelines, and future ML capabilities to HCP-managed Postgres clusters. It is the foundation for AI Factory Pipelines and Knowledge Bases.

Natural language interfaces to databases

Natural language interfaces enable users to query Postgres using natural language rather than SQL. AI Factory Assistants leverage this pattern through Gen AI Builder, combining LLMs with structured and unstructured retrieval.
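
In its simplest form, the pattern prompts an LLM with the table schema and the user's question, then asks for SQL back. A hedged sketch; the `generate` callable and the schema hint are hypothetical placeholders:

```python
SCHEMA_HINT = (
    "Table orders(id bigint, customer text, total numeric, ordered_at timestamptz)"
)

def question_to_sql(question: str, generate) -> str:
    """Translate a natural-language question into SQL via an LLM (schematic).

    generate(prompt) -> str stands in for any LLM endpoint. Generated SQL
    should always be validated and executed read-only, never run blindly.
    """
    prompt = (
        "Translate the question into a single PostgreSQL SELECT statement.\n"
        f"Schema:\n{SCHEMA_HINT}\n"
        f"Question: {question}\nSQL:"
    )
    return generate(prompt).strip()
```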


AI infrastructure

AI-accelerated hardware

AI Factory uses GPU-accelerated Kubernetes clusters to serve deep learning models and handle high-throughput inference. Model workloads in Model Serving run on GPU-enabled nodes.


KServe

KServe is the open-source Kubernetes-native framework AI Factory uses to deploy and manage ML models. It provides InferenceServices, autoscaling, and observability for AI Factory Model Serving.

KServe documentation
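
Once an InferenceService is running, clients call its predict endpoint over HTTP. This sketch uses KServe's V1 data-plane protocol; the host and model name are hypothetical:

```python
import requests

# Hypothetical endpoint; the real host comes from your InferenceService status.
URL = "http://sentiment.models.example.com/v1/models/sentiment:predict"

payload = {"instances": [{"text": "EDB Postgres AI makes vector search easy."}]}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # V1 protocol returns {"predictions": [...]}
```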

Model Serving

Model Serving deploys AI models as production-grade inference services, using KServe under the hood. It supports LLMs, embedding models, vision models, and custom AI workloads.

See: Model Serving, Model Serving explained


Gen AI Builder

Gen AI Builder

Gen AI Builder is the application layer of AI Factory for building AI-powered apps and agents. It integrates Assistants, Knowledge Bases, Retrievers, and Model Serving into a single application-building experience.

It is deployed on Hybrid Manager AI Factory, enabling full-stack Sovereign AI.


Hybrid Manager AI Factory

Hybrid Manager

Hybrid Manager provides the Kubernetes-based control plane for AI Factory workloads, managing GPU resources, Pipelines, Model Serving, and Knowledge Bases. It delivers end-to-end governance, security, and observability.

See: Hybrid Manager AI Factory

Pipelines

Pipelines automate the creation of embeddings and Knowledge Bases from data sources in your control, ensuring transparency and auditability.

See: Pipelines overview

Knowledge Bases

Knowledge Bases index content for semantic search and RAG. They support multi-source, multi-modal search and power AI Factory Assistants.

See: Knowledge Bases explained

Model Serving

Model Serving deploys models using Kubernetes-native KServe and integrates with the Model Library. It powers Assistants, Knowledge Bases, and custom AI applications.

See: Model Serving


Additional concepts

Image and Model Library

The Image and Model Library in Hybrid Manager manages container images for both Postgres and AI model deployments. The Model Library provides an AI-focused view, supporting Model Serving and governed image workflows.

See: Model Image Library explained

NVIDIA NIM

NVIDIA NIM containers are optimized AI model images from NVIDIA for high-performance inference. AI Factory uses NIM containers for LLMs, embeddings, and vision models, served through Model Serving.

See: Deploy NIM containers


Summary

Understanding these terms will help you build with Pipelines, Knowledge Bases, Vector Engine, Model Serving, and Gen AI Builder.

Together, these capabilities enable you to deliver trusted, governed, Sovereign AI — with your data, in your databases, under your control.


