Glossary and key terms
This glossary defines the key terms and architectural components used throughout the Sovereign AI and Data Factory documentation.
Administration Node
The central control node of the system. Hosts the Hybrid Manager interface, authentication services, and orchestration logic.
AI Node
A GPU-enabled server used for model inference, vector processing, and agentic execution. Part of the optional AI workload configuration.
Agentic Workflow
A multi-step automation pattern where AI agents perform tasks independently using internal tools, memory, and feedback loops. Often built on top of LLMs and vector data.
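The agent loop described above can be sketched in a few lines. This is a toy illustration only: the `llm` and `tools` interfaces are hypothetical stand-ins, not part of the Sovereign AI product, and a real deployment would call a model served on the GPU nodes.

```python
def run_agent(goal, tools, llm, max_steps=5):
    """Toy agent loop: the model picks a tool, observes the result, repeats."""
    memory = []
    for _ in range(max_steps):
        action = llm(goal, memory)          # model decides the next step from goal + memory
        if action["tool"] == "finish":
            return action["answer"]
        result = tools[action["tool"]](action["input"])
        memory.append((action, result))     # feedback loop: observation feeds the next decision
    return None

# Scripted stand-in for an LLM: look up a fact, then finish with it.
def fake_llm(goal, memory):
    if not memory:
        return {"tool": "lookup", "input": "node count"}
    return {"tool": "finish", "answer": memory[-1][1]}

tools = {"lookup": lambda q: "3 control-plane nodes"}
answer = run_agent("How many control-plane nodes?", tools, fake_llm)
```

The loop terminates either when the model emits a `finish` action or when the step budget runs out, which is the usual guard against runaway agents.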
Control Plane
The set of three Hybrid Manager nodes responsible for cluster scheduling, monitoring, lifecycle operations, and access control. Operates independently of data workloads.
Compute Node
A general-purpose high-performance node used for running Postgres clusters, analytics, or vector pipelines. Can be scaled out as needed.
Embedding
The process of converting structured or unstructured data into vector representations suitable for AI workflows such as retrieval and classification.
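The documentation does not prescribe an embedding model, so as a minimal illustration the sketch below uses a toy bag-of-words "embedding" and compares vectors with cosine similarity, the usual metric for embeddings. A real deployment would generate embeddings with a model served via KServe or NIM.

```python
import math
from collections import Counter

def embed(text, vocab):
    """Toy embedding: count vocabulary words in the text (real systems use a trained model)."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in vocab]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vocab = ["postgres", "vector", "gpu", "inference"]
v1 = embed("postgres stores vector data", vocab)
v2 = embed("vector search in postgres", vocab)
v3 = embed("gpu inference node", vocab)
# Texts about the same topic land closer together in vector space.
assert cosine(v1, v2) > cosine(v1, v3)
```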
EDB Hybrid Manager
EDB’s management layer for orchestrating database and AI workloads in Kubernetes environments. Used to deploy clusters, monitor systems, and manage lifecycle operations.
Inference
The act of generating output from a machine learning model. In this system, inference is performed on dedicated GPU nodes using KServe and NVIDIA NIM containers.
KServe
An open-source model serving framework used to run inference workloads. Integrated into Hybrid Manager and GPU nodes for serving LLMs and embedding models.
Lifecycle Services
Ongoing support and management for both hardware and software, including security patches, OS updates, Postgres upgrades, and firmware management.
Postgres Distributed (PGD)
EDB’s multi-node, high-availability Postgres solution. Optional for customers who require globally distributed or synchronous multi-region clusters.
RAG (Retrieval-Augmented Generation)
A workload pattern where relevant data (often stored in Postgres) is retrieved, typically by vector similarity search, and passed to an LLM as context so it can generate grounded natural language responses. Requires both embedding and inference capabilities.
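The retrieve-then-generate flow can be sketched as below. This is a simplified, assumption-laden example: the corpus, its hand-written vectors, and the prompt format are all hypothetical, and the final step would hand the prompt to an LLM on the inference nodes rather than stop at a string.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, corpus, k=2):
    """Rank stored (text, vector) pairs by similarity to the query embedding."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, context_docs):
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

corpus = [
    ("PGD provides multi-node high availability.", [1.0, 0.0]),
    ("KServe serves models on GPU nodes.", [0.0, 1.0]),
]
query_vec = [0.9, 0.1]  # pretend embedding of the question
docs = retrieve(query_vec, corpus, k=1)
prompt = build_prompt("What provides high availability?", docs)
```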
Sovereign AI
An AI deployment model where all data, models, and execution environments remain physically and logically within a customer-controlled environment, fully air-gapped if required.
Vector Store
A database or index that stores embedding vectors and supports similarity search. In this system, Postgres with the pgvector extension provides local vector storage.
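As a minimal sketch of the interface a vector store exposes, the in-memory class below supports inserts and nearest-neighbor search by cosine distance. It is a stand-in for illustration only; in this system that role is filled by Postgres with pgvector, not application code.

```python
import math

class VectorStore:
    """Minimal in-memory stand-in for a vector store."""
    def __init__(self):
        self.items = []  # list of (key, vector) pairs

    def add(self, key, vector):
        self.items.append((key, vector))

    def search(self, query, k=3):
        """Return the k keys whose vectors are nearest to the query (cosine distance)."""
        def dist(v):
            dot = sum(x * y for x, y in zip(query, v))
            nq = math.sqrt(sum(x * x for x in query))
            nv = math.sqrt(sum(x * x for x in v))
            return 1.0 - (dot / (nq * nv) if nq and nv else 0.0)
        return [key for key, v in sorted(self.items, key=lambda kv: dist(kv[1]))[:k]]

store = VectorStore()
store.add("doc-a", [1.0, 0.0, 0.0])
store.add("doc-b", [0.0, 1.0, 0.0])
store.add("doc-c", [0.9, 0.1, 0.0])
nearest = store.search([1.0, 0.0, 0.0], k=2)
```

A production store replaces the linear scan with an approximate index (pgvector offers HNSW and IVFFlat) so search stays fast as the corpus grows.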