Using models in Gen AI Builder Innovation Release
This documentation covers the current Innovation Release of EDB Postgres AI.
How models are consumed
In Hybrid Manager, Gen AI Builder lets you build assistants that use model endpoints served from the Model Library.
- When creating or editing an Assistant, you select which model to use (see Create an Assistant).
- Models available here are governed by Hybrid Manager: pulled from the Model Library, deployed to your project, and exposed via internal endpoints.
- This ensures that model calls stay within your environment — no external API calls are made by default.
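Because the selected model is reached through an internal endpoint, an assistant's model call is just an HTTP request inside your cluster. As a minimal sketch, assuming the served model exposes an OpenAI-compatible chat-completions interface (common for Model Library deployments, but your endpoint shape may differ), the request payload could be assembled like this:

```python
# Hypothetical internal endpoint for illustration; the real URL is resolved
# by Hybrid Manager service discovery and never leaves your environment.
INTERNAL_ENDPOINT = "http://model-serving.svc.cluster.local/v1/chat/completions"

def build_chat_request(model: str, user_message: str, system_prompt: str = "") -> dict:
    """Assemble an OpenAI-style chat-completions payload for an internal model endpoint."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages}

# Example: the assistant would POST this payload to INTERNAL_ENDPOINT.
payload = build_chat_request("my-llm", "Summarize our return policy.")
```

The point of the sketch is that nothing in the request targets an external API: the endpoint, the model name, and any credentials are all supplied by the platform.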
Knowledge Bases and pipelines
Knowledge Bases in Gen AI Builder can be populated with embeddings generated by Pipelines:
- Pipelines ingest and prepare documents into vector indexes (see Vector Engine concepts).
- Knowledge Bases reference these pipelines, making them queryable for Retrieval-Augmented Generation (RAG).
- Assistants then combine model responses with Knowledge Base retrievals to ground outputs in your organization’s data.
See: Knowledge Bases (hub) and Pipelines (hub).
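The retrieval step described above can be sketched with a toy in-memory index. This is not the Gen AI Builder implementation, just an illustration of the RAG pattern: a Pipeline produces embeddings for document chunks, a Knowledge Base makes them queryable by similarity, and the assistant prepends the best matches to the model prompt.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy vector index of (chunk_text, embedding) pairs. In Gen AI Builder,
# Pipelines produce these embeddings and store them in a Knowledge Base.
index = [
    ("Refunds are issued within 30 days.", [0.9, 0.1, 0.0]),
    ("Our office is closed on public holidays.", [0.1, 0.9, 0.0]),
]

def retrieve(query_embedding, k=1):
    """Return the top-k chunks most similar to the query embedding."""
    ranked = sorted(index, key=lambda item: cosine(query_embedding, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def grounded_prompt(question, query_embedding):
    """Combine retrieved context with the user question (the RAG pattern)."""
    context = "\n".join(retrieve(query_embedding))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

In production the index lives in a vector store managed by the platform and the query embedding comes from an embedding model, but the grounding logic follows this shape.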
Environment and service discovery
When Gen AI Builder runs inside Hybrid Manager:
- Service discovery and endpoints are automatically managed by the platform.
- You do not need to configure external URLs; models appear directly in the Assistant creation UI.
- Any required environment variables (for service routing or authentication) are injected by HM.
This means you focus on building assistants — Hybrid Manager takes care of wiring models, data, and observability together.
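Because the platform injects routing and authentication variables, application code can simply read them from the environment. A minimal sketch, using a hypothetical variable name (`GENAI_MODEL_ENDPOINT` is an assumption for illustration, not a documented Hybrid Manager variable):

```python
import os

def model_endpoint(default: str = "http://localhost:8000") -> str:
    """Resolve the model endpoint from a platform-injected environment
    variable, falling back to a local default for development.

    GENAI_MODEL_ENDPOINT is a hypothetical name; inside Hybrid Manager the
    actual variables are injected for you and you normally never set them.
    """
    return os.environ.get("GENAI_MODEL_ENDPOINT", default)
```

The same lookup-with-fallback pattern applies to any injected credential or routing value: code stays identical across environments while the platform supplies the real settings.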
Common tasks
Use the Gen AI how-tos from the hub to start building.
Key takeaway
In Hybrid Manager, models and data are co-located:
- Models: deployed from the Model Library into your HM project.
- Data: ingested through pipelines, stored in Knowledge Bases.
- Assistants: combine both, with no traffic leaving your cluster.