Model capabilities in Hybrid Manager
Hybrid Manager supports Model Serving and the Model Library, both delivered through the AI Factory workload.
Models are deployed as scalable KServe InferenceServices on Hybrid Manager's Kubernetes infrastructure.
The Model Library provides discovery and management of supported models and their container images.
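To make the deployment model concrete, the sketch below creates a minimal InferenceService with the open-source KServe Python SDK. The namespace, service name, model format, and storage URI are placeholder values for illustration, not Hybrid Manager defaults; the same resource can equally be declared as YAML and applied with kubectl.

```python
# Illustrative sketch: deploying a model as a KServe InferenceService with the
# KServe Python SDK. All names below are placeholders.
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1ModelSpec,
    V1beta1ModelFormat,
)
from kubernetes.client import V1ObjectMeta

isvc = V1beta1InferenceService(
    api_version="serving.kserve.io/v1beta1",
    kind="InferenceService",
    metadata=V1ObjectMeta(name="sklearn-demo", namespace="models"),  # placeholder names
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(name="sklearn"),
                # Placeholder location; in practice this would point at a model
                # image or artifact surfaced through the Model Library.
                storage_uri="gs://example-bucket/models/sklearn-demo",
            )
        )
    ),
)

client = KServeClient()   # uses the active kubeconfig context
client.create(isvc)       # submits the InferenceService to the cluster
client.wait_isvc_ready("sklearn-demo", namespace="models")  # blocks until Ready
```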
Key components
- Model Serving — deploy models as network-accessible inference services (a request example follows this list).
- Model Library — discover and manage AI models and container images.
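Once deployed, an inference service is reachable over HTTP. The sketch below sends a prediction request using KServe's v1 inference protocol; the hostname and model name are placeholders, and the real URL is published in the InferenceService's status.

```python
# Illustrative sketch: calling a deployed inference service over HTTP using
# KServe's v1 ("predict") protocol. Host and model name are placeholders.
import requests

base_url = "https://sklearn-demo.models.example.com"  # placeholder external host
payload = {"instances": [[6.8, 2.8, 4.8, 1.4]]}       # v1 protocol request body

resp = requests.post(f"{base_url}/v1/models/sklearn-demo:predict", json=payload)
resp.raise_for_status()
print(resp.json())  # e.g. {"predictions": [...]}
```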
Where to learn more
- Model Serving in Hybrid Manager
- Model Library in Asset Library in Hybrid Manager
- Full feature details: Model Serving Hub, Model Library Hub
GPUs
Understand the role of GPUs in Model Serving with AI Factory, how Hybrid Manager uses them, and how to prepare GPU resources.
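As a rough sketch of how a GPU is attached to a model, the example below requests one GPU on the predictor via the standard `nvidia.com/gpu` resource name exposed by NVIDIA's Kubernetes device plugin. The model name, format, storage URI, and resource sizes are placeholders and depend on the model being served.

```python
# Illustrative sketch: requesting a GPU for a KServe predictor, assuming the
# cluster exposes the standard "nvidia.com/gpu" resource. Names are placeholders.
from kserve import (
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1ModelSpec,
    V1beta1ModelFormat,
)
from kubernetes.client import V1ObjectMeta, V1ResourceRequirements

gpu_isvc = V1beta1InferenceService(
    api_version="serving.kserve.io/v1beta1",
    kind="InferenceService",
    metadata=V1ObjectMeta(name="llm-demo", namespace="models"),  # placeholders
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(name="huggingface"),  # placeholder
                storage_uri="s3://example-bucket/models/llm-demo",    # placeholder
                resources=V1ResourceRequirements(
                    limits={"cpu": "4", "memory": "16Gi", "nvidia.com/gpu": "1"},
                    requests={"cpu": "2", "memory": "8Gi", "nvidia.com/gpu": "1"},
                ),
            )
        )
    ),
)
```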
Asset Library
How the Asset Library works within Hybrid Manager and how to manage model images and AI assets.
Model Serving
How Model Serving works within Hybrid Manager and key deployment considerations.