AI Factory How-To Guides (Hybrid Manager)
Use these How-To Guides to deploy and operate AI Factory capabilities inside your Hybrid Manager (HCP) environment.
Hybrid Manager runs AI Factory as an integrated AI workload inside your HCP Kubernetes project. While many core concepts are shared with the AI Factory Hub, there are important Hybrid Manager-specific patterns and steps to follow when:
- Setting up Model Serving infrastructure
- Deploying and verifying AI models (InferenceServices); see the deployment sketch after this list
- Managing GPU resources for model serving
- Using AI models in Gen AI applications and database extensions
- Managing project- and user-scope permissions for AI Factory
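
As a sketch of what the Hybrid Manager-specific flow looks like, the example below creates a KServe InferenceService with a GPU request through the Kubernetes Python client. This is a minimal illustration under stated assumptions, not the exact manifest these guides produce: the namespace (`my-hcp-project`), service name, model format, and storage URI are placeholders you would replace with values from your own HCP project.

```python
# Minimal sketch: create a KServe InferenceService with a GPU request
# via the Kubernetes Python client. All names below (namespace, service
# name, model format, storage URI) are illustrative placeholders, not
# values prescribed by Hybrid Manager.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "example-model", "namespace": "my-hcp-project"},
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "huggingface"},  # assumption: a Hugging Face-format model
                "storageUri": "oci://registry.example.com/ai-models/example-model:v1",  # placeholder
                # Requesting the extended resource via limits schedules the
                # predictor pod onto a GPU node.
                "resources": {"limits": {"nvidia.com/gpu": "1"}},
            }
        }
    },
}

api.create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="my-hcp-project",
    plural="inferenceservices",
    body=inference_service,
)
```

The same object can be written as YAML and applied with kubectl; Python is used here only so that all sketches on this page share one language.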
Topics
Gen AI
Model Serving
Model Library
- Integrate private container registry
- Define repository rules
- Manage repository metadata
- Deploy AI models from Model Library
Learn more
For conceptual background, see Learn AI Factory in Hybrid Manager.
Setup GPU resources
How to provision and configure GPU resources in Hybrid Manager to support Model Serving.
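
As a quick check that provisioning worked, here is a hedged sketch that lists each node's allocatable NVIDIA GPU count through the Kubernetes Python client. It assumes the standard `nvidia.com/gpu` extended-resource name exposed by the NVIDIA device plugin; if your GPU operator registers a different resource name, adjust accordingly.

```python
# Minimal sketch: confirm GPU capacity is visible to the scheduler by
# listing each node's allocatable "nvidia.com/gpu" count. Assumes the
# NVIDIA device plugin's standard extended-resource name.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    allocatable = node.status.allocatable or {}
    gpus = allocatable.get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: allocatable nvidia.com/gpu = {gpus}")
```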
Verify InferenceServices and GPU Usage
How to verify InferenceService deployments and GPU resource usage in Hybrid Manager.
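
A minimal verification sketch along the same lines, assuming the same placeholder namespace as above: list the InferenceServices in a project namespace and report each one's `Ready` condition and serving URL from its status block.

```python
# Minimal sketch: list InferenceServices in a project namespace and
# report each one's Ready condition and serving URL. The namespace is
# the same placeholder used above.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

services = api.list_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="my-hcp-project",
    plural="inferenceservices",
)

for svc in services.get("items", []):
    name = svc["metadata"]["name"]
    status = svc.get("status", {})
    conditions = status.get("conditions", [])
    ready = next((c["status"] for c in conditions if c.get("type") == "Ready"), "Unknown")
    print(f"{name}: Ready={ready} url={status.get('url', 'pending')}")
```

The same status should also be visible with `kubectl get inferenceservices -n <namespace>`, which is often the quicker check during troubleshooting.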