Limitations

Please be aware that AI applications developed with GenAI Builder may rely on tools to enhance models' capabilities, and those tools can in turn introduce security risks. Because tools allow models to interact with the PGAI environment, certain tools can present risk. In particular, tools that enable interaction with the network, the file system, or the OS (such as executing commands) should all be considered risky, depending on the details of that interaction.

The level of risk varies. For example, a tool that allows an LLM to read files in a well-defined, application-related directory is low risk, while one that allows reading any file on the agent's file system would be problematic.
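As a minimal sketch of the lower-risk approach, a file-reading tool can resolve each requested path and refuse anything that escapes a single application-owned directory. The directory path and function name below are hypothetical, not part of GenAI Builder:

```python
import os

# Assumed application-owned directory; adjust for your deployment.
ALLOWED_DIR = "/var/app/data"


def read_file_tool(relative_path: str) -> str:
    """Read a file only if it resolves inside ALLOWED_DIR.

    realpath() collapses symlinks and ".." segments, so traversal
    attempts like "../../etc/passwd" are rejected before any I/O.
    """
    base = os.path.realpath(ALLOWED_DIR)
    full_path = os.path.realpath(os.path.join(base, relative_path))
    if os.path.commonpath([full_path, base]) != base:
        raise PermissionError(f"Access outside {ALLOWED_DIR} is not allowed")
    with open(full_path, "r") as f:
        return f.read()
```

Checking the resolved path, rather than the raw input string, is the important design choice: it closes off both `..` traversal and symlink escapes in one place.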

Customers can elect to deploy tools they develop and make them available to models for use. Introducing potentially user-controlled code through the agent, directly into production environments, carries risk and should be done only with a clear understanding that the customer is assuming that risk.

