Describe the bug
I'm trying to use the LlamaGuard7B validator in a Streamlit app with `use_remote=True`, but it fails consistently with:

```
Failed to get valid response from lamaguard-7b model. status: none. detail: unknown error
```
Environment
- guardrails-ai version: 0.6.6
- Python version: 3.10
- Remote inference: Enabled via use_remote=True
Other validators work fine, and I also tried a newer API key, but the problem persists.
Code:

```python
from guardrails.hub import LlamaGuard7B  # Guardrails Hub validator

input_validators.append(LlamaGuard7B(policies=selected_policies, on_fail="noop", use_remote=True))
```
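For context, these validators are attached to a single Guard that screens user input in the Streamlit app; the snippet below is a simplified sketch of that wiring (`user_message` is a placeholder for the chat input):

```python
from guardrails import Guard

# Collect the configured validators into one Guard and run it on the
# user's message before it is sent to the LLM.
guard = Guard().use_many(*input_validators)
outcome = guard.validate(user_message)  # this is where the error above is raised
```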
Could you confirm whether the remote endpoint for LlamaGuard7B is still active? Is there any additional configuration required for remote inference to work?
Also, how should we handle fallback in case the remote model fails?
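Right now the only workaround I can think of is to fail open when the remote call errors out, roughly as sketched below (`safe_validate` and `user_message` are just placeholder names from my app); is there a recommended pattern instead?

```python
import streamlit as st

def safe_validate(guard, user_message):
    # Workaround sketch: if the remote LlamaGuard7B call raises (e.g. the
    # "Failed to get valid response" error above), warn and let the input
    # through so the Streamlit app keeps running instead of crashing.
    try:
        outcome = guard.validate(user_message)
        return outcome.validation_passed
    except Exception as exc:
        st.warning(f"Remote LlamaGuard7B check failed, skipping validation: {exc}")
        return True  # fail-open fallback
```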
Thanks in advance!