
[bug] Failed to get valid response from lamaguard-7b model. status: none. detail: unknown error #1277

Open

Description

@mmdj0

Describe the bug
I'm trying to use the LlamaGuard7B validator in a Streamlit app with use_remote=True, but it fails consistently with:
Failed to get valid response from lamaguard-7b model. status: none. detail: unknown error
Environment

  • guardrails-ai version: 0.6.6
  • Python version: 3.10
  • Remote inference: Enabled via use_remote=True

Other validators work fine, and I even tried a newer API key, but the problem persists.

Code:

```python
input_validators.append(
    LlamaGuard7B(
        policies=selected_policies,
        on_fail="noop",
        use_remote=True,
    )
)
```
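For context, the validator is attached to a Guard roughly like this (a minimal sketch; the policy constant is illustrative, and it assumes the validator was installed via `guardrails hub install hub://guardrails/llamaguard_7b`):

```python
from guardrails import Guard
from guardrails.hub import LlamaGuard7B

# Illustrative subset of policies; the real app builds this list from
# the Streamlit UI.
selected_policies = [LlamaGuard7B.POLICY__NO_VIOLENCE_HATE]

guard = Guard().use(
    LlamaGuard7B,
    policies=selected_policies,
    on_fail="noop",
    use_remote=True,
)

# This is the call that fails with "status: none. detail: unknown error".
result = guard.validate("some user input")
```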

Could you confirm whether the remote endpoint for LlamaGuard7B is still active, or whether any additional configuration is required for remote inference to work?
Also, how should we handle fallback when the remote model fails (see the sketch below)?
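For reference, the fallback I have in mind is just a try/except wrapper around validation; `validate_with_fallback` below is a hypothetical helper of my own, not part of the guardrails API:

```python
import streamlit as st

def validate_with_fallback(guard, text):
    """Hypothetical sketch: run the guard, but skip the LlamaGuard check
    (with a visible warning) if the remote endpoint errors out."""
    try:
        return guard.validate(text)
    except Exception as exc:
        st.warning(f"LlamaGuard7B remote check failed; skipping: {exc}")
        return None
```

Is something like this the recommended pattern, or is there built-in support for falling back to local inference?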

Thanks in advance!
