@@ -56,7 +56,7 @@ pip install guardrails-ai
### Create Input and Output Guards for LLM Validation

1. Download and configure the Guardrails Hub CLI.
-
+
```bash
pip install guardrails-ai
guardrails configure
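# Next step (cut off by this hunk): install validators from the Guardrails Hub.
# A minimal sketch; the hub URIs below are inferred from the validators imported
# later in this diff (CompetitorCheck, ToxicLanguage), so treat them as assumptions.
guardrails hub install hub://guardrails/competitor_check
guardrails hub install hub://guardrails/toxic_language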
@@ -96,7 +96,7 @@ pip install guardrails-ai
```

Then, create a Guard from the installed guardrails.
-
+
```python
from guardrails import Guard, OnFailAction
from guardrails.hub import CompetitorCheck, ToxicLanguage
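# A plausible continuation of this snippet (cut off by the hunk; the validator
# arguments are illustrative assumptions): chain both validators onto one Guard
# and raise an exception when validation fails.
guard = Guard().use_many(
    CompetitorCheck(["Apple", "Microsoft", "Google"], on_fail=OnFailAction.EXCEPTION),
    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail=OnFailAction.EXCEPTION),
)

# Passes: no competitors mentioned, no toxic language.
guard.validate("The sun rises in the east and sets in the west.")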
@@ -161,7 +161,7 @@ raw_output, validated_output, *rest = guard(
print(validated_output)
```

- This prints:
+ This prints:
```
{
"pet_type": "dog",
@@ -175,7 +175,7 @@ Guardrails can be set up as a standalone service served by Flask with `guardrail

1. Install: `pip install "guardrails-ai"`
2. Configure: `guardrails configure`
- 3. Create a config: `guardrails create --validators=hub://guardrails/two_words --name=two-word-guard`
+ 3. Create a config: `guardrails create --validators=hub://guardrails/two_words --guard-name=two-word-guard`
4. Start the dev server: `guardrails start --config=./config.py`
5. Interact with the dev server via the snippets below
```
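# A plausible head for the snippet whose tail appears in the next hunk; the
# base_url path is an assumption about the dev server's OpenAI-compatible
# guard endpoint, so verify it against your running server.
import openai

openai.base_url = "http://localhost:8000/guards/two-word-guard/openai/v1/"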
@@ -204,7 +204,7 @@ completion = openai.chat.completions.create(
)
```

- For production deployments, we recommend using Docker with Gunicorn as the WSGI server for improved performance and scalability.
+ For production deployments, we recommend using Docker with Gunicorn as the WSGI server for improved performance and scalability.
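A hedged sketch of such a deployment; the Gunicorn app path below is an assumption about the `guardrails-api` package rather than a documented entry point, so check it against your installed version:

```bash
# Run the Guardrails Flask app under Gunicorn instead of the dev server.
gunicorn --bind 0.0.0.0:8000 --workers 4 "guardrails_api.app:create_app()"
```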

## FAQ