# Giskard LLM Utils

A Python library providing utility functions and tools for working with Large Language Models (LLMs). This library is part of the Giskard ecosystem and provides various utilities for LLM operations, including model management, clustering, and more.

## Purpose

This library aims to simplify working with LLMs by providing:

- A unified interface for different LLM providers through LiteLLM
- Support for both cloud-based and local embedding models
- Easy configuration through environment variables or direct initialization
- Synchronous and asynchronous operations for better performance

## Installation

### Standard Installation

```bash
pip install giskard-lmutils
```

### Local Embedding Support

For local embedding capabilities, install with the `local-embedding` extra:

```bash
pip install "giskard-lmutils[local-embedding]"
```

This will install the required dependencies (`torch` and `transformers`) for running embedding models locally.

### Development Installation

1. Install Python, Rye, and make
2. Clone this repository
3. Set up the virtual environment using `make setup`

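The steps above can be sketched as the following shell session; the clone URL is an assumption based on the package name, so substitute the actual repository location:

```shell
# Clone URL is a guess from the package name -- adjust to the real repository
git clone https://github.com/Giskard-AI/giskard-lmutils.git
cd giskard-lmutils

# Create the virtual environment and install dev dependencies via Rye
make setup
```
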
## Using LiteLLMModel

The `LiteLLMModel` class provides a unified interface for working with various LLM providers through the [LiteLLM](https://github.com/BerriAI/litellm) library. It supports both completion and embedding operations, with both synchronous and asynchronous methods.

### Configuration

You can configure the model in two ways:

1. Through environment variables:

```bash
# Required for OpenAI models
export OPENAI_API_KEY="your-api-key"

# Model configuration
export GSK_COMPLETION_MODEL="gpt-3.5-turbo"
export GSK_EMBEDDING_MODEL="text-embedding-ada-002"
```

```python
from giskard_lmutils.model import LiteLLMModel

# This will use environment variables for model names
model = LiteLLMModel(
    completion_params={"temperature": 0.7},
    embedding_params={"is_local": False}  # Optional, defaults to False
)
```

2. Through direct initialization:

```python
from giskard_lmutils.model import LiteLLMModel

# Specify models directly
model = LiteLLMModel(
    completion_model="gpt-3.5-turbo",
    embedding_model="text-embedding-ada-002",
    completion_params={"temperature": 0.7},
    embedding_params={"is_local": False}  # Optional, defaults to False
)
```

Note: When using OpenAI models, you must set the `OPENAI_API_KEY` environment variable. For other providers, refer to the [LiteLLM documentation](https://github.com/BerriAI/litellm) for their specific API key requirements.

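Because a missing API key otherwise surfaces only at the first request, it can help to fail fast at startup. A minimal sketch of such a guard (`require_env` is a hypothetical helper, not part of this library):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, raising if it is unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before creating the model")
    return value

# e.g. before constructing LiteLLMModel with an OpenAI model:
# require_env("OPENAI_API_KEY")
```
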
### Usage Examples

#### Text Completion

```python
# Synchronous completion
response = model.complete([
    {"role": "user", "content": "What is the capital of France?"}
])

# Asynchronous completion
response = await model.acomplete([
    {"role": "user", "content": "What is the capital of France?"}
])
```

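Outside a notebook or an existing event loop, the asynchronous methods must be driven with `asyncio.run`. A minimal sketch of that pattern (the completion call is stubbed here so the snippet runs without credentials; with the real model you would call `model.acomplete` instead):

```python
import asyncio

# Stand-in for model.acomplete(...) so this sketch runs without an API key
async def acomplete_stub(messages):
    await asyncio.sleep(0)  # simulates the network round-trip
    return {"role": "assistant", "content": "Paris"}

async def main():
    return await acomplete_stub(
        [{"role": "user", "content": "What is the capital of France?"}]
    )

response = asyncio.run(main())
print(response["content"])  # Paris
```
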
#### Text Embedding

```python
# Synchronous embedding
embeddings = model.embed(["Hello, world!", "Another text"])

# Asynchronous embedding
embeddings = await model.aembed(["Hello, world!", "Another text"])

# Local embedding
model = LiteLLMModel(
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",
    embedding_params={"is_local": True}
)
embeddings = model.embed(["Hello, world!"])
```

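Embedding vectors are typically compared by cosine similarity. Since `numpy` is already a core dependency, a small helper can be sketched as follows; the vectors below are illustrative stand-ins for rows returned by `model.embed`:

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative vectors; in practice use rows of model.embed([...])
v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 1.0, 0.0]
print(round(cosine_similarity(v1, v2), 3))  # 0.5
```
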
## Requirements

- Python >= 3.9, < 3.14
- Core dependencies:
  - numpy >= 2.2.2
  - litellm >= 1.59.3
- Optional dependencies (for local embedding):
  - torch >= 2.6.0
  - transformers >= 4.51.3

## License

This project is licensed under the Apache Software License 2.0 - see the [LICENSE](LICENSE) file for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Authors

- Kevin Messiaen ([email protected])