.. docs/tutorial_3_agents.rst
Local LLMs [Experimental]
~~~~~~~~~~~~~~~~~~~~~~~~~

You can use local LLMs via `Ollama <https://ollama.ai>`_. Please note:
support for local LLMs via Ollama is currently experimental and not
officially supported. If you use Ollama with Plurals and find any bugs,
please post a GitHub issue.

1. Install Ollama and start the server:

.. code-block:: bash

   ollama start

2. Pull your desired model:

.. code-block:: bash

   ollama pull gemma:2b

3. Configure your Agent with the Ollama endpoint:

.. code-block:: python

   from plurals.agent import Agent

   # point to the local Ollama server
   local_agent = Agent(
       model="ollama/gemma:2b",
       kwargs={'api_base': 'http://localhost:11434'}
   )
   print(local_agent.process("Say hello"))

.. note::

   - First run the model locally with ``ollama run gemma:2b``
   - Full model list: https://ollama.ai/library
   - Again: this integration is *experimental* as of now. In contrast to
     other documented features, API stability is not currently guaranteed.
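If you switch between several local models, the endpoint configuration is easy to mistype. One option is to build the Agent arguments in one place. This is only a sketch: ``ollama_agent_kwargs`` and ``DEFAULT_OLLAMA_BASE`` are hypothetical names, not part of the Plurals API; only the ``Agent(model=..., kwargs=...)`` shape comes from the tutorial above.

.. code-block:: python

   # Hypothetical helper (not part of the Plurals API): builds the keyword
   # arguments for an Agent backed by a local Ollama server, so the endpoint
   # string lives in exactly one place.
   DEFAULT_OLLAMA_BASE = "http://localhost:11434"  # Ollama's default address

   def ollama_agent_kwargs(model_name, api_base=DEFAULT_OLLAMA_BASE):
       """Return the arguments used to construct an Agent for a local model."""
       return {
           "model": f"ollama/{model_name}",    # provider-prefixed model name
           "kwargs": {"api_base": api_base},   # where the Ollama server listens
       }

   # Same configuration as the tutorial example above:
   print(ollama_agent_kwargs("gemma:2b"))

With this in hand, an Agent for any pulled model can be created as ``Agent(**ollama_agent_kwargs("gemma:2b"))``.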
Inspecting the exact prompts that an Agent is doing