Use a local LM Studio (Llama 3.2) server as the LLM backend
Feature description
I use LM Studio, a local AI runtime that exposes an OpenAI-compatible API. I can run an LLM as a local server, so instead of calling OpenAI, this project could point its endpoint at my local machine and still apply for jobs the same way.
This would cost me nothing to run, and I wouldn't need Gemini or ChatGPT Pro, so it would save me money. Since this project is open source, I will also look into implementing it myself.
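A minimal sketch of what "swap the endpoint" could look like. The base URL below is LM Studio's usual default local server address and the model name is a placeholder, both assumptions; check the Server tab in LM Studio for the actual values. The snippet only builds the OpenAI-style request payload, it does not contact any server:

```python
import json

# Assumption: LM Studio's local server defaults to this OpenAI-compatible
# base URL. Verify the host/port in LM Studio's Server tab.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt: str, model: str = "llama-3.2") -> dict:
    """Build an OpenAI-compatible chat-completion payload.

    The same JSON shape works whether it is sent to api.openai.com or to a
    local LM Studio server; only the base URL changes.
    """
    return {
        "model": model,  # placeholder model id; use the one loaded in LM Studio
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_chat_request("Write a short cover letter for a software role.")
print(json.dumps(payload, indent=2))
```

If the project uses the official `openai` Python client, the same switch can likely be made by constructing the client with `base_url=BASE_URL` (LM Studio generally accepts any placeholder API key), but that depends on the client version in use.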
Motivation
No response
Alternatives considered
No response
Additional context
No response