Welcome, developers and contributors! This README provides a structured guide to help you dive into the project seamlessly.
- Ensure you have the `pnpm` or `npm` package manager installed on your computer.
- Ensure you have `node.js` installed on your computer.
```bash
# Go to the api-endpoint folder
cd api-endpoint

# Install dependencies
pnpm install

# Start the development server
pnpm run dev
```
🌐 API streaming server is ready: this server is the core of the project, streaming outputs directly from the LLM model to the API-based version of the extension!
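As a rough illustration of the streaming core, here is a minimal sketch of a handler that forwards LLM tokens to the client as they arrive. This is an assumption-laden example, not the repo's actual code: the `openai` npm client, the `LLM_API_BASE` variable, the port, and the `mistral` model name are all placeholders.

```ts
// Minimal sketch of a streaming handler (hypothetical, not the repo's code).
// Assumes the `openai` npm package (v4+) pointed at an OpenAI-compatible backend.
import { createServer } from "node:http"
import OpenAI from "openai"

const client = new OpenAI({
  baseURL: process.env.LLM_API_BASE, // e.g. a local LiteLLM proxy (assumption)
  apiKey: process.env.LLM_API_KEY ?? "not-needed-for-local-models"
})

createServer(async (req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain; charset=utf-8" })

  // Request a streamed completion and write each token as it arrives.
  const stream = await client.chat.completions.create({
    model: "mistral", // placeholder model name
    messages: [{ role: "user", content: "Summarize this page." }],
    stream: true
  })
  for await (const chunk of stream) {
    res.write(chunk.choices[0]?.delta?.content ?? "")
  }
  res.end()
}).listen(3000)
```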
```bash
# Go to the Chrome extension folder
cd chatwithpage-extension

# Install dependencies
pnpm install

# Build the Chrome extension
pnpm build
```
📂 Output Directory: All production files will be located in the `build` folder.
- Model Installation: Go to [ollama.ai](https://ollama.ai) and follow the installation instructions.
- Run Model: `ollama run mistral`
- Server Setup: Install `litellm` via pip: `pip install litellm`
- Start Server: `litellm --model ollama/mistral --api_base http://localhost:11434 --temperature 0.3 --max_tokens 2048` (a client-side usage sketch follows this list)
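Once the proxy is up, any OpenAI-compatible client can talk to it. A hedged TypeScript example follows; the port `8000` and the `/chat/completions` route are assumptions based on LiteLLM's defaults, so check the proxy's startup output for the actual address.

```ts
// Sketch: query the local LiteLLM proxy with plain fetch (hypothetical values).
const res = await fetch("http://localhost:8000/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "ollama/mistral",
    messages: [{ role: "user", content: "Hello from the extension!" }]
  })
})

const data = await res.json()
console.log(data.choices[0].message.content)
```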
- Navigate: Go to `chrome://extensions`.
- Enable Developer Mode.
- Load Extension: Click "Load Unpacked" and navigate to `build/chrome-mv3-dev` or `build/chrome-mv3-prod`.
- Streamline the `local-stream` repo.
- Ensure compatibility with the Vercel AI and OpenAI npm modules.
This extension is proudly built with Plasmo.
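For contributors new to Plasmo: content scripts are plain TypeScript modules that export a `config` object. Below is a minimal sketch of how a content script could hand the visible page text to the rest of the extension; the file name, message type, and message shape are hypothetical, not the extension's actual implementation.

```ts
// contents/page-text.ts — hypothetical Plasmo content script.
import type { PlasmoCSConfig } from "plasmo"

export const config: PlasmoCSConfig = {
  matches: ["<all_urls>"]
}

// Reply with the visible page text when another part of the extension asks.
chrome.runtime.onMessage.addListener((message, _sender, sendResponse) => {
  if (message?.type === "GET_PAGE_TEXT") {
    sendResponse({ text: document.body.innerText })
  }
})
```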
📘 Note: Your collaboration is highly valued. Let's build something awesome together!