* Update server instructions for web front end
* Update server README
* Remove duplicate OAI instructions
* Fix duplicate text
---------
Co-authored-by: Jesse Johnson <[email protected]>
Then you can use llama.cpp as an OpenAI-compatible **chat.completion** or **text_completion** API.
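If it helps to see the request shape, here is a minimal sketch of a **chat.completion** call; the URL, port, and model name below are placeholders for however you launched the OpenAI-compatible endpoint, so adjust them to your setup:

```
// Sketch of a chat.completion request; the endpoint URL and model name are
// placeholders, not values fixed by llama.cpp.
const response = await fetch('http://localhost:8081/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama',
    messages: [{ role: 'user', content: 'Write a dad joke.' }],
  }),
})
const data = await response.json()
console.log(data.choices[0].message.content)
```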
### Extending the Web Front End
The default location for the static files is `examples/server/public`. You can extend the front end by running the server binary with `--path` set to `./your-directory` and importing `/completion.js` to get access to the `llamaComplete()` method. A simple example is below:
```
<html>
  <body>
    <pre>
      <script type="module">
        import { llamaComplete } from '/completion.js'

        llamaComplete({
            prompt: "### Instruction:\nWrite dad jokes, each one paragraph. You can use html formatting if needed.\n\n### Response:",
            n_predict: 512,
          },
          null,
          // write each streamed chunk of generated text straight into the page
          (chunk) => document.write(chunk.data.content)
        )
      </script>
    </pre>
  </body>
</html>
```
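To serve a page like this one, start the server with `--path` pointing at your directory, for example something along the lines of `./server -m <your model> --path ./your-directory` (the binary and model names depend on your build and download), then open the server's address in a browser.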