
Commit 983b555

Jesse Johnson (jessejohnson) and Jesse Johnson authored
Update Server Instructions (ggml-org#2113)
* Update server instructions for web front end
* Update server README
* Remove duplicate OAI instructions
* Fix duplicate text

Co-authored-by: Jesse Johnson <[email protected]>
1 parent ec326d3 commit 983b555

File tree: 1 file changed, +25 -1 lines

examples/server/README.md (+25 -1)

@@ -21,7 +21,7 @@ Command line options:
 - `-to N`, `--timeout N`: Server read/write timeout in seconds. Default `600`.
 - `--host`: Set the hostname or ip address to listen. Default `127.0.0.1`.
 - `--port`: Set the port to listen. Default: `8080`.
-- `--public`: path from which to serve static files (default examples/server/public)
+- `--path`: path from which to serve static files (default examples/server/public)
 - `--embedding`: Enable embedding extraction, Default: disabled.
 
 ## Build
@@ -207,3 +207,27 @@ openai.api_base = "http://<Your api-server IP>:port"
 ```
 
 Then you can utilize llama.cpp as an OpenAI's **chat.completion** or **text_completion** API
+
+### Extending the Web Front End
+
+The default location for the static files is `examples/server/public`. You can extend the front end by running the server binary with `--path` set to `./your-directory` and importing `/completion.js` to get access to the llamaComplete() method. A simple example is below:
+
+```
+<html>
+  <body>
+    <pre>
+      <script type="module">
+        import { llamaComplete } from '/completion.js'
+
+        llamaComplete({
+            prompt: "### Instruction:\nWrite dad jokes, each one paragraph. You can use html formatting if needed.\n\n### Response:",
+            n_predict: 1024,
+          },
+          null,
+          (chunk) => document.write(chunk.data.content)
+        )
+      </script>
+    </pre>
+  </body>
+</html>
+```
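For context on the section added above, here is a minimal sketch of a variation on that example, streaming the completion into a dedicated element instead of calling `document.write`. It assumes the same `llamaComplete(params, controller, callback)` shape and `chunk.data.content` field shown in the diff, and that the server is started with `--path` pointing at the directory holding this file; the launch command in the comment, the element id, the prompt, and the model path are illustrative assumptions, not part of this commit.

```
<!--
  Hypothetical launch (binary name and model path are assumptions):
    ./server -m ./models/7B/ggml-model.bin --path ./your-directory
  Then open http://127.0.0.1:8080 (the default host and port above) in a browser.
-->
<html>
  <body>
    <div id="output"></div>
    <script type="module">
      import { llamaComplete } from '/completion.js'

      const out = document.getElementById('output')

      // Same call shape as the README example: request params, an optional
      // controller (null here), and a callback invoked per streamed chunk.
      llamaComplete({
          prompt: "### Instruction:\nExplain the --path option in one sentence.\n\n### Response:",
          n_predict: 128,
        },
        null,
        (chunk) => { out.textContent += chunk.data.content }
      )
    </script>
  </body>
</html>
```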
