
Commit fbf188b

Add a simpler streaming example; fix UTF-8 handling in the more complicated one
1 parent 404963d

File tree: 1 file changed (+63 additions, -8 deletions)

  • files/en-us/web/api/fetch_api/using_fetch/index.md
@@ -397,17 +397,67 @@ The method will throw an exception if the response body is not in the appropriate
 
 Request and response bodies are actually {{domxref("ReadableStream")}} objects, and whenever you read them, you're streaming the content. This is good for memory efficiency, because the browser doesn't have to buffer the entire response in memory before the caller retrieves it using a method like `json()`.
 
-This also means that the caller can create a reader for the stream using {{domxref("ReadableStream.getReader()")}}, and then process the content as it arrives, without having to wait for the entire response body to arrive.
+This also means that the caller can process the content incrementally as it is received.
 
-In the example below, we fetch a text resource and process it line by line:
+For example, consider a `GET` request that fetches a large text file and processes it in some way, or displays it to the user:
+
+```js
+const url = "https://www.example.org/a-large-file.txt";
+
+async function fetchText(url) {
+  try {
+    const response = await fetch(url);
+    if (!response.ok) {
+      throw new Error(`Response status: ${response.status}`);
+    }
+
+    const text = await response.text();
+    console.log(text);
+  } catch (e) {
+    console.error(e);
+  }
+}
+```
+
+If we use {{domxref("Response.text()")}}, as above, we must wait until the whole file has been received before we can process any of it.
+
+If we stream the response instead, we can process chunks of the body as they are received from the network:
+
+```js
+const url = "https://www.example.org/a-large-file.txt";
+
+async function fetchTextAsStream(url) {
+  try {
+    const response = await fetch(url);
+    if (!response.ok) {
+      throw new Error(`Response status: ${response.status}`);
+    }
+
+    const stream = response.body.pipeThrough(new TextDecoderStream());
+    for await (const value of stream) {
+      console.log(value);
+    }
+  } catch (e) {
+    console.error(e);
+  }
+}
+```
+
+In this example, we {{jsxref("Statements/for-await...of", "iterate asynchronously", "", "nocode")}} over the stream, processing each chunk as it arrives.
+
+Note that when you access the body directly like this, you get the raw bytes of the response and must transform it yourself. In this case we call {{domxref("ReadableStream.pipeThrough()")}} to pipe the response through a {{domxref("TextDecoderStream")}}, which decodes the UTF-8-encoded body data as text.
+
+### Processing a text file line by line
+
+In the example below, we fetch a text resource and process it line by line, using a regular expression to look for line endings. For simplicity, we assume the text is UTF-8, and don't handle fetch errors:
 
 ```js
 async function* makeTextFileLineIterator(fileURL) {
-  const utf8Decoder = new TextDecoder("utf-8");
   const response = await fetch(fileURL);
-  const reader = response.body.getReader();
+  const reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
+
   let { value: chunk, done: readerDone } = await reader.read();
-  chunk = chunk ? utf8Decoder.decode(chunk) : "";
+  chunk = chunk || "";
 
   const newline = /\r?\n/gm;
   let startIndex = 0;
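
Context for this hunk, not part of the commit: the same incremental processing can also be written with an explicit reader instead of async iteration, which some runtimes may require. A minimal sketch under that assumption; `fetchTextAsStreamWithReader` is an illustrative name:

```js
async function fetchTextAsStreamWithReader(url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Response status: ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder("utf-8");
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // stream: true buffers any trailing partial multi-byte sequence
    // until the next decode() call, instead of emitting a replacement char.
    console.log(decoder.decode(value, { stream: true }));
  }
}
```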
@@ -419,7 +469,7 @@ async function* makeTextFileLineIterator(fileURL) {
       if (readerDone) break;
       const remainder = chunk.substr(startIndex);
       ({ value: chunk, done: readerDone } = await reader.read());
-      chunk = remainder + (chunk ? utf8Decoder.decode(chunk) : "");
+      chunk = remainder + (chunk || "");
       startIndex = newline.lastIndex = 0;
       continue;
     }
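
The `utf8Decoder.decode(chunk)` calls removed in this hunk decoded each chunk independently, which corrupts any multi-byte character that straddles a chunk boundary. A self-contained demonstration of the bug, not part of the commit:

```js
const bytes = new TextEncoder().encode("é"); // two bytes: 0xC3 0xA9

// Decoding each half independently, as the old code effectively did:
const plain = new TextDecoder();
console.log(plain.decode(bytes.slice(0, 1)) + plain.decode(bytes.slice(1))); // "��"

// Streaming decode, which is what TextDecoderStream does internally:
const streaming = new TextDecoder();
console.log(
  streaming.decode(bytes.slice(0, 1), { stream: true }) +
    streaming.decode(bytes.slice(1)),
); // "é"
```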
@@ -433,13 +483,17 @@ async function* makeTextFileLineIterator(fileURL) {
   }
 }
 
-async function run() {
+async function run(urlOfFile) {
   for await (const line of makeTextFileLineIterator(urlOfFile)) {
     processLine(line);
   }
 }
 
-run();
+function processLine(line) {
+  console.log(line);
+}
+
+run("https://www.example.org/a-large-file.txt");
 ```
 
 ### Locked and disturbed streams
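
An aside on the line iterator above, not part of the commit: the same line-splitting can be phrased as a `TransformStream`, so it composes with the `pipeThrough()` chain this commit introduces. A sketch assuming UTF-8 text as in the example; `lineSplitter` and `logLines` are illustrative names:

```js
function lineSplitter() {
  let buffer = "";
  return new TransformStream({
    transform(chunk, controller) {
      buffer += chunk;
      const lines = buffer.split(/\r?\n/);
      buffer = lines.pop(); // keep any trailing partial line for the next chunk
      for (const line of lines) controller.enqueue(line);
    },
    flush(controller) {
      if (buffer) controller.enqueue(buffer); // last line without a newline
    },
  });
}

async function logLines(url) {
  const response = await fetch(url);
  const lines = response.body
    .pipeThrough(new TextDecoderStream())
    .pipeThrough(lineSplitter());
  for await (const line of lines) {
    console.log(line);
  }
}
```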
@@ -519,6 +573,7 @@ self.addEventListener("fetch", (event) => {
 ## See also
 
 - [Service Worker API](/en-US/docs/Web/API/Service_Worker_API)
+- [Streams API](/en-US/docs/Web/API/Streams_API)
 - [CORS](/en-US/docs/Web/HTTP/CORS)
 - [HTTP](/en-US/docs/Web/HTTP)
 - [Fetch examples on GitHub](https://github.com/mdn/dom-examples/tree/main/fetch)
