Streams

Living Standard — Last Updated

Participate:
GitHub whatwg/streams (new issue, open issues)
IRC: #whatwg on Freenode
Commits:
GitHub whatwg/streams/commits
@streamsstandard
Tests:
web-platform-tests streams/ (ongoing work)
Translations (non-normative):
日本語
Demos:
streams.spec.whatwg.org/demos

Abstract

This specification provides APIs for creating, composing, and consuming streams of data that map efficiently to low-level I/O primitives.

1. Introduction

This section is non-normative.

Large swathes of the web platform are built on streaming data: that is, data that is created, processed, and consumed in an incremental fashion, without ever reading all of it into memory. The Streams Standard provides a common set of APIs for creating and interfacing with such streaming data, embodied in readable streams, writable streams, and transform streams.

These APIs have been designed to efficiently map to low-level I/O primitives, including specializations for byte streams where appropriate. They allow easy composition of multiple streams into pipe chains, or can be used directly via readers and writers. Finally, they are designed to automatically provide backpressure and queuing.

This standard provides the base stream primitives which other parts of the web platform can use to expose their streaming data. For example, [FETCH] exposes Response bodies as ReadableStream instances. More generally, the platform is full of streaming abstractions waiting to be expressed as streams: multimedia streams, file streams, inter-global communication, and more benefit from being able to process data incrementally instead of buffering it all into memory and processing it in one go. By providing the foundation for these streams to be exposed to developers, the Streams Standard enables use cases like real-time video effects, streaming decompression, and incremental image decoding.

Web developers can also use the APIs described here to create their own streams, with the same APIs as those provided by the platform. Other developers can then transparently compose platform-provided streams with those supplied by libraries. In this way, the APIs described here provide a unifying abstraction for all streams, encouraging an ecosystem to grow around these shared and composable interfaces.

2. Model

A chunk is a single piece of data that is written to or read from a stream. It can be of any type; streams can even contain chunks of different types. A chunk will often not be the most atomic unit of data for a given stream; for example a byte stream might contain chunks consisting of 16 KiB Uint8Arrays, instead of single bytes.

2.1. Readable streams

A readable stream represents a source of data, from which you can read. In other words, data comes out of a readable stream. Concretely, a readable stream is an instance of the ReadableStream class.

Although a readable stream can be created with arbitrary behavior, most readable streams wrap a lower-level I/O source, called the underlying source. There are two types of underlying source: push sources and pull sources.

Push sources push data at you, whether or not you are listening for it. They may also provide a mechanism for pausing and resuming the flow of data. An example push source is a TCP socket, where data is constantly being pushed from the OS level, at a rate that can be controlled by changing the TCP window size.

Pull sources require you to request data from them. The data may be available synchronously, e.g. if it is held by the operating system’s in-memory buffers, or asynchronously, e.g. if it has to be read from disk. An example pull source is a file handle, where you seek to specific locations and read specific amounts.

Readable streams are designed to wrap both types of sources behind a single, unified interface. For web developer–created streams, the implementation details of a source are provided by an object with certain methods and properties that is passed to the ReadableStream() constructor.
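For instance, a pull source could be wrapped like this (a minimal sketch; the `lines` array is a stand-in for a real source such as a file handle, which would supply data asynchronously):

```javascript
// Sketch: wrapping a pull source in a ReadableStream.
const lines = ["first", "second", "third"]; // stand-in for real backing data

const readable = new ReadableStream({
  pull(controller) {
    // Called whenever the stream's internal queue wants more data.
    if (lines.length === 0) {
      controller.close();
    } else {
      controller.enqueue(lines.shift());
    }
  }
});
```

A consumer could then read the chunks one at a time via `readable.getReader()`.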

Chunks are enqueued into the stream by the stream’s underlying source. They can then be read one at a time via the stream’s public interface, in particular by using a readable stream reader acquired using the stream’s getReader() method.

Code that reads from a readable stream using its public interface is known as a consumer.

Consumers also have the ability to cancel a readable stream, using its cancel() method. This indicates that the consumer has lost interest in the stream, and will immediately close the stream, throw away any queued chunks, and execute any cancellation mechanism of the underlying source.

Consumers can also tee a readable stream using its tee() method. This will lock the stream, making it no longer directly usable; however, it will create two new streams, called branches, which can be consumed independently.

For streams representing bytes, an extended version of the readable stream is provided to handle bytes efficiently, in particular by minimizing copies. The underlying source for such a readable stream is called an underlying byte source. A readable stream whose underlying source is an underlying byte source is sometimes called a readable byte stream. Consumers of a readable byte stream can acquire a BYOB reader using the stream’s getReader() method.

2.2. Writable streams

A writable stream represents a destination for data, into which you can write. In other words, data goes in to a writable stream. Concretely, a writable stream is an instance of the WritableStream class.

Analogously to readable streams, most writable streams wrap a lower-level I/O sink, called the underlying sink. Writable streams work to abstract away some of the complexity of the underlying sink, by queuing subsequent writes and only delivering them to the underlying sink one by one.

Chunks are written to the stream via its public interface, and are passed one at a time to the stream’s underlying sink. For web developer-created streams, the implementation details of the sink are provided by an object with certain methods that is passed to the WritableStream() constructor.
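As a sketch (the `written` array is a stand-in for a real destination such as a file or socket), a writable stream wrapping a simple sink might look like:

```javascript
const written = [];

const writable = new WritableStream({
  write(chunk) {
    // Invoked once per chunk, in order; returning a promise here would
    // delay delivery of the next chunk until it fulfills.
    written.push(chunk);
  },
  close() {
    written.push("(closed)");
  }
});

async function produce() {
  const writer = writable.getWriter();
  await writer.write("a");
  await writer.write("b");
  await writer.close();
}
const doneWriting = produce();
```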

Code that writes into a writable stream using its public interface is known as a producer.

Producers also have the ability to abort a writable stream, using its abort() method. This indicates that the producer believes something has gone wrong, and that future writes should be discontinued. It puts the stream in an errored state, even without a signal from the underlying sink, and it discards all writes in the stream’s internal queue.

2.3. Transform streams

A transform stream consists of a pair of streams: a writable stream, known as its writable side, and a readable stream, known as its readable side. In a manner specific to the transform stream in question, writes to the writable side result in new data being made available for reading from the readable side.

Concretely, any object with a writable property and a readable property can serve as a transform stream. However, the standard TransformStream class makes it much easier to create such a pair that is properly entangled. It wraps a transformer, which defines algorithms for the specific transformation to be performed. For web developer–created streams, the implementation details of a transformer are provided by an object with certain methods and properties that is passed to the TransformStream() constructor.

An identity transform stream is a type of transform stream which forwards all chunks written to its writable side to its readable side, without any changes. This can be useful in a variety of scenarios. By default, the TransformStream constructor will create an identity transform stream, when no transform() method is present on the transformer object.
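A minimal sketch of this default behavior:

```javascript
// Sketch: with no transformer argument, TransformStream acts as an
// identity transform; chunks pass through unchanged.
const { readable, writable } = new TransformStream();

const writer = writable.getWriter();
writer.write("hello");
writer.close();
// Reading from `readable` now yields "hello", then done.
```

Identity transform streams are handy, for example, for inserting an extra queue into a pipe chain, or for converting a { writable, readable } pair into a single pipe endpoint.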

Some examples of potential transform streams include a GZIP compressor, to which uncompressed bytes are written and from which compressed bytes are read; a video decoder, to which encoded bytes are written and from which uncompressed video frames are read; and a text decoder, to which bytes are written and from which strings are read.

2.4. Pipe chains and backpressure

Streams are primarily used by piping them to each other. A readable stream can be piped directly to a writable stream, using its pipeTo() method, or it can be piped through one or more transform streams first, using its pipeThrough() method.

A set of streams piped together in this way is referred to as a pipe chain. In a pipe chain, the original source is the underlying source of the first readable stream in the chain; the ultimate sink is the underlying sink of the final writable stream in the chain.

Once a pipe chain is constructed, it will propagate signals regarding how fast chunks should flow through it. If any step in the chain cannot yet accept chunks, it propagates a signal backwards through the pipe chain, until eventually the original source is told to stop producing chunks so fast. This process of normalizing flow from the original source according to how fast the chain can process chunks is called backpressure.

Concretely, the original source is given the controller.desiredSize (or byteController.desiredSize) value, and can then adjust its rate of data flow accordingly. This value is derived from the writer.desiredSize corresponding to the ultimate sink, which gets updated as the ultimate sink finishes writing chunks. The pipeTo() method used to construct the chain automatically ensures this information propagates back through the pipe chain.
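This can be observed directly. In the following sketch the sink never acknowledges its writes (for illustration only), so queued chunks accumulate and the writer's desired size goes negative:

```javascript
const writable = new WritableStream({
  write(chunk) {
    return new Promise(() => {}); // never settles: simulates a stalled sink
  }
});

const writer = writable.getWriter();
console.log(writer.desiredSize); // 1: high water mark (1) minus queue size (0)
writer.write("a");
writer.write("b");
console.log(writer.desiredSize); // -1: the queue is now over the high water mark
```

A well-behaved producer would wait for the writer.ready promise, which fulfills once backpressure has subsided, before writing more.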

When teeing a readable stream, the backpressure signals from its two branches will aggregate, such that if neither branch is read from, a backpressure signal will be sent to the underlying source of the original stream.

Piping locks the readable and writable streams, preventing them from being manipulated for the duration of the pipe operation. This allows the implementation to perform important optimizations, such as directly shuttling data from the underlying source to the underlying sink while bypassing many of the intermediate queues.

2.5. Internal queues and queuing strategies

Both readable and writable streams maintain internal queues, which they use for similar purposes. In the case of a readable stream, the internal queue contains chunks that have been enqueued by the underlying source, but not yet read by the consumer. In the case of a writable stream, the internal queue contains chunks which have been written to the stream by the producer, but not yet processed and acknowledged by the underlying sink.

A queuing strategy is an object that determines how a stream should signal backpressure based on the state of its internal queue. The queuing strategy assigns a size to each chunk, and compares the total size of all chunks in the queue to a specified number, known as the high water mark. The resulting difference, high water mark minus total size, is used to determine the desired size to fill the stream’s queue.

For readable streams, an underlying source can use this desired size as a backpressure signal, slowing down chunk generation so as to try to keep the desired size above or at zero. For writable streams, a producer can behave similarly, avoiding writes that would cause the desired size to go negative.

Concretely, a queuing strategy for web developer–created streams is given by any JavaScript object with a highWaterMark property. For byte streams the highWaterMark always has units of bytes. For other streams the default unit is chunks, but a size() function can be included in the strategy object which returns the size for a given chunk. This permits the highWaterMark to be specified in arbitrary floating-point units.
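For instance, a strategy that measures chunks in bytes can be written by hand, or obtained via the built-in ByteLengthQueuingStrategy class:

```javascript
// A hand-written strategy: each chunk's size is its byteLength,
// so the high water mark is expressed in bytes.
const manual = {
  highWaterMark: 1024,
  size(chunk) { return chunk.byteLength; }
};

// The built-in class packages the same behavior.
const builtIn = new ByteLengthQueuingStrategy({ highWaterMark: 1024 });

console.log(builtIn.size(new Uint8Array(16))); // 16
```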

A simple example of a queuing strategy would be one that assigns a size of one to each chunk, and has a high water mark of three. This would mean that up to three chunks could be enqueued in a readable stream, or three chunks written to a writable stream, before the streams are considered to be applying backpressure.

In JavaScript, such a strategy could be written manually as { highWaterMark: 3, size() { return 1; }}, or using the built-in CountQueuingStrategy class, as new CountQueuingStrategy({ highWaterMark: 3 }).

2.6. Locking

A readable stream reader, or simply reader, is an object that allows direct reading of chunks from a readable stream. Without a reader, a consumer can only perform high-level operations on the readable stream: canceling the stream, or piping the readable stream to a writable stream. A reader is acquired via the stream’s getReader() method.

A readable byte stream has the ability to vend two types of readers: default readers and BYOB readers. BYOB ("bring your own buffer") readers allow reading into a developer-supplied buffer, thus minimizing copies. A non-byte readable stream can only vend default readers. Default readers are instances of the ReadableStreamDefaultReader class, while BYOB readers are instances of ReadableStreamBYOBReader.

Similarly, a writable stream writer, or simply writer, is an object that allows direct writing of chunks to a writable stream. Without a writer, a producer can only perform the high-level operations of aborting the stream or piping a readable stream to the writable stream. Writers are represented by the WritableStreamDefaultWriter class.

Under the covers, these high-level operations actually use a reader or writer themselves.

A given readable or writable stream only has at most one reader or writer at a time. We say in this case the stream is locked, and that the reader or writer is active. This state can be determined using the readableStream.locked or writableStream.locked properties.

A reader or writer also has the capability to release its lock, which makes it no longer active, and allows further readers or writers to be acquired. This is done via the defaultReader.releaseLock(), byobReader.releaseLock(), or writer.releaseLock() method, as appropriate.
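A short sketch of acquiring and releasing a lock:

```javascript
const stream = new ReadableStream();

const reader = stream.getReader();
console.log(stream.locked); // true: the reader is active

reader.releaseLock();
console.log(stream.locked); // false: a new reader can now be acquired

const anotherReader = stream.getReader(); // succeeds
```

Note that releaseLock() throws if read requests are still pending, so a consumer should let outstanding reads settle before releasing.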

3. Readable streams

3.1. Using readable streams

The simplest way to consume a readable stream is to pipe it to a writable stream. This ensures that backpressure is respected, and any errors (either writing or reading) are propagated through the chain:
readableStream.pipeTo(writableStream)
  .then(() => console.log("All data successfully written!"))
  .catch(e => console.error("Something went wrong!", e));
If you simply want to be alerted of each new chunk from a readable stream, you can pipe it to a new writable stream that you custom-create for that purpose:
readableStream.pipeTo(new WritableStream({
  write(chunk) {
    console.log("Chunk received", chunk);
  },
  close() {
    console.log("All data successfully read!");
  },
  abort(e) {
    console.error("Something went wrong!", e);
  }
}));

By returning promises from your write() implementation, you can signal backpressure to the readable stream.
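For example, the following sink (a sketch; the delay is a stand-in for real asynchronous work) processes at most one chunk every 10 milliseconds, and a pipe into it will slow down accordingly:

```javascript
function delay(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

const received = [];
const throttledSink = new WritableStream({
  write(chunk) {
    received.push(chunk);
    return delay(10); // the next chunk is not delivered until this fulfills
  }
});
```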

Although readable streams will usually be used by piping them to a writable stream, you can also read them directly by acquiring a reader and using its read() method to get successive chunks. For example, this code logs the next chunk in the stream, if available:
const reader = readableStream.getReader();

reader.read().then(
  ({ value, done }) => {
    if (done) {
      console.log("The stream was already closed!");
    } else {
      console.log(value);
    }
  },
  e => console.error("The stream became errored and cannot be read from!", e)
);

This more manual method of reading a stream is mainly useful for library authors building new high-level operations on streams, beyond the provided ones of piping and teeing.

The above example showed using the readable stream’s default reader. If the stream is a readable byte stream, you can also acquire a BYOB reader for it, which allows more precise control over buffer allocation in order to avoid copies. For example, this code reads the first 1024 bytes from the stream into a single memory buffer:
const reader = readableStream.getReader({ mode: "byob" });

let startingAB = new ArrayBuffer(1024);
readInto(startingAB)
  .then(buffer => console.log("The first 1024 bytes:", buffer))
  .catch(e => console.error("Something went wrong!", e));

function readInto(buffer, offset = 0) {
  if (offset === buffer.byteLength) {
    return Promise.resolve(buffer);
  }

  const view = new Uint8Array(buffer, offset, buffer.byteLength - offset);
  return reader.read(view).then(newView => {
    return readInto(newView.buffer, offset + newView.byteLength);
  });
}

An important thing to note here is that the final buffer value is different from the startingAB, but it (and all intermediate buffers) shares the same backing memory allocation. At each step, the buffer is transferred to a new ArrayBuffer object. The newView is a new Uint8Array, with that ArrayBuffer object as its buffer property, the offset that bytes were written to as its byteOffset property, and the number of bytes that were written as its byteLength property.

3.2. Class ReadableStream

The ReadableStream class is a concrete instance of the general readable stream concept. It is adaptable to any chunk type, and maintains an internal queue to keep track of data supplied by the underlying source but not yet read by any consumer.

3.2.1. Class definition

This section is non-normative.

If one were to write the ReadableStream class in something close to the syntax of [ECMASCRIPT], it would look like

class ReadableStream {
  constructor(underlyingSource = {}, strategy = {})

  get locked()

  cancel(reason)
  getIterator({ preventCancel } = {})
  getReader({ mode } = {})
  pipeThrough({ writable, readable },
              { preventClose, preventAbort, preventCancel, signal } = {})
  pipeTo(dest, { preventClose, preventAbort, preventCancel, signal } = {})
  tee()

  [@@asyncIterator]({ preventCancel } = {})
}

3.2.2. Internal slots

Instances of ReadableStream are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[disturbed]] A boolean flag set to true when the stream has been read from or canceled
[[readableStreamController]] A ReadableStreamDefaultController or ReadableByteStreamController created with the ability to control the state and queue of this stream; also used for the IsReadableStream brand check
[[reader]] A ReadableStreamDefaultReader or ReadableStreamBYOBReader instance, if the stream is locked to a reader, or undefined if it is not
[[state]] A string containing the stream’s current state, used internally; one of "readable", "closed", or "errored"
[[storedError]] A value indicating how the stream failed, to be given as a failure reason or exception when trying to operate on an errored stream

3.2.3. new ReadableStream(underlyingSource = {}, strategy = {})

The underlyingSource argument represents the underlying source, as described in § 3.2.4 Underlying source API.

The strategy argument represents the stream’s queuing strategy, as described in § 6.1.1 The queuing strategy API. If it is not provided, the default behavior will be the same as a CountQueuingStrategy with a high water mark of 1.

1. Perform ! InitializeReadableStream(*this*).
1. Let _size_ be ? GetV(_strategy_, `"size"`).
1. Let _highWaterMark_ be ? GetV(_strategy_, `"highWaterMark"`).
1. Let _type_ be ? GetV(_underlyingSource_, `"type"`).
1. Let _typeString_ be ? ToString(_type_).
1. If _typeString_ is `"bytes"`,
   1. If _size_ is not *undefined*, throw a *RangeError* exception.
   1. If _highWaterMark_ is *undefined*, let _highWaterMark_ be *0*.
   1. Set _highWaterMark_ to ? ValidateAndNormalizeHighWaterMark(_highWaterMark_).
   1. Perform ? SetUpReadableByteStreamControllerFromUnderlyingSource(*this*, _underlyingSource_, _highWaterMark_).
1. Otherwise, if _type_ is *undefined*,
   1. Let _sizeAlgorithm_ be ? MakeSizeAlgorithmFromSizeFunction(_size_).
   1. If _highWaterMark_ is *undefined*, let _highWaterMark_ be *1*.
   1. Set _highWaterMark_ to ? ValidateAndNormalizeHighWaterMark(_highWaterMark_).
   1. Perform ? SetUpReadableStreamDefaultControllerFromUnderlyingSource(*this*, _underlyingSource_, _highWaterMark_, _sizeAlgorithm_).
1. Otherwise, throw a *RangeError* exception.

3.2.4. Underlying source API

This section is non-normative.

The ReadableStream() constructor accepts as its first argument a JavaScript object representing the underlying source. Such objects can contain any of the following properties:

start(controller)

A function that is called immediately during creation of the ReadableStream.

Typically this is used to adapt a push source by setting up relevant event listeners, as in the example of § 8.1 A readable stream with an underlying push source (no backpressure support), or to acquire access to a pull source, as in § 8.4 A readable stream with an underlying pull source.

If this setup process is asynchronous, it can return a promise to signal success or failure; a rejected promise will error the stream. Any thrown exceptions will be re-thrown by the ReadableStream() constructor.

pull(controller)

A function that is called whenever the stream’s internal queue of chunks becomes not full, i.e. whenever the queue’s desired size becomes positive. Generally, it will be called repeatedly until the queue reaches its high water mark (i.e. until the desired size becomes non-positive).

For push sources, this can be used to resume a paused flow, as in § 8.2 A readable stream with an underlying push source and backpressure support. For pull sources, it is used to acquire new chunks to enqueue into the stream, as in § 8.4 A readable stream with an underlying pull source.

This function will not be called until start() successfully completes. Additionally, it will only be called repeatedly if it enqueues at least one chunk or fulfills a BYOB request; a no-op pull() implementation will not be continually called.

If the function returns a promise, then it will not be called again until that promise fulfills. (If the promise rejects, the stream will become errored.) This is mainly used in the case of pull sources, where the promise returned represents the process of acquiring a new chunk. Throwing an exception is treated the same as returning a rejected promise.
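A sketch of such a promise-returning pull source (readNextChunk() is hypothetical, standing in for an asynchronous read from e.g. a file handle):

```javascript
const simulatedData = ["alpha", "beta"]; // stand-in backing data
function readNextChunk() {
  // Hypothetical async read; resolves with undefined once exhausted.
  return Promise.resolve(simulatedData.shift());
}

const readable = new ReadableStream({
  async pull(controller) {
    // pull() is not called again until the returned promise settles.
    const chunk = await readNextChunk();
    if (chunk === undefined) {
      controller.close();
    } else {
      controller.enqueue(chunk);
    }
  }
});
```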

cancel(reason)

A function that is called whenever the consumer cancels the stream, via stream.cancel(), defaultReader.cancel(), or byobReader.cancel(). It takes as its argument the same value as was passed to those methods by the consumer.

Readable streams can additionally be canceled under certain conditions during piping; see the definition of the pipeTo() method for more details.

For all streams, this is generally used to release access to the underlying resource; see for example § 8.1 A readable stream with an underlying push source (no backpressure support).

If the shutdown process is asynchronous, it can return a promise to signal success or failure; the result will be communicated via the return value of the cancel() method that was called. Additionally, a rejected promise will error the stream, instead of letting it close. Throwing an exception is treated the same as returning a rejected promise.

type (byte streams only)

Can be set to "bytes" to signal that the constructed ReadableStream is a readable byte stream. This ensures that the resulting ReadableStream will successfully be able to vend BYOB readers via its getReader() method. It also affects the controller argument passed to the start() and pull() methods; see below.

For an example of how to set up a readable byte stream, including using the different controller interface, see § 8.3 A readable byte stream with an underlying push source (no backpressure support).

Setting any value other than "bytes" or undefined will cause the ReadableStream() constructor to throw an exception.

autoAllocateChunkSize (byte streams only)

Can be set to a positive integer to cause the implementation to automatically allocate buffers for the underlying source code to write into. In this case, when a consumer is using a default reader, the stream implementation will automatically allocate an ArrayBuffer of the given size, so that controller.byobRequest is always present, as if the consumer was using a BYOB reader.

This is generally used to cut down on the amount of code needed to handle consumers that use default readers, as can be seen by comparing § 8.3 A readable byte stream with an underlying push source (no backpressure support) without auto-allocation to § 8.5 A readable byte stream with an underlying pull source with auto-allocation.

The type of the controller argument passed to the start() and pull() methods depends on the value of the type option. If type is set to undefined (including via omission), controller will be a ReadableStreamDefaultController. If it’s set to "bytes", controller will be a ReadableByteStreamController.
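The following sketch shows a byte stream with auto-allocation: even though the consumer uses a default reader, controller.byobRequest is populated with an automatically allocated buffer for the source to fill.

```javascript
const byteStream = new ReadableStream({
  type: "bytes",
  autoAllocateChunkSize: 16,
  pull(controller) {
    // byobRequest.view is a Uint8Array over the auto-allocated buffer.
    const view = controller.byobRequest.view;
    view[0] = 42;
    controller.byobRequest.respond(1); // one byte was written
    controller.close();
  }
});
// A default reader's read() now yields a one-byte Uint8Array containing 42.
```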

3.2.5. Properties of the ReadableStream prototype

3.2.5.1. get locked
The locked getter returns whether or not the readable stream is locked to a reader.
1. If ! IsReadableStream(*this*) is *false*, throw a *TypeError* exception.
1. Return ! IsReadableStreamLocked(*this*).
3.2.5.2. cancel(reason)
The cancel method cancels the stream, signaling a loss of interest in the stream by a consumer. The supplied reason argument will be given to the underlying source’s cancel() method, which might or might not use it.
1. If ! IsReadableStream(*this*) is *false*, return a promise rejected with a *TypeError* exception.
1. If ! IsReadableStreamLocked(*this*) is *true*, return a promise rejected with a *TypeError* exception.
1. Return ! ReadableStreamCancel(*this*, _reason_).
3.2.5.3. getIterator({ preventCancel } = {})
The getIterator method returns an async iterator which can be used to consume the stream. The return() method of this iterator object will, by default, cancel the stream; it will also release the reader.
1. If ! IsReadableStream(*this*) is *false*, throw a *TypeError* exception.
1. Let _reader_ be ? AcquireReadableStreamDefaultReader(*this*).
1. Let _iterator_ be ! ObjectCreate(`ReadableStreamAsyncIteratorPrototype`).
1. Set _iterator_.[[asyncIteratorReader]] to _reader_.
1. Set _iterator_.[[preventCancel]] to ! ToBoolean(_preventCancel_).
1. Return _iterator_.
3.2.5.4. getReader({ mode } = {})
The getReader method creates a reader of the type specified by the mode option and locks the stream to the new reader. While the stream is locked, no other reader can be acquired until this one is released.

This functionality is especially useful for creating abstractions that desire the ability to consume a stream in its entirety. By getting a reader for the stream, you can ensure nobody else can interleave reads with yours or cancel the stream, which would interfere with your abstraction.

When mode is undefined, the method creates a default reader (an instance of ReadableStreamDefaultReader). The reader provides the ability to directly read individual chunks from the stream via the reader’s read() method.

When mode is "byob", the getReader method creates a BYOB reader (an instance of ReadableStreamBYOBReader). This feature only works on readable byte streams, i.e. streams which were constructed specifically with the ability to handle "bring your own buffer" reading. The reader provides the ability to directly read individual chunks from the stream via the reader’s read() method, into developer-supplied buffers, allowing more precise control over allocation.

1. If ! IsReadableStream(*this*) is *false*, throw a *TypeError* exception.
1. If _mode_ is *undefined*, return ? AcquireReadableStreamDefaultReader(*this*, *true*).
1. Set _mode_ to ? ToString(_mode_).
1. If _mode_ is `"byob"`, return ? AcquireReadableStreamBYOBReader(*this*, *true*).
1. Throw a *RangeError* exception.
An example of an abstraction that might benefit from using a reader is a function like the following, which is designed to read an entire readable stream into memory as an array of chunks.
function readAllChunks(readableStream) {
  const reader = readableStream.getReader();
  const chunks = [];

  return pump();

  function pump() {
    return reader.read().then(({ value, done }) => {
      if (done) {
        return chunks;
      }

      chunks.push(value);
      return pump();
    });
  }
}

Note how the first thing it does is obtain a reader, and from then on it uses the reader exclusively. This ensures that no other consumer can interfere with the stream, either by reading chunks or by canceling the stream.

3.2.5.5. pipeThrough({ writable, readable }, { preventClose, preventAbort, preventCancel, signal } = {})
The pipeThrough method provides a convenient, chainable way of piping this readable stream through a transform stream (or any other { writable, readable } pair). It simply pipes the stream into the writable side of the supplied pair, and returns the readable side for further use.

Piping a stream will lock it for the duration of the pipe, preventing any other consumer from acquiring a reader.

1. If ! IsReadableStream(*this*) is *false*, throw a *TypeError* exception.
1. If ! IsWritableStream(_writable_) is *false*, throw a *TypeError* exception.
1. If ! IsReadableStream(_readable_) is *false*, throw a *TypeError* exception.
1. Set _preventClose_ to ! ToBoolean(_preventClose_), set _preventAbort_ to ! ToBoolean(_preventAbort_), and set _preventCancel_ to ! ToBoolean(_preventCancel_).
1. If _signal_ is not *undefined*, and _signal_ is not an instance of the `AbortSignal` interface, throw a *TypeError* exception.
1. If ! IsReadableStreamLocked(*this*) is *true*, throw a *TypeError* exception.
1. If ! IsWritableStreamLocked(_writable_) is *true*, throw a *TypeError* exception.
1. Let _promise_ be ! ReadableStreamPipeTo(*this*, _writable_, _preventClose_, _preventAbort_, _preventCancel_, _signal_).
1. Set _promise_.[[PromiseIsHandled]] to *true*.
1. Return _readable_.
A typical example of constructing a pipe chain using pipeThrough(transform, options) would look like
httpResponseBody
  .pipeThrough(decompressorTransform)
  .pipeThrough(ignoreNonImageFilesTransform)
  .pipeTo(mediaGallery);
3.2.5.6. pipeTo(dest, { preventClose, preventAbort, preventCancel, signal } = {})
The pipeTo method pipes this readable stream to a given writable stream. The way in which the piping process behaves under various error conditions can be customized with a number of passed options. It returns a promise that fulfills when the piping process completes successfully, or rejects if any errors were encountered.

Piping a stream will lock it for the duration of the pipe, preventing any other consumer from acquiring a reader.

Errors and closures of the source and destination streams propagate as follows:

An error in the source readable stream will abort the destination writable stream, unless preventAbort is truthy.

An error in the destination writable stream will cancel the source readable stream, unless preventCancel is truthy.

Closure of the source readable stream will close the destination writable stream, unless preventClose is truthy.

If the destination writable stream starts out closed or closing, the source readable stream will be canceled, unless preventCancel is truthy.

The signal option can be set to an AbortSignal to allow aborting an ongoing pipe operation via the corresponding AbortController. In this case, the source readable stream will be canceled, and the destination writable stream aborted, unless the respective options preventCancel or preventAbort are set.

1. If ! IsReadableStream(*this*) is *false*, return a promise rejected with a *TypeError* exception.
1. If ! IsWritableStream(_dest_) is *false*, return a promise rejected with a *TypeError* exception.
1. Set _preventClose_ to ! ToBoolean(_preventClose_), set _preventAbort_ to ! ToBoolean(_preventAbort_), and set _preventCancel_ to ! ToBoolean(_preventCancel_).
1. If _signal_ is not *undefined*, and _signal_ is not an instance of the `AbortSignal` interface, return a promise rejected with a *TypeError* exception.
1. If ! IsReadableStreamLocked(*this*) is *true*, return a promise rejected with a *TypeError* exception.
1. If ! IsWritableStreamLocked(_dest_) is *true*, return a promise rejected with a *TypeError* exception.
1. Return ! ReadableStreamPipeTo(*this*, _dest_, _preventClose_, _preventAbort_, _preventCancel_, _signal_).
3.2.5.7. tee()
The tee method tees this readable stream, returning a two-element array containing the two resulting branches as new ReadableStream instances.

Teeing a stream will lock it, preventing any other consumer from acquiring a reader. To cancel the stream, cancel both of the resulting branches; a composite cancellation reason will then be propagated to the stream’s underlying source.

Note that the chunks seen in each branch will be the same object. If the chunks are not immutable, this could allow interference between the two branches.

1. If ! IsReadableStream(*this*) is *false*, throw a *TypeError* exception.
2. Let _branches_ be ? ReadableStreamTee(*this*, *false*).
3. Return ! CreateArrayFromList(_branches_).
Teeing a stream is most useful when you wish to let two independent consumers read from the stream in parallel, perhaps even at different speeds. For example, given a writable stream cacheEntry representing an on-disk file, and another writable stream httpRequestBody representing an upload to a remote server, you could pipe the same readable stream to both destinations at once:
const [forLocal, forRemote] = readableStream.tee();

Promise.all([
  forLocal.pipeTo(cacheEntry),
  forRemote.pipeTo(httpRequestBody)
])
.then(() => console.log("Saved the stream to the cache and also uploaded it!"))
.catch(e => console.error("Either caching or uploading failed: ", e));
3.2.5.8. [@@asyncIterator]({ preventCancel } = {})

The @@asyncIterator method is an alias of getIterator().

The initial value of the @@asyncIterator method is the same function object as the initial value of the getIterator() method.
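In practice, the @@asyncIterator method is what makes readable streams consumable with `for`-`await`-`of`; a minimal sketch, assuming an engine that implements this section:

```js
// Consuming a readable stream via for-await-of, which goes through the
// stream's @@asyncIterator method.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue("a");
    controller.enqueue("b");
    controller.close();
  }
});

const collected = (async () => {
  const chunks = [];
  for await (const chunk of stream) {
    chunks.push(chunk);
  }
  return chunks; // ["a", "b"]
})();
```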

3.3. ReadableStreamAsyncIteratorPrototype

ReadableStreamAsyncIteratorPrototype is an ordinary object that is used by getIterator() to construct the objects it returns. Instances of ReadableStreamAsyncIteratorPrototype implement the AsyncIterator abstract interface from the JavaScript specification. [ECMASCRIPT]

The ReadableStreamAsyncIteratorPrototype object must have its [[Prototype]] internal slot set to %AsyncIteratorPrototype%.

3.3.1. Internal slots

Objects created by getIterator(), using ReadableStreamAsyncIteratorPrototype as their prototype, are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[asyncIteratorReader]] A ReadableStreamDefaultReader instance
[[preventCancel]] A boolean value indicating if the stream will be canceled when the async iterator’s return() method is called

3.3.2. next()

1. If ! IsReadableStreamAsyncIterator(*this*) is *false*, return a promise rejected with a *TypeError* exception.
2. Let _reader_ be *this*.[[asyncIteratorReader]].
3. If _reader_.[[ownerReadableStream]] is *undefined*, return a promise rejected with a *TypeError* exception.
4. Return the result of reacting to ! ReadableStreamDefaultReaderRead(_reader_) with the following fulfillment steps given the argument _result_:
   1. Assert: Type(_result_) is Object.
   2. Let _done_ be ! Get(_result_, `"done"`).
   3. Assert: Type(_done_) is Boolean.
   4. If _done_ is *true*, perform ! ReadableStreamReaderGenericRelease(_reader_).
   5. Let _value_ be ! Get(_result_, `"value"`).
   6. Return ! ReadableStreamCreateReadResult(_value_, _done_, *true*).

3.3.3. return( value )

1. If ! IsReadableStreamAsyncIterator(*this*) is *false*, return a promise rejected with a *TypeError* exception.
2. Let _reader_ be *this*.[[asyncIteratorReader]].
3. If _reader_.[[ownerReadableStream]] is *undefined*, return a promise rejected with a *TypeError* exception.
4. If _reader_.[[readRequests]] is not empty, return a promise rejected with a *TypeError* exception.
5. If *this*.[[preventCancel]] is *false*, then:
   1. Let _result_ be ! ReadableStreamReaderGenericCancel(_reader_, _value_).
   2. Perform ! ReadableStreamReaderGenericRelease(_reader_).
   3. Return the result of reacting to _result_ with a fulfillment step that returns ! ReadableStreamCreateReadResult(_value_, *true*, *true*).
6. Perform ! ReadableStreamReaderGenericRelease(_reader_).
7. Return a promise resolved with ! ReadableStreamCreateReadResult(_value_, *true*, *true*).

3.4. General readable stream abstract operations

The following abstract operations, unlike most in this specification, are meant to be generally useful by other specifications, instead of just being part of the implementation of this spec’s classes.

3.4.1. AcquireReadableStreamBYOBReader ( stream[, forAuthorCode ] )

This abstract operation is meant to be called from other specifications that may wish to acquire a BYOB reader for a given stream.

1. If _forAuthorCode_ was not passed, set it to *false*.
2. Let _reader_ be ? Construct(`ReadableStreamBYOBReader`, « _stream_ »).
3. Set _reader_.[[forAuthorCode]] to _forAuthorCode_.
4. Return _reader_.

3.4.2. AcquireReadableStreamDefaultReader ( stream[, forAuthorCode ] )

This abstract operation is meant to be called from other specifications that may wish to acquire a default reader for a given stream.

Other specifications ought to leave forAuthorCode as its default value of false, unless they are planning to directly expose the resulting { value, done } object to authors. See the note regarding ReadableStreamCreateReadResult for more information.

1. If _forAuthorCode_ was not passed, set it to *false*.
2. Let _reader_ be ? Construct(`ReadableStreamDefaultReader`, « _stream_ »).
3. Set _reader_.[[forAuthorCode]] to _forAuthorCode_.
4. Return _reader_.

3.4.3. CreateReadableStream ( startAlgorithm, pullAlgorithm, cancelAlgorithm [, highWaterMark [, sizeAlgorithm ] ] )

This abstract operation is meant to be called from other specifications that wish to create ReadableStream instances. The pullAlgorithm and cancelAlgorithm algorithms must return promises; if supplied, sizeAlgorithm must be an algorithm accepting chunk objects and returning a number; and if supplied, highWaterMark must be a non-negative, non-NaN number.

CreateReadableStream throws an exception if and only if the supplied startAlgorithm throws.

1. If _highWaterMark_ was not passed, set it to *1*.
2. If _sizeAlgorithm_ was not passed, set it to an algorithm that returns *1*.
3. Assert: ! IsNonNegativeNumber(_highWaterMark_) is *true*.
4. Let _stream_ be ObjectCreate(the original value of `ReadableStream`'s `prototype` property).
5. Perform ! InitializeReadableStream(_stream_).
6. Let _controller_ be ObjectCreate(the original value of `ReadableStreamDefaultController`'s `prototype` property).
7. Perform ? SetUpReadableStreamDefaultController(_stream_, _controller_, _startAlgorithm_, _pullAlgorithm_, _cancelAlgorithm_, _highWaterMark_, _sizeAlgorithm_).
8. Return _stream_.
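For orientation, the author-facing `ReadableStream` constructor wires together the same pieces that CreateReadableStream does; this analogy is illustrative, not a normative equivalence (the constructor performs additional argument conversions):

```js
// Author-facing analogue of CreateReadableStream: start/pull/cancel algorithms,
// plus the same defaults of highWaterMark = 1 and a size algorithm returning 1.
const stream = new ReadableStream(
  {
    start(controller) { /* startAlgorithm */ },
    pull(controller) {
      controller.enqueue("chunk");            // pullAlgorithm
    },
    cancel(reason) { /* cancelAlgorithm */ }
  },
  { highWaterMark: 1, size: chunk => 1 }      // the defaults used above
);
```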

3.4.4. CreateReadableByteStream ( startAlgorithm, pullAlgorithm, cancelAlgorithm [, highWaterMark [, autoAllocateChunkSize ] ] )

This abstract operation is meant to be called from other specifications that wish to create ReadableStream instances of type "bytes". The pullAlgorithm and cancelAlgorithm algorithms must return promises; if supplied, highWaterMark must be a non-negative, non-NaN number, and if supplied, autoAllocateChunkSize must be a positive integer.

CreateReadableByteStream throws an exception if and only if the supplied startAlgorithm throws.

1. If _highWaterMark_ was not passed, set it to *0*.
2. If _autoAllocateChunkSize_ was not passed, set it to *undefined*.
3. Assert: ! IsNonNegativeNumber(_highWaterMark_) is *true*.
4. If _autoAllocateChunkSize_ is not *undefined*,
   1. Assert: ! IsInteger(_autoAllocateChunkSize_) is *true*.
   2. Assert: _autoAllocateChunkSize_ is positive.
5. Let _stream_ be ObjectCreate(the original value of `ReadableStream`'s `prototype` property).
6. Perform ! InitializeReadableStream(_stream_).
7. Let _controller_ be ObjectCreate(the original value of `ReadableByteStreamController`'s `prototype` property).
8. Perform ? SetUpReadableByteStreamController(_stream_, _controller_, _startAlgorithm_, _pullAlgorithm_, _cancelAlgorithm_, _highWaterMark_, _autoAllocateChunkSize_).
9. Return _stream_.

3.4.5. InitializeReadableStream ( stream )

1. Set _stream_.[[state]] to `"readable"`.
2. Set _stream_.[[reader]] and _stream_.[[storedError]] to *undefined*.
3. Set _stream_.[[disturbed]] to *false*.

3.4.6. IsReadableStream ( x )

1. If Type(_x_) is not Object, return *false*.
2. If _x_ does not have a [[readableStreamController]] internal slot, return *false*.
3. Return *true*.

3.4.7. IsReadableStreamDisturbed ( stream )

This abstract operation is meant to be called from other specifications that may wish to query whether or not a readable stream has ever been read from or canceled.

1. Assert: ! IsReadableStream(_stream_) is *true*.
2. Return _stream_.[[disturbed]].

3.4.8. IsReadableStreamLocked ( stream )

This abstract operation is meant to be called from other specifications that may wish to query whether or not a readable stream is locked to a reader.

1. Assert: ! IsReadableStream(_stream_) is *true*.
2. If _stream_.[[reader]] is *undefined*, return *false*.
3. Return *true*.

3.4.9. IsReadableStreamAsyncIterator ( x )

1. If Type(_x_) is not Object, return *false*.
2. If _x_ does not have a [[asyncIteratorReader]] internal slot, return *false*.
3. Return *true*.

3.4.10. ReadableStreamTee ( stream, cloneForBranch2 )

This abstract operation is meant to be called from other specifications that may wish to tee a given readable stream.

The second argument, cloneForBranch2, governs whether or not the data from the original stream will be cloned (using HTML’s serializable objects framework) before appearing in the second of the returned branches. This is useful for scenarios where both branches are to be consumed in such a way that they might otherwise interfere with each other, such as by transferring their chunks. However, it does introduce a noticeable asymmetry between the two branches, and limits the possible chunks to serializable ones. [HTML]

In this standard, ReadableStreamTee is always called with cloneForBranch2 set to false; other specifications may pass true.

1. Assert: ! IsReadableStream(_stream_) is *true*.
2. Assert: Type(_cloneForBranch2_) is Boolean.
3. Let _reader_ be ? AcquireReadableStreamDefaultReader(_stream_).
4. Let _reading_ be *false*.
5. Let _canceled1_ be *false*.
6. Let _canceled2_ be *false*.
7. Let _reason1_ be *undefined*.
8. Let _reason2_ be *undefined*.
9. Let _branch1_ be *undefined*.
10. Let _branch2_ be *undefined*.
11. Let _cancelPromise_ be a new promise.
12. Let _pullAlgorithm_ be the following steps:
    1. If _reading_ is *true*, return a promise resolved with *undefined*.
    2. Set _reading_ to *true*.
    3. Let _readPromise_ be the result of reacting to ! ReadableStreamDefaultReaderRead(_reader_) with the following fulfillment steps given the argument _result_:
       1. Set _reading_ to *false*.
       2. Assert: Type(_result_) is Object.
       3. Let _done_ be ! Get(_result_, `"done"`).
       4. Assert: Type(_done_) is Boolean.
       5. If _done_ is *true*,
          1. If _canceled1_ is *false*, perform ! ReadableStreamDefaultControllerClose(_branch1_.[[readableStreamController]]).
          2. If _canceled2_ is *false*, perform ! ReadableStreamDefaultControllerClose(_branch2_.[[readableStreamController]]).
          3. Return.
       6. Let _value_ be ! Get(_result_, `"value"`).
       7. Let _value1_ and _value2_ be _value_.
       8. If _canceled2_ is *false* and _cloneForBranch2_ is *true*, set _value2_ to ? StructuredDeserialize(? StructuredSerialize(_value2_), the current Realm Record).
       9. If _canceled1_ is *false*, perform ? ReadableStreamDefaultControllerEnqueue(_branch1_.[[readableStreamController]], _value1_).
       10. If _canceled2_ is *false*, perform ? ReadableStreamDefaultControllerEnqueue(_branch2_.[[readableStreamController]], _value2_).
    4. Set _readPromise_.[[PromiseIsHandled]] to *true*.
    5. Return a promise resolved with *undefined*.
13. Let _cancel1Algorithm_ be the following steps, taking a _reason_ argument:
    1. Set _canceled1_ to *true*.
    2. Set _reason1_ to _reason_.
    3. If _canceled2_ is *true*,
       1. Let _compositeReason_ be ! CreateArrayFromList(« _reason1_, _reason2_ »).
       2. Let _cancelResult_ be ! ReadableStreamCancel(_stream_, _compositeReason_).
       3. Resolve _cancelPromise_ with _cancelResult_.
    4. Return _cancelPromise_.
14. Let _cancel2Algorithm_ be the following steps, taking a _reason_ argument:
    1. Set _canceled2_ to *true*.
    2. Set _reason2_ to _reason_.
    3. If _canceled1_ is *true*,
       1. Let _compositeReason_ be ! CreateArrayFromList(« _reason1_, _reason2_ »).
       2. Let _cancelResult_ be ! ReadableStreamCancel(_stream_, _compositeReason_).
       3. Resolve _cancelPromise_ with _cancelResult_.
    4. Return _cancelPromise_.
15. Let _startAlgorithm_ be an algorithm that returns *undefined*.
16. Set _branch1_ to ! CreateReadableStream(_startAlgorithm_, _pullAlgorithm_, _cancel1Algorithm_).
17. Set _branch2_ to ! CreateReadableStream(_startAlgorithm_, _pullAlgorithm_, _cancel2Algorithm_).
18. Upon rejection of _reader_.[[closedPromise]] with reason _r_,
    1. Perform ! ReadableStreamDefaultControllerError(_branch1_.[[readableStreamController]], _r_).
    2. Perform ! ReadableStreamDefaultControllerError(_branch2_.[[readableStreamController]], _r_).
19. Return « _branch1_, _branch2_ ».

3.4.11. ReadableStreamPipeTo ( source, dest, preventClose, preventAbort, preventCancel, signal )

1. Assert: ! IsReadableStream(_source_) is *true*.
2. Assert: ! IsWritableStream(_dest_) is *true*.
3. Assert: Type(_preventClose_) is Boolean, Type(_preventAbort_) is Boolean, and Type(_preventCancel_) is Boolean.
4. Assert: _signal_ is *undefined* or _signal_ is an instance of the `AbortSignal` interface.
5. Assert: ! IsReadableStreamLocked(_source_) is *false*.
6. Assert: ! IsWritableStreamLocked(_dest_) is *false*.
7. If ! IsReadableByteStreamController(_source_.[[readableStreamController]]) is *true*, let _reader_ be either ! AcquireReadableStreamBYOBReader(_source_) or ! AcquireReadableStreamDefaultReader(_source_), at the user agent’s discretion.
8. Otherwise, let _reader_ be ! AcquireReadableStreamDefaultReader(_source_).
9. Let _writer_ be ! AcquireWritableStreamDefaultWriter(_dest_).
10. Set _source_.[[disturbed]] to *true*.
11. Let _shuttingDown_ be *false*.
12. Let _promise_ be a new promise.
13. If _signal_ is not *undefined*,
    1. Let _abortAlgorithm_ be the following steps:
       1. Let _error_ be a new "`AbortError`" `DOMException`.
       2. Let _actions_ be an empty ordered set.
       3. If _preventAbort_ is *false*, append the following action to _actions_:
          1. If _dest_.[[state]] is `"writable"`, return ! WritableStreamAbort(_dest_, _error_).
          2. Otherwise, return a promise resolved with *undefined*.
       4. If _preventCancel_ is *false*, append the following action to _actions_:
          1. If _source_.[[state]] is `"readable"`, return ! ReadableStreamCancel(_source_, _error_).
          2. Otherwise, return a promise resolved with *undefined*.
       5. Shutdown with an action consisting of getting a promise to wait for all of the actions in _actions_, and with _error_.
    2. If _signal_’s aborted flag is set, perform _abortAlgorithm_ and return _promise_.
    3. Add _abortAlgorithm_ to _signal_.
14. In parallel but not really; see #905, using _reader_ and _writer_, read all chunks from _source_ and write them to _dest_. Due to the locking provided by the reader and writer, the exact manner in which this happens is not observable to author code, and so there is flexibility in how this is done. The following constraints apply regardless of the exact algorithm used:
    * Public API must not be used: while reading or writing, or performing any of the operations below, the JavaScript-modifiable reader, writer, and stream APIs (i.e. methods on the appropriate prototypes) must not be used. Instead, the streams must be manipulated directly.
    * Backpressure must be enforced:
      * While WritableStreamDefaultWriterGetDesiredSize(_writer_) is ≤ *0* or is *null*, the user agent must not read from _reader_.
      * If _reader_ is a BYOB reader, WritableStreamDefaultWriterGetDesiredSize(_writer_) should be used as a basis to determine the size of the chunks read from _reader_.

It’s frequently inefficient to read chunks that are too small or too large. Other information might be factored in to determine the optimal chunk size.

* Reads or writes should not be delayed for reasons other than these backpressure signals.

An implementation that waits for each write to successfully complete before proceeding to the next read/write operation violates this recommendation. In doing so, such an implementation makes the internal queue of _dest_ useless, as it ensures _dest_ always contains at most one queued chunk.

* Shutdown must stop activity: if _shuttingDown_ becomes *true*, the user agent must not initiate further reads from _reader_, and must only perform writes of already-read chunks, as described below. In particular, the user agent must check the below conditions before performing any reads or writes, since they might lead to immediate shutdown.
* Error and close states must be propagated: the following conditions must be applied in order.
  1. Errors must be propagated forward: if _source_.[[state]] is or becomes `"errored"`, then
     1. If _preventAbort_ is *false*, shutdown with an action of ! WritableStreamAbort(_dest_, _source_.[[storedError]]) and with _source_.[[storedError]].
     2. Otherwise, shutdown with _source_.[[storedError]].
  2. Errors must be propagated backward: if _dest_.[[state]] is or becomes `"errored"`, then
     1. If _preventCancel_ is *false*, shutdown with an action of ! ReadableStreamCancel(_source_, _dest_.[[storedError]]) and with _dest_.[[storedError]].
     2. Otherwise, shutdown with _dest_.[[storedError]].
  3. Closing must be propagated forward: if _source_.[[state]] is or becomes `"closed"`, then
     1. If _preventClose_ is *false*, shutdown with an action of ! WritableStreamDefaultWriterCloseWithErrorPropagation(_writer_).
     2. Otherwise, shutdown.
  4. Closing must be propagated backward: if ! WritableStreamCloseQueuedOrInFlight(_dest_) is *true* or _dest_.[[state]] is `"closed"`, then
     1. Assert: no chunks have been read or written.
     2. Let _destClosed_ be a new *TypeError*.
     3. If _preventCancel_ is *false*, shutdown with an action of ! ReadableStreamCancel(_source_, _destClosed_) and with _destClosed_.
     4. Otherwise, shutdown with _destClosed_.
* Shutdown with an action: if any of the above requirements ask to shutdown with an action _action_, optionally with an error _originalError_, then:
  1. If _shuttingDown_ is *true*, abort these substeps.
  2. Set _shuttingDown_ to *true*.
  3. If _dest_.[[state]] is `"writable"` and ! WritableStreamCloseQueuedOrInFlight(_dest_) is *false*,
     1. If any chunks have been read but not yet written, write them to _dest_.
     2. Wait until every chunk that has been read has been written (i.e. the corresponding promises have settled).
  4. Let _p_ be the result of performing _action_.
  5. Upon fulfillment of _p_, finalize, passing along _originalError_ if it was given.
  6. Upon rejection of _p_ with reason _newError_, finalize with _newError_.
* Shutdown: if any of the above requirements or steps ask to shutdown, optionally with an error _error_, then:
  1. If _shuttingDown_ is *true*, abort these substeps.
  2. Set _shuttingDown_ to *true*.
  3. If _dest_.[[state]] is `"writable"` and ! WritableStreamCloseQueuedOrInFlight(_dest_) is *false*,
     1. If any chunks have been read but not yet written, write them to _dest_.
     2. Wait until every chunk that has been read has been written (i.e. the corresponding promises have settled).
  4. Finalize, passing along _error_ if it was given.
* Finalize: both forms of shutdown will eventually ask to finalize, optionally with an error _error_, which means to perform the following steps:
  1. Perform ! WritableStreamDefaultWriterRelease(_writer_).
  2. Perform ! ReadableStreamReaderGenericRelease(_reader_).
  3. If _signal_ is not *undefined*, remove _abortAlgorithm_ from _signal_.
  4. If _error_ was given, reject _promise_ with _error_.
  5. Otherwise, resolve _promise_ with *undefined*.

15. Return _promise_.

Various abstract operations performed here include object creation (often of promises), which usually would require specifying a realm for the created object. However, because of the locking, none of these objects can be observed by author code. As such, the realm used to create them does not matter.

3.5. The interface between readable streams and controllers

In terms of specification factoring, the way that the ReadableStream class encapsulates the behavior of both simple readable streams and readable byte streams into a single class is by centralizing most of the potentially-varying logic inside the two controller classes, ReadableStreamDefaultController and ReadableByteStreamController. Those classes define most of the stateful internal slots and abstract operations for how a stream’s internal queue is managed and how it interfaces with its underlying source or underlying byte source.

Each controller class defines two internal methods, which are called by the ReadableStream algorithms:

[[CancelSteps]](reason)
The controller’s steps that run in reaction to the stream being canceled, used to clean up the state stored in the controller and inform the underlying source.
[[PullSteps]]()
The controller’s steps that run when a default reader is read from, used to pull from the controller any queued chunks, or pull from the underlying source to get more chunks.

(These are defined as internal methods, instead of as abstract operations, so that they can be called polymorphically by the ReadableStream algorithms, without having to branch on which type of controller is present.)

The rest of this section concerns abstract operations that go in the other direction: they are used by the controller implementations to affect their associated ReadableStream object. This translates internal state changes of the controller into developer-facing results visible through the ReadableStream's public API.

3.5.1. ReadableStreamAddReadIntoRequest ( stream )

1. Assert: ! IsReadableStreamBYOBReader(_stream_.[[reader]]) is *true*.
2. Assert: _stream_.[[state]] is `"readable"` or `"closed"`.
3. Let _promise_ be a new promise.
4. Let _readIntoRequest_ be Record {[[promise]]: _promise_}.
5. Append _readIntoRequest_ as the last element of _stream_.[[reader]].[[readIntoRequests]].
6. Return _promise_.

3.5.2. ReadableStreamAddReadRequest ( stream )

1. Assert: ! IsReadableStreamDefaultReader(_stream_.[[reader]]) is *true*.
2. Assert: _stream_.[[state]] is `"readable"`.
3. Let _promise_ be a new promise.
4. Let _readRequest_ be Record {[[promise]]: _promise_}.
5. Append _readRequest_ as the last element of _stream_.[[reader]].[[readRequests]].
6. Return _promise_.

3.5.3. ReadableStreamCancel ( stream, reason )

1. Set _stream_.[[disturbed]] to *true*.
2. If _stream_.[[state]] is `"closed"`, return a promise resolved with *undefined*.
3. If _stream_.[[state]] is `"errored"`, return a promise rejected with _stream_.[[storedError]].
4. Perform ! ReadableStreamClose(_stream_).
5. Let _sourceCancelPromise_ be ! _stream_.[[readableStreamController]].[[CancelSteps]](_reason_).
6. Return the result of reacting to _sourceCancelPromise_ with a fulfillment step that returns *undefined*.

3.5.4. ReadableStreamClose ( stream )

1. Assert: _stream_.[[state]] is `"readable"`.
2. Set _stream_.[[state]] to `"closed"`.
3. Let _reader_ be _stream_.[[reader]].
4. If _reader_ is *undefined*, return.
5. If ! IsReadableStreamDefaultReader(_reader_) is *true*,
   1. Repeat for each _readRequest_ that is an element of _reader_.[[readRequests]],
      1. Resolve _readRequest_.[[promise]] with ! ReadableStreamCreateReadResult(*undefined*, *true*, _reader_.[[forAuthorCode]]).
   2. Set _reader_.[[readRequests]] to an empty List.
6. Resolve _reader_.[[closedPromise]] with *undefined*.
The case where stream.[[state]] is "closed", but stream.[[closeRequested]] is false, will happen if the stream was closed without its controller’s close method ever being called: i.e., if the stream was closed by a call to cancel(reason). In this case we allow the controller’s close method to be called and silently do nothing, since the cancelation was outside the control of the underlying source.

3.5.5. ReadableStreamCreateReadResult ( value, done, forAuthorCode )

When forAuthorCode is true, this abstract operation gives the same result as CreateIterResultObject(value, done). This provides the expected semantics when the object is to be returned from the defaultReader.read() or byobReader.read() methods.

However, resolving promises with such objects will unavoidably result in an access to Object.prototype.then. For internal use, particularly in pipeTo() and in other specifications, it is important that reads not be observable by author code—even if that author code has tampered with Object.prototype. For this reason, a false value of forAuthorCode results in an object with a null prototype, keeping promise resolution unobservable.

The underlying issue here is that reading from streams always uses promises for { value, done } objects, even in specifications. Although it is conceivable we could rephrase all of the internal algorithms to not use promises and not use JavaScript objects, and instead only package up the results into promise-for-{ value, done } when a read() method is called, this would be a large undertaking, which we have not done. See whatwg/infra#181 for more background on this subject.

1. Let _prototype_ be *null*.
2. If _forAuthorCode_ is *true*, set _prototype_ to %ObjectPrototype%.
3. Assert: Type(_done_) is Boolean.
4. Let _obj_ be ObjectCreate(_prototype_).
5. Perform CreateDataProperty(_obj_, `"value"`, _value_).
6. Perform CreateDataProperty(_obj_, `"done"`, _done_).
7. Return _obj_.
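The observable difference motivating the null prototype can be demonstrated directly; this sketch (illustrative only, and it temporarily tampers with `Object.prototype`) shows that resolving a promise with an ordinary object consults the inherited `then` property, while a null-prototype object like an internal read result does not:

```js
// Resolving a promise with an object triggers a lookup of its `then` property,
// which for ordinary objects reaches Object.prototype.
const result = (async () => {
  let accessed = false;
  Object.defineProperty(Object.prototype, "then", {
    configurable: true,
    get() { accessed = true; return undefined; }
  });

  await Promise.resolve({ value: 1, done: false }); // inherited getter runs
  const ordinaryObserved = accessed;

  accessed = false;
  const internal = Object.create(null);             // like forAuthorCode = false
  internal.value = 1;
  internal.done = false;
  await Promise.resolve(internal);                  // nothing inherited to consult
  const nullProtoObserved = accessed;

  delete Object.prototype.then;                     // undo the tampering
  return [ordinaryObserved, nullProtoObserved];     // [true, false]
})();
```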

3.5.6. ReadableStreamError ( stream, e )

1. Assert: ! IsReadableStream(_stream_) is *true*.
2. Assert: _stream_.[[state]] is `"readable"`.
3. Set _stream_.[[state]] to `"errored"`.
4. Set _stream_.[[storedError]] to _e_.
5. Let _reader_ be _stream_.[[reader]].
6. If _reader_ is *undefined*, return.
7. If ! IsReadableStreamDefaultReader(_reader_) is *true*,
   1. Repeat for each _readRequest_ that is an element of _reader_.[[readRequests]],
      1. Reject _readRequest_.[[promise]] with _e_.
   2. Set _reader_.[[readRequests]] to a new empty List.
8. Otherwise,
   1. Assert: ! IsReadableStreamBYOBReader(_reader_).
   2. Repeat for each _readIntoRequest_ that is an element of _reader_.[[readIntoRequests]],
      1. Reject _readIntoRequest_.[[promise]] with _e_.
   3. Set _reader_.[[readIntoRequests]] to a new empty List.
9. Reject _reader_.[[closedPromise]] with _e_.
10. Set _reader_.[[closedPromise]].[[PromiseIsHandled]] to *true*.

3.5.7. ReadableStreamFulfillReadIntoRequest ( stream, chunk, done )

1. Let _reader_ be _stream_.[[reader]].
2. Let _readIntoRequest_ be the first element of _reader_.[[readIntoRequests]].
3. Remove _readIntoRequest_ from _reader_.[[readIntoRequests]], shifting all other elements downward (so that the second becomes the first, and so on).
4. Resolve _readIntoRequest_.[[promise]] with ! ReadableStreamCreateReadResult(_chunk_, _done_, _reader_.[[forAuthorCode]]).

3.5.8. ReadableStreamFulfillReadRequest ( stream, chunk, done )

1. Let _reader_ be _stream_.[[reader]].
2. Let _readRequest_ be the first element of _reader_.[[readRequests]].
3. Remove _readRequest_ from _reader_.[[readRequests]], shifting all other elements downward (so that the second becomes the first, and so on).
4. Resolve _readRequest_.[[promise]] with ! ReadableStreamCreateReadResult(_chunk_, _done_, _reader_.[[forAuthorCode]]).

3.5.9. ReadableStreamGetNumReadIntoRequests ( stream )

1. Return the number of elements in _stream_.[[reader]].[[readIntoRequests]].

3.5.10. ReadableStreamGetNumReadRequests ( stream )

1. Return the number of elements in _stream_.[[reader]].[[readRequests]].

3.5.11. ReadableStreamHasBYOBReader ( stream )

1. Let _reader_ be _stream_.[[reader]].
2. If _reader_ is *undefined*, return *false*.
3. If ! IsReadableStreamBYOBReader(_reader_) is *false*, return *false*.
4. Return *true*.

3.5.12. ReadableStreamHasDefaultReader ( stream )

1. Let _reader_ be _stream_.[[reader]].
2. If _reader_ is *undefined*, return *false*.
3. If ! IsReadableStreamDefaultReader(_reader_) is *false*, return *false*.
4. Return *true*.

3.6. Class ReadableStreamDefaultReader

The ReadableStreamDefaultReader class represents a default reader designed to be vended by a ReadableStream instance.

3.6.1. Class definition

This section is non-normative.

If one were to write the ReadableStreamDefaultReader class in something close to the syntax of [ECMASCRIPT], it would look like

class ReadableStreamDefaultReader {
  constructor(stream)

  get closed()

  cancel(reason)
  read()
  releaseLock()
}

3.6.2. Internal slots

Instances of ReadableStreamDefaultReader are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[closedPromise]] A promise returned by the reader’s closed getter
[[forAuthorCode]] A boolean flag indicating whether this reader is visible to author code
[[ownerReadableStream]] A ReadableStream instance that owns this reader
[[readRequests]] A List of promises returned by calls to the reader’s read() method that have not yet been resolved, due to the consumer requesting chunks sooner than they are available; also used for the IsReadableStreamDefaultReader brand check

3.6.3. new ReadableStreamDefaultReader(stream)

The ReadableStreamDefaultReader constructor is generally not meant to be used directly; instead, a stream’s getReader() method ought to be used.
1. If ! IsReadableStream(_stream_) is *false*, throw a *TypeError* exception.
2. If ! IsReadableStreamLocked(_stream_) is *true*, throw a *TypeError* exception.
3. Perform ! ReadableStreamReaderGenericInitialize(*this*, _stream_).
4. Set *this*.[[readRequests]] to a new empty List.

3.6.4. Properties of the ReadableStreamDefaultReader prototype

3.6.4.1. get closed
The closed getter returns a promise that will be fulfilled when the stream becomes closed, or rejected if the stream ever errors or the reader’s lock is released before the stream finishes closing.
1. If ! IsReadableStreamDefaultReader(*this*) is *false*, return a promise rejected with a *TypeError* exception.
2. Return *this*.[[closedPromise]].
3.6.4.2. cancel(reason)
If the reader is active, the cancel method behaves the same as that for the associated stream.
1. If ! IsReadableStreamDefaultReader(*this*) is *false*, return a promise rejected with a *TypeError* exception.
2. If *this*.[[ownerReadableStream]] is *undefined*, return a promise rejected with a *TypeError* exception.
3. Return ! ReadableStreamReaderGenericCancel(*this*, _reason_).
3.6.4.3. read()
The read method will return a promise that allows access to the next chunk from the stream’s internal queue, if available.

If reading a chunk causes the queue to become empty, more data will be pulled from the underlying source.

1. If ! IsReadableStreamDefaultReader(*this*) is *false*, return a promise rejected with a *TypeError* exception.
2. If *this*.[[ownerReadableStream]] is *undefined*, return a promise rejected with a *TypeError* exception.
3. Return ! ReadableStreamDefaultReaderRead(*this*).
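A typical consumption pattern built on read() looks like the following sketch, which drains a stream chunk by chunk until `done` becomes true:

```js
// Draining a stream with a default reader: each read() resolves with a
// { value, done } pair, and done === true signals end-of-stream.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue("a");
    controller.enqueue("b");
    controller.close();
  }
});

const reader = stream.getReader();

const drained = (async () => {
  const chunks = [];
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;                 // stream closed and queue empty
    chunks.push(value);
  }
  return chunks;                     // ["a", "b"]
})();
```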
3.6.4.4. releaseLock()
The releaseLock method releases the reader’s lock on the corresponding stream. After the lock is released, the reader is no longer active. If the associated stream is errored when the lock is released, the reader will appear errored in the same way from now on; otherwise, the reader will appear closed.

A reader’s lock cannot be released while it still has a pending read request, i.e., if a promise returned by the reader’s read() method has not yet been settled. Attempting to do so will throw a TypeError and leave the reader locked to the stream.

1. If ! IsReadableStreamDefaultReader(*this*) is *false*, throw a *TypeError* exception.
2. If *this*.[[ownerReadableStream]] is *undefined*, return.
3. If *this*.[[readRequests]] is not empty, throw a *TypeError* exception.
4. Perform ! ReadableStreamReaderGenericRelease(*this*).

3.7. Class ReadableStreamBYOBReader

The ReadableStreamBYOBReader class represents a BYOB reader designed to be vended by a ReadableStream instance.

3.7.1. Class definition

This section is non-normative.

If one were to write the ReadableStreamBYOBReader class in something close to the syntax of [ECMASCRIPT], it would look like

class ReadableStreamBYOBReader {
  constructor(stream)

  get closed()

  cancel(reason)
  read(view)
  releaseLock()
}

3.7.2. Internal slots

Instances of ReadableStreamBYOBReader are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[closedPromise]] A promise returned by the reader’s closed getter
[[forAuthorCode]] A boolean flag indicating whether this reader is visible to author code
[[ownerReadableStream]] A ReadableStream instance that owns this reader
[[readIntoRequests]] A List of promises returned by calls to the reader’s read(view) method that have not yet been resolved, due to the consumer requesting chunks sooner than they are available; also used for the IsReadableStreamBYOBReader brand check

3.7.3. new ReadableStreamBYOBReader(stream)

The ReadableStreamBYOBReader constructor is generally not meant to be used directly; instead, a stream’s getReader() method ought to be used.
1. If ! IsReadableStream(_stream_) is *false*, throw a *TypeError* exception.
2. If ! IsReadableByteStreamController(_stream_.[[readableStreamController]]) is *false*, throw a *TypeError* exception.
3. If ! IsReadableStreamLocked(_stream_) is *true*, throw a *TypeError* exception.
4. Perform ! ReadableStreamReaderGenericInitialize(*this*, _stream_).
5. Set *this*.[[readIntoRequests]] to a new empty List.
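A BYOB reader can only be vended by a readable byte stream, since step 2 above requires a ReadableByteStreamController. A sketch (illustrative, not normative) using the `type: "bytes"` underlying source option:

```javascript
// A readable byte stream: its controller is a ReadableByteStreamController.
const byteStream = new ReadableStream({
  type: "bytes",
  start(controller) {
    controller.enqueue(new Uint8Array([1, 2, 3]));
  }
});

// getReader({ mode: "byob" }) vends a ReadableStreamBYOBReader and
// locks the stream; on a non-byte stream this call would throw a TypeError.
const reader = byteStream.getReader({ mode: "byob" });
const locked = byteStream.locked; // true
```

Reads then go through `reader.read(view)`, passing a typed array for the bytes to be written into.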

3.7.4. Properties of the ReadableStreamBYOBReader prototype

3.7.4.1. get closed
The closed getter returns a promise that will be fulfilled when the stream becomes closed, or rejected if the stream ever errors or the reader’s lock is released before the stream finishes closing.
1. If ! IsReadableStreamBYOBReader(*this*) is *false*, return a promise rejected with a *TypeError* exception.
2. Return *this*.[[closedPromise]].
3.7.4.2. cancel(reason)
If the reader is active, the cancel method behaves the same as that for the associated stream.
1. If ! IsReadableStreamBYOBReader(*this*) is *false*, return a promise rejected with a *TypeError* exception.
2. If *this*.[[ownerReadableStream]] is *undefined*, return a promise rejected with a *TypeError* exception.
3. Return ! ReadableStreamReaderGenericCancel(*this*, _reason_).
3.7.4.3. read(view)
The read method will write read bytes into view and return a promise resolved with a possibly transferred buffer as described below.

If reading a chunk causes the queue to become empty, more data will be pulled from the underlying byte source.

1. If ! IsReadableStreamBYOBReader(*this*) is *false*, return a promise rejected with a *TypeError* exception.
2. If *this*.[[ownerReadableStream]] is *undefined*, return a promise rejected with a *TypeError* exception.
3. If Type(_view_) is not Object, return a promise rejected with a *TypeError* exception.
4. If _view_ does not have a [[ViewedArrayBuffer]] internal slot, return a promise rejected with a *TypeError* exception.
5. If ! IsDetachedBuffer(_view_.[[ViewedArrayBuffer]]) is *true*, return a promise rejected with a *TypeError* exception.
6. If _view_.[[ByteLength]] is *0*, return a promise rejected with a *TypeError* exception.
7. Return ! ReadableStreamBYOBReaderRead(*this*, _view_).
3.7.4.4. releaseLock()
The releaseLock method releases the reader’s lock on the corresponding stream. After the lock is released, the reader is no longer active. If the associated stream is errored when the lock is released, the reader will appear errored in the same way from now on; otherwise, the reader will appear closed.

A reader’s lock cannot be released while it still has a pending read request, i.e., if a promise returned by the reader’s read() method has not yet been settled. Attempting to do so will throw a TypeError and leave the reader locked to the stream.

1. If ! IsReadableStreamBYOBReader(*this*) is *false*, throw a *TypeError* exception.
2. If *this*.[[ownerReadableStream]] is *undefined*, return.
3. If *this*.[[readIntoRequests]] is not empty, throw a *TypeError* exception.
4. Perform ! ReadableStreamReaderGenericRelease(*this*).

3.8. Readable stream reader abstract operations

3.8.1. IsReadableStreamDefaultReader ( x )

1. If Type(_x_) is not Object, return *false*.
2. If _x_ does not have a [[readRequests]] internal slot, return *false*.
3. Return *true*.

3.8.2. IsReadableStreamBYOBReader ( x )

1. If Type(_x_) is not Object, return *false*.
2. If _x_ does not have a [[readIntoRequests]] internal slot, return *false*.
3. Return *true*.

3.8.3. ReadableStreamReaderGenericCancel ( reader, reason )

1. Let _stream_ be _reader_.[[ownerReadableStream]].
2. Assert: _stream_ is not *undefined*.
3. Return ! ReadableStreamCancel(_stream_, _reason_).

3.8.4. ReadableStreamReaderGenericInitialize ( reader, stream )

1. Set _reader_.[[forAuthorCode]] to *true*.
2. Set _reader_.[[ownerReadableStream]] to _stream_.
3. Set _stream_.[[reader]] to _reader_.
4. If _stream_.[[state]] is `"readable"`,
   1. Set _reader_.[[closedPromise]] to a new promise.
5. Otherwise, if _stream_.[[state]] is `"closed"`,
   1. Set _reader_.[[closedPromise]] to a promise resolved with *undefined*.
6. Otherwise,
   1. Assert: _stream_.[[state]] is `"errored"`.
   2. Set _reader_.[[closedPromise]] to a promise rejected with _stream_.[[storedError]].
   3. Set _reader_.[[closedPromise]].[[PromiseIsHandled]] to *true*.

3.8.5. ReadableStreamReaderGenericRelease ( reader )

1. Assert: _reader_.[[ownerReadableStream]] is not *undefined*.
2. Assert: _reader_.[[ownerReadableStream]].[[reader]] is _reader_.
3. If _reader_.[[ownerReadableStream]].[[state]] is `"readable"`, reject _reader_.[[closedPromise]] with a *TypeError* exception.
4. Otherwise, set _reader_.[[closedPromise]] to a promise rejected with a *TypeError* exception.
5. Set _reader_.[[closedPromise]].[[PromiseIsHandled]] to *true*.
6. Set _reader_.[[ownerReadableStream]].[[reader]] to *undefined*.
7. Set _reader_.[[ownerReadableStream]] to *undefined*.

3.8.6. ReadableStreamBYOBReaderRead ( reader, view )

1. Let _stream_ be _reader_.[[ownerReadableStream]].
2. Assert: _stream_ is not *undefined*.
3. Set _stream_.[[disturbed]] to *true*.
4. If _stream_.[[state]] is `"errored"`, return a promise rejected with _stream_.[[storedError]].
5. Return ! ReadableByteStreamControllerPullInto(_stream_.[[readableStreamController]], _view_).

3.8.7. ReadableStreamDefaultReaderRead ( reader )

1. Let _stream_ be _reader_.[[ownerReadableStream]].
2. Assert: _stream_ is not *undefined*.
3. Set _stream_.[[disturbed]] to *true*.
4. If _stream_.[[state]] is `"closed"`, return a promise resolved with ! ReadableStreamCreateReadResult(*undefined*, *true*, _reader_.[[forAuthorCode]]).
5. If _stream_.[[state]] is `"errored"`, return a promise rejected with _stream_.[[storedError]].
6. Assert: _stream_.[[state]] is `"readable"`.
7. Return ! _stream_.[[readableStreamController]].[[PullSteps]]().

3.9. Class ReadableStreamDefaultController

The ReadableStreamDefaultController class has methods that allow control of a ReadableStream's state and internal queue. When constructing a ReadableStream that is not a readable byte stream, the underlying source is given a corresponding ReadableStreamDefaultController instance to manipulate.

3.9.1. Class definition

This section is non-normative.

If one were to write the ReadableStreamDefaultController class in something close to the syntax of [ECMASCRIPT], it would look like

class ReadableStreamDefaultController {
  constructor() // always throws

  get desiredSize()

  close()
  enqueue(chunk)
  error(e)
}

3.9.2. Internal slots

Instances of ReadableStreamDefaultController are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[cancelAlgorithm]] A promise-returning algorithm, taking one argument (the cancel reason), which communicates a requested cancelation to the underlying source
[[closeRequested]] A boolean flag indicating whether the stream has been closed by its underlying source, but still has chunks in its internal queue that have not yet been read
[[controlledReadableStream]] The ReadableStream instance controlled
[[pullAgain]] A boolean flag set to true if the stream’s mechanisms requested a call to the underlying source’s pull algorithm to pull more data, but the pull could not yet be done since a previous call is still executing
[[pullAlgorithm]] A promise-returning algorithm that pulls data from the underlying source
[[pulling]] A boolean flag set to true while the underlying source’s pull algorithm is executing and the returned promise has not yet fulfilled, used to prevent reentrant calls
[[queue]] A List representing the stream’s internal queue of chunks
[[queueTotalSize]] The total size of all the chunks stored in [[queue]] (see § 6.2 Queue-with-sizes operations)
[[started]] A boolean flag indicating whether the underlying source has finished starting
[[strategyHWM]] A number supplied to the constructor as part of the stream’s queuing strategy, indicating the point at which the stream will apply backpressure to its underlying source
[[strategySizeAlgorithm]] An algorithm to calculate the size of enqueued chunks, as part of the stream’s queuing strategy

3.9.3. new ReadableStreamDefaultController()

The ReadableStreamDefaultController constructor cannot be used directly; ReadableStreamDefaultController instances are created automatically during ReadableStream construction.
1. Throw a *TypeError* exception.

3.9.4. Properties of the ReadableStreamDefaultController prototype

3.9.4.1. get desiredSize
The desiredSize getter returns the desired size to fill the controlled stream’s internal queue. It can be negative, if the queue is over-full. An underlying source ought to use this information to determine when and how to apply backpressure.
1. If ! IsReadableStreamDefaultController(*this*) is *false*, throw a *TypeError* exception.
2. Return ! ReadableStreamDefaultControllerGetDesiredSize(*this*).
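A sketch of the arithmetic involved (illustrative, not normative): with the default size algorithm each chunk counts as 1, so the desired size is the high-water mark minus the number of queued chunks.

```javascript
let controller;
const stream = new ReadableStream(
  { start(c) { controller = c; } },
  { highWaterMark: 3 } // default size() counts every chunk as 1
);

const emptyQueue = controller.desiredSize; // 3 − 0 = 3

controller.enqueue("a");
controller.enqueue("b");
const twoQueued = controller.desiredSize; // 3 − 2 = 1
```

An underlying source that keeps enqueuing past the high-water mark would see `desiredSize` go negative, signaling backpressure.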
3.9.4.2. close()
The close method will close the controlled readable stream. Consumers will still be able to read any previously-enqueued chunks from the stream, but once those are read, the stream will become closed.
1. If ! IsReadableStreamDefaultController(*this*) is *false*, throw a *TypeError* exception.
2. If ! ReadableStreamDefaultControllerCanCloseOrEnqueue(*this*) is *false*, throw a *TypeError* exception.
3. Perform ! ReadableStreamDefaultControllerClose(*this*).
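A sketch of the observable behavior (illustrative, not normative): after `close()`, previously-enqueued chunks remain readable, but further `enqueue()` calls throw.

```javascript
let controller;
const stream = new ReadableStream({
  start(c) { controller = c; }
});

controller.enqueue("last chunk"); // consumers can still read this
controller.close();

// The stream is now close-requested; enqueuing further chunks throws.
let enqueueError;
try {
  controller.enqueue("too late");
} catch (e) {
  enqueueError = e; // a TypeError
}
```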
3.9.4.3. enqueue(chunk)
The enqueue method will enqueue a given chunk in the controlled readable stream.
1. If ! IsReadableStreamDefaultController(*this*) is *false*, throw a *TypeError* exception.
2. If ! ReadableStreamDefaultControllerCanCloseOrEnqueue(*this*) is *false*, throw a *TypeError* exception.
3. Return ? ReadableStreamDefaultControllerEnqueue(*this*, _chunk_).
3.9.4.4. error(e)
The error method will error the readable stream, making all future interactions with it fail with the given error e.
1. If ! IsReadableStreamDefaultController(*this*) is *false*, throw a *TypeError* exception.
2. Perform ! ReadableStreamDefaultControllerError(*this*, _e_).
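A sketch of the observable effect (illustrative, not normative): once a stream is errored, its queue is discarded and the controller's `desiredSize` becomes `null`.

```javascript
let controller;
const stream = new ReadableStream({
  start(c) { controller = c; }
});

controller.error(new Error("backing source failed"));

// All future interactions fail with the stored error; for example,
// reads and cancel() now reject, and desiredSize reports null.
const sizeAfterError = controller.desiredSize; // null
```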

3.9.5. Readable stream default controller internal methods

The following are additional internal methods implemented by each ReadableStreamDefaultController instance. The readable stream implementation will polymorphically call to either these or their counterparts for BYOB controllers.

3.9.5.1. [[CancelSteps]](reason)
1. Perform ! ResetQueue(*this*).
2. Let _result_ be the result of performing *this*.[[cancelAlgorithm]], passing _reason_.
3. Perform ! ReadableStreamDefaultControllerClearAlgorithms(*this*).
4. Return _result_.
3.9.5.2. [[PullSteps]]( )
1. Let _stream_ be *this*.[[controlledReadableStream]].
2. If *this*.[[queue]] is not empty,
   1. Let _chunk_ be ! DequeueValue(*this*).
   2. If *this*.[[closeRequested]] is *true* and *this*.[[queue]] is empty,
      1. Perform ! ReadableStreamDefaultControllerClearAlgorithms(*this*).
      2. Perform ! ReadableStreamClose(_stream_).
   3. Otherwise, perform ! ReadableStreamDefaultControllerCallPullIfNeeded(*this*).
   4. Return a promise resolved with ! ReadableStreamCreateReadResult(_chunk_, *false*, _stream_.[[reader]].[[forAuthorCode]]).
3. Let _pendingPromise_ be ! ReadableStreamAddReadRequest(_stream_).
4. Perform ! ReadableStreamDefaultControllerCallPullIfNeeded(*this*).
5. Return _pendingPromise_.

3.10. Readable stream default controller abstract operations

3.10.1. IsReadableStreamDefaultController ( x )

1. If Type(_x_) is not Object, return *false*.
2. If _x_ does not have a [[controlledReadableStream]] internal slot, return *false*.
3. Return *true*.

3.10.2. ReadableStreamDefaultControllerCallPullIfNeeded ( controller )

1. Let _shouldPull_ be ! ReadableStreamDefaultControllerShouldCallPull(_controller_).
2. If _shouldPull_ is *false*, return.
3. If _controller_.[[pulling]] is *true*,
   1. Set _controller_.[[pullAgain]] to *true*.
   2. Return.
4. Assert: _controller_.[[pullAgain]] is *false*.
5. Set _controller_.[[pulling]] to *true*.
6. Let _pullPromise_ be the result of performing _controller_.[[pullAlgorithm]].
7. Upon fulfillment of _pullPromise_,
   1. Set _controller_.[[pulling]] to *false*.
   2. If _controller_.[[pullAgain]] is *true*,
      1. Set _controller_.[[pullAgain]] to *false*.
      2. Perform ! ReadableStreamDefaultControllerCallPullIfNeeded(_controller_).
8. Upon rejection of _pullPromise_ with reason _e_,
   1. Perform ! ReadableStreamDefaultControllerError(_controller_, _e_).

3.10.3. ReadableStreamDefaultControllerShouldCallPull ( controller )

1. Let _stream_ be _controller_.[[controlledReadableStream]].
2. If ! ReadableStreamDefaultControllerCanCloseOrEnqueue(_controller_) is *false*, return *false*.
3. If _controller_.[[started]] is *false*, return *false*.
4. If ! IsReadableStreamLocked(_stream_) is *true* and ! ReadableStreamGetNumReadRequests(_stream_) > *0*, return *true*.
5. Let _desiredSize_ be ! ReadableStreamDefaultControllerGetDesiredSize(_controller_).
6. Assert: _desiredSize_ is not *null*.
7. If _desiredSize_ > *0*, return *true*.
8. Return *false*.

3.10.4. ReadableStreamDefaultControllerClearAlgorithms ( controller )

This abstract operation is called once the stream is closed or errored and the algorithms will not be executed any more. By removing the algorithm references it permits the underlying source object to be garbage collected even if the ReadableStream itself is still referenced.

The results of this algorithm are not currently observable, but could become so if JavaScript eventually adds weak references. But even without that factor, implementations will likely want to include similar steps.

1. Set _controller_.[[pullAlgorithm]] to *undefined*.
2. Set _controller_.[[cancelAlgorithm]] to *undefined*.
3. Set _controller_.[[strategySizeAlgorithm]] to *undefined*.

3.10.5. ReadableStreamDefaultControllerClose ( controller )

This abstract operation can be called by other specifications that wish to close a readable stream, in the same way a developer-created stream would be closed by its associated controller object. Specifications should not do this to streams they did not create, and must ensure they have obeyed the preconditions (listed here as asserts).

1. Let _stream_ be _controller_.[[controlledReadableStream]].
2. Assert: ! ReadableStreamDefaultControllerCanCloseOrEnqueue(_controller_) is *true*.
3. Set _controller_.[[closeRequested]] to *true*.
4. If _controller_.[[queue]] is empty,
   1. Perform ! ReadableStreamDefaultControllerClearAlgorithms(_controller_).
   2. Perform ! ReadableStreamClose(_stream_).

3.10.6. ReadableStreamDefaultControllerEnqueue ( controller, chunk )

This abstract operation can be called by other specifications that wish to enqueue chunks in a readable stream, in the same way a developer would enqueue chunks using the stream’s associated controller object. Specifications should not do this to streams they did not create, and must ensure they have obeyed the preconditions (listed here as asserts).

1. Let _stream_ be _controller_.[[controlledReadableStream]].
2. Assert: ! ReadableStreamDefaultControllerCanCloseOrEnqueue(_controller_) is *true*.
3. If ! IsReadableStreamLocked(_stream_) is *true* and ! ReadableStreamGetNumReadRequests(_stream_) > *0*, perform ! ReadableStreamFulfillReadRequest(_stream_, _chunk_, *false*).
4. Otherwise,
   1. Let _result_ be the result of performing _controller_.[[strategySizeAlgorithm]], passing in _chunk_, and interpreting the result as an ECMAScript completion value.
   2. If _result_ is an abrupt completion,
      1. Perform ! ReadableStreamDefaultControllerError(_controller_, _result_.[[Value]]).
      2. Return _result_.
   3. Let _chunkSize_ be _result_.[[Value]].
   4. Let _enqueueResult_ be EnqueueValueWithSize(_controller_, _chunk_, _chunkSize_).
   5. If _enqueueResult_ is an abrupt completion,
      1. Perform ! ReadableStreamDefaultControllerError(_controller_, _enqueueResult_.[[Value]]).
      2. Return _enqueueResult_.
5. Perform ! ReadableStreamDefaultControllerCallPullIfNeeded(_controller_).
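The abrupt-completion path above is observable from author code. A sketch (illustrative, not normative): if the strategy's size algorithm throws, `enqueue()` both rethrows that value and errors the stream.

```javascript
let controller;
const stream = new ReadableStream(
  { start(c) { controller = c; } },
  {
    highWaterMark: 1,
    size() { throw new RangeError("cannot measure chunk"); }
  }
);

let thrown;
try {
  controller.enqueue("chunk");
} catch (e) {
  thrown = e; // the RangeError from the size algorithm
}

// The stream is now errored, so desiredSize reports null.
const sizeAfterFailure = controller.desiredSize;
```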

3.10.7. ReadableStreamDefaultControllerError ( controller, e )

This abstract operation can be called by other specifications that wish to move a readable stream to an errored state, in the same way a developer would error a stream using its associated controller object. Specifications should not do this to streams they did not create.

1. Let _stream_ be _controller_.[[controlledReadableStream]].
2. If _stream_.[[state]] is not `"readable"`, return.
3. Perform ! ResetQueue(_controller_).
4. Perform ! ReadableStreamDefaultControllerClearAlgorithms(_controller_).
5. Perform ! ReadableStreamError(_stream_, _e_).

3.10.8. ReadableStreamDefaultControllerGetDesiredSize ( controller )

This abstract operation can be called by other specifications that wish to determine the desired size to fill this stream’s internal queue, similar to how a developer would consult the desiredSize property of the stream’s associated controller object. Specifications should not use this on streams they did not create.

1. Let _stream_ be _controller_.[[controlledReadableStream]].
2. Let _state_ be _stream_.[[state]].
3. If _state_ is `"errored"`, return *null*.
4. If _state_ is `"closed"`, return *0*.
5. Return _controller_.[[strategyHWM]] − _controller_.[[queueTotalSize]].

3.10.9. ReadableStreamDefaultControllerHasBackpressure ( controller )

This abstract operation is used in the implementation of TransformStream.

1. If ! ReadableStreamDefaultControllerShouldCallPull(_controller_) is *true*, return *false*.
2. Otherwise, return *true*.

3.10.10. ReadableStreamDefaultControllerCanCloseOrEnqueue ( controller )

1. Let _state_ be _controller_.[[controlledReadableStream]].[[state]].
2. If _controller_.[[closeRequested]] is *false* and _state_ is `"readable"`, return *true*.
3. Otherwise, return *false*.
The case where _controller_.[[closeRequested]] is false, but _stream_.[[state]] is not "readable", happens when the stream is errored via error(e), or when it is closed without its controller’s close method ever being called: e.g., if the stream was closed by a call to cancel(reason).
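A sketch of the cancel(reason) case described in the note (illustrative, not normative): cancelation closes the stream without setting [[closeRequested]], after which the controller can no longer close or enqueue.

```javascript
let controller;
const stream = new ReadableStream({
  start(c) { controller = c; }
});

// Cancelation closes the stream without going through controller.close(),
// so [[closeRequested]] stays false while the state becomes "closed".
stream.cancel("no longer interested");

let closeError;
try {
  controller.close();
} catch (e) {
  closeError = e; // TypeError: the stream is no longer in the "readable" state
}
```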

3.10.11. SetUpReadableStreamDefaultController(stream, controller, startAlgorithm, pullAlgorithm, cancelAlgorithm, highWaterMark, sizeAlgorithm )

1. Assert: _stream_.[[readableStreamController]] is *undefined*.
2. Set _controller_.[[controlledReadableStream]] to _stream_.
3. Set _controller_.[[queue]] and _controller_.[[queueTotalSize]] to *undefined*, then perform ! ResetQueue(_controller_).
4. Set _controller_.[[started]], _controller_.[[closeRequested]], _controller_.[[pullAgain]], and _controller_.[[pulling]] to *false*.
5. Set _controller_.[[strategySizeAlgorithm]] to _sizeAlgorithm_ and _controller_.[[strategyHWM]] to _highWaterMark_.
6. Set _controller_.[[pullAlgorithm]] to _pullAlgorithm_.
7. Set _controller_.[[cancelAlgorithm]] to _cancelAlgorithm_.
8. Set _stream_.[[readableStreamController]] to _controller_.
9. Let _startResult_ be the result of performing _startAlgorithm_. (This may throw an exception.)
10. Let _startPromise_ be a promise resolved with _startResult_.
11. Upon fulfillment of _startPromise_,
    1. Set _controller_.[[started]] to *true*.
    2. Assert: _controller_.[[pulling]] is *false*.
    3. Assert: _controller_.[[pullAgain]] is *false*.
    4. Perform ! ReadableStreamDefaultControllerCallPullIfNeeded(_controller_).
12. Upon rejection of _startPromise_ with reason _r_,
    1. Perform ! ReadableStreamDefaultControllerError(_controller_, _r_).

3.10.12. SetUpReadableStreamDefaultControllerFromUnderlyingSource(stream, underlyingSource, highWaterMark, sizeAlgorithm )

1. Assert: _underlyingSource_ is not *undefined*.
2. Let _controller_ be ObjectCreate(the original value of `ReadableStreamDefaultController`'s `prototype` property).
3. Let _startAlgorithm_ be the following steps:
   1. Return ? InvokeOrNoop(_underlyingSource_, `"start"`, « _controller_ »).
4. Let _pullAlgorithm_ be ? CreateAlgorithmFromUnderlyingMethod(_underlyingSource_, `"pull"`, *0*, « _controller_ »).
5. Let _cancelAlgorithm_ be ? CreateAlgorithmFromUnderlyingMethod(_underlyingSource_, `"cancel"`, *1*, « »).
6. Perform ? SetUpReadableStreamDefaultController(_stream_, _controller_, _startAlgorithm_, _pullAlgorithm_, _cancelAlgorithm_, _highWaterMark_, _sizeAlgorithm_).

3.11. Class ReadableByteStreamController

The ReadableByteStreamController class has methods that allow control of a ReadableStream's state and internal queue. When constructing a ReadableStream, the underlying byte source is given a corresponding ReadableByteStreamController instance to manipulate.

3.11.1. Class definition

This section is non-normative.

If one were to write the ReadableByteStreamController class in something close to the syntax of [ECMASCRIPT], it would look like

class ReadableByteStreamController {
  constructor() // always throws

  get byobRequest()
  get desiredSize()

  close()
  enqueue(chunk)
  error(e)
}

3.11.2. Internal slots

Instances of ReadableByteStreamController are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[autoAllocateChunkSize]] A positive integer, when the automatic buffer allocation feature is enabled. In that case, this value specifies the size of buffer to allocate. It is undefined otherwise.
[[byobRequest]] A ReadableStreamBYOBRequest instance representing the current BYOB pull request, or undefined if there are no pending requests
[[cancelAlgorithm]] A promise-returning algorithm, taking one argument (the cancel reason), which communicates a requested cancelation to the underlying source
[[closeRequested]] A boolean flag indicating whether the stream has been closed by its underlying byte source, but still has chunks in its internal queue that have not yet been read
[[controlledReadableByteStream]] The ReadableStream instance controlled
[[pullAgain]] A boolean flag set to true if the stream’s mechanisms requested a call to the underlying byte source’s pull() method to pull more data, but the pull could not yet be done since a previous call is still executing
[[pullAlgorithm]] A promise-returning algorithm that pulls data from the underlying source
[[pulling]] A boolean flag set to true while the underlying byte source’s pull() method is executing and has not yet fulfilled, used to prevent reentrant calls
[[pendingPullIntos]] A List of descriptors representing pending BYOB pull requests
[[queue]] A List representing the stream’s internal queue of chunks
[[queueTotalSize]] The total size (in bytes) of all the chunks stored in [[queue]]
[[started]] A boolean flag indicating whether the underlying source has finished starting
[[strategyHWM]] A number supplied to the constructor as part of the stream’s queuing strategy, indicating the point at which the stream will apply backpressure to its underlying byte source

Although ReadableByteStreamController instances have [[queue]] and [[queueTotalSize]] slots, we do not use most of the abstract operations in § 6.2 Queue-with-sizes operations on them, as the way in which we manipulate this queue is rather different than the others in the spec. Instead, we update the two slots together manually.

This might be cleaned up in a future spec refactoring.

3.11.3. new ReadableByteStreamController()

The ReadableByteStreamController constructor cannot be used directly; ReadableByteStreamController instances are created automatically during ReadableStream construction.
1. Throw a *TypeError* exception.

3.11.4. Properties of the ReadableByteStreamController prototype

3.11.4.1. get byobRequest
The byobRequest getter returns the current BYOB pull request.
1. If ! IsReadableByteStreamController(*this*) is *false*, throw a *TypeError* exception.
2. If *this*.[[byobRequest]] is *undefined* and *this*.[[pendingPullIntos]] is not empty,
   1. Let _firstDescriptor_ be the first element of *this*.[[pendingPullIntos]].
   2. Let _view_ be ! Construct(%Uint8Array%, « _firstDescriptor_.[[buffer]], _firstDescriptor_.[[byteOffset]] + _firstDescriptor_.[[bytesFilled]], _firstDescriptor_.[[byteLength]] − _firstDescriptor_.[[bytesFilled]] »).
   3. Let _byobRequest_ be ObjectCreate(the original value of `ReadableStreamBYOBRequest`'s `prototype` property).
   4. Perform ! SetUpReadableStreamBYOBRequest(_byobRequest_, *this*, _view_).
   5. Set *this*.[[byobRequest]] to _byobRequest_.
3. Return *this*.[[byobRequest]].
3.11.4.2. get desiredSize
The desiredSize getter returns the desired size to fill the controlled stream’s internal queue. It can be negative, if the queue is over-full. An underlying source ought to use this information to determine when and how to apply backpressure.
1. If ! IsReadableByteStreamController(*this*) is *false*, throw a *TypeError* exception.
2. Return ! ReadableByteStreamControllerGetDesiredSize(*this*).
3.11.4.3. close()
The close method will close the controlled readable stream. Consumers will still be able to read any previously-enqueued chunks from the stream, but once those are read, the stream will become closed.
1. If ! IsReadableByteStreamController(*this*) is *false*, throw a *TypeError* exception.
2. If *this*.[[closeRequested]] is *true*, throw a *TypeError* exception.
3. If *this*.[[controlledReadableByteStream]].[[state]] is not `"readable"`, throw a *TypeError* exception.
4. Perform ? ReadableByteStreamControllerClose(*this*).
3.11.4.4. enqueue(chunk)
The enqueue method will enqueue a given chunk in the controlled readable stream.
1. If ! IsReadableByteStreamController(*this*) is *false*, throw a *TypeError* exception.
2. If *this*.[[closeRequested]] is *true*, throw a *TypeError* exception.
3. If *this*.[[controlledReadableByteStream]].[[state]] is not `"readable"`, throw a *TypeError* exception.
4. If Type(_chunk_) is not Object, throw a *TypeError* exception.
5. If _chunk_ does not have a [[ViewedArrayBuffer]] internal slot, throw a *TypeError* exception.
6. If ! IsDetachedBuffer(_chunk_.[[ViewedArrayBuffer]]) is *true*, throw a *TypeError* exception.
7. Return ! ReadableByteStreamControllerEnqueue(*this*, _chunk_).
3.11.4.5. error(e)
The error method will error the readable stream, making all future interactions with it fail with the given error e.
1. If ! IsReadableByteStreamController(*this*) is *false*, throw a *TypeError* exception.
2. Perform ! ReadableByteStreamControllerError(*this*, _e_).

3.11.5. Readable stream BYOB controller internal methods

The following are additional internal methods implemented by each ReadableByteStreamController instance. The readable stream implementation will polymorphically call to either these or their counterparts for default controllers.

3.11.5.1. [[CancelSteps]](reason)
1. If *this*.[[pendingPullIntos]] is not empty,
   1. Let _firstDescriptor_ be the first element of *this*.[[pendingPullIntos]].
   2. Set _firstDescriptor_.[[bytesFilled]] to *0*.
2. Perform ! ResetQueue(*this*).
3. Let _result_ be the result of performing *this*.[[cancelAlgorithm]], passing in _reason_.
4. Perform ! ReadableByteStreamControllerClearAlgorithms(*this*).
5. Return _result_.
3.11.5.2. [[PullSteps]]( )
1. Let _stream_ be *this*.[[controlledReadableByteStream]].
2. Assert: ! ReadableStreamHasDefaultReader(_stream_) is *true*.
3. If *this*.[[queueTotalSize]] > *0*,
   1. Assert: ! ReadableStreamGetNumReadRequests(_stream_) is *0*.
   2. Let _entry_ be the first element of *this*.[[queue]].
   3. Remove _entry_ from *this*.[[queue]], shifting all other elements downward (so that the second becomes the first, and so on).
   4. Set *this*.[[queueTotalSize]] to *this*.[[queueTotalSize]] − _entry_.[[byteLength]].
   5. Perform ! ReadableByteStreamControllerHandleQueueDrain(*this*).
   6. Let _view_ be ! Construct(%Uint8Array%, « _entry_.[[buffer]], _entry_.[[byteOffset]], _entry_.[[byteLength]] »).
   7. Return a promise resolved with ! ReadableStreamCreateReadResult(_view_, *false*, _stream_.[[reader]].[[forAuthorCode]]).
4. Let _autoAllocateChunkSize_ be *this*.[[autoAllocateChunkSize]].
5. If _autoAllocateChunkSize_ is not *undefined*,
   1. Let _buffer_ be Construct(%ArrayBuffer%, « _autoAllocateChunkSize_ »).
   2. If _buffer_ is an abrupt completion, return a promise rejected with _buffer_.[[Value]].
   3. Let _pullIntoDescriptor_ be Record {[[buffer]]: _buffer_.[[Value]], [[byteOffset]]: *0*, [[byteLength]]: _autoAllocateChunkSize_, [[bytesFilled]]: *0*, [[elementSize]]: *1*, [[ctor]]: %Uint8Array%, [[readerType]]: `"default"`}.
   4. Append _pullIntoDescriptor_ as the last element of *this*.[[pendingPullIntos]].
6. Let _promise_ be ! ReadableStreamAddReadRequest(_stream_).
7. Perform ! ReadableByteStreamControllerCallPullIfNeeded(*this*).
8. Return _promise_.

3.12. Class ReadableStreamBYOBRequest

The ReadableStreamBYOBRequest class represents a pull into request in a ReadableByteStreamController.

3.12.1. Class definition

This section is non-normative.

If one were to write the ReadableStreamBYOBRequest class in something close to the syntax of [ECMASCRIPT], it would look like

class ReadableStreamBYOBRequest {
  constructor(controller, view)

  get view()

  respond(bytesWritten)
  respondWithNewView(view)
}

3.12.2. Internal slots

Instances of ReadableStreamBYOBRequest are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[associatedReadableByteStreamController]] The parent ReadableByteStreamController instance
[[view]] A typed array representing the destination region to which the controller can write generated data

3.12.3. new ReadableStreamBYOBRequest()

1. Throw a *TypeError* exception.

3.12.4. Properties of the ReadableStreamBYOBRequest prototype

3.12.4.1. get view
1. If ! IsReadableStreamBYOBRequest(*this*) is *false*, throw a *TypeError* exception.
2. Return *this*.[[view]].
3.12.4.2. respond(bytesWritten)
1. If ! IsReadableStreamBYOBRequest(*this*) is *false*, throw a *TypeError* exception.
2. If *this*.[[associatedReadableByteStreamController]] is *undefined*, throw a *TypeError* exception.
3. If ! IsDetachedBuffer(*this*.[[view]].[[ViewedArrayBuffer]]) is *true*, throw a *TypeError* exception.
4. Return ? ReadableByteStreamControllerRespond(*this*.[[associatedReadableByteStreamController]], _bytesWritten_).
3.12.4.3. respondWithNewView(view)
1. If ! IsReadableStreamBYOBRequest(*this*) is *false*, throw a *TypeError* exception.
2. If *this*.[[associatedReadableByteStreamController]] is *undefined*, throw a *TypeError* exception.
3. If Type(_view_) is not Object, throw a *TypeError* exception.
4. If _view_ does not have a [[ViewedArrayBuffer]] internal slot, throw a *TypeError* exception.
5. If ! IsDetachedBuffer(_view_.[[ViewedArrayBuffer]]) is *true*, throw a *TypeError* exception.
6. Return ? ReadableByteStreamControllerRespondWithNewView(*this*.[[associatedReadableByteStreamController]], _view_).

3.13. Readable stream BYOB controller abstract operations

3.13.1. IsReadableStreamBYOBRequest ( x )

1. If Type(_x_) is not Object, return *false*.
2. If _x_ does not have an [[associatedReadableByteStreamController]] internal slot, return *false*.
3. Return *true*.

3.13.2. IsReadableByteStreamController ( x )

1. If Type(_x_) is not Object, return *false*.
2. If _x_ does not have a [[controlledReadableByteStream]] internal slot, return *false*.
3. Return *true*.

3.13.3. ReadableByteStreamControllerCallPullIfNeeded ( controller )

1. Let _shouldPull_ be ! ReadableByteStreamControllerShouldCallPull(_controller_).
1. If _shouldPull_ is *false*, return.
1. If _controller_.[[pulling]] is *true*,
  1. Set _controller_.[[pullAgain]] to *true*.
  1. Return.
1. Assert: _controller_.[[pullAgain]] is *false*.
1. Set _controller_.[[pulling]] to *true*.
1. Let _pullPromise_ be the result of performing _controller_.[[pullAlgorithm]].
1. Upon fulfillment of _pullPromise_,
  1. Set _controller_.[[pulling]] to *false*.
  1. If _controller_.[[pullAgain]] is *true*,
    1. Set _controller_.[[pullAgain]] to *false*.
    1. Perform ! ReadableByteStreamControllerCallPullIfNeeded(_controller_).
1. Upon rejection of _pullPromise_ with reason _e_,
  1. Perform ! ReadableByteStreamControllerError(_controller_, _e_).

3.13.4. ReadableByteStreamControllerClearAlgorithms ( controller )

This abstract operation is called once the stream is closed or errored, after which the algorithms will never be executed again. Removing the algorithm references permits the underlying source object to be garbage collected, even if the ReadableStream itself is still referenced.

The results of this algorithm are not currently observable, but could become so if JavaScript eventually adds weak references. But even without that factor, implementations will likely want to include similar steps.

1. Set _controller_.[[pullAlgorithm]] to *undefined*.
1. Set _controller_.[[cancelAlgorithm]] to *undefined*.

3.13.5. ReadableByteStreamControllerClearPendingPullIntos ( controller )

1. Perform ! ReadableByteStreamControllerInvalidateBYOBRequest(_controller_).
1. Set _controller_.[[pendingPullIntos]] to a new empty List.

3.13.6. ReadableByteStreamControllerClose ( controller )

1. Let _stream_ be _controller_.[[controlledReadableByteStream]].
1. Assert: _controller_.[[closeRequested]] is *false*.
1. Assert: _stream_.[[state]] is `"readable"`.
1. If _controller_.[[queueTotalSize]] > *0*,
  1. Set _controller_.[[closeRequested]] to *true*.
  1. Return.
1. If _controller_.[[pendingPullIntos]] is not empty,
  1. Let _firstPendingPullInto_ be the first element of _controller_.[[pendingPullIntos]].
  1. If _firstPendingPullInto_.[[bytesFilled]] > *0*,
    1. Let _e_ be a new *TypeError* exception.
    1. Perform ! ReadableByteStreamControllerError(_controller_, _e_).
    1. Throw _e_.
1. Perform ! ReadableByteStreamControllerClearAlgorithms(_controller_).
1. Perform ! ReadableStreamClose(_stream_).

3.13.7. ReadableByteStreamControllerCommitPullIntoDescriptor ( stream, pullIntoDescriptor )

1. Assert: _stream_.[[state]] is not `"errored"`.
1. Let _done_ be *false*.
1. If _stream_.[[state]] is `"closed"`,
  1. Assert: _pullIntoDescriptor_.[[bytesFilled]] is *0*.
  1. Set _done_ to *true*.
1. Let _filledView_ be ! ReadableByteStreamControllerConvertPullIntoDescriptor(_pullIntoDescriptor_).
1. If _pullIntoDescriptor_.[[readerType]] is `"default"`,
  1. Perform ! ReadableStreamFulfillReadRequest(_stream_, _filledView_, _done_).
1. Otherwise,
  1. Assert: _pullIntoDescriptor_.[[readerType]] is `"byob"`.
  1. Perform ! ReadableStreamFulfillReadIntoRequest(_stream_, _filledView_, _done_).

3.13.8. ReadableByteStreamControllerConvertPullIntoDescriptor ( pullIntoDescriptor )

1. Let _bytesFilled_ be _pullIntoDescriptor_.[[bytesFilled]].
1. Let _elementSize_ be _pullIntoDescriptor_.[[elementSize]].
1. Assert: _bytesFilled_ ≤ _pullIntoDescriptor_.[[byteLength]].
1. Assert: _bytesFilled_ mod _elementSize_ is *0*.
1. Return ! Construct(_pullIntoDescriptor_.[[ctor]], « _pullIntoDescriptor_.[[buffer]], _pullIntoDescriptor_.[[byteOffset]], _bytesFilled_ ÷ _elementSize_ »).

3.13.9. ReadableByteStreamControllerEnqueue ( controller, chunk )

1. Let _stream_ be _controller_.[[controlledReadableByteStream]].
1. Assert: _controller_.[[closeRequested]] is *false*.
1. Assert: _stream_.[[state]] is `"readable"`.
1. Let _buffer_ be _chunk_.[[ViewedArrayBuffer]].
1. Let _byteOffset_ be _chunk_.[[ByteOffset]].
1. Let _byteLength_ be _chunk_.[[ByteLength]].
1. Let _transferredBuffer_ be ! TransferArrayBuffer(_buffer_).
1. If ! ReadableStreamHasDefaultReader(_stream_) is *true*,
  1. If ! ReadableStreamGetNumReadRequests(_stream_) is *0*,
    1. Perform ! ReadableByteStreamControllerEnqueueChunkToQueue(_controller_, _transferredBuffer_, _byteOffset_, _byteLength_).
  1. Otherwise,
    1. Assert: _controller_.[[queue]] is empty.
    1. Let _transferredView_ be ! Construct(%Uint8Array%, « _transferredBuffer_, _byteOffset_, _byteLength_ »).
    1. Perform ! ReadableStreamFulfillReadRequest(_stream_, _transferredView_, *false*).
1. Otherwise, if ! ReadableStreamHasBYOBReader(_stream_) is *true*,
  1. Perform ! ReadableByteStreamControllerEnqueueChunkToQueue(_controller_, _transferredBuffer_, _byteOffset_, _byteLength_).
  1. Perform ! ReadableByteStreamControllerProcessPullIntoDescriptorsUsingQueue(_controller_).
1. Otherwise,
  1. Assert: ! IsReadableStreamLocked(_stream_) is *false*.
  1. Perform ! ReadableByteStreamControllerEnqueueChunkToQueue(_controller_, _transferredBuffer_, _byteOffset_, _byteLength_).
1. Perform ! ReadableByteStreamControllerCallPullIfNeeded(_controller_).

3.13.10. ReadableByteStreamControllerEnqueueChunkToQueue ( controller, buffer, byteOffset, byteLength )

1. Append Record {[[buffer]]: _buffer_, [[byteOffset]]: _byteOffset_, [[byteLength]]: _byteLength_} as the last element of _controller_.[[queue]].
1. Add _byteLength_ to _controller_.[[queueTotalSize]].

3.13.11. ReadableByteStreamControllerError ( controller, e )

1. Let _stream_ be _controller_.[[controlledReadableByteStream]].
1. If _stream_.[[state]] is not `"readable"`, return.
1. Perform ! ReadableByteStreamControllerClearPendingPullIntos(_controller_).
1. Perform ! ResetQueue(_controller_).
1. Perform ! ReadableByteStreamControllerClearAlgorithms(_controller_).
1. Perform ! ReadableStreamError(_stream_, _e_).

3.13.12. ReadableByteStreamControllerFillHeadPullIntoDescriptor ( controller, size, pullIntoDescriptor )

1. Assert: either _controller_.[[pendingPullIntos]] is empty, or the first element of _controller_.[[pendingPullIntos]] is _pullIntoDescriptor_.
1. Perform ! ReadableByteStreamControllerInvalidateBYOBRequest(_controller_).
1. Set _pullIntoDescriptor_.[[bytesFilled]] to _pullIntoDescriptor_.[[bytesFilled]] + _size_.

3.13.13. ReadableByteStreamControllerFillPullIntoDescriptorFromQueue ( controller, pullIntoDescriptor )

1. Let _elementSize_ be _pullIntoDescriptor_.[[elementSize]].
1. Let _currentAlignedBytes_ be _pullIntoDescriptor_.[[bytesFilled]] − (_pullIntoDescriptor_.[[bytesFilled]] mod _elementSize_).
1. Let _maxBytesToCopy_ be min(_controller_.[[queueTotalSize]], _pullIntoDescriptor_.[[byteLength]] − _pullIntoDescriptor_.[[bytesFilled]]).
1. Let _maxBytesFilled_ be _pullIntoDescriptor_.[[bytesFilled]] + _maxBytesToCopy_.
1. Let _maxAlignedBytes_ be _maxBytesFilled_ − (_maxBytesFilled_ mod _elementSize_).
1. Let _totalBytesToCopyRemaining_ be _maxBytesToCopy_.
1. Let _ready_ be *false*.
1. If _maxAlignedBytes_ > _currentAlignedBytes_,
  1. Set _totalBytesToCopyRemaining_ to _maxAlignedBytes_ − _pullIntoDescriptor_.[[bytesFilled]].
  1. Set _ready_ to *true*.
1. Let _queue_ be _controller_.[[queue]].
1. Repeat the following steps while _totalBytesToCopyRemaining_ > *0*,
  1. Let _headOfQueue_ be the first element of _queue_.
  1. Let _bytesToCopy_ be min(_totalBytesToCopyRemaining_, _headOfQueue_.[[byteLength]]).
  1. Let _destStart_ be _pullIntoDescriptor_.[[byteOffset]] + _pullIntoDescriptor_.[[bytesFilled]].
  1. Perform ! CopyDataBlockBytes(_pullIntoDescriptor_.[[buffer]].[[ArrayBufferData]], _destStart_, _headOfQueue_.[[buffer]].[[ArrayBufferData]], _headOfQueue_.[[byteOffset]], _bytesToCopy_).
  1. If _headOfQueue_.[[byteLength]] is _bytesToCopy_,
    1. Remove the first element of _queue_, shifting all other elements downward (so that the second becomes the first, and so on).
  1. Otherwise,
    1. Set _headOfQueue_.[[byteOffset]] to _headOfQueue_.[[byteOffset]] + _bytesToCopy_.
    1. Set _headOfQueue_.[[byteLength]] to _headOfQueue_.[[byteLength]] − _bytesToCopy_.
  1. Set _controller_.[[queueTotalSize]] to _controller_.[[queueTotalSize]] − _bytesToCopy_.
  1. Perform ! ReadableByteStreamControllerFillHeadPullIntoDescriptor(_controller_, _bytesToCopy_, _pullIntoDescriptor_).
  1. Set _totalBytesToCopyRemaining_ to _totalBytesToCopyRemaining_ − _bytesToCopy_.
1. If _ready_ is *false*,
  1. Assert: _controller_.[[queueTotalSize]] is *0*.
  1. Assert: _pullIntoDescriptor_.[[bytesFilled]] > *0*.
  1. Assert: _pullIntoDescriptor_.[[bytesFilled]] < _pullIntoDescriptor_.[[elementSize]].
1. Return _ready_.

3.13.14. ReadableByteStreamControllerGetDesiredSize ( controller )

1. Let _stream_ be _controller_.[[controlledReadableByteStream]].
1. Let _state_ be _stream_.[[state]].
1. If _state_ is `"errored"`, return *null*.
1. If _state_ is `"closed"`, return *0*.
1. Return _controller_.[[strategyHWM]] − _controller_.[[queueTotalSize]].
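The formula in the last step is observable through the controller's desiredSize getter. A minimal sketch, assuming a high water mark of 16 and an empty queue, so desiredSize is 16 − 0:

```javascript
// Sketch: desiredSize on a byte stream controller equals the strategy's
// high water mark minus the total bytes currently queued.
let captured;
const stream = new ReadableStream({
  type: "bytes",
  start(controller) {
    // start() runs synchronously during construction; nothing has been
    // enqueued yet, so desiredSize is the full high water mark.
    captured = controller.desiredSize;
  }
}, { highWaterMark: 16 });
```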

3.13.15. ReadableByteStreamControllerHandleQueueDrain ( controller )

1. Assert: _controller_.[[controlledReadableByteStream]].[[state]] is `"readable"`.
1. If _controller_.[[queueTotalSize]] is *0* and _controller_.[[closeRequested]] is *true*,
  1. Perform ! ReadableByteStreamControllerClearAlgorithms(_controller_).
  1. Perform ! ReadableStreamClose(_controller_.[[controlledReadableByteStream]]).
1. Otherwise,
  1. Perform ! ReadableByteStreamControllerCallPullIfNeeded(_controller_).

3.13.16. ReadableByteStreamControllerInvalidateBYOBRequest ( controller )

1. If _controller_.[[byobRequest]] is *undefined*, return.
1. Set _controller_.[[byobRequest]].[[associatedReadableByteStreamController]] to *undefined*.
1. Set _controller_.[[byobRequest]].[[view]] to *undefined*.
1. Set _controller_.[[byobRequest]] to *undefined*.

3.13.17. ReadableByteStreamControllerProcessPullIntoDescriptorsUsingQueue ( controller )

1. Assert: _controller_.[[closeRequested]] is *false*.
1. Repeat the following steps while _controller_.[[pendingPullIntos]] is not empty,
  1. If _controller_.[[queueTotalSize]] is *0*, return.
  1. Let _pullIntoDescriptor_ be the first element of _controller_.[[pendingPullIntos]].
  1. If ! ReadableByteStreamControllerFillPullIntoDescriptorFromQueue(_controller_, _pullIntoDescriptor_) is *true*,
    1. Perform ! ReadableByteStreamControllerShiftPendingPullInto(_controller_).
    1. Perform ! ReadableByteStreamControllerCommitPullIntoDescriptor(_controller_.[[controlledReadableByteStream]], _pullIntoDescriptor_).

3.13.18. ReadableByteStreamControllerPullInto ( controller, view )

1. Let _stream_ be _controller_.[[controlledReadableByteStream]].
1. Let _elementSize_ be 1.
1. Let _ctor_ be %DataView%.
1. If _view_ has a [[TypedArrayName]] internal slot (i.e., it is not a `DataView`),
  1. Set _elementSize_ to the element size specified in the typed array constructors table for _view_.[[TypedArrayName]].
  1. Set _ctor_ to the constructor specified in the typed array constructors table for _view_.[[TypedArrayName]].
1. Let _byteOffset_ be _view_.[[ByteOffset]].
1. Let _byteLength_ be _view_.[[ByteLength]].
1. Let _buffer_ be ! TransferArrayBuffer(_view_.[[ViewedArrayBuffer]]).
1. Let _pullIntoDescriptor_ be Record {[[buffer]]: _buffer_, [[byteOffset]]: _byteOffset_, [[byteLength]]: _byteLength_, [[bytesFilled]]: *0*, [[elementSize]]: _elementSize_, [[ctor]]: _ctor_, [[readerType]]: `"byob"`}.
1. If _controller_.[[pendingPullIntos]] is not empty,
  1. Append _pullIntoDescriptor_ as the last element of _controller_.[[pendingPullIntos]].
  1. Return ! ReadableStreamAddReadIntoRequest(_stream_).
1. If _stream_.[[state]] is `"closed"`,
  1. Let _emptyView_ be ! Construct(_ctor_, « _pullIntoDescriptor_.[[buffer]], _pullIntoDescriptor_.[[byteOffset]], *0* »).
  1. Return a promise resolved with ! ReadableStreamCreateReadResult(_emptyView_, *true*, _stream_.[[reader]].[[forAuthorCode]]).
1. If _controller_.[[queueTotalSize]] > *0*,
  1. If ! ReadableByteStreamControllerFillPullIntoDescriptorFromQueue(_controller_, _pullIntoDescriptor_) is *true*,
    1. Let _filledView_ be ! ReadableByteStreamControllerConvertPullIntoDescriptor(_pullIntoDescriptor_).
    1. Perform ! ReadableByteStreamControllerHandleQueueDrain(_controller_).
    1. Return a promise resolved with ! ReadableStreamCreateReadResult(_filledView_, *false*, _stream_.[[reader]].[[forAuthorCode]]).
  1. If _controller_.[[closeRequested]] is *true*,
    1. Let _e_ be a *TypeError* exception.
    1. Perform ! ReadableByteStreamControllerError(_controller_, _e_).
    1. Return a promise rejected with _e_.
1. Append _pullIntoDescriptor_ as the last element of _controller_.[[pendingPullIntos]].
1. Let _promise_ be ! ReadableStreamAddReadIntoRequest(_stream_).
1. Perform ! ReadableByteStreamControllerCallPullIfNeeded(_controller_).
1. Return _promise_.

3.13.19. ReadableByteStreamControllerRespond ( controller, bytesWritten )

1. Let _bytesWritten_ be ? ToNumber(_bytesWritten_).
1. If ! IsFiniteNonNegativeNumber(_bytesWritten_) is *false*,
  1. Throw a *RangeError* exception.
1. Assert: _controller_.[[pendingPullIntos]] is not empty.
1. Perform ? ReadableByteStreamControllerRespondInternal(_controller_, _bytesWritten_).

3.13.20. ReadableByteStreamControllerRespondInClosedState ( controller, firstDescriptor )

1. Set _firstDescriptor_.[[buffer]] to ! TransferArrayBuffer(_firstDescriptor_.[[buffer]]).
1. Assert: _firstDescriptor_.[[bytesFilled]] is *0*.
1. Let _stream_ be _controller_.[[controlledReadableByteStream]].
1. If ! ReadableStreamHasBYOBReader(_stream_) is *true*,
  1. Repeat the following steps while ! ReadableStreamGetNumReadIntoRequests(_stream_) > *0*,
    1. Let _pullIntoDescriptor_ be ! ReadableByteStreamControllerShiftPendingPullInto(_controller_).
    1. Perform ! ReadableByteStreamControllerCommitPullIntoDescriptor(_stream_, _pullIntoDescriptor_).

3.13.21. ReadableByteStreamControllerRespondInReadableState ( controller, bytesWritten, pullIntoDescriptor )

1. If _pullIntoDescriptor_.[[bytesFilled]] + _bytesWritten_ > _pullIntoDescriptor_.[[byteLength]], throw a *RangeError* exception.
1. Perform ! ReadableByteStreamControllerFillHeadPullIntoDescriptor(_controller_, _bytesWritten_, _pullIntoDescriptor_).
1. If _pullIntoDescriptor_.[[bytesFilled]] < _pullIntoDescriptor_.[[elementSize]], return.
1. Perform ! ReadableByteStreamControllerShiftPendingPullInto(_controller_).
1. Let _remainderSize_ be _pullIntoDescriptor_.[[bytesFilled]] mod _pullIntoDescriptor_.[[elementSize]].
1. If _remainderSize_ > *0*,
  1. Let _end_ be _pullIntoDescriptor_.[[byteOffset]] + _pullIntoDescriptor_.[[bytesFilled]].
  1. Let _remainder_ be ? CloneArrayBuffer(_pullIntoDescriptor_.[[buffer]], _end_ − _remainderSize_, _remainderSize_, %ArrayBuffer%).
  1. Perform ! ReadableByteStreamControllerEnqueueChunkToQueue(_controller_, _remainder_, *0*, _remainder_.[[ByteLength]]).
1. Set _pullIntoDescriptor_.[[buffer]] to ! TransferArrayBuffer(_pullIntoDescriptor_.[[buffer]]).
1. Set _pullIntoDescriptor_.[[bytesFilled]] to _pullIntoDescriptor_.[[bytesFilled]] − _remainderSize_.
1. Perform ! ReadableByteStreamControllerCommitPullIntoDescriptor(_controller_.[[controlledReadableByteStream]], _pullIntoDescriptor_).
1. Perform ! ReadableByteStreamControllerProcessPullIntoDescriptorsUsingQueue(_controller_).

3.13.22. ReadableByteStreamControllerRespondInternal ( controller, bytesWritten )

1. Let _firstDescriptor_ be the first element of _controller_.[[pendingPullIntos]].
1. Let _stream_ be _controller_.[[controlledReadableByteStream]].
1. If _stream_.[[state]] is `"closed"`,
  1. If _bytesWritten_ is not *0*, throw a *TypeError* exception.
  1. Perform ! ReadableByteStreamControllerRespondInClosedState(_controller_, _firstDescriptor_).
1. Otherwise,
  1. Assert: _stream_.[[state]] is `"readable"`.
  1. Perform ? ReadableByteStreamControllerRespondInReadableState(_controller_, _bytesWritten_, _firstDescriptor_).
1. Perform ! ReadableByteStreamControllerCallPullIfNeeded(_controller_).

3.13.23. ReadableByteStreamControllerRespondWithNewView ( controller, view )

1. Assert: _controller_.[[pendingPullIntos]] is not empty.
1. Let _firstDescriptor_ be the first element of _controller_.[[pendingPullIntos]].
1. If _firstDescriptor_.[[byteOffset]] + _firstDescriptor_.[[bytesFilled]] is not _view_.[[ByteOffset]], throw a *RangeError* exception.
1. If _firstDescriptor_.[[byteLength]] is not _view_.[[ByteLength]], throw a *RangeError* exception.
1. Set _firstDescriptor_.[[buffer]] to _view_.[[ViewedArrayBuffer]].
1. Perform ? ReadableByteStreamControllerRespondInternal(_controller_, _view_.[[ByteLength]]).

3.13.24. ReadableByteStreamControllerShiftPendingPullInto ( controller )

1. Let _descriptor_ be the first element of _controller_.[[pendingPullIntos]].
1. Remove _descriptor_ from _controller_.[[pendingPullIntos]], shifting all other elements downward (so that the second becomes the first, and so on).
1. Perform ! ReadableByteStreamControllerInvalidateBYOBRequest(_controller_).
1. Return _descriptor_.

3.13.25. ReadableByteStreamControllerShouldCallPull ( controller )

1. Let _stream_ be _controller_.[[controlledReadableByteStream]].
1. If _stream_.[[state]] is not `"readable"`, return *false*.
1. If _controller_.[[closeRequested]] is *true*, return *false*.
1. If _controller_.[[started]] is *false*, return *false*.
1. If ! ReadableStreamHasDefaultReader(_stream_) is *true* and ! ReadableStreamGetNumReadRequests(_stream_) > *0*, return *true*.
1. If ! ReadableStreamHasBYOBReader(_stream_) is *true* and ! ReadableStreamGetNumReadIntoRequests(_stream_) > *0*, return *true*.
1. Let _desiredSize_ be ! ReadableByteStreamControllerGetDesiredSize(_controller_).
1. Assert: _desiredSize_ is not *null*.
1. If _desiredSize_ > *0*, return *true*.
1. Return *false*.

3.13.26. SetUpReadableByteStreamController ( stream, controller, startAlgorithm, pullAlgorithm, cancelAlgorithm, highWaterMark, autoAllocateChunkSize )

1. Assert: _stream_.[[readableStreamController]] is *undefined*.
1. If _autoAllocateChunkSize_ is not *undefined*,
  1. Assert: ! IsInteger(_autoAllocateChunkSize_) is *true*.
  1. Assert: _autoAllocateChunkSize_ is positive.
1. Set _controller_.[[controlledReadableByteStream]] to _stream_.
1. Set _controller_.[[pullAgain]] and _controller_.[[pulling]] to *false*.
1. Set _controller_.[[byobRequest]] to *undefined*.
1. Perform ! ResetQueue(_controller_).
1. Set _controller_.[[closeRequested]] and _controller_.[[started]] to *false*.
1. Set _controller_.[[strategyHWM]] to ? ValidateAndNormalizeHighWaterMark(_highWaterMark_).
1. Set _controller_.[[pullAlgorithm]] to _pullAlgorithm_.
1. Set _controller_.[[cancelAlgorithm]] to _cancelAlgorithm_.
1. Set _controller_.[[autoAllocateChunkSize]] to _autoAllocateChunkSize_.
1. Set _controller_.[[pendingPullIntos]] to a new empty List.
1. Set _stream_.[[readableStreamController]] to _controller_.
1. Let _startResult_ be the result of performing _startAlgorithm_.
1. Let _startPromise_ be a promise resolved with _startResult_.
1. Upon fulfillment of _startPromise_,
  1. Set _controller_.[[started]] to *true*.
  1. Assert: _controller_.[[pulling]] is *false*.
  1. Assert: _controller_.[[pullAgain]] is *false*.
  1. Perform ! ReadableByteStreamControllerCallPullIfNeeded(_controller_).
1. Upon rejection of _startPromise_ with reason _r_,
  1. Perform ! ReadableByteStreamControllerError(_controller_, _r_).

3.13.27. SetUpReadableByteStreamControllerFromUnderlyingSource ( stream, underlyingByteSource, highWaterMark )

1. Assert: _underlyingByteSource_ is not *undefined*.
1. Let _controller_ be ObjectCreate(the original value of `ReadableByteStreamController`'s `prototype` property).
1. Let _startAlgorithm_ be the following steps:
  1. Return ? InvokeOrNoop(_underlyingByteSource_, `"start"`, « _controller_ »).
1. Let _pullAlgorithm_ be ? CreateAlgorithmFromUnderlyingMethod(_underlyingByteSource_, `"pull"`, *0*, « _controller_ »).
1. Let _cancelAlgorithm_ be ? CreateAlgorithmFromUnderlyingMethod(_underlyingByteSource_, `"cancel"`, *1*, « »).
1. Let _autoAllocateChunkSize_ be ? GetV(_underlyingByteSource_, `"autoAllocateChunkSize"`).
1. If _autoAllocateChunkSize_ is not *undefined*,
  1. Set _autoAllocateChunkSize_ to ? ToNumber(_autoAllocateChunkSize_).
  1. If ! IsInteger(_autoAllocateChunkSize_) is *false*, or if _autoAllocateChunkSize_ ≤ *0*, throw a *RangeError* exception.
1. Perform ? SetUpReadableByteStreamController(_stream_, _controller_, _startAlgorithm_, _pullAlgorithm_, _cancelAlgorithm_, _highWaterMark_, _autoAllocateChunkSize_).

3.13.28. SetUpReadableStreamBYOBRequest ( request, controller, view )

1. Assert: ! IsReadableByteStreamController(_controller_) is *true*.
1. Assert: Type(_view_) is Object.
1. Assert: _view_ has a [[ViewedArrayBuffer]] internal slot.
1. Assert: ! IsDetachedBuffer(_view_.[[ViewedArrayBuffer]]) is *false*.
1. Set _request_.[[associatedReadableByteStreamController]] to _controller_.
1. Set _request_.[[view]] to _view_.

4. Writable streams

4.1. Using writable streams

The usual way to write to a writable stream is to simply pipe a readable stream to it. This ensures that backpressure is respected, so that if the writable stream’s underlying sink is not able to accept data as fast as the readable stream can produce it, the readable stream is informed of this and has a chance to slow down its data production.
readableStream.pipeTo(writableStream)
  .then(() => console.log("All data successfully written!"))
  .catch(e => console.error("Something went wrong!", e));
You can also write directly to writable streams by acquiring a writer and using its write() and close() methods. Since writable streams queue any incoming writes, and take care internally to forward them to the underlying sink in sequence, you can indiscriminately write to a writable stream without much ceremony:
function writeArrayToStream(array, writableStream) {
  const writer = writableStream.getWriter();
  array.forEach(chunk => writer.write(chunk).catch(() => {}));

  return writer.close();
}

writeArrayToStream([1, 2, 3, 4, 5], writableStream)
  .then(() => console.log("All done!"))
  .catch(e => console.error("Error with the stream: " + e));

Note how we use .catch(() => {}) to suppress any rejections from the write() method; we’ll be notified of any fatal errors via a rejection of the close() method, and leaving them uncaught would otherwise cause unhandledrejection events and console warnings.

In the previous example we only paid attention to the success or failure of the entire stream, by looking at the promise returned by the writer’s close() method. That promise will reject if anything goes wrong with the stream—initializing it, writing to it, or closing it. And it will fulfill once the stream is successfully closed. Often this is all you care about.

However, if you care about the success of writing a specific chunk, you can use the promise returned by the writer’s write() method:

writer.write("i am a chunk of data")
  .then(() => console.log("chunk successfully written!"))
  .catch(e => console.error(e));

What "success" means is up to a given stream instance (or more precisely, its underlying sink) to decide. For example, for a file stream it could simply mean that the OS has accepted the write, and not necessarily that the chunk has been flushed to disk. Some streams might not be able to give such a signal at all, in which case the returned promise will fulfill immediately.

The desiredSize and ready properties of writable stream writers allow producers to more precisely respond to flow control signals from the stream, to keep memory usage below the stream’s specified high water mark. The following example writes an infinite sequence of random bytes to a stream, using desiredSize to determine how many bytes to generate at a given time, and using ready to wait for the backpressure to subside.
async function writeRandomBytesForever(writableStream) {
  const writer = writableStream.getWriter();

  while (true) {
    await writer.ready;

    const bytes = new Uint8Array(writer.desiredSize);
    crypto.getRandomValues(bytes);

    // Purposefully don’t await; awaiting writer.ready is enough.
    writer.write(bytes).catch(() => {});
  }
}

writeRandomBytesForever(myWritableStream).catch(e => console.error("Something broke", e));

Note how we don’t await the promise returned by write(); this would be redundant with awaiting the ready promise. Additionally, similar to a previous example, we use the .catch(() => {}) pattern on the promises returned by write(); in this case we’ll be notified about any failures awaiting the ready promise.

To further emphasize how it’s a bad idea to await the promise returned by write(), consider a modification of the above example, where we continue to use the WritableStreamDefaultWriter interface directly, but we don’t control how many bytes we have to write at a given time. In that case, the backpressure-respecting code looks the same:
async function writeSuppliedBytesForever(writableStream, getBytes) {
  const writer = writableStream.getWriter();

  while (true) {
    await writer.ready;

    const bytes = getBytes();
    writer.write(bytes).catch(() => {});
  }
}

Unlike the previous example, where—because we were always writing exactly writer.desiredSize bytes each time—the write() and ready promises were synchronized, in this case it’s quite possible that the ready promise fulfills before the one returned by write() does. Remember, the ready promise fulfills when the desired size becomes positive, which might be before the write succeeds (especially in cases with a larger high water mark).

In other words, awaiting the return value of write() means you never queue up writes in the stream’s internal queue, instead only executing a write after the previous one succeeds, which can result in low throughput.

4.2. Class WritableStream

4.2.1. Class definition

This section is non-normative.

If one were to write the WritableStream class in something close to the syntax of [ECMASCRIPT], it would look like

class WritableStream {
  constructor(underlyingSink = {}, strategy = {})

  get locked()

  abort(reason)
  close()
  getWriter()
}

4.2.2. Internal slots

Instances of WritableStream are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[backpressure]] The backpressure signal set by the controller
[[closeRequest]] The promise returned from the writer close() method
[[inFlightWriteRequest]] A slot set to the promise for the current in-flight write operation while the underlying sink’s write algorithm is executing and has not yet fulfilled, used to prevent reentrant calls
[[inFlightCloseRequest]] A slot set to the promise for the current in-flight close operation while the underlying sink’s close algorithm is executing and has not yet fulfilled, used to prevent the abort() method from interrupting close
[[pendingAbortRequest]] A Record containing the promise returned from abort() and the reason passed to abort()
[[state]] A string containing the stream’s current state, used internally; one of "writable", "closed", "erroring", or "errored"
[[storedError]] A value indicating how the stream failed, to be given as a failure reason or exception when trying to operate on the stream while in the "errored" state
[[writableStreamController]] A WritableStreamDefaultController created with the ability to control the state and queue of this stream; also used for the IsWritableStream brand check
[[writer]] A WritableStreamDefaultWriter instance, if the stream is locked to a writer, or undefined if it is not
[[writeRequests]] A List of promises representing the stream’s internal queue of write requests not yet processed by the underlying sink

The [[inFlightCloseRequest]] slot and [[closeRequest]] slot are mutually exclusive. Similarly, no element will be removed from [[writeRequests]] while [[inFlightWriteRequest]] is not undefined. Implementations can optimize storage for these slots based on these invariants.

4.2.3. new WritableStream(underlyingSink = {}, strategy = {})

The underlyingSink argument represents the underlying sink, as described in § 4.2.4 Underlying sink API.

The strategy argument represents the stream’s queuing strategy, as described in § 6.1.1 The queuing strategy API. If it is not provided, the default behavior will be the same as a CountQueuingStrategy with a high water mark of 1.

1. Perform ! InitializeWritableStream(_this_).
1. Let _size_ be ? GetV(_strategy_, `"size"`).
1. Let _highWaterMark_ be ? GetV(_strategy_, `"highWaterMark"`).
1. Let _type_ be ? GetV(_underlyingSink_, `"type"`).
1. If _type_ is not *undefined*, throw a *RangeError* exception.

This is to allow us to add new potential types in the future, without backward-compatibility concerns.

1. Let _sizeAlgorithm_ be ? MakeSizeAlgorithmFromSizeFunction(_size_).
1. If _highWaterMark_ is *undefined*, let _highWaterMark_ be *1*.
1. Set _highWaterMark_ to ? ValidateAndNormalizeHighWaterMark(_highWaterMark_).
1. Perform ? SetUpWritableStreamDefaultControllerFromUnderlyingSink(*this*, _underlyingSink_, _highWaterMark_, _sizeAlgorithm_).
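The constructor steps above can be observed from script. A minimal sketch, assuming a spec-conformant implementation: passing any "type" property on the underlying sink throws a RangeError, and omitting the strategy is equivalent to a high water mark of 1.

```javascript
// Sketch: the constructor rejects a "type" property on the underlying
// sink (reserved for future use) with a RangeError.
let threw = false;
try {
  new WritableStream({ type: "bytes" });
} catch (e) {
  threw = e instanceof RangeError;
}

// Explicitly spelling out the default: a count-based strategy with a
// high water mark of 1.
const stream = new WritableStream({}, { highWaterMark: 1 });
```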

4.2.4. Underlying sink API

This section is non-normative.

The WritableStream() constructor accepts as its first argument a JavaScript object representing the underlying sink. Such objects can contain any of the following properties:

start(controller)

A function that is called immediately during creation of the WritableStream.

Typically this is used to acquire access to the underlying sink resource being represented.

If this setup process is asynchronous, it can return a promise to signal success or failure; a rejected promise will error the stream. Any thrown exceptions will be re-thrown by the WritableStream() constructor.

write(chunk, controller)

A function that is called when a new chunk of data is ready to be written to the underlying sink. The stream implementation guarantees that this function will be called only after previous writes have succeeded, and never before start() has succeeded or after close() or abort() have been called.

This function is used to actually send the data to the resource presented by the underlying sink, for example by calling a lower-level API.

If the process of writing data is asynchronous, and communicates success or failure signals back to its user, then this function can return a promise to signal success or failure. This promise return value will be communicated back to the caller of writer.write(), so they can monitor that individual write. Throwing an exception is treated the same as returning a rejected promise.

Note that such signals are not always available; compare e.g. § 8.6 A writable stream with no backpressure or success signals with § 8.7 A writable stream with backpressure and success signals. In such cases, it’s best to not return anything.

The promise potentially returned by this function also governs whether the given chunk counts as written for the purposes of computing the desired size needed to fill the stream’s internal queue. That is, during the time it takes the promise to settle, writer.desiredSize will stay at its previous value, only increasing to signal the desire for more chunks once the write succeeds.

close()

A function that is called after the producer signals, via writer.close(), that they are done writing chunks to the stream, and subsequently all queued-up writes have successfully completed.

This function can perform any actions necessary to finalize or flush writes to the underlying sink, and release access to any held resources.

If the shutdown process is asynchronous, the function can return a promise to signal success or failure; the result will be communicated via the return value of the called writer.close() method. Additionally, a rejected promise will error the stream, instead of letting it close successfully. Throwing an exception is treated the same as returning a rejected promise.

abort(reason)

A function that is called after the producer signals, via stream.abort() or writer.abort(), that they wish to abort the stream. It takes as its argument the same value as was passed to those methods by the producer.

Writable streams can additionally be aborted under certain conditions during piping; see the definition of the pipeTo() method for more details.

This function can clean up any held resources, much like close(), but perhaps with some custom handling.

If the shutdown process is asynchronous, the function can return a promise to signal success or failure; the result will be communicated via the return value of the called abort() method. Throwing an exception is treated the same as returning a rejected promise. Regardless, the stream will be errored with a new TypeError indicating that it was aborted.

The controller argument passed to start() and write() is an instance of WritableStreamDefaultController, and has the ability to error the stream. This is mainly used for bridging the gap with non-promise-based APIs, as seen for example in § 8.6 A writable stream with no backpressure or success signals.

4.2.5. Properties of the WritableStream prototype

4.2.5.1. get locked
The locked getter returns whether or not the writable stream is locked to a writer.
1. If ! IsWritableStream(*this*) is *false*, throw a *TypeError* exception.
1. Return ! IsWritableStreamLocked(*this*).
4.2.5.2. abort(reason)
The abort method aborts the stream, signaling that the producer can no longer successfully write to the stream and it is to be immediately moved to an errored state, with any queued-up writes discarded. This will also execute any abort mechanism of the underlying sink.
1. If ! IsWritableStream(*this*) is *false*, return a promise rejected with a *TypeError* exception.
1. If ! IsWritableStreamLocked(*this*) is *true*, return a promise rejected with a *TypeError* exception.
1. Return ! WritableStreamAbort(*this*, _reason_).
4.2.5.3. close()
The close method closes the stream. The underlying sink will finish processing any previously-written chunks, before invoking its close behavior. During this time any further attempts to write will fail (without erroring the stream).

The method returns a promise that is fulfilled with undefined if all remaining chunks are successfully written and the stream successfully closes, or rejects if an error is encountered during this process.

1. If ! IsWritableStream(*this*) is *false*, return a promise rejected with a *TypeError* exception.
1. If ! IsWritableStreamLocked(*this*) is *true*, return a promise rejected with a *TypeError* exception.
1. If ! WritableStreamCloseQueuedOrInFlight(*this*) is *true*, return a promise rejected with a *TypeError* exception.
1. Return ! WritableStreamClose(*this*).
4.2.5.4. getWriter()
The getWriter method creates a writer (an instance of WritableStreamDefaultWriter) and locks the stream to the new writer. While the stream is locked, no other writer can be acquired until this one is released.

This functionality is especially useful for creating abstractions that desire the ability to write to a stream without interruption or interleaving. By getting a writer for the stream, you can ensure nobody else can write at the same time, which would cause the resulting written data to be unpredictable and probably useless.

1. If ! IsWritableStream(*this*) is *false*, throw a *TypeError* exception.
1. Return ? AcquireWritableStreamDefaultWriter(*this*).
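A short sketch of the exclusivity this provides: while one writer holds the lock, a second getWriter() call throws, and releasing the lock makes the stream available again.

```javascript
const ws = new WritableStream();
const writer = ws.getWriter();

// While locked, acquiring a second writer fails with a TypeError.
let secondAcquireFailed = false;
try {
  ws.getWriter();
} catch (e) {
  secondAcquireFailed = e instanceof TypeError;
}

writer.releaseLock();
const writer2 = ws.getWriter(); // succeeds once the first lock is released
```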

4.3. General writable stream abstract operations

The following abstract operations, unlike most in this specification, are meant to be generally useful to other specifications, instead of just being part of the implementation of this spec’s classes.

4.3.1. AcquireWritableStreamDefaultWriter ( stream )

1. Return ? Construct(`WritableStreamDefaultWriter`, « _stream_ »).

4.3.2. CreateWritableStream ( startAlgorithm, writeAlgorithm, closeAlgorithm, abortAlgorithm [, highWaterMark [, sizeAlgorithm ] ] )

This abstract operation is meant to be called from other specifications that wish to create WritableStream instances. The writeAlgorithm, closeAlgorithm and abortAlgorithm algorithms must return promises; if supplied, sizeAlgorithm must be an algorithm accepting chunk objects and returning a number; and if supplied, highWaterMark must be a non-negative, non-NaN number.

CreateWritableStream throws an exception if and only if the supplied startAlgorithm throws.

1. If _highWaterMark_ was not passed, set it to *1*.
1. If _sizeAlgorithm_ was not passed, set it to an algorithm that returns *1*.
1. Assert: ! IsNonNegativeNumber(_highWaterMark_) is *true*.
1. Let _stream_ be ObjectCreate(the original value of `WritableStream`'s `prototype` property).
1. Perform ! InitializeWritableStream(_stream_).
1. Let _controller_ be ObjectCreate(the original value of `WritableStreamDefaultController`'s `prototype` property).
1. Perform ? SetUpWritableStreamDefaultController(_stream_, _controller_, _startAlgorithm_, _writeAlgorithm_, _closeAlgorithm_, _abortAlgorithm_, _highWaterMark_, _sizeAlgorithm_).
1. Return _stream_.

4.3.3. InitializeWritableStream ( stream )

1. Set _stream_.[[state]] to `"writable"`.
1. Set _stream_.[[storedError]], _stream_.[[writer]], _stream_.[[writableStreamController]], _stream_.[[inFlightWriteRequest]], _stream_.[[closeRequest]], _stream_.[[inFlightCloseRequest]] and _stream_.[[pendingAbortRequest]] to *undefined*.
1. Set _stream_.[[writeRequests]] to a new empty List.
1. Set _stream_.[[backpressure]] to *false*.

4.3.4. IsWritableStream ( x )

1. If Type(_x_) is not Object, return *false*.
1. If _x_ does not have a [[writableStreamController]] internal slot, return *false*.
1. Return *true*.

4.3.5. IsWritableStreamLocked ( stream )

This abstract operation is meant to be called from other specifications that may wish to query whether or not a writable stream is locked to a writer.

1. Assert: ! IsWritableStream(_stream_) is *true*.
1. If _stream_.[[writer]] is *undefined*, return *false*.
1. Return *true*.

4.3.6. WritableStreamAbort ( stream, reason )

1. Let _state_ be _stream_.[[state]].
1. If _state_ is `"closed"` or `"errored"`, return a promise resolved with *undefined*.
1. If _stream_.[[pendingAbortRequest]] is not *undefined*, return _stream_.[[pendingAbortRequest]].[[promise]].
1. Assert: _state_ is `"writable"` or `"erroring"`.
1. Let _wasAlreadyErroring_ be *false*.
1. If _state_ is `"erroring"`,
    1. Set _wasAlreadyErroring_ to *true*.
    1. Set _reason_ to *undefined*.
1. Let _promise_ be a new promise.
1. Set _stream_.[[pendingAbortRequest]] to Record {[[promise]]: _promise_, [[reason]]: _reason_, [[wasAlreadyErroring]]: _wasAlreadyErroring_}.
1. If _wasAlreadyErroring_ is *false*, perform ! WritableStreamStartErroring(_stream_, _reason_).
1. Return _promise_.

4.3.7. WritableStreamClose ( stream )

1. Let _state_ be _stream_.[[state]].
1. If _state_ is `"closed"` or `"errored"`, return a promise rejected with a *TypeError* exception.
1. Assert: _state_ is `"writable"` or `"erroring"`.
1. Assert: ! WritableStreamCloseQueuedOrInFlight(_stream_) is *false*.
1. Let _promise_ be a new promise.
1. Set _stream_.[[closeRequest]] to _promise_.
1. Let _writer_ be _stream_.[[writer]].
1. If _writer_ is not *undefined*, and _stream_.[[backpressure]] is *true*, and _state_ is `"writable"`, resolve _writer_.[[readyPromise]] with *undefined*.
1. Perform ! WritableStreamDefaultControllerClose(_stream_.[[writableStreamController]]).
1. Return _promise_.

4.4. Writable stream abstract operations used by controllers

To allow future flexibility to add different writable stream behaviors (similar to the distinction between default readable streams and readable byte streams), much of the internal state of a writable stream is encapsulated by the WritableStreamDefaultController class.

The abstract operations in this section are interfaces that are used by the controller implementation to affect its associated WritableStream object, translating the controller’s internal state changes into developer-facing results visible through the WritableStream's public API.

4.4.1. WritableStreamAddWriteRequest ( stream )

1. Assert: ! IsWritableStreamLocked(_stream_) is *true*.
1. Assert: _stream_.[[state]] is `"writable"`.
1. Let _promise_ be a new promise.
1. Append _promise_ as the last element of _stream_.[[writeRequests]].
1. Return _promise_.

4.4.2. WritableStreamDealWithRejection ( stream, error )

1. Let _state_ be _stream_.[[state]].
1. If _state_ is `"writable"`,
    1. Perform ! WritableStreamStartErroring(_stream_, _error_).
    1. Return.
1. Assert: _state_ is `"erroring"`.
1. Perform ! WritableStreamFinishErroring(_stream_).

4.4.3. WritableStreamStartErroring ( stream, reason )

1. Assert: _stream_.[[storedError]] is *undefined*.
1. Assert: _stream_.[[state]] is `"writable"`.
1. Let _controller_ be _stream_.[[writableStreamController]].
1. Assert: _controller_ is not *undefined*.
1. Set _stream_.[[state]] to `"erroring"`.
1. Set _stream_.[[storedError]] to _reason_.
1. Let _writer_ be _stream_.[[writer]].
1. If _writer_ is not *undefined*, perform ! WritableStreamDefaultWriterEnsureReadyPromiseRejected(_writer_, _reason_).
1. If ! WritableStreamHasOperationMarkedInFlight(_stream_) is *false* and _controller_.[[started]] is *true*, perform ! WritableStreamFinishErroring(_stream_).

4.4.4. WritableStreamFinishErroring ( stream )

1. Assert: _stream_.[[state]] is `"erroring"`.
1. Assert: ! WritableStreamHasOperationMarkedInFlight(_stream_) is *false*.
1. Set _stream_.[[state]] to `"errored"`.
1. Perform ! _stream_.[[writableStreamController]].[[ErrorSteps]]().
1. Let _storedError_ be _stream_.[[storedError]].
1. Repeat for each _writeRequest_ that is an element of _stream_.[[writeRequests]],
    1. Reject _writeRequest_ with _storedError_.
1. Set _stream_.[[writeRequests]] to an empty List.
1. If _stream_.[[pendingAbortRequest]] is *undefined*,
    1. Perform ! WritableStreamRejectCloseAndClosedPromiseIfNeeded(_stream_).
    1. Return.
1. Let _abortRequest_ be _stream_.[[pendingAbortRequest]].
1. Set _stream_.[[pendingAbortRequest]] to *undefined*.
1. If _abortRequest_.[[wasAlreadyErroring]] is *true*,
    1. Reject _abortRequest_.[[promise]] with _storedError_.
    1. Perform ! WritableStreamRejectCloseAndClosedPromiseIfNeeded(_stream_).
    1. Return.
1. Let _promise_ be ! _stream_.[[writableStreamController]].[[AbortSteps]](_abortRequest_.[[reason]]).
1. Upon fulfillment of _promise_,
    1. Resolve _abortRequest_.[[promise]] with *undefined*.
    1. Perform ! WritableStreamRejectCloseAndClosedPromiseIfNeeded(_stream_).
1. Upon rejection of _promise_ with reason _reason_,
    1. Reject _abortRequest_.[[promise]] with _reason_.
    1. Perform ! WritableStreamRejectCloseAndClosedPromiseIfNeeded(_stream_).

4.4.5. WritableStreamFinishInFlightWrite ( stream )

1. Assert: _stream_.[[inFlightWriteRequest]] is not *undefined*.
1. Resolve _stream_.[[inFlightWriteRequest]] with *undefined*.
1. Set _stream_.[[inFlightWriteRequest]] to *undefined*.

4.4.6. WritableStreamFinishInFlightWriteWithError ( stream, error )

1. Assert: _stream_.[[inFlightWriteRequest]] is not *undefined*.
1. Reject _stream_.[[inFlightWriteRequest]] with _error_.
1. Set _stream_.[[inFlightWriteRequest]] to *undefined*.
1. Assert: _stream_.[[state]] is `"writable"` or `"erroring"`.
1. Perform ! WritableStreamDealWithRejection(_stream_, _error_).

4.4.7. WritableStreamFinishInFlightClose ( stream )

1. Assert: _stream_.[[inFlightCloseRequest]] is not *undefined*.
1. Resolve _stream_.[[inFlightCloseRequest]] with *undefined*.
1. Set _stream_.[[inFlightCloseRequest]] to *undefined*.
1. Let _state_ be _stream_.[[state]].
1. Assert: _state_ is `"writable"` or `"erroring"`.
1. If _state_ is `"erroring"`,
    1. Set _stream_.[[storedError]] to *undefined*.
    1. If _stream_.[[pendingAbortRequest]] is not *undefined*,
        1. Resolve _stream_.[[pendingAbortRequest]].[[promise]] with *undefined*.
        1. Set _stream_.[[pendingAbortRequest]] to *undefined*.
1. Set _stream_.[[state]] to `"closed"`.
1. Let _writer_ be _stream_.[[writer]].
1. If _writer_ is not *undefined*, resolve _writer_.[[closedPromise]] with *undefined*.
1. Assert: _stream_.[[pendingAbortRequest]] is *undefined*.
1. Assert: _stream_.[[storedError]] is *undefined*.

4.4.8. WritableStreamFinishInFlightCloseWithError ( stream, error )

1. Assert: _stream_.[[inFlightCloseRequest]] is not *undefined*.
1. Reject _stream_.[[inFlightCloseRequest]] with _error_.
1. Set _stream_.[[inFlightCloseRequest]] to *undefined*.
1. Assert: _stream_.[[state]] is `"writable"` or `"erroring"`.
1. If _stream_.[[pendingAbortRequest]] is not *undefined*,
    1. Reject _stream_.[[pendingAbortRequest]].[[promise]] with _error_.
    1. Set _stream_.[[pendingAbortRequest]] to *undefined*.
1. Perform ! WritableStreamDealWithRejection(_stream_, _error_).

4.4.9. WritableStreamCloseQueuedOrInFlight ( stream )

1. If _stream_.[[closeRequest]] is *undefined* and _stream_.[[inFlightCloseRequest]] is *undefined*, return *false*.
1. Return *true*.

4.4.10. WritableStreamHasOperationMarkedInFlight ( stream )

1. If _stream_.[[inFlightWriteRequest]] is *undefined* and _stream_.[[inFlightCloseRequest]] is *undefined*, return *false*.
1. Return *true*.

4.4.11. WritableStreamMarkCloseRequestInFlight ( stream )

1. Assert: _stream_.[[inFlightCloseRequest]] is *undefined*.
1. Assert: _stream_.[[closeRequest]] is not *undefined*.
1. Set _stream_.[[inFlightCloseRequest]] to _stream_.[[closeRequest]].
1. Set _stream_.[[closeRequest]] to *undefined*.

4.4.12. WritableStreamMarkFirstWriteRequestInFlight ( stream )

1. Assert: _stream_.[[inFlightWriteRequest]] is *undefined*.
1. Assert: _stream_.[[writeRequests]] is not empty.
1. Let _writeRequest_ be the first element of _stream_.[[writeRequests]].
1. Remove _writeRequest_ from _stream_.[[writeRequests]], shifting all other elements downward (so that the second becomes the first, and so on).
1. Set _stream_.[[inFlightWriteRequest]] to _writeRequest_.

4.4.13. WritableStreamRejectCloseAndClosedPromiseIfNeeded ( stream )

1. Assert: _stream_.[[state]] is `"errored"`.
1. If _stream_.[[closeRequest]] is not *undefined*,
    1. Assert: _stream_.[[inFlightCloseRequest]] is *undefined*.
    1. Reject _stream_.[[closeRequest]] with _stream_.[[storedError]].
    1. Set _stream_.[[closeRequest]] to *undefined*.
1. Let _writer_ be _stream_.[[writer]].
1. If _writer_ is not *undefined*,
    1. Reject _writer_.[[closedPromise]] with _stream_.[[storedError]].
    1. Set _writer_.[[closedPromise]].[[PromiseIsHandled]] to *true*.

4.4.14. WritableStreamUpdateBackpressure ( stream, backpressure )

1. Assert: _stream_.[[state]] is `"writable"`.
1. Assert: ! WritableStreamCloseQueuedOrInFlight(_stream_) is *false*.
1. Let _writer_ be _stream_.[[writer]].
1. If _writer_ is not *undefined* and _backpressure_ is not _stream_.[[backpressure]],
    1. If _backpressure_ is *true*, set _writer_.[[readyPromise]] to a new promise.
    1. Otherwise,
        1. Assert: _backpressure_ is *false*.
        1. Resolve _writer_.[[readyPromise]] with *undefined*.
1. Set _stream_.[[backpressure]] to _backpressure_.

4.5. Class WritableStreamDefaultWriter

The WritableStreamDefaultWriter class represents a writable stream writer designed to be vended by a WritableStream instance.

4.5.1. Class definition

This section is non-normative.

If one were to write the WritableStreamDefaultWriter class in something close to the syntax of [ECMASCRIPT], it would look like

class WritableStreamDefaultWriter {
  constructor(stream)

  get closed()
  get desiredSize()
  get ready()

  abort(reason)
  close()
  releaseLock()
  write(chunk)
}

4.5.2. Internal slots

Instances of WritableStreamDefaultWriter are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[closedPromise]] A promise returned by the writer’s closed getter
[[ownerWritableStream]] A WritableStream instance that owns this writer
[[readyPromise]] A promise returned by the writer’s ready getter

4.5.3. new WritableStreamDefaultWriter(stream)

The WritableStreamDefaultWriter constructor is generally not meant to be used directly; instead, a stream’s getWriter() method ought to be used.
1. If ! IsWritableStream(_stream_) is *false*, throw a *TypeError* exception.
1. If ! IsWritableStreamLocked(_stream_) is *true*, throw a *TypeError* exception.
1. Set *this*.[[ownerWritableStream]] to _stream_.
1. Set _stream_.[[writer]] to *this*.
1. Let _state_ be _stream_.[[state]].
1. If _state_ is `"writable"`,
    1. If ! WritableStreamCloseQueuedOrInFlight(_stream_) is *false* and _stream_.[[backpressure]] is *true*, set *this*.[[readyPromise]] to a new promise.
    1. Otherwise, set *this*.[[readyPromise]] to a promise resolved with *undefined*.
    1. Set *this*.[[closedPromise]] to a new promise.
1. Otherwise, if _state_ is `"erroring"`,
    1. Set *this*.[[readyPromise]] to a promise rejected with _stream_.[[storedError]].
    1. Set *this*.[[readyPromise]].[[PromiseIsHandled]] to *true*.
    1. Set *this*.[[closedPromise]] to a new promise.
1. Otherwise, if _state_ is `"closed"`,
    1. Set *this*.[[readyPromise]] to a promise resolved with *undefined*.
    1. Set *this*.[[closedPromise]] to a promise resolved with *undefined*.
1. Otherwise,
    1. Assert: _state_ is `"errored"`.
    1. Let _storedError_ be _stream_.[[storedError]].
    1. Set *this*.[[readyPromise]] to a promise rejected with _storedError_.
    1. Set *this*.[[readyPromise]].[[PromiseIsHandled]] to *true*.
    1. Set *this*.[[closedPromise]] to a promise rejected with _storedError_.
    1. Set *this*.[[closedPromise]].[[PromiseIsHandled]] to *true*.

4.5.4. Properties of the WritableStreamDefaultWriter prototype

4.5.4.1. get closed
The closed getter returns a promise that will be fulfilled when the stream becomes closed, or rejected if the stream ever errors or the writer’s lock is released before the stream finishes closing.
1. If ! IsWritableStreamDefaultWriter(*this*) is *false*, return a promise rejected with a *TypeError* exception.
1. Return *this*.[[closedPromise]].
4.5.4.2. get desiredSize
The desiredSize getter returns the desired size to fill the stream’s internal queue. It can be negative, if the queue is over-full. A producer can use this information to determine the right amount of data to write.

It will be null if the stream cannot be successfully written to (due to either being errored, or having an abort queued up). It will return zero if the stream is closed. The getter will throw an exception if invoked when the writer’s lock is released.

1. If ! IsWritableStreamDefaultWriter(*this*) is *false*, throw a *TypeError* exception.
1. If *this*.[[ownerWritableStream]] is *undefined*, throw a *TypeError* exception.
1. Return ! WritableStreamDefaultWriterGetDesiredSize(*this*).
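Concretely, desiredSize is the high water mark minus the total size of queued chunks. With a CountQueuingStrategy (each chunk has size 1) and a high water mark of 3:

```javascript
const ws = new WritableStream({}, new CountQueuingStrategy({ highWaterMark: 3 }));
const writer = ws.getWriter();

const before = writer.desiredSize; // 3: the queue is empty
writer.write("a");                 // queue one chunk without awaiting
const after = writer.desiredSize;  // 2: one chunk of size 1 is queued
```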
4.5.4.3. get ready
The ready getter returns a promise that will be fulfilled when the desired size to fill the stream’s internal queue transitions from non-positive to positive, signaling that it is no longer applying backpressure. Once the desired size to fill the stream’s internal queue dips back to zero or below, the getter will return a new promise that stays pending until the next transition.

If the stream becomes errored or aborted, or the writer’s lock is released, the returned promise will become rejected.

1. If ! IsWritableStreamDefaultWriter(*this*) is *false*, return a promise rejected with a *TypeError* exception.
1. Return *this*.[[readyPromise]].
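A common producer pattern built on this getter (a sketch; `writeAll` is a name chosen here for illustration) awaits writer.ready before each write so that queuing respects backpressure, while not awaiting the write() promise itself, which would needlessly serialize each write with the sink's processing of it:

```javascript
async function writeAll(stream, chunks) {
  const writer = stream.getWriter();
  for (const chunk of chunks) {
    await writer.ready;      // pending while desiredSize is non-positive
    writer.write(chunk)      // don't await: ready governs the pacing
      .catch(() => {});      // failures also surface via ready/close()
  }
  await writer.close();      // fulfills once all writes have succeeded
}
```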
4.5.4.4. abort(reason)
If the writer is active, the abort method behaves the same as that for the associated stream. (Otherwise, it returns a rejected promise.)
1. If ! IsWritableStreamDefaultWriter(*this*) is *false*, return a promise rejected with a *TypeError* exception.
1. If *this*.[[ownerWritableStream]] is *undefined*, return a promise rejected with a *TypeError* exception.
1. Return ! WritableStreamDefaultWriterAbort(*this*, _reason_).
4.5.4.5. close()
If the writer is active, the close method behaves the same as that for the associated stream. (Otherwise, it returns a rejected promise.)
1. If ! IsWritableStreamDefaultWriter(*this*) is *false*, return a promise rejected with a *TypeError* exception.
1. Let _stream_ be *this*.[[ownerWritableStream]].
1. If _stream_ is *undefined*, return a promise rejected with a *TypeError* exception.
1. If ! WritableStreamCloseQueuedOrInFlight(_stream_) is *true*, return a promise rejected with a *TypeError* exception.
1. Return ! WritableStreamDefaultWriterClose(*this*).
4.5.4.6. releaseLock()
The releaseLock method releases the writer’s lock on the corresponding stream. After the lock is released, the writer is no longer active. If the associated stream is errored when the lock is released, the writer will appear errored in the same way from now on; otherwise, the writer will appear closed.

Note that the lock can still be released even if some ongoing writes have not yet finished (i.e. even if the promises returned from previous calls to write() have not yet settled). It’s not necessary to hold the lock on the writer for the duration of the write; the lock instead simply prevents other producers from writing in an interleaved manner.

1. If ! IsWritableStreamDefaultWriter(*this*) is *false*, throw a *TypeError* exception.
1. Let _stream_ be *this*.[[ownerWritableStream]].
1. If _stream_ is *undefined*, return.
1. Assert: _stream_.[[writer]] is not *undefined*.
1. Perform ! WritableStreamDefaultWriterRelease(*this*).
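A sketch illustrating that note: the lock can be released while a write is still pending, immediately freeing the stream for a new writer.

```javascript
const ws = new WritableStream({
  write(chunk) {
    return new Promise(() => {}); // a write that never settles
  }
});

const writer = ws.getWriter();
const pendingWrite = writer.write("chunk"); // settles only when the
                                            // sink's write() settles
writer.releaseLock();                       // allowed despite the
                                            // in-flight write
const writer2 = ws.getWriter();             // the stream is immediately
                                            // available again
```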
4.5.4.7. write(chunk)
The write method writes the given chunk to the writable stream, by waiting until any previous writes have finished successfully, and then sending the chunk to the underlying sink’s write() method. It will return a promise that fulfills with undefined upon a successful write, or rejects if the write fails or the stream becomes errored before the writing process is initiated.

Note that what "success" means is up to the underlying sink; it might indicate simply that the chunk has been accepted, and not necessarily that it is safely saved to its ultimate destination.

1. If ! IsWritableStreamDefaultWriter(*this*) is *false*, return a promise rejected with a *TypeError* exception.
1. If *this*.[[ownerWritableStream]] is *undefined*, return a promise rejected with a *TypeError* exception.
1. Return ! WritableStreamDefaultWriterWrite(*this*, _chunk_).

4.6. Writable stream writer abstract operations

4.6.1. IsWritableStreamDefaultWriter ( x )

1. If Type(_x_) is not Object, return *false*.
1. If _x_ does not have an [[ownerWritableStream]] internal slot, return *false*.
1. Return *true*.

4.6.2. WritableStreamDefaultWriterAbort ( writer, reason )

1. Let _stream_ be _writer_.[[ownerWritableStream]].
1. Assert: _stream_ is not *undefined*.
1. Return ! WritableStreamAbort(_stream_, _reason_).

4.6.3. WritableStreamDefaultWriterClose ( writer )

1. Let _stream_ be _writer_.[[ownerWritableStream]].
1. Assert: _stream_ is not *undefined*.
1. Return ! WritableStreamClose(_stream_).

4.6.4. WritableStreamDefaultWriterCloseWithErrorPropagation ( writer )

This abstract operation helps implement the error propagation semantics of pipeTo().

1. Let _stream_ be _writer_.[[ownerWritableStream]].
1. Assert: _stream_ is not *undefined*.
1. Let _state_ be _stream_.[[state]].
1. If ! WritableStreamCloseQueuedOrInFlight(_stream_) is *true* or _state_ is `"closed"`, return a promise resolved with *undefined*.
1. If _state_ is `"errored"`, return a promise rejected with _stream_.[[storedError]].
1. Assert: _state_ is `"writable"` or `"erroring"`.
1. Return ! WritableStreamDefaultWriterClose(_writer_).

4.6.5. WritableStreamDefaultWriterEnsureClosedPromiseRejected( writer, error )

1. If _writer_.[[closedPromise]].[[PromiseState]] is `"pending"`, reject _writer_.[[closedPromise]] with _error_.
1. Otherwise, set _writer_.[[closedPromise]] to a promise rejected with _error_.
1. Set _writer_.[[closedPromise]].[[PromiseIsHandled]] to *true*.

4.6.6. WritableStreamDefaultWriterEnsureReadyPromiseRejected( writer, error )

1. If _writer_.[[readyPromise]].[[PromiseState]] is `"pending"`, reject _writer_.[[readyPromise]] with _error_.
1. Otherwise, set _writer_.[[readyPromise]] to a promise rejected with _error_.
1. Set _writer_.[[readyPromise]].[[PromiseIsHandled]] to *true*.

4.6.7. WritableStreamDefaultWriterGetDesiredSize ( writer )

1. Let _stream_ be _writer_.[[ownerWritableStream]].
1. Let _state_ be _stream_.[[state]].
1. If _state_ is `"errored"` or `"erroring"`, return *null*.
1. If _state_ is `"closed"`, return *0*.
1. Return ! WritableStreamDefaultControllerGetDesiredSize(_stream_.[[writableStreamController]]).

4.6.8. WritableStreamDefaultWriterRelease ( writer )

1. Let _stream_ be _writer_.[[ownerWritableStream]].
1. Assert: _stream_ is not *undefined*.
1. Assert: _stream_.[[writer]] is _writer_.
1. Let _releasedError_ be a new *TypeError*.
1. Perform ! WritableStreamDefaultWriterEnsureReadyPromiseRejected(_writer_, _releasedError_).
1. Perform ! WritableStreamDefaultWriterEnsureClosedPromiseRejected(_writer_, _releasedError_).
1. Set _stream_.[[writer]] to *undefined*.
1. Set _writer_.[[ownerWritableStream]] to *undefined*.

4.6.9. WritableStreamDefaultWriterWrite ( writer, chunk )

1. Let _stream_ be _writer_.[[ownerWritableStream]].
1. Assert: _stream_ is not *undefined*.
1. Let _controller_ be _stream_.[[writableStreamController]].
1. Let _chunkSize_ be ! WritableStreamDefaultControllerGetChunkSize(_controller_, _chunk_).
1. If _stream_ is not equal to _writer_.[[ownerWritableStream]], return a promise rejected with a *TypeError* exception.
1. Let _state_ be _stream_.[[state]].
1. If _state_ is `"errored"`, return a promise rejected with _stream_.[[storedError]].
1. If ! WritableStreamCloseQueuedOrInFlight(_stream_) is *true* or _state_ is `"closed"`, return a promise rejected with a *TypeError* exception indicating that the stream is closing or closed.
1. If _state_ is `"erroring"`, return a promise rejected with _stream_.[[storedError]].
1. Assert: _state_ is `"writable"`.
1. Let _promise_ be ! WritableStreamAddWriteRequest(_stream_).
1. Perform ! WritableStreamDefaultControllerWrite(_controller_, _chunk_, _chunkSize_).
1. Return _promise_.

4.7. Class WritableStreamDefaultController

The WritableStreamDefaultController class has methods that allow control of a WritableStream's state. When constructing a WritableStream, the underlying sink is given a corresponding WritableStreamDefaultController instance to manipulate.

4.7.1. Class definition

This section is non-normative.

If one were to write the WritableStreamDefaultController class in something close to the syntax of [ECMASCRIPT], it would look like

class WritableStreamDefaultController {
  constructor() // always throws

  error(e)
}

4.7.2. Internal slots

Instances of WritableStreamDefaultController are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[abortAlgorithm]] A promise-returning algorithm, taking one argument (the abort reason), which communicates a requested abort to the underlying sink
[[closeAlgorithm]] A promise-returning algorithm which communicates a requested close to the underlying sink
[[controlledWritableStream]] The WritableStream instance controlled
[[queue]] A List representing the stream’s internal queue of chunks
[[queueTotalSize]] The total size of all the chunks stored in [[queue]] (see § 6.2 Queue-with-sizes operations)
[[started]] A boolean flag indicating whether the underlying sink has finished starting
[[strategyHWM]] A number supplied by the creator of the stream as part of the stream’s queuing strategy, indicating the point at which the stream will apply backpressure to its underlying sink
[[strategySizeAlgorithm]] An algorithm to calculate the size of enqueued chunks, as part of the stream’s queuing strategy
[[writeAlgorithm]] A promise-returning algorithm, taking one argument (the chunk to write), which writes data to the underlying sink

4.7.3. new WritableStreamDefaultController()

The WritableStreamDefaultController constructor cannot be used directly; WritableStreamDefaultController instances are created automatically during WritableStream construction.
1. Throw a *TypeError* exception.

4.7.4. Properties of the WritableStreamDefaultController prototype

4.7.4.1. error(e)
The error method will error the writable stream, making all future interactions with it fail with the given error e.

This method is rarely used, since usually it suffices to return a rejected promise from one of the underlying sink’s methods. However, it can be useful for suddenly shutting down a stream in response to an event outside the normal lifecycle of interactions with the underlying sink.

1. If ! IsWritableStreamDefaultController(*this*) is *false*, throw a *TypeError* exception.
1. Let _state_ be *this*.[[controlledWritableStream]].[[state]].
1. If _state_ is not `"writable"`, return.
1. Perform ! WritableStreamDefaultControllerError(*this*, _e_).
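A sketch of that "outside the normal lifecycle" usage: the controller handed to start() is saved, so that external code can later error the stream in response to some out-of-band event (the "device unplugged" error here is purely illustrative).

```javascript
let ctrl;
const ws = new WritableStream({
  start(c) {
    ctrl = c; // keep the controller for use outside the sink's methods
  },
  write(chunk) {
    // ... deliver the chunk somewhere ...
  }
});

// Later, e.g. in response to an external event:
ctrl.error(new TypeError("device unplugged"));
// All future interactions with ws now fail with this error.
```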

4.7.5. Writable stream default controller internal methods

The following are additional internal methods implemented by each WritableStreamDefaultController instance. The writable stream implementation will call into these.

The reason these are in method form, instead of as abstract operations, is to make it clear that the writable stream implementation is decoupled from the controller implementation, and could in the future be expanded with other controllers, as long as those controllers implemented such internal methods. A similar scenario is seen for readable streams, where there actually are multiple controller types and as such the counterpart internal methods are used polymorphically.

4.7.5.1. [[AbortSteps]]( reason )
1. Let _result_ be the result of performing *this*.[[abortAlgorithm]], passing _reason_.
1. Perform ! WritableStreamDefaultControllerClearAlgorithms(*this*).
1. Return _result_.
4.7.5.2. [[ErrorSteps]]()
1. Perform ! ResetQueue(*this*).

4.8. Writable stream default controller abstract operations

4.8.1. IsWritableStreamDefaultController ( x )

1. If Type(_x_) is not Object, return *false*.
1. If _x_ does not have a [[controlledWritableStream]] internal slot, return *false*.
1. Return *true*.

4.8.2. SetUpWritableStreamDefaultController ( stream, controller, startAlgorithm, writeAlgorithm, closeAlgorithm, abortAlgorithm, highWaterMark, sizeAlgorithm )

1. Assert: ! IsWritableStream(_stream_) is *true*.
1. Assert: _stream_.[[writableStreamController]] is *undefined*.
1. Set _controller_.[[controlledWritableStream]] to _stream_.
1. Set _stream_.[[writableStreamController]] to _controller_.
1. Perform ! ResetQueue(_controller_).
1. Set _controller_.[[started]] to *false*.
1. Set _controller_.[[strategySizeAlgorithm]] to _sizeAlgorithm_.
1. Set _controller_.[[strategyHWM]] to _highWaterMark_.
1. Set _controller_.[[writeAlgorithm]] to _writeAlgorithm_.
1. Set _controller_.[[closeAlgorithm]] to _closeAlgorithm_.
1. Set _controller_.[[abortAlgorithm]] to _abortAlgorithm_.
1. Let _backpressure_ be ! WritableStreamDefaultControllerGetBackpressure(_controller_).
1. Perform ! WritableStreamUpdateBackpressure(_stream_, _backpressure_).
1. Let _startResult_ be the result of performing _startAlgorithm_. (This may throw an exception.)
1. Let _startPromise_ be a promise resolved with _startResult_.
1. Upon fulfillment of _startPromise_,
    1. Assert: _stream_.[[state]] is `"writable"` or `"erroring"`.
    1. Set _controller_.[[started]] to *true*.
    1. Perform ! WritableStreamDefaultControllerAdvanceQueueIfNeeded(_controller_).
1. Upon rejection of _startPromise_ with reason _r_,
    1. Assert: _stream_.[[state]] is `"writable"` or `"erroring"`.
    1. Set _controller_.[[started]] to *true*.
    1. Perform ! WritableStreamDealWithRejection(_stream_, _r_).

4.8.3. SetUpWritableStreamDefaultControllerFromUnderlyingSink ( stream, underlyingSink, highWaterMark, sizeAlgorithm )

1. Assert: _underlyingSink_ is not *undefined*.
1. Let _controller_ be ObjectCreate(the original value of `WritableStreamDefaultController`'s `prototype` property).
1. Let _startAlgorithm_ be the following steps:
    1. Return ? InvokeOrNoop(_underlyingSink_, `"start"`, « _controller_ »).
1. Let _writeAlgorithm_ be ? CreateAlgorithmFromUnderlyingMethod(_underlyingSink_, `"write"`, *1*, « _controller_ »).
1. Let _closeAlgorithm_ be ? CreateAlgorithmFromUnderlyingMethod(_underlyingSink_, `"close"`, *0*, « »).
1. Let _abortAlgorithm_ be ? CreateAlgorithmFromUnderlyingMethod(_underlyingSink_, `"abort"`, *1*, « »).
1. Perform ? SetUpWritableStreamDefaultController(_stream_, _controller_, _startAlgorithm_, _writeAlgorithm_, _closeAlgorithm_, _abortAlgorithm_, _highWaterMark_, _sizeAlgorithm_).

4.8.4. WritableStreamDefaultControllerClearAlgorithms ( controller )

This abstract operation is called once the stream is closed or errored and the algorithms will not be executed any more. By removing the algorithm references it permits the underlying sink object to be garbage collected even if the WritableStream itself is still referenced.

The results of this algorithm are not currently observable, but could become so if JavaScript eventually adds weak references. But even without that factor, implementations will likely want to include similar steps.

This operation will be performed multiple times in some edge cases. After the first time it will do nothing.

1. Set _controller_.[[writeAlgorithm]] to *undefined*.
1. Set _controller_.[[closeAlgorithm]] to *undefined*.
1. Set _controller_.[[abortAlgorithm]] to *undefined*.
1. Set _controller_.[[strategySizeAlgorithm]] to *undefined*.

4.8.5. WritableStreamDefaultControllerClose ( controller )

1. Perform ! EnqueueValueWithSize(_controller_, `"close"`, *0*).
1. Perform ! WritableStreamDefaultControllerAdvanceQueueIfNeeded(_controller_).

4.8.6. WritableStreamDefaultControllerGetChunkSize ( controller, chunk )

1. Let _returnValue_ be the result of performing _controller_.[[strategySizeAlgorithm]], passing in _chunk_, and interpreting the result as an ECMAScript completion value.
1. If _returnValue_ is an abrupt completion,
  1. Perform ! WritableStreamDefaultControllerErrorIfNeeded(_controller_, _returnValue_.[[Value]]).
  1. Return 1.
1. Return _returnValue_.[[Value]].

4.8.7. WritableStreamDefaultControllerGetDesiredSize ( controller )

1. Return _controller_.[[strategyHWM]] − _controller_.[[queueTotalSize]].
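The result of this calculation is observable from developer code through a writer's desiredSize property. A minimal sketch, assuming an environment with the Streams globals (modern browsers, or Node.js 18+):

```javascript
// desiredSize is strategyHWM minus the total size of queued chunks.
// With the default count-based strategy, each chunk has size 1.
const ws = new WritableStream({}, { highWaterMark: 3 });
const writer = ws.getWriter();

console.log(writer.desiredSize); // 3: the queue is empty

writer.write("a"); // enqueues a chunk of size 1 synchronously
console.log(writer.desiredSize); // 2: highWaterMark (3) − queueTotalSize (1)
```

Note that the second value is observed synchronously, before the underlying sink has had a chance to consume the chunk.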

4.8.8. WritableStreamDefaultControllerWrite ( controller, chunk, chunkSize )

1. Let _writeRecord_ be Record {[[chunk]]: _chunk_}.
1. Let _enqueueResult_ be EnqueueValueWithSize(_controller_, _writeRecord_, _chunkSize_).
1. If _enqueueResult_ is an abrupt completion,
  1. Perform ! WritableStreamDefaultControllerErrorIfNeeded(_controller_, _enqueueResult_.[[Value]]).
  1. Return.
1. Let _stream_ be _controller_.[[controlledWritableStream]].
1. If ! WritableStreamCloseQueuedOrInFlight(_stream_) is *false* and _stream_.[[state]] is `"writable"`,
  1. Let _backpressure_ be ! WritableStreamDefaultControllerGetBackpressure(_controller_).
  1. Perform ! WritableStreamUpdateBackpressure(_stream_, _backpressure_).
1. Perform ! WritableStreamDefaultControllerAdvanceQueueIfNeeded(_controller_).

4.8.9. WritableStreamDefaultControllerAdvanceQueueIfNeeded ( controller )

1. Let _stream_ be _controller_.[[controlledWritableStream]].
1. If _controller_.[[started]] is *false*, return.
1. If _stream_.[[inFlightWriteRequest]] is not *undefined*, return.
1. Let _state_ be _stream_.[[state]].
1. Assert: _state_ is not `"closed"` or `"errored"`.
1. If _state_ is `"erroring"`,
  1. Perform ! WritableStreamFinishErroring(_stream_).
  1. Return.
1. If _controller_.[[queue]] is empty, return.
1. Let _writeRecord_ be ! PeekQueueValue(_controller_).
1. If _writeRecord_ is `"close"`, perform ! WritableStreamDefaultControllerProcessClose(_controller_).
1. Otherwise, perform ! WritableStreamDefaultControllerProcessWrite(_controller_, _writeRecord_.[[chunk]]).

4.8.10. WritableStreamDefaultControllerErrorIfNeeded ( controller, error )

1. If _controller_.[[controlledWritableStream]].[[state]] is `"writable"`, perform ! WritableStreamDefaultControllerError(_controller_, _error_).

4.8.11. WritableStreamDefaultControllerProcessClose ( controller )

1. Let _stream_ be _controller_.[[controlledWritableStream]].
1. Perform ! WritableStreamMarkCloseRequestInFlight(_stream_).
1. Perform ! DequeueValue(_controller_).
1. Assert: _controller_.[[queue]] is empty.
1. Let _sinkClosePromise_ be the result of performing _controller_.[[closeAlgorithm]].
1. Perform ! WritableStreamDefaultControllerClearAlgorithms(_controller_).
1. Upon fulfillment of _sinkClosePromise_,
  1. Perform ! WritableStreamFinishInFlightClose(_stream_).
1. Upon rejection of _sinkClosePromise_ with reason _reason_,
  1. Perform ! WritableStreamFinishInFlightCloseWithError(_stream_, _reason_).

4.8.12. WritableStreamDefaultControllerProcessWrite ( controller, chunk )

1. Let _stream_ be _controller_.[[controlledWritableStream]].
1. Perform ! WritableStreamMarkFirstWriteRequestInFlight(_stream_).
1. Let _sinkWritePromise_ be the result of performing _controller_.[[writeAlgorithm]], passing in _chunk_.
1. Upon fulfillment of _sinkWritePromise_,
  1. Perform ! WritableStreamFinishInFlightWrite(_stream_).
  1. Let _state_ be _stream_.[[state]].
  1. Assert: _state_ is `"writable"` or `"erroring"`.
  1. Perform ! DequeueValue(_controller_).
  1. If ! WritableStreamCloseQueuedOrInFlight(_stream_) is *false* and _state_ is `"writable"`,
    1. Let _backpressure_ be ! WritableStreamDefaultControllerGetBackpressure(_controller_).
    1. Perform ! WritableStreamUpdateBackpressure(_stream_, _backpressure_).
  1. Perform ! WritableStreamDefaultControllerAdvanceQueueIfNeeded(_controller_).
1. Upon rejection of _sinkWritePromise_ with _reason_,
  1. If _stream_.[[state]] is `"writable"`, perform ! WritableStreamDefaultControllerClearAlgorithms(_controller_).
  1. Perform ! WritableStreamFinishInFlightWriteWithError(_stream_, _reason_).

4.8.13. WritableStreamDefaultControllerGetBackpressure ( controller )

1. Let _desiredSize_ be ! WritableStreamDefaultControllerGetDesiredSize(_controller_).
1. Return _desiredSize_ ≤ *0*.
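In other words, a writable stream signals backpressure exactly when its desired size drops to zero or below. A sketch, using a hypothetical sink whose write() never settles so that queued chunks are never dequeued:

```javascript
// A sink whose write() never settles; used only to make backpressure visible.
const ws = new WritableStream(
  { write() { return new Promise(() => {}); } },
  { highWaterMark: 1 }
);
const writer = ws.getWriter();

writer.write("chunk"); // queueTotalSize is now 1, so desiredSize is 0
console.log(writer.desiredSize); // 0: desiredSize ≤ 0 means backpressure
// writer.ready is now a pending promise; it resolves once desiredSize > 0.
```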

4.8.14. WritableStreamDefaultControllerError ( controller, error )

1. Let _stream_ be _controller_.[[controlledWritableStream]].
1. Assert: _stream_.[[state]] is `"writable"`.
1. Perform ! WritableStreamDefaultControllerClearAlgorithms(_controller_).
1. Perform ! WritableStreamStartErroring(_stream_, _error_).

5. Transform streams

5.1. Using transform streams

The natural way to use a transform stream is to place it in a pipe between a readable stream and a writable stream. Chunks that travel from the readable stream to the writable stream will be transformed as they pass through the transform stream. Backpressure is respected, so data will not be read faster than it can be transformed and consumed.
readableStream
  .pipeThrough(transformStream)
  .pipeTo(writableStream)
  .then(() => console.log("All data successfully transformed!"))
  .catch(e => console.error("Something went wrong!", e));
You can also use the readable and writable properties of a transform stream directly to access the usual interfaces of a readable stream and writable stream. In this example we supply data to the writable side of the stream using its writer interface. The readable side is then piped to anotherWritableStream.
const writer = transformStream.writable.getWriter();
writer.write("input chunk");
transformStream.readable.pipeTo(anotherWritableStream);
One use of identity transform streams is to easily convert between readable and writable streams. For example, the fetch() API accepts a readable stream request body, but it can be more convenient to write data for uploading via a writable stream interface. Using an identity transform stream addresses this:
const { writable, readable } = new TransformStream();
fetch("...", { body: readable }).then(response => /* ... */);

const writer = writable.getWriter();
writer.write(new Uint8Array([0x73, 0x74, 0x72, 0x65, 0x61, 0x6D, 0x73, 0x21]));
writer.close();

Another use of identity transform streams is to add additional buffering to a pipe. In this example we add extra buffering between readableStream and writableStream.

const writableStrategy = new ByteLengthQueuingStrategy({ highWaterMark: 1024 * 1024 });

readableStream
  .pipeThrough(new TransformStream(undefined, writableStrategy))
  .pipeTo(writableStream);

5.2. Class TransformStream

5.2.1. Class definition

This section is non-normative.

If one were to write the TransformStream class in something close to the syntax of [ECMASCRIPT], it would look like

class TransformStream {
  constructor(transformer = {}, writableStrategy = {}, readableStrategy = {})

  get readable()
  get writable()
}

5.2.2. Internal slots

Instances of TransformStream are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[backpressure]] Whether there was backpressure on [[readable]] the last time it was observed
[[backpressureChangePromise]] A promise which is fulfilled and replaced every time the value of [[backpressure]] changes
[[readable]] The ReadableStream instance controlled by this object
[[transformStreamController]] A TransformStreamDefaultController created with the ability to control [[readable]] and [[writable]]; also used for the IsTransformStream brand check
[[writable]] The WritableStream instance controlled by this object

5.2.3. new TransformStream(transformer = {}, writableStrategy = {}, readableStrategy = {})

The transformer argument represents the transformer, as described in § 5.2.4 Transformer API.

The writableStrategy and readableStrategy arguments are the queuing strategy objects for the writable and readable sides respectively. These are used in the construction of the WritableStream and ReadableStream objects and can be used to add buffering to a TransformStream, in order to smooth out variations in the speed of the transformation, or to increase the amount of buffering in a pipe. If they are not provided, the default behavior will be the same as a CountQueuingStrategy, with respective high water marks of 1 and 0.

1. Let _writableSizeFunction_ be ? GetV(_writableStrategy_, `"size"`).
1. Let _writableHighWaterMark_ be ? GetV(_writableStrategy_, `"highWaterMark"`).
1. Let _readableSizeFunction_ be ? GetV(_readableStrategy_, `"size"`).
1. Let _readableHighWaterMark_ be ? GetV(_readableStrategy_, `"highWaterMark"`).
1. Let _writableType_ be ? GetV(_transformer_, `"writableType"`).
1. If _writableType_ is not *undefined*, throw a *RangeError* exception.
1. Let _writableSizeAlgorithm_ be ? MakeSizeAlgorithmFromSizeFunction(_writableSizeFunction_).
1. If _writableHighWaterMark_ is *undefined*, set _writableHighWaterMark_ to *1*.
1. Set _writableHighWaterMark_ to ? ValidateAndNormalizeHighWaterMark(_writableHighWaterMark_).
1. Let _readableType_ be ? GetV(_transformer_, `"readableType"`).
1. If _readableType_ is not *undefined*, throw a *RangeError* exception.
1. Let _readableSizeAlgorithm_ be ? MakeSizeAlgorithmFromSizeFunction(_readableSizeFunction_).
1. If _readableHighWaterMark_ is *undefined*, set _readableHighWaterMark_ to *0*.
1. Set _readableHighWaterMark_ to ? ValidateAndNormalizeHighWaterMark(_readableHighWaterMark_).
1. Let _startPromise_ be a new promise.
1. Perform ! InitializeTransformStream(*this*, _startPromise_, _writableHighWaterMark_, _writableSizeAlgorithm_, _readableHighWaterMark_, _readableSizeAlgorithm_).
1. Perform ? SetUpTransformStreamDefaultControllerFromTransformer(*this*, _transformer_).
1. Let _startResult_ be ? InvokeOrNoop(_transformer_, `"start"`, « *this*.[[transformStreamController]] »).
1. Resolve _startPromise_ with _startResult_.

5.2.4. Transformer API

This section is non-normative.

The TransformStream() constructor accepts as its first argument a JavaScript object representing the transformer. Such objects can contain any of the following methods:

start(controller)

A function that is called immediately during creation of the TransformStream.

Typically this is used to enqueue prefix chunks, using controller.enqueue(). Those chunks will be read from the readable side but don’t depend on any writes to the writable side.

If this initial process is asynchronous, for example because it takes some effort to acquire the prefix chunks, the function can return a promise to signal success or failure; a rejected promise will error the stream. Any thrown exceptions will be re-thrown by the TransformStream() constructor.

transform(chunk, controller)

A function called when a new chunk originally written to the writable side is ready to be transformed. The stream implementation guarantees that this function will be called only after previous transforms have succeeded, and never before start() has completed or after flush() has been called.

This function performs the actual transformation work of the transform stream. It can enqueue the results using controller.enqueue(). This permits a single chunk written to the writable side to result in zero or multiple chunks on the readable side, depending on how many times controller.enqueue() is called. § 8.9 A transform stream that replaces template tags demonstrates this by sometimes enqueuing zero chunks.

If the process of transforming is asynchronous, this function can return a promise to signal success or failure of the transformation. A rejected promise will error both the readable and writable sides of the transform stream.

If no transform() is supplied, the identity transform is used, which enqueues chunks unchanged from the writable side to the readable side.

flush(controller)

A function called after all chunks written to the writable side have been transformed by successfully passing through transform(), and the writable side is about to be closed.

Typically this is used to enqueue suffix chunks to the readable side, before that too becomes closed. An example can be seen in § 8.9 A transform stream that replaces template tags.

If the flushing process is asynchronous, the function can return a promise to signal success or failure; the result will be communicated to the caller of stream.writable.write(). Additionally, a rejected promise will error both the readable and writable sides of the stream. Throwing an exception is treated the same as returning a rejected promise.

(Note that there is no need to call controller.terminate() inside flush(); the stream is already in the process of successfully closing down, and terminating it would be counterproductive.)

The controller object passed to start(), transform(), and flush() is an instance of TransformStreamDefaultController, and has the ability to enqueue chunks to the readable side, or to terminate or error the stream.
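Putting the three methods together, a transformer might look like the following sketch, which emits a prefix chunk from start(), uppercases each written chunk in transform(), and appends a suffix chunk from flush():

```javascript
// A transformer exercising start(), transform(), and flush().
const ts = new TransformStream({
  start(controller) {
    controller.enqueue("HEADER");   // prefix chunk, independent of any writes
  },
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase());
  },
  flush(controller) {
    controller.enqueue("FOOTER");   // suffix chunk, before the readable side closes
  }
});

const writer = ts.writable.getWriter();
writer.write("hello");
writer.close();

// Reading ts.readable yields "HEADER", "HELLO", "FOOTER", then { done: true }.
```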

5.2.5. Properties of the TransformStream prototype

5.2.5.1. get readable
The readable getter gives access to the readable side of the transform stream.
1. If ! IsTransformStream(*this*) is *false*, throw a *TypeError* exception.
1. Return *this*.[[readable]].
5.2.5.2. get writable
The writable getter gives access to the writable side of the transform stream.
1. If ! IsTransformStream(*this*) is *false*, throw a *TypeError* exception.
1. Return *this*.[[writable]].

5.3. General transform stream abstract operations

5.3.1. CreateTransformStream ( startAlgorithm, transformAlgorithm, flushAlgorithm [, writableHighWaterMark [, writableSizeAlgorithm [, readableHighWaterMark [, readableSizeAlgorithm ] ] ] ] )

This abstract operation is meant to be called from other specifications that wish to create TransformStream instances. The transformAlgorithm and flushAlgorithm algorithms must return promises; if supplied, writableHighWaterMark and readableHighWaterMark must be non-negative, non-NaN numbers; and if supplied, writableSizeAlgorithm and readableSizeAlgorithm must be algorithms accepting chunk objects and returning numbers.

CreateTransformStream throws an exception if and only if the supplied startAlgorithm throws.

1. If _writableHighWaterMark_ was not passed, set it to *1*.
1. If _writableSizeAlgorithm_ was not passed, set it to an algorithm that returns *1*.
1. If _readableHighWaterMark_ was not passed, set it to *0*.
1. If _readableSizeAlgorithm_ was not passed, set it to an algorithm that returns *1*.
1. Assert: ! IsNonNegativeNumber(_writableHighWaterMark_) is *true*.
1. Assert: ! IsNonNegativeNumber(_readableHighWaterMark_) is *true*.
1. Let _stream_ be ObjectCreate(the original value of `TransformStream`'s `prototype` property).
1. Let _startPromise_ be a new promise.
1. Perform ! InitializeTransformStream(_stream_, _startPromise_, _writableHighWaterMark_, _writableSizeAlgorithm_, _readableHighWaterMark_, _readableSizeAlgorithm_).
1. Let _controller_ be ObjectCreate(the original value of `TransformStreamDefaultController`'s `prototype` property).
1. Perform ! SetUpTransformStreamDefaultController(_stream_, _controller_, _transformAlgorithm_, _flushAlgorithm_).
1. Let _startResult_ be the result of performing _startAlgorithm_. (This may throw an exception.)
1. Resolve _startPromise_ with _startResult_.
1. Return _stream_.

5.3.2. InitializeTransformStream ( stream, startPromise, writableHighWaterMark, writableSizeAlgorithm, readableHighWaterMark, readableSizeAlgorithm )

1. Let _startAlgorithm_ be an algorithm that returns _startPromise_.
1. Let _writeAlgorithm_ be the following steps, taking a _chunk_ argument:
  1. Return ! TransformStreamDefaultSinkWriteAlgorithm(_stream_, _chunk_).
1. Let _abortAlgorithm_ be the following steps, taking a _reason_ argument:
  1. Return ! TransformStreamDefaultSinkAbortAlgorithm(_stream_, _reason_).
1. Let _closeAlgorithm_ be the following steps:
  1. Return ! TransformStreamDefaultSinkCloseAlgorithm(_stream_).
1. Set _stream_.[[writable]] to ! CreateWritableStream(_startAlgorithm_, _writeAlgorithm_, _closeAlgorithm_, _abortAlgorithm_, _writableHighWaterMark_, _writableSizeAlgorithm_).
1. Let _pullAlgorithm_ be the following steps:
  1. Return ! TransformStreamDefaultSourcePullAlgorithm(_stream_).
1. Let _cancelAlgorithm_ be the following steps, taking a _reason_ argument:
  1. Perform ! TransformStreamErrorWritableAndUnblockWrite(_stream_, _reason_).
  1. Return a promise resolved with *undefined*.
1. Set _stream_.[[readable]] to ! CreateReadableStream(_startAlgorithm_, _pullAlgorithm_, _cancelAlgorithm_, _readableHighWaterMark_, _readableSizeAlgorithm_).
1. Set _stream_.[[backpressure]] and _stream_.[[backpressureChangePromise]] to *undefined*.

The [[backpressure]] slot is set to *undefined* so that it can be initialized by TransformStreamSetBackpressure. Alternatively, implementations can use a strictly boolean value for [[backpressure]] and change the way it is initialized. This will not be visible to user code so long as the initialization is correctly completed before _transformer_’s start() method is called.

1. Perform ! TransformStreamSetBackpressure(_stream_, *true*).
1. Set _stream_.[[transformStreamController]] to *undefined*.

5.3.3. IsTransformStream ( x )

1. If Type(_x_) is not Object, return *false*.
1. If _x_ does not have a [[transformStreamController]] internal slot, return *false*.
1. Return *true*.

5.3.4. TransformStreamError ( stream, e )

1. Perform ! ReadableStreamDefaultControllerError(_stream_.[[readable]].[[readableStreamController]], _e_).
1. Perform ! TransformStreamErrorWritableAndUnblockWrite(_stream_, _e_).

This operation works correctly when one or both sides are already errored. As a result, calling algorithms do not need to check stream states when responding to an error condition.

5.3.5. TransformStreamErrorWritableAndUnblockWrite ( stream, e )

1. Perform ! TransformStreamDefaultControllerClearAlgorithms(_stream_.[[transformStreamController]]).
1. Perform ! WritableStreamDefaultControllerErrorIfNeeded(_stream_.[[writable]].[[writableStreamController]], _e_).
1. If _stream_.[[backpressure]] is *true*, perform ! TransformStreamSetBackpressure(_stream_, *false*).

The TransformStreamDefaultSinkWriteAlgorithm abstract operation could be waiting for the promise stored in the [[backpressureChangePromise]] slot to resolve. This call to TransformStreamSetBackpressure ensures that the promise always resolves.

5.3.6. TransformStreamSetBackpressure ( stream, backpressure )

1. Assert: _stream_.[[backpressure]] is not _backpressure_.
1. If _stream_.[[backpressureChangePromise]] is not *undefined*, resolve _stream_.[[backpressureChangePromise]] with *undefined*.
1. Set _stream_.[[backpressureChangePromise]] to a new promise.
1. Set _stream_.[[backpressure]] to _backpressure_.

5.4. Class TransformStreamDefaultController

The TransformStreamDefaultController class has methods that allow manipulation of the associated ReadableStream and WritableStream. When constructing a TransformStream, the transformer object is given a corresponding TransformStreamDefaultController instance to manipulate.

5.4.1. Class definition

This section is non-normative.

If one were to write the TransformStreamDefaultController class in something close to the syntax of [ECMASCRIPT], it would look like

class TransformStreamDefaultController {
  constructor() // always throws

  get desiredSize()

  enqueue(chunk)
  error(reason)
  terminate()
}

5.4.2. Internal slots

Instances of TransformStreamDefaultController are created with the internal slots described in the following table:

Internal Slot Description (non-normative)
[[controlledTransformStream]] The TransformStream instance controlled; also used for the IsTransformStreamDefaultController brand check
[[flushAlgorithm]] A promise-returning algorithm which communicates a requested close to the transformer
[[transformAlgorithm]] A promise-returning algorithm, taking one argument (the chunk to transform), which requests the transformer perform its transformation

5.4.3. new TransformStreamDefaultController()

The TransformStreamDefaultController constructor cannot be used directly; TransformStreamDefaultController instances are created automatically during TransformStream construction.
1. Throw a *TypeError* exception.

5.4.4. Properties of the TransformStreamDefaultController prototype

5.4.4.1. get desiredSize
The desiredSize getter returns the desired size to fill the readable side’s internal queue. It can be negative, if the queue is over-full.
1. If ! IsTransformStreamDefaultController(*this*) is *false*, throw a *TypeError* exception.
1. Let _readableController_ be *this*.[[controlledTransformStream]].[[readable]].[[readableStreamController]].
1. Return ! ReadableStreamDefaultControllerGetDesiredSize(_readableController_).
5.4.4.2. enqueue(chunk)
The enqueue method will enqueue a given chunk in the readable side.
1. If ! IsTransformStreamDefaultController(*this*) is *false*, throw a *TypeError* exception.
1. Perform ? TransformStreamDefaultControllerEnqueue(*this*, _chunk_).
5.4.4.3. error(reason)
The error method will error both the readable side and the writable side of the controlled transform stream, making all future interactions fail with the given reason. Any chunks queued for transformation will be discarded.
1. If ! IsTransformStreamDefaultController(*this*) is *false*, throw a *TypeError* exception.
1. Perform ! TransformStreamDefaultControllerError(*this*, _reason_).
5.4.4.4. terminate()
The terminate method will close the readable side and error the writable side of the controlled transform stream. This is useful when the transformer only needs to consume a portion of the chunks written to the writable side.
1. If ! IsTransformStreamDefaultController(*this*) is *false*, throw a *TypeError* exception.
1. Perform ! TransformStreamDefaultControllerTerminate(*this*).
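A sketch of a hypothetical transformer that consumes only a portion of its input, passing through the first two chunks and then calling terminate():

```javascript
// A hypothetical "first two chunks" transform. After the second chunk,
// terminate() closes the readable side and errors the writable side.
let count = 0;
const firstTwo = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(chunk);
    if (++count === 2) controller.terminate();
  }
});
```

Note that with the default readable high-water mark of 0, transform() is not invoked until the readable side is actually read from; writes queue up on the writable side until then.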

5.5. Transform stream default controller abstract operations

5.5.1. IsTransformStreamDefaultController ( x )

1. If Type(_x_) is not Object, return *false*.
1. If _x_ does not have a [[controlledTransformStream]] internal slot, return *false*.
1. Return *true*.

5.5.2. SetUpTransformStreamDefaultController ( stream, controller, transformAlgorithm, flushAlgorithm )

1. Assert: ! IsTransformStream(_stream_) is *true*.
1. Assert: _stream_.[[transformStreamController]] is *undefined*.
1. Set _controller_.[[controlledTransformStream]] to _stream_.
1. Set _stream_.[[transformStreamController]] to _controller_.
1. Set _controller_.[[transformAlgorithm]] to _transformAlgorithm_.
1. Set _controller_.[[flushAlgorithm]] to _flushAlgorithm_.

5.5.3. SetUpTransformStreamDefaultControllerFromTransformer ( stream, transformer )

1. Assert: _transformer_ is not *undefined*.
1. Let _controller_ be ObjectCreate(the original value of `TransformStreamDefaultController`'s `prototype` property).
1. Let _transformAlgorithm_ be the following steps, taking a _chunk_ argument:
  1. Let _result_ be TransformStreamDefaultControllerEnqueue(_controller_, _chunk_).
  1. If _result_ is an abrupt completion, return a promise rejected with _result_.[[Value]].
  1. Otherwise, return a promise resolved with *undefined*.
1. Let _transformMethod_ be ? GetV(_transformer_, `"transform"`).
1. If _transformMethod_ is not *undefined*,
  1. If ! IsCallable(_transformMethod_) is *false*, throw a *TypeError* exception.
  1. Set _transformAlgorithm_ to the following steps, taking a _chunk_ argument:
    1. Return ! PromiseCall(_transformMethod_, _transformer_, « _chunk_, _controller_ »).
1. Let _flushAlgorithm_ be ? CreateAlgorithmFromUnderlyingMethod(_transformer_, `"flush"`, *0*, « _controller_ »).
1. Perform ! SetUpTransformStreamDefaultController(_stream_, _controller_, _transformAlgorithm_, _flushAlgorithm_).

5.5.4. TransformStreamDefaultControllerClearAlgorithms ( controller )

This abstract operation is called once the stream is closed or errored and the algorithms will not be executed any more. By removing the algorithm references it permits the transformer object to be garbage collected even if the TransformStream itself is still referenced.

The results of this algorithm are not currently observable, but could become so if JavaScript eventually adds weak references. But even without that factor, implementations will likely want to include similar steps.

1. Set _controller_.[[transformAlgorithm]] to *undefined*.
1. Set _controller_.[[flushAlgorithm]] to *undefined*.

5.5.5. TransformStreamDefaultControllerEnqueue ( controller, chunk )

This abstract operation can be called by other specifications that wish to enqueue chunks in the readable side, in the same way a developer would enqueue chunks using the stream’s associated controller object. Specifications should not do this to streams they did not create.

1. Let _stream_ be _controller_.[[controlledTransformStream]].
1. Let _readableController_ be _stream_.[[readable]].[[readableStreamController]].
1. If ! ReadableStreamDefaultControllerCanCloseOrEnqueue(_readableController_) is *false*, throw a *TypeError* exception.
1. Let _enqueueResult_ be ReadableStreamDefaultControllerEnqueue(_readableController_, _chunk_).
1. If _enqueueResult_ is an abrupt completion,
  1. Perform ! TransformStreamErrorWritableAndUnblockWrite(_stream_, _enqueueResult_.[[Value]]).
  1. Throw _stream_.[[readable]].[[storedError]].
1. Let _backpressure_ be ! ReadableStreamDefaultControllerHasBackpressure(_readableController_).
1. If _backpressure_ is not _stream_.[[backpressure]],
  1. Assert: _backpressure_ is *true*.
  1. Perform ! TransformStreamSetBackpressure(_stream_, *true*).

5.5.6. TransformStreamDefaultControllerError ( controller, e )

This abstract operation can be called by other specifications that wish to move a transform stream to an errored state, in the same way a developer would error a stream using its associated controller object. Specifications should not do this to streams they did not create.

1. Perform ! TransformStreamError(_controller_.[[controlledTransformStream]], _e_).

5.5.7. TransformStreamDefaultControllerPerformTransform ( controller, chunk )

1. Let _transformPromise_ be the result of performing _controller_.[[transformAlgorithm]], passing _chunk_.
1. Return the result of reacting to _transformPromise_ with the following rejection steps given the argument _r_:
  1. Perform ! TransformStreamError(_controller_.[[controlledTransformStream]], _r_).
  1. Throw _r_.

5.5.8. TransformStreamDefaultControllerTerminate ( controller )

This abstract operation can be called by other specifications that wish to terminate a transform stream, in the same way a developer-created stream would be closed by its associated controller object. Specifications should not do this to streams they did not create.

1. Let _stream_ be _controller_.[[controlledTransformStream]].
1. Let _readableController_ be _stream_.[[readable]].[[readableStreamController]].
1. If ! ReadableStreamDefaultControllerCanCloseOrEnqueue(_readableController_) is *true*, perform ! ReadableStreamDefaultControllerClose(_readableController_).
1. Let _error_ be a *TypeError* exception indicating that the stream has been terminated.
1. Perform ! TransformStreamErrorWritableAndUnblockWrite(_stream_, _error_).

5.6. Transform stream default sink abstract operations

5.6.1. TransformStreamDefaultSinkWriteAlgorithm ( stream, chunk )

1. Assert: _stream_.[[writable]].[[state]] is `"writable"`.
1. Let _controller_ be _stream_.[[transformStreamController]].
1. If _stream_.[[backpressure]] is *true*,
  1. Let _backpressureChangePromise_ be _stream_.[[backpressureChangePromise]].
  1. Assert: _backpressureChangePromise_ is not *undefined*.
  1. Return the result of reacting to _backpressureChangePromise_ with the following fulfillment steps:
    1. Let _writable_ be _stream_.[[writable]].
    1. Let _state_ be _writable_.[[state]].
    1. If _state_ is `"erroring"`, throw _writable_.[[storedError]].
    1. Assert: _state_ is `"writable"`.
    1. Return ! TransformStreamDefaultControllerPerformTransform(_controller_, _chunk_).
1. Return ! TransformStreamDefaultControllerPerformTransform(_controller_, _chunk_).

5.6.2. TransformStreamDefaultSinkAbortAlgorithm ( stream, reason )

1. Perform ! TransformStreamError(_stream_, _reason_).
1. Return a promise resolved with *undefined*.

5.6.3. TransformStreamDefaultSinkCloseAlgorithm ( stream )

1. Let _readable_ be _stream_.[[readable]].
1. Let _controller_ be _stream_.[[transformStreamController]].
1. Let _flushPromise_ be the result of performing _controller_.[[flushAlgorithm]].
1. Perform ! TransformStreamDefaultControllerClearAlgorithms(_controller_).
1. Return the result of reacting to _flushPromise_:
  1. If _flushPromise_ was fulfilled, then:
    1. If _readable_.[[state]] is `"errored"`, throw _readable_.[[storedError]].
    1. Let _readableController_ be _readable_.[[readableStreamController]].
    1. If ! ReadableStreamDefaultControllerCanCloseOrEnqueue(_readableController_) is *true*, perform ! ReadableStreamDefaultControllerClose(_readableController_).
  1. If _flushPromise_ was rejected with reason _r_, then:
    1. Perform ! TransformStreamError(_stream_, _r_).
    1. Throw _readable_.[[storedError]].

5.7. Transform stream default source abstract operations

5.7.1. TransformStreamDefaultSourcePullAlgorithm ( stream )

1. Assert: _stream_.[[backpressure]] is *true*.
1. Assert: _stream_.[[backpressureChangePromise]] is not *undefined*.
1. Perform ! TransformStreamSetBackpressure(_stream_, *false*).
1. Return _stream_.[[backpressureChangePromise]].

6. Other stream APIs and operations

6.1. Queuing strategies

6.1.1. The queuing strategy API

This section is non-normative.

The ReadableStream(), WritableStream(), and TransformStream() constructors all accept at least one argument representing an appropriate queuing strategy for the stream being created. Such objects contain the following properties:

size(chunk) (non-byte streams only)

A function that computes and returns the finite non-negative size of the given chunk value.

The result is used to determine backpressure, manifesting via the appropriate desiredSize property: either defaultController.desiredSize, byteController.desiredSize, or writer.desiredSize, depending on where the queuing strategy is being used. For readable streams, it also governs when the underlying source’s pull() method is called.

This function has to be idempotent and not cause side effects; very strange results can occur otherwise.

For readable byte streams, this function is not used, as chunks are always measured in bytes.

highWaterMark

A non-negative number indicating the high water mark of the stream using this queuing strategy.

Any object with these properties can be used when a queuing strategy object is expected. However, we provide two built-in queuing strategy classes that provide a common vocabulary for certain cases: ByteLengthQueuingStrategy and CountQueuingStrategy.
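For example, a plain object can serve as a queuing strategy. The following sketch uses a hypothetical character-count strategy for string chunks, and makes the resulting desired-size bookkeeping visible through the readable stream's controller:

```javascript
// Any object with size() and highWaterMark can be used as a queuing strategy.
const strategy = { highWaterMark: 4, size(chunk) { return chunk.length; } };

let controller;
const rs = new ReadableStream(
  { start(c) { controller = c; c.enqueue("abc"); } }, // size("abc") is 3
  strategy
);

console.log(controller.desiredSize); // 1: highWaterMark (4) − queued size (3)
```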

6.1.2. Class ByteLengthQueuingStrategy

A common queuing strategy when dealing with bytes is to wait until the accumulated byteLength of the incoming chunks reaches a specified high-water mark. As such, this is provided as a built-in queuing strategy that can be used when constructing streams.

When creating a readable stream or writable stream, you can supply a byte-length queuing strategy directly:
const stream = new ReadableStream(
  { ... },
  new ByteLengthQueuingStrategy({ highWaterMark: 16 * 1024 })
);

In this case, 16 KiB worth of chunks can be enqueued by the readable stream’s underlying source before the readable stream implementation starts sending backpressure signals to the underlying source.

const stream = new WritableStream(
  { ... },
  new ByteLengthQueuingStrategy({ highWaterMark: 32 * 1024 })
);

In this case, 32 KiB worth of chunks can be accumulated in the writable stream’s internal queue, waiting for previous writes to the underlying sink to finish, before the writable stream starts sending backpressure signals to any producers.

It is not necessary to use ByteLengthQueuingStrategy with readable byte streams, as they always measure chunks in bytes. Attempting to construct a byte stream with a ByteLengthQueuingStrategy will fail.

6.1.2.1. Class definition

This section is non-normative.

If one were to write the ByteLengthQueuingStrategy class in something close to the syntax of [ECMASCRIPT], it would look like

class ByteLengthQueuingStrategy {
  constructor({ highWaterMark })
  size(chunk)
}

Each ByteLengthQueuingStrategy instance will additionally have an own data property highWaterMark set by its constructor.

6.1.2.2. new ByteLengthQueuingStrategy({ highWaterMark })
The constructor takes a non-negative number for the high-water mark, and stores it on the object as a property.
1. Perform ! CreateDataProperty(*this*, `"highWaterMark"`, _highWaterMark_).
6.1.2.3. Properties of the ByteLengthQueuingStrategy prototype
6.1.2.3.1. size(chunk)
The size method returns the given chunk’s byteLength property. (If the chunk doesn’t have one, it will return undefined, causing the stream using this strategy to error.)

This method is intentionally generic; it does not require that its this value be a ByteLengthQueuingStrategy object.

1. Return ? GetV(_chunk_, `"byteLength"`).

6.1.3. Class CountQueuingStrategy

A common queuing strategy when dealing with streams of generic objects is to simply count the number of chunks that have been accumulated so far, waiting until this number reaches a specified high-water mark. As such, this strategy is also provided out of the box.

When creating a readable stream or writable stream, you can supply a count queuing strategy directly:
const stream = new ReadableStream(
  { ... },
  new CountQueuingStrategy({ highWaterMark: 10 })
);

In this case, 10 chunks (of any kind) can be enqueued by the readable stream’s underlying source before the readable stream implementation starts sending backpressure signals to the underlying source.

const stream = new WritableStream(
  { ... },
  new CountQueuingStrategy({ highWaterMark: 5 })
);

In this case, five chunks (of any kind) can be accumulated in the writable stream’s internal queue, waiting for previous writes to the underlying sink to finish, before the writable stream starts sending backpressure signals to any producers.

6.1.3.1. Class definition

This section is non-normative.

If one were to write the CountQueuingStrategy class in something close to the syntax of [ECMASCRIPT], it would look like

class CountQueuingStrategy {
  constructor({ highWaterMark })
  size(chunk)
}

Each CountQueuingStrategy instance will additionally have an own data property highWaterMark set by its constructor.

6.1.3.2. new CountQueuingStrategy({ highWaterMark })
The constructor takes a non-negative number for the high-water mark, and stores it on the object as a property.
1. Perform ! CreateDataProperty(*this*, `"highWaterMark"`, _highWaterMark_).
6.1.3.3. Properties of the CountQueuingStrategy prototype
6.1.3.3.1. size()
The size method returns one always, so that the total queue size is a count of the number of chunks in the queue.

This method is intentionally generic; it does not require that its this value be a CountQueuingStrategy object.

1. Return *1*.

6.2. Queue-with-sizes operations

The streams in this specification use a "queue-with-sizes" data structure to store queued up values, along with their determined sizes. Various specification objects contain a queue-with-sizes, represented by the object having two paired internal slots, always named [[queue]] and [[queueTotalSize]]. [[queue]] is a List of Records with [[value]] and [[size]] fields, and [[queueTotalSize]] is a JavaScript Number, i.e. a double-precision floating point number.

The following abstract operations are used when operating on objects that contain queues-with-sizes, in order to ensure that the two internal slots stay synchronized.

Due to the limited precision of floating-point arithmetic, the framework specified here, of keeping a running total in the [[queueTotalSize]] slot, is not equivalent to adding up the size of all chunks in [[queue]]. (However, this only makes a difference when there is a huge (~10^15) variance in size between chunks, or when trillions of chunks are enqueued.)

6.2.1. DequeueValue ( container )

1. Assert: _container_ has [[queue]] and [[queueTotalSize]] internal slots.
1. Assert: _container_.[[queue]] is not empty.
1. Let _pair_ be the first element of _container_.[[queue]].
1. Remove _pair_ from _container_.[[queue]], shifting all other elements downward (so that the second becomes the first, and so on).
1. Set _container_.[[queueTotalSize]] to _container_.[[queueTotalSize]] − _pair_.[[size]].
1. If _container_.[[queueTotalSize]] < *0*, set _container_.[[queueTotalSize]] to *0*. (This can occur due to rounding errors.)
1. Return _pair_.[[value]].

6.2.2. EnqueueValueWithSize ( container, value, size )

1. Assert: _container_ has [[queue]] and [[queueTotalSize]] internal slots.
1. Let _size_ be ? ToNumber(_size_).
1. If ! IsFiniteNonNegativeNumber(_size_) is *false*, throw a *RangeError* exception.
1. Append Record {[[value]]: _value_, [[size]]: _size_} as the last element of _container_.[[queue]].
1. Set _container_.[[queueTotalSize]] to _container_.[[queueTotalSize]] + _size_.

6.2.3. PeekQueueValue ( container )

1. Assert: _container_ has [[queue]] and [[queueTotalSize]] internal slots.
1. Assert: _container_.[[queue]] is not empty.
1. Let _pair_ be the first element of _container_.[[queue]].
1. Return _pair_.[[value]].

6.2.4. ResetQueue ( container )

1. Assert: _container_ has [[queue]] and [[queueTotalSize]] internal slots.
1. Set _container_.[[queue]] to a new empty List.
1. Set _container_.[[queueTotalSize]] to *0*.
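
Taken together, these queue-with-sizes operations (including the clamp-to-zero guard in DequeueValue) can be modeled in JavaScript roughly as follows. This is a non-normative sketch; the real operations act on spec-internal slots:

```javascript
// A rough JavaScript model of the queue-with-sizes operations.
class QueueWithSizes {
  constructor() {
    this.queue = [];          // models [[queue]]: records of { value, size }
    this.queueTotalSize = 0;  // models [[queueTotalSize]]
  }

  // Models EnqueueValueWithSize.
  enqueueValueWithSize(value, size) {
    size = Number(size);
    if (!Number.isFinite(size) || size < 0) {
      throw new RangeError("Size must be a finite, non-negative number");
    }
    this.queue.push({ value, size });
    this.queueTotalSize += size;
  }

  // Models DequeueValue, including the rounding-error guard.
  dequeueValue() {
    const pair = this.queue.shift();
    this.queueTotalSize -= pair.size;
    if (this.queueTotalSize < 0) this.queueTotalSize = 0;
    return pair.value;
  }

  // Models PeekQueueValue.
  peekQueueValue() {
    return this.queue[0].value;
  }
}
```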

6.3. Miscellaneous operations

A few abstract operations are used in this specification for utility purposes. We define them here.

6.3.1. CreateAlgorithmFromUnderlyingMethod ( underlyingObject, methodName, algoArgCount, extraArgs )

1. Assert: _underlyingObject_ is not *undefined*.
1. Assert: ! IsPropertyKey(_methodName_) is *true*.
1. Assert: _algoArgCount_ is *0* or *1*.
1. Assert: _extraArgs_ is a List.
1. Let _method_ be ? GetV(_underlyingObject_, _methodName_).
1. If _method_ is not *undefined*,
   1. If ! IsCallable(_method_) is *false*, throw a *TypeError* exception.
   1. If _algoArgCount_ is *0*, return an algorithm that performs the following steps:
      1. Return ! PromiseCall(_method_, _underlyingObject_, _extraArgs_).
   1. Otherwise, return an algorithm that performs the following steps, taking an _arg_ argument:
      1. Let _fullArgs_ be a List consisting of _arg_ followed by the elements of _extraArgs_ in order.
      1. Return ! PromiseCall(_method_, _underlyingObject_, _fullArgs_).
1. Return an algorithm which returns a promise resolved with *undefined*.

6.3.2. InvokeOrNoop ( O, P, args )

InvokeOrNoop is a slight modification of the [ECMASCRIPT] Invoke abstract operation to return undefined when the method is not present.
1. Assert: _O_ is not *undefined*.
1. Assert: ! IsPropertyKey(_P_) is *true*.
1. Assert: _args_ is a List.
1. Let _method_ be ? GetV(_O_, _P_).
1. If _method_ is *undefined*, return *undefined*.
1. Return ? Call(_method_, _O_, _args_).
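
In JavaScript terms, InvokeOrNoop behaves roughly like this non-normative sketch:

```javascript
// A rough JavaScript analogue of InvokeOrNoop: call O[P](...args),
// but return undefined (rather than throwing) when the method is absent.
function invokeOrNoop(O, P, args) {
  const method = O[P]; // may throw, like GetV
  if (method === undefined) {
    return undefined;
  }
  // Like Call, this throws a TypeError if the property is not callable.
  return Reflect.apply(method, O, args);
}
```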

6.3.3. IsFiniteNonNegativeNumber ( v )

1. If ! IsNonNegativeNumber(_v_) is *false*, return *false*.
1. If _v_ is *+∞*, return *false*.
1. Return *true*.

6.3.4. IsNonNegativeNumber ( v )

1. If Type(_v_) is not Number, return *false*.
1. If _v_ is *NaN*, return *false*.
1. If _v_ < *0*, return *false*.
1. Return *true*.
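
These two predicates translate directly into JavaScript:

```javascript
// Direct, non-normative JavaScript translations of the predicates above.
function isNonNegativeNumber(v) {
  if (typeof v !== "number") return false;
  if (Number.isNaN(v)) return false;
  return v >= 0;
}

function isFiniteNonNegativeNumber(v) {
  // Same as above, but additionally rejects +Infinity.
  return isNonNegativeNumber(v) && v !== Infinity;
}
```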

6.3.5. PromiseCall ( F, V, args )

This encapsulates the relevant promise-related parts of the Web IDL call a user object’s operation algorithm for use while we work on moving to Web IDL.
1. Assert: ! IsCallable(_F_) is *true*.
1. Assert: _V_ is not *undefined*.
1. Assert: _args_ is a List.
1. Let _returnValue_ be Call(_F_, _V_, _args_).
1. If _returnValue_ is an abrupt completion, return a promise rejected with _returnValue_.[[Value]].
1. Otherwise, return a promise resolved with _returnValue_.[[Value]].
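
The behavior can be sketched in JavaScript as follows (non-normative):

```javascript
// A rough sketch of PromiseCall: invoke F with this-value V and
// arguments args, converting both the return value and any thrown
// exception into a promise.
function promiseCall(F, V, args) {
  try {
    return Promise.resolve(Reflect.apply(F, V, args));
  } catch (e) {
    return Promise.reject(e);
  }
}
```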

6.3.6. TransferArrayBuffer ( O )

1. Assert: Type(_O_) is Object.
1. Assert: _O_ has an [[ArrayBufferData]] internal slot.
1. Assert: ! IsDetachedBuffer(_O_) is *false*.
1. Let _arrayBufferData_ be _O_.[[ArrayBufferData]].
1. Let _arrayBufferByteLength_ be _O_.[[ArrayBufferByteLength]].
1. Perform ! DetachArrayBuffer(_O_).
1. Return a new ArrayBuffer object (created in the current Realm Record) whose [[ArrayBufferData]] internal slot value is _arrayBufferData_ and whose [[ArrayBufferByteLength]] internal slot value is _arrayBufferByteLength_.
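
Engines perform this operation directly on the internal slots, but its observable effect can be approximated in script. The sketch below uses structuredClone's transfer list (a newer alternative is ArrayBuffer.prototype.transfer()); it is an analogue, not the spec operation itself:

```javascript
// A script-level approximation of TransferArrayBuffer: detach the
// original buffer and move its contents into a new ArrayBuffer.
function transferArrayBuffer(buffer) {
  return structuredClone(buffer, { transfer: [buffer] });
}
```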

6.3.7. ValidateAndNormalizeHighWaterMark ( highWaterMark )

1. Set _highWaterMark_ to ? ToNumber(_highWaterMark_).
1. If _highWaterMark_ is *NaN* or _highWaterMark_ < *0*, throw a *RangeError* exception.

*+∞* is explicitly allowed as a valid high water mark. It causes backpressure to never be applied.

1. Return _highWaterMark_.
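
A non-normative JavaScript sketch of this operation, including the explicit allowance for Infinity:

```javascript
// Sketch of ValidateAndNormalizeHighWaterMark. Note that Infinity
// passes the check: it simply means backpressure is never applied.
function validateAndNormalizeHighWaterMark(highWaterMark) {
  highWaterMark = Number(highWaterMark); // ToNumber
  if (Number.isNaN(highWaterMark) || highWaterMark < 0) {
    throw new RangeError("highWaterMark must be a non-negative number");
  }
  return highWaterMark;
}
```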

6.3.8. MakeSizeAlgorithmFromSizeFunction ( size )

1. If _size_ is *undefined*, return an algorithm that returns *1*.
1. If ! IsCallable(_size_) is *false*, throw a *TypeError* exception.
1. Return an algorithm that performs the following steps, taking a _chunk_ argument:
   1. Return ? Call(_size_, *undefined*, « _chunk_ »).
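
And a corresponding non-normative JavaScript sketch:

```javascript
// Sketch of MakeSizeAlgorithmFromSizeFunction: default to a
// count-based size of 1 when no size function is supplied.
function makeSizeAlgorithmFromSizeFunction(size) {
  if (size === undefined) {
    return () => 1;
  }
  if (typeof size !== "function") {
    throw new TypeError("size must be a function");
  }
  // Call the supplied function with an undefined this-value.
  return chunk => size(chunk);
}
```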

7. Global properties

The following constructors must be exposed on the global object as data properties of the same name: ReadableStream, WritableStream, TransformStream, ByteLengthQueuingStrategy, and CountQueuingStrategy.

The attributes of these properties must be { [[Writable]]: true, [[Enumerable]]: false, [[Configurable]]: true }.

The ReadableStreamDefaultReader, ReadableStreamBYOBReader, ReadableStreamDefaultController, ReadableByteStreamController, WritableStreamDefaultWriter, WritableStreamDefaultController, and TransformStreamDefaultController classes are specifically not exposed, as they are not independently useful.

8. Examples of creating streams

This section, and all its subsections, are non-normative.

The previous examples throughout the standard have focused on how to use streams. Here we show how to create a stream, using the ReadableStream, WritableStream, and TransformStream constructors.

8.1. A readable stream with an underlying push source (no backpressure support)

The following function creates readable streams that wrap WebSocket instances [HTML], which are push sources that do not support backpressure signals. It illustrates how, when adapting a push source, usually most of the work happens in the start() function.

function makeReadableWebSocketStream(url, protocols) {
  const ws = new WebSocket(url, protocols);
  ws.binaryType = "arraybuffer";

  return new ReadableStream({
    start(controller) {
      ws.onmessage = event => controller.enqueue(event.data);
      ws.onclose = () => controller.close();
      ws.onerror = () => controller.error(new Error("The WebSocket errored!"));
    },

    cancel() {
      ws.close();
    }
  });
}

We can then use this function to create readable streams for a web socket, and pipe that stream to an arbitrary writable stream:

const webSocketStream = makeReadableWebSocketStream("wss://example.com:443/", "protocol");

webSocketStream.pipeTo(writableStream)
  .then(() => console.log("All data successfully written!"))
  .catch(e => console.error("Something went wrong!", e));
This specific style of wrapping a web socket interprets web socket messages directly as chunks. This can be a convenient abstraction, for example when piping to a writable stream or transform stream for which each web socket message makes sense as a chunk to consume or transform.

However, often when people talk about "adding streams support to web sockets", they are hoping instead for a new capability to send an individual web socket message in a streaming fashion, so that e.g. a file could be transferred in a single message without holding all of its contents in memory on the client side. To accomplish this goal, we’d instead want to allow individual web socket messages to themselves be ReadableStream instances. That isn’t what we show in the above example.

For more background, see this discussion.

8.2. A readable stream with an underlying push source and backpressure support

The following function returns readable streams that wrap "backpressure sockets," which are hypothetical objects that have the same API as web sockets, but also provide the ability to pause and resume the flow of data with their readStop and readStart methods. In doing so, this example shows how to apply backpressure to underlying sources that support it.

function makeReadableBackpressureSocketStream(host, port) {
  const socket = createBackpressureSocket(host, port);

  return new ReadableStream({
    start(controller) {
      socket.ondata = event => {
        controller.enqueue(event.data);

        if (controller.desiredSize <= 0) {
          // The internal queue is full, so propagate
          // the backpressure signal to the underlying source.
          socket.readStop();
        }
      };

      socket.onend = () => controller.close();
      socket.onerror = () => controller.error(new Error("The socket errored!"));
    },

    pull() {
      // This is called if the internal queue has been emptied, but the
      // stream’s consumer still wants more data. In that case, restart
      // the flow of data if we have previously paused it.
      socket.readStart();
    },

    cancel() {
      socket.close();
    }
  });
}

We can then use this function to create readable streams for such "backpressure sockets" in the same way we do for web sockets. This time, however, when we pipe to a destination that cannot accept data as fast as the socket is producing it, or if we leave the stream alone without reading from it for some time, a backpressure signal will be sent to the socket.

8.3. A readable byte stream with an underlying push source (no backpressure support)

The following function returns readable byte streams that wrap a hypothetical UDP socket API, including a promise-returning select2() method that is meant to be evocative of the POSIX select(2) system call.

Since the UDP protocol does not have any built-in backpressure support, the backpressure signal given by desiredSize is ignored, and the stream ensures that when data is available from the socket but not yet requested by the developer, it is enqueued in the stream’s internal queue, to avoid overflow of the kernel-space queue and a consequent loss of data.

This has some interesting consequences for how consumers interact with the stream. If the consumer does not read data as fast as the socket produces it, the chunks will remain in the stream’s internal queue indefinitely. In this case, using a BYOB reader will cause an extra copy, to move the data from the stream’s internal queue to the developer-supplied buffer. However, if the consumer consumes the data quickly enough, a BYOB reader will allow zero-copy reading directly into developer-supplied buffers.

(You can imagine a more complex version of this example which uses desiredSize to inform an out-of-band backpressure signaling mechanism, for example by sending a message down the socket to adjust the rate of data being sent. That is left as an exercise for the reader.)

const DEFAULT_CHUNK_SIZE = 65536;

function makeUDPSocketStream(host, port) {
  const socket = createUDPSocket(host, port);

  return new ReadableStream({
    type: "bytes",

    start(controller) {
      readRepeatedly().catch(e => controller.error(e));

      function readRepeatedly() {
        return socket.select2().then(() => {
          // Since the socket can become readable even when there are
          // no pending BYOB requests, we need to handle both cases.
          let bytesRead;
          if (controller.byobRequest) {
            const v = controller.byobRequest.view;
            bytesRead = socket.readInto(v.buffer, v.byteOffset, v.byteLength);
            controller.byobRequest.respond(bytesRead);
          } else {
            const buffer = new ArrayBuffer(DEFAULT_CHUNK_SIZE);
            bytesRead = socket.readInto(buffer, 0, DEFAULT_CHUNK_SIZE);
            controller.enqueue(new Uint8Array(buffer, 0, bytesRead));
          }

          if (bytesRead === 0) {
            controller.close();
            return;
          }

          return readRepeatedly();
        });
      }
    },

    cancel() {
      socket.close();
    }
  });
}

ReadableStream instances returned from this function can now vend BYOB readers, with all of the aforementioned benefits and caveats.

8.4. A readable stream with an underlying pull source

The following function returns readable streams that wrap portions of the Node.js file system API (which themselves map fairly directly to C’s fopen, fread, and fclose trio). Files are a typical example of pull sources. Note how in contrast to the examples with push sources, most of the work here happens on-demand in the pull() function, and not at startup time in the start() function.

const fs = require("pr/fs"); // https://github.com/jden/pr
const CHUNK_SIZE = 1024;

function makeReadableFileStream(filename) {
  let fd;
  let position = 0;

  return new ReadableStream({
    start() {
      return fs.open(filename, "r").then(result => {
        fd = result;
      });
    },

    pull(controller) {
      const buffer = new ArrayBuffer(CHUNK_SIZE);

      return fs.read(fd, buffer, 0, CHUNK_SIZE, position).then(bytesRead => {
        if (bytesRead === 0) {
          return fs.close(fd).then(() => controller.close());
        } else {
          position += bytesRead;
          controller.enqueue(new Uint8Array(buffer, 0, bytesRead));
        }
      });
    },

    cancel() {
      return fs.close(fd);
    }
  });
}

We can then create and use readable streams for files just as we could before for sockets.

8.5. A readable byte stream with an underlying pull source

The following function returns readable byte streams that allow efficient zero-copy reading of files, again using the Node.js file system API. Instead of using a predetermined chunk size of 1024, it attempts to fill the developer-supplied buffer, allowing full control.

const fs = require("pr/fs"); // https://github.com/jden/pr
const DEFAULT_CHUNK_SIZE = 1024;

function makeReadableByteFileStream(filename) {
  let fd;
  let position = 0;

  return new ReadableStream({
    type: "bytes",

    start() {
      return fs.open(filename, "r").then(result => {
        fd = result;
      });
    },

    pull(controller) {
      // Even when the consumer is using the default reader, the auto-allocation
      // feature allocates a buffer and passes it to us via byobRequest.
      const v = controller.byobRequest.view;

      return fs.read(fd, v.buffer, v.byteOffset, v.byteLength, position).then(bytesRead => {
        if (bytesRead === 0) {
          return fs.close(fd).then(() => controller.close());
        } else {
          position += bytesRead;
          controller.byobRequest.respond(bytesRead);
        }
      });
    },

    cancel() {
      return fs.close(fd);
    },

    autoAllocateChunkSize: DEFAULT_CHUNK_SIZE
  });
}

With this in hand, we can create and use BYOB readers for the returned ReadableStream. But we can also create default readers, using them in the same simple and generic manner as usual. The adaptation between the low-level byte tracking of the underlying byte source shown here, and the higher-level chunk-based consumption of a default reader, is all taken care of automatically by the streams implementation. The auto-allocation feature, via the autoAllocateChunkSize option, even allows us to write less code, compared to the manual branching in § 8.3 A readable byte stream with an underlying push source (no backpressure support).

8.6. A writable stream with no backpressure or success signals

The following function returns a writable stream that wraps a WebSocket [HTML]. Web sockets do not provide any way to tell when a given chunk of data has been successfully sent (without awkward polling of bufferedAmount, which we leave as an exercise to the reader). As such, this writable stream has no ability to communicate accurate backpressure signals or write success/failure to its producers. That is, the promises returned by its writer’s write() method and ready getter will always fulfill immediately.

function makeWritableWebSocketStream(url, protocols) {
  const ws = new WebSocket(url, protocols);

  return new WritableStream({
    start(controller) {
      ws.onerror = () => {
        controller.error(new Error("The WebSocket errored!"));
        ws.onclose = null;
      };
      ws.onclose = () => controller.error(new Error("The server closed the connection unexpectedly!"));
      return new Promise(resolve => ws.onopen = resolve);
    },

    write(chunk) {
      ws.send(chunk);
      // Return immediately, since the web socket gives us no easy way to tell
      // when the write completes.
    },

    close() {
      return closeWS(1000);
    },

    abort(reason) {
      return closeWS(4000, reason && reason.message);
    },
  });

  function closeWS(code, reasonString) {
    return new Promise((resolve, reject) => {
      ws.onclose = e => {
        if (e.wasClean) {
          resolve();
        } else {
          reject(new Error("The connection was not closed cleanly"));
        }
      };
      ws.close(code, reasonString);
    });
  }
}

We can then use this function to create writable streams for a web socket, and pipe an arbitrary readable stream to it:

const webSocketStream = makeWritableWebSocketStream("wss://example.com:443/", "protocol");

readableStream.pipeTo(webSocketStream)
  .then(() => console.log("All data successfully written!"))
  .catch(e => console.error("Something went wrong!", e));

See the earlier note about this style of wrapping web sockets into streams.

8.7. A writable stream with backpressure and success signals

The following function returns writable streams that wrap portions of the Node.js file system API (which themselves map fairly directly to C’s fopen, fwrite, and fclose trio). Since the API we are wrapping provides a way to tell when a given write succeeds, this stream will be able to communicate backpressure signals as well as whether an individual write succeeded or failed.

const fs = require("pr/fs"); // https://github.com/jden/pr

function makeWritableFileStream(filename) {
  let fd;

  return new WritableStream({
    start() {
      return fs.open(filename, "w").then(result => {
        fd = result;
      });
    },

    write(chunk) {
      return fs.write(fd, chunk, 0, chunk.length);
    },

    close() {
      return fs.close(fd);
    },

    abort() {
      return fs.close(fd);
    }
  });
}

We can then use this function to create a writable stream for a file, and write individual chunks of data to it:

const fileStream = makeWritableFileStream("/example/path/on/fs.txt");
const writer = fileStream.getWriter();

writer.write("To stream, or not to stream\n");
writer.write("That is the question\n");

writer.close()
  .then(() => console.log("chunks written and stream closed successfully!"))
  .catch(e => console.error(e));

Note that if a particular call to fs.write takes a long time, the returned promise will fulfill correspondingly later. In the meantime, additional writes can be queued up, which are stored in the stream’s internal queue. The accumulation of chunks in this queue can cause the ready getter to return a pending promise, which is a signal to producers that they would benefit from backing off and stopping writing, if possible.

The way in which the writable stream queues up writes is especially important in this case, since as stated in the documentation for fs.write, "it is unsafe to use fs.write multiple times on the same file without waiting for the [promise]." But we don’t have to worry about that when writing the makeWritableFileStream function, since the stream implementation guarantees that the underlying sink’s write() method will not be called until any promises returned by previous calls have fulfilled!

8.8. A { readable, writable } stream pair wrapping the same underlying resource

The following function returns an object of the form { readable, writable }, with the readable property containing a readable stream and the writable property containing a writable stream, where both streams wrap the same underlying web socket resource. In essence, this combines § 8.1 A readable stream with an underlying push source (no backpressure support) and § 8.6 A writable stream with no backpressure or success signals.

While doing so, it illustrates how you can use JavaScript classes to create reusable underlying sink and underlying source abstractions.

function streamifyWebSocket(url, protocols) {
  const ws = new WebSocket(url, protocols);
  ws.binaryType = "arraybuffer";

  return {
    readable: new ReadableStream(new WebSocketSource(ws)),
    writable: new WritableStream(new WebSocketSink(ws))
  };
}

class WebSocketSource {
  constructor(ws) {
    this._ws = ws;
  }

  start(controller) {
    this._ws.onmessage = event => controller.enqueue(event.data);
    this._ws.onclose = () => controller.close();

    this._ws.addEventListener("error", () => {
      controller.error(new Error("The WebSocket errored!"));
    });
  }

  cancel() {
    this._ws.close();
  }
}

class WebSocketSink {
  constructor(ws) {
    this._ws = ws;
  }

  start(controller) {
    this._ws.onclose = () => controller.error(new Error("The server closed the connection unexpectedly!"));
    this._ws.addEventListener("error", () => {
      controller.error(new Error("The WebSocket errored!"));
      this._ws.onclose = null;
    });

    return new Promise(resolve => this._ws.onopen = resolve);
  }

  write(chunk) {
    this._ws.send(chunk);
  }

  close() {
    return this._closeWS(1000);
  }

  abort(reason) {
    return this._closeWS(4000, reason && reason.message);
  }

  _closeWS(code, reasonString) {
    return new Promise((resolve, reject) => {
      this._ws.onclose = e => {
        if (e.wasClean) {
          resolve();
        } else {
          reject(new Error("The connection was not closed cleanly"));
        }
      };
      this._ws.close(code, reasonString);
    });
  }
}

We can then use the objects created by this function to communicate with a remote web socket, using the standard stream APIs:

const streamyWS = streamifyWebSocket("wss://example.com:443/", "protocol");
const writer = streamyWS.writable.getWriter();
const reader = streamyWS.readable.getReader();

writer.write("Hello");
writer.write("web socket!");

reader.read().then(({ value, done }) => {
  console.log("The web socket says: ", value);
});

Note how in this setup canceling the readable side will implicitly close the writable side, and similarly, closing or aborting the writable side will implicitly close the readable side.

See the earlier note about this style of wrapping web sockets into streams.

8.9. A transform stream that replaces template tags

It’s often useful to substitute tags with variables on a stream of data, where the parts that need to be replaced are small compared to the overall data size. This example presents a simple way to do that. It maps strings to strings, transforming a template like "Time: {{time}} Message: {{message}}" to "Time: 15:36 Message: hello" assuming that { time: "15:36", message: "hello" } was passed in the substitutions parameter to LipFuzzTransformer.

This example also demonstrates one way to deal with a situation where a chunk contains partial data that cannot be transformed until more data is received. In this case, a partial template tag will be accumulated in the partialChunk instance variable until either the end of the tag is found or the end of the stream is reached.

class LipFuzzTransformer {
  constructor(substitutions) {
    this.substitutions = substitutions;
    this.partialChunk = "";
    this.lastIndex = undefined;
  }

  transform(chunk, controller) {
    chunk = this.partialChunk + chunk;
    this.partialChunk = "";
    // lastIndex is the index of the first character after the last substitution.
    this.lastIndex = 0;
    chunk = chunk.replace(/\{\{([a-zA-Z0-9_-]+)\}\}/g, this.replaceTag.bind(this));
    // Regular expression for an incomplete template at the end of a string.
    const partialAtEndRegexp = /\{(\{([a-zA-Z0-9_-]+(\})?)?)?$/g;
    // Avoid looking at any characters that have already been substituted.
    partialAtEndRegexp.lastIndex = this.lastIndex;
    this.lastIndex = undefined;
    const match = partialAtEndRegexp.exec(chunk);
    if (match) {
      this.partialChunk = chunk.substring(match.index);
      chunk = chunk.substring(0, match.index);
    }
    controller.enqueue(chunk);
  }

  flush(controller) {
    if (this.partialChunk.length > 0) {
      controller.enqueue(this.partialChunk);
    }
  }

  replaceTag(match, p1, offset) {
    let replacement = this.substitutions[p1];
    if (replacement === undefined) {
      replacement = "";
    }
    this.lastIndex = offset + replacement.length;
    return replacement;
  }
}

In this case we define the transformer to be passed to the TransformStream constructor as a class. This is useful when there is instance data to track.

The class would be used in code like:

const data = { userName, displayName, icon, date };
const ts = new TransformStream(new LipFuzzTransformer(data));

fetchEvent.respondWith(
  fetch(fetchEvent.request.url).then(response => {
    const transformedBody = response.body
      // Decode the binary-encoded response to string
      .pipeThrough(new TextDecoderStream())
      // Apply the LipFuzzTransformer
      .pipeThrough(ts)
      // Encode the transformed string
      .pipeThrough(new TextEncoderStream());
    return new Response(transformedBody);
  })
);
For simplicity, LipFuzzTransformer performs unescaped text substitutions. In real applications, a template system that performs context-aware escaping is good practice for security and robustness.

8.10. A transform stream created from a sync mapper function

The following function allows creating new TransformStream instances from synchronous "mapper" functions, of the type you would normally pass to Array.prototype.map. It demonstrates that the API is concise even for trivial transforms.

function mapperTransformStream(mapperFunction) {
  return new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(mapperFunction(chunk));
    }
  });
}

This function can then be used to create a TransformStream that uppercases all its inputs:

const ts = mapperTransformStream(chunk => chunk.toUpperCase());
const writer = ts.writable.getWriter();
const reader = ts.readable.getReader();

writer.write("No need to shout");

// Logs "NO NEED TO SHOUT":
reader.read().then(({ value }) => console.log(value));

Although a synchronous transform never causes backpressure itself, it will only transform chunks as long as there is no backpressure, so resources will not be wasted.

Exceptions error the stream in a natural way:

const ts = mapperTransformStream(chunk => JSON.parse(chunk));
const writer = ts.writable.getWriter();
const reader = ts.readable.getReader();

writer.write("[1, ");

// Logs a SyntaxError, twice:
reader.read().catch(e => console.error(e));
writer.write("{}").catch(e => console.error(e));
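The same approach extends to asynchronous mappers, since a transform() method that returns a promise delays the next transform (and any rejection errors the stream, just as a synchronous throw does). A sketch under that assumption — asyncMapperTransformStream is our own name, not something defined by this standard:

```javascript
function asyncMapperTransformStream(mapperFunction) {
  return new TransformStream({
    async transform(chunk, controller) {
      // Await before enqueuing, so the readable side delivers plain chunks
      // rather than promise objects; a rejection here errors the stream.
      controller.enqueue(await mapperFunction(chunk));
    }
  });
}

const upperLater = asyncMapperTransformStream(
  async chunk => chunk.toUpperCase()
);
```

Note the contrast with the synchronous version: passing an async mapper to mapperTransformStream as written above would enqueue the promise itself, not its eventual value.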

Conventions

This specification depends on the Infra Standard. [INFRA]

This specification uses algorithm conventions very similar to those of [ECMASCRIPT], whose rules should be used to interpret it (apart from the exceptions enumerated below). In particular, the objects specified here should be treated as built-in objects. For example, their name and length properties are derived as described by that specification, as are the default property descriptor values and the treatment of missing, undefined, or surplus arguments.

We also depart from the [ECMASCRIPT] conventions in the following ways, mostly for brevity. It is hoped (and vaguely planned) that the conventions of ECMAScript itself will evolve in these ways.

It’s also worth noting that, as in [ECMASCRIPT], all numbers are represented as double-precision floating point values, and all arithmetic operations performed on them must be done in the standard way for such values.

Acknowledgments

The editors would like to thank Anne van Kesteren, AnthumChris, Arthur Langereis, Ben Kelly, Bert Belder, Brian di Palma, Calvin Metcalf, Dominic Tarr, Ed Hager, Forbes Lindesay, Forrest Norvell, Gary Blackwood, Gorgi Kosev, Gus Caplan, 贺师俊 (hax), Isaac Schlueter, isonmad, Jake Archibald, Jake Verbaten, Janessa Det, Jason Orendorff, Jens Nockert, Lennart Grahl, Mangala Sadhu Sangeet Singh Khalsa, Marcos Caceres, Marvin Hagemeister, Mattias Buelens, Michael Mior, Mihai Potra, Romain Bellessort, Simon Menke, Stephen Sugden, Surma, Tab Atkins, Tanguy Krotoff, Thorsten Lorenz, Till Schneidereit, Tim Caswell, Trevor Norris, tzik, Will Chan, Youenn Fablet, 平野裕 (Yutaka Hirano), and Xabier Rodríguez for their contributions to this specification. Community involvement in this specification has been above and beyond; we couldn’t have done it without you.

This standard is written by Adam Rice (Google, ricea@chromium.org), Domenic Denicola (Google, d@domenic.me), and 吉野剛史 (Takeshi Yoshino, Google, tyoshino@chromium.org).

Copyright © WHATWG (Apple, Google, Mozilla, Microsoft). This work is licensed under a Creative Commons Attribution 4.0 International License.


References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[ECMASCRIPT]
ECMAScript Language Specification. URL: https://tc39.github.io/ecma262/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard. Living Standard. URL: https://infra.spec.whatwg.org/
[PROMISES-GUIDE]
Domenic Denicola. Writing Promise-Using Specifications. 9 November 2018. TAG Finding. URL: https://www.w3.org/2001/tag/doc/promises-guide
[WebIDL]
Boris Zbarsky. Web IDL. URL: https://heycam.github.io/webidl/

Informative References

[FETCH]
Anne van Kesteren. Fetch Standard. Living Standard. URL: https://fetch.spec.whatwg.org/
[SERVICE-WORKERS]
Alex Russell; et al. Service Workers 1. URL: https://w3c.github.io/ServiceWorker/