1. Introduction
This section is non-normative.
Large swathes of the web platform are built on streaming data: that is, data that is created, processed, and consumed in an incremental fashion, without ever reading all of it into memory. The Streams Standard provides a common set of APIs for creating and interfacing with such streaming data, embodied in readable streams , writable streams , and transform streams .
This standard provides the base stream primitives which other parts of the web platform can use to expose their streaming data. For example, [FETCH] could expose request bodies as a writable stream, or response bodies as a readable stream. More generally, the platform is full of streaming abstractions waiting to be expressed as streams: multimedia streams, file streams, interprocess communication, and more benefit from being able to process data incrementally instead of buffering it all into memory and processing it in one go. By providing the foundation for these streams to be exposed to developers, the Streams Standard enables use cases like:
- Video effects: piping a readable video stream through a transform stream that applies effects in real time.
- Decompression: piping a file stream through a transform stream that selectively decompresses files from a .tgz archive, turning them into img elements as the user scrolls through an image gallery.
- Image decoding: piping an HTTP response stream through a transform stream that decodes bytes into bitmap data, and then through another transform that translates bitmaps into PNGs. If installed inside the fetch hook of a service worker [SERVICE-WORKERS], this would allow developers to transparently polyfill new image formats.
The APIs described here provide a unifying abstraction for all such streams, encouraging an ecosystem to grow around these shared and composable interfaces. At the same time, they have been carefully designed to map efficiently to low-level I/O concerns, and to encapsulate the trickier issues (such as backpressure) that come along for the ride.
2. Model
A chunk is a single piece of data that is written to or read from a stream. It can be of any type; streams can even contain chunks of different types. A chunk will often not be the most atomic unit of data for a given stream; for example a byte stream might contain chunks consisting of 16 KiB Uint8Arrays, instead of single bytes.
2.1. Readable Streams
A readable stream represents a source of data, from which you can read. In other words, data comes out of a readable stream.
Although a readable stream can be created with arbitrary behavior, most readable streams wrap a lower-level I/O source, called the underlying source . There are two types of underlying source: push sources and pull sources.
Push sources push data at you, whether or not you are listening for it. They may also provide a mechanism for pausing and resuming the flow of data. An example push source is a TCP socket, where data is constantly being pushed from the OS level, at a rate that can be controlled by changing the TCP window size.
Pull sources require you to request data from them. The data may be available synchronously, e.g. if it is held by the operating system’s in-memory buffers, or asynchronously, e.g. if it has to be read from disk. An example pull source is a file handle, where you seek to specific locations and read specific amounts.
Readable streams are designed to wrap both types of sources behind a single, unified interface.
Chunks are enqueued into the stream by the stream’s underlying source . They can then be read one at a time via the stream’s public interface.
Code that reads from a readable stream using its public interface is known as a consumer .
Consumers also have the ability to cancel a readable stream. This indicates that the consumer has lost interest in the stream, and will immediately close the stream, throw away any queued chunks , and execute any cancellation mechanism of the underlying source .
Consumers can also tee a readable stream. This will lock the stream, making it no longer directly usable; however, it will create two new streams, called branches , which can be consumed independently.
For streams representing bytes, an extended version of the readable stream is provided to handle bytes efficiently, in particular by minimizing copies. The underlying source for such a readable stream is called an underlying byte source. A readable stream whose underlying source is an underlying byte source is sometimes called a readable byte stream.
2.2. Writable Streams
A writable stream represents a destination for data, into which you can write. In other words, data goes in to a writable stream.
Analogously to readable streams, most writable streams wrap a lower-level I/O sink, called the underlying sink . Writable streams work to abstract away some of the complexity of the underlying sink, by queuing subsequent writes and only delivering them to the underlying sink one by one.
Chunks are written to the stream via its public interface, and are passed one at a time to the stream’s underlying sink .
Code that writes into a writable stream using its public interface is known as a producer .
Producers also have the ability to abort a writable stream. This indicates that the producer believes something has gone wrong, and that future writes should be discontinued. It puts the stream in an errored state, even without a signal from the underlying sink .
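For example, a producer holding a writer could abort in reaction to a user canceling an upload. This is only an illustrative sketch; the stream and the error value are placeholders:

const writer = writableStream.getWriter();

// The producer decides the remaining data is not worth sending.
writer.abort(new Error("upload canceled by the user"))
  .then(() => console.log("Stream aborted; it is now errored and future writes will fail"));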
2.3. Transform Streams
A transform stream consists of a pair of streams: a writable stream, and a readable stream. In a manner specific to the transform stream in question, writes to the writable side result in new data being made available for reading from the readable side.
Some examples of transform streams include:
- A GZIP compressor, to which uncompressed bytes are written and from which compressed bytes are read;
- A video decoder, to which encoded bytes are written and from which uncompressed video frames are read;
- A text decoder, to which bytes are written and from which strings are read;
- A CSV-to-JSON converter, to which strings representing lines of a CSV file are written and from which corresponding JavaScript objects are read.
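To make the writable/readable pairing concrete, here is a minimal illustrative sketch of a transform stream that uppercases the strings written to it (a toy example, not one of the transforms listed above):

const upperCaser = new TransformStream({
  transform(chunk, controller) {
    // Each string written to the writable side is enqueued, transformed,
    // onto the readable side.
    controller.enqueue(chunk.toUpperCase());
  }
});

// Strings written to upperCaser.writable can then be read, uppercased, from upperCaser.readable.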
2.4. Pipe Chains and Backpressure
Streams are primarily used by piping them to each other. A readable stream can be piped directly to a writable stream, or it can be piped through one or more transform streams first.
A set of streams piped together in this way is referred to as a pipe chain . In a pipe chain, the original source is the underlying source of the first readable stream in the chain; the ultimate sink is the underlying sink of the final writable stream in the chain.
Once a pipe chain is constructed, it can be used to propagate signals regarding how fast chunks should flow through it. If any step in the chain cannot yet accept chunks, it propagates a signal backwards through the pipe chain, until eventually the original source is told to stop producing chunks so fast. This process of normalizing flow from the original source according to how fast the chain can process chunks is called backpressure .
When teeing a readable stream, the backpressure signals from its two branches will aggregate, such that if neither branch is read from, a backpressure signal will be sent to the underlying source of the original stream.
2.5. Internal Queues and Queuing Strategies
Both readable and writable streams maintain internal queues , which they use for similar purposes. In the case of a readable stream, the internal queue contains chunks that have been enqueued by the underlying source , but not yet read by the consumer. In the case of a writable stream, the internal queue contains chunks which have been written to the stream by the producer, but not yet processed and acknowledged by the underlying sink .
A queuing strategy is an object that determines how a stream should signal backpressure based on the state of its internal queue . The queuing strategy assigns a size to each chunk , and compares the total size of all chunks in the queue to a specified number, known as the high water mark . The resulting difference, high water mark minus total size, is used to determine the desired size to fill the stream’s queue .
For readable streams, an underlying source can use this desired size as a backpressure signal, slowing down chunk generation so as to try to keep the desired size above or at zero. For writable streams, a producer can behave similarly, avoiding writes that would cause the desired size to go negative.
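For example, a byte-counting queuing strategy could be supplied when constructing a stream; the 64 KiB high water mark below is an arbitrary illustrative choice, and the chunks are assumed to be ArrayBuffer views:

const byteStrategy = {
  highWaterMark: 64 * 1024,                 // desired size starts at 64 KiB
  size(chunk) { return chunk.byteLength; }  // each chunk counts for its byte length
};

// The built-in new ByteLengthQueuingStrategy({ highWaterMark: 64 * 1024 }) behaves equivalently.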
2.6. Locking
A readable stream reader , or simply reader, is an object that allows direct reading of chunks from a readable stream . Without a reader, a consumer can only perform high-level operations on the readable stream: canceling the stream, or piping the readable stream to a writable stream.
Similarly, a writable stream writer , or simply writer, is an object that allows direct writing of chunks to a writable stream . Without a writer, a producer can only perform the high-level operations of aborting the stream or piping a readable stream to the writable stream.
(Under the covers, these high-level operations actually use a reader or writer themselves.)
A given readable or writable stream only has at most one reader or writer at a time. We say in this case the stream is locked , and that the reader or writer is active .
A reader or writer also has the capability to release its lock , which makes it no longer active, and allows further readers or writers to be acquired.
A readable byte stream has the ability to vend two types of readers: default readers and BYOB readers . BYOB ("bring your own buffer") readers allow reading into a developer-supplied buffer, thus minimizing copies.
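A minimal sketch of acquiring and then releasing such a lock with a default reader (readableStream is a placeholder):

const reader = readableStream.getReader();
console.log(readableStream.locked); // true: no other reader can be acquired right now

reader.releaseLock();
console.log(readableStream.locked); // false: the stream can be locked again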
3. Readable Streams
3.1. Using Readable Streams
readableStream.pipeTo(writableStream)
  .then(() => console.log("All data successfully written!"))
  .catch(e => console.error("Something went wrong!", e));
readableStream.pipeTo(new WritableStream({
  write(chunk) {
    console.log("Chunk received", chunk);
  },
  close() {
    console.log("All data successfully read!");
  },
  abort(e) {
    console.error("Something went wrong!", e);
  }
}));
By returning promises from your write implementation, you can signal backpressure to the readable stream.
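For example, a sink whose write implementation returns a promise that settles only once the chunk has actually been handled will throttle the piping process; the setTimeout delay below is purely illustrative:

readableStream.pipeTo(new WritableStream({
  write(chunk) {
    // Returning a promise signals backpressure: the next chunk is not
    // delivered until this promise fulfills.
    return new Promise(resolve => setTimeout(resolve, 100));
  }
}));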
Alternatively, you can read the stream directly by acquiring a reader and using its read() method to get successive chunks. For example, this code logs the next chunk in the stream, if available:
const reader = readableStream.getReader();

reader.read().then(
  ({ value, done }) => {
    if (done) {
      console.log("The stream was already closed!");
    } else {
      console.log(value);
    }
  },
  e => console.error("The stream became errored and cannot be read from!", e)
);
This more manual method of reading a stream is mainly useful for library authors building new high-level operations on streams, beyond the provided ones of piping and teeing .
A BYOB reader can instead be used to read into developer-supplied buffers; for example, this code reads the first 1024 bytes of the stream into a single ArrayBuffer:

const reader = readableStream.getReader({ mode: "byob" });

let startingAB = new ArrayBuffer(1024);
readInto(startingAB)
  .then(buffer => console.log("The first 1024 bytes:", buffer))
  .catch(e => console.error("Something went wrong!", e));

function readInto(buffer, offset = 0) {
  if (offset === buffer.byteLength) {
    return Promise.resolve(buffer);
  }

  const view = new Uint8Array(buffer, offset, buffer.byteLength - offset);
  return reader.read(view).then(newView => {
    return readInto(newView.buffer, offset + newView.byteLength);
  });
}
An important thing to note here is that the final buffer value is different from the startingAB, but it (and all intermediate buffers) shares the same backing memory allocation. At each step, the buffer is transferred to a new ArrayBuffer object. The newView is a new Uint8Array, with that ArrayBuffer object as its buffer property, the offset that bytes were written to as its byteOffset property, and the number of bytes that were written as its byteLength property.
3.2. Class ReadableStream
The ReadableStream class is a concrete instance of the general readable stream concept. It is adaptable to any chunk type, and maintains an internal queue to keep track of data supplied by the underlying source but not yet read by any consumer.
3.2.1. Class Definition
This section is non-normative.
If one were to write the ReadableStream class in something close to the syntax of [ECMASCRIPT], it would look like
class ReadableStream {
  constructor(underlyingSource = {}, { size, highWaterMark } = {})

  get locked()

  cancel(reason)
  getReader()
  pipeThrough({ writable, readable }, options)
  pipeTo(dest, { preventClose, preventAbort, preventCancel } = {})
  tee()
}
3.2.2. Internal Slots
Instances of ReadableStream are created with the internal slots described in the following table:
Internal Slot | Description ( non-normative ) |
---|---|
[[disturbed]] | A boolean flag set to true when the stream has been read from or canceled |
[[readableStreamController]] | A ReadableStreamDefaultController or ReadableByteStreamController created with the ability to control the state and queue of this stream; also used for the IsReadableStream brand check |
[[reader]] | A ReadableStreamDefaultReader or ReadableStreamBYOBReader instance, if the stream is locked to a reader, or undefined if it is not |
[[state]] | A string containing the stream’s current state, used internally; one of "readable", "closed", or "errored" |
[[storedError]] | A value indicating how the stream failed, to be given as a failure reason or exception when trying to operate on an errored stream |
3.2.3. new ReadableStream( underlyingSource = {}, { size , highWaterMark } = {})
The underlyingSource object passed to the constructor can implement any of the following methods to govern how the constructed stream instance behaves:
- start(controller) is called immediately, and is typically used to adapt a push source by setting up relevant event listeners, or to acquire access to a pull source. If this process is asynchronous, it can return a promise to signal success or failure.
- pull(controller) is called when the stream’s internal queue of chunks is not full, and will be called repeatedly until the queue reaches its high water mark. If pull returns a promise, then pull will not be called again until that promise fulfills; if the promise rejects, the stream will become errored.
- cancel(reason) is called when the consumer signals that they are no longer interested in the stream. It should perform any actions necessary to release access to the underlying source. If this process is asynchronous, it can return a promise to signal success or failure.
Both start and pull are given the ability to manipulate the stream’s internal queue and state via the passed controller object. This is an example of the revealing constructor pattern.
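As a sketch of how these pieces fit together, the following adapts a hypothetical push source exposing addEventListener-style "data", "error", and "end" events (the eventSource object and its stop() method are assumptions, not part of this standard):

const stream = new ReadableStream({
  start(controller) {
    // Adapt the push source by forwarding its data into the stream's queue.
    eventSource.addEventListener("data", event => controller.enqueue(event.data));
    eventSource.addEventListener("error", () => controller.error(new Error("source failed")));
    eventSource.addEventListener("end", () => controller.close());
  },
  cancel() {
    // The consumer lost interest; release the underlying source.
    eventSource.stop();
  }
});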
If the underlyingSource object contains a property type set to "bytes", this readable stream is a readable byte stream, and can successfully vend BYOB readers. In that case, the passed controller object will be an instance of ReadableByteStreamController. Otherwise, it will be an instance of ReadableStreamDefaultController.
For readable byte streams, underlyingSource can also contain a property autoAllocateChunkSize, which can be set to a positive integer to enable the auto-allocation feature for this stream. In that case, when a consumer uses a default reader, the stream implementation will automatically allocate an ArrayBuffer of the given size, and call the underlying source code as if the consumer was using a BYOB reader. This can cut down on the amount of code needed when writing the underlying source implementation, as can be seen by comparing § 8.3 A readable byte stream with an underlying push source (no backpressure support) without auto-allocation to § 8.5 A readable byte stream with an underlying pull source with auto-allocation.
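A sketch of what enabling auto-allocation looks like at construction time; readInto is an assumed I/O helper that fills the supplied view and resolves with the number of bytes written, and end-of-stream handling is simplified:

const byteStream = new ReadableStream({
  type: "bytes",
  autoAllocateChunkSize: 16 * 1024, // default readers get 16 KiB buffers allocated for them
  async pull(controller) {
    // With auto-allocation, byobRequest is populated even when a default reader is in use.
    const view = controller.byobRequest.view;
    const bytesRead = await readInto(view); // assumed helper
    if (bytesRead === 0) {
      controller.close();                   // simplified end-of-stream handling
    } else {
      controller.byobRequest.respond(bytesRead);
    }
  }
});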
The constructor also accepts a second argument containing the queuing strategy object with two properties: a non-negative number highWaterMark, and a function size(chunk). The supplied strategy could be an instance of the built-in CountQueuingStrategy or ByteLengthQueuingStrategy classes, or it could be custom. If no strategy is supplied, the default behavior will be the same as a CountQueuingStrategy with a high water mark of 1.
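For example, assuming the stream carries ArrayBuffer chunks, it could be constructed with the built-in byte-length-based strategy (the 32 KiB high water mark is illustrative):

const stream = new ReadableStream(
  { /* underlying source methods as described above */ },
  new ByteLengthQueuingStrategy({ highWaterMark: 32 * 1024 })
);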
3.2.4. Properties of the ReadableStream Prototype
3.2.4.1. get locked
The locked getter returns whether or not the readable stream is locked to a reader.
3.2.4.2. cancel( reason )
The cancel method cancels the stream, signaling a loss of interest in the stream by a consumer. The supplied reason argument will be given to the underlying source, which may or may not use it.
3.2.4.3. getReader({ mode } = {})
The getReader method creates a reader of the type specified by the mode option and locks the stream to the new reader. While the stream is locked, no other reader can be acquired until this one is released.
This functionality is especially useful for creating abstractions that desire the ability to consume a stream in its entirety. By getting a reader for the stream, you can ensure nobody else can interleave reads with yours or cancel the stream, which would interfere with your abstraction.
When mode is undefined, the getReader method creates a default reader (an instance of ReadableStreamDefaultReader). The reader provides the ability to directly read individual chunks from the stream via the reader’s read() method.
When mode is "byob", the getReader method creates a BYOB reader (an instance of ReadableStreamBYOBReader). This feature only works on readable byte streams, i.e. streams which were constructed specifically with the ability to handle "bring your own buffer" reading. The reader provides the ability to directly read individual chunks from the stream via the reader’s read() method, into developer-supplied buffers, allowing more precise control over allocation.
For example, the following function uses a default reader to accumulate an entire stream into memory as an array of chunks:

function readAllChunks(readableStream) {
  const reader = readableStream.getReader();
  const chunks = [];

  return pump();

  function pump() {
    return reader.read().then(({ value, done }) => {
      if (done) {
        return chunks;
      }

      chunks.push(value);
      return pump();
    });
  }
}
Note how the first thing it does is obtain a reader, and from then on it uses the reader exclusively. This ensures that no other consumer can interfere with the stream, either by reading chunks or by canceling the stream.
3.2.4.4. pipeThrough({ writable , readable }, options )
The pipeThrough method provides a convenient, chainable way of piping this readable stream through a transform stream (or any other { writable, readable } pair). It simply pipes the stream into the writable side of the supplied pair, and returns the readable side for further use.
Piping a stream will generally lock it for the duration of the pipe, preventing any other consumer from acquiring a reader.
This method is intentionally generic; it does not require that its this value be a ReadableStream object. It also does not require that its writable argument be a WritableStream instance, or that its readable argument be a ReadableStream instance.
A typical pipe chain constructed using pipeThrough(transform, options) would look like

httpResponseBody
  .pipeThrough(decompressorTransform)
  .pipeThrough(ignoreNonImageFilesTransform)
  .pipeTo(mediaGallery);
3.2.4.5. pipeTo( dest , { preventClose , preventAbort , preventCancel } = {})
The pipeTo method pipes this readable stream to a given writable stream. The way in which the piping process behaves under various error conditions can be customized with a number of passed options. It returns a promise that fulfills when the piping process completes successfully, or rejects if any errors were encountered.
Piping a stream will lock it for the duration of the pipe, preventing any other consumer from acquiring a reader.
Errors and closures of the source and destination streams propagate as follows:
- An error in the source readable stream will abort the destination writable stream, unless preventAbort is truthy. The returned promise will be rejected with the source’s error, or with any error that occurs during aborting the destination.
- An error in the destination writable stream will cancel the source readable stream, unless preventCancel is truthy. The returned promise will be rejected with the destination’s error, or with any error that occurs during canceling the source.
- When the source readable stream closes, the destination writable stream will be closed, unless preventClose is true. The returned promise will be fulfilled once this process completes, unless an error is encountered while closing the destination, in which case it will be rejected with that error.
- If the destination writable stream starts out closed or closing, the source readable stream will be canceled, unless preventCancel is true. The returned promise will be rejected with an error indicating piping to a closed stream failed, or with any error that occurs during canceling the source.
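For instance, a producer that wants to keep writing to the destination after this particular source finishes could pass preventClose; the streams named here are placeholders:

readableStream.pipeTo(writableStream, { preventClose: true })
  .then(() => console.log("Source exhausted; destination left open for further writes"))
  .catch(e => console.error("Piping failed", e));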
3.2.4.6. tee()
The tee method tees this readable stream, returning a two-element array containing the two resulting branches as new ReadableStream instances.
Teeing a stream will lock it, preventing any other consumer from acquiring a reader. To cancel the stream, cancel both of the resulting branches; a composite cancellation reason will then be propagated to the stream’s underlying source .
Note that the chunks seen in each branch will be the same object. If the chunks are not immutable, this could allow interference between the two branches. (Let us know if you think we should add an option to tee that creates structured clones of the chunks for each branch.)
For example, given a writable stream cacheEntry representing an on-disk file, and another writable stream httpRequestBody representing an upload to a remote server, you could pipe the same readable stream to both destinations at once:
const [forLocal, forRemote] = readableStream.tee();

Promise.all([
  forLocal.pipeTo(cacheEntry),
  forRemote.pipeTo(httpRequestBody)
])
  .then(() => console.log("Saved the stream to the cache and also uploaded it!"))
  .catch(e => console.error("Either caching or uploading failed: ", e));
3.3. General Readable Stream Abstract Operations
The following abstract operations, unlike most in this specification, are meant to be generally useful by other specifications, instead of just being part of the implementation of this spec’s classes.
3.3.1. AcquireReadableStreamBYOBReader ( stream )
This abstract operation is meant to be called from other specifications that may wish to acquire a BYOB reader for a given stream.
3.3.2. AcquireReadableStreamDefaultReader ( stream )
This abstract operation is meant to be called from other specifications that may wish to acquire a default reader for a given stream.
3.3.3. IsReadableStream ( x )
3.3.4. IsReadableStreamDisturbed ( stream )
This abstract operation is meant to be called from other specifications that may wish to query whether or not a readable stream has ever been read from or canceled.
3.3.5. IsReadableStreamLocked ( stream )
This abstract operation is meant to be called from other specifications that may wish to query whether or not a readable stream is locked to a reader .
3.3.6. ReadableStreamTee ( stream , cloneForBranch2 )
This abstract operation is meant to be called from other specifications that may wish to tee a given readable stream.
The second argument, cloneForBranch2, governs whether or not the data from the original stream will be structured cloned before appearing in the second of the returned branches. This is useful for scenarios where both branches are to be consumed in such a way that they might otherwise interfere with each other, such as by transferring their chunks. However, it does introduce a noticeable asymmetry between the two branches. [HTML]
A ReadableStreamTee pull function is an anonymous built-in function that pulls data from a given readable stream reader and enqueues it into two other streams ("branches" of the associated tee). Each ReadableStreamTee pull function has [[reader]], [[branch1]], [[branch2]], [[teeState]], and [[cloneForBranch2]] internal slots. When a ReadableStreamTee pull function F is called, it performs the following steps:
A ReadableStreamTee branch 1 cancel function is an anonymous built-in function that reacts to the cancellation of the first of the two branches of the associated tee. Each ReadableStreamTee branch 1 cancel function has [[stream]] and [[teeState]] internal slots. When a ReadableStreamTee branch 1 cancel function F is called with argument reason , it performs the following steps:
A ReadableStreamTee branch 2 cancel function is an anonymous built-in function that reacts to the cancellation of the second of the two branches of the associated tee. Each ReadableStreamTee branch 2 cancel function has [[stream]] and [[teeState]] internal slots. When a ReadableStreamTee branch 2 cancel function F is called with argument reason , it performs the following steps:
3.4. Readable Stream Abstract Operations Used by Controllers
In terms of specification factoring, the way that the ReadableStream class encapsulates the behavior of both simple readable streams and readable byte streams into a single class is by centralizing most of the potentially-varying logic inside the two controller classes, ReadableStreamDefaultController and ReadableByteStreamController. Those classes define most of the stateful internal slots and abstract operations for how a stream’s internal queue is managed and how it interfaces with its underlying source or underlying byte source. The abstract operations in this section are interfaces that are used by the controller implementations to affect their associated ReadableStream object, translating those internal state changes into developer-facing results visible through the ReadableStream's public API.
3.4.1. ReadableStreamAddReadIntoRequest ( stream )
3.4.2. ReadableStreamAddReadRequest ( stream )
3.4.3. ReadableStreamCancel ( stream , reason )
3.4.4. ReadableStreamClose ( stream )
The case where stream.[[state]] is "closed", but stream.[[closeRequested]] is false, will happen if the stream was closed without its controller’s close method ever being called: i.e., if the stream was closed by a call to stream.cancel(reason). In this case we allow the controller’s close method to be called and silently do nothing, since the cancelation was outside the control of the underlying source.
3.4.5. ReadableStreamError ( stream , e )
3.4.6. ReadableStreamFulfillReadIntoRequest ( stream , chunk , done )
3.4.7. ReadableStreamFulfillReadRequest ( stream , chunk , done )
3.4.8. ReadableStreamGetNumReadIntoRequests ( stream )
3.4.9. ReadableStreamGetNumReadRequests ( stream )
3.4.10. ReadableStreamHasBYOBReader ( stream )
3.4.11. ReadableStreamHasDefaultReader ( stream )
3.5. Class ReadableStreamDefaultReader
The ReadableStreamDefaultReader class represents a default reader designed to be vended by a ReadableStream instance.
3.5.1. Class Definition
This section is non-normative.
If one were to write the ReadableStreamDefaultReader class in something close to the syntax of [ECMASCRIPT], it would look like
class ReadableStreamDefaultReader {
  constructor(stream)

  get closed()

  cancel(reason)
  read()
  releaseLock()
}
3.5.2. Internal Slots
Instances of ReadableStreamDefaultReader are created with the internal slots described in the following table:
Internal Slot | Description ( non-normative ) |
---|---|
[[closedPromise]] | A promise returned by the reader’s closed getter |
[[ownerReadableStream]] | A ReadableStream instance that owns this reader |
[[readRequests]] | A List of promises returned by calls to the reader’s read() method that have not yet been resolved, due to the consumer requesting chunks sooner than they are available; also used for the IsReadableStreamDefaultReader brand check |
3.5.3. new ReadableStreamDefaultReader( stream )
The ReadableStreamDefaultReader constructor is generally not meant to be used directly; instead, a stream’s getReader() method should be used.
3.5.4. Properties of the ReadableStreamDefaultReader Prototype
3.5.4.1. get closed
The closed getter returns a promise that will be fulfilled when the stream becomes closed or the reader’s lock is released, or rejected if the stream ever errors.
3.5.4.2. cancel( reason )
The cancel method behaves the same as that for the associated stream.
3.5.4.3. read()
The read method will return a promise that allows access to the next chunk from the stream’s internal queue, if available.
- If the chunk does become available, the promise will be fulfilled with an object of the form { value: theChunk, done: false }.
- If the stream becomes closed, the promise will be fulfilled with an object of the form { value: undefined, done: true }.
- If the stream becomes errored, the promise will be rejected with the relevant error.
If reading a chunk causes the queue to become empty, more data will be pulled from the underlying source .
3.5.4.4. releaseLock()
The releaseLock method releases the reader’s lock on the corresponding stream. After the lock is released, the reader is no longer active. If the associated stream is errored when the lock is released, the reader will appear errored in the same way from now on; otherwise, the reader will appear closed.
A reader’s lock cannot be released while it still has a pending read request, i.e., if a promise returned by the reader’s read() method has not yet been settled. Attempting to do so will throw a TypeError.
3.6. Class ReadableStreamBYOBReader
The ReadableStreamBYOBReader class represents a BYOB reader designed to be vended by a ReadableStream instance.
3.6.1. Class Definition
This section is non-normative.
If one were to write the ReadableStreamBYOBReader class in something close to the syntax of [ECMASCRIPT], it would look like
class ReadableStreamBYOBReader {
  constructor(stream)

  get closed()

  cancel(reason)
  read(view)
  releaseLock()
}
3.6.2. Internal Slots
Instances of ReadableStreamBYOBReader are created with the internal slots described in the following table:
Internal Slot | Description ( non-normative ) |
---|---|
[[closedPromise]] | A promise returned by the reader’s closed getter |
[[ownerReadableStream]] | A ReadableStream instance that owns this reader |
[[readIntoRequests]] | A List of promises returned by calls to the reader’s read(view) method that have not yet been resolved, due to the consumer requesting chunks sooner than they are available; also used for the IsReadableStreamBYOBReader brand check |
3.6.3. new ReadableStreamBYOBReader( stream )
The ReadableStreamBYOBReader constructor is generally not meant to be used directly; instead, a stream’s getReader() method should be used.
3.6.4. Properties of the ReadableStreamBYOBReader Prototype
3.6.4.1. get closed
The closed getter returns a promise that will be fulfilled when the stream becomes closed or the reader’s lock is released, or rejected if the stream ever errors.
3.6.4.2. cancel( reason )
The cancel method behaves the same as that for the associated stream.
3.6.4.3. read( view )
The read method will write the bytes it reads into view, and return a promise resolved with a possibly transferred buffer, as described below.
- If the chunk does become available, the promise will be fulfilled with an object of the form { value: theChunk, done: false }.
- If the stream becomes closed, the promise will be fulfilled with an object of the form { value: undefined, done: true }.
- If the stream becomes errored, the promise will be rejected with the relevant error.
If reading a chunk causes the queue to become empty, more data will be pulled from the underlying byte source .
3.6.4.4. releaseLock()
The releaseLock method releases the reader’s lock on the corresponding stream. After the lock is released, the reader is no longer active. If the associated stream is errored when the lock is released, the reader will appear errored in the same way from now on; otherwise, the reader will appear closed.
A reader’s lock cannot be released while it still has a pending read request, i.e., if a promise returned by the reader’s read() method has not yet been settled. Attempting to do so will throw a TypeError.
3.7. Readable Stream Reader Abstract Operations
3.7.1. IsReadableStreamDefaultReader ( x )
3.7.2. IsReadableStreamBYOBReader ( x )
3.7.3. ReadableStreamReaderGenericCancel ( reader , reason )
3.7.4. ReadableStreamReaderGenericInitialize ( reader , stream )
3.7.5. ReadableStreamReaderGenericRelease ( reader )
3.7.6. ReadableStreamBYOBReaderRead ( reader , view )
3.7.7. ReadableStreamDefaultReaderRead ( reader )
3.8. Class ReadableStreamDefaultController
The ReadableStreamDefaultController class has methods that allow control of a ReadableStream's state and internal queue. When constructing a ReadableStream that is not a readable byte stream, the underlying source is given a corresponding ReadableStreamDefaultController instance to manipulate.
3.8.1. Class Definition
This section is non-normative.
If one were to write the ReadableStreamDefaultController class in something close to the syntax of [ECMASCRIPT], it would look like
class ReadableStreamDefaultController {
  constructor(stream, underlyingSource, size, highWaterMark)

  get desiredSize()

  close()
  enqueue(chunk)
  error(e)
}
3.8.2. Internal Slots
Instances of ReadableStreamDefaultController are created with the internal slots described in the following table:
Internal Slot | Description ( non-normative ) |
---|---|
[[closeRequested]] | A boolean flag indicating whether the stream has been closed by its underlying source, but still has chunks in its internal queue that have not yet been read |
[[controlledReadableStream]] | The ReadableStream instance controlled |
[[pullAgain]] | A boolean flag set to true when the stream’s mechanisms requested a call to the underlying source’s pull method to pull more data, but the pull could not yet be done since a previous call is still executing |
[[pulling]] | A boolean flag set to true while the underlying source’s pull method is executing and has not yet fulfilled, used to prevent reentrant calls |
[[queue]] | A List representing the stream’s internal queue of chunks |
[[queueTotalSize]] | The total size of all the chunks stored in [[queue]] (see § 6.3 Queue-with-Sizes Operations) |
[[started]] | A boolean flag indicating whether the underlying source has finished starting |
[[strategyHWM]] | A number supplied to the constructor as part of the stream’s queuing strategy, indicating the point at which the stream will apply backpressure to its underlying source |
[[strategySize]] | A function supplied to the constructor as part of the stream’s queuing strategy, designed to calculate the size of enqueued chunks; can be undefined |
[[underlyingSource]] | An object representation of the stream’s underlying source; also used for the IsReadableStreamDefaultController brand check |
3.8.3. new ReadableStreamDefaultController( stream , underlyingSource , size , highWaterMark )
The ReadableStreamDefaultController constructor cannot be used directly; it only works on a ReadableStream that is in the middle of being constructed.
3.8.4. Properties of the ReadableStreamDefaultController Prototype
3.8.4.1. get desiredSize
The desiredSize getter returns the desired size to fill the controlled stream’s internal queue. It can be negative, if the queue is over-full. An underlying source should use this information to determine when and how to apply backpressure.
3.8.4.2. close()
The close method will close the controlled readable stream. Consumers will still be able to read any previously-enqueued chunks from the stream, but once those are read, the stream will become closed.
3.8.4.3. enqueue( chunk )
The enqueue method will enqueue a given chunk in the controlled readable stream.
3.8.4.4. error( e )
The error method will error the readable stream, making all future interactions with it fail with the given error e.
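Taken together, a hypothetical underlying source might drive its controller like the following sketch; pollSource is an assumed helper that resolves with the next piece of data, or null when the input is exhausted:

const stream = new ReadableStream({
  async pull(controller) {
    try {
      const chunk = await pollSource(); // assumed helper
      if (chunk === null) {
        controller.close();             // no more data: close once the queue drains
      } else {
        controller.enqueue(chunk);      // make the chunk available to consumers
      }
    } catch (e) {
      controller.error(e);              // move the stream to the errored state
    }
  }
});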
3.8.5. Readable Stream Default Controller Internal Methods
The following are additional internal methods implemented by each ReadableStreamDefaultController instance. The readable stream implementation will polymorphically call to either these or their counterparts for BYOB controllers.
3.8.5.1. [[CancelSteps]]( reason )
3.8.5.2. [[PullSteps]]()
3.9. Readable Stream Default Controller Abstract Operations
3.9.1. IsReadableStreamDefaultController ( x )
3.9.2. ReadableStreamDefaultControllerCallPullIfNeeded ( controller )
3.9.3. ReadableStreamDefaultControllerShouldCallPull ( controller )
3.9.4. ReadableStreamDefaultControllerClose ( controller )
This abstract operation can be called by other specifications that wish to close a readable stream, in the same way a developer-created stream would be closed by its associated controller object. Specifications should not do this to streams they did not create, and must ensure they have obeyed the preconditions (listed here as asserts).
3.9.5. ReadableStreamDefaultControllerEnqueue ( controller , chunk )
This abstract operation can be called by other specifications that wish to enqueue chunks in a readable stream, in the same way a developer would enqueue chunks using the stream’s associated controller object. Specifications should not do this to streams they did not create, and must ensure they have obeyed the preconditions (listed here as asserts).
The case where stream.[[state]] is "closed", but stream.[[closeRequested]] is false, will happen if the stream was closed without its controller’s close method ever being called: i.e., if the stream was closed by a call to stream.cancel(reason). In this case we allow the controller’s enqueue method to be called and silently do nothing, since the cancelation was outside the control of the underlying source.
3.9.6. ReadableStreamDefaultControllerError ( controller , e )
This abstract operation can be called by other specifications that wish to move a readable stream to an errored state, in the same way a developer would error a stream using its associated controller object. Specifications should not do this to streams they did not create, and must ensure they have obeyed the precondition (listed here as an assert).
3.9.7. ReadableStreamDefaultControllerErrorIfNeeded ( controller , e )
3.9.8. ReadableStreamDefaultControllerGetDesiredSize ( controller )
This abstract operation can be called by other specifications that wish to determine the desired size to fill this stream’s internal queue, similar to how a developer would consult the desiredSize property of the stream’s associated controller object. Specifications should not use this on streams they did not create.
3.10. Class ReadableByteStreamController
The ReadableByteStreamController class has methods that allow control of a ReadableStream's state and internal queue. When constructing a ReadableStream, the underlying byte source is given a corresponding ReadableByteStreamController instance to manipulate.
3.10.1. Class Definition
This section is non-normative.
If one were to write the ReadableByteStreamController class in something close to the syntax of [ECMASCRIPT], it would look like
class ReadableByteStreamController {
  constructor(stream, underlyingByteSource, highWaterMark)

  get byobRequest()
  get desiredSize()

  close()
  enqueue(chunk)
  error(e)
}
3.10.2. Internal Slots
Instances of ReadableByteStreamController are created with the internal slots described in the following table:
Internal Slot | Description ( non-normative ) |
---|---|
[[autoAllocateChunkSize]] | A positive integer, when the automatic buffer allocation feature is enabled. In that case, this value specifies the size of buffer to allocate. It is undefined otherwise. |
[[closeRequested]] | A boolean flag indicating whether the stream has been closed by its underlying byte source, but still has chunks in its internal queue that have not yet been read |
[[controlledReadableStream]] | The ReadableStream instance controlled |
[[pullAgain]] | A boolean flag set to true when the stream’s mechanisms requested a call to the underlying byte source’s pull method to pull more data, but the pull could not yet be done since a previous call is still executing |
[[pulling]] | A boolean flag set to true while the underlying byte source’s pull method is executing and has not yet fulfilled, used to prevent reentrant calls |
[[byobRequest]] | A ReadableStreamBYOBRequest instance representing the current BYOB pull request |
[[pendingPullIntos]] | A List of descriptors representing pending BYOB pull requests |
[[queue]] | A List representing the stream’s internal queue of chunks |
[[queueTotalSize]] | The total size (in bytes) of all the chunks stored in [[queue]] |
[[started]] | A boolean flag indicating whether the underlying source has finished starting |
[[strategyHWM]] | A number supplied to the constructor as part of the stream’s queuing strategy, indicating the point at which the stream will apply backpressure to its underlying byte source |
[[underlyingByteSource]] | An object representation of the stream’s underlying byte source; also used for the IsReadableByteStreamController brand check |
Although ReadableByteStreamController instances have [[queue]] and [[queueTotalSize]] slots, we do not use most of the abstract operations in § 6.3 Queue-with-Sizes Operations on them, as the way in which we manipulate this queue is rather different than the others in the spec. Instead, we update the two slots together manually.
This might be cleaned up in a future spec refactoring.
3.10.3. new ReadableByteStreamController( stream , underlyingByteSource , highWaterMark )
The ReadableByteStreamController constructor cannot be used directly; it only works on a ReadableStream that is in the middle of being constructed.
3.10.4. Properties of the ReadableByteStreamController Prototype
3.10.4.1. get byobRequest
The byobRequest getter returns the current BYOB pull request.
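A sketch of how an underlying byte source’s pull method might service that request; readBytesInto is an assumed helper that fills the supplied view and resolves with the number of bytes written:

const byteStream = new ReadableStream({
  type: "bytes",
  async pull(controller) {
    const request = controller.byobRequest;
    if (request) {
      // Fill the consumer-supplied (or auto-allocated) buffer directly, avoiding a copy.
      const bytesWritten = await readBytesInto(request.view); // assumed helper
      request.respond(bytesWritten);
    }
    // (A real source would fall back to controller.enqueue() when there is no BYOB request.)
  }
});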
3.10.4.2. get desiredSize
The desiredSize getter returns the desired size to fill the controlled stream’s internal queue. It can be negative, if the queue is over-full. An underlying source should use this information to determine when and how to apply backpressure.
3.10.4.3. close()
The close method will close the controlled readable stream. Consumers will still be able to read any previously-enqueued chunks from the stream, but once those are read, the stream will become closed.
3.10.4.4. enqueue( chunk )
The enqueue method will enqueue a given chunk in the controlled readable stream.
3.10.4.5. error( e )
The error method will error the readable stream, making all future interactions with it fail with the given error e.
3.10.5. Readable Stream BYOB Controller Internal Methods
The following are additional internal methods implemented by each ReadableByteStreamController instance. The readable stream implementation will polymorphically call to either these or their counterparts for default controllers.
3.10.5.1. [[CancelSteps]]( reason )
3.10.5.2. [[PullSteps]]()
3.11. Class ReadableStreamBYOBRequest
The ReadableStreamBYOBRequest class represents a pull into request in a ReadableByteStreamController.
3.11.1. Class Definition
This section is non-normative.
If one were to write the ReadableStreamBYOBRequest class in something close to the syntax of [ECMASCRIPT], it would look like
class ReadableStreamBYOBRequest {
  constructor(controller, view)

  get view()

  respond(bytesWritten)
  respondWithNewView(view)
}
3.11.2. Internal Slots
Instances of ReadableStreamBYOBRequest are created with the internal slots described in the following table:
Internal Slot | Description ( non-normative ) |
---|---|
[[associatedReadableByteStreamController]] | The parent ReadableByteStreamController instance |
[[view]] | A typed array representing the destination region to which the controller may write generated data |
3.11.3. new ReadableStreamBYOBRequest( controller , view )
3.11.4. Properties of the ReadableStreamBYOBRequest Prototype
3.11.4.1. get view
3.11.4.2. respond( bytesWritten )
3.11.4.3. respondWithNewView( view )
3.12. Readable Stream BYOB Controller Abstract Operations
3.12.1. IsReadableStreamBYOBRequest ( x )
3.12.2. IsReadableByteStreamController ( x )
3.12.3. ReadableByteStreamControllerCallPullIfNeeded ( controller )
3.12.4. ReadableByteStreamControllerClearPendingPullIntos ( controller )
3.12.5. ReadableByteStreamControllerClose ( controller )
3.12.6. ReadableByteStreamControllerCommitPullIntoDescriptor ( stream , pullIntoDescriptor )
3.12.7. ReadableByteStreamControllerConvertPullIntoDescriptor ( pullIntoDescriptor )
3.12.8. ReadableByteStreamControllerEnqueue ( controller , chunk )
3.12.9. ReadableByteStreamControllerEnqueueChunkToQueue ( controller , buffer , byteOffset , byteLength )
3.12.10. ReadableByteStreamControllerError ( controller , e )
3.12.11. ReadableByteStreamControllerFillHeadPullIntoDescriptor ( controller , size , pullIntoDescriptor )
3.12.12. ReadableByteStreamControllerFillPullIntoDescriptorFromQueue ( controller , pullIntoDescriptor )
3.12.13. ReadableByteStreamControllerGetDesiredSize ( controller )
3.12.14. ReadableByteStreamControllerHandleQueueDrain ( controller )
3.12.15. ReadableByteStreamControllerInvalidateBYOBRequest ( controller )
3.12.16. ReadableByteStreamControllerProcessPullIntoDescriptorsUsingQueue ( controller )
3.12.17. ReadableByteStreamControllerPullInto ( controller , view )
3.12.18. ReadableByteStreamControllerRespond ( controller , bytesWritten )
3.12.19. ReadableByteStreamControllerRespondInClosedState ( controller , firstDescriptor )
3.12.20. ReadableByteStreamControllerRespondInReadableState ( controller , bytesWritten , pullIntoDescriptor )
3.12.21. ReadableByteStreamControllerRespondInternal ( controller , bytesWritten )
3.12.22. ReadableByteStreamControllerRespondWithNewView ( controller , view )
3.12.23. ReadableByteStreamControllerShiftPendingPullInto ( controller )
3.12.24. ReadableByteStreamControllerShouldCallPull ( controller )
4. Writable Streams
4.1. Using Writable Streams
The most common way to write to a writable stream is to pipe a readable stream to it:

readableStream.pipeTo(writableStream)
  .then(() => console.log("All data successfully written!"))
  .catch(e => console.error("Something went wrong!", e));
You can also write directly to a writable stream by acquiring a writer and using its write() and close() methods. Since writable streams queue any incoming writes, and take care internally to forward them to the underlying sink in sequence, you can indiscriminately write to a writable stream without much ceremony:
function writeArrayToStream(array, writableStream) {
  const writer = writableStream.getWriter();
  array.forEach(chunk => writer.write(chunk));

  return writer.close();
}

writeArrayToStream([1, 2, 3, 4, 5], writableStream)
  .then(() => console.log("All done!"))
  .catch(e => console.error("Error with the stream: " + e));
Note that in this example we only wait for overall success or failure, via the promise returned by the writer’s close() method. That promise (which can also be accessed using the closed getter) will reject if anything goes wrong with the stream: initializing it, writing to it, or closing it. And it will fulfill once the stream is successfully closed. Often this is all you care about. However, if you care about the success of writing a specific chunk, you can use the promise returned by the writer’s write() method:
writer.write("i am a chunk of data")
  .then(() => console.log("chunk successfully written!"))
  .catch(e => console.error(e));
What "success" means is up to a given stream instance (or more precisely, its underlying sink ) to decide. For example, for a file stream it could simply mean that the OS has accepted the write, and not necessarily that the chunk has been flushed to disk.
The desiredSize and ready properties of writable stream writers allow producers to more precisely respond to flow control signals from the stream, to keep memory usage below the stream’s specified high water mark. The following example writes an infinite sequence of random bytes to a stream, using desiredSize to determine how many bytes to generate at a given time, and using ready to wait for the backpressure to subside.
async function writeRandomBytesForever(writableStream) {
  const writer = writableStream.getWriter();

  while (true) {
    await writer.ready;

    const bytes = new Uint8Array(writer.desiredSize);
    window.crypto.getRandomValues(bytes);

    await writer.write(bytes);
  }
}

writeRandomBytesForever(myWritableStream).catch(e => console.error("Something broke", e));
4.2. Design Of The State Machine
In addition to the principles for streams in general, a number of additional considerations have informed the design of the WritableStream state machine.
Some of these design decisions improve predictability, ease-of-use, and safety for developers at the expense of making implementations more complex.
Only one sink method can ever be executing at a time.
Sink methods are treated as atomic. A new sink method will never be called until the Promise from the previous one has resolved. Most changes to the internal state do not take effect until any in-flight sink method has completed.
Exception: If something has happened that will error the stream, for example writer.abort() has been called, then new calls to writer.write() will start failing immediately. There’s no user benefit in waiting for the current operation to complete before informing the user that writer.write() has failed.
The writer.ready promise and the value of writer.desiredSize reflect whether a write() performed right now would be effective.
- They will change even while a sink method is in-flight. writer.ready will reject as soon as new calls to writer.write() will start failing. writer.desiredSize will change to null at the same time.
Because promises are dispatched asynchronously, the state can still change between writer.ready becoming fulfilled and write() being called.
- The value of writer.desiredSize decreases synchronously with every call to writer.write(). This implies that the queueing strategy’s size() function is executed synchronously.
The writer.closed promise and the promises returned by writer.close() and writer.abort() do not resolve or reject until no sink methods are executing and no further sink methods will be executed.
- If the user of the WritableStream wants to retry using the same underlying file, etc., it is important to have confidence that all other operations have ceased.
- This principle also applies to the ReadableStream pipeTo() method.
Promises fulfill in consistent order. In particular, writer.ready always resolves before writer.closed, even in cases where both are fulfilling in reaction to the same occurrence.
- Queued calls to writer methods such as write() are not cancelled when writer.releaseLock() is called. This makes them easy to use in a "fire and forget" style.
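For example, the synchronous decrease of writer.desiredSize noted in the list above can be observed directly. The following is a non-normative sketch, assuming a freshly constructed stream whose underlying sink does nothing of interest:
const stream = new WritableStream(
  { write(chunk) { /* deliver the chunk somewhere */ } },
  new CountQueuingStrategy({ highWaterMark: 3 })
);

const writer = stream.getWriter();
console.log(writer.desiredSize); // 3: the queue is empty

writer.write("a");
console.log(writer.desiredSize); // 2, updated synchronously by the write() call

writer.write("b");
console.log(writer.desiredSize); // 1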
4.3. Class WritableStream
4.3.1. Class Definition
This section is non-normative.
If one were to write the WritableStream class in something close to the syntax of [ECMASCRIPT], it would look like
class WritableStream {
  constructor(underlyingSink = {}, { size, highWaterMark = 1 } = {})

  get locked()

  abort(reason)
  getWriter()
}
4.3.2. Internal Slots
Instances of WritableStream are created with the internal slots described in the following table:
Internal Slot | Description (non-normative) |
---|---|
[[backpressure]] | The backpressure signal set by the controller |
[[closeRequest]] | The promise returned from the writer close() method |
[[inFlightWriteRequest]] | A slot set to the promise for the current in-flight write operation while the underlying sink’s write method is executing and has not yet fulfilled, used to prevent reentrant calls |
[[inFlightCloseRequest]] | A slot set to the promise for the current in-flight close operation while the underlying sink’s close method is executing and has not yet fulfilled, used to prevent the abort() method from interrupting close |
[[pendingAbortRequest]] | A Record containing the promise returned from abort() and the reason passed to abort() |
[[state]] | A string containing the stream’s current state, used internally; one of "writable", "closed", or "errored" |
[[storedError]] | A value indicating how the stream failed, to be given as a failure reason or exception when trying to operate on the stream while in the "errored" state |
[[writableStreamController]] | A WritableStreamDefaultController created with the ability to control the state and queue of this stream; also used for the IsWritableStream brand check |
[[writer]] | A WritableStreamDefaultWriter instance, if the stream is locked to a writer, or undefined if it is not |
[[writeRequests]] | A List of promises representing the stream’s internal queue of write requests not yet processed by the underlying sink |
The [[inFlightCloseRequest]] slot and [[closeRequest]] slot are mutually exclusive. Similarly, no element will be removed from [[writeRequests]] while [[inFlightWriteRequest]] is not undefined. Implementations can optimize storage for these slots based on these invariants.
4.3.3. new WritableStream(underlyingSink = {}, { size, highWaterMark = 1 } = {})
The underlyingSink object passed to the constructor can implement any of the following methods to govern how the constructed stream instance behaves:
- start(controller) is called immediately, and should perform any actions necessary to acquire access to the underlying sink. If this process is asynchronous, it can return a promise to signal success or failure.
- write(chunk, controller) is called when a new chunk of data is ready to be written to the underlying sink. It can return a promise to signal success or failure of the write operation. The stream implementation guarantees that this method will be called only after previous writes have succeeded, and never after close or abort is called.
- close(controller) is called after the producer signals that they are done writing chunks to the stream, and all queued-up writes successfully complete. It should perform any actions necessary to finalize writes to the underlying sink, and release access to it. If this process is asynchronous, it can return a promise to signal success or failure. The stream implementation guarantees that this method will be called only after all queued-up writes have succeeded.
- abort(reason) is called when the producer signals they wish to abruptly close the stream and put it in an errored state. It should clean up any held resources, much like close, but perhaps with some custom handling. Unlike close, abort will be called even if writes are queued up; those chunks will be thrown away. If this process is asynchronous, it can return a promise to signal success or failure.
The controller object passed to start, write and close is an instance of WritableStreamDefaultController, and has the ability to error the stream.
The constructor also accepts a second argument containing the queuing strategy object with two properties: a non-negative number highWaterMark, and a function size(chunk). The supplied strategy could be an instance of the built-in CountQueuingStrategy or ByteLengthQueuingStrategy classes, or it could be custom. If no strategy is supplied, the default behavior will be the same as a CountQueuingStrategy with a high water mark of 1.
This is to allow us to add new potential types in the future, without backward-compatibility concerns.
1. Set this.[[writableStreamController]] to ? Construct(WritableStreamDefaultController, « this, underlyingSink, size, highWaterMark »).
2. Perform ? this.[[writableStreamController]].[[StartSteps]]().
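For example, a stream using the sink methods and queuing strategy arguments described above could be constructed as follows. This is a non-normative sketch; the "sink" simply collects chunks into an in-memory array so that every piece is self-contained:
const received = [];

const writableStream = new WritableStream(
  {
    start(controller) {
      // Acquire access to the underlying sink here; return a promise if that
      // process is asynchronous. Nothing is needed for this in-memory sink.
    },

    write(chunk, controller) {
      // Called only after any previous write has succeeded.
      received.push(chunk);
    },

    close(controller) {
      // Finalize writes and release the underlying sink.
      console.log("all queued chunks written:", received);
    },

    abort(reason) {
      // Abrupt termination; queued chunks are discarded.
      console.error("stream aborted:", reason);
    }
  },
  new CountQueuingStrategy({ highWaterMark: 4 })
);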
4.3.4. Properties of the WritableStream Prototype
4.3.4.1. get locked
The locked getter returns whether or not the writable stream is locked to a writer.
4.3.4.2. abort( reason )
The abort method aborts the stream, signaling that the producer can no longer successfully write to the stream and it should be immediately moved to an errored state, with any queued-up writes discarded. This will also execute any abort mechanism of the underlying sink.
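A brief non-normative sketch, using the myWritableStream placeholder from earlier examples. (The writer’s lock is released first, since the stream-level abort() method is only usable while the stream is not locked to a writer.)
const writer = myWritableStream.getWriter();
writer.write("this chunk may be discarded");
writer.releaseLock();

myWritableStream.abort(new Error("user clicked cancel"))
  .then(() => console.log("stream aborted; queued writes were discarded"))
  .catch(e => console.error("could not abort", e));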
4.3.4.3. getWriter()
The getWriter method creates a writer (an instance of WritableStreamDefaultWriter) and locks the stream to the new writer. While the stream is locked, no other writer can be acquired until this one is released.
This functionality is especially useful for creating abstractions that desire the ability to write to a stream without interruption or interleaving. By getting a writer for the stream, you can ensure nobody else can write at the same time, which would cause the resulting written data to be unpredictable and probably useless.
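A brief non-normative sketch of this locking behavior, using the myWritableStream placeholder from the earlier examples:
const writer = myWritableStream.getWriter();
console.log(myWritableStream.locked); // true

try {
  myWritableStream.getWriter(); // a second writer cannot be acquired...
} catch (e) {
  console.log("already locked:", e); // ...so this throws a TypeError
}

writer.releaseLock();
console.log(myWritableStream.locked); // false; a new writer could now be acquired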
4.4. General Writable Stream Abstract Operations
The following abstract operations, unlike most in this specification, are meant to be generally useful by other specifications, instead of just being part of the implementation of this spec’s classes.
4.4.1. AcquireWritableStreamDefaultWriter ( stream )
4.4.2. IsWritableStream ( x )
4.4.3. IsWritableStreamLocked ( stream )
This abstract operation is meant to be called from other specifications that may wish to query whether or not a writable stream is locked to a writer .
4.4.4. WritableStreamAbort ( stream , reason )
4.4.5. WritableStreamError ( stream , error )
4.4.6. WritableStreamFinishAbort ( stream )
4.5. Writable Stream Abstract Operations Used by Controllers
To allow future flexibility to add different writable stream behaviors (similar to the distinction between simple readable streams and readable byte streams), much of the internal state of a writable stream is encapsulated by the WritableStreamDefaultController class. At this point in time the division of work between the stream and its controller may seem somewhat arbitrary, but centralizing much of the logic in the controller is a useful structure for the future.
The abstract operations in this section are interfaces that are used by the controller implementation to affect its associated WritableStream object, translating the controller’s internal state changes into developer-facing results visible through the WritableStream’s public API.
4.5.1. WritableStreamAddWriteRequest ( stream )
4.5.2. WritableStreamFinishInFlightWrite ( stream )
4.5.3. WritableStreamFinishInFlightWriteInErroredState ( stream )
4.5.4. WritableStreamFinishInFlightWriteWithError ( stream , error )
4.5.5. WritableStreamFinishInFlightClose ( stream )
4.5.6. WritableStreamFinishInFlightCloseInErroredState ( stream )
4.5.7. WritableStreamFinishInFlightCloseWithError ( stream , error )
4.5.8. WritableStreamCloseQueuedOrInFlight ( stream )
4.5.9. WritableStreamHandleAbortRequestIfPending ( stream )
4.5.10. WritableStreamHasOperationMarkedInFlight ( stream )
4.5.11. WritableStreamMarkCloseRequestInFlight ( stream )
4.5.12. WritableStreamMarkFirstWriteRequestInFlight ( stream )
4.5.13. WritableStreamRejectClosedPromiseInReactionToError ( stream )
4.5.14. WritableStreamRejectAbortRequestIfPending ( stream )
4.5.15. WritableStreamRejectPromisesInReactionToError ( stream )
4.5.16. WritableStreamUpdateBackpressure ( stream , backpressure )
4.6. Class WritableStreamDefaultWriter
The WritableStreamDefaultWriter class represents a writable stream writer designed to be vended by a WritableStream instance.
4.6.1. Class Definition
This section is non-normative.
If one were to write the WritableStreamDefaultWriter class in something close to the syntax of [ECMASCRIPT], it would look like
class WritableStreamDefaultWriter {
  constructor(stream)

  get closed()
  get desiredSize()
  get ready()

  abort(reason)
  close()
  releaseLock()
  write(chunk)
}
4.6.2. Internal Slots
Instances of WritableStreamDefaultWriter are created with the internal slots described in the following table:
Internal Slot | Description (non-normative) |
---|---|
[[closedPromise]] | A promise returned by the writer’s closed getter |
[[ownerWritableStream]] | A WritableStream instance that owns this writer |
[[readyPromise]] | A promise returned by the writer’s ready getter |
4.6.3. new WritableStreamDefaultWriter(stream)
The WritableStreamDefaultWriter constructor is generally not meant to be used directly; instead, a stream’s getWriter() method should be used.
4.6.4. Properties of the WritableStreamDefaultWriter Prototype
4.6.4.1. get closed
The closed getter returns a promise that will be fulfilled when the stream becomes closed, or rejected if the stream ever errors or the writer’s lock is released before the stream finishes closing.
4.6.4.2. get desiredSize
The desiredSize getter returns the desired size to fill the stream’s internal queue. It can be negative, if the queue is over-full. A producer should use this information to determine the right amount of data to write. It will be null if the stream cannot be successfully written to, for example because it has become errored or an abort has been requested.
4.6.4.3. get ready
The ready getter returns a promise that will be fulfilled when the desired size to fill the stream’s internal queue transitions from nonpositive to positive, signaling that it is no longer applying backpressure. Once the desired size to fill the stream’s internal queue dips back to zero or below, the getter will return a new promise that stays pending until the next transition.
If the stream becomes errored or aborted, or the writer’s lock is released , the returned promise will become rejected.
4.6.4.4. abort( reason )
The abort method behaves the same as that for the associated stream, provided the writer is still active. (Otherwise, it returns a rejected promise.)
4.6.4.5. close()
The close method will close the associated writable stream. The underlying sink will finish processing any previously-written chunks, before invoking its close behavior. During this time any further attempts to write will fail (without erroring the stream).
The method returns a promise that is fulfilled once the stream finishes closing.
4.6.4.6. releaseLock()
The releaseLock method releases the writer’s lock on the corresponding stream. After the lock is released, the writer is no longer active. If the associated stream is errored when the lock is released, the writer will appear errored in the same way from now on; otherwise, the writer will appear closed.
Note that the lock can still be released even if some ongoing writes have not yet finished (i.e. even if the promises returned from previous calls to write() have not yet settled). It’s not required to hold the lock on the writer for the duration of the write; the lock instead simply prevents other producers from writing in an interleaved manner.
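For example, the following non-normative sketch uses the "fire and forget" style mentioned earlier: the queued writes are not cancelled by releasing the lock, so the lock only needs to be held while the calls are being made.
function writeGreeting(writableStream) {
  const writer = writableStream.getWriter();

  writer.write("Hello, ");
  writer.write("web socket!");

  // The two queued chunks above will still be delivered, in order, to the
  // underlying sink; another producer is free to acquire its own writer now.
  writer.releaseLock();
}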
4.6.4.7. write( chunk )
The write method writes the given chunk to the writable stream, by waiting until any previous writes have finished successfully, and then sending the chunk to the underlying sink. It will return a promise that fulfills upon a successful write, and rejects if the write fails.
Note that what "success" means is up to the underlying sink ; it may indicate simply that the chunk has been accepted, and not necessarily that it is safely saved to its ultimate destination.
4.7. Writable Stream Writer Abstract Operations
4.7.1. IsWritableStreamDefaultWriter ( x )
4.7.2. WritableStreamDefaultWriterAbort ( writer , reason )
4.7.3. WritableStreamDefaultWriterClose ( writer )
4.7.4. WritableStreamDefaultWriterCloseWithErrorPropagation ( writer )
This abstract operation helps implement the error propagation semantics of pipeTo().
4.7.5. WritableStreamDefaultWriterEnsureReadyPromiseRejected ( writer , error )
4.7.6. WritableStreamDefaultWriterGetDesiredSize ( writer )
4.7.7. WritableStreamDefaultWriterRelease ( writer )
4.7.8. WritableStreamDefaultWriterWrite ( writer , chunk )
4.8. Class WritableStreamDefaultController
The WritableStreamDefaultController class has methods that allow control of a WritableStream’s state. When constructing a WritableStream, the underlying sink is given a corresponding WritableStreamDefaultController instance to manipulate.
4.8.1. Class Definition
This section is non-normative.
If one were to write the WritableStreamDefaultController class in something close to the syntax of [ECMASCRIPT], it would look like
class WritableStreamDefaultController {
  constructor(stream, underlyingSink, size, highWaterMark)

  error(e)
}
4.8.2. Internal Slots
Instances of WritableStreamDefaultController are created with the internal slots described in the following table:
Internal Slot | Description (non-normative) |
---|---|
[[controlledWritableStream]] | The WritableStream instance controlled |
[[queue]] | A List representing the stream’s internal queue of chunks |
[[queueTotalSize]] | The total size of all the chunks stored in [[queue]] (see § 6.3 Queue-with-Sizes Operations) |
[[started]] | A boolean flag indicating whether the underlying sink has finished starting |
[[strategyHWM]] | A number supplied to the constructor as part of the stream’s queuing strategy, indicating the point at which the stream will apply backpressure to its underlying sink |
[[strategySize]] | A function supplied to the constructor as part of the stream’s queuing strategy, designed to calculate the size of enqueued chunks; can be undefined if no such function was supplied |
[[underlyingSink]] | An object representation of the stream’s underlying sink; also used for the IsWritableStreamDefaultController brand check |
4.8.3. new WritableStreamDefaultController(stream, underlyingSink, size, highWaterMark)
The WritableStreamDefaultController constructor cannot be used directly; it only works on a WritableStream that is in the middle of being constructed.
4.8.4. Properties of the WritableStreamDefaultController Prototype
4.8.4.1. error( e )
The error method will error the writable stream, making all future interactions with it fail with the given error e.
This method is rarely used, since usually it suffices to return a rejected promise from one of the underlying sink ’s methods. However, it can be useful for suddenly shutting down a stream in response to an event outside the normal lifecycle of interactions with the underlying sink .
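The following non-normative sketch illustrates that pattern; signalSource is a hypothetical object that emits a "shutdown" event and accepts chunks via a send() method:
function makeSignalAwareWritableStream(signalSource) {
  return new WritableStream({
    start(controller) {
      // Keep the controller from start() and use it when the external event fires.
      signalSource.addEventListener("shutdown", () => {
        controller.error(new Error("the signal source was shut down"));
      });
    },

    write(chunk) {
      // Failures of an individual write would normally be signaled by returning
      // a rejected promise from here instead.
      signalSource.send(chunk);
    }
  });
}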
4.8.5. Writable Stream Default Controller Internal Methods
The following are additional internal methods implemented by each WritableStreamDefaultController instance. The writable stream implementation will call into these.
The reason these are in method form, instead of as abstract operations, is to make it clear that the writable stream implementation is decoupled from the controller implementation, and could in the future be expanded with other controllers, as long as those controllers implemented such internal methods. A similar scenario is seen for readable streams, where there actually are multiple controller types and as such the counterpart internal methods are used polymorphically.
4.8.5.1. [[AbortSteps]]()
4.8.5.2. [[ErrorSteps]]()
4.8.5.3. [[StartSteps]]()
4.9. Writable Stream Default Controller Abstract Operations
4.9.1. IsWritableStreamDefaultController ( x )
4.9.2. WritableStreamDefaultControllerClose ( controller )
4.9.3. WritableStreamDefaultControllerGetChunkSize ( controller , chunk )
4.9.4. WritableStreamDefaultControllerGetDesiredSize ( controller )
4.9.5. WritableStreamDefaultControllerWrite ( controller , chunk , chunkSize )
4.9.6. WritableStreamDefaultControllerAdvanceQueueIfNeeded ( controller )
4.9.7. WritableStreamDefaultControllerErrorIfNeeded ( controller , error )
4.9.8. WritableStreamDefaultControllerProcessClose ( controller )
4.9.9. WritableStreamDefaultControllerProcessWrite ( controller , chunk )
4.9.10. WritableStreamDefaultControllerGetBackpressure ( controller )
4.9.11. WritableStreamDefaultControllerError ( controller , error )
5. Transform Streams
Transform streams have been developed in the testable implementation, but not yet re-encoded in spec language. We are waiting to validate their design before doing so. In the meantime, see reference-implementation/lib/transform-stream.js .
6. Other Stream APIs and Operations
6.1. Class ByteLengthQueuingStrategy
A common queuing strategy when dealing with bytes is to wait until the accumulated byteLength properties of the incoming chunks reaches a specified high-water mark. As such, this is provided as a built-in queuing strategy that can be used when constructing streams.
const stream = new ReadableStream(
  { ... },
  new ByteLengthQueuingStrategy({ highWaterMark: 16 * 1024 })
);
In this case, 16 KiB worth of chunks can be enqueued by the readable stream’s underlying source before the readable stream implementation starts sending backpressure signals to the underlying source.
const stream = new WritableStream(
  { ... },
  new ByteLengthQueuingStrategy({ highWaterMark: 32 * 1024 })
);
In this case, 32 KiB worth of chunks can be accumulated in the writable stream’s internal queue, waiting for previous writes to the underlying sink to finish, before the writable stream starts sending backpressure signals to any producers .
6.1.1. Class Definition
This section is non-normative.
If one were to write the ByteLengthQueuingStrategy class in something close to the syntax of [ECMASCRIPT], it would look like
class ByteLengthQueuingStrategy {
  constructor({ highWaterMark })

  size(chunk)
}
Each ByteLengthQueuingStrategy instance will additionally have an own data property highWaterMark set by its constructor.
6.1.2. new ByteLengthQueuingStrategy({ highWaterMark })
6.1.3. Properties of the ByteLengthQueuingStrategy Prototype
6.1.3.1. size( chunk )
The size method returns the given chunk’s byteLength property. (If the chunk doesn’t have one, it will return undefined.)
This method is intentionally generic; it does not require that its this value be a ByteLengthQueuingStrategy object.
6.2. Class CountQueuingStrategy
A common queuing strategy when dealing with streams of generic objects is to simply count the number of chunks that have been accumulated so far, waiting until this number reaches a specified high-water mark. As such, this strategy is also provided out of the box.
const stream = new ReadableStream(
  { ... },
  new CountQueuingStrategy({ highWaterMark: 10 })
);
In this case, 10 chunks (of any kind) can be enqueued by the readable stream’s underlying source before the readable stream implementation starts sending backpressure signals to the underlying source.
const stream = new WritableStream(
  { ... },
  new CountQueuingStrategy({ highWaterMark: 5 })
);
In this case, five chunks (of any kind) can be accumulated in the writable stream’s internal queue, waiting for previous writes to the underlying sink to finish, before the writable stream starts sending backpressure signals to any producers .
6.2.1. Class Definition
This section is non-normative.
If one were to write the CountQueuingStrategy class in something close to the syntax of [ECMASCRIPT], it would look like
class CountQueuingStrategy {
  constructor({ highWaterMark })

  size()
}
Each CountQueuingStrategy instance will additionally have an own data property highWaterMark set by its constructor.
6.2.2. new CountQueuingStrategy({ highWaterMark })
6.2.3. Properties of the CountQueuingStrategy Prototype
6.2.3.1. size()
The size method always returns one, so that the total queue size is a count of the number of chunks in the queue.
This method is intentionally generic; it does not require that its this value be a CountQueuingStrategy object.
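Since, as described in the constructor sections above, a queuing strategy is simply an object with a highWaterMark property and a size() function, a custom strategy needs no special class. The following non-normative sketch behaves like a CountQueuingStrategy with a high water mark of 5:
const countingStrategy = {
  highWaterMark: 5,
  size(chunk) {
    // Count every chunk as 1, regardless of its contents.
    return 1;
  }
};

const stream = new WritableStream(
  {
    write(chunk) {
      console.log("writing", chunk);
    }
  },
  countingStrategy
);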
6.3. Queue-with-Sizes Operations
The streams in this specification use a "queue-with-sizes" data structure to store queued up values, along with their determined sizes. Various specification objects contain a queue-with-sizes, represented by the object having two paired internal slots, always named [[queue]] and [[queueTotalSize]]. [[queue]] is a List of Records with [[value]] and [[size]] fields, and [[queueTotalSize]] is a JavaScript Number, i.e. a double-precision floating point number.
The following abstract operations are used when operating on objects that contain queues-with-sizes, in order to ensure that the two internal slots stay synchronized.
Due to the limited precision of floating-point arithmetic, the framework specified here, of keeping a running total in the [[queueTotalSize]] slot, is not equivalent to adding up the size of all chunks in [[queue]]. (However, this only makes a difference when there is a huge (~10^15) variance in size between chunks, or when trillions of chunks are enqueued.)
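As a rough, non-normative JavaScript sketch (not the normative algorithms themselves), the paired slots described above could be kept in sync like this:
class QueueWithSizes {
  constructor() {
    this.queue = [];          // analogous to [[queue]]: { value, size } records
    this.queueTotalSize = 0;  // analogous to [[queueTotalSize]]: a running total
  }

  enqueueValueWithSize(value, size) {
    if (!Number.isFinite(size) || size < 0) {
      throw new RangeError("size must be a finite, non-negative number");
    }
    this.queue.push({ value, size });
    this.queueTotalSize += size;
  }

  dequeueValue() {
    const { value, size } = this.queue.shift(); // assumes the queue is non-empty
    this.queueTotalSize -= size;
    if (this.queueTotalSize < 0) {
      this.queueTotalSize = 0; // guard against floating-point drift below zero
    }
    return value;
  }
}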
6.3.1. DequeueValue ( container )
6.3.2. EnqueueValueWithSize ( container , value , size )
6.3.3. PeekQueueValue ( container )
6.3.4. ResetQueue ( container )
6.4. Miscellaneous Operations
A few abstract operations are used in this specification for utility purposes. We define them here.
6.4.1. InvokeOrNoop ( O , P , args )
6.4.2. IsFiniteNonNegativeNumber ( v )
6.4.3. PromiseInvokeOrNoop ( O , P , args )
6.4.4. ValidateAndNormalizeHighWaterMark ( highWaterMark )
+∞ is explicitly allowed as a valid high water mark. It causes backpressure to never be applied.
1. Return highWaterMark.
6.4.5. ValidateAndNormalizeQueuingStrategy ( size , highWaterMark )
7. Global Properties
The following constructors must be exposed on the global object as data properties of the same name:
- ReadableStream
- WritableStream
- ByteLengthQueuingStrategy
- CountQueuingStrategy
The attributes of these properties must be { [[Writable]]: true, [[Enumerable]]: false, [[Configurable]]: true }.
The ReadableStreamDefaultReader, ReadableStreamBYOBReader, ReadableStreamDefaultController, ReadableByteStreamController, WritableStreamDefaultWriter, and WritableStreamDefaultController classes are specifically not exposed, as they are not independently useful.
8. Examples of Creating Streams
This section, and all its subsections, are non-normative.
The previous examples throughout the standard have focused on how to use streams. Here we show how to create a stream, using the ReadableStream or WritableStream constructors.
8.1. A readable stream with an underlying push source (no backpressure support)
The following function creates readable streams that wrap WebSocket instances [HTML], which are push sources that do not support backpressure signals. It illustrates how, when adapting a push source, usually most of the work happens in the start function.
function makeReadableWebSocketStream(url, protocols) {
  const ws = new WebSocket(url, protocols);
  ws.binaryType = "arraybuffer";

  return new ReadableStream({
    start(controller) {
      ws.onmessage = event => controller.enqueue(event.data);
      ws.onclose = () => controller.close();
      ws.onerror = () => controller.error(new Error("The WebSocket errored!"));
    },

    cancel() {
      ws.close();
    }
  });
}
We can then use this function to create readable streams for a web socket, and pipe that stream to an arbitrary writable stream:
const webSocketStream = makeReadableWebSocketStream("wss://example.com:443/", "protocol");

webSocketStream.pipeTo(writableStream)
  .then(() => console.log("All data successfully written!"))
  .catch(e => console.error("Something went wrong!", e));
8.2. A readable stream with an underlying push source and backpressure support
The following function returns readable streams that wrap "backpressure sockets," which are hypothetical objects that have the same API as web sockets, but also provide the ability to pause and resume the flow of data with their readStop and readStart methods. In doing so, this example shows how to apply backpressure to underlying sources that support it.
function makeReadableBackpressureSocketStream(host, port) {
  const socket = createBackpressureSocket(host, port);

  return new ReadableStream({
    start(controller) {
      socket.ondata = event => {
        controller.enqueue(event.data);

        if (controller.desiredSize <= 0) {
          // The internal queue is full, so propagate
          // the backpressure signal to the underlying source.
          socket.readStop();
        }
      };

      socket.onend = () => controller.close();
      socket.onerror = () => controller.error(new Error("The socket errored!"));
    },

    pull() {
      // This is called if the internal queue has been emptied, but the
      // stream’s consumer still wants more data. In that case, restart
      // the flow of data if we have previously paused it.
      socket.readStart();
    },

    cancel() {
      socket.close();
    }
  });
}
We can then use this function to create readable streams for such "backpressure sockets" in the same way we do for web sockets. This time, however, when we pipe to a destination that cannot accept data as fast as the socket is producing it, or if we leave the stream alone without reading from it for some time, a backpressure signal will be sent to the socket.
8.3. A readable byte stream with an underlying push source (no backpressure support)
The following function returns readable byte streams that wrap a hypothetical UDP socket API, including a promise-returning select2() method that is meant to be evocative of the POSIX select(2) system call. Since the UDP protocol does not have any built-in backpressure support, the backpressure signal given by desiredSize is ignored, and the stream ensures that when data is available from the socket but not yet requested by the developer, it is enqueued in the stream’s internal queue, to avoid overflow of the kernel-space queue and a consequent loss of data.
This has some interesting consequences for how consumers interact with the stream. If the consumer does not read data as fast as the socket produces it, the chunks will remain in the stream’s internal queue indefinitely. In this case, using a BYOB reader will cause an extra copy, to move the data from the stream’s internal queue to the developer-supplied buffer. However, if the consumer consumes the data quickly enough, a BYOB reader will allow zero-copy reading directly into developer-supplied buffers.
(You can imagine a more complex version of this example which uses desiredSize to inform an out-of-band backpressure signaling mechanism, for example by sending a message down the socket to adjust the rate of data being sent. That is left as an exercise for the reader.)
const DEFAULT_CHUNK_SIZE = 65536;

function makeUDPSocketStream(host, port) {
  const socket = createUDPSocket(host, port);

  return new ReadableStream({
    type: "bytes",

    start(controller) {
      readRepeatedly().catch(e => controller.error(e));

      function readRepeatedly() {
        return socket.select2().then(() => {
          // Since the socket can become readable even when there’s
          // no pending BYOB requests, we need to handle both cases.
          let bytesRead;
          if (controller.byobRequest) {
            const v = controller.byobRequest.view;
            bytesRead = socket.readInto(v.buffer, v.byteOffset, v.byteLength);
            controller.byobRequest.respond(bytesRead);
          } else {
            const buffer = new ArrayBuffer(DEFAULT_CHUNK_SIZE);
            bytesRead = socket.readInto(buffer, 0, DEFAULT_CHUNK_SIZE);
            controller.enqueue(new Uint8Array(buffer, 0, bytesRead));
          }

          if (bytesRead === 0) {
            controller.close();
            return;
          }

          return readRepeatedly();
        });
      }
    },

    cancel() {
      socket.close();
    }
  });
}
ReadableStream instances returned from this function can now vend BYOB readers, with all of the aforementioned benefits and caveats.
8.4. A readable stream with an underlying pull source
The following function returns readable streams that wrap portions of the Node.js file system API (which themselves map fairly directly to C’s fopen, fread, and fclose trio). Files are a typical example of pull sources. Note how in contrast to the examples with push sources, most of the work here happens on-demand in the pull function, and not at startup time in the start function.
const fs = require("pr/fs"); // https://github.com/jden/pr
const CHUNK_SIZE = 1024;

function makeReadableFileStream(filename) {
  let fd;
  let position = 0;

  return new ReadableStream({
    start() {
      return fs.open(filename, "r").then(result => {
        fd = result;
      });
    },

    pull(controller) {
      const buffer = new ArrayBuffer(CHUNK_SIZE);

      return fs.read(fd, buffer, 0, CHUNK_SIZE, position).then(bytesRead => {
        if (bytesRead === 0) {
          return fs.close(fd).then(() => controller.close());
        } else {
          position += bytesRead;
          controller.enqueue(new Uint8Array(buffer, 0, bytesRead));
        }
      });
    },

    cancel() {
      return fs.close(fd);
    }
  });
}
We can then create and use readable streams for files just as we could before for sockets.
8.5. A readable byte stream with an underlying pull source
The following function returns readable byte streams that allow efficient zero-copy reading of files, again using the Node.js file system API . Instead of using a predetermined chunk size of 1024, it attempts to fill the developer-supplied buffer, allowing full control.
const fs = require("pr/fs"); // https://github.com/jden/pr
const DEFAULT_CHUNK_SIZE = 1024;

function makeReadableByteFileStream(filename) {
  let fd;
  let position = 0;

  return new ReadableStream({
    type: "bytes",

    start() {
      return fs.open(filename, "r").then(result => {
        fd = result;
      });
    },

    pull(controller) {
      // Even when the consumer is using the default reader, the auto-allocation
      // feature allocates a buffer and passes it to us via byobRequest.
      const v = controller.byobRequest.view;

      return fs.read(fd, v.buffer, v.byteOffset, v.byteLength, position).then(bytesRead => {
        if (bytesRead === 0) {
          return fs.close(fd).then(() => controller.close());
        } else {
          position += bytesRead;
          controller.byobRequest.respond(bytesRead);
        }
      });
    },

    cancel() {
      return fs.close(fd);
    },

    autoAllocateChunkSize: DEFAULT_CHUNK_SIZE
  });
}
With this in hand, we can create and use BYOB readers for the returned ReadableStream. But we can also create default readers, using them in the same simple and generic manner as usual. The adaptation between the low-level byte tracking of the underlying byte source shown here, and the higher-level chunk-based consumption of a default reader, is all taken care of automatically by the streams implementation. The auto-allocation feature, via the autoAllocateChunkSize option, even allows us to write less code, compared to the manual branching in § 8.3 A readable byte stream with an underlying push source (no backpressure support).
8.6. A writable stream with no backpressure or success signals
The following function returns a writable stream that wraps a WebSocket [HTML]. Web sockets do not provide any way to tell when a given chunk of data has been successfully sent (without awkward polling of bufferedAmount, which we leave as an exercise to the reader). As such, this writable stream has no ability to communicate accurate backpressure signals or write success/failure to its producers. That is, the promises returned by its writer’s write() method and ready getter will always fulfill immediately.
function makeWritableWebSocketStream(url, protocols) {
  const ws = new WebSocket(url, protocols);

  return new WritableStream({
    start(controller) {
      ws.onerror = () => controller.error(new Error("The WebSocket errored!"));
      return new Promise(resolve => ws.onopen = resolve);
    },

    write(chunk) {
      ws.send(chunk);
      // Return immediately, since the web socket gives us no easy way to tell
      // when the write completes.
    },

    close() {
      return new Promise((resolve, reject) => {
        ws.onclose = resolve;
        ws.close(1000);
      });
    },

    abort(reason) {
      return new Promise((resolve, reject) => {
        ws.onclose = resolve;
        ws.close(4000, reason && reason.message);
      });
    }
  });
}
We can then use this function to create writable streams for a web socket, and pipe an arbitrary readable stream to it:
const webSocketStream = makeWritableWebSocketStream("wss://example.com:443/", "protocol");

readableStream.pipeTo(webSocketStream)
  .then(() => console.log("All data successfully written!"))
  .catch(e => console.error("Something went wrong!", e));
8.7. A writable stream with backpressure and success signals
The following function returns writable streams that wrap portions of the Node.js file system API (which themselves map fairly directly to C’s fopen, fwrite, and fclose trio). Since the API we are wrapping provides a way to tell when a given write succeeds, this stream will be able to communicate backpressure signals as well as whether an individual write succeeded or failed.
const fs = require("pr/fs"); // https://github.com/jden/pr

function makeWritableFileStream(filename) {
  let fd;

  return new WritableStream({
    start() {
      return fs.open(filename, "w").then(result => {
        fd = result;
      });
    },

    write(chunk) {
      return fs.write(fd, chunk, 0, chunk.length);
    },

    close() {
      return fs.close(fd);
    },

    abort() {
      return fs.close(fd);
    }
  });
}
We can then use this function to create a writable stream for a file, and write individual chunks of data to it:
const fileStream = makeWritableFileStream("/example/path/on/fs.txt");
const writer = fileStream.getWriter();

writer.write("To stream, or not to stream\n");
writer.write("That is the question\n");

writer.close()
  .then(() => console.log("chunks written and stream closed successfully!"))
  .catch(e => console.error(e));
Note that if a particular call to fs.write takes a longer time, the returned promise will fulfill later. In the meantime, additional writes can be queued up, which are stored in the stream’s internal queue. The accumulation of chunks in this queue can change the stream to return a pending promise from the ready getter, which is a signal to producers that they should back off and stop writing if possible.
The way in which the writable stream queues up writes is especially important in this case, since as stated in the documentation for fs.write, "it is unsafe to use fs.write multiple times on the same file without waiting for the [promise]." But we don’t have to worry about that when writing the makeWritableFileStream function, since the stream implementation guarantees that the underlying sink’s write method will not be called until any promises returned by previous calls have fulfilled!
8.8. A { readable, writable } stream pair wrapping the same underlying resource
The following function returns an object of the form { readable, writable }, with the readable property containing a readable stream and the writable property containing a writable stream, where both streams wrap the same underlying web socket resource. In essence, this combines § 8.1 A readable stream with an underlying push source (no backpressure support) and § 8.6 A writable stream with no backpressure or success signals.
While doing so, it illustrates how you can use JavaScript classes to create reusable underlying sink and underlying source abstractions.
function streamifyWebSocket(url, protocol) {
  const ws = new WebSocket(url, protocol);
  ws.binaryType = "arraybuffer";

  return {
    readable: new ReadableStream(new WebSocketSource(ws)),
    writable: new WritableStream(new WebSocketSink(ws))
  };
}

class WebSocketSource {
  constructor(ws) {
    this._ws = ws;
  }

  start(controller) {
    this._ws.onmessage = event => controller.enqueue(event.data);
    this._ws.onclose = () => controller.close();

    this._ws.addEventListener("error", () => {
      controller.error(new Error("The WebSocket errored!"));
    });
  }

  cancel() {
    this._ws.close();
  }
}

class WebSocketSink {
  constructor(ws) {
    this._ws = ws;
  }

  start(controller) {
    this._ws.addEventListener("error", () => {
      controller.error(new Error("The WebSocket errored!"));
    });

    return new Promise(resolve => this._ws.onopen = resolve);
  }

  write(chunk) {
    this._ws.send(chunk);
  }

  close() {
    return new Promise((resolve, reject) => {
      this._ws.onclose = resolve;
      this._ws.close();
    });
  }

  abort(reason) {
    return new Promise((resolve, reject) => {
      this._ws.onclose = resolve;
      this._ws.close(4000, reason && reason.message);
    });
  }
}
We can then use the objects created by this function to communicate with a remote web socket, using the standard stream APIs:
const streamyWS = streamifyWebSocket("wss://example.com:443/", "protocol");
const writer = streamyWS.writable.getWriter();
const reader = streamyWS.readable.getReader();

writer.write("Hello");
writer.write("web socket!");

reader.read().then(({ value, done }) => {
  console.log("The web socket says: ", value);
});
Note how in this setup canceling the readable side will implicitly close the writable side, and similarly, closing or aborting the writable side will implicitly close the readable side.
Conventions
This specification uses algorithm conventions very similar to those of [ECMASCRIPT] . However, it deviates in the following ways, mostly for brevity. It is hoped (and vaguely planned) that eventually the conventions of ECMAScript itself will evolve in these ways.
- We use destructuring notation in function and method declarations, and assume that the destructuring assignment procedure was performed before the algorithm starts.
- We similarly use the default argument notation = {} in a couple of cases.
- We use "this" instead of "this value".
- We use the shorthand phrases from the [PROMISES-GUIDE] to operate on promises at a higher level than the ECMAScript spec does.
It’s also worth noting that, as in [ECMASCRIPT] , all numbers are represented as double-precision floating point values, and all arithmetic operations performed on them must be done in the standard way for such values.
Acknowledgments
The editor would like to thank Adam Rice, Anne van Kesteren, Ben Kelly, Brian di Palma, Calvin Metcalf, Dominic Tarr, Ed Hager, Forbes Lindesay, 贺师俊 (hax), isonmad, Jake Archibald, Jens Nockert, Mangala Sadhu Sangeet Singh Khalsa, Marcos Caceres, Marvin Hagemeister, Michael Mior, Mihai Potra, Simon Menke, Stephen Sugden, Tab Atkins, Tanguy Krotoff, Thorsten Lorenz, Till Schneidereit, Tim Caswell, Trevor Norris, tzik, Youenn Fablet, and Xabier Rodríguez for their contributions to this specification.
Special thanks to: Bert Belder for bringing up implementation concerns that led to crucial API changes; Forrest Norvell for his work on the initial reference implementation; Gorgi Kosev for his breakthrough idea of separating piping into two methods, thus resolving a major sticking point ; Isaac Schlueter for his pioneering work on JavaScript streams in Node.js; Jake Verbaten for his early involvement and support; Janessa Det for the logo; Will Chan for his help ensuring that the API allows high-performance network streaming; and 平野裕 (Yutaka Hirano) for his help with the readable stream reader design.
This standard is written by Domenic Denicola ( Google , d@domenic.me ) and 吉野剛史 (Takeshi Yoshino, Google , tyoshino@chromium.org ).
Per CC0 , to the extent possible under law, the editor has waived all copyright and related or neighboring rights to this work.
Intellectual property rights
Copyright © WHATWG (Apple, Google, Mozilla, Microsoft). This work is licensed under a Creative Commons Attribution 4.0 International License .