diff --git a/media-source-respec.html b/media-source-respec.html
index 6e65ddb..6694fea 100644

Definitions

SourceBuffer byte stream format specification

The specific [=byte stream format specification=] that describes the format of the byte stream accepted by a SourceBuffer instance. The [=byte stream format specification=], for a SourceBuffer object, is initially selected based on the |type:DOMString| passed to the {{MediaSource/addSourceBuffer()}} call that created the object, and can be updated by {{SourceBuffer/changeType()}} calls on the object.

SourceBuffer configuration

Attributes

  • If the value being set is negative or NaN then throw a {{TypeError}} exception and abort these steps.
  • If the {{MediaSource/readyState}} attribute is not {{ReadyState/""open""}} then throw an {{InvalidStateError}} exception and abort these steps.
  • If the {{SourceBuffer/updating}} attribute equals true on any SourceBuffer in {{MediaSource/sourceBuffers}}, then throw an {{InvalidStateError}} exception and abort these steps.
  • Run the [=duration change=] algorithm with |new duration:unrestricted double| set to the value being assigned to this attribute.

    The [=duration change=] algorithm will adjust |new duration| higher if there is any currently buffered coded frame with a higher end time.

    {{SourceBuffer/appendBuffer()}} and {{MediaSource/endOfStream()}} can update the duration under certain circumstances.
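The setter steps above can be sketched, non-normatively, as plain JavaScript. The object shape (and the `bufferedEnd` field standing in for a SourceBuffer's highest buffered coded-frame end time) is an assumption for illustration only.

```javascript
// Non-normative sketch of the duration setter's validation steps.
// "bufferedEnd" is a hypothetical stand-in for the highest buffered end time.
function setDuration(mediaSource, value) {
  // Negative or NaN values are rejected with a TypeError.
  if (Number.isNaN(value) || value < 0) {
    throw new TypeError("duration must be non-negative and not NaN");
  }
  // readyState must be "open".
  if (mediaSource.readyState !== "open") {
    throw new DOMException("readyState is not 'open'", "InvalidStateError");
  }
  // No SourceBuffer may have a pending update.
  if (mediaSource.sourceBuffers.some((sb) => sb.updating)) {
    throw new DOMException("a SourceBuffer is updating", "InvalidStateError");
  }
  // Duration change: the value is adjusted upward if any buffered
  // coded frame ends later than the requested duration.
  const highestEnd = Math.max(0, ...mediaSource.sourceBuffers.map((sb) => sb.bufferedEnd ?? 0));
  mediaSource.duration = Math.max(value, highestEnd);
  return mediaSource.duration;
}
```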


    Attributes

    Methods

    addSourceBuffer

    Adds a new SourceBuffer to {{MediaSource/sourceBuffers}}.

    1. If |type:DOMString| is an empty string then throw a {{TypeError}} exception and abort these steps.
    2. If |type| contains a MIME type that is not supported or contains a MIME type that is not supported with the types specified for the other SourceBuffer objects in {{MediaSource/sourceBuffers}}, then throw a {{NotSupportedError}} exception and abort these steps.
    3. If the user agent can't handle any more SourceBuffer objects or if creating a SourceBuffer based on |type| would result in an unsupported [=SourceBuffer configuration=], then throw a {{QuotaExceededError}} exception and abort these steps.

      For example, a user agent MAY throw a {{QuotaExceededError}} exception if the media element has reached the readyState. This can occur if the user agent's media engine does not support adding more tracks during playback.


    4. Create a new SourceBuffer object and associated resources.
    5. Set the [=generate timestamps flag=] on the new object to the value in the "Generate Timestamps Flag" column of the byte stream format registry [[MSE-REGISTRY]] entry that is associated with |type|.
    6. If the [=generate timestamps flag=] equals true:


    7. Add the new object to {{MediaSource/sourceBuffers}} and [=queue a task=] to [=fire an event=] named {{addsourcebuffer}} at {{MediaSource/sourceBuffers}}.
    8. Return the new object.
    ParameterTypeNullableOptionalDescription
    |type|{{DOMString}}
    Return type: SourceBuffer
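A non-normative sketch of the three {{MediaSource/addSourceBuffer()}} type checks above. The `supports` callback and `maxSourceBuffers` limit are hypothetical stand-ins for the user agent's internal capability checks, not part of the API.

```javascript
// Non-normative sketch of addSourceBuffer()'s argument validation.
function checkAddSourceBuffer(type, { supports, sourceBufferCount, maxSourceBuffers }) {
  // Step 1: an empty string is a TypeError.
  if (type === "") {
    throw new TypeError("type must not be an empty string");
  }
  // Step 2: an unsupported MIME type (alone or in combination) is NotSupportedError.
  if (!supports(type)) {
    throw new DOMException(`unsupported MIME type: ${type}`, "NotSupportedError");
  }
  // Step 3: exhausting the user agent's SourceBuffer budget is QuotaExceededError.
  if (sourceBufferCount >= maxSourceBuffers) {
    throw new DOMException("cannot create more SourceBuffer objects", "QuotaExceededError");
  }
  // Past this point the algorithm creates the SourceBuffer and sets its
  // generate-timestamps flag from the byte stream format registry.
}
```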
    removeSourceBuffer

    Removes a {{SourceBuffer}} from {{MediaSource/sourceBuffers}}.



    1. If the {{MediaSource/readyState}} attribute is not in the {{ReadyState/""open""}} state then throw an {{InvalidStateError}} exception and abort these steps.
    2. If the {{SourceBuffer/updating}} attribute equals true on any SourceBuffer in {{MediaSource/sourceBuffers}}, then throw an {{InvalidStateError}} exception and abort these steps.
    3. Run the [=end of stream=] algorithm with the error parameter set to |error:EndOfStreamError|.
    ParameterTypeNullableOptionalDescription
    |error|{{EndOfStreamError}}
    Return type: {{undefined}}


    Updates the [=live seekable range=] variable used in HTMLMediaElement Extensions to modify {{HTMLMediaElement}}.{{HTMLMediaElement/seekable}} behavior.

    1. If the {{MediaSource/readyState}} attribute is not {{ReadyState/""open""}} then throw an {{InvalidStateError}} exception and abort these steps.
    2. If |start:double| is negative or greater than |end:double|, then throw a {{TypeError}} exception and abort these steps.
    3. Set [=live seekable range=] to be a new normalized {{TimeRanges}} object containing a single range whose start position is |start| and end position is |end|.


    Description
    |start| {{double}} The start of the range, in seconds measured from [=presentation start time=]. While set, and if {{MediaSource/duration}} equals positive Infinity, {{HTMLMediaElement}}.{{HTMLMediaElement/seekable}} will return a non-empty TimeRanges object with a lowest range start timestamp no greater than |start|.
    |end| {{double}} The end of the range, in seconds measured from [=presentation start time=]. While set, and if {{MediaSource/duration}} equals positive Infinity, {{HTMLMediaElement}}.{{HTMLMediaElement/seekable}} will return a non-empty TimeRanges object with a highest range end timestamp no less than |end|.
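The {{MediaSource/setLiveSeekableRange()}} steps above can be sketched non-normatively as follows; the `mediaSource` object shape and the `liveSeekableRange` field are assumptions for this example.

```javascript
// Non-normative sketch of setLiveSeekableRange(start, end).
function setLiveSeekableRange(mediaSource, start, end) {
  // readyState must be "open".
  if (mediaSource.readyState !== "open") {
    throw new DOMException("readyState is not 'open'", "InvalidStateError");
  }
  // start must be non-negative and no greater than end.
  if (start < 0 || start > end) {
    throw new TypeError("start must be >= 0 and <= end");
  }
  // The live seekable range becomes a single [start, end] range that
  // HTMLMediaElement.seekable folds into its result while duration is +Infinity.
  mediaSource.liveSeekableRange = [{ start, end }];
}
```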


    Check to see whether the MediaSource is capable of creating SourceBuffer objects for the specified MIME type.

    1. If |type:DOMString| is an empty string, then return false.
    2. If |type| does not contain a valid MIME type string, then return false.
    3. If |type| contains a media type or media subtype that the MediaSource does not support, then return false.
    4. If |type| contains a codec that the MediaSource does not support, then return false.
    5. If the MediaSource does not support the specified combination of media type, media subtype, and codecs then return false.
    6. Return true.


    This method returning true implies that HTMLMediaElement.canPlayType() will return "maybe" or "probably" since it does not make sense for a MediaSource to support a type the HTMLMediaElement knows it cannot play.

    ParameterTypeNullableOptionalDescription
    |type|{{DOMString}}
    Return type: {{boolean}}
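The isTypeSupported() steps can be illustrated with a non-normative sketch. The supported-format table and the MIME-parsing regular expression below are made-up examples, not any real user agent's logic.

```javascript
// Non-normative sketch of the isTypeSupported(type) checks.
// Hypothetical support table: MIME type -> supported codec strings.
const SUPPORTED = {
  "video/mp4": ["avc1.42E01E", "mp4a.40.2"],
  "audio/mp4": ["mp4a.40.2"],
};

function isTypeSupported(type) {
  if (type === "") return false;                      // empty string
  // Crude stand-in for "valid MIME type string" parsing.
  const m = /^([a-z]+\/[\w.+-]+)\s*(?:;\s*codecs="([^"]*)")?$/i.exec(type);
  if (!m) return false;                               // not a valid MIME type string
  const [, mime, codecList] = m;
  const codecs = SUPPORTED[mime.toLowerCase()];
  if (!codecs) return false;                          // media type/subtype unsupported
  const requested = codecList ? codecList.split(",").map((c) => c.trim()) : [];
  return requested.every((c) => codecs.includes(c));  // every codec must be supported
}
```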

    Seeking

    Run the following steps as part of the "Wait until the user agent has established whether or not the media data for the new playback position is available, and, if it is, until it has decoded enough data to play back that position" step of the seek algorithm:

    1. The media element looks for [=media segments=] containing the |new playback position:double| in each SourceBuffer object in {{MediaSource/activeSourceBuffers}}. Any position within a {{TimeRanges}} in the current value of the {{HTMLMediaElement}}.{{HTMLMediaElement/buffered}} attribute has all necessary media segments buffered for that position.

      If |new playback position| is not in any {{TimeRanges}} of {{HTMLMediaElement}}.{{HTMLMediaElement/buffered}}
      1. If the {{HTMLMediaElement}}.{{HTMLMediaElement/readyState}} attribute is greater than


      Otherwise
      Continue

      If the {{MediaSource/readyState}} attribute is {{ReadyState/""ended""}} and the |new playback position| is within a {{TimeRanges}} currently in {{HTMLMediaElement}}.{{HTMLMediaElement/buffered}}, then the seek operation must continue to completion here even if one or more currently selected or enabled track buffers' largest range end timestamp is less than |new playback position|. This condition should only occur due to logic in {{SourceBuffer/buffered}} when {{MediaSource/readyState}} is {{ReadyState/""ended""}}.

    2. The media element resets all decoders and initializes each one with data from the appropriate [=initialization segment=].
    3. The media element feeds [=coded frames=] from the [=active track buffers=] into the decoders starting with the closest [=random access point=] before the |new playback position|.
    4. Resume the at the "Await a stable state" step.
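The buffered check in step 1 can be sketched with a small, non-normative helper; ranges are modelled as plain `{start, end}` objects rather than a real {{TimeRanges}}.

```javascript
// Non-normative helper for the seek check above: is the new playback
// position inside any buffered range?
function positionIsBuffered(ranges, position) {
  return ranges.some((r) => position >= r.start && position <= r.end);
}
```

When the position is not buffered, playback waits until later {{SourceBuffer/appendBuffer()}} calls supply the missing media segments.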

    End of stream

    If |error| is not set
      1. Run the [=duration change=] algorithm with |new duration:unrestricted double| set to the largest [=track buffer ranges=] end time across all the [=track buffers=] across all SourceBuffer objects in {{MediaSource/sourceBuffers}}.

      This allows the duration to properly reflect the end of the appended media segments. For example, if the duration was explicitly set to 10 seconds and only media segments for 0 to 5 seconds were appended before endOfStream() was called, then the duration will get updated to 5 seconds.


      SourceBuffer Object

      1. If this object has been removed from the {{MediaSource/sourceBuffers}} attribute of the [=parent media source=], then throw an {{InvalidStateError}} exception and abort these steps.
      2. If the {{SourceBuffer/updating}} attribute equals true, then throw an {{InvalidStateError}} exception and abort these steps.
      3. Let |new mode:AppendMode| equal the new value being assigned to this attribute.
      4. If [=generate timestamps flag=] equals true and |new mode| equals {{AppendMode/""segments""}}, then throw a {{TypeError}} exception and abort these steps.


    5. If the [=append state=] equals [=PARSING_MEDIA_SEGMENT=], then throw an {{InvalidStateError}} and abort these steps.
    6. If |new mode| equals {{AppendMode/""sequence""}}, then set the [=group start timestamp=] to the [=group end timestamp=].
    7. Update the attribute to |new mode|.
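A non-normative sketch of the mode setter follows; `sourceBuffer` is a plain object with the spec's internal slots (append state, group timestamps, generate-timestamps flag) exposed as hypothetical fields.

```javascript
// Non-normative sketch of the SourceBuffer mode setter.
function setMode(sourceBuffer, newMode) {
  if (sourceBuffer.removed) {
    throw new DOMException("removed from sourceBuffers", "InvalidStateError");
  }
  if (sourceBuffer.updating) {
    throw new DOMException("an update is pending", "InvalidStateError");
  }
  // A byte stream that generates its own timestamps cannot use "segments".
  if (sourceBuffer.generateTimestampsFlag && newMode === "segments") {
    throw new TypeError('"segments" is invalid when timestamps are generated');
  }
  if (sourceBuffer.appendState === "PARSING_MEDIA_SEGMENT") {
    throw new DOMException("mid-segment", "InvalidStateError");
  }
  // Switching to "sequence" restarts the group at the current group end.
  if (newMode === "sequence") {
    sourceBuffer.groupStartTimestamp = sourceBuffer.groupEndTimestamp;
  }
  sourceBuffer.mode = newMode;
}
```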
    updating of type {{boolean}}, readonly

    Indicates whether the asynchronous continuation of an {{SourceBuffer/appendBuffer()}} or {{SourceBuffer/remove()}} operation is still being processed.


    When the attribute is read the following steps MUST occur:

    1. If this object has been removed from the {{MediaSource/sourceBuffers}} attribute of the [=parent media source=] then throw an {{InvalidStateError}} exception and abort these steps.
    2. Let |highest end time:double| be the largest [=track buffer ranges=] end time across all the [=track buffers=] managed by this SourceBuffer object.
    3. Let |intersection ranges:normalized TimeRanges| equal a {{TimeRanges}} object containing a single range from 0 to |highest end time|.
    4. For each audio and video [=track buffer=] managed by this SourceBuffer, run the following steps:

      Text [=track buffers=] are included in the calculation of |highest end time|, above, but excluded from the buffered range calculation here. They are not necessarily continuous, nor should any discontinuity within them trigger playback stall when the other media tracks are continuous over the same time range.

      1. Let |track ranges:normalized TimeRanges| equal the [=track buffer ranges=] for the current [=track buffer=].
      2. If {{MediaSource/readyState}} is {{ReadyState/""ended""}}, then set the end time on the last range in |track ranges| to |highest end time|.
      3. Let |new intersection ranges:normalized TimeRanges| equal the intersection between the |intersection ranges| and the |track ranges|.
      4. Replace the ranges in |intersection ranges| with the |new intersection ranges|.
    5. If |intersection ranges| does not contain the exact same range information as the current value of this attribute, then update the current value of this attribute to |intersection ranges|.
    6. Return the current value of this attribute.
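The buffered-range computation above can be sketched, non-normatively, as a range intersection over plain `{start, end}` objects. The array-of-arrays input shape is an assumption for this example; ranges are taken to be normalized (sorted, non-overlapping).

```javascript
// Non-normative sketch: intersect two normalized range lists.
function intersectRanges(a, b) {
  const out = [];
  for (const x of a) {
    for (const y of b) {
      const start = Math.max(x.start, y.start);
      const end = Math.min(x.end, y.end);
      if (start < end) out.push({ start, end });
    }
  }
  return out;
}

// Non-normative sketch of the buffered attribute steps: start from
// [0, highest end time] and intersect with each track's ranges.
function computeBuffered(trackBufferRanges, readyStateEnded) {
  const highestEnd = Math.max(0, ...trackBufferRanges.flat().map((r) => r.end));
  let intersection = [{ start: 0, end: highestEnd }];
  for (const ranges of trackBufferRanges) {
    const trackRanges = ranges.map((r) => ({ ...r }));
    // When readyState is "ended", extend each track's last range to highestEnd.
    if (readyStateEnded && trackRanges.length > 0) {
      trackRanges[trackRanges.length - 1].end = highestEnd;
    }
    intersection = intersectRanges(intersection, trackRanges);
  }
  return intersection;
}
```

This mirrors why, before "ended", a track that ends early clips the reported buffered range for the whole SourceBuffer.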
    timestampOffset of type {{double}}


    On getting, return the initial value or the last value that was successfully set.

    On setting, run the following steps:

    1. Let |new timestamp offset:double| equal the new value being assigned to this attribute.
    2. If this object has been removed from the {{MediaSource/sourceBuffers}} attribute of the [=parent media source=], then throw an {{InvalidStateError}} exception and abort these steps.
    3. If the {{SourceBuffer/updating}} attribute equals true, then throw an {{InvalidStateError}} exception and abort these steps.


  • If the [=append state=] equals [=PARSING_MEDIA_SEGMENT=], then throw an {{InvalidStateError}} and abort these steps.
  • If the {{SourceBuffer/mode}} attribute equals {{AppendMode/""sequence""}}, then set the [=group start timestamp=] to |new timestamp offset|.
  • Update the attribute to |new timestamp offset|.
  • audioTracks of type {{AudioTrackList}}, readonly
    The list of {{AudioTrack}} objects created by this object.


    onabort of type {{EventHandler}}

    The event handler for the {{abort}} event.

    Methods

    appendBuffer
    Appends the segment data in a BufferSource [[!WEBIDL]] to the {{SourceBuffer}}.

    1. Run the [=prepare append=] algorithm.
    2. Add |data:BufferSource| to the end of the [=input buffer=].
    3. Set the {{SourceBuffer/updating}} attribute to true.
    4. [=Queue a task=] to [=fire an event=] named {{updatestart}} at this SourceBuffer object.
    5. Asynchronously run the [=buffer append=] algorithm.
    ParameterTypeNullableOptionalDescription
    |data|{{BufferSource}}
    Return type: {{undefined}}
    abort

    Aborts the current segment and resets the segment parser.



      changeType

      Changes the MIME type associated with this object. Subsequent {{SourceBuffer/appendBuffer()}} calls will expect the newly appended bytes to conform to the new type.

      1. If |type:DOMString| is an empty string then throw a {{TypeError}} exception and abort these steps.
      2. If this object has been removed from the {{MediaSource/sourceBuffers}} attribute of the [=parent media source=], then throw an {{InvalidStateError}} exception and abort these steps.
      3. If the {{SourceBuffer/updating}} attribute equals true, then throw an {{InvalidStateError}} exception and abort these steps.
      4. If |type| contains a MIME type that is not supported or contains a MIME type that is not supported with the types specified (currently or previously) of {{SourceBuffer}} objects in the {{MediaSource/sourceBuffers}} attribute of the [=parent media source=], then throw a {{NotSupportedError}} exception and abort these steps.
      5. If the {{MediaSource/readyState}} attribute of the [=parent media source=] is in the {{ReadyState/""ended""}} state then run the following steps:



        1. Run the [=reset parser state=] algorithm.
        2. Update the [=generate timestamps flag=] on this {{SourceBuffer}} object to the value in the "Generate Timestamps Flag" column of the byte stream format registry [[MSE-REGISTRY]] entry that is associated with |type|.
        3. If the [=generate timestamps flag=] equals true:


          Description
          |type| {{DOMString}}


      1. If this object has been removed from the {{MediaSource/sourceBuffers}} attribute of the [=parent media source=] then throw an {{InvalidStateError}} exception and abort these steps.
      2. If the {{SourceBuffer/updating}} attribute equals true, then throw an {{InvalidStateError}} exception and abort these steps.
      3. If {{MediaSource/duration}} equals NaN, then throw a {{TypeError}} exception and abort these steps.
      4. If |start:double| is negative or greater than {{MediaSource/duration}}, then throw a {{TypeError}} exception and abort these steps.
      5. If |end:unrestricted double| is less than or equal to |start| or |end| equals NaN, then throw a {{TypeError}} exception and abort these steps.
      6. If the {{MediaSource/readyState}} attribute of the [=parent media source=] is in the {{ReadyState/""ended""}} state then run the following steps:



        1. [=Queue a task=] to [=fire an event=] named {{sourceopen}} at the [=parent media source=].
      7. Run the [=range removal=] algorithm with |start| and |end| as the start and end of the removal range.
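The remove(start, end) argument checks above can be sketched non-normatively; `duration` stands in for {{MediaSource/duration}}.

```javascript
// Non-normative sketch of remove(start, end) argument validation.
function checkRemoveRange(start, end, duration) {
  // remove() is meaningless before the duration is known.
  if (Number.isNaN(duration)) {
    throw new TypeError("duration is NaN");
  }
  // start must lie within [0, duration].
  if (start < 0 || start > duration) {
    throw new TypeError("start out of range");
  }
  // end is an unrestricted double: NaN is rejected, but positive Infinity
  // is allowed, so remove(t, Infinity) clears everything from t onward.
  if (Number.isNaN(end) || end <= start) {
    throw new TypeError("end must be greater than start");
  }
}
```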





      Range Removal

      Follow these steps when a caller needs to initiate a JavaScript visible range removal operation that blocks other SourceBuffer updates:

      1. Let |start:double| equal the starting [=presentation timestamp=] for the removal range, in seconds measured from [=presentation start time=].
      2. Let |end:unrestricted double| equal the end [=presentation timestamp=] for the removal range, in seconds measured from [=presentation start time=].
      3. Set the {{SourceBuffer/updating}} attribute to true.
      4. [=Queue a task=] to [=fire an event=] named {{updatestart}} at this SourceBuffer object.
      5. Return control to the caller and run the rest of the steps asynchronously.
      6. Run the [=coded frame removal=] algorithm with |start| and |end| as the start and end of the removal range.
      7. Set the {{SourceBuffer/updating}} attribute to false.
      8. [=Queue a task=] to [=fire an event=] named {{update}} at this SourceBuffer object.
      9. [=Queue a task=] to [=fire an event=] named {{updateend}} at this SourceBuffer object.
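The update cycle above (the {{SourceBuffer/updating}} flag flipping around asynchronous work, with events in a fixed order) can be sketched non-normatively; `removeCodedFrames` and `fire` are hypothetical stand-ins for the coded frame removal algorithm and event dispatch.

```javascript
// Non-normative sketch of the range-removal update cycle.
async function rangeRemoval(sourceBuffer, start, end, removeCodedFrames, fire) {
  sourceBuffer.updating = true;
  fire("updatestart");
  await Promise.resolve(); // control returns to the caller here
  removeCodedFrames(start, end);
  sourceBuffer.updating = false;
  fire("update");
  fire("updateend");
}
```

Callers therefore observe `updating === true` synchronously after the call, and `update` always precedes `updateend`.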

        Initialization Segment Received

      Update the {{MediaSource/duration}} attribute if it currently equals NaN:

        If the initialization segment contains a duration:
          Run the [=duration change=] algorithm with |new duration:unrestricted double| set to the duration in the initialization segment.
        Otherwise:
          Run the [=duration change=] algorithm with |new duration| set to positive Infinity.


        first [=initialization segment=].
      The codecs for each track are supported by the user agent.

        User agents MAY consider codecs, that would otherwise be supported, as "not supported" here if the codecs were not specified in the |type:DOMString| parameter passed to (a) the most recently successful {{SourceBuffer/changeType()}} on this {{SourceBuffer}} object, or (b) if no successful {{SourceBuffer/changeType()}} has yet occurred on this object, the {{MediaSource/addSourceBuffer()}} that created this {{SourceBuffer}} object.


        1. If the [=initialization segment=] contains tracks with codecs the user agent does not support, then run the [=append error=] algorithm and abort these steps.

          User agents MAY consider codecs, that would otherwise be supported, as "not supported" here if the codecs were not specified in the |type:DOMString| parameter passed to (a) the most recently successful {{SourceBuffer/changeType()}} on this {{SourceBuffer}} object, or (b) if no successful {{SourceBuffer/changeType()}} has yet occurred on this object, the {{MediaSource/addSourceBuffer()}} that created this {{SourceBuffer}} object.


        2. For each audio track in the [=initialization segment=], run the following steps:

          1. Let |audio byte stream track ID| be the [=Track ID=] for the current track being processed.
          2. Let |audio language:DOMString| be a BCP 47 language tag for the language specified in the [=initialization segment=] for this track or an empty string if no language info is present.
          3. If |audio language| equals the 'und' BCP 47 value, then assign an empty string to |audio language|.
          4. Let |audio label:DOMString| be a label specified in the [=initialization segment=] for this track or an empty string if no label info is present.
          5. Let |audio kinds:DOMString sequence| be a sequence of kind strings specified in the [=initialization segment=] for this track or a sequence with a single empty string element in it if no kind information is provided.
          6. For each value in |audio kinds|, run the following steps:
            1. Let |current audio kind:DOMString| equal the value from |audio kinds| for this iteration of the loop.
            2. Let |new audio track:AudioTrack| be a new {{AudioTrack}} object.
            3. Generate a unique ID and assign it to the {{AudioTrack/id}} property on |new audio track|.
            4. Assign |audio language| to the {{AudioTrack/language}} property on |new audio track|.
            5. Assign |audio label| to the {{AudioTrack/label}} property on |new audio track|.
            6. Assign |current audio kind| to the {{AudioTrack/kind}} property on |new audio track|.

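The per-track metadata defaults are the same in the audio, video, and text loops, and can be sketched non-normatively as one helper; the input object shape is an assumption for this example.

```javascript
// Non-normative sketch of the shared per-track metadata defaults:
// missing info becomes "", and the BCP 47 "und" (undetermined) language
// tag is normalized to the empty string.
function normalizeTrackInfo({ language, label, kinds } = {}) {
  let lang = language ?? "";
  if (lang === "und") lang = "";
  return {
    language: lang,
    label: label ?? "",
    // No kind info yields a single empty-string kind, so the loop that
    // creates one track object per kind still runs exactly once.
    kinds: kinds && kinds.length ? kinds : [""],
  };
}
```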

            3. For each video track in the [=initialization segment=], run the following steps:

              1. Let |video byte stream track ID| be the [=Track ID=] for the current track being processed.
              2. Let |video language:DOMString| be a BCP 47 language tag for the language specified in the [=initialization segment=] for this track or an empty string if no language info is present.
              3. If |video language| equals the 'und' BCP 47 value, then assign an empty string to |video language|.
              4. Let |video label:DOMString| be a label specified in the [=initialization segment=] for this track or an empty string if no label info is present.
              5. Let |video kinds:DOMString sequence| be a sequence of kind strings specified in the [=initialization segment=] for this track or a sequence with a single empty string element in it if no kind information is provided.
              6. For each value in |video kinds|, run the following steps:
                1. Let |current video kind:DOMString| equal the value from |video kinds| for this iteration of the loop.
                2. Let |new video track:VideoTrack| be a new {{VideoTrack}} object.
                3. Generate a unique ID and assign it to the {{VideoTrack/id}} property on |new video track|.
                4. Assign |video language| to the {{VideoTrack/language}} property on |new video track|.
                5. Assign |video label| to the {{VideoTrack/label}} property on |new video track|.
                6. Assign |current video kind| to the {{VideoTrack/kind}} property on |new video track|.


                For each text track in the [=initialization segment=], run the following steps:

-                    Let text byte stream track ID be the
+                    Let |text byte stream track ID| be the
                     [=Track ID=] for the current track being processed.
-                    Let text language be a BCP 47 language tag for the language
+                    Let |text language:DOMString| be a BCP 47 language tag for the language
                     specified in the [=initialization segment=] for this track or an empty string if no language info is present.
-                    If text language equals the 'und' BCP 47 value, then assign an empty string to text language.
-                    Let text label be a label specified in the [=initialization segment=] for this track or an empty string if no
+                    If |text language| equals the 'und' BCP 47 value, then assign an empty string to |text language|.
+                    Let |text label:DOMString| be a label specified in the [=initialization segment=] for this track or an empty string if no
                     label info is present.
-                    Let text kinds be a sequence of kind strings specified in the
+                    Let |text kinds:DOMString sequence| be a sequence of kind strings specified in the
                     [=initialization segment=] for this track or a sequence with a single empty string element in it if no kind information is provided.
-                    For each value in text kinds, run the following steps:
+                    For each value in |text kinds|, run the following steps:
-                      Let current text kind equal the value from text kinds
+                      Let |current text kind:DOMString| equal the value from |text kinds|
                       for this iteration of the loop.
                       Let |new text track:TextTrack| be a new {{TextTrack}} object.
                       Generate a unique ID and assign it to the {{TextTrack/id}} property on |new text track|.
-                      Assign text language to the {{TextTrack/language}}
+                      Assign |text language| to the {{TextTrack/language}}
                       property on |new text track|.
-                      Assign text label to the {{TextTrack/label}}
+                      Assign |text label| to the {{TextTrack/label}}
                       property on |new text track|.
-                      Assign current text kind to the {{TextTrack/kind}}
+                      Assign |current text kind| to the {{TextTrack/kind}}
                       property on |new text track|.
                      Populate the remaining properties on |new text track| with the appropriate information from the [=initialization segment=].
@@ -2020,19 +2021,19 @@

                      Coded Frame Processing

                      If [=generate timestamps flag=] equals true:
-                        Let presentation timestamp equal 0.
-                        Let decode timestamp equal 0.
+                        Let |presentation timestamp:double| equal 0.
+                        Let |decode timestamp:double| equal 0.
                      Otherwise:
-                        Let presentation timestamp be a double precision floating point representation of the coded frame's [=presentation timestamp=] in seconds.
+                        Let |presentation timestamp| be a double precision floating point representation of the coded frame's [=presentation timestamp=] in seconds.

                          Special processing may be needed to determine the presentation and decode timestamps for timed text frames since this information may not be explicitly present in the underlying format or may be dependent on the order of the frames. Some metadata text tracks, like MPEG2-TS PSI data, may only have implied timestamps. Format specific rules for these situations SHOULD be in the [=byte stream format specifications=] or in separate extension specifications.

-                        Let decode timestamp be a double precision floating point representation of the coded frame's decode timestamp in seconds.
+                        Let |decode timestamp| be a double precision floating point representation of the coded frame's decode timestamp in seconds.

                        Implementations don't have to internally store timestamps in a double precision floating point representation. This representation is used here because it is the representation for timestamps in the HTML spec. The intention here is to make the behavior clear without adding unnecessary complexity to the algorithm to deal with the fact that adding a timestampOffset may
@@ -2045,10 +2046,10 @@

                        Coded Frame Processing

-                      Let frame duration be a double precision floating point representation of the [=coded frame duration|coded frame's duration=] in seconds.
+                      Let |frame duration:double| be a double precision floating point representation of the [=coded frame duration|coded frame's duration=] in seconds.
                      If {{SourceBuffer/mode}} equals {{AppendMode/""sequence""}} and [=group start timestamp=] is set, then run the following steps:
-                        Set {{SourceBuffer/timestampOffset}} equal to [=group start timestamp=] - presentation timestamp.
+                        Set {{SourceBuffer/timestampOffset}} equal to [=group start timestamp=] minus |presentation timestamp|.
                        Set [=group end timestamp=] equal to [=group start timestamp=].
                        Set the [=need random access point flag=] on all [=track buffers=] to true.
                        Unset [=group start timestamp=].
@@ -2057,24 +2058,24 @@

                        Coded Frame Processing

                      If {{SourceBuffer/timestampOffset}} is not 0, then run the following steps:

-                          Add {{SourceBuffer/timestampOffset}} to the presentation timestamp.
-                          Add {{SourceBuffer/timestampOffset}} to the decode timestamp.
+                          Add {{SourceBuffer/timestampOffset}} to the |presentation timestamp|.
+                          Add {{SourceBuffer/timestampOffset}} to the |decode timestamp|.
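The offset-application step above can be sketched as follows. This is a non-normative illustration; the frame record shape is a hypothetical model, and all values are double-precision seconds:

```javascript
// Non-normative sketch: apply SourceBuffer.timestampOffset to a coded
// frame's presentation and decode timestamps, as in the coded frame
// processing algorithm. The frame object shape is hypothetical.
function applyTimestampOffset(frame, timestampOffset) {
  if (timestampOffset === 0) return frame; // the step only runs for non-zero offsets
  return {
    ...frame,
    presentationTimestamp: frame.presentationTimestamp + timestampOffset,
    decodeTimestamp: frame.decodeTimestamp + timestampOffset,
  };
}

const shifted = applyTimestampOffset(
  { presentationTimestamp: 0, decodeTimestamp: 0, duration: 0.02 },
  10.5
);
```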
-                      Let track buffer equal the [=track buffer=] that the coded frame will be added to.
+                      Let |track buffer| equal the [=track buffer=] that the coded frame will be added to.
-                        If [=last decode timestamp=] for track buffer is set and decode timestamp is less than
+                        If [=last decode timestamp=] for |track buffer| is set and |decode timestamp| is less than
                        [=last decode timestamp=]:
                        OR
-                        If [=last decode timestamp=] for track buffer is set and the difference between decode timestamp and [=last decode timestamp=]
+                        If [=last decode timestamp=] for |track buffer| is set and the difference between |decode timestamp| and [=last decode timestamp=]
                        is greater than 2 times [=last frame duration=]:
                        If {{SourceBuffer/mode}} equals {{AppendMode/""segments""}}:
-                          Set [=group end timestamp=] to presentation timestamp.
+                          Set [=group end timestamp=] to |presentation timestamp|.
                        If {{SourceBuffer/mode}} equals {{AppendMode/""sequence""}}:
                        Set [=group start timestamp=] equal to the [=group end timestamp=].
@@ -2090,45 +2091,45 @@

                        Coded Frame Processing

                        Continue.
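The discontinuity condition above can be sketched as a predicate. Non-normative; the track buffer record (`lastDecodeTimestamp`, `lastFrameDuration`) is a hypothetical model of the internal slots:

```javascript
// Non-normative sketch: a new coded frame starts a new coded frame group
// when its decode timestamp goes backwards, or jumps forward by more than
// twice the last frame duration.
function isDiscontinuity(trackBuffer, decodeTimestamp) {
  const { lastDecodeTimestamp, lastFrameDuration } = trackBuffer;
  if (lastDecodeTimestamp === null) return false; // unset: no prior frame
  return (
    decodeTimestamp < lastDecodeTimestamp || // decode time went backwards
    decodeTimestamp - lastDecodeTimestamp > 2 * lastFrameDuration // large jump
  );
}
```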
-                      Let frame end timestamp equal the sum of presentation timestamp and frame duration.
-                      If presentation timestamp is less than {{SourceBuffer/appendWindowStart}}, then set the [=need random access point flag=] to true, drop the
+                      Let |frame end timestamp:double| equal the sum of |presentation timestamp| and |frame duration|.
+                      If |presentation timestamp| is less than {{SourceBuffer/appendWindowStart}}, then set the [=need random access point flag=] to true, drop the
                      coded frame, and jump to the top of the loop to start processing the next coded frame.

-                        Some implementations MAY choose to collect some of these coded frames with presentation timestamp less than {{SourceBuffer/appendWindowStart}} and use them
+                        Some implementations MAY choose to collect some of these coded frames with |presentation timestamp| less than {{SourceBuffer/appendWindowStart}} and use them
                        to generate a splice at the first coded frame that has a [=presentation timestamp=] greater than or equal to {{SourceBuffer/appendWindowStart}} even if that frame is not a [=random access point=]. Supporting this requires multiple decoders or faster than real-time decoding so for now this behavior will not be a normative requirement.

-                      If frame end timestamp is greater than {{SourceBuffer/appendWindowEnd}}, then set the [=need random access point flag=] to true, drop the
+                      If |frame end timestamp| is greater than {{SourceBuffer/appendWindowEnd}}, then set the [=need random access point flag=] to true, drop the
                      coded frame, and jump to the top of the loop to start processing the next coded frame.

-                        Some implementations MAY choose to collect coded frames with presentation timestamp less than {{SourceBuffer/appendWindowEnd}} and frame end timestamp greater than {{SourceBuffer/appendWindowEnd}} and use them
+                        Some implementations MAY choose to collect coded frames with |presentation timestamp| less than {{SourceBuffer/appendWindowEnd}} and |frame end timestamp| greater than {{SourceBuffer/appendWindowEnd}} and use them
                        to generate a splice across the portion of the collected coded frames within the append window at time of collection, and the beginning portion of later processed frames which only partially overlap the end of the collected coded frames. Supporting this requires multiple decoders or faster than real-time decoding so for now this behavior will not be a normative requirement. In conjunction with collecting coded frames that span {{SourceBuffer/appendWindowStart}}, implementations MAY thus support gapless audio splicing.
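The two append-window checks above reduce to a simple interval test, sketched here non-normatively (a frame that starts before `appendWindowStart` or ends after `appendWindowEnd` is dropped, and a new random access point is then required):

```javascript
// Non-normative sketch: append-window filtering. Returns true when the
// frame [pts, pts + duration] lies inside [appendWindowStart, appendWindowEnd].
function insideAppendWindow(pts, duration, appendWindowStart, appendWindowEnd) {
  const frameEndTimestamp = pts + duration;
  return pts >= appendWindowStart && frameEndTimestamp <= appendWindowEnd;
}
```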

-                      If the [=need random access point flag=] on track buffer equals true, then run the following steps:
+                      If the [=need random access point flag=] on |track buffer| equals true, then run the following steps:
                        If the coded frame is not a [=random access point=], then drop the coded frame and jump to the top of the loop to start processing the next coded frame.
-                        Set the [=need random access point flag=] on track buffer to false.
+                        Set the [=need random access point flag=] on |track buffer| to false.
-                      Let spliced audio frame be an unset variable for holding audio splice information
-                      Let spliced timed text frame be an unset variable for holding timed text splice information
-                      If [=last decode timestamp=] for track buffer is unset and presentation timestamp falls within the [=presentation interval=] of a [=coded frame=] in track buffer, then run the following steps:
+                      Let |spliced audio frame| be an unset variable for holding audio splice information
+                      Let |spliced timed text frame| be an unset variable for holding timed text splice information
+                      If [=last decode timestamp=] for |track buffer| is unset and |presentation timestamp| falls within the [=presentation interval=] of a [=coded frame=] in |track buffer|, then run the following steps:
-                        Let overlapped frame be the [=coded frame=] in track buffer that matches the condition above.
+                        Let |overlapped frame| be the [=coded frame=] in |track buffer| that matches the condition above.
-                        If track buffer contains audio [=coded frames=]:
-                        Run the [=audio splice frame=] algorithm and if a splice frame is returned, assign it to spliced audio frame.
-                        If track buffer contains video [=coded frames=]:
+                        If |track buffer| contains audio [=coded frames=]:
+                        Run the [=audio splice frame=] algorithm and if a splice frame is returned, assign it to |spliced audio frame|.
+                        If |track buffer| contains video [=coded frames=]:
-                          Let remove window timestamp equal the overlapped frame [=presentation timestamp=] plus 1 microsecond.
+                          Let |remove window timestamp:double| equal the |overlapped frame| [=presentation timestamp=] plus 1 microsecond.
-                          If the presentation timestamp is less than the remove window timestamp, then remove overlapped frame from track buffer.
+                          If the |presentation timestamp| is less than the |remove window timestamp|, then remove |overlapped frame| from |track buffer|.

                            This is to compensate for minor errors in frame timestamp computations that can appear when converting back and forth between double precision floating point numbers and rationals. This tolerance allows a frame to replace an existing one as long as it is within 1 microsecond of the existing
@@ -2137,24 +2138,24 @@

                            Coded Frame Processing

-                        If track buffer contains timed text [=coded frames=]:
-                        Run the [=text splice frame=] algorithm and if a splice frame is returned, assign it to spliced timed text frame.
+                        If |track buffer| contains timed text [=coded frames=]:
+                        Run the [=text splice frame=] algorithm and if a splice frame is returned, assign it to |spliced timed text frame|.
-                      Remove existing coded frames in track buffer:
+                      Remove existing coded frames in |track buffer|:
-                        If [=highest end timestamp=] for track buffer is not set:
-                        Remove all [=coded frames=] from track buffer that have a [=presentation timestamp=] greater than or equal to presentation timestamp and less than frame end timestamp.
-                        If [=highest end timestamp=] for track buffer is set and less than or equal to presentation timestamp:
-                        Remove all [=coded frames=] from track buffer that have a [=presentation timestamp=] greater than or equal to [=highest end timestamp=] and less than frame end timestamp
+                        If [=highest end timestamp=] for |track buffer| is not set:
+                        Remove all [=coded frames=] from |track buffer| that have a [=presentation timestamp=] greater than or equal to |presentation timestamp| and less than |frame end timestamp|.
+                        If [=highest end timestamp=] for |track buffer| is set and less than or equal to |presentation timestamp|:
+                        Remove all [=coded frames=] from |track buffer| that have a [=presentation timestamp=] greater than or equal to [=highest end timestamp=] and less than |frame end timestamp|.
                      Remove all possible decoding dependencies on the [=coded frames=] removed in the previous two steps
-                      by removing all [=coded frames=] from track buffer between those frames removed in the previous two steps and the next
+                      by removing all [=coded frames=] from |track buffer| between those frames removed in the previous two steps and the next
                      [=random access point=] after those removed frames.

                        Removing all [=coded frames=] until the next [=random access point=] is a conservative estimate of the decoding dependencies since it assumes all frames between the removed frames and the next random access point
@@ -2163,26 +2164,26 @@
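The overlap-removal and dependency-removal steps above can be sketched together. Non-normative; frames are assumed sorted in decode order and carry a hypothetical `{ pts, isRAP }` shape:

```javascript
// Non-normative sketch: remove frames whose presentation timestamp falls in
// [removeStart, removeEnd), then conservatively drop subsequent frames up to
// the next random access point, since they may depend on the removed frames.
function removeFramesAndDependencies(frames, removeStart, removeEnd) {
  const kept = [];
  let dropping = false;
  for (const frame of frames) {
    if (frame.pts >= removeStart && frame.pts < removeEnd) {
      dropping = true; // directly removed
      continue;
    }
    if (dropping && !frame.isRAP) continue; // possible decoding dependency
    dropping = false; // reached a random access point (or never dropped)
    kept.push(frame);
  }
  return kept;
}

const kept = removeFramesAndDependencies(
  [
    { pts: 0, isRAP: true },
    { pts: 1, isRAP: false },
    { pts: 2, isRAP: false },
    { pts: 3, isRAP: true },
  ],
  1, 2
);
```

As the note above says, this is a conservative estimate: frame `pts: 2` is dropped only because it precedes the next random access point, not because it is known to depend on the removed frame.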

                        Coded Frame Processing

-                        If spliced audio frame is set:
-                        Add spliced audio frame to the track buffer.
-                        If spliced timed text frame is set:
-                        Add spliced timed text frame to the track buffer.
+                        If |spliced audio frame| is set:
+                        Add |spliced audio frame| to the |track buffer|.
+                        If |spliced timed text frame| is set:
+                        Add |spliced timed text frame| to the |track buffer|.
                        Otherwise:
-                        Add the [=coded frame=] with the presentation timestamp, decode timestamp, and frame duration to the track buffer.
+                        Add the [=coded frame=] with the |presentation timestamp|, |decode timestamp|, and |frame duration| to the |track buffer|.
-                      Set [=last decode timestamp=] for track buffer to decode timestamp.
-                      Set [=last frame duration=] for track buffer to frame duration.
-                      If [=highest end timestamp=] for track buffer is unset or frame end timestamp is greater than [=highest end timestamp=], then set [=highest end timestamp=] for track buffer to frame end timestamp.
+                      Set [=last decode timestamp=] for |track buffer| to |decode timestamp|.
+                      Set [=last frame duration=] for |track buffer| to |frame duration|.
+                      If [=highest end timestamp=] for |track buffer| is unset or |frame end timestamp| is greater than [=highest end timestamp=], then set [=highest end timestamp=] for |track buffer| to |frame end timestamp|.

                        The greater than check is needed because bidirectional prediction between coded frames can cause
-                        presentation timestamp to not be monotonically increasing even though the decode timestamps are monotonically increasing.
+                        |presentation timestamp| to not be monotonically increasing even though the decode timestamps are monotonically increasing.

-                      If frame end timestamp is greater than [=group end timestamp=], then set [=group end timestamp=] equal to frame end timestamp.
+                      If |frame end timestamp| is greater than [=group end timestamp=], then set [=group end timestamp=] equal to |frame end timestamp|.
                      If [=generate timestamps flag=] equals true, then set
-                      {{SourceBuffer/timestampOffset}} equal to frame end timestamp.
+                      {{SourceBuffer/timestampOffset}} equal to |frame end timestamp|.
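The per-frame bookkeeping above can be sketched on a hypothetical track buffer record. Non-normative; the field names are illustrative, not the spec's internal slot names:

```javascript
// Non-normative sketch: end-of-iteration bookkeeping from the coded frame
// processing algorithm.
function updateTrackBufferState(trackBuffer, decodeTimestamp, frameDuration, frameEndTimestamp) {
  trackBuffer.lastDecodeTimestamp = decodeTimestamp;
  trackBuffer.lastFrameDuration = frameDuration;
  // Strict greater-than: with bidirectional prediction, presentation
  // timestamps need not be monotonic even when decode timestamps are.
  if (trackBuffer.highestEndTimestamp === null ||
      frameEndTimestamp > trackBuffer.highestEndTimestamp) {
    trackBuffer.highestEndTimestamp = frameEndTimestamp;
  }
}

const tb = { lastDecodeTimestamp: null, lastFrameDuration: null, highestEndTimestamp: 2.0 };
updateTrackBufferState(tb, 2.04, 0.04, 1.08); // a B-frame whose PTS is earlier
```

Note the demo frame ends at 1.08, earlier than the recorded highest end timestamp of 2.0, so the highest end timestamp is left unchanged.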
@@ -2198,7 +2199,8 @@

                      Coded Frame Processing

                      Per [[HTML]] logic, {{HTMLMediaElement}}.{{HTMLMediaElement/readyState}} changes may trigger events on the HTMLMediaElement.

                    If the [=media segment=] contains data beyond the current {{MediaSource/duration}}, then run the
-                    [=duration change=] algorithm with |new duration| set to the maximum of the current duration and the [=group end timestamp=].
+                    [=duration change=] algorithm with |new duration:unrestricted double| set to the maximum of the current
+                    duration and the [=group end timestamp=].
@@ -2206,18 +2208,18 @@

                    Coded Frame Processing

                    Coded Frame Removal

                    Follow these steps when [=coded frames=] for a specific time range need to be removed from the SourceBuffer:

-                    Let start be the starting [=presentation timestamp=] for the removal range.
-                    Let end be the end [=presentation timestamp=] for the removal range.
-                    For each [=track buffer=] in this source buffer, run the following steps:
+                    Let |start:double| be the starting [=presentation timestamp=] for the removal range.
+                    Let |end:unrestricted double| be the end [=presentation timestamp=] for the removal range.
+                    For each [=track buffer=] in this {{SourceBuffer}}, run the following steps:

-                      Let remove end timestamp be the current value of {{MediaSource/duration}}
+                      Let |remove end timestamp:unrestricted double| be the current value of {{MediaSource/duration}}
                      If this [=track buffer=] has a [=random access point=] timestamp that is greater than or equal to
-                      end, then update remove end timestamp to that random access point timestamp.
+                      |end|, then update |remove end timestamp| to that random access point timestamp.

                        Random access point timestamps can be different across tracks because the dependencies between [=coded frames=] within a track are usually different than the dependencies in another track.
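The remove-end selection above can be sketched per track. Non-normative; `rapTimestamps` is an assumed sorted list of the track buffer's random access point timestamps:

```javascript
// Non-normative sketch: extend the removal range to the next random access
// point at or after `end`, or to the presentation duration if there is none.
function removeEndTimestamp(rapTimestamps, end, duration) {
  for (const t of rapTimestamps) {
    if (t >= end) return t; // first random access point at or after `end`
  }
  return duration; // no such random access point: remove through the duration
}
```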

-                      Remove all media data, from this [=track buffer=], that contain starting timestamps greater than or equal to start and less than the remove end timestamp.
+                      Remove all media data, from this [=track buffer=], that contain starting timestamps greater than or equal to |start| and less than the |remove end timestamp|.
                        For each removed frame, if the frame has a [=decode timestamp=] equal to the [=last decode timestamp=] for the frame's track, run the following steps:

@@ -2243,7 +2245,7 @@

                        Coded Frame Removal

                        If this object is in {{MediaSource/activeSourceBuffers}}, the  is greater than or equal to
-                        start and less than the remove end timestamp, and {{HTMLMediaElement}}.{{HTMLMediaElement/readyState}} is greater than
+                        |start| and less than the |remove end timestamp|, and {{HTMLMediaElement}}.{{HTMLMediaElement/readyState}} is greater than
                        , then set the {{HTMLMediaElement}}.{{HTMLMediaElement/readyState}} attribute to  and stall playback.

                          Per [[HTML]] logic, {{HTMLMediaElement}}.{{HTMLMediaElement/readyState}} changes may trigger events on the HTMLMediaElement.

                          This transition occurs because media data for the current position has been removed. Playback cannot progress until media for the
@@ -2258,17 +2260,17 @@

                          Coded Frame Removal

                          Coded Frame Eviction

-                          This algorithm is run to free up space in this source buffer when new data is appended.
+                          This algorithm is run to free up space in this {{SourceBuffer}} when new data is appended.

-                        Let new data equal the data that is about to be appended to this SourceBuffer.
+                        Let |new data:BufferSource| equal the data that is about to be appended to this SourceBuffer.
                        If the [=buffer full flag=] equals false, then abort these steps.
-                        Let removal ranges equal a list of presentation time ranges that can be evicted from the presentation to make room for the
-                        new data.
+                        Let |removal ranges:normalized TimeRanges| equal a list of presentation time ranges that can be evicted from the presentation to make room for the
+                        |new data|.

-                          Implementations MAY use different methods for selecting removal ranges so web applications SHOULD NOT depend on a
+                          Implementations MAY use different methods for selecting |removal ranges| so web applications SHOULD NOT depend on a
                          specific behavior. The web application can use the {{SourceBuffer/buffered}} attribute to observe whether portions of the buffered data have been evicted.

-                        For each range in removal ranges, run the [=coded frame removal=] algorithm with start and end equal to
+                        For each range in |removal ranges|, run the [=coded frame removal=] algorithm with |start:double| and |end:unrestricted double| equal to
                        the removal range start and end timestamp respectively.
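The eviction loop above can be sketched non-normatively. The range-selection strategy is implementation-defined, so here the caller supplies the ranges, and `removeFn` stands in for the coded frame removal algorithm:

```javascript
// Non-normative sketch: coded frame eviction. Does nothing unless the
// buffer full flag is set; otherwise runs removal over each chosen range.
function evictCodedFrames(bufferFullFlag, removalRanges, removeFn) {
  if (!bufferFullFlag) return; // buffer not full: nothing to evict
  for (const [start, end] of removalRanges) {
    removeFn(start, end); // the coded frame removal algorithm
  }
}

const removed = [];
evictCodedFrames(true, [[0, 2], [4, 6]], (start, end) => removed.push([start, end]));
```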
                          @@ -2278,36 +2280,36 @@

                          Audio Splice Frame

                          Follow these steps when the [=coded frame processing=] algorithm needs to generate a splice frame for two overlapping audio [=coded frames=]:

-                          Let track buffer be the [=track buffer=] that will contain the splice.
-                          Let new coded frame be the new [=coded frame=], that is being added to track buffer, which triggered the need for a splice.
-                          Let presentation timestamp be the [=presentation timestamp=] for new coded frame
-                          Let decode timestamp be the decode timestamp for new coded frame.
-                          Let frame duration be the [=coded frame duration=] of new coded frame.
-                          Let overlapped frame be the [=coded frame=] in track buffer with a [=presentation interval=] that contains presentation timestamp.
+                          Let |track buffer| be the [=track buffer=] that will contain the splice.
+                          Let |new coded frame| be the new [=coded frame=], that is being added to |track buffer|, which triggered the need for a splice.
+                          Let |presentation timestamp:double| be the [=presentation timestamp=] for |new coded frame|.
+                          Let |decode timestamp:double| be the decode timestamp for |new coded frame|.
+                          Let |frame duration:double| be the [=coded frame duration=] of |new coded frame|.
+                          Let |overlapped frame| be the [=coded frame=] in |track buffer| with a [=presentation interval=] that contains |presentation timestamp|.
-                          Update presentation timestamp and decode timestamp to the nearest audio sample timestamp based on sample rate of the
-                          audio in overlapped frame. If a timestamp is equidistant from both audio sample timestamps, then use the higher timestamp (e.g.,
+                          Update |presentation timestamp| and |decode timestamp| to the nearest audio sample timestamp based on sample rate of the
+                          audio in |overlapped frame|. If a timestamp is equidistant from both audio sample timestamps, then use the higher timestamp (e.g.,
                          floor(x * sample_rate + 0.5) / sample_rate).

                            For example, given the following values:

-                            • The [=presentation timestamp=] of overlapped frame equals 10.
-                            • The sample rate of overlapped frame equals 8000 Hz
-                            • presentation timestamp equals 10.01255
-                            • decode timestamp equals 10.01255
+                            • The [=presentation timestamp=] of |overlapped frame| equals 10.
+                            • The sample rate of |overlapped frame| equals 8000 Hz
+                            • |presentation timestamp| equals 10.01255
+                            • |decode timestamp| equals 10.01255

-                            presentation timestamp and decode timestamp are updated to 10.0125 since 10.01255 is closer to
+                            |presentation timestamp| and |decode timestamp| are updated to 10.0125 since 10.01255 is closer to
                            10 + 100/8000 (10.0125) than 10 + 101/8000 (10.012625)
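The rounding rule quoted above (`floor(x * sample_rate + 0.5) / sample_rate`) can be written directly; this non-normative sketch reproduces the 8000 Hz worked example:

```javascript
// Non-normative sketch: snap a timestamp to the nearest audio sample tick,
// rounding ties toward the higher timestamp, per the rule above.
function nearestSampleTimestamp(timestamp, sampleRate) {
  return Math.floor(timestamp * sampleRate + 0.5) / sampleRate;
}
```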

    If the user agent does not support crossfading then run the following steps:

-      1. Remove overlapped frame from track buffer.
-      2. Add a silence frame to track buffer with the following properties:
+      1. Remove |overlapped frame| from |track buffer|.
+      2. Add a silence frame to |track buffer| with the following properties:

-         • The [=presentation timestamp=] set to the overlapped frame [=presentation timestamp=].
-         • The [=decode timestamp=] set to the overlapped frame [=decode timestamp=].
-         • The [=coded frame duration=] set to difference between presentation timestamp and the overlapped frame [=presentation timestamp=].
+         • The [=presentation timestamp=] set to the |overlapped frame| [=presentation timestamp=].
+         • The [=decode timestamp=] set to the |overlapped frame| [=decode timestamp=].
+         • The [=coded frame duration=] set to difference between |presentation timestamp| and the |overlapped frame| [=presentation timestamp=].

       Some implementations MAY apply fades to/from silence to coded frames on either side of the inserted silence to make the transition less

@@ -2316,28 +2318,28 @@
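The no-crossfade fallback above can be sketched roughly as follows, assuming illustrative frame objects with `pts`/`dts`/`duration` fields (these are not spec-defined structures):

```javascript
// Sketch of the no-crossfade fallback: replace the overlapped frame with a
// silence frame that fills the gap up to the new frame's presentation
// timestamp.
function insertSilenceFrame(trackBuffer, overlappedFrame, presentationTimestamp) {
  // Step 1: remove the overlapped frame from the track buffer.
  const index = trackBuffer.indexOf(overlappedFrame);
  if (index !== -1) trackBuffer.splice(index, 1);

  // Step 2: add a silence frame covering [overlapped pts, new frame pts).
  const silenceFrame = {
    pts: overlappedFrame.pts,
    dts: overlappedFrame.dts,
    duration: presentationTimestamp - overlappedFrame.pts,
    silence: true,
  };
  trackBuffer.push(silenceFrame);
  return silenceFrame;
}
```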

                              Audio Splice Frame

      3. Return to caller without providing a splice frame.

-        This is intended to allow new coded frame to be added to the track buffer as if
-        overlapped frame had not been in the track buffer to begin with.
+        This is intended to allow |new coded frame| to be added to the |track buffer| as if
+        |overlapped frame| had not been in the |track buffer| to begin with.

-    Let frame end timestamp equal the sum of presentation timestamp and frame duration.
-    Let splice end timestamp equal the sum of presentation timestamp and the splice duration of 5 milliseconds.
-    Let fade out coded frames equal overlapped frame as well as any additional frames in track buffer that
-    have a [=presentation timestamp=] greater than presentation timestamp and less than splice end timestamp.
-    Remove all the frames included in fade out coded frames from track buffer.
+    Let |frame end timestamp:double| equal the sum of |presentation timestamp| and |frame duration|.
+    Let |splice end timestamp:double| equal the sum of |presentation timestamp| and the splice duration of 5 milliseconds.
+    Let |fade out coded frames| equal |overlapped frame| as well as any additional frames in |track buffer| that
+    have a [=presentation timestamp=] greater than |presentation timestamp| and less than |splice end timestamp|.
+    Remove all the frames included in |fade out coded frames| from |track buffer|.
    Return a splice frame with the following properties:

-      • The [=presentation timestamp=] set to the overlapped frame [=presentation timestamp=].
-      • The [=decode timestamp=] set to the overlapped frame [=decode timestamp=].
-      • The [=coded frame duration=] set to difference between frame end timestamp and the overlapped frame [=presentation timestamp=].
-      • The fade out coded frames equals fade-out coded frames.
-      • The fade in coded frame equal new coded frame.

-        If the new coded frame is less than 5 milliseconds in duration, then coded frames that are appended after the
-        new coded frame will be needed to properly render the splice.

-      • The splice timestamp equals presentation timestamp.
+      • The [=presentation timestamp=] set to the |overlapped frame| [=presentation timestamp=].
+      • The [=decode timestamp=] set to the |overlapped frame| [=decode timestamp=].
+      • The [=coded frame duration=] set to difference between |frame end timestamp| and the |overlapped frame| [=presentation timestamp=].
+      • The fade out coded frames equals |fade out coded frames|.
+      • The fade in coded frame equals |new coded frame|.

+        If the |new coded frame| is less than 5 milliseconds in duration, then coded frames that are appended after the
+        |new coded frame| will be needed to properly render the splice.

+      • The splice timestamp equals |presentation timestamp|.

    See the [=audio splice rendering=] algorithm for details on how this splice frame is rendered.
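Under the assumption that coded frames are plain objects with `pts`/`dts`/`duration` fields (an illustration, not a spec structure), the splice-frame assembly described above can be sketched as:

```javascript
// Sketch of assembling a splice frame: collect the fade-out frames, remove
// them from the track buffer, and return the splice frame's properties.
function makeSpliceFrame(trackBuffer, overlappedFrame, newCodedFrame) {
  const presentationTimestamp = newCodedFrame.pts;
  const frameEndTimestamp = presentationTimestamp + newCodedFrame.duration;
  const spliceEndTimestamp = presentationTimestamp + 0.005; // 5 ms splice

  // Fade out coded frames: the overlapped frame plus any frame that starts
  // inside the splice window.
  const fadeOutCodedFrames = trackBuffer.filter(
    (f) => f === overlappedFrame ||
      (f.pts > presentationTimestamp && f.pts < spliceEndTimestamp));

  // Remove all of them from the track buffer.
  for (const f of fadeOutCodedFrames) {
    trackBuffer.splice(trackBuffer.indexOf(f), 1);
  }

  return {
    pts: overlappedFrame.pts,
    dts: overlappedFrame.dts,
    duration: frameEndTimestamp - overlappedFrame.pts,
    fadeOutCodedFrames,
    fadeInCodedFrame: newCodedFrame,
    spliceTimestamp: presentationTimestamp,
  };
}
```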

@@ -2348,28 +2350,28 @@

                            Audio Splice Rendering

                            The following steps are run when a spliced frame, generated by the [=audio splice frame=] algorithm, needs to be rendered by the media element:

-    1. Let fade out coded frames be the [=coded frames=] that are faded out during the splice.
-    2. Let fade in coded frames be the [=coded frames=] that are faded in during the splice.
-    3. Let presentation timestamp be the [=presentation timestamp=] of the first coded frame in fade out coded frames.
-    4. Let end timestamp be the sum of the [=presentation timestamp=] and the [=coded frame duration=] of the last frame in fade in coded frames.
-    5. Let splice timestamp be the [=presentation timestamp=] where the splice starts. This corresponds with the [=presentation timestamp=] of the first frame in
-       fade in coded frames.
-    6. Let splice end timestamp equal splice timestamp plus five milliseconds.
-    7. Let fade out samples be the samples generated by decoding fade out coded frames.
-    8. Trim fade out samples so that it only contains samples between presentation timestamp and splice end timestamp.
-    9. Let fade in samples be the samples generated by decoding fade in coded frames.
-    10. If fade out samples and fade in samples do not have a common sample rate and channel layout, then convert
-        fade out samples and fade in samples to a common sample rate and channel layout.
-    11. Let output samples be a buffer to hold the output samples.
+    1. Let |fade out coded frames| be the [=coded frames=] that are faded out during the splice.
+    2. Let |fade in coded frames| be the [=coded frames=] that are faded in during the splice.
+    3. Let |presentation timestamp:double| be the [=presentation timestamp=] of the first coded frame in |fade out coded frames|.
+    4. Let |end timestamp:double| be the sum of the [=presentation timestamp=] and the [=coded frame duration=] of the last frame in |fade in coded frames|.
+    5. Let |splice timestamp:double| be the [=presentation timestamp=] where the splice starts. This corresponds with the [=presentation timestamp=] of the first frame in
+       |fade in coded frames|.
+    6. Let |splice end timestamp:double| equal |splice timestamp| plus five milliseconds.
+    7. Let |fade out samples| be the samples generated by decoding |fade out coded frames|.
+    8. Trim |fade out samples| so that it only contains samples between |presentation timestamp| and |splice end timestamp|.
+    9. Let |fade in samples| be the samples generated by decoding |fade in coded frames|.
+    10. If |fade out samples| and |fade in samples| do not have a common sample rate and channel layout, then convert
+        |fade out samples| and |fade in samples| to a common sample rate and channel layout.
+    11. Let |output samples| be a buffer to hold the output samples.

     12. Apply a linear gain fade out with a starting gain of 1 and an ending gain of 0 to the samples between
-        splice timestamp and splice end timestamp in fade out samples.
+        |splice timestamp| and |splice end timestamp| in |fade out samples|.

-    13. Apply a linear gain fade in with a starting gain of 0 and an ending gain of 1 to the samples between splice timestamp and
-        splice end timestamp in fade in samples.
-    14. Copy samples between presentation timestamp to splice timestamp from fade out samples into output samples.
-    15. For each sample between splice timestamp and splice end timestamp, compute the sum of a sample from fade out samples and the
-        corresponding sample in fade in samples and store the result in output samples.
-    16. Copy samples between splice end timestamp to end timestamp from fade in samples into output samples.
-    17. Render output samples.
+    13. Apply a linear gain fade in with a starting gain of 0 and an ending gain of 1 to the samples between |splice timestamp| and
+        |splice end timestamp| in |fade in samples|.
+    14. Copy samples between |presentation timestamp| to |splice timestamp| from |fade out samples| into |output samples|.
+    15. For each sample between |splice timestamp| and |splice end timestamp|, compute the sum of a sample from |fade out samples| and the
+        corresponding sample in |fade in samples| and store the result in |output samples|.
+    16. Copy samples between |splice end timestamp| to |end timestamp| from |fade in samples| into |output samples|.
+    17. Render |output samples|.
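The linear crossfade described above can be sketched as below, assuming mono `Float32Array` buffers that already share a sample rate, with the splice expressed in sample indices (`renderSplice` and its arguments are illustrative, not spec constructs):

```javascript
// Minimal crossfade sketch: fadeOutSamples covers [presentation timestamp,
// splice end timestamp]; fadeInSamples starts at the splice timestamp;
// spliceStart is the sample index of the splice timestamp, spliceLength the
// number of samples in the (nominally 5 ms) splice window.
function renderSplice(fadeOutSamples, fadeInSamples, spliceStart, spliceLength) {
  const output = new Float32Array(spliceStart + fadeInSamples.length);

  // Copy the un-spliced head from the fade-out samples.
  output.set(fadeOutSamples.subarray(0, spliceStart));

  // Crossfade: linear gain 1 -> 0 on fade-out, 0 -> 1 on fade-in, summed.
  for (let i = 0; i < spliceLength; i++) {
    const gainIn = i / spliceLength;
    const gainOut = 1 - gainIn;
    output[spliceStart + i] =
      gainOut * fadeOutSamples[spliceStart + i] + gainIn * fadeInSamples[i];
  }

  // Copy the remainder from the fade-in samples.
  output.set(fadeInSamples.subarray(spliceLength), spliceStart + spliceLength);
  return output;
}
```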

                            Here is a graphical representation of this algorithm.

                            @@ -2381,23 +2383,23 @@

                            Text Splice Frame

                            Follow these steps when the [=coded frame processing=] algorithm needs to generate a splice frame for two overlapping timed text [=coded frames=]:

-    1. Let track buffer be the [=track buffer=] that will contain the splice.
-    2. Let new coded frame be the new [=coded frame=], that is being added to track buffer, which triggered the need for a splice.
-    3. Let presentation timestamp be the [=presentation timestamp=] for new coded frame
-    4. Let decode timestamp be the decode timestamp for new coded frame.
-    5. Let frame duration be the [=coded frame duration=] of new coded frame.
-    6. Let frame end timestamp equal the sum of presentation timestamp and frame duration.
-    7. Let first overlapped frame be the [=coded frame=] in track buffer with a [=presentation interval=] that contains presentation timestamp.
+    1. Let |track buffer| be the [=track buffer=] that will contain the splice.
+    2. Let |new coded frame| be the new [=coded frame=], that is being added to |track buffer|, which triggered the need for a splice.
+    3. Let |presentation timestamp:double| be the [=presentation timestamp=] for |new coded frame|
+    4. Let |decode timestamp:double| be the decode timestamp for |new coded frame|.
+    5. Let |frame duration:double| be the [=coded frame duration=] of |new coded frame|.
+    6. Let |frame end timestamp:double| equal the sum of |presentation timestamp| and |frame duration|.
+    7. Let |first overlapped frame| be the [=coded frame=] in |track buffer| with a [=presentation interval=] that contains |presentation timestamp|.

-    8. Let overlapped presentation timestamp be the [=presentation timestamp=] of the first overlapped frame.
-    9. Let overlapped frames equal first overlapped frame as well as any additional frames in track buffer that
-       have a [=presentation timestamp=] greater than presentation timestamp and less than frame end timestamp.
-    10. Remove all the frames included in overlapped frames from track buffer.
-    11. Update the [=coded frame duration=] of the first overlapped frame to presentation timestamp - overlapped presentation timestamp.
-    12. Add first overlapped frame to the track buffer.
+    8. Let |overlapped presentation timestamp:double| be the [=presentation timestamp=] of the |first overlapped frame|.
+    9. Let |overlapped frames| equal |first overlapped frame| as well as any additional frames in |track buffer| that
+       have a [=presentation timestamp=] greater than |presentation timestamp| and less than |frame end timestamp|.
+    10. Remove all the frames included in |overlapped frames| from |track buffer|.
+    11. Update the [=coded frame duration=] of the |first overlapped frame| to |presentation timestamp| minus |overlapped presentation timestamp|.
+    12. Add |first overlapped frame| to the |track buffer|.

     13. Return to caller without providing a splice frame.

-       This is intended to allow new coded frame to be added to the track buffer as if
-       it hadn't overlapped any frames in track buffer to begin with.
+       This is intended to allow |new coded frame| to be added to the |track buffer| as if
+       it hadn't overlapped any frames in |track buffer| to begin with.
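The text splice above can be sketched as follows; the cue objects with `pts`/`duration` fields and the `spliceTextCues` helper are illustrative assumptions, not spec structures:

```javascript
// Sketch of the text splice: truncate the first overlapped cue so it ends
// where the new cue begins, and drop any other cue that starts inside the
// new cue's interval.
function spliceTextCues(trackBuffer, newCodedFrame) {
  const pts = newCodedFrame.pts;
  const frameEnd = pts + newCodedFrame.duration;

  // First overlapped frame: presentation interval contains pts.
  const firstOverlapped = trackBuffer.find(
    (c) => c.pts <= pts && pts < c.pts + c.duration);
  if (!firstOverlapped) return;

  // Remove the first overlapped cue and any cue starting in (pts, frameEnd).
  for (const c of trackBuffer.slice()) {
    if (c === firstOverlapped || (c.pts > pts && c.pts < frameEnd)) {
      trackBuffer.splice(trackBuffer.indexOf(c), 1);
    }
  }

  // Re-add the first overlapped cue, truncated to end at the new cue's start.
  firstOverlapped.duration = pts - firstOverlapped.pts;
  trackBuffer.push(firstOverlapped);
}
```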

                            @@ -2439,14 +2441,14 @@

                            Methods

                            Allows the SourceBuffer objects in the list to be accessed with an array operator (i.e., []).

-    1. If index is greater than or equal to the {{SourceBufferList/length}} attribute then return undefined and abort these steps.
-    2. Return the index'th SourceBuffer object in the list.
+    1. If |index:unsigned long| is greater than or equal to the {{SourceBufferList/length}} attribute then return undefined and abort these steps.
+    2. Return the |index|'th SourceBuffer object in the list.
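The two-step getter amounts to a bounds-checked array access. A sketch of the same logic as a plain function (`indexedGetter` is illustrative; in a page the platform provides this behavior on SourceBufferList itself):

```javascript
// Illustrative model of the SourceBufferList indexed getter.
function indexedGetter(list, index) {
  // Out-of-range access returns undefined.
  if (index >= list.length) return undefined;
  // Otherwise return the index'th object in the list.
  return list[index];
}
```

In script this is why `mediaSource.sourceBuffers[0]` returns the first SourceBuffer and an out-of-range index yields `undefined`.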
     Parameter   Type                      Nullable   Optional   Description
-    start       {{double}}                                      The start of the removal range, in seconds measured from [=presentation start time=].
+    |start|     {{double}}                                      The start of the removal range, in seconds measured from [=presentation start time=].
-    end         {{unrestricted double}}
+    |end|       {{unrestricted double}}

     Parameter   Type                      Nullable   Optional   Description
-    index       {{unsigned long}}
+    |index|     {{unsigned long}}