<?xml version="1.0" encoding="us-ascii"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc toc="yes" ?>
<?rfc symrefs="yes" ?>
<?rfc iprnotified="no" ?>
<?rfc strict="yes" ?>
<?rfc compact="yes" ?>
<?rfc sortrefs="yes" ?>
<?rfc colonspace="yes" ?>
<?rfc rfcedstyle="no" ?>
<?rfc docmapping="yes" ?>
<?rfc tocdepth="4"?>
<rfc category="std" docName="draft-ietf-rtcweb-jsep-latest"
ipr="trust200902">
<front>
<title abbrev="JSEP">JavaScript Session Establishment
Protocol</title>
<author fullname="Justin Uberti" initials="J." surname="Uberti">
<organization>Google</organization>
<address>
<postal>
<street>747 6th St S</street>
<city>Kirkland</city>
<region>WA</region>
<code>98033</code>
<country>USA</country>
</postal>
<email>justin@uberti.name</email>
</address>
</author>
<author fullname="Cullen Jennings" initials="C."
surname="Jennings">
<organization>Cisco</organization>
<address>
<postal>
<street>400 3rd Avenue SW</street>
<city>Calgary</city>
<region>AB</region>
<code>T2P 4H2</code>
<country>Canada</country>
</postal>
<email>fluffy@iii.ca</email>
</address>
</author>
<author fullname="Eric Rescorla" initials="E.K." surname="Rescorla"
role="editor">
<organization>Mozilla</organization>
<address>
<postal>
<street>331 Evelyn Ave</street>
<city>Mountain View</city>
<region>CA</region>
<code>94041</code>
<country>USA</country>
</postal>
<email>ekr@rtfm.com</email>
</address>
</author>
<date />
<area>RAI</area>
<abstract>
<t>This document describes the mechanisms for allowing a
JavaScript application to control the signaling plane of a
multimedia session via the interface specified in the W3C
RTCPeerConnection API, and discusses how this relates to existing
signaling protocols.</t>
</abstract>
</front>
<middle>
<section title="Introduction" anchor="sec.introduction">
<t>This document describes how the W3C WEBRTC RTCPeerConnection
interface
<xref target="W3C.webrtc"></xref> is used to control the setup,
management and teardown of a multimedia session.</t>
<section title="General Design of JSEP"
anchor="sec.general-design-of-jsep">
<t>WebRTC call setup has been designed to focus on controlling
the media plane, leaving signaling plane behavior up to the
application as much as possible. The rationale is that
different applications may prefer to use different protocols,
such as the existing SIP call signaling protocol, or something
custom to the particular application, perhaps for a novel use
case. In this approach, the key information that needs to be
exchanged is the multimedia session description, which
specifies the transport and media configuration information
necessary to establish the media plane.</t>
<t>With these considerations in mind, this document describes
the JavaScript Session Establishment Protocol (JSEP) that
allows for full control of the signaling state machine from
JavaScript. As described above, JSEP assumes a model in which a
JavaScript application executes inside a runtime containing
WebRTC APIs (the "JSEP implementation"). The JSEP
implementation is almost entirely divorced from the core
signaling flow, which is instead handled by the JavaScript
making use of two interfaces: (1) passing in local and remote
session descriptions and (2) interacting with the ICE state
machine. The combination of the JSEP implementation and the
JavaScript application is referred to throughout this document
as a "JSEP endpoint".</t>
<t>In this document, the use of JSEP is described as if it
always occurs between two JSEP endpoints. Note, though, that in
many cases it will actually be between a JSEP endpoint and some kind
of server, such as a gateway or MCU. This distinction is
invisible to the JSEP endpoint; it just follows the
instructions it is given via the API.</t>
<t>JSEP's handling of session descriptions is simple and
straightforward. Whenever an offer/answer exchange is needed,
the initiating side creates an offer by calling a createOffer()
API. The application then uses that offer to set up its local
config via the setLocalDescription() API. The offer is finally
sent off to the remote side over its preferred signaling
mechanism (e.g., WebSockets); upon receipt of that offer, the
remote party installs it using the setRemoteDescription()
API.</t>
<t>To complete the offer/answer exchange, the remote party uses
the createAnswer() API to generate an appropriate answer,
applies it using the setLocalDescription() API, and sends the
answer back to the initiator over the signaling channel. When
the initiator gets that answer, it installs it using the
setRemoteDescription() API, and initial setup is complete. This
process can be repeated for additional offer/answer
exchanges.</t>
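<t>As a non-normative illustration, this exchange might be driven
from JavaScript as sketched below, using the RTCPeerConnection API
defined in <xref target="W3C.webrtc"></xref>; the "signaling"
object stands in for whatever application-defined signaling channel
is in use and is purely hypothetical.</t>
<figure>
<artwork>
<![CDATA[
// Offerer side.
async function startCall(pc, signaling) {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);    // set up local config
  signaling.send({ type: offer.type, sdp: offer.sdp });
}

// Answerer side, upon receipt of the offer.
async function handleOffer(pc, signaling, offer) {
  await pc.setRemoteDescription(offer);
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  signaling.send({ type: answer.type, sdp: answer.sdp });
}

// Offerer side, upon receipt of the answer; setup is now complete.
async function handleAnswer(pc, answer) {
  await pc.setRemoteDescription(answer);
}
]]>
</artwork>
</figure>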
<t>Regarding ICE
<xref target="RFC5245"></xref>, JSEP decouples the ICE state
machine from the overall signaling state machine, as the ICE
state machine must remain in the JSEP implementation, because
only the implementation has the necessary knowledge of
candidates and other transport information. Performing this
separation provides additional flexibility in protocols that
decouple session descriptions from transport. For instance, in
traditional SIP, each offer or answer is self-contained,
including both the session descriptions and the transport
information. However,
<xref target="I-D.ietf-mmusic-trickle-ice-sip" /> allows SIP to
be used with trickle ICE
<xref target="I-D.ietf-ice-trickle" />, in which the session
description can be sent immediately and the transport
information can be sent when available. Sending transport
information separately can allow for faster ICE and DTLS
startup, since ICE checks can start as soon as any transport
information is available rather than waiting for all of it.
JSEP's decoupling of the ICE and signaling state machines
allows it to accommodate either model.</t>
<t>Through its abstraction of signaling, the JSEP approach does
require the application to be aware of the signaling process.
While the application does not need to understand the contents
of session descriptions to set up a call, the application must
call the right APIs at the right times, convert the session
descriptions and ICE information into the defined messages of
its chosen signaling protocol, and perform the reverse
conversion on the messages it receives from the other side.</t>
<t>One way to make life easier for the application is to
provide a JavaScript library that hides this complexity from
the developer; said library would implement a given signaling
protocol along with its state machine and serialization code,
presenting a higher level call-oriented interface to the
application developer. For example, libraries exist to adapt
the JSEP API into an API suitable for SIP or XMPP. Thus, JSEP
provides greater control for the experienced developer without
forcing any additional complexity on the novice developer.</t>
</section>
<section title="Other Approaches Considered"
anchor="sec.other-approaches-consider">
<t>One approach that was considered instead of JSEP was to
include a lightweight signaling protocol. Instead of providing
session descriptions to the API, the API would produce and
consume messages from this protocol. While providing a
higher-level API, this would have put more control of signaling
within the JSEP implementation, forcing it to understand and
handle concepts like signaling glare (see
<xref target="RFC3264" />, Section 4).</t>
<t>A second approach that was considered but not chosen was to
decouple the management of the media control objects from
session descriptions, instead offering APIs that would control
each component directly. This was rejected based on the
argument that requiring exposure of this level of complexity to
the application programmer would not be beneficial; it would
result in an API where even a simple example would require a
significant amount of code to orchestrate all the needed
interactions, as well as creating a large API surface that
needed to be agreed upon and documented. In addition, these API
points could be called in any order, resulting in a more
complex set of interactions with the media subsystem than the
JSEP approach, which specifies how session descriptions are to
be evaluated and applied.</t>
<t>One variation on JSEP that was considered was to keep the
basic session description-oriented API, but to move the
mechanism for generating offers and answers out of the JSEP
implementation. Instead of providing createOffer/createAnswer
methods within the implementation, this approach would instead
expose a getCapabilities API which would provide the
application with the information it needed in order to generate
its own session descriptions. This increases the amount of work
that the application needs to do; it needs to know how to
generate session descriptions from capabilities, and especially
how to generate the correct answer from an arbitrary offer and
the supported capabilities. While this could certainly be
addressed by using a library like the one mentioned above, it
basically forces the use of said library even for a simple
example. Providing createOffer/createAnswer avoids this
problem.</t>
</section>
</section>
<section title="Terminology" anchor="sec.terminology">
<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in
<xref target="RFC2119"></xref>.</t>
</section>
<section title="Semantics and Syntax"
anchor="sec.semantics-and-syntax">
<section title="Signaling Model" anchor="sec.signaling-model">
<t>JSEP does not specify a particular signaling model or state
machine, other than the generic need to exchange session
descriptions in the fashion described by
<xref target="RFC3264"></xref> (offer/answer) in order for both
sides of the session to know how to conduct the session. JSEP
provides mechanisms to create offers and answers, as well as to
apply them to a session. However, the JSEP implementation is
totally decoupled from the actual mechanism by which these
offers and answers are communicated to the remote side,
including addressing, retransmission, forking, and glare
handling. These issues are left entirely up to the application;
the application has complete control over which offers and
answers get handed to the implementation, and when.</t>
<figure anchor="fig-sigModel" title="JSEP Signaling Model">
<artwork>
<![CDATA[
+-----------+ +-----------+
| Web App |<--- App-Specific Signaling -->| Web App |
+-----------+ +-----------+
^ ^
| SDP | SDP
V V
+-----------+ +-----------+
| JSEP |<----------- Media ------------>| JSEP |
| Impl. | | Impl. |
+-----------+ +-----------+
]]>
</artwork>
</figure>
</section>
<section title="Session Descriptions and State Machine"
anchor="sec.session-descriptions-and-state-machine">
<t>In order to establish the media plane, the JSEP
implementation needs specific parameters to indicate what to
transmit to the remote side, as well as how to handle the media
that is received. These parameters are determined by the
exchange of session descriptions in offers and answers, and
there are certain details to this process that must be handled
in the JSEP APIs.</t>
<t>Whether a session description applies to the local side or
the remote side affects the meaning of that description. For
example, the list of codecs sent to a remote party indicates
what the local side is willing to receive, which, when
intersected with the set of codecs the remote side supports,
specifies what the remote side should send. However, not all
parameters follow this rule; some parameters are declarative
and the remote side MUST either accept them or reject them
altogether. An example of such a parameter is the DTLS
fingerprints
<xref target="RFC8122"></xref>, which are calculated based on
the local certificate(s) offered, and are not subject to
negotiation.</t>
<t>In addition, various RFCs put different conditions on the
format of offers versus answers. For example, an offer may
propose an arbitrary number of m= sections (i.e., media
descriptions as described in
<xref target="RFC4566" />, Section 5.14), but an answer must
contain the exact same number as the offer.</t>
<t>Lastly, while the exact media parameters are known only
after an offer and an answer have been exchanged, the offerer
may receive ICE checks, and possibly media (e.g., in the case
of a re-offer after a connection has been established) before
it receives an answer. To properly process incoming media in
this case, the offerer's media handler must be aware of the
details of the offer before the answer arrives.</t>
<t>Therefore, in order to handle session descriptions properly,
the JSEP implementation needs:
<list style="numbers">
<t>To know if a session description pertains to the local or
remote side.</t>
<t>To know if a session description is an offer or an
answer.</t>
<t>To allow the offer to be specified independently of the
answer.</t>
</list>JSEP addresses this by adding both setLocalDescription
and setRemoteDescription methods and having session description
objects contain a type field indicating the type of session
description being supplied. This satisfies the requirements
listed above for both the offerer, who first calls
setLocalDescription(sdp [offer]) and then later
setRemoteDescription(sdp [answer]), as well as for the
answerer, who first calls setRemoteDescription(sdp [offer]) and
then later setLocalDescription(sdp [answer]).</t>
<t>During the offer/answer exchange, the outstanding offer is
considered to be "pending" at the offerer and the answerer, as
it may either be accepted or rejected. If this is a re-offer,
each side will also have "current" local and remote
descriptions, which reflect the result of the last offer/answer
exchange. Sections
<xref target="sec.pendinglocaldescription" />,
<xref target="sec.pendingremotedescription" />,
<xref target="sec.currentlocaldescription" />, and
<xref target="sec.currentremotedescription" />, provide more
detail on pending and current descriptions.</t>
<t>JSEP also allows for an answer to be treated as provisional
by the application. Provisional answers provide a way for an
answerer to communicate initial session parameters back to the
offerer, in order to allow the session to begin, while allowing
a final answer to be specified later. This concept of a final
answer is important to the offer/answer model; when such an
answer is received, any extra resources allocated by the caller
can be released, now that the exact session configuration is
known. These "resources" can include things like extra ICE
components, TURN candidates, or video decoders. Provisional
answers, on the other hand, do no such deallocation; as a
result, multiple dissimilar provisional answers, with their own
codec choices, transport parameters, etc., can be received and
applied during call setup. Note that the final answer itself
may be different than any received provisional answers.</t>
<t>In
<xref target="RFC3264"></xref>, the constraint at the signaling
level is that only one offer can be outstanding for a given
session, but at the media stack level, a new offer can be
generated at any point. For example, when using SIP for
signaling, if one offer is sent, then cancelled using a SIP
CANCEL, another offer can be generated even though no answer
was received for the first offer. To support this, the JSEP
media layer can provide an offer via the createOffer() method
whenever the JavaScript application needs one for the
signaling. The answerer can send back zero or more provisional
answers, and finally end the offer-answer exchange by sending a
final answer. The state machine for this is as follows:</t>
<t>
<figure anchor="fig-state-machine"
title="JSEP State Machine">
<artwork>
<![CDATA[
setRemote(OFFER) setLocal(PRANSWER)
/-----\ /-----\
| | | |
v | v |
+---------------+ | +---------------+ |
| |----/ | |----/
| have- | setLocal(PRANSWER) | have- |
| remote-offer |------------------- >| local-pranswer|
| | | |
| | | |
+---------------+ +---------------+
^ | |
| | setLocal(ANSWER) |
setRemote(OFFER) | |
| V setLocal(ANSWER) |
+---------------+ |
| | |
| |<---------------------------+
| stable |
| |<---------------------------+
| | |
+---------------+ setRemote(ANSWER) |
^ | |
| | setLocal(OFFER) |
setRemote(ANSWER) | |
| V |
+---------------+ +---------------+
| | | |
| have- | setRemote(PRANSWER) |have- |
| local-offer |------------------- >|remote-pranswer|
| | | |
| |----\ | |----\
+---------------+ | +---------------+ |
^ | ^ |
| | | |
\-----/ \-----/
setLocal(OFFER) setRemote(PRANSWER)
]]>
</artwork>
</figure>
</t>
<t>Aside from these state transitions there is no other
difference between the handling of provisional ("pranswer") and
final ("answer") answers.</t>
</section>
<section title="Session Description Format"
anchor="sec.session-description-forma">
<t>JSEP's session descriptions use SDP syntax for their
internal representation. While this format is not optimal for
manipulation from JavaScript, it is widely accepted, and
frequently updated with new features; any alternate encoding of
session descriptions would have to keep pace with the changes
to SDP, at least until the time that this new encoding eclipsed
SDP in popularity.</t>
<t>However, to provide for future flexibility, the SDP syntax
is encapsulated within a SessionDescription object, which can
be constructed from SDP, and be serialized out to SDP. If
future specifications agree on a JSON format for session
descriptions, we could easily enable this object to generate
and consume that JSON.</t>
<t>As detailed below, most applications should be able to treat
the SessionDescriptions produced and consumed by these various
API calls as opaque blobs; that is, the application will not
need to read or change them.</t>
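<t>For example, an application can relay a description over its
signaling channel as sketched below without ever parsing the SDP;
the "signaling" object is a hypothetical application-provided
channel, and the field names follow the W3C API
<xref target="W3C.webrtc"></xref>.</t>
<figure>
<artwork>
<![CDATA[
// Relay a description without reading or changing the SDP.
function sendDescription(signaling, desc) {
  signaling.send(JSON.stringify({ type: desc.type, sdp: desc.sdp }));
}

// On receipt, hand the blob straight to the JSEP implementation.
async function receiveDescription(pc, message) {
  await pc.setRemoteDescription(JSON.parse(message));
}
]]>
</artwork>
</figure>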
</section>
<section title="Session Description Control"
anchor="sec.session-description-ctrl">
<t>In order to give the application control over various common
session parameters, JSEP provides control surfaces which tell
the JSEP implementation how to generate session descriptions.
This avoids the need for JavaScript to modify session
descriptions in most cases.</t>
<t>Changes to these objects result in changes to the session
descriptions generated by subsequent createOffer/Answer
calls.</t>
<section title="RtpTransceivers" anchor="sec.rtptransceivers">
<t>RtpTransceivers allow the application to control the RTP
media associated with one m= section. Each RtpTransceiver has
an RtpSender and an RtpReceiver, which an application can use
to control the sending and receiving of RTP media. The
application may also modify the RtpTransceiver directly, for
instance, by stopping it.</t>
<t>RtpTransceivers generally have a 1:1 mapping with m=
sections, although there may be more RtpTransceivers than m=
sections when RtpTransceivers are created but not yet
associated with an m= section, or if RtpTransceivers have been
stopped and disassociated from m= sections. An RtpTransceiver
is said to be associated with an m= section if its mid
property is non-null; otherwise it is said to be
disassociated. The associated m= section is determined using
a mapping between transceivers and m= section indices, formed
when creating an offer or applying a remote offer.</t>
<t>An RtpTransceiver is never associated with more than one
m= section, and once a session description is applied, an m=
section is always associated with exactly one RtpTransceiver.
However, in certain cases where an m= section has been
rejected, as discussed in
<xref target="sec.subsequent-offers" /> below, that m= section
will be "recycled" and associated with a new RtpTransceiver
with a new mid value.</t>
<t>RtpTransceivers can be created explicitly by the
application or implicitly by calling setRemoteDescription
with an offer that adds new m= sections.</t>
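<t>As a non-normative sketch, using the addTransceiver method
from <xref target="W3C.webrtc"></xref>, an application might
create and manipulate transceivers as follows:</t>
<figure>
<artwork>
<![CDATA[
const pc = new RTCPeerConnection();

// Explicit creation: each transceiver will be associated with its
// own m= section when the next offer is created.
const audio = pc.addTransceiver('audio');
const video = pc.addTransceiver('video', { direction: 'sendrecv' });

// The application can also modify a transceiver directly, for
// instance by stopping it.
video.stop();

// Implicit creation: applying a remote offer that adds new m=
// sections causes corresponding transceivers to be created.
// await pc.setRemoteDescription(remoteOffer);
]]>
</artwork>
</figure>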
</section>
<section title="RtpSenders" anchor="sec.rtpsenders">
<t>RtpSenders allow the application to control how RTP media
is sent. An RtpSender is conceptually responsible for the
outgoing RTP stream(s) described by an m= section. This
includes encoding the attached MediaStreamTrack, sending RTP
media packets, and generating/processing RTCP for the
outgoing RTP streams(s).</t>
</section>
<section title="RtpReceivers" anchor="sec.rtpreceivers">
<t>RtpReceivers allow the application to inspect how RTP
media is received. An RtpReceiver is conceptually responsible
for the incoming RTP stream(s) described by an m= section.
This includes processing received RTP media packets, decoding
the incoming stream(s) to produce a remote MediaStreamTrack,
and generating/processing RTCP for the incoming RTP
stream(s).</t>
</section>
</section>
<section title="ICE" anchor="sec.ice">
<section title="ICE Gathering Overview"
anchor="sec.ice-gather-overview">
<t>JSEP gathers ICE candidates as needed by the application.
Collection of ICE candidates is referred to as a gathering
phase, and this is triggered either by the addition of a new
or recycled m= section to the local session description, or
new ICE credentials in the description, indicating an ICE
restart. Use of new ICE credentials can be triggered
explicitly by the application, or implicitly by the JSEP
implementation in response to changes in the ICE
configuration.</t>
<t>When the ICE configuration changes in a way that requires
a new gathering phase, a 'needs-ice-restart' bit is set. When
this bit is set, calls to the createOffer API will generate
new ICE credentials. This bit is cleared by a call to the
setLocalDescription API with new ICE credentials from either
an offer or an answer, i.e., from either a local- or
remote-initiated ICE restart.</t>
<t>When a new gathering phase starts, the ICE agent will
notify the application that gathering is occurring through an
event. Then, when each new ICE candidate becomes available,
the ICE agent will supply it to the application via an
additional event; these candidates will also automatically be
added to the current and/or pending local session
description. Finally, when all candidates have been gathered,
an event will be dispatched to signal that the gathering
process is complete.</t>
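<t>A non-normative sketch of how an application might observe
these events, using the event handler and state names from
<xref target="W3C.webrtc"></xref>:</t>
<figure>
<artwork>
<![CDATA[
const pc = new RTCPeerConnection();

pc.onicegatheringstatechange = () => {
  if (pc.iceGatheringState === 'gathering') {
    // A new gathering phase has started.
  } else if (pc.iceGatheringState === 'complete') {
    // All candidates for this gathering phase have been gathered.
  }
};

pc.onicecandidate = (event) => {
  if (event.candidate) {
    // A new local candidate is available; it has also been added
    // to the pending and/or current local description.
  } else {
    // A null candidate signals the end of the gathering phase.
  }
};
]]>
</artwork>
</figure>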
<t>Note that gathering phases only gather the candidates
needed by new/recycled/restarting m= sections; other m=
sections continue to use their existing candidates. Also, if
an m= section is bundled (either by a successful bundle
negotiation or by being marked as bundle-only), then
candidates will be gathered and exchanged for that m= section
if and only if its MID is a BUNDLE-tag, as described in
<xref target="I-D.ietf-mmusic-sdp-bundle-negotiation" />.</t>
</section>
<section title="ICE Candidate Trickling"
anchor="sec.ice-candidate-trickling">
<t>Candidate trickling is a technique through which a caller
may incrementally provide candidates to the callee after the
initial offer has been dispatched; the semantics of "Trickle
ICE" are defined in
<xref target="I-D.ietf-ice-trickle"></xref>. This process
allows the callee to begin acting upon the call and setting
up the ICE (and perhaps DTLS) connections immediately,
without having to wait for the caller to gather all possible
candidates. This results in faster media setup in cases where
gathering is not performed prior to initiating the call.</t>
<t>JSEP supports optional candidate trickling by providing
APIs, as described above, that provide control and feedback
on the ICE candidate gathering process. Applications that
support candidate trickling can send the initial offer
immediately and send individual candidates when they are
notified of a new candidate; applications that do not support
this feature can simply wait for the indication that
gathering is complete, and then create and send their offer,
with all the candidates, at this time.</t>
<t>Upon receipt of trickled candidates, the receiving
application will supply them to its ICE agent. This triggers
the ICE agent to start using the new remote candidates for
connectivity checks.</t>
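<t>A non-normative sketch of candidate trickling is shown below;
the "signaling" object represents a hypothetical
application-defined channel, and the API names are taken from
<xref target="W3C.webrtc"></xref>.</t>
<figure>
<artwork>
<![CDATA[
const pc = new RTCPeerConnection();

// Sending side: forward each candidate as soon as it is gathered.
pc.onicecandidate = ({ candidate }) => {
  if (candidate) {
    signaling.send({ kind: 'candidate',
                     candidate: candidate.toJSON() });
  }
};

// Receiving side: supply each trickled candidate to the ICE agent,
// which then starts using it for connectivity checks.
signaling.onmessage = async (msg) => {
  if (msg.kind === 'candidate') {
    await pc.addIceCandidate(msg.candidate);
  }
};
]]>
</artwork>
</figure>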
<section title="ICE Candidate Format"
anchor="sec.ice-candidate-format">
<t>In JSEP, ICE candidates are abstracted by an
IceCandidate object, and as with session descriptions, SDP
syntax is used for the internal representation.</t>
<t>The candidate details are specified in an IceCandidate
field, using the same SDP syntax as the
"candidate-attribute" field defined in
<xref target="RFC5245" />, Section 15.1. Note that this
field does not contain an "a=" prefix, as indicated in the
following example:</t>
<figure>
<artwork>
<![CDATA[
candidate:1 1 UDP 1694498815 192.0.2.33 10000 typ host
]]>
</artwork>
</figure>
<t>The IceCandidate object contains a field to indicate
which ICE ufrag it is associated with, as defined in
<xref target="RFC5245" />, Section 15.4. This value is used
to determine which session description (and thereby which
gathering phase) this IceCandidate belongs to, which helps
resolve ambiguities during ICE restarts. If this field is
absent in a received IceCandidate (perhaps when
communicating with a non-JSEP endpoint), the most recently
received session description is assumed.</t>
<t>The IceCandidate object also contains fields to indicate
which m= section it is associated with, which can be
identified in one of two ways, either by a m= section
index, or a MID. The m= section index is a zero-based
index, with index N referring to the N+1th m= section in
the session description referenced by this IceCandidate.
The MID is a "media stream identification" value, as
defined in
<xref target="RFC5888"></xref>, Section 4, which provides a
more robust way to identify the m= section in the session
description, using the MID of the associated RtpTransceiver
object (which may have been locally generated by the
answerer when interacting with a non-JSEP endpoint that
does not support the MID attribute, as discussed in
<xref target="sec.applying-a-remote-desc" /> below). If the
MID field is present in a received IceCandidate, it MUST be
used for identification; otherwise, the m= section index is
used instead.</t>
<t>When creating an IceCandidate object, JSEP
implementations MUST populate each of the candidate, ufrag,
m= section index, and MID fields. Implementations MUST also
be prepared to receive objects with some fields missing, as
mentioned above.</t>
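<t>As a non-normative example, a fully populated candidate object
might be represented as follows; the field names (candidate,
usernameFragment, sdpMLineIndex, sdpMid) are those used by the
W3C API <xref target="W3C.webrtc"></xref>, and the ufrag value
shown is illustrative.</t>
<figure>
<artwork>
<![CDATA[
const candidateInit = {
  candidate:
    'candidate:1 1 UDP 1694498815 192.0.2.33 10000 typ host',
  usernameFragment: 'EsAw',   // ufrag of the gathering phase
  sdpMLineIndex: 0,           // zero-based m= section index
  sdpMid: '0'                 // MID of the associated m= section
};
// This object can then be passed to addIceCandidate on the
// remote side.
]]>
</artwork>
</figure>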
</section>
</section>
<section title="ICE Candidate Policy"
anchor="sec.ice-candidate-policy">
<t>Typically, when gathering ICE candidates, the JSEP
implementation will gather all possible forms of initial
candidates - host, server reflexive, and relay. However, in
certain cases, applications may want to have more specific
control over the gathering process, due to privacy or related
concerns. For example, one may want to only use relay
candidates, to leak as little location information as
possible (keeping in mind that this choice comes with
corresponding operational costs). To accomplish this, JSEP
allows the application to restrict which ICE candidates are
used in a session. Note that this filtering is applied on top
of any restrictions the implementation chooses to enforce
regarding which IP addresses are permitted for the
application, as discussed in
<xref target="I-D.ietf-rtcweb-ip-handling" />.</t>
<t>There may also be cases where the application wants to
change which types of candidates are used while the session
is active. A prime example is where a callee may initially
want to use only relay candidates, to avoid leaking location
information to an arbitrary caller, but then change to use
all candidates (for lower operational cost) once the user has
indicated they want to take the call. For this scenario, the
JSEP implementation MUST allow the candidate policy to be
changed in mid-session, subject to the aforementioned
interactions with local policy.</t>
<t>To administer the ICE candidate policy, the JSEP
implementation will determine the current setting at the
start of each gathering phase. Then, during the gathering
phase, the implementation MUST NOT expose candidates
disallowed by the current policy to the application, use them
as the source of connectivity checks, or indirectly expose
them via other fields, such as the raddr/rport attributes for
other ICE candidates. Later, if a different policy is
specified by the application, the application can apply it by
kicking off a new gathering phase via an ICE restart.</t>
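<t>In the W3C API <xref target="W3C.webrtc"></xref>, this policy
is exposed via the iceTransportPolicy configuration member. A
non-normative sketch of starting with only relay candidates and
later relaxing the policy via an ICE restart (the "signaling"
object is hypothetical):</t>
<figure>
<artwork>
<![CDATA[
// Gather only relay candidates, to avoid exposing host addresses.
const pc = new RTCPeerConnection({ iceTransportPolicy: 'relay' });

// Once the user accepts the call, allow all candidate types and
// kick off a new gathering phase with an ICE restart.
async function useAllCandidates(pc, signaling) {
  pc.setConfiguration({ iceTransportPolicy: 'all' });
  const offer = await pc.createOffer({ iceRestart: true });
  await pc.setLocalDescription(offer);
  signaling.send({ type: offer.type, sdp: offer.sdp });
}
]]>
</artwork>
</figure>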
</section>
<section title="ICE Candidate Pool"
anchor="sec.ice-candidate-pool">
<t>JSEP applications typically inform the JSEP implementation
to begin ICE gathering via the information supplied to
setLocalDescription, as the local description indicates the
number of ICE components which will be needed and for which
candidates must be gathered. However, to accelerate cases
where the application knows the number of ICE components to
use ahead of time, it may ask the implementation to gather a
pool of potential ICE candidates to help ensure rapid media
setup.</t>
<t>When setLocalDescription is eventually called, and the
JSEP implementation goes to gather the needed ICE candidates,
it SHOULD start by checking if any candidates are available
in the pool. If there are candidates in the pool, they SHOULD
be handed to the application immediately via the ICE
candidate event. If the pool becomes depleted, either because
a larger-than-expected number of ICE components is used, or
because the pool has not had enough time to gather
candidates, the remaining candidates are gathered as usual.
This only occurs for the first offer/answer exchange, after
which the candidate pool is emptied and no longer used.</t>
<t>One example of where this concept is useful is an
application that expects an incoming call at some point in
the future, and wants to minimize the time it takes to
establish connectivity, to avoid clipping of initial media.
By pre-gathering candidates into the pool, it can exchange
and start sending connectivity checks from these candidates
almost immediately upon receipt of a call. Note though that
by holding on to these pre-gathered candidates, which will be
kept alive as long as they may be needed, the application
will consume resources on the STUN/TURN servers it is
using.</t>
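<t>In the W3C API <xref target="W3C.webrtc"></xref>, the pool is
requested via the iceCandidatePoolSize configuration member; a
non-normative sketch for an application expecting an incoming
call (the TURN server URL is illustrative):</t>
<figure>
<artwork>
<![CDATA[
// Pre-gather a pool of candidates before any offer/answer
// exchange takes place.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'turns:turn.example.com' }],
  iceCandidatePoolSize: 1
});

// When setLocalDescription is later called, pooled candidates are
// surfaced immediately via the ICE candidate event, minimizing
// the time needed to start connectivity checks.
]]>
</artwork>
</figure>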
</section>
</section>
<section anchor="sec.imageattr" title="Video Size Negotiation">
<t>Video size negotiation is the process through which a
receiver can use the "a=imageattr" SDP attribute
<xref target="RFC6236" /> to indicate what video frame sizes it
is capable of receiving. A receiver may have hard limits on
what its video decoder can process, or it may have some maximum
set by policy. By specifying these limits in an "a=imageattr"
attribute, JSEP endpoints can attempt to ensure that the remote
sender transmits video at an acceptable resolution. However,
when communicating with a non-JSEP endpoint that does not
understand this attribute, any signaled limits may be exceeded,
and the JSEP implementation MUST handle this gracefully, e.g.,
by discarding the video.</t>
<t>Note that certain codecs support transmission of samples
with aspect ratios other than 1.0 (i.e., non-square pixels).
JSEP implementations will not transmit non-square pixels, but
SHOULD receive and render such video with the correct aspect
ratio. However, sample aspect ratio has no impact on the size
negotiation described below; all dimensions are measured in
pixels, whether square or not.</t>
<section anchor="sec.creating-imageattr"
title="Creating an imageattr Attribute">
<t>The receiver will first intersect any known local limits
(e.g., hardware decoder capabilities, local policy) to
determine the absolute minimum and maximum sizes it can
receive. If there are no known local limits, the
"a=imageattr" attribute SHOULD be omitted. If these local
limits preclude receiving any video, i.e., the degenerate
case of no permitted resolutions, the "a=imageattr" attribute
MUST be omitted, and the m= section MUST be marked as
sendonly/inactive, as appropriate.</t>
<t>Otherwise, an "a=imageattr" attribute is created with
"recv" direction, and the resulting resolution space formed
from the aforementioned intersection is used to specify its
minimum and maximum x= and y= values.</t>
<t>The rules here express a single set of preferences, and
therefore, the "a=imageattr" q= value is not important. It
SHOULD be set to 1.0.</t>
<t>The "a=imageattr" field is payload type specific. When all
video codecs supported have the same capabilities, use of a
single attribute, with the wildcard payload type (*), is
RECOMMENDED. However, when the supported video codecs have
different limitations, specific "a=imageattr" attributes MUST
be inserted for each payload type.</t>
<t>As an example, consider a system with a multiformat video
decoder, which is capable of decoding any resolution from
48x48 to 720p. In this case, the implementation would
generate this attribute:</t>
<t>a=imageattr:* recv [x=[48:1280],y=[48:720],q=1.0]</t>
<t>This declaration indicates that the receiver is capable of
decoding any image resolution from 48x48 up to 1280x720
pixels.</t>
</section>
<section anchor="sec.interpreting-imageattr"
title="Interpreting imageattr Attributes">
<t>
<xref target="RFC6236" /> defines "a=imageattr" to be an
advisory field. This means that it does not absolutely
constrain the video formats that the sender can use, but
gives an indication of the preferred values.</t>
<t>This specification prescribes more specific behavior. When
a MediaStreamTrack, which is producing video of a certain
resolution (the "track resolution"), is attached to an
RtpSender, which is encoding the track video at the same or
lower resolution(s) (the "encoder resolutions"), and a remote
description is applied that references the sender and
contains valid "a=imageattr recv" attributes, it MUST follow
the rules below to ensure the sender does not transmit a
resolution that would exceed the size criteria specified in
the attributes. These rules MUST be followed as long as the
attributes remain present in the remote description,
including cases in which the track changes its resolution, or
is replaced with a different track.</t>
<t>Depending on how the RtpSender is configured, it may be
producing a single encoding at a certain resolution, or, if
simulcast
<xref target="sec.simulcast" /> has been negotiated, multiple
encodings, each at their own specific resolution. In
addition, depending on the configuration, each encoding may
have the flexibility to reduce resolution when needed, or may
be locked to a specific output resolution.</t>
<t>For each encoding being produced by the RtpSender, the set
of "a=imageattr recv" attributes in the corresponding m=
section of the remote description is processed to determine
what should be transmitted. Only attributes that reference
the media format selected for the encoding are considered;
each such attribute is evaluated individually, starting with
the attribute with the highest "q=" value. If multiple
attributes have the same "q=" value, they are evaluated in
the order they appear in their containing m= section. Note
that while JSEP endpoints will include at most one
"a=imageattr recv" attribute per media format, JSEP endpoints
may receive session descriptions from non-JSEP endpoints with
m= sections that contain multiple such attributes.</t>
<t>For each "a=imageattr recv" attribute, the following rules
are applied. If this processing is successful, the encoding
is transmitted accordingly, and no further attributes are
considered for that encoding. Otherwise, the next attribute
is evaluated, in the aforementioned order. If none of the
supplied attributes can be processed successfully, the
encoding MUST NOT be transmitted, and an error SHOULD be
raised to the application.
<list style="symbols">
<t>The limits from the attribute are compared to the
encoder resolution. Only the specific limits mentioned
below are considered; any other values, such as picture
aspect ratio, MUST be ignored. When considering a
MediaStreamTrack that is producing rotated video, the
unrotated resolution MUST be used for the checks. This is
required regardless of whether the receiver supports
performing receive-side rotation (e.g., through CVO
<xref target="TS26.114" />), as it significantly simplifies
the matching logic.</t>
<t>If the attribute includes a "sar=" (sample aspect ratio)
value set to something other than "1.0", indicating the
receiver wants to receive non-square pixels, this cannot be
satisfied and the attribute MUST NOT be used.</t>
<t>If the encoder resolution exceeds the maximum size
permitted by the attribute, and the encoder is allowed to
adjust its resolution, the encoder SHOULD apply downscaling
in order to satisfy the limits. Downscaling MUST NOT change
the picture aspect ratio of the encoding, ignoring any
trivial differences due to rounding. For example, if the
encoder resolution is 1280x720, and the attribute specified
a maximum of 640x480, the expected output resolution would
be 640x360 (see the sketch following this list). If downscaling
cannot be applied, the attribute MUST NOT be used.</t>
<t>If the encoder resolution is less than the minimum size
permitted by the attribute, the attribute MUST NOT be used;
the encoder MUST NOT apply upscaling. JSEP implementations
SHOULD avoid this situation by allowing receipt of
arbitrarily small resolutions, perhaps via fallback to a
software decoder.</t>
<t>If the encoder resolution is within the maximum and
minimum sizes, no action is needed.</t>
</list></t>
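<t>The downscaling rule above can be expressed as a short
non-normative sketch that computes the transmitted resolution for
a single encoding from the attribute's x= and y= limits; the
helper name and parameter structure are purely illustrative.</t>
<figure>
<artwork>
<![CDATA[
// Compute the resolution to transmit for one encoding, preserving
// the picture aspect ratio (modulo rounding), given the limits
// from an "a=imageattr recv" attribute. Returns null if the
// attribute cannot be used.
function fitToImageattr(encoderW, encoderH, limits) {
  if (encoderW < limits.minX || encoderH < limits.minY) {
    return null;  // upscaling is not permitted
  }
  // Downscale only as much as needed to satisfy both limits.
  const scale = Math.min(1, limits.maxX / encoderW,
                            limits.maxY / encoderH);
  return { width: Math.round(encoderW * scale),
           height: Math.round(encoderH * scale) };
}

// Example from the text: 1280x720 with a 640x480 maximum
// yields 640x360.
fitToImageattr(1280, 720, { minX: 48, minY: 48,
                            maxX: 640, maxY: 480 });
]]>
</artwork>
</figure>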
</section>
</section>
<section title="Simulcast" anchor="sec.simulcast">
<t>JSEP supports simulcast transmission of a MediaStreamTrack,
where multiple encodings of the source media can be transmitted
within the context of a single m= section. The current JSEP API
is designed to allow applications to send simulcasted media but
only to receive a single encoding. This allows for multi-user
scenarios where each sending client sends multiple encodings to
a server, which then, for each receiving client, chooses the
appropriate encoding to forward.</t>
<t>Applications request support for simulcast by configuring
multiple encodings on an RtpSender. Upon generation of an offer
or answer, these encodings are indicated via SDP markings on
the corresponding m= section, as described below. Receivers
that understand simulcast and are willing to receive it will
also include SDP markings to indicate their support, and JSEP
endpoints will use these markings to determine whether
simulcast is permitted for a given RtpSender. If simulcast
support is not negotiated, the RtpSender will only use the
first configured encoding.</t>
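<t>A non-normative sketch of configuring multiple encodings is
shown below, using the addTransceiver and sendEncodings names
from <xref target="W3C.webrtc"></xref>; the specific RID values
and scale factors are illustrative.</t>
<figure>
<artwork>
<![CDATA[
const pc = new RTCPeerConnection();
const stream =
  await navigator.mediaDevices.getUserMedia({ video: true });
const [track] = stream.getVideoTracks();

// Three send encodings on one transceiver, each identified by a
// RID and scaled down from the source track resolution.
pc.addTransceiver(track, {
  direction: 'sendonly',
  sendEncodings: [
    { rid: 'hi' },
    { rid: 'mid', scaleResolutionDownBy: 2 },
    { rid: 'lo',  scaleResolutionDownBy: 4 }
  ]
});
]]>
</artwork>
</figure>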
<t>Note that the exact simulcast parameters are up to the
sending application. While the aforementioned SDP markings are
provided to ensure the remote side can receive and demux
multiple simulcast encodings, the specific resolutions and
bitrates to be used for each encoding are purely a send-side
decision in JSEP.</t>
<t>JSEP currently does not provide a mechanism to configure
receipt of simulcast. This means that if simulcast is offered
by the remote endpoint, the answer generated by a JSEP endpoint
will not indicate support for receipt of simulcast, and as such
the remote endpoint will only send a single encoding per m=
section.</t>
<t>In addition, JSEP does not provide a mechanism to handle an
incoming offer requesting simulcast from the JSEP endpoint.
This means that setting up simulcast in the case where the JSEP
endpoint receives the initial offer requires out-of-band
signaling or SDP inspection. However, in the case where the
JSEP endpoint sets up simulcast in its initial offer, any
established simulcast streams will continue to work upon
receipt of an incoming re-offer. Future versions of this
specification may add additional APIs to handle the incoming
initial offer scenario.</t>
<t>When using JSEP to transmit multiple encodings from an
RtpSender, the techniques from
<xref target="I-D.ietf-mmusic-sdp-simulcast" /> and
<xref target="I-D.ietf-mmusic-rid" /> are used. Specifically,
when multiple encodings have been configured for an RtpSender,
the m= section for the RtpSender will include an "a=simulcast"
attribute, as defined in
<xref target="I-D.ietf-mmusic-sdp-simulcast" />, Section 6.2,
with a "send" simulcast stream description that lists each
desired encoding, and no "recv" simulcast stream description.
The m= section will also include an "a=rid" attribute for each
encoding, as specified in
<xref target="I-D.ietf-mmusic-rid" />, Section 4; the use of
RID identifiers allows the individual encodings to be
disambiguated even though they are all part of the same m=
section.</t>
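<t>As a non-normative example, an m= section configured with
three encodings identified by RIDs "1", "2", and "3" would carry
markings of the following form:</t>
<figure>
<artwork>
<![CDATA[
a=rid:1 send
a=rid:2 send
a=rid:3 send
a=simulcast:send 1;2;3
]]>
</artwork>
</figure>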
</section>
<section title="Interactions With Forking"
anchor="sec.interactions-with-forking">
<t>Some call signaling systems allow various types of forking
where an SDP Offer may be provided to more than one device. For
example, SIP
<xref target="RFC3261"></xref> defines both a "Parallel Search"
and "Sequential Search". Although these are primarily signaling
level issues that are outside the scope of JSEP, they do have
some impact on the configuration of the media plane that is
relevant. When forking happens at the signaling layer, the
JavaScript application responsible for the signaling needs to
make the decisions about what media should be sent or received
at any point of time, as well as which remote endpoint it
should communicate with; JSEP is used to make sure the media
engine can make the RTP and media perform as required by the
application. The basic operations that the applications can
have the media engine do are:
<list style="symbols">
<t>Start exchanging media with a given remote peer, but keep
all the resources reserved in the offer.</t>
<t>Start exchanging media with a given remote peer, and free
any resources in the offer that are not being used.</t>
</list></t>
<section title="Sequential Forking"
anchor="sec.sequential-forking">
<t>Sequential forking involves a call being dispatched to
multiple remote callees, where each callee can accept the
call, but only one active session ever exists at a time; no
mixing of received media is performed.</t>
<t>JSEP handles sequential forking well, allowing the
application to easily control the policy for selecting the
desired remote endpoint. When an answer arrives from one of
the callees, the application can choose to apply it either as
a provisional answer, leaving open the possibility of using a
different answer in the future, or apply it as a final
answer, ending the setup flow.</t>
<t>In a "first-one-wins" situation, the first answer will be
applied as a final answer, and the application will reject
any subsequent answers. In SIP parlance, this would be ACK +
BYE.</t>
<t>In a "last-one-wins" situation, all answers would be
applied as provisional answers, and any previous call leg
will be terminated. At some point, the application will end
the setup process, perhaps with a timer; at this point, the
application could reapply the pending remote description as a
final answer.</t>
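<t>A non-normative sketch of such a "last-one-wins" policy is
shown below; the functions are assumed to be wired to the
application's own signaling code, which is not shown.</t>
<figure>
<artwork>
<![CDATA[
let latestAnswer = null;

// Apply each answer from a callee as a provisional answer,
// terminating any previously selected leg at the signaling level.
async function onAnswerFromCallee(pc, sdp) {
  latestAnswer = sdp;
  await pc.setRemoteDescription({ type: 'pranswer', sdp });
}

// When the setup process ends (e.g., on a timer), reapply the
// pending remote description as the final answer.
async function finishSetup(pc) {
  await pc.setRemoteDescription({ type: 'answer',
                                  sdp: latestAnswer });
}
]]>
</artwork>
</figure>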
</section>
<section title="Parallel Forking"
anchor="sec.parallel-forking">
<t>Parallel forking involves a call being dispatched to
multiple remote callees, where each callee can accept the
call, and multiple simultaneous active signaling sessions can
be established as a result. If multiple callees send media at
the same time, the possibilities for handling this are
described in
<xref target="RFC3960"></xref>, Section 3.1. Most SIP devices
today only support exchanging media with a single device at a
time, and do not try to mix multiple early media audio
sources, as that could result in a confusing situation. For
example, consider having a European ringback tone mixed
together with the North American ringback tone - the
resulting sound would not be like either tone, and would
confuse the user. If the signaling application wishes to only
exchange media with one of the remote endpoints at a time,
then from a media engine point of view, this is exactly like