TDMA Model
The TDMA radio model implements a generic TDMA scheme that supports TDMA schedule distribution and updates in real time using events. The TDMA radio model supports the following features:
- TDMA schedule definition including slot size, slot overhead and frame size, as well as per slot datarate, frequency, power, service class and optional destination.
- Priority queues which map user traffic based on DSCP. Outbound messages are dequeued FIFO based on slot service class queue mapping and, if necessary, a highest to lowest priority queue search.
- Fragmentation and reassembly of large outbound messages based on per slot datarates.
- Aggregation of smaller outbound messages into larger over-the-air messages.
- Flow control.
- User defined Packet Completion Rate curves as a function of SINR.
The TDMA model classifies user traffic into four categories which map to four queues. Traffic is assigned to a queue based on downstream packet priority. The queue.depth configuration parameter controls the queue depth for all queues. All queues will overflow, dropping the oldest packet when an enqueue operation occurs on a queue at max queue depth. The packet selected for discard will be the oldest packet where no portion of the packet has been transmitted due to fragmentation. If all packets in the queue have had a portion transmitted, then the oldest packet is discarded regardless of fragmentation state.
The Virtual Transport and Raw Transport use DSCP as downstream packet priority.
Queue ID | DSCP (6 MSBs of IP TOS Field) | Queue Priority |
---|---|---|
0 | 0 - 7, 24 - 31 | 0 (lowest) |
1 | 8 - 23 | 1 |
2 | 32 - 47 | 2 |
3 | 48 - 63 | 3 |
4 | Reserved Control | 4 (highest) |
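The mapping in the table above can be sketched as a simple function. This is an illustration only; the function name is not part of the EMANE API:

```python
def dscp_to_queue(dscp: int) -> int:
    """Map a DSCP value (the 6 MSBs of the IP TOS field) to a TDMA queue ID,
    per the table above. Queue 4 is reserved for scheduler control traffic
    and is not reachable from user DSCP values."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be in [0, 63]")
    if 8 <= dscp <= 23:
        return 1
    if 32 <= dscp <= 47:
        return 2
    if 48 <= dscp <= 63:
        return 3
    return 0  # 0-7 and 24-31 map to the lowest-priority queue
```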
All transmit slots have a class of service assigned. This class represents one of the four traffic queues or a fifth reserved scheduler control queue. If the queue.strictdequeueenable configuration parameter is enabled, only the queue matching the transmit slot class is used to dequeue traffic. If queue.strictdequeueenable is disabled, an attempt is made to dequeue traffic from the queue matching the slot class, followed by all other queues in highest to lowest priority order.
Queues can be dequeued in one of two ways depending on whether the current transmit slot has a destination NEM assigned. If a destination is assigned, only packets matching the destination are dequeued (FIFO). If no destination match is found, no transmission occurs. If a destination is not assigned, packets are dequeued (FIFO) regardless of their destination.
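The destination-aware FIFO dequeue behavior can be sketched as follows. This is a minimal illustration; the function and the (destination, payload) tuple layout are assumptions, not the model's internals:

```python
def dequeue_for_slot(queue, destination=None):
    """FIFO dequeue honoring an optional per-slot destination NEM.

    `queue` is a list of (destination_nem, payload) tuples in arrival order.
    Returns the dequeued entry, or None when nothing eligible is queued.
    """
    if destination is None:
        # No slot destination: dequeue the oldest packet regardless of
        # its destination.
        return queue.pop(0) if queue else None
    for i, (dst, _payload) in enumerate(queue):
        if dst == destination:
            return queue.pop(i)  # oldest packet matching the destination
    return None  # no destination match: the slot goes unused
```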
Each slot has the same size and overhead, which is assigned using a TDMA Schedule event. Transmit slots may be assigned different datarates, making it possible for slots to have different maximum byte limits. The TDMA model supports both fragmentation and aggregation and allows either to be enabled or disabled independently.
If fragmentation is enabled with the queue.fragmentationenable configuration parameter, a packet too big to fit in a given transmit slot is fragmented into two or more message components. The actual number of message components required cannot be determined beforehand since each slot may vary in allowable size and destination assignment. If fragmentation is disabled, packets that are too large to fit in a transmit slot are discarded until one is found that fits. If none are found in the queue matching the slot class and queue.strictdequeueenable is disabled, other queues will be searched but no packets will be discarded due to length. In this case, as soon as a packet is found to be too large for a given slot, the search ends in that queue and continues in the next lowest priority queue.
If aggregation is enabled with the queue.aggregationenable configuration parameter, one or more message components for the same or different destinations (unicast or broadcast) can be transmitted in a single slot. If all the message components contained in a single slot transmission are for the same NEM destination, that destination is used as the downstream (outbound) packet destination. Otherwise, the NEM broadcast address is used as the downstream packet destination. The latter case does not imply the TDMA model treats these messages as broadcast; the model handles each message component contained in a single transmission as a separate message. However, when examining physical layer statistics, the distinction between unicast and broadcast transmissions is blurred by this behavior.
If fragmentation is enabled, one or more message components in an aggregate transmission may be a fragment. The queue.aggregationslotthreshold configuration parameter controls the percentage of a slot that must be filled in order to satisfy the aggregation function and prevent further searching and/or fragmenting of packets to fill the slot.
If fragmentation and/or aggregation are disabled, the TDMA model will still process upstream (inbound) aggregate messages and reassemble packet fragments. The TDMA model will attempt fragment reassembly on one or more packets from one or more sources at the same time. Two configuration parameters control when individual fragment reassembly efforts should be abandoned: fragmentcheckthreshold and fragmenttimeoutthreshold. The fragmentcheckthreshold configuration parameter controls how often the model checks to see if any active reassembly efforts should be abandoned. The fragmenttimeoutthreshold configuration parameter is the amount of time that must pass since receiving a fragment for a specific reassembly effort in order for that effort to be considered timed out and subsequently abandoned.
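The abandonment rule driven by these two thresholds can be sketched as follows. This is an illustration under assumed names; the constants shown use the documented default values of 2 and 5 seconds:

```python
FRAGMENT_CHECK_THRESHOLD = 2.0    # seconds between abandonment checks
FRAGMENT_TIMEOUT_THRESHOLD = 5.0  # seconds of silence before abandoning

def abandon_stale_reassemblies(efforts, now):
    """Drop reassembly efforts whose most recent fragment arrived at least
    FRAGMENT_TIMEOUT_THRESHOLD seconds ago. `efforts` maps
    (source_nem, packet_id) -> time of last received fragment.
    Returns the keys that were abandoned. This check would be invoked every
    FRAGMENT_CHECK_THRESHOLD seconds."""
    stale = [key for key, last_rx in efforts.items()
             if now - last_rx >= FRAGMENT_TIMEOUT_THRESHOLD]
    for key in stale:
        del efforts[key]
    return stale
```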
The radio model does not handle out of order fragments. As soon as a non-consecutive fragment is received the reassembly effort for that packet is abandoned.
Aggregation and fragmentation make it difficult to convey packet based statistic information. The TDMA model addresses this by keeping track of byte statistics where message components are used and packet statistics where queue information is conveyed. This is different from other radio models.
Proper time synchronization is required for the TDMA model. The required tightness of the time synchronization is a function of the slot size configured using TDMA Schedule events. System configuration, number of emulated nodes, traffic scenario and general resource availability are all factors in determining achievable slot sizes.
The TDMA model uses three units to describe time allocation: slot, frame and multiframe. A slot is the smallest unit of time and is defined in microseconds. A slot supports the transmission of a single burst of a maximum length accounting for payload and overhead (preamble, headers, guard times, propagation delay, etc.). A frame contains a number of slots. A multiframe contains a number of frames.
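These three units can be illustrated by locating an absolute timestamp within the structure. A sketch under assumed names, not the model's implementation:

```python
def time_to_indices(t_usec, slot_usec, slots_per_frame, frames_per_multiframe):
    """Locate an absolute time (microseconds since the epoch) within the
    TDMA structure, returning (multiframe, frame, slot) indices."""
    abs_slot = t_usec // slot_usec                 # slots elapsed since epoch
    slot = abs_slot % slots_per_frame              # slot within its frame
    abs_frame = abs_slot // slots_per_frame        # frames elapsed since epoch
    frame = abs_frame % frames_per_multiframe      # frame within its multiframe
    multiframe = abs_frame // frames_per_multiframe
    return multiframe, frame, slot
```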
The TDMA model receives a schedule via TDMA Schedule events. There are two types of TDMA schedules: full and update. A full TDMA schedule defines the TDMA structure in use along with assigning per NEM transmit, receive and idle slots. The TDMA structure defines:
- Slot size in microseconds
- Slot overhead in microseconds
- Number of slots per frame
- Number of frames per multiframe
- Transceiver bandwidth in Hz
Slot overhead should be set to account for the various waveform overhead parameters such as synchronization, waveform headers, turnaround time, propagation delay, etc. At a minimum, the slot overhead should be set to the maximum propagation delay you expect the signal to experience when using location events within your emulation. For example, if you expect to support ranges out to 10km with the TDMA model, the overhead in the schedule should be set to at least 34 microseconds. This ensures the maximum amount of data packed in each slot plus the maximum propagation delay will always be less than the slot size. Failure to do so can result in frames being discarded by the receiver when the end of reception crosses the slot boundary.
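The 34 microsecond figure is the one-way propagation delay over 10km at the speed of light, rounded up. A minimal sketch of this sizing rule:

```python
import math

def min_slot_overhead_usec(max_range_m):
    """Minimum slot overhead in microseconds needed to cover the one-way
    propagation delay over max_range_m meters (free-space, speed of light).
    Additional waveform overhead would be added on top of this."""
    c = 299_792_458.0  # speed of light in m/s
    return math.ceil(max_range_m / c * 1e6)
```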
An update TDMA schedule changes slot assignment information for an NEM but does not change the TDMA structure.
TDMA slots can be assigned as transmit, receive and idle. A transmit slot is assigned a frequency (Hz), power (dBm), class ([0,4]), datarate (bps) and an optional destination NEM. A receive slot is assigned a frequency (Hz). An idle slot has no assignments and indicates an NEM is neither transmitting nor receiving.
When a TDMA model instance receives its schedule, it will take effect at the start of the next multiframe boundary, which is referenced from the epoch: 00:00:00 UTC January 1, 1970.
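The next epoch-referenced multiframe boundary can be computed directly from the structure. An illustrative sketch only:

```python
def next_multiframe_boundary_usec(now_usec, slot_usec, slots_per_frame,
                                  frames_per_multiframe):
    """Time in microseconds since 00:00:00 UTC January 1, 1970 of the next
    multiframe boundary, where a newly received schedule would take effect."""
    multiframe_usec = slot_usec * slots_per_frame * frames_per_multiframe
    return ((now_usec // multiframe_usec) + 1) * multiframe_usec
```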
A TDMA Schedule is defined using XML. The XML schema documents the details involved in authoring a schedule. Some important items to consider:
- A schedule that contains a <structure> element is a full schedule and one without is an update schedule.
- A multiframe can define a default frequency, power, datarate and class. A frame can define the same defaults, overriding all or some of those defined in the multiframe. A transmit slot can override all or some frame or multiframe defaults, plus add an optional destination NEM. A receive slot can only override frequency.
- For a full schedule, any frame not defined will be idle for all NEMs referenced in the schedule.
- For a full schedule, any slot not defined in a frame will be a receive slot for all NEMs referenced in the schedule. The frame default frequency will be used, if specified. Otherwise, the multiframe default frequency will be used.
- For an update schedule, only those slots assigned for specified NEMs are modified. No receive slot or idle frame auto-fill occurs.
- For a full schedule, the set of frequencies used by an NEM are sent to the physical layer to configure the frequency of interest list monitored by the spectrum monitor.
- For an update schedule, any additional frequencies used by an NEM are added to the FOI frequency set cached since the last full schedule and sent to the physical layer.
- Reception of a full schedule resets all TDMA schedule information.
- TDMA model instances can reject schedules that contain errors. Acceptance and rejection of a schedule is conveyed using the following statistics:
- scheduler.scheduleAcceptFull
- scheduler.scheduleAcceptUpdate
- scheduler.scheduleRejectFrameIndexRange
- scheduler.scheduleRejectSlotIndexRange
- scheduler.scheduleRejectUpdateBeforeFull
- scheduler.scheduleRejectOther
- Only NEMs referenced within a schedule XML file receive events when using emaneevent-tdmaschedule. As such, all NEMs within the scenario utilizing the TDMA model must be referenced within a full schedule.
- Reception of a full or update schedule containing one or more errors will cause a TDMA model instance to flush all schedule information, resetting to its initial state of having no schedule.
Sample XML Schedule:
<emane-tdma-schedule>
<structure frames='4' slots='10' slotoverhead='0' slotduration='1000' bandwidth='1M'/>
<multiframe frequency='2.4G' power='0' class='0' datarate='1M'>
<frame index='0'>
<slot index='0,5' nodes='1'>
<tx/>
</slot>
<slot index='1,6' nodes='2'>
<tx/>
</slot>
<slot index='2,7' nodes='3'>
<tx/>
</slot>
<slot index='3,8' nodes='4'>
<tx power='30'/>
</slot>
<slot index='4,9' nodes='5'>
<tx/>
</slot>
</frame>
<frame index='1' datarate='11M'>
<slot index='0:4' nodes='1'>
<tx/>
</slot>
<slot index='5' nodes='2'>
<tx/>
</slot>
<slot index='6' nodes='3'>
<tx/>
</slot>
<slot index='7' nodes='4'>
<tx/>
</slot>
<slot index='8' nodes='5'>
<tx destination='2'/>
</slot>
</frame>
<frame index='2'>
<slot index='0:9' nodes='1'>
<tx frequency='2G' class='3'/>
</slot>
<slot index='0:9' nodes='2:10'>
<rx frequency='2G'/>
</slot>
</frame>
</multiframe>
</emane-tdma-schedule>
A TDMA schedule is sent to a TDMA model instance using a TDMA Schedule event. The emaneevent-tdmaschedule script can be used to process a TDMA Schedule XML file. A schedule event is sent to each NEM referenced in the schedule XML. Each event contains only schedule information for the recipient NEM.
[me@host ~]$ emaneevent-tdmaschedule schedule-sample.xml -i lo
TDMA model instances contain statistics to indicate the number of full and update schedules accepted and rejected.
[me@host ~]$ emanesh localhost get stat 1 mac | grep scheduler
nem 1 mac scheduler.scheduleAcceptFull = 4
nem 1 mac scheduler.scheduleAcceptUpdate = 0
nem 1 mac scheduler.scheduleRejectFrameIndexRange = 0
nem 1 mac scheduler.scheduleRejectSlotIndexRange = 0
nem 1 mac scheduler.scheduleRejectUpdateBeforeFull = 0
TDMA model instances maintain a schedule and structure table that indicates the current schedule and slot structure.
[me@host ~]$ emanesh localhost get table 1 mac scheduler.ScheduleInfoTable scheduler.StructureInfoTable
nem 1 mac scheduler.ScheduleInfoTable
| Index | Frame | Slot | Type | Frequency | Data Rate | Power | Class | Destination |
| 0 | 0 | 0 | TX | 2400000000 | 1000000 | 0.0 | 0 | 0 |
| 1 | 0 | 1 | RX | 2400000000 | | | | |
| 2 | 0 | 2 | RX | 2400000000 | | | | |
| 3 | 0 | 3 | RX | 2400000000 | | | | |
| 4 | 0 | 4 | RX | 2400000000 | | | | |
| 5 | 0 | 5 | TX | 2400000000 | 1000000 | 0.0 | 0 | 0 |
| 6 | 0 | 6 | RX | 2400000000 | | | | |
| 7 | 0 | 7 | RX | 2400000000 | | | | |
| 8 | 0 | 8 | RX | 2400000000 | | | | |
| 9 | 0 | 9 | RX | 2400000000 | | | | |
| 10 | 1 | 0 | TX | 2400000000 | 11000000 | 0.0 | 0 | 0 |
| 11 | 1 | 1 | TX | 2400000000 | 11000000 | 0.0 | 0 | 0 |
| 12 | 1 | 2 | TX | 2400000000 | 11000000 | 0.0 | 0 | 0 |
| 13 | 1 | 3 | TX | 2400000000 | 11000000 | 0.0 | 0 | 0 |
| 14 | 1 | 4 | TX | 2400000000 | 11000000 | 0.0 | 0 | 0 |
| 15 | 1 | 5 | RX | 2400000000 | | | | |
| 16 | 1 | 6 | RX | 2400000000 | | | | |
| 17 | 1 | 7 | RX | 2400000000 | | | | |
| 18 | 1 | 8 | RX | 2400000000 | | | | |
| 19 | 1 | 9 | RX | 2400000000 | | | | |
| 20 | 2 | 0 | TX | 2000000000 | 1000000 | 0.0 | 3 | 0 |
| 21 | 2 | 1 | TX | 2000000000 | 1000000 | 0.0 | 3 | 0 |
| 22 | 2 | 2 | TX | 2000000000 | 1000000 | 0.0 | 3 | 0 |
| 23 | 2 | 3 | TX | 2000000000 | 1000000 | 0.0 | 3 | 0 |
| 24 | 2 | 4 | TX | 2000000000 | 1000000 | 0.0 | 3 | 0 |
| 25 | 2 | 5 | TX | 2000000000 | 1000000 | 0.0 | 3 | 0 |
| 26 | 2 | 6 | TX | 2000000000 | 1000000 | 0.0 | 3 | 0 |
| 27 | 2 | 7 | TX | 2000000000 | 1000000 | 0.0 | 3 | 0 |
| 28 | 2 | 8 | TX | 2000000000 | 1000000 | 0.0 | 3 | 0 |
| 29 | 2 | 9 | TX | 2000000000 | 1000000 | 0.0 | 3 | 0 |
| 30 | 3 | 0 | IDLE | | | | | |
| 31 | 3 | 1 | IDLE | | | | | |
| 32 | 3 | 2 | IDLE | | | | | |
| 33 | 3 | 3 | IDLE | | | | | |
| 34 | 3 | 4 | IDLE | | | | | |
| 35 | 3 | 5 | IDLE | | | | | |
| 36 | 3 | 6 | IDLE | | | | | |
| 37 | 3 | 7 | IDLE | | | | | |
| 38 | 3 | 8 | IDLE | | | | | |
| 39 | 3 | 9 | IDLE | | | | | |
nem 1 mac scheduler.StructureInfoTable
| Name | Value |
| bandwidth | 1000000 |
| frames | 4 |
| slotduration | 1000 |
| slotoverhead | 0 |
| slots | 10 |
The TDMA Model Packet Completion Rate is specified as curves defined via XML. The curve definitions are composed of a series of SINR values along with their corresponding probability of reception for a given datarate specified in bps. A curve definition must contain a minimum of two points with one SINR representing POR = 0 and one SINR representing POR = 100. Linear interpolation is performed when an exact SINR match is not found. If a POR is requested for a datarate whose curve was not defined, the first curve in the file is used regardless of its associated datarate.
Specifying a packet size (<tdmabasemodel-pcr> attribute packetsize) in the curve file will adjust the POR based on received packet size. Specifying a packetsize of 0 disregards received packet size when computing the POR.
The POR is obtained using the following calculation when a non-zero packetsize is specified:
POR = POR0 ^ (S1/S0)
where:
- POR0 is the POR value determined from the PCR curve for the given SINR value
- S0 is the packet size specified in the curve file (packetsize)
- S1 is the received packet size
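The interpolation and packet-size adjustment can be sketched as follows. A minimal illustration, assuming POR expressed as a fraction in [0, 1]; the real computation is internal to the radio model:

```python
def adjusted_por(curve, sinr_db, s1, s0):
    """POR with packet-size adjustment POR = POR0 ^ (S1/S0).

    `curve` is a list of (sinr, por) points sorted by SINR, with por in
    [0, 1]. Linear interpolation is used between points; SINR values
    outside the curve clamp to the endpoint PORs.
    """
    if sinr_db <= curve[0][0]:
        por0 = curve[0][1]
    elif sinr_db >= curve[-1][0]:
        por0 = curve[-1][1]
    else:
        for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
            if x0 <= sinr_db <= x1:
                por0 = y0 + (y1 - y0) * (sinr_db - x0) / (x1 - x0)
                break
    if s0 == 0:
        return por0  # packetsize 0: received packet size is disregarded
    return por0 ** (s1 / s0)
```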
The emulator provides the following default PCR curve definition based on 802.11 modulation and data rate combinations. Default curves are based on theoretical equations for determining Bit Error Rate (BER) in an Additive White Gaussian Noise (AWGN) channel.
<tdmabasemodel-pcr packetsize="128">
<datarate bps="1M">
<entry sinr="-9.0" por="0.0"/>
<entry sinr="-8.0" por="1.4"/>
<entry sinr="-7.0" por="21.0"/>
<entry sinr="-6.0" por="63.5"/>
<entry sinr="-5.0" por="90.7"/>
<entry sinr="-4.0" por="98.6"/>
<entry sinr="-3.0" por="99.9"/>
<entry sinr="-2.0" por="100.0"/>
</datarate>
<datarate bps="2M">
<entry sinr="-6.0" por="0"/>
<entry sinr="-5.0" por="1.4"/>
<entry sinr="-4.0" por="20.6"/>
<entry sinr="-3.0" por="63.1"/>
<entry sinr="-2.0" por="90.5"/>
<entry sinr="-1.0" por="98.5"/>
<entry sinr="0.0" por="99.9"/>
<entry sinr="1.0" por="100.0"/>
</datarate>
<datarate bps="5.5M">
<entry sinr="-2.0" por="0.0"/>
<entry sinr="-1.0" por="0.2"/>
<entry sinr="0.0" por="9.1"/>
<entry sinr="1.0" por="46.2"/>
<entry sinr="2.0" por="82.8"/>
<entry sinr="3.0" por="96.7"/>
<entry sinr="4.0" por="99.6"/>
<entry sinr="5.0" por="100.0"/>
</datarate>
<datarate bps="11M">
<entry sinr="1.0" por="0.0"/>
<entry sinr="2.0" por="0.2"/>
<entry sinr="3.0" por="8.9"/>
<entry sinr="4.0" por="45.8"/>
<entry sinr="5.0" por="82.5"/>
<entry sinr="6.0" por="96.7"/>
<entry sinr="7.0" por="99.6"/>
<entry sinr="8.0" por="100.0"/>
</datarate>
<datarate bps="6M">
<entry sinr="-2.0" por="0.0"/>
<entry sinr="-1.0" por="5.5"/>
<entry sinr="0.0" por="39.8"/>
<entry sinr="1.0" por="79.0"/>
<entry sinr="2.0" por="96.0"/>
<entry sinr="3.0" por="99.5"/>
<entry sinr="4.0" por="100.0"/>
</datarate>
<datarate bps="9M">
<entry sinr="-1.0" por="0.0"/>
<entry sinr="0.0" por="0.3"/>
<entry sinr="1.0" por="10.5"/>
<entry sinr="2.0" por="50.3"/>
<entry sinr="3.0" por="84.9"/>
<entry sinr="4.0" por="97.5"/>
<entry sinr="5.0" por="99.7"/>
<entry sinr="6.0" por="100.0"/>
</datarate>
<datarate bps="12M">
<entry sinr="3.0" por="0.0"/>
<entry sinr="4.0" por="14.3"/>
<entry sinr="5.0" por="55.2"/>
<entry sinr="6.0" por="87.5"/>
<entry sinr="7.0" por="97.8"/>
<entry sinr="8.0" por="99.8"/>
<entry sinr="9.0" por="100.0"/>
</datarate>
<datarate bps="18M">
<entry sinr="4.0" por="0.0"/>
<entry sinr="5.0" por="1.7"/>
<entry sinr="6.0" por="21.5"/>
<entry sinr="7.0" por="65.0"/>
<entry sinr="8.0" por="91.2"/>
<entry sinr="9.0" por="98.7"/>
<entry sinr="10.0" por="99.9"/>
<entry sinr="11.0" por="100.0"/>
</datarate>
<datarate bps="24M">
<entry sinr="9.0" por="0.0"/>
<entry sinr="10.0" por="2.2"/>
<entry sinr="11.0" por="23.8"/>
<entry sinr="12.0" por="64.4"/>
<entry sinr="13.0" por="90.4"/>
<entry sinr="14.0" por="98.4"/>
<entry sinr="15.0" por="99.8"/>
<entry sinr="16.0" por="100.0"/>
</datarate>
<datarate bps="36M">
<entry sinr="10.0" por="0.0"/>
<entry sinr="11.0" por="0.1"/>
<entry sinr="12.0" por="4.6"/>
<entry sinr="13.0" por="32.4"/>
<entry sinr="14.0" por="72.8"/>
<entry sinr="15.0" por="93.4"/>
<entry sinr="16.0" por="99.0"/>
<entry sinr="17.0" por="99.9"/>
<entry sinr="18.0" por="100.0"/>
</datarate>
<datarate bps="48M">
<entry sinr="16.0" por="0.0"/>
<entry sinr="17.0" por="1.3"/>
<entry sinr="18.0" por="15.8"/>
<entry sinr="19.0" por="53.5"/>
<entry sinr="20.0" por="84.9"/>
<entry sinr="21.0" por="96.8"/>
<entry sinr="22.0" por="99.6"/>
<entry sinr="23.0" por="100.0"/>
</datarate>
<datarate bps="54M">
<entry sinr="17.0" por="0.0"/>
<entry sinr="18.0" por="0.2"/>
<entry sinr="19.0" por="5.7"/>
<entry sinr="20.0" por="32.4"/>
<entry sinr="21.0" por="71.3"/>
<entry sinr="22.0" por="92.4"/>
<entry sinr="23.0" por="99.9"/>
<entry sinr="24.0" por="100.0"/>
</datarate>
</tdmabasemodel-pcr>
The above definition produces the following PCR curves corresponding to IEEE 802.11b (DSS):
The above definition produces the following PCR curves corresponding to IEEE 802.11ag (OFDM):
The following configuration parameters are available to tailor layer functionality:
- enablepromiscuousmode
- flowcontrolenable
- flowcontroltokens
- fragmentcheckthreshold
- fragmenttimeoutthreshold
- neighbormetricdeletetime
- neighbormetricupdateinterval
- pcrcurveuri
- queue.aggregationenable
- queue.aggregationslotthreshold
- queue.depth
- queue.fragmentationenable
- queue.strictdequeueenable
enablepromiscuousmode
Defines whether promiscuous mode is enabled. If promiscuous mode is enabled, all received packets (intended for the given node or not) that pass the probability of reception check are sent upstream to the transport.
Type: bool
Running-State Modifiable: yes
Occurrence Range: [1,1]
Value Range: [no,yes]
Default Value(s): no
flowcontrolenable
Defines whether flow control is enabled. Flow control only works with the virtual transport, and the setting must match the setting within the virtual transport configuration.
Type: bool
Running-State Modifiable: no
Occurrence Range: [1,1]
Value Range: [no,yes]
Default Value(s): no
flowcontroltokens
Defines the maximum number of flow control tokens (packet transmission units) that can be processed from the virtual transport without being refreshed. The number of available tokens at any given time is coordinated with the virtual transport, and when the token count reaches zero, no further packets are transmitted, causing application socket queues to back up.
Type: uint16
Running-State Modifiable: no
Occurrence Range: [1,1]
Value Range: [0,65535]
Default Value(s): 10
fragmentcheckthreshold
Defines the rate in seconds at which a check is performed to see if any packet fragment reassembly efforts should be abandoned.
Type: uint16
Running-State Modifiable: no
Occurrence Range: [1,1]
Value Range: [0,65535]
Default Value(s): 2
fragmenttimeoutthreshold
Defines the threshold in seconds to wait for another packet fragment for an existing reassembly effort before abandoning the effort.
Type: uint16
Running-State Modifiable: no
Occurrence Range: [1,1]
Value Range: [0,65535]
Default Value(s): 5
neighbormetricdeletetime
Defines the time in seconds of no RF receptions from a given neighbor before it is removed from the neighbor table.
Type: float
Running-State Modifiable: yes
Occurrence Range: [1,1]
Value Range: [1.000000,3660.000000]
Default Value(s): 60.000000
neighbormetricupdateinterval
Defines the neighbor table update interval in seconds.
Type: float
Running-State Modifiable: no
Occurrence Range: [1,1]
Value Range: [0.100000,60.000000]
Default Value(s): 1.000000
pcrcurveuri
Defines the URI of the Packet Completion Rate (PCR) curve file. The PCR curve file contains probability of reception curves as a function of Signal to Interference plus Noise Ratio (SINR).
Type: string
Running-State Modifiable: no
Value Required: yes
Occurrence Range: [1,1]
queue.aggregationenable
Defines whether packet aggregation is enabled for transmission. When enabled, multiple packets can be sent in the same transmission when there is additional room within the slot.
Type: bool
Running-State Modifiable: no
Occurrence Range: [1,1]
Value Range: [no,yes]
Default Value(s): yes
queue.aggregationslotthreshold
Defines the percentage of a slot that must be filled in order to conclude aggregation when queue.aggregationenable is enabled.
Type: double
Running-State Modifiable: no
Occurrence Range: [1,1]
Value Range: [0.000000,100.000000]
Default Value(s): 90.000000
queue.depth
Defines the size in packets of the per service class downstream packet queues. Each of the 5 queues (control + 4 service classes) will be queue.depth packets deep.
Type: uint16
Running-State Modifiable: no
Occurrence Range: [1,1]
Value Range: [0,65535]
Default Value(s): 256
queue.fragmentationenable
Defines whether packet fragmentation is enabled. When enabled, a single packet too large to fit in a transmit slot will be fragmented into multiple message components sent over multiple transmissions. When disabled, a packet matching the traffic class for the transmit slot as defined in the TDMA schedule that is too large to fit will be discarded.
Type: bool
Running-State Modifiable: no
Occurrence Range: [1,1]
Value Range: [no,yes]
Default Value(s): yes
queue.strictdequeueenable
Defines whether packets will be dequeued only from the queue matching the transmit slot class. When disabled, other queues are searched, highest priority first, when there are no eligible packets in the matching queue.
Type: bool
Running-State Modifiable: no
Occurrence Range: [1,1]
Value Range: [no,yes]
Default Value(s): no
TDMA Model configuration is specified using two files:
- NEM definition file
- MAC definition file
The NEM definition file groups the mac definition file along with emulator physical layer configuration and the transport definition file.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE nem SYSTEM "file:///usr/share/emane/dtd/nem.dtd">
<nem>
<transport definition="transvirtual.xml"/>
<mac definition="tdmaradiomodel.xml"/>
<phy>
<param name="fixedantennagain" value="0.0"/>
<param name="fixedantennagainenable" value="on"/>
<param name="bandwidth" value="1M"/>
<param name="noisemode" value="all"/>
<param name="propagationmodel" value="precomputed"/>
<param name="systemnoisefigure" value="4.0"/>
<param name="subid" value="7"/>
</phy>
</nem>
The MAC definition file specifies the model DLL the emulator will load and the desired model configuration.
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE mac SYSTEM 'file:///usr/share/emane/dtd/mac.dtd'>
<mac library='tdmaeventschedulerradiomodel'>
<param name='fragmentcheckthreshold' value='2'/>
<param name='fragmenttimeoutthreshold' value='5'/>
<param name='neighbormetricdeletetime' value='60.0'/>
<param name='neighbormetricupdateinterval' value='1.0'/>
<param name='pcrcurveuri' value='tdmabasemodelpcr.xml'/>
<param name='queue.aggregationenable' value='on'/>
<param name='queue.aggregationslotthreshold' value='90.0'/>
<param name='queue.depth' value='255'/>
<param name='queue.fragmentationenable' value='on'/>
<param name='queue.strictdequeueenable' value='off'/>
</mac>
The below statistics can be accessed using emanesh.
Name | Type | Clearable | Description |
---|---|---|---|
avgProcessAPIQueueDepth | double | yes | Average API queue depth for a processUpstreamPacket, processUpstreamControl, processDownstreamPacket, processDownstreamControl, processEvent and processTimedEvent. |
avgProcessAPIQueueWait | double | yes | Average API queue wait for a processUpstreamPacket, processUpstreamControl, processDownstreamPacket, processDownstreamControl, processEvent and processTimedEvent in microseconds. |
avgTimedEventLatency | double | yes | |
avgTimedEventLatencyRatio | double | yes | Average ratio of the delta between the scheduled timer expiration and the actual firing over the requested duration. An average ratio approaching 1 indicates that timer latencies are large in comparison to the requested durations. |
processedConfiguration | uint64 | yes | |
processedDownstreamControl | uint64 | yes | |
processedDownstreamPackets | uint64 | yes | |
processedEvents | uint64 | yes | |
processedTimedEvents | uint64 | yes | |
processedUpstreamControl | uint64 | yes | |
processedUpstreamPackets | uint64 | yes | |
scheduler.scheduleAcceptFull | uint64 | yes | Number of full schedules accepted. |
scheduler.scheduleAcceptUpdate | uint64 | yes | Number of update schedules accepted. |
scheduler.scheduleRejectFrameIndexRange | uint64 | yes | Number of schedules rejected due to out of range frame index. |
scheduler.scheduleRejectOther | uint64 | yes | Number of schedules rejected due to other reasons. |
scheduler.scheduleRejectSlotIndexRange | uint64 | yes | Number of schedules rejected due to out of range slot index. |
scheduler.scheduleRejectUpdateBeforeFull | uint64 | yes | Number of schedules rejected due to an update before full schedule. |
The below statistic tables can be accessed using emanesh.
Name | Clearable | Description |
---|---|---|
BroadcastByteAcceptTable0 | yes | Broadcast bytes accepted |
BroadcastByteAcceptTable1 | yes | Broadcast bytes accepted |
BroadcastByteAcceptTable2 | yes | Broadcast bytes accepted |
BroadcastByteAcceptTable3 | yes | Broadcast bytes accepted |
BroadcastByteAcceptTable4 | yes | Broadcast bytes accepted |
BroadcastByteDropTable0 | yes | Broadcast bytes dropped |
BroadcastByteDropTable1 | yes | Broadcast bytes dropped |
BroadcastByteDropTable2 | yes | Broadcast bytes dropped |
BroadcastByteDropTable3 | yes | Broadcast bytes dropped |
BroadcastByteDropTable4 | yes | Broadcast bytes dropped |
EventReceptionTable | yes | Received event counts |
NeighborMetricTable | no | Neighbor Metric Table |
NeighborStatusTable | no | Neighbor Status Table |
QueueFragmentHistogram | no | Shows a per queue histogram of the number of message components required to transmit packets. |
QueueStatusTable | no | Shows for each queue the number of packets enqueued, dequeued, dropped due to queue overflow (enqueue), dropped due to too big (dequeue) and which slot classes fragments are being transmitted. |
RxSlotStatusTable | no | Shows the number of Rx slot receptions that were valid or missed based on slot timing deadlines |
TxSlotStatusTable | no | Shows the number of Tx slot opportunities that were valid or missed based on slot timing deadlines |
UnicastByteAcceptTable0 | yes | Unicast bytes accepted |
UnicastByteAcceptTable1 | yes | Unicast bytes accepted |
UnicastByteAcceptTable2 | yes | Unicast bytes accepted |
UnicastByteAcceptTable3 | yes | Unicast bytes accepted |
UnicastByteAcceptTable4 | yes | Unicast bytes accepted |
UnicastByteDropTable0 | yes | Unicast bytes dropped |
UnicastByteDropTable1 | yes | Unicast bytes dropped |
UnicastByteDropTable2 | yes | Unicast bytes dropped |
UnicastByteDropTable3 | yes | Unicast bytes dropped |
UnicastByteDropTable4 | yes | Unicast bytes dropped |
scheduler.ScheduleInfoTable | no | Shows the current TDMA schedule. |
scheduler.StructureInfoTable | no | Shows the current TDMA structure: slot size, slot overhead, number of slots per frame, number of frames per multiframe and transceiver bandwidth. |