<!DOCTYPE html>
<html class="writer-html5" lang="en" >
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Post-training quantization — N2D2 documentation</title>
<link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<!--[if lt IE 9]>
<script src="_static/js/html5shiv.min.js"></script>
<![endif]-->
<script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script>
<script src="_static/jquery.js"></script>
<script src="_static/underscore.js"></script>
<script src="_static/doctools.js"></script>
<script src="_static/language_data.js"></script>
<script async="async" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/latest.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/javascript" src="_static/js/theme.js"></script>
<link rel="author" title="About these documents" href="about.html" />
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
<link rel="next" title="[NEW] Quantization-Aware Training" href="quant_qat.html" />
<link rel="prev" title="Train from ONNX models" href="onnx_transfer.html" />
</head>
<body class="wy-body-for-nav">
<div class="wy-grid-for-nav">
<nav data-toggle="wy-nav-shift" class="wy-nav-side">
<div class="wy-side-scroll">
<div class="wy-side-nav-search" >
<a href="index.html" class="icon icon-home" alt="Documentation Home"> N2D2
</a>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="search.html" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
<div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
<p class="caption"><span class="caption-text">Introduction:</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="intro.html">Presentation</a></li>
<li class="toctree-l1"><a class="reference internal" href="about.html">About N2D2-IP</a></li>
<li class="toctree-l1"><a class="reference internal" href="simus.html">Performing simulations</a></li>
<li class="toctree-l1"><a class="reference internal" href="perfs_tools.html">Performance evaluation tools</a></li>
<li class="toctree-l1"><a class="reference internal" href="tuto.html">Tutorials</a></li>
</ul>
<p class="caption"><span class="caption-text">ONNX Import:</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="onnx_convert.html">Obtain ONNX models</a></li>
<li class="toctree-l1"><a class="reference internal" href="onnx_import.html">Import ONNX models</a></li>
<li class="toctree-l1"><a class="reference internal" href="onnx_transfer.html">Train from ONNX models</a></li>
</ul>
<p class="caption"><span class="caption-text">Quantization and Export:</span></p>
<ul class="current">
<li class="toctree-l1 current"><a class="current reference internal" href="#">Post-training quantization</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#principle">Principle</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#weights-normalization">1) Weights normalization</a></li>
<li class="toctree-l3"><a class="reference internal" href="#activations-normalization">2) Activations normalization</a></li>
<li class="toctree-l3"><a class="reference internal" href="#quantization">3) Quantization</a></li>
<li class="toctree-l3"><a class="reference internal" href="#additional-optimization-strategies">Additional optimization strategies</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#weights-clipping-optional">Weights clipping (optional)</a></li>
<li class="toctree-l4"><a class="reference internal" href="#activation-scaling-factor-approximation">Activation scaling factor approximation</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#usage-in-n2d2">Usage in N2D2</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#act-rescaling-mode"><code class="docutils literal notranslate"><span class="pre">-act-rescaling-mode</span></code></a></li>
<li class="toctree-l3"><a class="reference internal" href="#command-line-example">Command line example</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#examples-and-results">Examples and results</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="quant_qat.html">[NEW] Quantization-Aware Training</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_CPP.html">Export: C++</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_CPP_STM32.html">Export: C++/STM32</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_TensorRT.html">Export: TensorRT</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_DNeuro.html">Export: DNeuro</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_ONNX.html">Export: ONNX</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_legacy.html">Export: other / legacy</a></li>
</ul>
<p class="caption"><span class="caption-text">INI File Interface:</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="ini_intro.html">Introduction</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini_databases.html">Databases</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini_data_analysis.html">Stimuli data analysis</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini_environment.html">Stimuli provider (Environment)</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini_layers.html">Network Layers</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini_target.html">Targets (outputs & losses)</a></li>
<li class="toctree-l1"><a class="reference internal" href="adversarial.html">Adversarial module</a></li>
</ul>
<p class="caption"><span class="caption-text">Python API:</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="containers.html">Containers</a></li>
<li class="toctree-l1"><a class="reference internal" href="cells.html">Cells</a></li>
<li class="toctree-l1"><a class="reference internal" href="databases.html">Databases</a></li>
<li class="toctree-l1"><a class="reference internal" href="stimuliprovider.html">StimuliProvider</a></li>
<li class="toctree-l1"><a class="reference internal" href="deepnet.html">DeepNet</a></li>
</ul>
<p class="caption"><span class="caption-text">C++ API / Developer:</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="dev_intro.html">Introduction</a></li>
</ul>
</div>
</div>
</nav>
<section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
<nav class="wy-nav-top" aria-label="top navigation">
<i data-toggle="wy-nav-top" class="fa fa-bars"></i>
<a href="index.html">N2D2</a>
</nav>
<div class="wy-nav-content">
<div class="rst-content">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li><a href="index.html" class="icon icon-home"></a> »</li>
<li>Post-training quantization</li>
<li class="wy-breadcrumbs-aside">
<a href="_sources/quant_post.rst.txt" rel="nofollow"> View page source</a>
</li>
</ul>
<hr/>
</div>
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<div class="section" id="post-training-quantization">
<span id="post-quant-label"></span><h1>Post-training quantization<a class="headerlink" href="#post-training-quantization" title="Permalink to this headline">¶</a></h1>
<div class="section" id="principle">
<h2>Principle<a class="headerlink" href="#principle" title="Permalink to this headline">¶</a></h2>
<p>The post-training quantization algorithm is done in 3 steps:</p>
<div class="section" id="weights-normalization">
<h3>1) Weights normalization<a class="headerlink" href="#weights-normalization" title="Permalink to this headline">¶</a></h3>
<p>All weights are rescaled in the range <span class="math notranslate nohighlight">\([-1.0, 1.0]\)</span>.</p>
<dl class="simple">
<dt>Per layer normalization</dt><dd><p>There is a single weights scaling factor, global to the layer.</p>
</dd>
<dt>Per layer and per output channel normalization</dt><dd><p>There is a different weights scaling factor for each output channel. This allows
finer-grained quantization, with better usage of the quantized range for some
output channels, at the expense of more scaling factors to store in memory.</p>
</dd>
</dl>
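<p>The two granularities can be illustrated with a minimal NumPy sketch. The helper names
(<code>per_layer_normalize</code>, <code>per_channel_normalize</code>) and the assumed
(out_channels, in_channels, h, w) weight layout are illustrative only and are not part of the N2D2 API:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
import numpy as np

def per_layer_normalize(weights):
    """Rescale the whole weight tensor into [-1.0, 1.0] with a single scaling factor."""
    scale = np.abs(weights).max()          # one global factor for the layer
    return weights / scale, scale

def per_channel_normalize(weights):
    """Rescale each output channel independently into [-1.0, 1.0].

    Assumes an (out_channels, in_channels, h, w) layout: one factor per output channel.
    """
    scales = np.abs(weights).max(axis=(1, 2, 3), keepdims=True)
    return weights / scales, scales.squeeze()
</pre></div>
</div>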
</div>
<div class="section" id="activations-normalization">
<h3>2) Activations normalization<a class="headerlink" href="#activations-normalization" title="Permalink to this headline">¶</a></h3>
<p>Activations at each layer are rescaled in the range <span class="math notranslate nohighlight">\([-1.0, 1.0]\)</span> for signed
outputs and <span class="math notranslate nohighlight">\([0.0, 1.0]\)</span> for unsigned outputs.</p>
<p>The <strong>optimal quantization threshold value</strong> of the activation output of each
layer is determined using the validation dataset (or test dataset if no
validation dataset is available).</p>
<p>This is an iterative process: the normalization factors of the previous layers
must be taken into account.</p>
<p>Finding the optimal quantization threshold value of the activation output of
each layer is done as follows:</p>
<ol class="arabic">
<li><p>Compute histogram of activation values;</p></li>
<li><p>Find threshold that minimizes distance between original distribution and
clipped quantized distribution. Two distance algorithms can be used:</p>
<ul class="simple">
<li><p>Mean Squared Error (MSE);</p></li>
<li><p>Kullback–Leibler divergence metric (KL-divergence).</p></li>
</ul>
<p>Another, simpler method is to just clip the values above a fixed quantile.</p>
</li>
</ol>
<div class="figure align-default">
<img alt="Activation values histogram and corresponding thresholds." src="_images/activations_histogram.png" />
</div>
<p>The obtained threshold value is therefore the activation scaling factor to be
taken into account during quantization.</p>
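<p>As an illustration of step 2, the sketch below scans candidate clipping thresholds and keeps
the one minimizing the MSE between the original activations and their clipped, quantized
counterparts. It is a hypothetical NumPy helper written for this page, not the actual N2D2
implementation, and it assumes unsigned activations:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
import numpy as np

def mse_optimal_threshold(activations, nb_bits=8, n_candidates=100):
    """Return the clipping threshold minimizing the MSE between the original
    activations and their clipped, fake-quantized values (unsigned case)."""
    max_val = float(activations.max())
    levels = 2 ** nb_bits - 1
    candidates = np.linspace(max_val / n_candidates, max_val, n_candidates)
    errors = []
    for threshold in candidates:
        clipped = np.clip(activations, 0.0, threshold)
        # Fake-quantize the clipped values on `levels` steps, then compare to the originals.
        quantized = np.round(clipped / threshold * levels) / levels * threshold
        errors.append(np.mean((activations - quantized) ** 2))
    return candidates[np.argmin(errors)]
</pre></div>
</div>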
</div>
<div class="section" id="quantization">
<h3>3) Quantization<a class="headerlink" href="#quantization" title="Permalink to this headline">¶</a></h3>
<p>Inputs, weights, biases and activations are quantized to the desired
<span class="math notranslate nohighlight">\(nbbits\)</span> precision.</p>
<p>Convert ranges from <span class="math notranslate nohighlight">\([-1.0, 1.0]\)</span> and <span class="math notranslate nohighlight">\([0.0, 1.0]\)</span> to
<span class="math notranslate nohighlight">\([-2^{nbbits-1}+1, 2^{nbbits-1}-1]\)</span> and <span class="math notranslate nohighlight">\([0, 2^{nbbits}-1]\)</span>, taking
into account all dependencies.</p>
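<p>For instance, with <span class="math notranslate nohighlight">\(nbbits = 8\)</span>, the signed range becomes [-127, 127] and the
unsigned range [0, 255]. The following sketch (a hypothetical helper, not N2D2 code) maps
normalized values to that integer range:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
import numpy as np

def quantize(values, nb_bits=8, signed=True):
    """Map values normalized to [-1.0, 1.0] or [0.0, 1.0] onto the integer range
    for the requested precision."""
    if signed:
        scale = 2 ** (nb_bits - 1) - 1        # e.g. 127 for 8 bits
        return np.clip(np.round(values * scale), -scale, scale).astype(np.int32)
    scale = 2 ** nb_bits - 1                  # e.g. 255 for 8 bits
    return np.clip(np.round(values * scale), 0, scale).astype(np.int32)
</pre></div>
</div>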
</div>
<div class="section" id="additional-optimization-strategies">
<h3>Additional optimization strategies<a class="headerlink" href="#additional-optimization-strategies" title="Permalink to this headline">¶</a></h3>
<div class="section" id="weights-clipping-optional">
<h4>Weights clipping (optional)<a class="headerlink" href="#weights-clipping-optional" title="Permalink to this headline">¶</a></h4>
<p>Weights can be clipped using the same strategy as for the activations
(finding the optimal quantization threshold using the weights histogram).
However, this usually leads to worse results than no clipping.</p>
</div>
<div class="section" id="activation-scaling-factor-approximation">
<h4>Activation scaling factor approximation<a class="headerlink" href="#activation-scaling-factor-approximation" title="Permalink to this headline">¶</a></h4>
<p>The activation scaling factor <span class="math notranslate nohighlight">\(\alpha\)</span> can be approximated in the following
ways (see the sketch after the list):</p>
<ul class="simple">
<li><p>Fixed-point: <span class="math notranslate nohighlight">\(\alpha\)</span> is approximated by <span class="math notranslate nohighlight">\(x 2^{-p}\)</span>;</p></li>
<li><p>Single-shift: <span class="math notranslate nohighlight">\(\alpha\)</span> is approximated by <span class="math notranslate nohighlight">\(2^{x}\)</span>;</p></li>
<li><p>Double-shift: <span class="math notranslate nohighlight">\(\alpha\)</span> is approximated by <span class="math notranslate nohighlight">\(2^{n} + 2^{m}\)</span>.</p></li>
</ul>
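<p>The snippet below is a minimal sketch of the three approximations for a given positive
<span class="math notranslate nohighlight">\(\alpha\)</span>. The helper names and the rounding choices are illustrative assumptions,
not the N2D2 implementation:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
import numpy as np

def fixed_point_approx(alpha, nb_bits=16):
    """Approximate alpha (assumed positive) by x * 2**-p, with x on at most nb_bits unsigned bits."""
    p = nb_bits - 1 - int(np.floor(np.log2(alpha)))
    x = int(np.round(alpha * 2 ** p))
    return x, p                               # alpha ~ x * 2**-p

def single_shift_approx(alpha):
    """Approximate alpha by a single power of two, 2**x."""
    return int(np.round(np.log2(alpha)))      # alpha ~ 2**x

def double_shift_approx(alpha):
    """Approximate alpha by the sum of two powers of two, 2**n + 2**m."""
    n = int(np.floor(np.log2(alpha)))
    # max() guards against alpha being an exact power of two (remainder of zero).
    m = int(np.round(np.log2(max(alpha - 2 ** n, 2 ** (n - 8)))))
    return n, m                               # alpha ~ 2**n + 2**m
</pre></div>
</div>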
</div>
</div>
</div>
<div class="section" id="usage-in-n2d2">
<h2>Usage in N2D2<a class="headerlink" href="#usage-in-n2d2" title="Permalink to this headline">¶</a></h2>
<p>All the post-training strategies described above are available in N2D2 for any
export type. To apply post-training quantization during export, simply use the
<code class="docutils literal notranslate"><span class="pre">-calib</span></code> command line argument.</p>
<p>The following parameters are available in command line:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 27%" />
<col style="width: 73%" />
</colgroup>
<thead>
<tr class="row-odd"><th class="head"><p>Argument [default value]</p></th>
<th class="head"><p>Description</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">-calib</span></code></p></td>
<td><p>Number of stimuli used for the calibration (<code class="docutils literal notranslate"><span class="pre">-1</span></code> = use the full validation dataset)</p></td>
</tr>
<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">-calib-reload</span></code></p></td>
<td><p>Reload and reuse the data of a previous calibration</p></td>
</tr>
<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">-wt-clipping-mode</span></code> [<code class="docutils literal notranslate"><span class="pre">None</span></code>]</p></td>
<td><p>Weights clipping mode on export, can be <code class="docutils literal notranslate"><span class="pre">None</span></code>, <code class="docutils literal notranslate"><span class="pre">MSE</span></code> or <code class="docutils literal notranslate"><span class="pre">KL-Divergence</span></code></p></td>
</tr>
<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">-act-clipping-mode</span></code> [<code class="docutils literal notranslate"><span class="pre">MSE</span></code>]</p></td>
<td><p>Activations clipping mode on export, can be <code class="docutils literal notranslate"><span class="pre">None</span></code>, <code class="docutils literal notranslate"><span class="pre">MSE</span></code>, <code class="docutils literal notranslate"><span class="pre">KL-Divergence</span></code> or <code class="docutils literal notranslate"><span class="pre">Quantile</span></code></p></td>
</tr>
<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">-act-rescaling-mode</span></code> [<code class="docutils literal notranslate"><span class="pre">Single-shift</span></code>]</p></td>
<td><p>Activations scaling mode on export, can be <code class="docutils literal notranslate"><span class="pre">Floating-point</span></code>, <code class="docutils literal notranslate"><span class="pre">Fixed-point16</span></code>, <code class="docutils literal notranslate"><span class="pre">Fixed-point32</span></code>, <code class="docutils literal notranslate"><span class="pre">Single-shift</span></code>
or <code class="docutils literal notranslate"><span class="pre">Double-shift</span></code></p></td>
</tr>
<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">-act-rescale-per-output</span></code> [0]</p></td>
<td><p>If true (1), rescale activation per output instead of per layer</p></td>
</tr>
<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">-act-quantile-value</span></code> [0.9999]</p></td>
<td><p>If activation clipping mode is <code class="docutils literal notranslate"><span class="pre">Quantile</span></code>, fraction of the values to keep without clipping</p></td>
</tr>
</tbody>
</table>
<div class="section" id="act-rescaling-mode">
<h3><code class="docutils literal notranslate"><span class="pre">-act-rescaling-mode</span></code><a class="headerlink" href="#act-rescaling-mode" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal notranslate"><span class="pre">-act-rescaling-mode</span></code> specifies how the activation scaling must be approximated,
for values other than <code class="docutils literal notranslate"><span class="pre">Floating-point</span></code>. This makes it possible to avoid floating-point
operations altogether in the generated code, even for complex, multi-branch networks.
This is particularly useful on architectures without FPU or on FPGA.</p>
<p>For fixed-point scaling approximation (<span class="math notranslate nohighlight">\(x 2^{-p}\)</span>), two modes are available:
<code class="docutils literal notranslate"><span class="pre">Fixed-point16</span></code> and <code class="docutils literal notranslate"><span class="pre">Fixed-point32</span></code>. <code class="docutils literal notranslate"><span class="pre">Fixed-point16</span></code> specifies that <span class="math notranslate nohighlight">\(x\)</span>
must fit in at most 16 bits, whereas <code class="docutils literal notranslate"><span class="pre">Fixed-point32</span></code> allows a 32-bit <span class="math notranslate nohighlight">\(x\)</span>.
In the latter case, beware that overflow can occur on 32-bit-only architectures
when computing the scaling multiplication before the right shift (by <span class="math notranslate nohighlight">\(p\)</span>).</p>
<p>For the <code class="docutils literal notranslate"><span class="pre">Single-shift</span></code> and <code class="docutils literal notranslate"><span class="pre">Double-shift</span></code> modes, only right shifts are allowed
(scaling factor below 1.0). For layers with a scaling factor above 1.0, <code class="docutils literal notranslate"><span class="pre">Fixed-point16</span></code>
is used as a fallback.</p>
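<p>As an illustration (a hypothetical sketch, not the generated export code), the Fixed-point
modes turn the per-layer rescaling into an integer multiply followed by a right shift. The
intermediate product is exactly the step where the overflow mentioned above can occur on a
32-bit-only target:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
import numpy as np

def rescale_fixed_point(acc, x, p):
    """Integer-only activation rescaling: acc * x * 2**-p, with rounding.

    `acc` is the accumulator (e.g. an int32 convolution output) and (x, p) come
    from the fixed-point approximation of the scaling factor alpha ~ x * 2**-p,
    with p assumed positive (right shift only). The product acc * x is the
    overflow-prone step, which is why Fixed-point16 bounds x to 16 bits.
    """
    acc = np.asarray(acc, dtype=np.int64)             # 64-bit here to keep the sketch overflow-free
    return (acc * x + 2 ** (p - 1)) // (2 ** p)       # multiply, add rounding bias, then shift right by p
</pre></div>
</div>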
</div>
<div class="section" id="command-line-example">
<h3>Command line example<a class="headerlink" href="#command-line-example" title="Permalink to this headline">¶</a></h3>
<p>Command line example to run the C++ Export on an INI file containing an ONNX
model:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">n2d2</span> <span class="n">MobileNet_ONNX</span><span class="o">.</span><span class="n">ini</span> <span class="o">-</span><span class="n">seed</span> <span class="mi">1</span> <span class="o">-</span><span class="n">w</span> <span class="o">/</span><span class="n">dev</span><span class="o">/</span><span class="n">null</span> <span class="o">-</span><span class="n">export</span> <span class="n">CPP</span> <span class="o">-</span><span class="n">fuse</span> <span class="o">-</span><span class="n">calib</span> <span class="o">-</span><span class="mi">1</span> <span class="o">-</span><span class="n">act</span><span class="o">-</span><span class="n">clipping</span><span class="o">-</span><span class="n">mode</span> <span class="n">KL</span><span class="o">-</span><span class="n">Divergence</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="examples-and-results">
<h2>Examples and results<a class="headerlink" href="#examples-and-results" title="Permalink to this headline">¶</a></h2>
<p>The post-training quantization accuracy obtained with some models from the ONNX
Model Zoo is reported in the table below, using <code class="docutils literal notranslate"><span class="pre">-calib</span> <span class="pre">1000</span></code>:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 56%" />
<col style="width: 11%" />
<col style="width: 19%" />
<col style="width: 13%" />
</colgroup>
<thead>
<tr class="row-odd"><th class="head"><p><em>ONNX Model Zoo</em> model (specificities)</p></th>
<th class="head"><p>FP acc.</p></th>
<th class="head"><p>Fake 8 bits acc.</p></th>
<th class="head"><p>8 bits acc.</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>resnet18v1.onnx
(<code class="docutils literal notranslate"><span class="pre">-no-unsigned</span> <span class="pre">-act-rescaling-mode</span> <span class="pre">Fixed-point</span></code>)</p></td>
<td><p>69.83%</p></td>
<td><p>68.82%</p></td>
<td><p>68.78%</p></td>
</tr>
<tr class="row-odd"><td><p>mobilenetv2-1.0.onnx
(<code class="docutils literal notranslate"><span class="pre">mobilenetv20_output_flatten0_reshape0</span></code> ignored)</p></td>
<td><p>70.95%</p></td>
<td><p>65.40%</p></td>
<td><p>65.40%</p></td>
</tr>
<tr class="row-even"><td><p>mobilenetv2-1.0.onnx
(<code class="docutils literal notranslate"><span class="pre">mobilenetv20_output_flatten0_reshape0</span></code> ignored
<code class="docutils literal notranslate"><span class="pre">-act-rescaling-mode</span> <span class="pre">Fixed-point</span></code>)</p></td>
<td></td>
<td><p>66.67%</p></td>
<td><p>66.70%</p></td>
</tr>
<tr class="row-odd"><td><p>squeezenet/model.onnx
(<code class="docutils literal notranslate"><span class="pre">-no-unsigned</span> <span class="pre">-act-rescaling-mode</span> <span class="pre">Floating-point</span></code>)</p></td>
<td><p>57.58%</p></td>
<td><p>57.11%</p></td>
<td><p>54.98%</p></td>
</tr>
</tbody>
</table>
<ul class="simple">
<li><p><em>FP acc.</em> is the floating point accuracy obtained before post-training
quantization on the model imported in ONNX;</p></li>
<li><p><em>Fake 8 bits acc.</em> is the accuracy obtained after post-training quantization
in N2D2, in fake-quantized mode (the numbers are quantized but the
representation is still floating point);</p></li>
<li><p><em>8 bits acc.</em> is the accuracy obtained after post-training quantization in the
N2D2 reference C++ export, in actual 8 bits representation.</p></li>
</ul>
</div>
</div>
</div>
</div>
<footer>
<div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
<a href="quant_qat.html" class="btn btn-neutral float-right" title="[NEW] Quantization-Aware Training" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
<a href="onnx_transfer.html" class="btn btn-neutral float-left" title="Train from ONNX models" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
</div>
<hr/>
<div role="contentinfo">
<p>
© Copyright 2019, CEA LIST
</p>
</div>
Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a
<a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a>
provided by <a href="https://readthedocs.org">Read the Docs</a>.
</footer>
</div>
</div>
</section>
</div>
<script type="text/javascript">
jQuery(function () {
SphinxRtdTheme.Navigation.enable(true);
});
</script>
</body>
</html>