<!DOCTYPE html>
<html class="writer-html5" lang="en" >
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Presentation — N2D2 documentation</title>
<link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<!--[if lt IE 9]>
<script src="_static/js/html5shiv.min.js"></script>
<![endif]-->
<script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script>
<script src="_static/jquery.js"></script>
<script src="_static/underscore.js"></script>
<script src="_static/doctools.js"></script>
<script src="_static/language_data.js"></script>
<script async="async" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/latest.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/javascript" src="_static/js/theme.js"></script>
<link rel="author" title="About these documents" href="about.html" />
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
<link rel="next" title="About N2D2-IP" href="about.html" />
<link rel="prev" title="N2D2" href="index.html" />
</head>
<body class="wy-body-for-nav">
<div class="wy-grid-for-nav">
<nav data-toggle="wy-nav-shift" class="wy-nav-side">
<div class="wy-side-scroll">
<div class="wy-side-nav-search" >
<a href="index.html" class="icon icon-home" alt="Documentation Home"> N2D2
</a>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="search.html" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
<div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
<p class="caption"><span class="caption-text">Introduction:</span></p>
<ul class="current">
<li class="toctree-l1 current"><a class="current reference internal" href="#">Presentation</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#database-handling">Database handling</a></li>
<li class="toctree-l2"><a class="reference internal" href="#data-pre-processing">Data pre-processing</a></li>
<li class="toctree-l2"><a class="reference internal" href="#deep-network-building">Deep network building</a></li>
<li class="toctree-l2"><a class="reference internal" href="#performances-evaluation">Performances evaluation</a></li>
<li class="toctree-l2"><a class="reference internal" href="#hardware-exports">Hardware exports</a></li>
<li class="toctree-l2"><a class="reference internal" href="#summary">Summary</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="about.html">About N2D2-IP</a></li>
<li class="toctree-l1"><a class="reference internal" href="simus.html">Performing simulations</a></li>
<li class="toctree-l1"><a class="reference internal" href="perfs_tools.html">Performance evaluation tools</a></li>
<li class="toctree-l1"><a class="reference internal" href="tuto.html">Tutorials</a></li>
</ul>
<p class="caption"><span class="caption-text">ONNX Import:</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="onnx_convert.html">Obtain ONNX models</a></li>
<li class="toctree-l1"><a class="reference internal" href="onnx_import.html">Import ONNX models</a></li>
<li class="toctree-l1"><a class="reference internal" href="onnx_transfer.html">Train from ONNX models</a></li>
</ul>
<p class="caption"><span class="caption-text">Quantization and Export:</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="quant_post.html">Post-training quantization</a></li>
<li class="toctree-l1"><a class="reference internal" href="quant_qat.html">[NEW] Quantization-Aware Training</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_CPP.html">Export: C++</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_CPP_STM32.html">Export: C++/STM32</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_TensorRT.html">Export: TensorRT</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_DNeuro.html">Export: DNeuro</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_ONNX.html">Export: ONNX</a></li>
<li class="toctree-l1"><a class="reference internal" href="export_legacy.html">Export: other / legacy</a></li>
</ul>
<p class="caption"><span class="caption-text">INI File Interface:</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="ini_intro.html">Introduction</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini_databases.html">Databases</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini_data_analysis.html">Stimuli data analysis</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini_environment.html">Stimuli provider (Environment)</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini_layers.html">Network Layers</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini_target.html">Targets (outputs & losses)</a></li>
<li class="toctree-l1"><a class="reference internal" href="adversarial.html">Adversarial module</a></li>
</ul>
<p class="caption"><span class="caption-text">Python API:</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="containers.html">Containers</a></li>
<li class="toctree-l1"><a class="reference internal" href="cells.html">Cells</a></li>
<li class="toctree-l1"><a class="reference internal" href="databases.html">Databases</a></li>
<li class="toctree-l1"><a class="reference internal" href="stimuliprovider.html">StimuliProvider</a></li>
<li class="toctree-l1"><a class="reference internal" href="deepnet.html">DeepNet</a></li>
</ul>
<p class="caption"><span class="caption-text">C++ API / Developer:</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="dev_intro.html">Introduction</a></li>
</ul>
</div>
</div>
</nav>
<section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
<nav class="wy-nav-top" aria-label="top navigation">
<i data-toggle="wy-nav-top" class="fa fa-bars"></i>
<a href="index.html">N2D2</a>
</nav>
<div class="wy-nav-content">
<div class="rst-content">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li><a href="index.html" class="icon icon-home"></a> »</li>
<li>Presentation</li>
<li class="wy-breadcrumbs-aside">
<a href="_sources/intro.rst.txt" rel="nofollow"> View page source</a>
</li>
</ul>
<hr/>
</div>
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<div class="section" id="presentation">
<h1>Presentation<a class="headerlink" href="#presentation" title="Permalink to this headline">¶</a></h1>
<p>The N2D2 platform is a comprehensive solution for fast and accurate Deep
Neural Network (DNN) simulation and for fully automated building of
DNN-based applications. The platform integrates database construction,
data pre-processing, network building, benchmarking and hardware export
to various targets. It is particularly useful for DNN design and
exploration, allowing simple and fast prototyping of DNNs with different
topologies. Multiple network topology variations can be defined, learned
and compared automatically, in terms of both recognition rate and
computational cost. Export targets include CPU, DSP
and GPU with OpenMP, OpenCL, CUDA, cuDNN and TensorRT programming models,
as well as custom hardware IP code generation with High-Level Synthesis
for FPGA and a dedicated configurable DNN accelerator IP <a class="footnote-reference brackets" href="#id3" id="id1">1</a>.</p>
<p>In the following, the first section describes the database handling
capabilities of the tool, which can automatically generate learning,
validation and testing data sets from any hand-made database (for
example from simple file directories). The second section briefly
describes the data pre-processing capabilities built into the tool, which
require no external pre-processing step and can handle many kinds of
data transformation, normalization and augmentation (for example, elastic
distortion to improve the learning). The third section shows an
example of DNN building using a simple INI text configuration file. The
fourth section shows some examples of metrics obtained after learning
and testing, used to evaluate the performance of the learned DNN. Next, the
fifth section introduces the DNN hardware export capabilities of the
toolflow, which can automatically generate ready-to-use code for various
targets such as embedded GPUs or fully custom dedicated FPGA IPs. Finally,
we conclude by summarising the main features of the tool.</p>
<div class="section" id="database-handling">
<h2>Database handling<a class="headerlink" href="#database-handling" title="Permalink to this headline">¶</a></h2>
<p>The tool integrates everything needed to handle custom or hand-made
databases:</p>
<ul class="simple">
<li><p>Genericity: loads images and sound, as 1D, 2D or 3D data;</p></li>
<li><p>Associate a label with each data point (useful for scene labeling, for
example) or a single label with each data file (one object/class per image,
for example); 1D or 2D labels;</p></li>
<li><p>Advanced Region of Interest (ROI) handling:</p>
<ul>
<li><p>Support arbitrary ROI shapes (circular, rectangular, polygonal or pixelwise
defined);</p></li>
<li><p>Convert ROIs to data point (pixelwise) labels;</p></li>
<li><p>Extract one or several ROIs from an initial dataset to create as many
additional data samples to feed the DNN;</p></li>
</ul>
</li>
<li><p>Native support for file directory-based databases, where each
sub-directory represents a different label. The most common image file formats
are supported (JPEG, PNG, PGM…);</p></li>
<li><p>Possibility to add custom data file formats to the tool without any change
to the code base;</p></li>
<li><p>Automatic random partitioning of the database into learning, validation
and testing sets.</p></li>
</ul>
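<p>As an illustration of how such a database can be declared, the INI interface (described later in this manual) provides a directory-based database type. The snippet below is a sketch: the <code>DIR_Database</code> type and its parameter names follow the INI file reference, but the path and split ratios shown are illustrative assumptions.</p>
<div class="highlight-ini notranslate"><div class="highlight"><pre>; Illustrative sketch: one sub-directory of DataPath per label,
; with an automatic random 60/20/20 learn/validation/test split
[database]
Type=DIR_Database
DataPath=${N2D2_DATA}/my_dataset
Learn=0.6       ; fraction of the data used for learning
Validation=0.2  ; fraction used for validation (the remainder is used for testing)
</pre></div></div>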
</div>
<div class="section" id="data-pre-processing">
<h2>Data pre-processing<a class="headerlink" href="#data-pre-processing" title="Permalink to this headline">¶</a></h2>
<p>Data pre-processing, such as image rescaling, normalization,
filtering… is directly integrated into the toolflow, with no need for
any external tool or pre-processing step. Each pre-processing step is called a
<em>transformation</em>.</p>
<p>The full sequence of transformations can be specified easily in an INI
text configuration file. For example:</p>
<div class="highlight-ini notranslate"><div class="highlight"><pre><span></span><span class="c1">; First step: convert the image to grayscale</span>
<span class="k">[env.Transformation-1]</span>
<span class="na">Type</span><span class="o">=</span><span class="s">ChannelExtractionTransformation</span>
<span class="na">CSChannel</span><span class="o">=</span><span class="s">Gray</span>
<span class="c1">; Second step: rescale the image to a 29x29 size</span>
<span class="k">[env.Transformation-2]</span>
<span class="na">Type</span><span class="o">=</span><span class="s">RescaleTransformation</span>
<span class="na">Width</span><span class="o">=</span><span class="s">29</span>
<span class="na">Height</span><span class="o">=</span><span class="s">29</span>
<span class="c1">; Third step: apply histogram equalization to the image</span>
<span class="k">[env.Transformation-3]</span>
<span class="na">Type</span><span class="o">=</span><span class="s">EqualizeTransformation</span>
<span class="c1">; Fourth step (only during learning): apply random elastic distortions to the images to extend the learning set</span>
<span class="k">[env.OnTheFlyTransformation]</span>
<span class="na">Type</span><span class="o">=</span><span class="s">DistortionTransformation</span>
<span class="na">ApplyTo</span><span class="o">=</span><span class="s">LearnOnly</span>
<span class="na">ElasticGaussianSize</span><span class="o">=</span><span class="s">21</span>
<span class="na">ElasticSigma</span><span class="o">=</span><span class="s">6.0</span>
<span class="na">ElasticScaling</span><span class="o">=</span><span class="s">20.0</span>
<span class="na">Scaling</span><span class="o">=</span><span class="s">15.0</span>
<span class="na">Rotation</span><span class="o">=</span><span class="s">15.0</span>
</pre></div>
</div>
<p>Examples of pre-processing transformations built into the tool are:</p>
<ul class="simple">
<li><p>Image color space change and color channel extraction;</p></li>
<li><p>Elastic distortion;</p></li>
<li><p>Histogram equalization (including CLAHE);</p></li>
<li><p>Convolutional filtering of the image with custom or pre-defined kernels
(Gaussian, Gabor…);</p></li>
<li><p>(Random) image flipping;</p></li>
<li><p>(Random) extraction of fixed-size slices within a given label (for
multi-label images);</p></li>
<li><p>Normalization;</p></li>
<li><p>Rescaling, padding/cropping, trimming;</p></li>
<li><p>Image data range clipping;</p></li>
<li><p>(Random) extraction of fixed-size slices.</p></li>
</ul>
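<p>For example, the random image flipping listed above could be enabled with a transformation block similar to those shown previously. This is a sketch: the <code>FlipTransformation</code> type follows the INI file reference, and the parameter values here are illustrative.</p>
<div class="highlight-ini notranslate"><div class="highlight"><pre>; Illustrative sketch: random horizontal flip, applied during learning only
[env.OnTheFlyTransformation-2]
Type=FlipTransformation
ApplyTo=LearnOnly
RandomHorizontalFlip=1
</pre></div></div>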
</div>
<div class="section" id="deep-network-building">
<h2>Deep network building<a class="headerlink" href="#deep-network-building" title="Permalink to this headline">¶</a></h2>
<p>Building a deep network is straightforward and can be done
within the same INI configuration file. Several layer types are
available: convolutional, pooling, fully connected, radial basis
function (RBF) and softmax. The tool is highly modular and new layer
types can be added without any change to the code base. The parameters of
each layer type are modifiable; for the convolutional layer, for example,
one can specify the size of the convolution kernels, the stride, the
number of kernels per input map and the learning parameters (learning
rate, initial weight values…). For the learning, the data precision can
be chosen among 16-bit (with NVIDIA cuDNN <a class="footnote-reference brackets" href="#id4" id="id2">2</a>), 32-bit and 64-bit
floating point numbers.</p>
<p>The following example, which will serve as the use case for the rest of
this presentation, shows how to build a DNN with 5 layers: one
convolution layer, followed by one MAX pooling layer, followed by two
fully connected layers and a softmax output layer.</p>
<div class="highlight-ini notranslate"><div class="highlight"><pre><span></span><span class="c1">; Specify the input data format</span>
<span class="k">[env]</span>
<span class="na">SizeX</span><span class="o">=</span><span class="s">24</span>
<span class="na">SizeY</span><span class="o">=</span><span class="s">24</span>
<span class="na">BatchSize</span><span class="o">=</span><span class="s">12</span>
<span class="c1">; First layer: convolutional with 3x3 kernels</span>
<span class="k">[conv1]</span>
<span class="na">Input</span><span class="o">=</span><span class="s">env</span>
<span class="na">Type</span><span class="o">=</span><span class="s">Conv</span>
<span class="na">KernelWidth</span><span class="o">=</span><span class="s">3</span>
<span class="na">KernelHeight</span><span class="o">=</span><span class="s">3</span>
<span class="na">NbOutputs</span><span class="o">=</span><span class="s">32</span>
<span class="na">Stride</span><span class="o">=</span><span class="s">1</span>
<span class="c1">; Second layer: MAX pooling with pooling area 2x2</span>
<span class="k">[pool1]</span>
<span class="na">Input</span><span class="o">=</span><span class="s">conv1</span>
<span class="na">Type</span><span class="o">=</span><span class="s">Pool</span>
<span class="na">Pooling</span><span class="o">=</span><span class="s">Max</span>
<span class="na">PoolWidth</span><span class="o">=</span><span class="s">2</span>
<span class="na">PoolHeight</span><span class="o">=</span><span class="s">2</span>
<span class="na">NbOutputs</span><span class="o">=</span><span class="s">32</span>
<span class="na">Stride</span><span class="o">=</span><span class="s">2</span>
<span class="na">Mapping.Size</span><span class="o">=</span><span class="s">1 ; one to one connection between convolution output maps and pooling input maps</span>
<span class="c1">; Third layer: fully connected layer with 60 neurons</span>
<span class="k">[fc1]</span>
<span class="na">Input</span><span class="o">=</span><span class="s">pool1</span>
<span class="na">Type</span><span class="o">=</span><span class="s">Fc</span>
<span class="na">NbOutputs</span><span class="o">=</span><span class="s">60</span>
<span class="c1">; Fourth layer: fully connected with 10 neurons</span>
<span class="k">[fc2]</span>
<span class="na">Input</span><span class="o">=</span><span class="s">fc1</span>
<span class="na">Type</span><span class="o">=</span><span class="s">Fc</span>
<span class="na">NbOutputs</span><span class="o">=</span><span class="s">10</span>
<span class="c1">; Final layer: softmax</span>
<span class="k">[softmax]</span>
<span class="na">Input</span><span class="o">=</span><span class="s">fc2</span>
<span class="na">Type</span><span class="o">=</span><span class="s">Softmax</span>
<span class="na">NbOutputs</span><span class="o">=</span><span class="s">10</span>
<span class="na">WithLoss</span><span class="o">=</span><span class="s">1</span>
<span class="k">[softmax.Target]</span>
<span class="na">TargetValue</span><span class="o">=</span><span class="s">1.0</span>
<span class="na">DefaultValue</span><span class="o">=</span><span class="s">0.0</span>
</pre></div>
</div>
<p>The resulting DNN is shown in the figure below.</p>
<div class="figure align-default" id="id5">
<img alt="Automatically generated and ready to learn DNN from the INI configuration file example." src="_images/dnn_example.png" />
<p class="caption"><span class="caption-text">Automatically generated and ready to learn DNN from the INI
configuration file example.</span><a class="headerlink" href="#id5" title="Permalink to this image">¶</a></p>
</div>
<p>The learning is accelerated on GPU using the NVIDIA cuDNN framework,
integrated into the toolflow. Using GPU acceleration, learning times can
typically be reduced by two orders of magnitude, enabling the learning
of large databases within tens of minutes to a few hours, instead of
several days or weeks without GPU acceleration.</p>
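<p>In practice, the learning and testing described above are launched from the command line. The invocation below is a sketch: the flag names follow the “Performing simulations” section of this manual, and the INI file name and iteration counts are placeholders.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>./n2d2 network.ini -learn 600000 -log 10000   # learn for 600,000 steps, reporting every 10,000
./n2d2 network.ini -test                      # evaluate the learned network on the test set
</pre></div></div>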
</div>
<div class="section" id="performances-evaluation">
<h2>Performances evaluation<a class="headerlink" href="#performances-evaluation" title="Permalink to this headline">¶</a></h2>
<p>The software automatically outputs all the information needed to
analyze the applicative performance of the network, such as the recognition rate
and the validation score during the learning; the confusion matrix
during learning, validation and test; the memory and computation
requirements of the network; the output map activity for each layer;
and so on, as shown in figure [fig:metrics].</p>
</div>
<div class="section" id="hardware-exports">
<h2>Hardware exports<a class="headerlink" href="#hardware-exports" title="Permalink to this headline">¶</a></h2>
<p>Once the recognition rate of the learned DNN is satisfactory, an
optimized version of the network can be automatically exported for
various embedded targets. Automated benchmarking of the network's
computation performance across the different targets can also be performed.</p>
<p>The following targets are currently supported by the toolflow:</p>
<ul class="simple">
<li><p>Plain C code (no dynamic memory allocation, no floating point
processing);</p></li>
<li><p>C code accelerated with OpenMP;</p></li>
<li><p>C code tailored for High-Level Synthesis (HLS) with Xilinx Vivado HLS;</p>
<ul>
<li><p>Direct synthesis to FPGA, with timing and utilization after routing;</p></li>
<li><p>Possibility to constrain the maximum number of clock cycles allowed to
compute the whole network;</p></li>
<li><p>FPGA utilization vs. number of clock cycles
trade-off analysis;</p></li>
</ul>
</li>
<li><p>OpenCL code optimized for either CPU/DSP or GPU;</p></li>
<li><p>CUDA kernels, cuDNN and TensorRT code optimized for NVIDIA GPUs.</p></li>
</ul>
<p>Different automated optimizations are embedded in the exports:</p>
<ul class="simple">
<li><p>DNN weight and signal data precision reduction (down to 8-bit integers
or less for custom FPGA IPs);</p></li>
<li><p>Approximation of the non-linear network activation functions;</p></li>
<li><p>Different weights discretization methods.</p></li>
</ul>
<p>The exports are generated automatically and come with a Makefile and a
working testbench, including the pre-processed testing dataset. Once
generated, the testbench is ready to be compiled and executed on the
target platform. The applicative performance (recognition rate) as well
as the computing time per input data can then be directly measured by the
testbench.</p>
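<p>Generating an export is likewise a single command. The line below is a sketch: the <code>-export</code> and <code>-nbbits</code> options follow the export chapters of this manual, and the target name and bit width shown here are illustrative.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>./n2d2 network.ini -export CPP -nbbits 8   # generate a C++ export with 8-bit quantization
</pre></div></div>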
<div class="figure align-default" id="id6">
<img alt="Example of network benchmarking on different hardware targets." src="_images/targets_benchmarking.png" />
<p class="caption"><span class="caption-text">Example of network benchmarking on different hardware targets.</span><a class="headerlink" href="#id6" title="Permalink to this image">¶</a></p>
</div>
<p>The figure above shows an example of benchmarking
results of the previous DNN on different targets (in log scale).
Compared to desktop CPUs, the number of input image pixels processed per
second is more than one order of magnitude higher with GPUs, and at least
two orders of magnitude higher with a synthesized DNN on FPGA.</p>
</div>
<div class="section" id="summary">
<h2>Summary<a class="headerlink" href="#summary" title="Permalink to this headline">¶</a></h2>
<p>The N2D2 platform is today a complete and production-ready neural
network building tool, which does not require advanced knowledge of
deep learning to be used. It is tailored for fast generation and porting of
neural network applications, with minimum overhead in terms of
database creation and management, data pre-processing, network
configuration and optimized code generation, turning what could be months of
manual porting and verification effort into a single automated step in the
tool.</p>
<dl class="footnote brackets">
<dt class="label" id="id3"><span class="brackets"><a class="fn-backref" href="#id1">1</a></span></dt>
<dd><p>Ongoing work</p>
</dd>
<dt class="label" id="id4"><span class="brackets"><a class="fn-backref" href="#id2">2</a></span></dt>
<dd><p>On future GPUs</p>
</dd>
</dl>
</div>
</div>
</div>
</div>
<footer>
<div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
<a href="about.html" class="btn btn-neutral float-right" title="About N2D2-IP" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a>
<a href="index.html" class="btn btn-neutral float-left" title="N2D2" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a>
</div>
<hr/>
<div role="contentinfo">
<p>
© Copyright 2019, CEA LIST
</p>
</div>
Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a
<a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a>
provided by <a href="https://readthedocs.org">Read the Docs</a>.
</footer>
</div>
</div>
</section>
</div>
<script type="text/javascript">
jQuery(function () {
SphinxRtdTheme.Navigation.enable(true);
});
</script>
</body>
</html>