Releases: es-ude/elastic-ai.creator
v0.47.1
Fix
- Remove wrongly committed files (4fdea0c)
v0.47.0
Feature
- Simplify project structure (81cbcb3)
v0.46.0
Feature
- Use conv1d arithmetics function to implement conv1d module (69778be)
- Add conv1d function to arithmetics (1cab190)
- Test that conv1d uses different arithmetics (7eb01db)
- Add the ability to sum over a dimension (c45c0e6)
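The conv1d entries above route the convolution through an arithmetics function rather than hard-coding it in the module. A minimal sketch of that idea in plain Python, with illustrative names (`mul`/`add` stand in for an arithmetics backend; none of this is the library's actual API):

```python
# Hypothetical sketch: conv1d expressed as a plain arithmetic function, with
# the elementary operations injected so a different arithmetics backend
# (e.g. a quantizing one) could be swapped in.

def conv1d(inputs, weights, bias, mul=lambda a, b: a * b, add=lambda a, b: a + b):
    """Valid (no padding, stride 1) 1-D convolution over a list of numbers."""
    kernel_size = len(weights)
    out = []
    for start in range(len(inputs) - kernel_size + 1):
        acc = bias
        for k in range(kernel_size):
            acc = add(acc, mul(inputs[start + k], weights[k]))
        out.append(acc)
    return out
```

For example, `conv1d([1, 2, 3, 4], [1, 0, -1], 0)` slides the kernel over two windows and returns `[-2, -2]`.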
Fix
- Quantize weights before inference (61153e6)
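The fix above quantizes weights before running inference, so simulated results match what the quantized hardware computes. A small illustrative sketch of that pattern (our own names, not the library's code), assuming a fixed-point grid with a given number of fractional bits:

```python
# Illustrative: snap parameters to a fixed-point grid *before* evaluating,
# so inference never sees the full-precision weights.

def quantize(value, frac_bits=8):
    """Round to the nearest multiple of 2**-frac_bits."""
    scale = 1 << frac_bits
    return round(value * scale) / scale

def linear(x, weights, bias, frac_bits=8):
    qw = [quantize(w, frac_bits) for w in weights]
    qb = quantize(bias, frac_bits)
    return qb + sum(xi * wi for xi, wi in zip(x, qw))
```

Quantizing first keeps the software model and the generated hardware numerically consistent, which is exactly what the fix restores.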
v0.45.0
Feature
- Simplify usage of the elasticai.creator.nn.vhdl package by adding layers to init (2c7c968)
Fix
- Fix broken import in base template generator and move it with its template to its own folder (9eb1f70)
v0.44.0
Feature
- Check for autowiring protocol violation (3f17e00)
- Add AutoWirer (f4159c8)
- Add intermediate symbols to rule definitions (624b310)
- Support parsing partial files (f2c2eb6)
- Support parsing partial files (8170012)
- Add standalone parser module (5a9b141)
- Add basic VHDL parsing (5df2a3f)
- Port expansion/template based on autowiring protocol (0d14618)
- template: Make precomputed scalar functions bufferless (89986fa)
Fix
- Use new Sequential constructor (6bb111b)
- Port def and impl of monotonous function design (2d423d4)
- Children of sequential layer determine signal widths (3dd5c0c)
- Remove obsolete parsing functionality (7f85d05)
- Adjust tests to follow previous change (c328bd5)
- Correct tuple type annotation (f0e7da0)
v0.43.0
Feature
- Add tests for the FPMonotonouslyIncreasingModule (9ba64ae)
- Introduce FPMonotonouslyIncreasingModule to easily add new activations (b78c922)
Fix
- Set correct signal names for x and y address (5354a2a)
- Use elsif in lookup table (f375ba3)
- Increase default sampling interval (07620d3)
v0.42.0
Feature
- Reimplement hard tanh activation function (9b86f9d)
- Add working hardsigmoid implementation (db03ff0)
- Make sure that the inplace parameter has a fixed definition (79b7a1e)
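Hard tanh and hard sigmoid are piecewise-linear approximations of their smooth counterparts, which makes them cheap to realize in hardware. A sketch of the standard definitions (as used in torch; the library's versions are generated as fixed-point VHDL, but the underlying math is the same):

```python
# Standard piecewise-linear activations: just comparisons and a scale,
# no exponentials, hence well suited to FPGA implementation.

def hard_tanh(x, min_val=-1.0, max_val=1.0):
    return max(min_val, min(max_val, x))

def hard_sigmoid(x):
    # clip(x / 6 + 1 / 2, 0, 1); exact saturation for |x| >= 3
    return max(0.0, min(1.0, x / 6 + 0.5))
```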
v0.41.0
Feature
- Add fixed point ReLU module (62c1555)
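In fixed point, values are stored as signed (two's-complement) integers, so ReLU reduces to a sign-bit test. A conceptual sketch under that assumption (not the module's actual implementation):

```python
# Hypothetical illustration: ReLU on a raw two's-complement word.
# If the sign bit is set the value is negative, so the output is zero;
# otherwise the word passes through unchanged.

def fixed_point_relu(raw, total_bits=8):
    sign_bit = 1 << (total_bits - 1)
    return 0 if raw & sign_bit else raw
```

In hardware this is a single multiplexer driven by the most significant bit, which is why ReLU is nearly free on an FPGA.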
v0.40.0
Feature
- Simplify the use of the sequential layer (same as in torch) (9fad15d)
- Improve performance of the identity step autograd function (46f036c)
- Add quantized tanh implementation with lookup tables (3a1fb10)
- Implement bufferless component interface for precomputed scalar function (f701a57)
- Pass step lut to identity step function and improve readability (c1b6747)
- Rename autograd function and pass step lut to autograd function (d607e98)
- Implement autograd fn to map inputs to a subset of inputs (26c6ec7)
- Add a function to easily compare tensors with pytest (24e737e)
- Add experimental precomputed tanh in fixed point (0e76d03)
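Several of the entries above revolve around one idea: precompute tanh at a fixed set of sample points, store the results in a lookup table, and answer queries by snapping the input to the nearest sample (a step-function approximation). A self-contained sketch of that scheme, with illustrative names of our own choosing rather than the library's API:

```python
import math

# Illustrative LUT-based tanh: sample once up front, then every query is a
# clamp plus an index computation plus a table read, with no transcendental
# math at inference time.

def make_tanh_lut(lo=-4.0, hi=4.0, samples=256):
    step = (hi - lo) / (samples - 1)
    lut = [math.tanh(lo + i * step) for i in range(samples)]

    def lut_tanh(x):
        # Clamp to the sampled range, then snap to the nearest entry.
        i = round((min(max(x, lo), hi) - lo) / step)
        return lut[i]

    return lut_tanh
```

The sampling interval controls the accuracy/size trade-off: a finer grid shrinks the step error but enlarges the table, which is why a default sampling interval is a tunable parameter.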
Fix
- Fix that the last io pair was dropped when calling the save_to function (2bc46ac)
- Fix missing creation of a subpath in the save_to function (2a4dbdf)