Releases: es-ude/elastic-ai.creator

v0.47.1

16 Jun 12:24

Fix

  • Remove wrongly committed files (4fdea0c)

v0.47.0

16 Jun 10:29

Feature

  • Simplify project structure (81cbcb3)

v0.46.1

13 Jun 19:10

Fix

  • Fix wrong port definitions (9a4c8af)
  • Fix some syntax errors (3997bbd)

v0.46.0

13 Jun 09:43

Feature

  • Use conv1d arithmetics function to implement conv1d module (69778be)
  • Add conv1d function to arithmetics (1cab190)
  • Test that conv1d uses different arithmetics (7eb01db)
  • Add the ability to sum over dimension (c45c0e6)
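
The conv1d feature set above boils down to a quantize-convolve-requantize pattern, which also connects to the weight-quantization fix below. A minimal sketch under that assumption (fxp_quantize and fxp_conv1d are illustrative names, not the package's API):

```python
import torch
import torch.nn.functional as F

def fxp_quantize(x: torch.Tensor, frac_bits: int = 8) -> torch.Tensor:
    """Round a tensor to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return torch.round(x * scale) / scale

def fxp_conv1d(x: torch.Tensor, weight: torch.Tensor, frac_bits: int = 8) -> torch.Tensor:
    """Quantize both operands, convolve, and requantize the accumulated result."""
    y = F.conv1d(fxp_quantize(x, frac_bits), fxp_quantize(weight, frac_bits))
    return fxp_quantize(y, frac_bits)

x = torch.randn(1, 1, 16)      # (batch, in_channels, length)
w = torch.randn(4, 1, 3)       # (out_channels, in_channels, kernel_size)
print(fxp_conv1d(x, w).shape)  # torch.Size([1, 4, 14])
```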

Fix

  • Quantize weights before inference (61153e6)

v0.45.0

10 Jun 11:47

Feature

  • Simplify usage of the elasticai.creator.nn.vhdl package by adding layers to its __init__ (2c7c968)
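
In practice this means layers can be imported from the package root instead of deep module paths. A sketch (the exported name here is an assumption; check the package __init__ for the real list):

```python
# Before (illustrative deep path):
#   from elasticai.creator.nn.vhdl.fp_linear.layer import FPLinear
# After this release, an import from the package root suffices:
from elasticai.creator.nn.vhdl import FPLinear  # exported name assumed
```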

Fix

  • Fix broken import in base template generator and move it, with its template, to its own folder (9eb1f70)

v0.44.0

09 Jun 13:19

Feature

  • Check for autowiring protocol violation (3f17e00)
  • Add AutoWirer (f4159c8)
  • Add intermediate symbols to rule definitions (624b310)
  • Support parsing partial files (f2c2eb6)
  • Support parsing partial files (8170012)
  • Add standalone parser module (5a9b141)
  • Add basic vhdl parsing (5df2a3f)
  • Port expansion/template based on autowiring protocol (0d14618)
  • template: Make precomputed scalar functions bufferless (89986fa)
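
The AutoWirer's job can be pictured as matching each sink port to a compatible source port and failing loudly otherwise. A toy sketch of that check (Port, autowire, and the x-to-y naming convention are our illustration, not the library's API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    name: str
    width: int

# Assumed convention for this sketch: a sink's x-side inputs are driven by
# the source's y-side outputs of identical width.
SINK_TO_SOURCE = {"x": "y", "x_address": "y_address"}

def autowire(source_outputs: list[Port], sink_inputs: list[Port]) -> dict[str, str]:
    by_name = {p.name: p for p in source_outputs}
    wiring = {}
    for sink in sink_inputs:
        source = by_name.get(SINK_TO_SOURCE.get(sink.name, ""))
        if source is None or source.width != sink.width:
            raise ValueError(f"autowiring protocol violation at port {sink.name!r}")
        wiring[sink.name] = source.name
    return wiring

print(autowire([Port("y", 8)], [Port("x", 8)]))  # {'x': 'y'}
```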

Fix

  • Use new Sequential constructor (6bb111b)
  • Fix port definition and implementation of the monotonous function design (2d423d4)
  • Children of sequential layer determine signal widths (3dd5c0c)
  • Remove obsolete parsing functionality (7f85d05)
  • Adjust tests to follow previous change (c328bd5)
  • Correct tuple type annotation (f0e7da0)

v0.43.0

09 Jun 09:56

Feature

  • Add tests for the FPMonotonouslyIncreasingModule (9ba64ae)
  • Introduce FPMonotonouslyIncreasingModule to easily add new activations (b78c922)
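
The idea behind the module: a monotonously increasing function sampled over a range yields sorted outputs, so the generated lookup table can be resolved by a chain of threshold comparisons (see the elsif fix below). A sketch of the sampling step (sample_monotonic is an illustrative name):

```python
import torch

def sample_monotonic(fn, start: float, end: float, sampling_interval: float):
    """Sample fn once over [start, end); monotonicity keeps the samples sorted."""
    xs = torch.arange(start, end, sampling_interval)
    ys = fn(xs)
    assert bool(torch.all(ys[1:] >= ys[:-1])), "function must be monotonously increasing"
    return xs, ys

xs, ys = sample_monotonic(torch.tanh, -3.0, 3.0, 0.25)
```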

Fix

  • Set correct signal names for x and y address (5354a2a)
  • Use elsif in lookup table (f375ba3)
  • Increase default sampling interval (07620d3)

v0.42.0

08 Jun 19:22

Feature

  • Reimplement hard tanh activation function (9b86f9d)
  • Add working hardsigmoid implementation (db03ff0)
  • Make sure that the inplace parameter has a fixed, defined value (79b7a1e)
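
For reference, the standard definitions being reimplemented here for fixed point (these match torch.nn.Hardsigmoid and torch.nn.Hardtanh):

```python
import torch

def hard_sigmoid(x: torch.Tensor) -> torch.Tensor:
    # clamp(x / 6 + 1 / 2, 0, 1), as in torch.nn.Hardsigmoid
    return torch.clamp(x / 6 + 0.5, min=0.0, max=1.0)

def hard_tanh(x: torch.Tensor) -> torch.Tensor:
    # clamp(x, -1, 1), as in torch.nn.Hardtanh with default bounds
    return torch.clamp(x, min=-1.0, max=1.0)
```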

v0.41.0

08 Jun 12:07

Feature

  • Add fixed point ReLU module (62c1555)
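
ReLU is a natural first fixed-point module since it needs no multiplier: quantize, then clamp negatives to zero. A sketch (fxp_relu is our name, not the module's API):

```python
import torch

def fxp_relu(x: torch.Tensor, frac_bits: int = 8) -> torch.Tensor:
    scale = 2.0 ** frac_bits
    quantized = torch.round(x * scale) / scale  # snap to the fixed-point grid
    return torch.clamp(quantized, min=0.0)      # ReLU: negatives become zero
```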

v0.40.0

04 Jun 11:42

Feature

  • Simplify the use of the sequential layer (same as in torch) (9fad15d)
  • Improve performance of the identity step autograd function (46f036c)
  • Add quantized tanh implementation with lookup tables (3a1fb10)
  • Implement bufferless component interface for precomputed scalar function (f701a57)
  • Pass step LUT to identity step function and improve readability (c1b6747)
  • Rename autograd function and pass step LUT to autograd function (d607e98)
  • Implement autograd fn to map inputs to a subset of inputs (26c6ec7)
  • Add a function to easily compare tensors with pytest (24e737e)
  • Add experimental precomputed tanh in fixed point (0e76d03)
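
Two of the items above lend themselves to a quick illustration: the sequential layer is meant to compose like torch.nn.Sequential, and the pytest helper reduces tensor comparison to one assertion. A sketch (layer and helper names are assumptions):

```python
import torch

# Composition in the style of torch.nn.Sequential (layer names illustrative):
#   model = Sequential(FPLinear(...), FPHardTanh(...))

# One-line tensor comparison for tests, in the spirit of the helper above:
def assert_tensors_close(actual: torch.Tensor, expected: torch.Tensor) -> None:
    torch.testing.assert_close(actual, expected)

assert_tensors_close(torch.tensor([1.0, 2.0]), torch.tensor([1.0, 2.0]))
```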

Fix

  • Fix dropping of the last IO pair when calling the save_to function (2bc46ac)
  • Fix missing creation of a subpath in the save_to function (2a4dbdf)