Do you know how NVDLA implement the non-linear activation functions #2
Comments
I will check the code in the next few days.
The Single Data Point Processor (SDP) allows both linear and non-linear functions to be applied to individual data points. This is commonly done immediately after convolution in CNNs. The SDP provides native support for linear functions (e.g., simple bias and scaling) and uses lookup tables (LUTs) to implement non-linear functions. This combination covers most common activation functions as well as other element-wise operations, including ReLU, PReLU, precision scaling, batch normalization, and bias addition, plus more complex non-linear functions such as sigmoid or hyperbolic tangent. I suggest you understand the SDP's functions first; then the RTL code will make much more sense.
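As a rough software sketch of the pipeline described above (a linear bias/scale stage followed by a LUT-based non-linearity), assuming illustrative names and table parameters that are not taken from the NVDLA RTL:

```python
import math

# Illustrative model of an SDP-style per-element pipeline: a linear stage
# (bias + scale) followed by a non-linearity looked up from a precomputed
# table. This is a sketch, not the NVDLA implementation.
def build_sigmoid_lut(entries=257, x_min=-8.0, x_max=8.0):
    # Sample sigmoid at evenly spaced points over [x_min, x_max]
    step = (x_max - x_min) / (entries - 1)
    return [1.0 / (1.0 + math.exp(-(x_min + i * step))) for i in range(entries)]

def sdp_element(x, bias=0.0, scale=1.0, lut=None, x_min=-8.0, x_max=8.0):
    y = (x + bias) * scale                      # linear stage
    if lut is None:                             # e.g. a plain ReLU path
        return max(y, 0.0)
    # LUT path: clamp to the table's range, index, then linearly
    # interpolate between the two neighbouring entries
    y = min(max(y, x_min), x_max)
    pos = (y - x_min) / (x_max - x_min) * (len(lut) - 1)
    i = min(int(pos), len(lut) - 2)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac
```

For example, with the sigmoid table above, `sdp_element(0.0, lut=build_sigmoid_lut())` returns 0.5, and with no LUT the same call reduces to ReLU.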
I know which functions are inside the SDP, but as for the detailed solution, how to design the LUT to implement ELU/Sigmoid/Tanh is what I don't know.
Hi Junning, thanks.
The index consists of a 9-bit integer part and a 35-bit fractional part (FRAC), so when the index falls between two entries Y0 and Y1, the output is the linear interpolation (Y0*((1<<35) - FRAC) + Y1*FRAC) >> 35. That is why the hardware always implements both the addr and addr+1 fetch logic.
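A minimal sketch of that interpolation, assuming the 9-bit/35-bit index split described above (variable names are illustrative, not from the RTL):

```python
# The LUT index is split into a 9-bit integer part (the table address)
# and a 35-bit fraction, and the output blends the two neighbouring
# entries -- which is why both addr and addr+1 are fetched.
FRAC_BITS = 35
ONE = 1 << FRAC_BITS  # fixed-point representation of 1.0

def lut_interpolate(lut, index):
    """index: fixed-point value with 9 integer bits and 35 fractional bits."""
    addr = index >> FRAC_BITS          # integer part -> entry address
    frac = index & (ONE - 1)           # 35-bit fractional part
    y0 = lut[addr]                     # entry at addr
    y1 = lut[addr + 1]                 # entry at addr + 1
    # Linear interpolation: y0 weighted by (1 - frac), y1 by frac
    return (y0 * (ONE - frac) + y1 * frac) >> FRAC_BITS
```

For instance, an index exactly halfway between entries 1 and 2 of the table `[0, 100, 200]` yields 150.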
From the RTL source code, it uses 9 bits as the address for the LUT, and it can read 4×16-bit data in parallel;
but it divides the table into LO and LE. What does this mean, and how is the choice between LO and LE made?
Why does it fetch both LE[addr] and LE[addr+1]?
There are also scale/shift/offset/bias stages after the LUT. What are these for?