Hi 👋, thanks for coming!





I'm a machine learning engineer 👨 from China, currently working on model acceleration.

Trying to work hard and stop talking too much.

Hope you have fun here. 🤝


Pinned repositories

  1. MIT-Linear-Algebra-Learning-Materials (Public)

     Learning materials I collected while studying the MIT 18.06 Linear Algebra course.

     138 stars, 44 forks

  2. minds-thoughts-and-resources-about-research- (Public)

     A personal notebook of resources and ideas, recording things related to research and study that I come across.

     8 stars

  3. Richardyu114.github.io (Public)

     Personal blog built with Hexo, https://densecollections.top

     CSS

  4. weakly-segmentation-with-bounding-box (Public)

     Ideas from the papers 'BoxSup' and 'Simple Does It', implemented on sputum smear and drone images.

     Python, 18 stars, 4 forks

  5. intel-extension-for-transformers (Public)

     Forked from intel/intel-extension-for-transformers

     Extending Hugging Face transformers APIs for Transformer-based models and improving the productivity of inference deployment. With extremely compressed models, the toolkit can greatly improve the inf…

     C++

  6. neural-speed (Public)

     Forked from intel/neural-speed

     An innovation library for efficient LLM inference via low-bit quantization and sparsity.

     C++