- 👋 Hi, I’m Mingyu Jin, PhD student at Rutgers University
- 👀 I’m interested in Trustworthy Large Language Models, Explainability, and Data Mining.
- 🌱 I’m currently learning about interpretability in transformers.
- 💞️ I’m looking to collaborate with students who are also interested in these areas.
- 📫 How to reach me: mingyu.jin404@gmail.com (I may be slow to respond)
Pinned repositories

- **Stockagent**: Large Language Model-based Stock Trading in Simulated Real-world Environments
- **The-Impact-of-Reasoning-Step-Length-on-Large-Language-Models**: [ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correlation between the effectiveness of CoT and the length of reasoning steps…
- **Luckfort/CD**: [COLING'25] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?
- **Disentangling-Memory-and-Reasoning**: [preprint] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA.