@tml-epfl

Theory of Machine Learning, EPFL

Popular repositories

  1. llm-adaptive-attacks Public

    Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]

    Shell · 178 stars · 20 forks

  2. understanding-fast-adv-training Public

    Understanding and Improving Fast Adversarial Training [NeurIPS 2020]

    Python · 93 stars · 12 forks

  3. llm-past-tense Public

    Does Refusal Training in LLMs Generalize to the Past Tense? [arXiv, July 2024]

    Python · 46 stars · 6 forks

  4. sharpness-vs-generalization Public

    A modern look at the relationship between sharpness and generalization [ICML 2023]

    Jupyter Notebook · 42 stars · 3 forks

  5. why-weight-decay Public

    Why Do We Need Weight Decay in Modern Deep Learning? [arXiv, Oct 2023]

    Python · 41 stars · 0 forks

  6. understanding-sam Public

    Towards Understanding Sharpness-Aware Minimization [ICML 2022]

    Jupyter Notebook · 34 stars · 3 forks

Repositories

Showing 10 of 12 repositories
  • llm-adaptive-attacks Public

    Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]

    Shell · 178 stars · 20 forks · 0 open issues · 0 open pull requests · MIT license · Updated Aug 3, 2024
  • llm-past-tense Public

    Does Refusal Training in LLMs Generalize to the Past Tense? [arXiv, July 2024]

    Python · 46 stars · 6 forks · 0 open issues · 0 open pull requests · Updated Jul 18, 2024
  • icl-alignment Public

    Is In-Context Learning Sufficient for Instruction Following in LLMs?

    Python · 19 stars · 3 forks · 0 open issues · 0 open pull requests · Apache-2.0 license · Updated May 31, 2024
  • long-is-more-for-alignment Public

    Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024]

    Python · 12 stars · 0 forks · 1 open issue · 0 open pull requests · Updated May 2, 2024
  • why-weight-decay Public

    Why Do We Need Weight Decay in Modern Deep Learning? [arXiv, Oct 2023]

    Python · 41 stars · 0 forks · 0 open issues · 0 open pull requests · Updated Oct 9, 2023
  • sam-low-rank-features Public

    Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023]

    Jupyter Notebook · 24 stars · 1 fork · 1 open issue · 0 open pull requests · Updated Sep 22, 2023
  • sharpness-vs-generalization Public

    A modern look at the relationship between sharpness and generalization [ICML 2023]

    Jupyter Notebook · 42 stars · 3 forks · 0 open issues · 0 open pull requests · Updated Sep 11, 2023
  • sgd-sparse-features Public

    SGD with large step sizes learns sparse features [ICML 2023]

    Jupyter Notebook · 31 stars · 5 forks · 0 open issues · 0 open pull requests · Updated Apr 24, 2023
  • tml-epfl.github.io Public

    Repository storing all related information for the weekly TML group meetings.

    HTML · 0 stars · 0 forks · 0 open issues · 0 open pull requests · MIT license · Updated Nov 16, 2022
  • understanding-sam Public

    Towards Understanding Sharpness-Aware Minimization [ICML 2022]

    Jupyter Notebook · 34 stars · 3 forks · 0 open issues · 0 open pull requests · Updated Jun 14, 2022
