oublalkhalid/README.md

Hi! I'm Khalid

  • 👀 I’m interested in the satisfiability of machine and deep learning models (Explainable AI, causality, ...). I am passionate about mathematics and its applications, such as the energy domain and sustainability.

  • 🥇 M.S.E. in Computer Science (summa cum laude) from École Polytechnique - Institut Polytechnique de Paris, France. I am currently pursuing a Ph.D. at the same institute.

  • 💞️ Open to collaboration on explainability for generative models

  • 📚 What have I read?

  • 📫 How to reach me: khlaid.oublal@polytechnique.edu (.org for graduate email)

  • Training at the Mathematical Institute, University of Oxford

  • Oxford Machine Learning Summer School (OxML 2023): Generative Models for NLP & Finance

  • Current work:

    • Satisfiability modulo theories: neural networks as a sub-symbolic approach, with Prof. Sergio Mover
    • Deep Q-Learning systems for collision avoidance (802.11bf) on electric scooters, with Prof. Keun-Woo Lim
    • Explainable models for sequential data, with Prof. François Roueff and Prof. Saïd Ladjal; follow-up by Prof. Christian Jutten
    • OpenXAI for time series, with Stanford University (ongoing)
  • I collaborate with @huggingface on Time Series Large Models
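As an aside, the Deep Q-Learning work above rests on the Bellman value update. A minimal tabular sketch of that update rule (toy state/action space of my own invention, not the actual 802.11bf scooter setup):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])  # bootstrap from the best next action
    Q[s, a] += alpha * (td_target - Q[s, a])   # move the estimate toward the target
    return Q

Q = np.zeros((3, 2))  # 3 toy states, 2 actions (e.g. steer / brake)
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[0, 1])  # 0.5 = alpha * (r + gamma * 0 - 0)
```

Deep Q-Learning replaces the table `Q` with a neural network trained on the same temporal-difference target.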

News 📣:

  • [January 2024] 🚀 Paper accepted at ICLR 2024: Disentangling Time Series Representations via Contrastive Independence-of-Support on l-Variational Inference
  • [September 2023] Paper accepted at NeurIPS 2023: DISCOV
  • [March 2023] Paper accepted at ICML 2023: Temporal Attention Bottleneck is Informative?

Feel free to discover my repositories.

Skills

AngularJS · AWS · Bash · C# · Cypress · Django · Docker · .NET · Elasticsearch · Express · Flask · Git · Go · Java · JavaScript · Jenkins · Kafka · Kubernetes · Linux · MongoDB · MySQL · Node.js · PHP · PostgreSQL · Postman · Python · RabbitMQ · React · Redis · Travis CI · TypeScript · Zapier

Pinned repositories

  1. Institut-Polytechnique-de-Paris/time-disentanglement-lib (public template) — 🔥 [ICLR 2024] Disentangling Time Series Representations via Contrastive based l-Variational Inference. Python.

  2. MoroccoAI-Data-Challenge-Nvidia-ANRT-MoroccoAI (public) — MoroccoAI Challenge; this paper won the Nvidia - ANRT - MoroccoAI challenge (conference of December 2021). Jupyter Notebook.

  3. huggingface/transformers (public) — 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. Python.

  4. gluonts (public, forked from awslabs/gluonts) — Probabilistic time series modeling in Python. Python.

  5. Pytorch-Ray-Tune (public) — Hyperparameter tuning. Jupyter Notebook.

  6. TAB-VAE (public) — Temporal Attention Bottleneck for VAE is informative? (ICML 2023). Python.