Awesome papers by date

Here we list papers related to transfer learning by date (starting from 2021-07). For papers older than 2021-07, please refer to the papers-by-topic list, which contains more papers.

2023-12

  • Multi-Modal Domain Adaptation Across Video Scenes for Temporal Video Grounding [arxiv]

    • Multi-modal domain adaptation 多模态领域自适应
  • Domain Adaptive Graph Classification [arxiv]

    • Domain adaptive graph classification 域适应的图分类
  • Understanding and Estimating Domain Complexity Across Domains [arxiv]

    • Understanding and estimating domain complexity 解释领域复杂性
  • Prompt-based Domain Discrimination for Multi-source Time Series Domain Adaptation [arxiv]

    • Prompt-based domain discrimination for time series domain adaptation 基于prompt的时间序列域自适应
  • NeurIPS'23 SwapPrompt: Test-Time Prompt Adaptation for Vision-Language Models [arxiv]

    • Test-time prompt adaptation for vision language models 对视觉-语言大模型的测试时prompt自适应
  • AAAI'24 Relax Image-Specific Prompt Requirement in SAM: A Single Generic Prompt for Segmenting Camouflaged Objects [arxiv][code]

    • A training-free test-time adaptation approach that relaxes the instance-specific prompt requirement in SAM.
  • Open Domain Generalization with a Single Network by Regularization Exploiting Pre-trained Features [arxiv]

    • Open domain generalization with a single network 用单一网络结构进行开放式domain generalization
  • Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation [arxiv]

    • Using vision foundation models for domain generalized semantic segmentation 用视觉基础模型进行域泛化语义分割
  • DARNet: Bridging Domain Gaps in Cross-Domain Few-Shot Segmentation with Dynamic Adaptation [arxiv]

    • Dynamic adaptation for cross-domain few-shot segmentation 动态适配用于跨领域小样本分割
  • A Unified Framework for Unsupervised Domain Adaptation based on Instance Weighting [arxiv]

    • Instance weighting for domain adaptation 样本加权用于领域自适应
  • Target-agnostic Source-free Domain Adaptation for Regression Tasks [arxiv]

    • Target-agnostic source-free DA for regression 用于回归任务的source-free DA
  • On the Out-Of-Distribution Robustness of Self-Supervised Representation Learning for Phonocardiogram Signals [arxiv]

    • OOD robustness of self-supervised representation learning for phonocardiogram signals 心音图信号自监督表征的OOD鲁棒性
  • Student Activity Recognition in Classroom Environments using Transfer Learning [arxiv]

    • Using transfer learning to recognize student activities 用迁移学习来识别学生课堂行为

2023-11

  • A2XP: Towards Private Domain Generalization [arxiv]

    • Private domain generalization 隐私保护的domain generalization
  • Layer-wise Auto-Weighting for Non-Stationary Test-Time Adaptation [arxiv]

    • Auto-weighting for test-time adaptation 自动权重的TTA
  • Domain Generalization by Learning from Privileged Medical Imaging Information [arxiv]

    • Domain generalization by learning from privileged medical imaging information
  • SSL-DG: Rethinking and Fusing Semi-supervised Learning and Domain Generalization in Medical Image Segmentation [arxiv]

    • Semi-supervised learning + domain generalization 把半监督和领域泛化结合在一起
  • WACV'24 Learning Class and Domain Augmentations for Single-Source Open-Domain Generalization [arxiv]

    • Class and domain augmentation for single-source open-domain DG 结合类和domain增强做单源DG
  • Proposal-Level Unsupervised Domain Adaptation for Open World Unbiased Detector [arxiv]

    • Proposal-level unsupervised domain adaptation
  • Robust Fine-Tuning of Vision-Language Models for Domain Generalization [arxiv]

    • Robust fine-tuning for domain generalization 用于领域泛化的鲁棒微调
  • NeurIPS 2023 Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models [arxiv]

    • Distill OOD robustness from vision-language foundational models 从VLM模型中蒸馏出OOD鲁棒性
  • UbiComp 2024 Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition [arxiv]

    • Test-time adaptation for activity recognition 测试时adaptation用于行为识别 (a generic test-time adaptation sketch follows this list)

2023-10

  • PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization [arxiv]

    • Prompt-driven style generation for source-free domain generalization
  • A Survey of Heterogeneous Transfer Learning [arxiv]

    • A recent survey of heterogeneous transfer learning 一篇最近的关于异构迁移学习的综述
  • Equivariant Adaptation of Large Pre-Trained Models [arxiv]

    • Equivariant adaptation of large pre-trained models 对大模型进行等变自适应
  • Effective and Parameter-Efficient Reusing Fine-Tuned Models [arxiv]

    • Effective and parameter-efficient reusing fine-tuned models 高效使用预训练模型
  • Prompting-based Efficient Temporal Domain Generalization [arxiv]

    • Prompt based temporal domain generalization 基于prompt的时间域domain generalization
  • Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [arxiv]

    • Noisy model learning: fine-tuning to suppress the bad effect of noisy pretraining data 通过使用轻量级finetune减少噪音预训练数据对下游任务的影响
  • ZooPFL: Exploring Black-box Foundation Models for Personalized Federated Learning [arxiv]

    • Black-box foundation models for personalized federated learning 黑盒基础模型进行个性化联邦学习

2023-09

  • Domain Generalization with Fourier Transform and Soft Thresholding [arxiv]

    • Domain generalization with Fourier transform 基于傅里叶变换和软阈值进行domain generalization (a generic Fourier-augmentation sketch follows this month's list)
  • DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning [arxiv]

    • Decomposed prompt tuning for parameter-efficient fine-tuning 基于分解prompt tuning的参数高效微调
  • Better Practices for Domain Adaptation [arxiv]

    • Better practice for domain adaptation
  • Domain Adaptation for Efficiently Fine-tuning Vision Transformer with Encrypted Images [arxiv]

    • Domain adaptation for efficient ViT
  • Robust Activity Recognition for Adaptive Worker-Robot Interaction using Transfer Learning [arxiv]

    • Activity recognition using domain adaptation

2023-08

  • IJCV'23 Exploring Vision-Language Models for Imbalanced Learning [arxiv] [code]

    • Explore vision-language models for imbalanced learning 探索视觉大模型在不平衡问题上的表现
  • ICCV'23 Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning [arxiv] [code]

    • Achieving a trade-off between adversarial robustness and generalization 达到对抗鲁棒性和泛化能力的trade off
  • ICCV'23 Domain-Specificity Inducing Transformers for Source-Free Domain Adaptation [arxiv]

    • Domain-specificity for source-free DA 用领域特异性驱动的source-free DA
  • Unsupervised Domain Adaptation via Domain-Adaptive Diffusion [arxiv]

    • Domain-adaptive diffusion for domain adaptation 领域自适应的diffusion
  • Multi-Scale and Multi-Layer Contrastive Learning for Domain Generalization [arxiv]

    • Multi-scale and multi-layer contrastive learning for DG 多尺度和多层对比学习用于DG
  • Exploring the Transfer Learning Capabilities of CLIP in Domain Generalization for Diabetic Retinopathy [arxiv]

    • Domain generalization for diabetic retinopathy 用领域泛化进行糖尿病视网膜病
  • Federated Fine-tuning of Billion-Sized Language Models across Mobile Devices [arxiv]

    • Federated fine-tuning for large models 大模型联邦微调
  • Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [arxiv]

    • Source-free domain adaptation for MRI analysis
  • Towards Realistic Unsupervised Fine-tuning with CLIP [arxiv]

    • Unsupervised fine-tuning of CLIP
  • Fine-tuning can cripple your foundation model; preserving features may be the solution [arxiv]

    • Fine-tuning can cripple a foundation model; preserving features may help
  • Exploring Transfer Learning in Medical Image Segmentation using Vision-Language Models [arxiv]

    • Transfer learning for medical image segmentation
  • Transfer Learning for Portfolio Optimization [arxiv]

    • Transfer learning for portfolio optimization
  • NormAUG: Normalization-guided Augmentation for Domain Generalization [arxiv]

    • Normalization augmentation for domain generalization

2023-07

  • Benchmarking Algorithms for Federated Domain Generalization [arxiv]

    • Benchmarking algorithms for federated domain generalization 对联邦域泛化算法进行的benchmark
  • DISPEL: Domain Generalization via Domain-Specific Liberating [arxiv]

    • Domain generalization via domain-specific liberating
  • Review of Large Vision Models and Visual Prompt Engineering [arxiv]

    • A survey of large vision models and visual prompt engineering 一个关于大视觉模型与视觉prompt工程的综述
  • Intra- & Extra-Source Exemplar-Based Style Synthesis for Improved Domain Generalization [arxiv]

    • Exemplar-based style synthesis for domain generalization 基于样例的风格合成用于DG
  • SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation [arxiv]

    • Using SAM for domain adaptation 使用segment anything进行domain adaptation
  • Unified Transfer Learning Models for High-Dimensional Linear Regression [arxiv]

    • Transfer learning for high-dimensional linear regression 迁移学习用于高维线性回归

2023-06

  • Pruning for Better Domain Generalizability [arxiv]

    • Using pruning for better domain generalization 使用剪枝操作进行domain generalization
  • TMLR'23 Generalizability of Adversarial Robustness Under Distribution Shifts [openreview]

    • Evaluate the OOD performance of adversarial training 评测对抗训练模型的OOD鲁棒性
  • Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning [arxiv]

    • A guide to parameter-efficient fine-tuning 一个对parameter efficient fine-tuning的全面介绍 (a minimal adapter sketch follows this month's list)
  • ICML'23 A Kernel-Based View of Language Model Fine-Tuning [arxiv]

    • A kernel-based view of language model fine-tuning 一种以kernel的视角来看待fine-tuning的方法
  • ICML'23 Improving Visual Prompt Tuning for Self-supervised Vision Transformers [arxiv]

    • Improving visual prompt tuning for self-supervision 为自监督模型提高其 prompt tuning 表现
  • Cross-Database and Cross-Channel ECG Arrhythmia Heartbeat Classification Based on Unsupervised Domain Adaptation [arxiv]

    • ECG heartbeat classification using unsupervised domain adaptation 用无监督DA来进行ECG心跳分类
  • Real-Time Online Unsupervised Domain Adaptation for Real-World Person Re-identification [arxiv]

    • Real-time online unsupervised domain adaptation for REID 无监督DA用于REID
  • Federated Domain Generalization: A Survey [arxiv]

    • A survey on federated domain generalization 一篇关于联邦域泛化的综述
  • Domain Generalization for Domain-Linked Classes [arxiv]

    • Domain generalization for domain-linked classes
  • Can We Evaluate Domain Adaptation Models Without Target-Domain Labels? A Metric for Unsupervised Evaluation of Domain Adaptation [arxiv]

    • Evaluate domain adaptation models 评测domain adaptation的模型
  • Universal Test-time Adaptation through Weight Ensembling, Diversity Weighting, and Prior Correction [arxiv]

    • Universal test-time adaptation
  • Adapting Pre-trained Language Models to Vision-Language Tasks via Dynamic Visual Prompting [arxiv]

    • Using dynamic visual prompting for model adaptation 用动态视觉prompt进行模型适配

2023-05

  • Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup [arxiv]

    • Why does selective mixup help with distribution shifts? 系统性研究为什么mixup对OOD有效
  • ACL'23 Parameter-Efficient Fine-Tuning without Introducing New Latency [arxiv]

    • Parameter-efficient finetuning 参数高效的finetune
  • Universal Domain Adaptation from Foundation Models [arxiv]

    • Using foundation models for universal domain adaptation
  • Ahead-of-Time P-Tuning [arxiv]

    • Ahead-of-time P-tuning for language models
  • Benchmarking Low-Shot Robustness to Natural Distribution Shifts [arxiv]

    • Low-shot robustness to distribution shifts

2023-04

  • Multi-Source to Multi-Target Decentralized Federated Domain Adaptation [arxiv]

    • Multi-source to multi-target federated domain adaptation 多源多目标的联邦域自适应
  • ICML'23 AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation [arxiv]

    • Non-parametric classifier for test-time adaptation 非参数化分类器进行测试时adaptation
  • Improved Test-Time Adaptation for Domain Generalization [arxiv]

    • Improved test-time adaptation for domain generalization
  • Reweighted Mixup for Subpopulation Shift [arxiv]

    • Reweighted mixup for subpopulation shift
  • CVPR'23 Zero-shot Generative Model Adaptation via Image-specific Prompt Learning [arxiv]

    • Zero-shot generative model adaptation via image-specific prompt learning 零样本的生成模型adaptation
  • Source-free Domain Adaptation Requires Penalized Diversity [arxiv]

    • Source-free DA requires penalized diversity
  • Domain Generalization with Adversarial Intensity Attack for Medical Image Segmentation [arxiv]

    • Domain generalization for medical segmentation 用domain generalization进行医学分割
  • CVPR'23 Meta-causal Learning for Single Domain Generalization [arxiv]

    • Meta-causal learning for domain generalization
  • Domain Generalization In Robust Invariant Representation [arxiv]

    • Domain generalization in robust invariant representation
  • Beyond Empirical Risk Minimization: Local Structure Preserving Regularization for Improving Adversarial Robustness [arxiv]

    • Local structure preserving regularization for adversarial robustness 通过保留局部结构来提升对抗鲁棒性
  • TFS-ViT: Token-Level Feature Stylization for Domain Generalization [arxiv]

    • Token-level feature stylization for domain generalization 用token-level特征变换进行domain generalization
  • Are Data-driven Explanations Robust against Out-of-distribution Data? [arxiv]

    • Are data-driven explanations robust against OOD data? 探索数据驱动的解释是否是OOD鲁棒的
  • ERM++: An Improved Baseline for Domain Generalization [arxiv]

    • An improved ERM baseline for domain generalization 改进的ERM用于domain generalization
  • CVPR'23 Feature Alignment and Uniformity for Test Time Adaptation [arxiv]

    • Feature alignment for test-time adaptation 使用特征对齐进行测试时adaptation
  • Finding Competence Regions in Domain Generalization [arxiv]

    • Finding competence regions in domain generalization 在DG中发现能力区域
  • CVPR'23 TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [arxiv]

    • Improve generalization and adversarial robustness 同时提高鲁棒性和泛化性
  • CVPR'23 Trainable Projected Gradient Method for Robust Fine-tuning [arxiv]

    • Trainable PGD for robust fine-tuning 可训练的pgd用于鲁棒的微调技术
  • Parameter-Efficient Tuning Makes a Good Classification Head [arxiv]

    • Parameter-efficient tuning makes a good classification head 参数高效的迁移学习成就一个好的分类头
  • Complementary Domain Adaptation and Generalization for Unsupervised Continual Domain Shift Learning [arxiv]

    • Continual domain shift learning using adaptation and generalization 使用 adaptation和DG进行持续分布变化的学习

2023-03

  • CVPR'23 A New Benchmark: On the Utility of Synthetic Data with Blender for Bare Supervised Learning and Downstream Domain Adaptation [arxiv]

    • A new benchmark for domain adaptation 一个对于domain adaptation最新的benchmark
  • Unsupervised domain adaptation by learning using privileged information [arxiv]

    • Domain adaptation by learning using privileged information 使用特权信息进行domain adaptation
  • A Unified Continual Learning Framework with General Parameter-Efficient Tuning [arxiv]

    • A continual learning framework for parameter-efficient tuning 一个对于参数高效迁移的连续学习框架
  • CVPR'23 Sharpness-Aware Gradient Matching for Domain Generalization [arxiv]

    • Sharpness-aware gradient matching for DG 利用梯度匹配进行domain generalization
  • TempT: Temporal consistency for Test-time adaptation [arxiv]

    • Temporal consistency for test-time adaptation 时间一致性用于test-time adaptation
  • TMLR'23 Learn, Unlearn and Relearn: An Online Learning Paradigm for Deep Neural Networks [arxiv]

    • A framework for online learning 一个在线学习的框架
  • ICLR'23 workshop SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models [arxiv]

    • Sparse pre-training and dense fine-tuning
  • CVPR'23 ALOFT: A Lightweight MLP-like Architecture with Dynamic Low-frequency Transform for Domain Generalization [arxiv]

    • A lightweight module for domain generalization 一个用于DG的轻量级模块
  • ICLR'23 Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning [arxiv]

    • Contrastive alignment for vision language models using transfer learning 使用参数高效迁移进行视觉语言模型的对比对齐
  • Probabilistic Domain Adaptation for Biomedical Image Segmentation [arxiv]

    • Probabilistic domain adaptation for biomedical image segmentation 概率的domain adaptation用于生物医疗图像分割
  • Imbalanced Domain Generalization for Robust Single Cell Classification in Hematological Cytomorphology [arxiv]

    • Imbalanced domain generalization for single cell classification 不平衡的DG用于单细胞分类
  • Revisit Parameter-Efficient Transfer Learning: A Two-Stage Paradigm [arxiv]

    • Parameter-efficient transfer learning: a two-stage approach 一种两阶段的参数高效迁移学习
  • Unsupervised Cumulative Domain Adaptation for Foggy Scene Optical Flow [arxiv]

    • Domain adaptation for foggy scene optical flow 领域自适应用于雾场景的光流
  • ICLR'23 AutoTransfer: AutoML with Knowledge Transfer -- An Application to Graph Neural Networks [arxiv]

    • GNN with autoML transfer learning 用于GNN的自动迁移学习
  • Transfer Learning for Real-time Deployment of a Screening Tool for Depression Detection Using Actigraphy [arxiv]

    • Transfer learning for depression detection 迁移学习用于基于体动记录仪的抑郁检测
  • Domain Generalization via Nuclear Norm Regularization [arxiv]

    • Domain generalization via nuclear norm regularization 使用核范数正则化进行domain generalization
  • To Stay or Not to Stay in the Pre-train Basin: Insights on Ensembling in Transfer Learning [arxiv]

    • Ensembling in transfer learning 调研迁移学习中的集成
  • CVPR'23 Masked Images Are Counterfactual Samples for Robust Fine-tuning [arxiv]

    • Masked images for robust fine-tuning 调研masked image对于fine-tuning的影响
  • FedCLIP: Fast Generalization and Personalization for CLIP in Federated Learning [arxiv]

    • Fast generalization for federated CLIP 在联邦中进行快速的CLIP训练
  • Robust Representation Learning with Self-Distillation for Domain Generalization [arxiv]

    • Robust representation learning with self-distillation
  • ICLR-23 Temporal Coherent Test-Time Optimization for Robust Video Classification [arxiv]

    • Temporal distribution shift in video classification
  • WSDM-23 A tutorial on domain generalization [link] | [website]

    • A tutorial on domain generalization

2023-02

  • On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective [arxiv] | [code]

    • Adversarial and OOD evaluation of ChatGPT 对ChatGPT鲁棒性的评测
  • Transfer learning for process design with reinforcement learning [arxiv]

    • Transfer learning for process design with reinforcement learning 使用强化迁移学习进行过程设计
  • Domain Adaptation for Time Series Under Feature and Label Shifts [arxiv]

    • Domain adaptation for time series 用于时间序列的domain adaptation
  • How Reliable is Your Regression Model's Uncertainty Under Real-World Distribution Shifts? [arxiv]

    • Regression models uncertainty for distribution shift 回归模型对于分布漂移的不确定性
  • ICLR'23 SoftMatch: Addressing the Quantity-Quality Tradeoff in Semi-supervised Learning [arxiv]

    • Semi-supervised learning algorithm 解决标签质量问题的半监督学习方法
  • Empirical Study on Optimizer Selection for Out-of-Distribution Generalization [arxiv]

    • Optimizer selection for OOD generalization OOD泛化中的优化器选择
  • ICML'22 Understanding the failure modes of out-of-distribution generalization [arxiv]

    • Understand the failure modes of OOD generalization 探索OOD泛化中的失败现象
  • ICLR'23 Out-of-distribution Representation Learning for Time Series Classification [arxiv]

    • OOD for time series classification 时间序列分类的OOD算法

2023-01

  • ICLR'23 FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning [arxiv]

    • New baseline for semi-supervised learning 半监督学习新算法
  • CLIP the Gap: A Single Domain Generalization Approach for Object Detection [arxiv]

    • Using CLIP for domain generalization object detection 使用CLIP进行域泛化的目标检测
  • Language-Informed Transfer Learning for Embodied Household Activities [arxiv]

    • Transfer learning for robust control in household 在家居机器人上使用强化迁移学习
  • Does progress on ImageNet transfer to real-world datasets? [arxiv]

    • Progress on ImageNet does not always transfer to real-world downstream tasks
  • TPAMI'23 Source-Free Unsupervised Domain Adaptation: A Survey [arxiv]

    • A survey on source-free domain adaptation 关于source-free DA的一个最新综述
  • Discriminative Radial Domain Adaptation [arxiv]

    • Discriminative radial domain adaptation 判别性的放射式domain adaptation

2022-12

  • WACV'23 Cross-Domain Video Anomaly Detection without Target Domain Adaptation [arxiv]

    • Cross-domain video anomaly detection without target domain adaptation 跨域视频异常检测
  • Co-Learning with Pre-Trained Networks Improves Source-Free Domain Adaptation [arxiv]

    • Pre-trained models for source-free domain adaptation 用预训练模型进行source-free DA
  • TMLR'22 A Unified Survey on Anomaly, Novelty, Open-Set, and Out of-Distribution Detection: Solutions and Future Challenges [openreview]

    • A recent survey on OOD/anomaly detection 一篇最新的关于OOD/anomaly detection的综述
  • NeurIPS'18 A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks [paper]

    • Using class-conditional distribution for OOD detection 使用类条件概率进行OOD检测
  • ICLR'22 Discrete Representations Strengthen Vision Transformer Robustness [arxiv]

    • Embed discrete representation for OOD generalization 在ViT中加入离散表征增强OOD性能
  • CONDA: Continual Unsupervised Domain Adaptation Learning in Visual Perception for Self-Driving Cars [arxiv]

    • Continual DA for self-driving cars 连续的domain adaptation用于自动驾驶
  • Finetune like you pretrain: Improved finetuning of zero-shot vision models [arxiv]

    • Improved fine-tuning of zero-shot models 针对zero-shot模型改进fine-tuning

2022-11

  • ECCV-22 DecoupleNet: Decoupled Network for Domain Adaptive Semantic Segmentation [arXiv] [Code]

    • Domain adaptation in semantic segmentation 语义分割域适应
  • Robust Mean Teacher for Continual and Gradual Test-Time Adaptation [arxiv]

    • Mean teacher for test-time adaptation 在测试时用mean teacher进行适配
  • HMOE: Hypernetwork-based Mixture of Experts for Domain Generalization [arxiv]

    • Hypernetwork-based ensembling for domain generalization 超网络构成的集成学习用于domain generalization
  • The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning [arxiv]

    • How OOD robustness evolves throughout fine-tuning 系统总结了fine-tuning过程中OOD鲁棒性的变化
  • GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective [arxiv]

    • OOD for natural language processing evaluation 提出GLUE-X用于OOD在NLP数据上的评估
  • CVPR'22 Delving Deep Into the Generalization of Vision Transformers Under Distribution Shifts [arxiv]

    • Generalization of vision transformers under distribution shifts 评估ViT在分布漂移下的泛化性
  • NeurIPS'22 Models Out of Line: A Fourier Lens on Distribution Shift Robustness [arxiv]

    • A Fourier lens on distribution shift robustness 通过傅里叶视角来看分布漂移的鲁棒性
  • CVPR'22 Does Robustness on ImageNet Transfer to Downstream Tasks? [arxiv]

    • Does robustness on ImageNet transfer to downstream tasks?
  • Normalization Perturbation: A Simple Domain Generalization Method for Real-World Domain Shifts [arxiv]

    • Normalization perturbation for domain generalization 通过归一化扰动来进行domain generalization
  • FIXED: Frustratingly easy domain generalization using Mixup [arxiv]

    • Domain generalization using Mixup 使用Mixup进行domain generalization (a plain Mixup sketch follows this month's list)
  • Learning to Learn Domain-invariant Parameters for Domain Generalization [arxiv]

    • Learning to learn domain-invariant parameters for domain generalization
  • NeurIPS'22 Improved Fine-Tuning by Better Leveraging Pre-Training Data [openreview]

    • Using pre-training data for fine-tuning 用预训练数据来做微调
  • NeurIPS'22 Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [openreview]

    • Adaptive contrastive learning for source-free DA 自适应的对比学习用于source-free DA
  • NeurIPS'22 LOG: Active Model Adaptation for Label-Efficient OOD Generalization [openreview]

    • Model adaptation for label-efficient OOD generalization
  • NeurIPS'22 MetaTeacher: Coordinating Multi-Model Domain Adaptation for Medical Image Classification [openreview]

    • Multi-model domain adaptation for medical image classification 多模型DA用于医疗数据
  • NeurIPS'22 Domain Adaptation under Open Set Label Shift [openreview]

    • Domain adaptation under open set label shift 在开放集的label shift中的DA
  • NeurIPS'22 Domain Generalization without Excess Empirical Risk [openreview]

    • Domain generalization without excess empirical risk
  • NeurIPS'22 FedSR: A Simple and Effective Domain Generalization Method for Federated Learning [openreview]

    • FedSR for federated learning domain generalization 用于联邦学习的domain generalization
  • NeurIPS'22 Probable Domain Generalization via Quantile Risk Minimization [openreview]

    • Domain generalization with quantile risk minimization 用quantile风险最小化的domain generalization
  • NeurIPS'22 Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer [arxiv]

    • Continual learning with backward knowledge transfer 反向知识迁移的持续学习
  • NeurIPS'22 Test Time Adaptation via Conjugate Pseudo-labels [openreview]

    • Test-time adaptation with conjugate pseudo-labels 用伪标签进行测试时adaptation
  • NeurIPS'22 Your Out-of-Distribution Detection Method is Not Robust! [openreview]

    • OOD detection methods are not robust 分布外检测方法不够鲁棒

2022-10

  • NeurIPS'22 Respecting Transfer Gap in Knowledge Distillation [arxiv]

    • Transfer gap in knowledge distillation 知识蒸馏中的迁移gap (a vanilla KD sketch follows this month's list)
  • Transfer of Machine Learning Fairness across Domains [arxiv]

    • Fairness transfer in transfer learning 迁移学习中的公平性迁移
  • On Fine-Tuned Deep Features for Unsupervised Domain Adaptation [arxiv]

    • Fine-tuned features for domain adaptation 微调的特征用于域自适应
  • WACV-23 ConfMix: Unsupervised Domain Adaptation for Object Detection via Confidence-based Mixing [arxiv]

    • Domain adaptation for object detection using confidence mixing 用置信度mix做domain adaptation
  • CVPR-20 Regularizing CNN Transfer Learning With Randomised Regression [arxiv]

    • Using randomized regression to regularize CNN 用随机回归约束CNN迁移学习
  • AAAI-21 TransTailor: Pruning the Pre-trained Model for Improved Transfer Learning [arxiv]

    • Pruning pre-trained model for transfer learning 通过对预训练模型进行剪枝来进行迁移学习
  • PhD thesis Generalizing in the Real World with Representation Learning [arxiv]

    • A PhD thesis about generalization in the real world 一篇关于现实世界如何做generalization的博士论文
  • The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning [arxiv]

    • Evolution of OOD robustness by fine-tuning
  • Visual Prompt Tuning for Test-time Domain Adaptation [arxiv]

    • VPT for test-time adaptation 用prompt tuning进行test-time DA
  • Unsupervised Domain Adaptation for COVID-19 Information Service with Contrastive Adversarial Domain Mixup [arxiv]

    • Domain adaptation for COVID-19 用DA进行COVID-19预测
  • ICONIP'22 IDPL: Intra-subdomain adaptation adversarial learning segmentation method based on Dynamic Pseudo Labels [arxiv]

    • Intra-subdomain adversarial adaptation for segmentation 子领域对抗adaptation用于分割
  • NeurIPS'22 Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks [arxiv]

    • Parameter-efficient multi-task adaptation 参数高效的多任务adaptation
  • Out-of-Distribution Generalization in Algorithmic Reasoning Through Curriculum Learning [arxiv]

    • OOD in algorithmic reasoning 算法reasoning过程中的OOD
  • Towards Out-of-Distribution Adversarial Robustness [arxiv]

    • OOD adversarial robustness OOD对抗鲁棒性
  • TripleE: Easy Domain Generalization via Episodic Replay [arxiv]

    • Easy domain generalization by episodic replay
  • Deep Spatial Domain Generalization [arxiv]

    • Deep spatial domain generalization

2022-09

  • Assaying Out-Of-Distribution Generalization in Transfer Learning [arXiv]

    • A large-scale empirical study of OOD generalization in transfer learning
  • ICML-21 Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization [arxiv]

    • Strong correlation between in-distribution and out-of-distribution accuracy
  • Deep Domain Adaptation for Detecting Bomb Craters in Aerial Images [arxiv]

    • Bomb craters detection using domain adaptation 用DA检测遥感图像中的炮弹弹坑
  • WACV-23 TeST: Test-time Self-Training under Distribution Shift [arxiv]

    • Test-time self-training 测试时训练
  • StyleTime: Style Transfer for Synthetic Time Series Generation [arxiv]

    • Style transfer for time series generation 时间序列生成的风格迁移
  • Robust Domain Adaptation for Machine Reading Comprehension [arxiv]

    • Domain adaptation for machine reading comprehension 机器阅读理解的domain adaptation
  • Generalized representations learning for time series classification [arxiv]

    • OOD for time series classification 域泛化用于时间序列分类
  • USB: A Unified Semi-supervised Learning Benchmark [arxiv] [code]

    • Unified semi-supervised learning codebase 半监督学习统一代码库
  • Test-Time Training with Masked Autoencoders [arxiv]

    • Test-time training with MAE MAE的测试时训练
  • Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models [arxiv]

    • Test-time prompt tuning 测试时的prompt tuning
  • TeST: test-time self-training under distribution shift [arxiv]

    • Test-time self-training 测试时的self-training
  • Language-aware Domain Generalization Network for Cross-Scene Hyperspectral Image Classification [arxiv]

    • Domain generalization for cross-scene hyperspectral image classification 域泛化用于高光谱图像分类
  • IEEE-TMM'22 Uncertainty Modeling for Robust Domain Adaptation Under Noisy Environments [IEEE]

    • Uncertainty modeling for domain adaptation 噪声环境下的domain adaptation
  • Improving Robustness to Out-of-Distribution Data by Frequency-based Augmentation [arxiv]

    • OOD by frequency-based augmentation 通过基于频率的数据增强进行OOD
  • Domain Generalization for Prostate Segmentation in Transrectal Ultrasound Images: A Multi-center Study [arxiv]

    • Domain generalization for prostate segmentation 领域泛化用于前列腺分割
  • Domain Adaptation from Scratch [arxiv]

    • Domain adaptation from scratch
  • Towards Optimization and Model Selection for Domain Generalization: A Mixup-guided Solution [arxiv]

    • Model selection for domain generalization 域泛化中的模型选择问题
  • Conv-Adapter: Exploring Parameter Efficient Transfer Learning for ConvNets

    • Parameter efficient CNN adapter for transfer learning 参数高效的CNN adapter用于迁移学习
  • Equivariant Disentangled Transformation for Domain Generalization under Combination Shift

    • Equivariant disentangled transformation for domain generalization 新的建模domain generalization的思路

2022-08

2022-07

2022-06

2022-05

2022-04

Updated at 2022-04-29:

2022-03

2022-02

2022-01

2021-12

2021-11

2021-10

2021-09

2021-08

2021-07