
What is the difference between Open Intent Detection and Out-of-domain Detection? #11

Closed
wjczf123 opened this issue May 9, 2022 · 4 comments


wjczf123 commented May 9, 2022

Nice work.
I have a question about the difference between Open Intent Detection and Out-of-Domain Detection. It seems that some work [1-4] on Out-of-Domain Detection is closely related to Open Intent Detection.

There are also two works published in TASLP related to this repository [5-6].

  1. GOLD: Improving Out-of-Scope Detection in Dialogues using Data Augmentation
  2. Energy-based Unknown Intent Detection with Data Manipulation
  3. OutFlip: Generating Out-of-Domain Samples for Unknown Intent Detection with Natural Language Attack
  4. Modeling Discriminative Representations for Out-of-Domain Detection with Supervised Contrastive Learning
  5. Learning to Classify Open Intent via Soft Labeling and Manifold Mixup
  6. Towards Textual Out-of-Domain Detection without In-Domain Labels
@topDreamer

I have the same question, and I hope to get an answer from the author. Thanks a lot~

@topDreamer

@HanleiZhang

HanleiZhang (Member) commented

Hi, first of all, I am very sorry for the late reply. That's a very good question. Indeed, out-of-domain detection and open intent detection differ in several aspects:
(1) Out-of-domain (OOD) detection focuses on detecting whether an example is out-of-distribution [1], so it is essentially a binary classification task. Open intent detection, by contrast, also needs to distinguish among the specific known classes of in-domain (ID) samples.
(2) The evaluation metrics of OOD detection and open intent detection are rather different. The former usually uses AUROC, AUPR, and FPR to evaluate the performance of a binary classifier, while the latter needs to evaluate performance on both the known classes and the open class (a minimal sketch contrasting the two protocols follows this list).
(3) Many OOD detection methods need OOD samples during training: some use labeled OOD data [1, 2], while others use pseudo (generated or constructed) OOD data [3, 4, 5]. Open intent detection does not use labeled OOD data, though it may use pseudo OOD data during training. The paper [6] you mentioned belongs to open intent detection. Thanks for your paper recommendations [5, 6].
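To make difference (2) concrete, here is a minimal sketch (not from the thread; the toy data, the max-softmax-probability score, and the rejection threshold are all illustrative assumptions) contrasting the binary OOD evaluation with the (K+1)-way open intent evaluation, using scikit-learn:

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             roc_curve, f1_score)

rng = np.random.default_rng(0)

# Toy setup: 3 known intent classes (0, 1, 2) plus an open/OOD class (3).
y_true = rng.integers(0, 4, size=200)        # ground-truth labels
probs = rng.dirichlet(np.ones(3), size=200)  # softmax over the 3 known classes
confidence = probs.max(axis=1)               # max softmax probability (MSP)

# (a) OOD detection: a binary ID-vs-OOD problem scored with AUROC/AUPR/FPR.
is_ood = (y_true == 3).astype(int)
ood_score = -confidence                      # lower confidence => more likely OOD
print("AUROC:", roc_auc_score(is_ood, ood_score))
print("AUPR: ", average_precision_score(is_ood, ood_score))
fpr, tpr, _ = roc_curve(is_ood, ood_score)
print("FPR@95TPR:", fpr[np.searchsorted(tpr, 0.95)])

# (b) Open intent detection: (K+1)-way classification, where low-confidence
# inputs are assigned to the open class and metrics cover ALL K+1 classes.
threshold = 0.5                              # hypothetical rejection threshold
y_pred = probs.argmax(axis=1)
y_pred[confidence < threshold] = 3           # reject to the open class
print("macro-F1 over known + open classes:",
      f1_score(y_true, y_pred, average="macro"))
```

With random toy data the scores themselves are meaningless; the point is that (a) never asks which known class an ID sample belongs to, while (b) is graded on the known classes and the open class jointly.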

Finally, we would like to recommend our latest paper, Learning Discriminative Representations and Decision Boundaries for Open Intent Detection, which stresses the difference between OOD detection and open intent detection. Enjoy it!

[1] A Baseline for Detecting Misclassified And Out-of-Distribution Examples in Neural Networks.
[2] GOLD: Improving Out-of-Scope Detection in Dialogues using Data Augmentation.
[3] OutFlip: Generating Out-of-Domain Samples for Unknown Intent Detection with Natural Language Attack.
[4] Modeling Discriminative Representations for Out-of-Domain Detection with Supervised Contrastive Learning.
[5] Towards Textual Out-of-Domain Detection without In-Domain Labels.
[6] Learning to Classify Open Intent via Soft Labeling and Manifold Mixup.

HanleiZhang pinned this issue Jul 20, 2022
wjczf123 (Author) commented Aug 7, 2022

Thanks for your reply. I get it.

wjczf123 closed this as completed Aug 7, 2022