SpanCategorizer and Transformers #10059

I was going to suggest using `spacy init config`, but I see that it doesn't have a default transformer option for spancat yet (since spancat is still experimental, it's not as well integrated into the defaults). A transformer config for the `ner_spancat` demo project could look like this:

```ini
[paths]
train = null
dev = null
vectors = null
init_tok2vec = null

[system]
gpu_allocator = "pytorch"
seed = 0

[nlp]
lang = "id"
pipeline = ["transformer","spancat"]
batch_size = 128
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}

[components]

[components.spancat]
factory = "spancat"
max_positive = null
scorer = {"@scorers":"s…
```
