Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
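Typical usage of this library (Stanza) looks roughly like the sketch below: download the English models once, build a pipeline with the advertised processors, and read tags, dependency relations, and entities off the returned document.

```python
import stanza

# Fetch the English models once, then build a pipeline covering
# tokenization, POS tagging, lemmatization, parsing, and NER.
stanza.download("en")
nlp = stanza.Pipeline(lang="en", processors="tokenize,mwt,pos,lemma,depparse,ner")

doc = nlp("Barack Obama was born in Hawaii.")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.head, word.deprel)

# Named entities found by the NER processor.
print([(ent.text, ent.type) for ent in doc.ents])
```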
Trankit is a Light-Weight Transformer-based Python Toolkit for Multilingual Natural Language Processing
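A rough sketch of how a Trankit pipeline is usually invoked; the "english" model name and the exact shape of the output dictionary are taken from its documentation and should be treated as assumptions here.

```python
from trankit import Pipeline

# Initialize a transformer-based pipeline for English.
p = Pipeline("english")

# Calling the pipeline on raw text returns a nested dict of sentences
# and tokens annotated with POS tags, lemmas, and dependency relations.
output = p("Trankit is a light-weight transformer-based toolkit.")
for sentence in output["sentences"]:
    for token in sentence["tokens"]:
        print(token["text"], token.get("upos"), token.get("deprel"))
```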
Natural Language Processing Pipeline - Sentence Splitting, Tokenization, Lemmatization, Part-of-speech Tagging and Dependency Parsing
Reference code for SyntaxNet
A single model that parses Universal Dependencies across 75 languages. Given a sentence, jointly predicts part-of-speech tags, morphology tags, lemmas, and dependency trees.
Repository for the Georgetown University Multilayer Corpus (GUM)
Yet Another (natural language) Parser
Python framework for processing Universal Dependencies data
BERT fine-tuning for POS tagging (Keras)
A framework to convert Universal Dependencies to Logical Forms
HuSpaCy: industrial-strength Hungarian natural language processing
A Universal Dependencies (UD) treebank for Portuguese.
A minimal, pure Python library to interface with CoNLL-U format files.
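Assuming this refers to the `conllu` package on PyPI, reading a treebank looks roughly like the sketch below; only field names that are stable across CoNLL-U tooling ("form", "lemma", "head", "deprel") are used.

```python
from conllu import parse

# One sentence in CoNLL-U format; the ten columns must be tab-separated.
rows = [
    ["1", "The", "the", "DET", "DT", "_", "2", "det", "_", "_"],
    ["2", "dog", "dog", "NOUN", "NN", "_", "3", "nsubj", "_", "_"],
    ["3", "barks", "bark", "VERB", "VBZ", "_", "0", "root", "_", "_"],
    ["4", ".", ".", "PUNCT", ".", "_", "3", "punct", "_", "_"],
]
data = "# text = The dog barks.\n" + "\n".join("\t".join(r) for r in rows) + "\n\n"

sentences = parse(data)                  # list of TokenList objects
for token in sentences[0]:
    print(token["form"], token["lemma"], token["head"], token["deprel"])
```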
spaCy + UDPipe
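Assuming this is the spacy-udpipe wrapper, a UDPipe model can be loaded behind spaCy's usual Doc/Token API along these lines:

```python
import spacy_udpipe

spacy_udpipe.download("en")      # fetch the English UDPipe model once
nlp = spacy_udpipe.load("en")

doc = nlp("The quick brown fox jumps over the lazy dog.")
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.dep_, token.head.text)
```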
Framework for probing tasks
An NLP pipeline for Hebrew
CoNLL-U to pandas DataFrame
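The converter's own API is not shown here, but the general idea, flattening tokens into one row per word, can be sketched with `conllu` and pandas; the file name and the chosen columns below are illustrative assumptions.

```python
import pandas as pd
from conllu import parse_incr

rows = []
with open("train.conllu", encoding="utf-8") as f:   # hypothetical treebank file
    for sent_id, sentence in enumerate(parse_incr(f)):
        for token in sentence:
            rows.append({
                "sent_id": sent_id,
                "id": token["id"],
                "form": token["form"],
                "lemma": token["lemma"],
                "head": token["head"],
                "deprel": token["deprel"],
            })

df = pd.DataFrame(rows)   # one row per token, one column per selected CoNLL-U field
print(df.head())
```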
COMBO is a jointly trained tagger, lemmatizer, and dependency parser.
marry.py: A utility for converting Universal Dependencies–annotated corpora to UniMorph
BERT fine-tuning for POS tagging (Google's TensorFlow)