diff --git a/README.md b/README.md
index 6522a9a..f26c638 100644
--- a/README.md
+++ b/README.md
@@ -12,6 +12,7 @@ Table of Contents
 
  * [Review on Multi-modal Data Analytics](#review_on_multi-modal_data_analytics)
  * [Multi-modal Dataset](#multi-modal_dataset)
+ * [Multi-modal Named Entity Recognition](#multi-modal_named_entity_recognition)
  * [Multi-modal Relation Extraction](#multi-modal_relation_extraction)
  * [Multi-modal Event Extraction](#multi-modal_event_extraction)
  * [Multi-modal Representation Learning](#multi-modal_representation_learning)
@@ -46,6 +47,13 @@ Table of Contents
 
 
 
+## multi-modal_named_entity_recognition
+
+
+[【必知必懂论文】之多模态命名实体识别](https://mp.weixin.qq.com/s/6RXhbsVXYscjIDX1gP20iw)
+
+
+
 ## multi-modal_relation_extraction
 
 1. Liu X, Gao F, Zhang Q, et al. **Graph convolution for multimodal information extraction from visually rich documents**[J]. arXiv preprint arXiv:1903.11279, 2019. [[Paper]](https://arxiv.org/abs/1903.11279#:~:text=In%20this%20paper%2C%20we%20introduce%20a%20graph%20convolution,further%20combined%20with%20text%20embeddings%20for%20entity%20extraction.)
@@ -98,8 +106,6 @@ https://dl.acm.org/doi/10.1145/3474085.3476968)
 
 
 16. Hu X, Guo Z, Teng Z, et al. **Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis**[J]. ACL 2023. [[Paper]](https://aclanthology.org/2023.acl-short.27/)
-
-
 
 
 [Relation Extraction in 2018/2019](https://github.com/WindChimeRan/NREPapers2019)