[ACM Computing Surveys '23] Implementations or refactorings of some temporal (a.k.a. dynamic) link prediction methods, together with a summary of related open resources, for the survey paper "Temporal Link Prediction: A Unified Framework, Taxonomy, and Review", accepted by ACM Computing Surveys.

OpenTLP

This repository is the open-source project of a survey paper entitled "Temporal Link Prediction: A Unified Framework, Taxonomy, and Review", which has been accepted by ACM Computing Surveys. It refactors or implements some representative temporal link prediction (TLP, a.k.a. dynamic link prediction) methods (especially those that do not provide source code) based on the unified encoder-decoder framework and terminology (regarding task settings, taxonomy, etc.) introduced in our survey paper. In addition, this repository summarizes some other open-source projects regarding TLP. We will keep updating this repository to include other (SOTA or classic) TLP methods, task settings, dynamic graph datasets, etc.

Note that this repository is not the official implementation of the related methods. Some of the implemented TLP approaches also need careful parameter tuning to achieve their best performance on different datasets and task settings.

Abstract

Dynamic graphs serve as a generic abstraction and description of the evolutionary behaviors of various complex systems (e.g., social networks and communication networks). Temporal link prediction (TLP) is a classic yet challenging inference task on dynamic graphs, which predicts possible future links based on historical topology. The predicted future topology can be used to support advanced applications on real-world systems (e.g., resource pre-allocation) for better system performance. This survey provides a comprehensive review of existing TLP methods. Concretely, we first give the formal problem statements and preliminaries regarding data models, task settings, and learning paradigms that are commonly used in related research. A hierarchical fine-grained taxonomy is further introduced to categorize existing methods in terms of their data models, learning paradigms, and techniques. From a generic perspective, we propose a unified encoder-decoder framework to formulate all the methods reviewed, where different approaches differ only in some components of the framework. Moreover, we envision serving the community with an open-source project, OpenTLP, that refactors or implements some representative TLP methods using the proposed unified framework and summarizes other public resources. Finally, we discuss advanced topics in recent research and highlight possible future directions.
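To make the unified encoder-decoder view concrete, here is a minimal, self-contained sketch. All names and the toy component choices (a time-decayed spectral encoder, an inner-product decoder) are our own illustrative assumptions, not the survey's or the repository's actual formulation; the point is only that an encoder maps historical snapshots to node embeddings and a decoder maps embeddings to a predicted next snapshot.

```python
import numpy as np

# Illustrative sketch (hypothetical names, not the repository's API).
# Encoder: historical snapshots A_1..A_T -> node embeddings Z.
# Decoder: Z -> predicted (weighted) adjacency for snapshot T+1.

def encode(snapshots, dim=4):
    """Toy encoder: truncated eigendecomposition of an exponentially
    time-decayed average of the historical adjacency snapshots."""
    T = len(snapshots)
    decay = np.array([0.5 ** (T - 1 - t) for t in range(T)])
    agg = sum(w * A for w, A in zip(decay / decay.sum(), snapshots))
    vals, vecs = np.linalg.eigh(agg)                  # aggregate is symmetric
    idx = np.argsort(np.abs(vals))[::-1][:dim]        # top-|eigenvalue| dims
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))  # node embeddings Z

def decode(Z):
    """Toy decoder: inner product, giving a weighted prediction."""
    return Z @ Z.T

# Usage on a 3-snapshot toy dynamic graph with 5 nodes
rng = np.random.default_rng(0)
snapshots = []
for _ in range(3):
    A = rng.random((5, 5))
    A = (A + A.T) / 2        # undirected, weighted
    np.fill_diagonal(A, 0)   # no self-loops
    snapshots.append(A)

pred = decode(encode(snapshots))
print(pred.shape)  # (5, 5)
```

Under this view, swapping the encoder (e.g., matrix factorization vs. a neural sequence model) or the decoder/loss yields different TLP methods while the overall pipeline stays the same.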

Citing

If you find this project useful for your research, please cite our survey paper.

@article{qin2022temporal,
  title={Temporal Link Prediction: A Unified Framework, Taxonomy, and Review}, 
  author={Meng Qin and Dit-Yan Yeung},
  journal={ACM Computing Surveys},
  year={2023}
}

If you have any questions regarding this repository, you can contact the author via mengqin_az@foxmail.com.


Notations

  • ESSD - Evenly-Sampled Snapshot Sequence Description, a.k.a. Discrete-Time Dynamic Graph (DTDG) in other literature.

  • UESD - Unevenly-Sampled Edge Sequence Description, a.k.a. Continuous-Time Dynamic Graph (CTDG) in other literature.

(We argue that CTDG does not precisely describe the corresponding data model. Although the time index associated with each edge is defined on a continuous domain in the second data model, the edge sequence used to encode the dynamic topology is still discrete, so the term 'continuous' may be ambiguous in some cases. Therefore, we use ESSD & UESD instead of DTDG & CTDG.)

  • DI - Direct Inference.

  • OTI - Online Training & Inference.

  • OTOG - Offline Training & Online Generalization.

  • HQ - Able to Derive High-Quality Prediction Results for Weighted TLP.

  • D/L-Dep - The original version of a method cannot support weighted TLP, but it can be easily extended to this setting by substituting an appropriate decoder or loss function.
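The ESSD/UESD distinction can be illustrated with a short sketch. The representations and the bucketing helper below are our own illustrative assumptions (not code from this repository): a UESD is a discrete sequence of timestamped edges whose time indices live on a continuous domain, and it can be coarsened into an ESSD by bucketing edges into evenly spaced snapshots.

```python
import numpy as np

# UESD: a discrete sequence of timestamped edges (u, v, t);
# the timestamps t are real-valued (continuous domain).
uesd = [(0, 1, 0.3), (1, 2, 0.9), (0, 2, 1.4), (2, 3, 2.75)]

def uesd_to_essd(edges, num_nodes, num_snapshots, horizon):
    """Bucket timestamped edges into evenly sampled adjacency snapshots
    (hypothetical helper for illustration)."""
    width = horizon / num_snapshots
    snaps = [np.zeros((num_nodes, num_nodes)) for _ in range(num_snapshots)]
    for u, v, t in edges:
        k = min(int(t // width), num_snapshots - 1)
        snaps[k][u, v] = snaps[k][v, u] = 1.0   # undirected, unweighted
    return snaps

# ESSD: an evenly-sampled snapshot sequence over the horizon [0, 3)
essd = uesd_to_essd(uesd, num_nodes=4, num_snapshots=3, horizon=3.0)
print([int(A.sum()) // 2 for A in essd])  # edges per snapshot: [2, 1, 1]
```

Note that the bucketing loses the exact timestamps, which is precisely why some methods operate on the edge sequence directly rather than on snapshots.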

Implemented or Refactored TLP Methods

We implement some representative TLP methods using MATLAB and PyTorch, with the corresponding code and data under the directories './Matlab' and './Python'.

For each method implemented in MATLAB, [method_name].m provides the functions that define the encoder and decoder, with the loss function derived during the optimization of the encoder. [method_name]_demo.m demonstrates how to use these functions. To run the demonstration code, please first unzip data.zip in ./Matlab/data.

For each method implemented in PyTorch, a set of classes defines the encoder, decoder, and loss function, placed under the directory './Python/[method_name]'. Furthermore, [method_name].py demonstrates how to use these classes. To run the demonstration code, please first unzip data.zip in ./Python/data. In particular, these PyTorch implementations can be sped up via GPUs.
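The class layout described above can be sketched as follows. This is a hedged, minimal PyTorch example with names of our own choosing (`Encoder`, `Decoder`, `loss_fn` are illustrative, not the repository's actual classes): the encoder, decoder, and loss are kept separate so that a method can swap any one component.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy encoder: a GRU over the rows of the snapshot sequence,
    yielding one embedding per node (illustrative, not the repo's code)."""
    def __init__(self, num_nodes, dim):
        super().__init__()
        self.gru = nn.GRU(input_size=num_nodes, hidden_size=dim,
                          batch_first=True)
    def forward(self, snapshots):                       # (T, N, N)
        out, _ = self.gru(snapshots.permute(1, 0, 2))   # -> (N, T, dim)
        return out[:, -1, :]                            # last-step embedding

class Decoder(nn.Module):
    """Inner-product decoder for a weighted adjacency prediction."""
    def forward(self, z):
        return z @ z.T

def loss_fn(pred, target):
    return torch.mean((pred - target) ** 2)

# One training step on toy data; call .to('cuda') on the modules and
# tensors to exploit GPU speed-up when available.
T, N, dim = 3, 5, 4
snapshots = torch.rand(T, N, N)
target = torch.rand(N, N)
enc, dec = Encoder(N, dim), Decoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-2)
pred = dec(enc(snapshots))
loss = loss_fn(pred, target)
opt.zero_grad(); loss.backward(); opt.step()
print(pred.shape)  # torch.Size([5, 5])
```

Keeping the three pieces as separate classes is what makes D/L-Dep extensions straightforward: only the decoder or loss needs to change to support weighted TLP.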

Details of the implemented TLP methods are summarized as follows.

| Methods | Publication Venues | Data Models | Paradigms | Level | Attributes | Weighted TLP | Language |
|---|---|---|---|---|---|---|---|
| CRJMF [1] | CIKM 2011 | ESSD | OTI | 1 | Static | Able | MATLAB |
| GrNMF [2] | Physica A 2018 | ESSD | OTI | 1 | N/A | Able | MATLAB |
| DeepEye [3] | BDMA 2018 | ESSD | OTI | 1 | N/A | Able | MATLAB |
| AM-NMF [4] | SIGCOMM 2018 NetAI WKSP | ESSD | OTI | 1 | N/A | Able | MATLAB |
| TMF [5] | WSDM 2017 | ESSD | OTI | 1 | N/A | Able | Python |
| LIST [6] | IJCAI 2017 | ESSD | OTI | 1 | N/A | Able | Python |
| Dyngraph2vec [7] | KBS 2020 | ESSD | OTOG | 1 | N/A | Able | |