From e81aabf37c3c950fff4aa416edb4fe20fc17e36b Mon Sep 17 00:00:00 2001 From: pratzohol Date: Thu, 24 Oct 2024 14:15:00 +0530 Subject: [PATCH] :) --- .../1-icl-over-graphs-GPPT.md | 7 ++-- .../2-icl-over-graphs-PRODIGY.md | 4 --- .../3-one-for-all-ofa.md} | 7 +--- .../Graphs_with_LLMs/2-talk-like-graph.md | 20 ------------ .../3-potential-llm-graph-learning.md | 14 -------- docs/Notes/Maths/bigoplus.md | 2 -- docs/Notes/Miscellaneous/erdos-renyl-model.md | 4 +-- .../Miscellaneous/in-context-learning.md | 7 +--- docs/Notes/Miscellaneous/types-of-learning.md | 5 --- docs/Notes/list-of-papers.md | 6 ++-- docs/Python/quick-pytorch.md | 32 ------------------- 11 files changed, 8 insertions(+), 100 deletions(-) rename docs/Notes/{Graphs_with_LLMs/1-one-for-all-ofa.md => Graph_Neural_Networks/3-one-for-all-ofa.md} (96%) delete mode 100644 docs/Notes/Graphs_with_LLMs/2-talk-like-graph.md delete mode 100644 docs/Notes/Graphs_with_LLMs/3-potential-llm-graph-learning.md delete mode 100644 docs/Python/quick-pytorch.md diff --git a/docs/Notes/Graph_Neural_Networks/1-icl-over-graphs-GPPT.md b/docs/Notes/Graph_Neural_Networks/1-icl-over-graphs-GPPT.md index 26d53b8..4f5fdb7 100644 --- a/docs/Notes/Graph_Neural_Networks/1-icl-over-graphs-GPPT.md +++ b/docs/Notes/Graph_Neural_Networks/1-icl-over-graphs-GPPT.md @@ -1,8 +1,5 @@ --- title: "Prompting : GPPT" -tags: - - Prompting - - Graphs --- # Prompting over Graphs : GPPT @@ -48,7 +45,7 @@ The orthogonal prompt initialization and regularization are proposed to separate - **GCC** : Leverages contrastive learning to capture the universal network topological properties across multiple networks. - **GPT-GNN** : Introduces a self-supervised attributed graph generation task to pre-train GNN models that can capture the structural and semantic properties of the graph. -- **L2P-GNN** : Utilizes meta-learning to learn the fine-tune strategy during the pre-training process. 
+- **L2P-GNN** : Utilizes meta-learning to learn the fine-tune strategy during the pre-training process. ## 4. Preliminary Knowledge @@ -198,5 +195,5 @@ $$\mathcal{L_o} = \sum_{m} || E^m (E^m)^T - I ||_F^2$$ [//begin]: # "Autogenerated link references for markdown compatibility" [in-context-learning|In-context Learning (ICL)]: ../Miscellaneous/in-context-learning "In-context Learning (ICL)" -[icl-over-graphs-PRODIGY|ICL : PRODIGY]: icl-over-graphs-prodigy "ICL : PRODIGY" +[Notes/Graph_Neural_Networks/2-icl-over-graphs-PRODIGY|ICL : PRODIGY]: 2-icl-over-graphs-PRODIGY "ICL : PRODIGY" [//end]: # "Autogenerated link references" \ No newline at end of file diff --git a/docs/Notes/Graph_Neural_Networks/2-icl-over-graphs-PRODIGY.md b/docs/Notes/Graph_Neural_Networks/2-icl-over-graphs-PRODIGY.md index 79ca223..c97922a 100644 --- a/docs/Notes/Graph_Neural_Networks/2-icl-over-graphs-PRODIGY.md +++ b/docs/Notes/Graph_Neural_Networks/2-icl-over-graphs-PRODIGY.md @@ -1,9 +1,5 @@ --- title: "ICL : PRODIGY" -tags: - - Prompting - - ICL - - Graphs --- # ICL over Graphs : PRODIGY diff --git a/docs/Notes/Graphs_with_LLMs/1-one-for-all-ofa.md b/docs/Notes/Graph_Neural_Networks/3-one-for-all-ofa.md similarity index 96% rename from docs/Notes/Graphs_with_LLMs/1-one-for-all-ofa.md rename to docs/Notes/Graph_Neural_Networks/3-one-for-all-ofa.md index c1d0b81..9b93296 100644 --- a/docs/Notes/Graphs_with_LLMs/1-one-for-all-ofa.md +++ b/docs/Notes/Graph_Neural_Networks/3-one-for-all-ofa.md @@ -1,10 +1,5 @@ --- title: "OneForAll (OFA)" -tags: - - Prompting - - LLMs - - Graphs - - ICL --- # ICL over Graphs : OFA @@ -85,6 +80,6 @@ _**NOTE**_ : Definition of [[in-context-learning|In-context Learning (ICL)]] is [//begin]: # "Autogenerated link references for markdown compatibility" -[icl-over-graphs-prodigy|ICL : PRODIGY]: ../Graph_Neural_Networks/icl-over-graphs-prodigy "ICL : PRODIGY" +[Notes/Graph_Neural_Networks/2-icl-over-graphs-PRODIGY|ICL : PRODIGY]: 
../Graph_Neural_Networks/2-icl-over-graphs-PRODIGY "ICL : PRODIGY" [in-context-learning|In-context Learning (ICL)]: ../Miscellaneous/in-context-learning "In-context Learning (ICL)" [//end]: # "Autogenerated link references" \ No newline at end of file diff --git a/docs/Notes/Graphs_with_LLMs/2-talk-like-graph.md b/docs/Notes/Graphs_with_LLMs/2-talk-like-graph.md deleted file mode 100644 index 95a57ef..0000000 --- a/docs/Notes/Graphs_with_LLMs/2-talk-like-graph.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: "Encoding Graphs for LLMs" -tags: - - Prompting - - LLMs - - Graphs ---- - -# Talk like a Graph : Encoding Graphs for LLMs - -In this note, I will cover the following paper ["Talk like a Graph : Encoding Graphs for LLMs"](https://arxiv.org/pdf/2310.04560.pdf). It provides a comprehensive study of encoding graph-structured data as texts by LLMs. - -## 1. Overview - -1. LLMs performance on graph task varies because of : - - Graph encoding method - - Nature of graph task - - Structure of graph - -2. \ No newline at end of file diff --git a/docs/Notes/Graphs_with_LLMs/3-potential-llm-graph-learning.md b/docs/Notes/Graphs_with_LLMs/3-potential-llm-graph-learning.md deleted file mode 100644 index e23c0b5..0000000 --- a/docs/Notes/Graphs_with_LLMs/3-potential-llm-graph-learning.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: "Potential of LLMs in graph learning" -tags: - - Prompting - - LLMs - - Graphs ---- - -# Exploring the Potential of LLMs in Learning on Graphs - -In this note, I will cover the following paper ["Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs"](https://arxiv.org/pdf/2307.03393.pdf). - -## 1. Overview - diff --git a/docs/Notes/Maths/bigoplus.md b/docs/Notes/Maths/bigoplus.md index 94e86fa..294ff46 100644 --- a/docs/Notes/Maths/bigoplus.md +++ b/docs/Notes/Maths/bigoplus.md @@ -1,7 +1,5 @@ --- title: \bigoplus -tags: - - LaTeX --- ## 1. 
\bigoplus operator diff --git a/docs/Notes/Miscellaneous/erdos-renyl-model.md b/docs/Notes/Miscellaneous/erdos-renyl-model.md index 9cc32ed..de4ec20 100644 --- a/docs/Notes/Miscellaneous/erdos-renyl-model.md +++ b/docs/Notes/Miscellaneous/erdos-renyl-model.md @@ -1,7 +1,5 @@ --- title: "Erdos Renyl Model" -tags: - - Graphs --- # Erdos Renyl Model : Generating Random Graphs @@ -19,4 +17,4 @@ tags: ## 2. References -- https://www.geeksforgeeks.org/erdos-renyl-model-generating-random-graphs/ \ No newline at end of file +- https://www.geeksforgeeks.org/erdos-renyl-model-generating-random-graphs/ \ No newline at end of file diff --git a/docs/Notes/Miscellaneous/in-context-learning.md b/docs/Notes/Miscellaneous/in-context-learning.md index bc27977..e5adf6d 100644 --- a/docs/Notes/Miscellaneous/in-context-learning.md +++ b/docs/Notes/Miscellaneous/in-context-learning.md @@ -1,8 +1,3 @@ ---- -tags: - - ICL ---- - # In-context Learning (ICL) ## 1. Definition @@ -25,5 +20,5 @@ In-context Learning or ICL was defined in ["Language Models are few-shot learner [//begin]: # "Autogenerated link references for markdown compatibility" -[icl-over-graphs-PRODIGY|ICL over Graphs : PRODIGY]: ../Graph_Neural_Networks/icl-over-graphs-PRODIGY "ICL : PRODIGY" +[Notes/Graph_Neural_Networks/2-icl-over-graphs-PRODIGY|ICL over Graphs : PRODIGY]: ../Graph_Neural_Networks/2-icl-over-graphs-PRODIGY "ICL : PRODIGY" [//end]: # "Autogenerated link references" \ No newline at end of file diff --git a/docs/Notes/Miscellaneous/types-of-learning.md b/docs/Notes/Miscellaneous/types-of-learning.md index 5e581f2..c2a9ff0 100644 --- a/docs/Notes/Miscellaneous/types-of-learning.md +++ b/docs/Notes/Miscellaneous/types-of-learning.md @@ -1,8 +1,3 @@ ---- -tags: - - Basics ---- - # Types of Learning There are broadly 3 different types of learning in Machine Learning paradigm: diff --git a/docs/Notes/list-of-papers.md b/docs/Notes/list-of-papers.md index 94653db..d88778d 100644 --- a/docs/Notes/list-of-papers.md 
+++ b/docs/Notes/list-of-papers.md @@ -10,8 +10,8 @@ List of all the papers I've covered till date : | ----- | ------------- | ------------ | | 1. | [GPPT: Graph Pre-training and Prompt Tuning to Generalize Graph Neural Networks](https://dl.acm.org/doi/abs/10.1145/3534678.3539249) | [Click here](./Graph_Neural_Networks/1-icl-over-graphs-GPPT.md) | | 2. | [PRODIGY : Enabling In-context learning over graphs](https://arxiv.org/abs/2305.12600) | [Click here](./Graph_Neural_Networks/2-icl-over-graphs-PRODIGY.md) | -| 3. | [Talk like a Graph : Encoding Graphs for LLMs](https://arxiv.org/pdf/2310.04560.pdf) | [Click here](./Graphs_with_LLMs/2-talk-like-graph.md) | -| 4. | [ONE FOR ALL: TOWARDS TRAINING ONE GRAPH MODEL FOR ALL CLASSIFICATION TASKS](https://arxiv.org/pdf/2310.00149v1.pdf) | [Click here](./Graphs_with_LLMs/1-one-for-all-ofa.md) | -| 5. | [Exploring the Potential of LLMs in Learning on Graphs](https://arxiv.org/pdf/2307.03393.pdf) | [Click here](./Graphs_with_LLMs/3-potential-llm-graph-learning.md) | +| 3. | [Talk like a Graph : Encoding Graphs for LLMs](https://arxiv.org/pdf/2310.04560.pdf) | [Click here](Graph_Neural_Networks/4-talk-like-graph.md) | +| 4. | [ONE FOR ALL: TOWARDS TRAINING ONE GRAPH MODEL FOR ALL CLASSIFICATION TASKS](https://arxiv.org/pdf/2310.00149v1.pdf) | [Click here](Graph_Neural_Networks/3-one-for-all-ofa.md) | +| 5. | [Exploring the Potential of LLMs in Learning on Graphs](https://arxiv.org/pdf/2307.03393.pdf) | [Click here](Graph_Neural_Networks/3-potential-llm-graph-learning.md) | | 6. | TBA | [Click here]() | diff --git a/docs/Python/quick-pytorch.md b/docs/Python/quick-pytorch.md deleted file mode 100644 index 955d878..0000000 --- a/docs/Python/quick-pytorch.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: "Pytorch Nuggets" -tags: - - pytorch ---- - -# Pytorch Nuggets : A collection of short notes - -## 1. Torch and Numpy - -- Default `dtype` of numpy is `float64` whereas for pytorch, it is `float32`. 
-- Thus, use `ndarray.from_numpy().type(torch.float32)` to convert numpy array to pytorch tensor. -- `torch.Tensor.numpy()` converts pytorch tensor to numpy array. Data type will be `float32`. - -## 2. Reshape, View, squeeze, unsqueeze - -- **View** just changes the shape but addresses the same memory. Thus, if you change any value after view operation, it will change the original tensor as well. -- **Reshape** creates a copy of the tensor and thus, changes in reshaped tensor will not affect the original tensor. -- **Squeeze** removes all the dimensions with size 1. For example, `A x 1 x B x 1 X C` will return `A x B x C`. -- **Unsqueeze** adds a dimension of `1` at specified `dim`. - -## 3. Evaluation mode - -- Instead of using `torch.no_grad()`, its better to use `torch.inference_mode()` as it is more efficient. - -## 4. Random Seed - -- Use `torch.manual_seed( )` to set the random seed for all the modules. - -## 5. Mean - -- `dtype` of the tensor can only be `float` or `complex` for `torch.mean()`. \ No newline at end of file
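The PyTorch nuggets deleted above can be condensed into one runnable sketch. Two hedged corrections relative to the removed note: the conversion helper is `torch.from_numpy(arr)` (a `torch` function taking the array, not an `ndarray` method), and `reshape` copies only when a view is impossible (e.g. after a transpose); on contiguous memory it returns a view just like `view` does.

```python
import numpy as np
import torch

# NumPy defaults to float64; PyTorch defaults to float32, so cast on conversion.
arr = np.ones(4)                              # dtype float64
t = torch.from_numpy(arr).type(torch.float32)
assert t.dtype == torch.float32

# view() shares storage: writing through the view mutates the original tensor.
x = torch.zeros(2, 3)
v = x.view(3, 2)
v[0, 0] = 7.0
assert x[0, 0] == 7.0                         # original changed too

# reshape() copies only when a view is impossible, e.g. on a non-contiguous
# transpose; here the write does NOT reach x.
y = x.t().reshape(6)
y[0] = -1.0
assert x[0, 0] == 7.0                         # original unaffected

# squeeze() drops all size-1 dims; unsqueeze(dim) inserts one at `dim`.
z = torch.zeros(2, 1, 3, 1)
assert z.squeeze().shape == (2, 3)
assert z.unsqueeze(0).shape == (1, 2, 1, 3, 1)

# Reproducibility, and inference_mode() for cheaper no-grad evaluation;
# torch.mean() requires a floating-point or complex dtype.
torch.manual_seed(42)
with torch.inference_mode():
    m = torch.rand(2, 2).mean()
assert m.dtype == torch.float32
```

The `view` vs `reshape` distinction is the one worth internalizing: prefer `view` when you rely on shared storage, and treat the result of `reshape` as possibly-a-copy rather than always-a-copy.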