Merge pull request #21 from RUCAIBox/data
get latest code from branch: Data
linzihan-backforward authored Aug 5, 2021
2 parents 1b0da97 + 17b40ff commit b9a72d2
Showing 71 changed files with 536 additions and 388 deletions.
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/general/cdae.rst
@@ -39,7 +39,7 @@ Write the following code to a python file, such as `run.py`
from recbole.quick_start import run_recbole
parameter_dict = {
'training_neg_sample_num': 0
'neg_sampling': None,
}
run_recbole(model='CDAE', dataset='ml-100k', config_dict=parameter_dict)
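For reference, the same setting can also be supplied through a YAML config file instead of ``config_dict``. A minimal sketch, assuming RecBole's standard ``config_file_list`` argument and YAML's ``~`` for ``None`` (the file name ``cdae.yaml`` is hypothetical):

.. code:: python

    from recbole.quick_start import run_recbole

    # cdae.yaml is assumed to contain the single line:
    #   neg_sampling: ~   # YAML null, i.e. disable negative sampling
    run_recbole(model='CDAE', dataset='ml-100k', config_file_list=['cdae.yaml'])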
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/general/enmf.rst
@@ -37,7 +37,7 @@ Write the following code to a python file, such as `run.py`
from recbole.quick_start import run_recbole
parameter_dict = {
'training_neg_sample_num': 0
'neg_sampling': None,
}
run_recbole(model='ENMF', dataset='ml-100k', config_dict=parameter_dict)
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/general/line.rst
@@ -50,7 +50,7 @@ If you want to use ``HyperTuning`` to tune hyper parameters of this model, you can …
.. code:: bash
learning_rate choice [0.01,0.005,0.001,0.0005,0.0001]
training_neg_sample_num choice [1,3,5]
neg_sampling choice [{'uniform':1}, {'uniform':3}, {'uniform':5}]
second_order_loss_weight choice [0.3,0.6,1]
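To launch a search over these ranges from Python, the ``HyperTuning`` helper can be used. A minimal sketch, assuming ``HyperTuning`` in ``recbole.trainer`` and ``objective_function`` in ``recbole.quick_start``; the file names ``line.hyper`` and ``line.yaml`` are hypothetical:

.. code:: python

    from recbole.trainer import HyperTuning
    from recbole.quick_start import objective_function

    # line.hyper holds the ranges listed above, one parameter per line;
    # line.yaml pins the fixed settings (model: LINE, dataset: ml-100k).
    hp = HyperTuning(objective_function, algo='exhaustive',
                     params_file='line.hyper',
                     fixed_config_file_list=['line.yaml'])
    hp.run()               # evaluates every combination in the grid
    print(hp.best_params)  # best hyper-parameter setting found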
Note that these hyper parameter ranges are provided for reference only; we cannot guarantee that they are optimal for this model.
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/general/macridvae.rst
@@ -61,7 +61,7 @@ Write the following code to a python file, such as `run.py`
from recbole.quick_start import run_recbole
parameter_dict = {
'training_neg_sample_num': 0
'neg_sampling': None,
}
run_recbole(model='MacridVAE', dataset='ml-100k', config_dict=parameter_dict)
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/general/multidae.rst
@@ -36,7 +36,7 @@ Write the following code to a python file, such as `run.py`
from recbole.quick_start import run_recbole
parameter_dict = {
'training_neg_sample_num': 0
'neg_sampling': None,
}
run_recbole(model='MultiDAE', dataset='ml-100k', config_dict=parameter_dict)
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/general/multivae.rst
@@ -38,7 +38,7 @@ Write the following code to a python file, such as `run.py`
from recbole.quick_start import run_recbole
parameter_dict = {
'training_neg_sample_num': 0
'neg_sampling': None,
}
run_recbole(model='MultiVAE', dataset='ml-100k', config_dict=parameter_dict)
6 changes: 3 additions & 3 deletions docs/source/user_guide/model/general/ract.rst
@@ -45,7 +45,7 @@ Running with RecBole
config_dict = {
'train_stage': 'actor_pretrain',
'pretrain_epochs': 150,
'training_neg_sample_num': 0,
'neg_sampling': None,
}
run_recbole(model='RaCT', dataset='ml-100k',
config_dict=config_dict, saved=False)
@@ -66,7 +66,7 @@ And then:
'train_stage': 'critic_pretrain',
'pretrain_epochs': 50,
'pre_model_path': './saved/RaCT-ml-100k-150.pth',
'training_neg_sample_num': 0,
'neg_sampling': None,
}
run_recbole(model='RaCT', dataset='ml-100k',
config_dict=config_dict, saved=False)
@@ -86,7 +86,7 @@ And then:
config_dict = {
'train_stage': 'finetune',
'pre_model_path': './saved/RaCT-ml-100k-50.pth',
'training_neg_sample_num': 0,
'neg_sampling': None,
}
run_recbole(model='RaCT', dataset='ml-100k',
config_dict=config_dict)
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/general/recvae.rst
@@ -41,7 +41,7 @@ Write the following code to a python file, such as `run.py`
from recbole.quick_start import run_recbole
parameter_dict = {
'training_neg_sample_num': 0
'neg_sampling': None,
}
run_recbole(model='RecVAE', dataset='ml-100k', config_dict=parameter_dict)
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/knowledge/ripplenet.rst
@@ -65,7 +65,7 @@ If you want to use ``HyperTuning`` to tune hyper parameters of this model, you can …
learning_rate choice [0.01,0.005,0.001,0.0005,0.0001]
n_memory choice [4, 8, 16, 32]
training_neg_sample_num choice [1, 2, 5, 10]
neg_sampling choice [{'uniform':1}, {'uniform':2}, {'uniform':5}, {'uniform':10}]
Note that these hyper parameter ranges are provided for reference only; we cannot guarantee that they are optimal for this model.
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/bert4rec.rst
@@ -47,7 +47,7 @@ Running with RecBole
- ``layer_norm_eps (float)`` : A value added to the denominator for numerical stability. Defaults to ``1e-12``.
- ``initializer_range (float)`` : The standard deviation for normal initialization. Defaults to ``0.02``.
- ``mask_ratio (float)`` : The probability of an item being replaced by the MASK token. Defaults to ``0.2``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``training_neg_sample_num = 1``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``--neg_sampling="{'uniform': 1}"``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
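For the ``'BPR'`` branch, a minimal sketch of the corresponding quick-start call (it mirrors the other examples in these docs; the values are illustrative placeholders, not tuned settings):

.. code:: python

    from recbole.quick_start import run_recbole

    # Pair-wise BPR loss requires negative sampling: here one uniformly
    # sampled negative item per positive interaction.
    parameter_dict = {
        'loss_type': 'BPR',
        'neg_sampling': {'uniform': 1},
    }
    run_recbole(model='BERT4Rec', dataset='ml-100k', config_dict=parameter_dict)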


**A Running Example:**
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/caser.rst
@@ -26,7 +26,7 @@ Running with RecBole
- ``n_v (int)`` : The number of vertical Convolutional filters. Defaults to ``8``.
- ``reg_weight (float)`` : The L2 regularization weight. Defaults to ``1e-4``.
- ``dropout_prob (float)`` : The dropout rate. Defaults to ``0.4``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``training_neg_sample_num = 1``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``--neg_sampling="{'uniform': 1}"``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.

**A Running Example:**

2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/fdsa.rst
@@ -45,7 +45,7 @@ Running with RecBole
- ``initializer_range (float)`` : The standard deviation for normal initialization. Defaults to ``0.02``.
- ``selected_features (list)`` : The list of selected item features. Defaults to ``['class']`` for the ml-100k dataset.
- ``pooling_mode (str)``: The intra-feature pooling mode. Defaults to ``'mean'``. Range in ``['max', 'mean', 'sum']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``training_neg_sample_num = 1``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``--neg_sampling="{'uniform': 1}"``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.


**A Running Example:**
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/fossil.rst
@@ -46,7 +46,7 @@ Running with RecBole
- ``order_len (int)`` : The number of last items considered. Defaults to ``3``.
- ``reg_weight (float)`` : The L2 regularization weight. Defaults to ``0.00``.
- ``alpha (float)`` : The alpha parameter used in calculating the similarity. Defaults to ``0.6``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``training_neg_sample_num = 1``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``--neg_sampling="{'uniform': 1}"``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.

**A Running Example:**

2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/fpmc.rst
@@ -62,7 +62,7 @@ And then:
**Notes:**

- Unlike other sequential models, FPMC must be optimized in a pair-wise way using negative sampling, so it requires ``training_neg_sample_num=1``.
- Unlike other sequential models, FPMC must be optimized in a pair-wise way using negative sampling, so it requires ``neg_sampling="{'uniform': 1}"``.
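In code, the note above translates to the following minimal sketch (same quick-start pattern as elsewhere in these docs):

.. code:: python

    from recbole.quick_start import run_recbole

    # FPMC is trained pair-wise, so negative sampling must stay enabled:
    # one uniformly drawn negative item per positive interaction.
    parameter_dict = {'neg_sampling': {'uniform': 1}}
    run_recbole(model='FPMC', dataset='ml-100k', config_dict=parameter_dict)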

Tuning Hyper Parameters
-------------------------
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/gcsan.rst
@@ -43,7 +43,7 @@ Running with RecBole
- ``step (int)`` : The number of layers in GNN. Defaults to ``1``.
- ``weight (float)`` : The weight parameter controls the contribution of the self-attention representation and the last-clicked action; the original paper suggests that setting w to a value between 0.4 and 0.8 is more desirable. Defaults to ``0.6``.
- ``reg_weight (float)`` : The L2 regularization weight. Defaults to ``[5e-5]``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``training_neg_sample_num = 1``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``--neg_sampling="{'uniform': 1}"``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.

**A Running Example:**

2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/gru4rec.rst
@@ -33,7 +33,7 @@ Running with RecBole
- ``hidden_size (int)`` : The number of features in the hidden state. Defaults to ``128``.
- ``num_layers (int)`` : The number of layers in GRU. Defaults to ``1``.
- ``dropout_prob (float)``: The dropout rate. Defaults to ``0.3``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``training_neg_sample_num = 1``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``--neg_sampling="{'uniform': 1}"``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.


**A Running Example:**
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/gru4recf.rst
@@ -42,7 +42,7 @@ Running with RecBole
- ``dropout_prob (float)`` : The dropout rate. Defaults to ``0.3``.
- ``selected_features (list)`` : The list of selected item features. Defaults to ``['class']`` for the ml-100k dataset.
- ``pooling_mode (str)`` : The intra-feature pooling mode. Defaults to ``'sum'``. Range in ``['max', 'mean', 'sum']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``training_neg_sample_num = 1``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``--neg_sampling="{'uniform': 1}"``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.


**A Running Example:**
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/gru4reckg.rst
@@ -16,7 +16,7 @@ Running with RecBole
- ``num_layers (int)`` : The number of layers in GRU. Defaults to ``1``.
- ``dropout_prob (float)`` : The dropout rate. Defaults to ``0.1``.
- ``freeze_kg (bool)`` : Whether to freeze the pre-trained knowledge embedding feature. Defaults to ``True``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``training_neg_sample_num = 1``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``--neg_sampling="{'uniform': 1}"``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.


**A Running Example:**
2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/hgn.rst
@@ -42,7 +42,7 @@ Running with RecBole
- ``embedding_size (int)`` : The embedding size of users and items. Defaults to ``64``.
- ``pooling_type (str)`` : The type of pooling, either average pooling or max pooling. Defaults to ``average``.
- ``reg_weight (float)`` : The L2 regularization weight. Defaults to ``[0.00,0.00]``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``training_neg_sample_num = 1``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``--neg_sampling="{'uniform': 1}"``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.

**A Running Example:**

2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/hrm.rst
@@ -46,7 +46,7 @@ Running with RecBole
- ``pooling_type_layer_1 (str)`` : The type of pooling in the first layer, either average pooling or max pooling. Defaults to ``max``.
- ``pooling_type_layer_2 (str)`` : The type of pooling in the second layer, either average pooling or max pooling. Defaults to ``max``.
- ``dropout_prob (float)`` : The dropout rate. Defaults to ``0.2``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``training_neg_sample_num = 1``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``--neg_sampling="{'uniform': 1}"``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.

**A Running Example:**

2 changes: 1 addition & 1 deletion docs/source/user_guide/model/sequential/ksr.rst
@@ -27,7 +27,7 @@ Running with RecBole
- ``dropout_prob (float)`` : The dropout rate. Defaults to ``0.1``.
- ``freeze_kg (bool)`` : Whether to freeze the pre-trained knowledge embedding feature. Defaults to ``True``.
- ``gamma (float)`` : The scaling factor used in the read operation when calculating the attention weights of user preference on attributes. Defaults to ``10``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``training_neg_sample_num = 1``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.
- ``loss_type (str)`` : The type of loss function. If it is set to ``'CE'``, the training task is regarded as a multi-class classification task where the target item is the ground truth, so negative sampling is not needed. If it is set to ``'BPR'``, the training task is optimized in a pair-wise way that maximizes the difference between the positive item and the negative item, so negative sampling is necessary, such as setting ``--neg_sampling="{'uniform': 1}"``. Defaults to ``'CE'``. Range in ``['BPR', 'CE']``.


**A Running Example:**