Merge pull request #167 from basf/version_bump
version bump
AnFreTh authored Dec 4, 2024
2 parents 978c49e + 4e4cde8 commit a4b624c
Showing 4 changed files with 4 additions and 5 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -16,7 +16,7 @@
 </div>

 <div style="text-align: center;">
-<h1>Mambular: Tabular Deep Learning</h1>
+<h1>Mambular: Tabular Deep Learning Made Simple</h1>
 </div>

 Mambular is a Python library for tabular deep learning. It includes models that leverage the Mamba (State Space Model) architecture, as well as other popular models like TabTransformer, FTTransformer, TabM and tabular ResNets. Check out our paper `Mambular: A Sequential Model for Tabular Deep Learning`, available [here](https://arxiv.org/abs/2408.06291). Also check out our paper introducing [TabulaRNN](https://arxiv.org/pdf/2411.17207) and analyzing the efficiency of NLP-inspired tabular models.
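For orientation, the library's models follow a scikit-learn-style fit/predict interface. A minimal usage sketch, assuming the MambularClassifier estimator documented in the README (the toy data here is illustrative only):

import numpy as np
from mambular.models import MambularClassifier

# Illustrative toy data; mambular handles numerical/categorical preprocessing internally.
X = np.random.rand(100, 5)
y = np.random.randint(0, 2, size=100)

model = MambularClassifier()  # hyperparameters default to the library's config values
model.fit(X, y)
print(model.predict(X[:5]))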
2 changes: 1 addition & 1 deletion mambular/__version__.py
@@ -1,4 +1,4 @@
 """Version information."""

 # The following line *must* be the last in the module, exactly as formatted:
-__version__ = "0.2.3"
+__version__ = "1.0.0"
2 changes: 1 addition & 1 deletion mambular/arch_utils/layer_utils/embedding_layer.py
@@ -54,7 +54,7 @@ def __init__(self, num_feature_info, cat_feature_info, config):
                 d_embedding=self.d_model,
                 n_frequencies=getattr(config, "n_frequencies", 48),
                 frequency_init_scale=getattr(config, "frequency_init_scale", 0.01),
-                activation=self.embedding_activation,
+                activation=True,
                 lite=getattr(config, "plr_lite", False),
             )
         elif self.embedding_type == "linear":
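For readers skimming the diff: the getattr(config, name, default) calls above let the embedding layer fall back to built-in defaults whenever a config omits a field, so the new activation=True argument is the only hard-coded change here. A minimal sketch of the pattern with a hypothetical config object:

from types import SimpleNamespace

config = SimpleNamespace(n_frequencies=64)  # hypothetical config that omits frequency_init_scale

print(getattr(config, "n_frequencies", 48))           # 64: taken from the config
print(getattr(config, "frequency_init_scale", 0.01))  # 0.01: fallback default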
3 changes: 1 addition & 2 deletions mambular/configs/mlp_config.py
@@ -62,7 +62,7 @@ class DefaultMLPConfig:
     weight_decay: float = 1e-06
     lr_factor: float = 0.1
     layer_sizes: list = (256, 128, 32)
-    activation: callable = nn.SELU()
+    activation: callable = nn.ReLU()
     skip_layers: bool = False
     dropout: float = 0.2
     use_glu: bool = False
@@ -76,5 +76,4 @@ class DefaultMLPConfig:
     embedding_bias: bool = False
     layer_norm_after_embedding: bool = False
     d_model: int = 32
-    embedding_type: float = "plr"
     plr_lite: bool = False
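The fields above suggest DefaultMLPConfig is a plain dataclass, so individual defaults remain overridable per run. A minimal sketch, assuming a standard dataclass constructor (e.g. restoring the previous SELU activation after this commit's switch to ReLU):

import torch.nn as nn
from mambular.configs.mlp_config import DefaultMLPConfig

# Override selected defaults; every other field keeps its declared value.
config = DefaultMLPConfig(activation=nn.SELU(), dropout=0.3)
print(config.activation, config.d_model)  # SELU(), 32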
