This repository has been archived by the owner on Mar 8, 2024. It is now read-only.

[pre-commit.ci] pre-commit suggestions #24

Merged · 2 commits · Jan 1, 2024
10 changes: 5 additions & 5 deletions .pre-commit-config.yaml
@@ -28,19 +28,19 @@ repos:
         args: [--py38-plus]
         name: Upgrade code

-  - repo: https://github.com/myint/docformatter
+  - repo: https://github.com/PyCQA/docformatter
     rev: v1.7.5
     hooks:
       - id: docformatter
         args: [--in-place, --wrap-summaries=120, --wrap-descriptions=120]

   - repo: https://github.com/PyCQA/isort
-    rev: 5.12.0
+    rev: 5.13.2
     hooks:
       - id: isort

   - repo: https://github.com/psf/black
-    rev: 22.12.0
+    rev: 23.12.1
     hooks:
       - id: black
         name: Black code
@@ -67,8 +67,8 @@ repos:
         #- flake8-return
         #- flake8-simplify

-  - repo: https://github.com/charliermarsh/ruff-pre-commit
-    rev: v0.1.6
+  - repo: https://github.com/astral-sh/ruff-pre-commit
+    rev: v0.1.9
     hooks:
       - id: ruff
         args: ["--fix"]
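For readability, the affected section of `.pre-commit-config.yaml` after these suggestions would read roughly as follows. The indentation is a reasonable reconstruction (the scraped diff does not preserve it), and surrounding hooks are omitted:

```yaml
repos:
  - repo: https://github.com/PyCQA/docformatter
    rev: v1.7.5
    hooks:
      - id: docformatter
        args: [--in-place, --wrap-summaries=120, --wrap-descriptions=120]

  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2
    hooks:
      - id: isort

  - repo: https://github.com/psf/black
    rev: 23.12.1
    hooks:
      - id: black

  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.1.9
    hooks:
      - id: ruff
        args: ["--fix"]
```

These are the routine rev bumps (plus two repo-URL moves, docformatter to PyCQA and ruff to astral-sh) that pre-commit.ci proposes automatically, equivalent to running `pre-commit autoupdate` locally.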
2 changes: 0 additions & 2 deletions models/chatglm/modeling_chatglm.py
@@ -877,7 +877,6 @@ def forward(
         output_hidden_states: Optional[bool] = None,
         return_dict: Optional[bool] = None,
     ) -> Union[Tuple[torch.Tensor, ...], BaseModelOutputWithPast]:
-
         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
         output_hidden_states = (
             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
@@ -949,7 +948,6 @@ def forward(
             attention_mask = attention_mask.to(hidden_states.device)

         for i, layer in enumerate(self.layers):
-
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)
             layer_past = past_key_values[i]
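The second hunk only deletes a blank line inside the per-layer loop of `forward`; behavior is unchanged. The pattern that loop implements, optionally snapshotting the hidden states before each layer runs, can be sketched in isolation. The `run_layers` helper and its toy `layers` below are hypothetical stand-ins for illustration, not the real ChatGLM transformer blocks:

```python
# Minimal sketch of the loop touched by the diff: on each iteration we
# optionally record the current hidden states, fetch that layer's cached
# past, then apply the layer. `layers` is any sequence of callables taking
# (hidden_states, layer_past); here hidden_states is a plain value, not a
# torch.Tensor, to keep the sketch self-contained.
def run_layers(hidden_states, layers, past_key_values, output_hidden_states=True):
    all_hidden_states = ()
    for i, layer in enumerate(layers):
        if output_hidden_states:
            # Snapshot *before* the layer runs, matching the diffed code.
            all_hidden_states = all_hidden_states + (hidden_states,)
        layer_past = past_key_values[i]
        hidden_states = layer(hidden_states, layer_past)
    return hidden_states, all_hidden_states
```

With three toy layers that each add 1 and no cached past, `run_layers(0, layers, [None] * 3)` returns the final value `3` alongside the pre-layer snapshots `(0, 1, 2)`.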