
Improve docstrings for text generation #1597

Merged · 1 commit · Aug 16, 2023
7 changes: 4 additions & 3 deletions src/huggingface_hub/inference/_text_generation.py
@@ -22,7 +22,6 @@
 # - added default values for all parameters (not needed in BaseModel but dataclasses yes)
 # - integrated in `huggingface_hub.InferenceClient``
 # - added `stream: bool` and `details: bool` in the `text_generation` method instead of having different methods for each use case
-# - NO asyncio support yet => TODO soon

 from dataclasses import field
 from enum import Enum
@@ -355,7 +354,7 @@ class TextGenerationResponse:
     """
     Represents a response for text generation.

-    In practice, if `details=False` is passed (default), only the generated text is returned.
+    Only returned when `details=True`, otherwise a string is returned.

     Args:
         generated_text (`str`):
@@ -397,7 +396,9 @@ class StreamDetails:
 @dataclass
 class TextGenerationStreamResponse:
     """
-    Represents a response for text generation when `stream=True` is passed
+    Represents a response for streaming text generation.
+
+    Only returned when `details=True` and `stream=True`.

     Args:
         token (`Token`):
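The updated docstrings describe a return-type convention: `text_generation` yields a plain string by default, and the detailed dataclass only when `details=True`. The sketch below illustrates that convention with a simplified stand-in; `text_generation` here is a hypothetical local function that skips the HTTP request the real `InferenceClient` performs, and the dataclass is reduced to a single field.

```python
from dataclasses import dataclass


@dataclass
class TextGenerationResponse:
    """Simplified stand-in for the detailed response documented above."""
    generated_text: str


def text_generation(prompt: str, details: bool = False):
    # Stand-in for the server call made by the real client.
    response = TextGenerationResponse(generated_text=prompt + " ...")
    # details=False (the default): only the generated string is returned.
    # details=True: the full response object is returned.
    return response if details else response.generated_text
```

With the real client the call shape is analogous: `InferenceClient().text_generation("prompt", details=True)` returns the detailed object, while omitting `details` returns just the generated string.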