Fix various typos with crate-ci/typos #15116

Merged 1 commit on Mar 4, 2024
18 changes: 9 additions & 9 deletions CHANGELOG.md
@@ -14,7 +14,7 @@
* Add support for DAT upscaler models ([#14690](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14690), [#15039](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15039))
* Extra Networks Tree View ([#14588](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14588), [#14900](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14900))
* NPU Support ([#14801](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14801))
* Propmpt comments support
* Prompt comments support

### Minor:
* Allow pasting in WIDTHxHEIGHT strings into the width/height fields ([#14296](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14296))
@@ -59,7 +59,7 @@
* modules/api/api.py: add api endpoint to refresh embeddings list ([#14715](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14715))
* set_named_arg ([#14773](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14773))
* add before_token_counter callback and use it for prompt comments
* ResizeHandleRow - allow overriden column scale parameter ([#15004](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15004))
* ResizeHandleRow - allow overridden column scale parameter ([#15004](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15004))

### Performance
* Massive performance improvement for extra networks directories with a huge number of files in them in an attempt to tackle #14507 ([#14528](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14528))
@@ -101,7 +101,7 @@
* Gracefully handle mtime read exception from cache ([#14933](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14933))
* Only trigger interrupt on `Esc` when interrupt button visible ([#14932](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14932))
* Disable prompt token counters option actually disables token counting rather than just hiding results.
* avoid doble upscaling in inpaint ([#14966](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14966))
* avoid double upscaling in inpaint ([#14966](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14966))
* Fix #14591 using translated content to do categories mapping ([#14995](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14995))
* fix: the `split_threshold` parameter does not work when running Split oversized images ([#15006](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15006))
* Fix resize-handle for mobile ([#15010](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15010), [#15065](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15065))
@@ -171,7 +171,7 @@
* infotext updates: add option to disregard certain infotext fields, add option to not include VAE in infotext, add explanation to infotext settings page, move some options to infotext settings page
* add FP32 fallback support on sd_vae_approx ([#14046](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14046))
* support XYZ scripts / split hires path from unet ([#14126](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14126))
* allow use of mutiple styles csv files ([#14125](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14125))
* allow use of multiple styles csv files ([#14125](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14125))
* make extra network card description plaintext by default, with an option (Treat card description as HTML) to re-enable HTML as it was (originally by [#13241](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13241))

### Extensions and API:
@@ -308,7 +308,7 @@
* new samplers: Restart, DPM++ 2M SDE Exponential, DPM++ 2M SDE Heun, DPM++ 2M SDE Heun Karras, DPM++ 2M SDE Heun Exponential, DPM++ 3M SDE, DPM++ 3M SDE Karras, DPM++ 3M SDE Exponential ([#12300](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12300), [#12519](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12519), [#12542](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12542))
* rework DDIM, PLMS, UniPC to use CFG denoiser same as in k-diffusion samplers:
* makes all of them work with img2img
* makes prompt composition posssible (AND)
* makes prompt composition possible (AND)
* makes them available for SDXL
* always show extra networks tabs in the UI ([#11808](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/11808))
* use less RAM when creating models ([#11958](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/11958), [#12599](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12599))
@@ -484,7 +484,7 @@
* user metadata system for custom networks
* extended Lora metadata editor: set activation text, default weight, view tags, training info
* Lora extension rework to include other types of networks (all that were previously handled by LyCORIS extension)
* show github stars for extenstions
* show github stars for extensions
* img2img batch mode can read extra stuff from png info
* img2img batch works with subdirectories
* hotkeys to move prompt elements: alt+left/right
@@ -703,7 +703,7 @@
* do not wait for Stable Diffusion model to load at startup
* add filename patterns: `[denoising]`
* directory hiding for extra networks: dirs starting with `.` will hide their cards on extra network tabs unless specifically searched for
* LoRA: for the `<...>` text in prompt, use name of LoRA that is in the metdata of the file, if present, instead of filename (both can be used to activate LoRA)
* LoRA: for the `<...>` text in prompt, use name of LoRA that is in the metadata of the file, if present, instead of filename (both can be used to activate LoRA)
* LoRA: read infotext params from kohya-ss's extension parameters if they are present and if his extension is not active
* LoRA: fix some LoRAs not working (ones that have 3x3 convolution layer)
* LoRA: add an option to use old method of applying LoRAs (producing same results as with kohya-ss)
@@ -733,7 +733,7 @@
* fix gamepad navigation
* make the lightbox fullscreen image function properly
* fix squished thumbnails in extras tab
* keep "search" filter for extra networks when user refreshes the tab (previously it showed everthing after you refreshed)
* keep "search" filter for extra networks when user refreshes the tab (previously it showed everything after you refreshed)
* fix webui showing the same image if you configure the generation to always save results into same file
* fix bug with upscalers not working properly
* fix MPS on PyTorch 2.0.1, Intel Macs
@@ -751,7 +751,7 @@
* switch to PyTorch 2.0.0 (except for AMD GPUs)
* visual improvements to custom code scripts
* add filename patterns: `[clip_skip]`, `[hasprompt<>]`, `[batch_number]`, `[generation_number]`
* add support for saving init images in img2img, and record their hashes in infotext for reproducability
* add support for saving init images in img2img, and record their hashes in infotext for reproducibility
* automatically select current word when adjusting weight with ctrl+up/down
* add dropdowns for X/Y/Z plot
* add setting: Stable Diffusion/Random number generator source: makes it possible to make images generated from a given manual seed consistent across different GPUs
5 changes: 5 additions & 0 deletions _typos.toml
@@ -0,0 +1,5 @@
[default.extend-words]
# Part of "RGBa" (Pillow's pre-multiplied alpha RGB mode)
Ba = "Ba"
# HSA is something AMD uses for their GPUs
HSA = "HSA"
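
For reference, crate-ci/typos picks up this `_typos.toml` automatically when run from the repository root (`typos` to report findings, `typos --write-changes` to apply fixes), and each `word = "word"` self-mapping whitelists that exact spelling while leaving every other check active. Below is a minimal sketch of how the file could be extended if more false positives turn up — the `te` entry and the exclude glob are hypothetical illustrations, not part of this PR:

[default.extend-words]
# hypothetical: whitelist another intentional short identifier,
# using the same self-mapping pattern as the Ba/HSA entries above
te = "te"

[files]
# hypothetical: keep vendored or generated files out of the check entirely
extend-exclude = ["extensions-builtin/*/vendor"]

Whitelisting individual words this way is usually preferable to excluding whole files, since other misspellings in those files are still caught.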
8 changes: 4 additions & 4 deletions extensions-builtin/LDSR/sd_hijack_ddpm_v1.py
@@ -301,7 +301,7 @@ def p_losses(self, x_start, t, noise=None):
elif self.parameterization == "x0":
target = x_start
else:
raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported")
raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported")

loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])

@@ -880,7 +880,7 @@ def forward(self, x, c, *args, **kwargs):
def apply_model(self, x_noisy, t, cond, return_ids=False):

if isinstance(cond, dict):
# hybrid case, cond is exptected to be a dict
# hybrid case, cond is expected to be a dict
pass
else:
if not isinstance(cond, list):
@@ -916,7 +916,7 @@ def apply_model(self, x_noisy, t, cond, return_ids=False):
cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])]

elif self.cond_stage_key == 'coordinates_bbox':
assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size'
assert 'original_image_size' in self.split_input_params, 'BoundingBoxRescaling is missing original_image_size'

# assuming padding of unfold is always 0 and its dilation is always 1
n_patches_per_row = int((w - ks[0]) / stride[0] + 1)
@@ -926,7 +926,7 @@ def apply_model(self, x_noisy, t, cond, return_ids=False):
num_downs = self.first_stage_model.encoder.num_resolutions - 1
rescale_latent = 2 ** (num_downs)

# get top left postions of patches as conforming for the bbbox tokenizer, therefore we
# get top left positions of patches as conforming for the bbbox tokenizer, therefore we
# need to rescale the tl patch coordinates to be in between (0,1)
tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w,
rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h)
2 changes: 1 addition & 1 deletion extensions-builtin/Lora/lyco_helpers.py
@@ -30,7 +30,7 @@ def factorization(dimension: int, factor:int=-1) -> tuple[int, int]:
In LoRA with Kroneckor Product, first value is a value for weight scale.
secon value is a value for weight.
Becuase of non-commutative property, A⊗B ≠ B⊗A. Meaning of two matrices is slightly different.
Because of non-commutative property, A⊗B ≠ B⊗A. Meaning of two matrices is slightly different.
examples)
factor
2 changes: 1 addition & 1 deletion extensions-builtin/Lora/networks.py
@@ -355,7 +355,7 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn
"""
Applies the currently selected set of networks to the weights of torch layer self.
If weights already have this particular set of networks applied, does nothing.
If not, restores orginal weights from backup and alters weights according to networks.
If not, restores original weights from backup and alters weights according to networks.
"""

network_layer_name = getattr(self, 'network_layer_name', None)
4 changes: 2 additions & 2 deletions extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js
@@ -292,7 +292,7 @@ onUiLoaded(async() => {

// Create tooltip
function createTooltip() {
const toolTipElemnt =
const toolTipElement =
targetElement.querySelector(".image-container");
const tooltip = document.createElement("div");
tooltip.className = "canvas-tooltip";
@@ -355,7 +355,7 @@ onUiLoaded(async() => {
tooltip.appendChild(tooltipContent);

// Add a hint element to the target element
toolTipElemnt.appendChild(tooltip);
toolTipElement.appendChild(tooltip);
}

//Show tool tip if setting enable
@@ -8,8 +8,8 @@
"canvas_hotkey_grow_brush": shared.OptionInfo("W", "Enlarge the brush size"),
"canvas_hotkey_move": shared.OptionInfo("F", "Moving the canvas").info("To work correctly in firefox, turn off 'Automatically search the page text when typing' in the browser settings"),
"canvas_hotkey_fullscreen": shared.OptionInfo("S", "Fullscreen Mode, maximizes the picture so that it fits into the screen and stretches it to its full width "),
"canvas_hotkey_reset": shared.OptionInfo("R", "Reset zoom and canvas positon"),
"canvas_hotkey_overlap": shared.OptionInfo("O", "Toggle overlap").info("Technical button, neededs for testing"),
"canvas_hotkey_reset": shared.OptionInfo("R", "Reset zoom and canvas position"),
"canvas_hotkey_overlap": shared.OptionInfo("O", "Toggle overlap").info("Technical button, needed for testing"),
"canvas_show_tooltip": shared.OptionInfo(True, "Enable tooltip on the canvas"),
"canvas_auto_expand": shared.OptionInfo(True, "Automatically expands an image that does not fit completely in the canvas area, similar to manually pressing the S and R buttons"),
"canvas_blur_prompt": shared.OptionInfo(False, "Take the focus off the prompt when working with a canvas"),
@@ -104,7 +104,7 @@ def latent_blend(settings, a, b, t):

def get_modified_nmask(settings, nmask, sigma):
"""
Converts a negative mask representing the transparency of the original latent vectors being overlayed
Converts a negative mask representing the transparency of the original latent vectors being overlaid
to a mask that is scaled according to the denoising strength for this step.

Where:
12 changes: 6 additions & 6 deletions javascript/aspectRatioOverlay.js
@@ -50,17 +50,17 @@ function dimensionChange(e, is_width, is_height) {
var scaledx = targetElement.naturalWidth * viewportscale;
var scaledy = targetElement.naturalHeight * viewportscale;

var cleintRectTop = (viewportOffset.top + window.scrollY);
var cleintRectLeft = (viewportOffset.left + window.scrollX);
var cleintRectCentreY = cleintRectTop + (targetElement.clientHeight / 2);
var cleintRectCentreX = cleintRectLeft + (targetElement.clientWidth / 2);
var clientRectTop = (viewportOffset.top + window.scrollY);
var clientRectLeft = (viewportOffset.left + window.scrollX);
var clientRectCentreY = clientRectTop + (targetElement.clientHeight / 2);
var clientRectCentreX = clientRectLeft + (targetElement.clientWidth / 2);

var arscale = Math.min(scaledx / currentWidth, scaledy / currentHeight);
var arscaledx = currentWidth * arscale;
var arscaledy = currentHeight * arscale;

var arRectTop = cleintRectCentreY - (arscaledy / 2);
var arRectLeft = cleintRectCentreX - (arscaledx / 2);
var arRectTop = clientRectCentreY - (arscaledy / 2);
var arRectLeft = clientRectCentreX - (arscaledx / 2);
var arRectWidth = arscaledx;
var arRectHeight = arscaledy;

2 changes: 1 addition & 1 deletion javascript/extraNetworks.js
@@ -290,7 +290,7 @@ function extraNetworksTreeProcessDirectoryClick(event, btn, tabname, extra_netwo
* Processes `onclick` events when user clicks on directories in tree.
*
* Here is how the tree reacts to clicks for various states:
* unselected unopened directory: Diretory is selected and expanded.
* unselected unopened directory: Directory is selected and expanded.
* unselected opened directory: Directory is selected.
* selected opened directory: Directory is collapsed and deselected.
* chevron is clicked: Directory is expanded or collapsed. Selected state unchanged.
2 changes: 1 addition & 1 deletion javascript/ui.js
@@ -411,7 +411,7 @@ function switchWidthHeight(tabname) {

var onEditTimers = {};

// calls func after afterMs milliseconds has passed since the input elem has beed enited by user
// calls func after afterMs milliseconds has passed since the input elem has been edited by user
function onEdit(editId, elem, afterMs, func) {
var edited = function() {
var existingTimer = onEditTimers[editId];
6 changes: 3 additions & 3 deletions modules/api/api.py
@@ -360,7 +360,7 @@ def init_script_args(self, request, default_script_args, selectable_scripts, sel
return script_args

def apply_infotext(self, request, tabname, *, script_runner=None, mentioned_script_args=None):
"""Processes `infotext` field from the `request`, and sets other fields of the `request` accoring to what's in infotext.
"""Processes `infotext` field from the `request`, and sets other fields of the `request` according to what's in infotext.

If request already has a field set, and that field is encountered in infotext too, the value from infotext is ignored.

@@ -409,8 +409,8 @@ def get_field_value(field, params):
if request.override_settings is None:
request.override_settings = {}

overriden_settings = infotext_utils.get_override_settings(params)
for _, setting_name, value in overriden_settings:
overridden_settings = infotext_utils.get_override_settings(params)
for _, setting_name, value in overridden_settings:
if setting_name not in request.override_settings:
request.override_settings[setting_name] = value

4 changes: 2 additions & 2 deletions modules/call_queue.py
@@ -100,8 +100,8 @@ def f(*args, extra_outputs_array=extra_outputs, **kwargs):
sys_pct = sys_peak/max(sys_total, 1) * 100

toltip_a = "Active: peak amount of video memory used during generation (excluding cached data)"
toltip_r = "Reserved: total amout of video memory allocated by the Torch library "
toltip_sys = "System: peak amout of video memory allocated by all running programs, out of total capacity"
toltip_r = "Reserved: total amount of video memory allocated by the Torch library "
toltip_sys = "System: peak amount of video memory allocated by all running programs, out of total capacity"

text_a = f"<abbr title='{toltip_a}'>A</abbr>: <span class='measurement'>{active_peak/1024:.2f} GB</span>"
text_r = f"<abbr title='{toltip_r}'>R</abbr>: <span class='measurement'>{reserved_peak/1024:.2f} GB</span>"
2 changes: 1 addition & 1 deletion modules/devices.py
@@ -259,7 +259,7 @@ def test_for_nans(x, where):
def first_time_calculation():
"""
just do any calculation with pytorch layers - the first time this is done it allocaltes about 700MB of memory and
spends about 2.7 seconds doing that, at least wih NVidia.
spends about 2.7 seconds doing that, at least with NVidia.
"""

x = torch.zeros((1, 1)).to(device, dtype)
2 changes: 1 addition & 1 deletion modules/extra_networks.py
@@ -60,7 +60,7 @@ def activate(self, p, params_list):
Where name matches the name of this ExtraNetwork object, and arg1:arg2:arg3 are any natural number of text arguments
separated by colon.

Even if the user does not mention this ExtraNetwork in his prompt, the call will stil be made, with empty params_list -
Even if the user does not mention this ExtraNetwork in his prompt, the call will still be made, with empty params_list -
in this case, all effects of this extra networks should be disabled.

Can be called multiple times before deactivate() - each new call should override the previous call completely.
2 changes: 1 addition & 1 deletion modules/initialize.py
@@ -139,7 +139,7 @@ def load_model():
"""
Accesses shared.sd_model property to load model.
After it's available, if it has been loaded before this access by some extension,
its optimization may be None because the list of optimizaers has neet been filled
its optimization may be None because the list of optimizers has not been filled
by that time, so we apply optimization again.
"""
from modules import devices
2 changes: 1 addition & 1 deletion modules/mac_specific.py
@@ -12,7 +12,7 @@

# before torch version 1.13, has_mps is only available in nightly pytorch and macOS 12.3+,
# use check `getattr` and try it for compatibility.
# in torch version 1.13, backends.mps.is_available() and backends.mps.is_built() are introduced in to check mps availabilty,
# in torch version 1.13, backends.mps.is_available() and backends.mps.is_built() are introduced in to check mps availability,
# since torch 2.0.1+ nightly build, getattr(torch, 'has_mps', False) was deprecated, see https://github.com/pytorch/pytorch/pull/103279
def check_for_mps() -> bool:
if version.parse(torch.__version__) <= version.parse("2.0.1"):