
Commit

Fix typos
kianmeng committed Jul 22, 2022
1 parent aa639d8 commit e9fcb70
Showing 58 changed files with 73 additions and 73 deletions.
2 changes: 1 addition & 1 deletion Cargo.toml
@@ -118,7 +118,7 @@ codegen-units = 1
incremental = false
build-override = { opt-level = 0 }

-# It's not quite worth building depencies with more optimizations yet. Let's keep it here for later.
+# It's not quite worth building dependencies with more optimizations yet. Let's keep it here for later.
#[profile.dev.package."*"]
#opt-level = 2

2 changes: 1 addition & 1 deletion DEVELOPMENT.md
@@ -150,7 +150,7 @@ by humans.
* **Experiments**
* quick, potentially one-off programs to learn about an aspect of gitoxide potentially in comparison to other implementations like `libgit2`.
* No need for tests of any kind, but it must compile and be idiomatic Rust and `gitoxide`.
-* Manual commmand-line parsing is OK
+* Manual command-line parsing is OK
* no polish
* make it compile quickly, so no extras
* **Examples**
2 changes: 1 addition & 1 deletion cargo-smart-release/src/context.rs
@@ -52,7 +52,7 @@ impl Context {
.parent()
.expect("parent of a file is always present")
.strip_prefix(&self.root)
-.expect("workspace members are releative to the root directory");
+.expect("workspace members are relative to the root directory");

if dir.as_os_str().is_empty() {
None
2 changes: 1 addition & 1 deletion cargo-smart-release/tests/changelog/parse.rs
@@ -111,7 +111,7 @@ fn known_and_unknown_sections_are_sorted() {
markdown: "- initial release\n\n".into()
},
Segment::User {
-markdown: "### Something inbetween\n\nintermezzo\n".into()
+markdown: "### Something in between\n\nintermezzo\n".into()
},
]
},
@@ -4,7 +4,7 @@ Hello, this is a changelog.

- initial release

-### Something inbetween
+### Something in between

intermezzo

2 changes: 1 addition & 1 deletion deny.toml
@@ -32,7 +32,7 @@ ignore = [ ]
[licenses]
# The lint level for crates which do not have a detectable license
unlicensed = "deny"
-# List of explictly allowed licenses
+# List of explicitly allowed licenses
# See https://spdx.org/licenses/ for list of possible licenses
# [possible values: any SPDX 3.11 short identifier (+ optional exception)].
allow = [
8 changes: 4 additions & 4 deletions etc/discovery/odb.md
@@ -153,7 +153,7 @@ Solutions aren't always mutually exclusive despite the form of presentation sugg
| | | 3. catch error, force a pack refresh, repeat | can work in conjunction with similar shortcomings of loose reference database | needs mutability, burden on the API user; | |
| | | 4. writers force an update of the process-wide pool of packs after creating new packs and before updating references with the new objects | | high implementation complexity; assumes complete control of one process over git repository, excluding running git-maintenance; new readers aren't allowed for a while until the new pack is placed causing some moments of unresponsiveness/waiting | |
| **pack** | ~~5. race when creating/altering more than a pack at a time~~ | 1. ignore | | a chance for occasional object misses | all of them |
-| | | 2. retry more than one time | greatly reduced likelyhood of object misses | | |
+| | | 2. retry more than one time | greatly reduced likelihood of object misses | | |
| **pack** | **6.too many (small) packs (i.e. due to pack-receive) reduce lookup performance** | 1. explode pack into loose objects (and deal with them separately) | can run in parallel (but is typically bound by max IOP/s) | might take a while if many objects are contained in the pack due to file IOP/s; needs recompresssion and looses delta compression; risk of too many small objects | |
| | | 2. combine multiple packs into one | keep all benefits of packs; very fast if pack-to-pack copy is used; can run in parallel (but is typically bound by max IOP/s) | combining with big packs takes has to write a lot of data; can be costly if pack delta compression is used | |
| | | 3. Just-in-time maintenance after writes | tuned to run just at the right time to run just as much as needed | an implementation isn't trivial as there must only be one maintenance operation per repository at a time, so some queue should be made available to not skip maintenance just because one is running already. | |
@@ -238,7 +238,7 @@ for applications that don't need it, like CLIs.
#### Loose References

Writing loose references isn't actually atomic, so readers may observe some references in an old and some in a new state. This isn't always a breaking issue like it is
-the case for packs, the progam can still operate and is likely to produce correct (enough) outcomes.
+the case for packs, the program can still operate and is likely to produce correct (enough) outcomes.

Mitigations are possible with careful programming on the API user's side or by using the `ref-table` database instead.

@@ -273,7 +273,7 @@ refresh to the user in case they fetched or pulled in the meantime, to refresh t
**Drawbacks**

The program could benefit of using 1.2 instead of 1.1 which could cause exhaustion of file handles despite the user having no interest in evaluating all available objects,
-but ideally that is possible without loosing performance during multi-threading.
+but ideally that is possible without losing performance during multi-threading.

### Professional git-hosting mono-repo server with git-maintenance tasks and just-in-time replication

@@ -381,7 +381,7 @@ The default favors speed and using all available cores, but savvy users can run
- not an issue as there isn't enough traffic here
* **9.2** loose object database - too many loose objects reduce overall performance - just-in-time maintenance
* **10** - disk full - display early warnings in the front-end to every user to get it fixed
-- This solution is implemented on application side (and not in `gitoxide`), it's intersting enough to mention though for systems that operate themselves.
+- This solution is implemented on application side (and not in `gitoxide`), it's interesting enough to mention though for systems that operate themselves.
- One could also imagine that it tries to spend the nights aggressively compression repositories, some low-hanging fruits there.
* **10** - write failure - fail connection
- write failures aren't specifically handled but result in typical Rust error behaviour probably alongside error reporting on the respective channels of the git-transport sideband.
2 changes: 1 addition & 1 deletion git-bitmap/src/ewah.rs
@@ -21,7 +21,7 @@ pub fn decode(data: &[u8]) -> Result<(Vec, &[u8]), decode::Error> {
let (len, data) = decode::u32(data).ok_or(Error::Corrupt("eof reading chunk length"))?;
let len = len as usize;

-// NOTE: git does this by copying all bytes first, and then it will change the endianess in a separate loop.
+// NOTE: git does this by copying all bytes first, and then it will change the endianness in a separate loop.
// Maybe it's faster, but we can't do it without unsafe. Let's leave it to the optimizer and maybe
// one day somebody will find out that it's worth it to use unsafe here.
let (mut bits, data) = decode::split_at_pos(data, len * std::mem::size_of::<u64>())
4 changes: 2 additions & 2 deletions git-config/src/file/access/mutate.rs
@@ -234,12 +234,12 @@ impl<'event> File<'event> {
Ok(())
}

-/// Append another File to the end of ourselves, without loosing any information.
+/// Append another File to the end of ourselves, without losing any information.
pub fn append(&mut self, other: Self) -> &mut Self {
self.append_or_insert(other, None)
}

-/// Append another File to the end of ourselves, without loosing any information.
+/// Append another File to the end of ourselves, without losing any information.
pub(crate) fn append_or_insert(&mut self, mut other: Self, mut insert_after: Option<SectionId>) -> &mut Self {
let nl = self.detect_newline_style_smallvec();
fn extend_and_assure_newline<'a>(
2 changes: 1 addition & 1 deletion git-config/src/file/includes/mod.rs
@@ -24,7 +24,7 @@ impl File<'static> {
/// a deviation from how git does it, as it technically adds new value right after the include path itself,
/// technically 'splitting' the section. This can only make a difference if the `include` section also has values
/// which later overwrite portions of the included file, which seems unusual as these would be related to `includes`.
-/// We can fix this by 'splitting' the inlcude section if needed so the included sections are put into the right place.
+/// We can fix this by 'splitting' the include section if needed so the included sections are put into the right place.
pub fn resolve_includes(&mut self, options: init::Options<'_>) -> Result<(), Error> {
if options.includes.max_depth == 0 {
return Ok(());
4 changes: 2 additions & 2 deletions git-config/src/file/mod.rs
@@ -61,7 +61,7 @@ pub struct Section<'a> {
meta: OwnShared<Metadata>,
}

-/// A function to filter metadata, returning `true` if the corresponding but ommitted value can be used.
+/// A function to filter metadata, returning `true` if the corresponding but omitted value can be used.
pub type MetadataFilter = dyn FnMut(&'_ Metadata) -> bool;

/// A strongly typed index into some range.
@@ -76,7 +76,7 @@ impl Add<Size> for Index {
}
}

-/// A stronlgy typed a size.
+/// A strongly typed a size.
#[derive(PartialEq, Eq, Hash, PartialOrd, Ord, Debug, Clone, Copy)]
pub(crate) struct Size(pub(crate) usize);

2 changes: 1 addition & 1 deletion git-config/src/parse/events.rs
@@ -7,7 +7,7 @@ use crate::{
parse::{section, Event, Section},
};

-/// A type store without allocation all events that are typicaly preceeding the first section.
+/// A type store without allocation all events that are typically preceding the first section.
pub type FrontMatterEvents<'a> = SmallVec<[Event<'a>; 8]>;

/// A zero-copy `git-config` file parser.
2 changes: 1 addition & 1 deletion git-diff/src/tree/visit.rs
@@ -37,7 +37,7 @@ pub enum Change {
pub enum Action {
/// Continue the traversal of changes.
Continue,
-/// Stop the traversal of changes, making this te last call to [visit(…)][Visit::visit()].
+/// Stop the traversal of changes, making this the last call to [visit(…)][Visit::visit()].
Cancel,
}

Git LFS file not shown
4 changes: 2 additions & 2 deletions git-features/src/parallel/in_parallel.rs
@@ -16,7 +16,7 @@ pub fn join<O1: Send, O2: Send>(left: impl FnOnce() -> O1 + Send, right: impl Fn
/// That way it's possible to handle threads without needing the 'static lifetime for data they interact with.
///
/// Note that the threads should not rely on actual parallelism as threading might be turned off entirely, hence should not
-/// connect each other with channels as deadlock would occour in single-threaded mode.
+/// connect each other with channels as deadlock would occur in single-threaded mode.
pub fn threads<'env, F, R>(f: F) -> std::thread::Result<R>
where
F: FnOnce(&crossbeam_utils::thread::Scope<'env>) -> R,
@@ -85,7 +85,7 @@ where
}

/// An experiment to have fine-grained per-item parallelization with built-in aggregation via thread state.
-/// This is only good for operations where near-random access isn't detremental, so it's not usually great
+/// This is only good for operations where near-random access isn't detrimental, so it's not usually great
/// for file-io as it won't make use of sorted inputs well.
/// Note that `periodic` is not guaranteed to be called in case other threads come up first and finish too fast.
// TODO: better docs
2 changes: 1 addition & 1 deletion git-features/src/parallel/serial.rs
@@ -57,7 +57,7 @@ mod not_parallel {
}

/// An experiment to have fine-grained per-item parallelization with built-in aggregation via thread state.
-/// This is only good for operations where near-random access isn't detremental, so it's not usually great
+/// This is only good for operations where near-random access isn't detrimental, so it's not usually great
/// for file-io as it won't make use of sorted inputs well.
// TODO: better docs
pub fn in_parallel_with_slice<I, S, E>(
2 changes: 1 addition & 1 deletion git-mailmap/src/entry.rs
@@ -2,7 +2,7 @@ use bstr::BStr;

use crate::Entry;

-/// Acccess
+/// Access
impl<'a> Entry<'a> {
/// The name to map to.
pub fn new_name(&self) -> Option<&'a BStr> {
2 changes: 1 addition & 1 deletion git-mailmap/src/lib.rs
@@ -10,7 +10,7 @@ pub mod parse;

/// Parse the given `buf` of bytes line by line into mapping [Entries][Entry].
///
-/// Errors may occour per line, but it's up to the caller to stop iteration when
+/// Errors may occur per line, but it's up to the caller to stop iteration when
/// one is encountered.
pub fn parse(buf: &[u8]) -> parse::Lines<'_> {
parse::Lines::new(buf)
2 changes: 1 addition & 1 deletion git-mailmap/src/snapshot.rs
@@ -183,7 +183,7 @@ impl<'a> From<crate::Entry<'a>> for EmailEntry {
}

impl Snapshot {
-/// Create a new snapshot from the given bytes buffer, ignoring all parse errors that may occour on a line-by-line basis.
+/// Create a new snapshot from the given bytes buffer, ignoring all parse errors that may occur on a line-by-line basis.
///
/// This is similar to what git does.
pub fn from_bytes(buf: &[u8]) -> Self {
2 changes: 1 addition & 1 deletion git-odb/src/store_impls/dynamic/load_index.rs
@@ -531,7 +531,7 @@ impl super::Store {
let mut files = slot.files.load_full();
let files_mut = Arc::make_mut(&mut files);
// set the generation before we actually change the value, otherwise readers of old generations could observe the new one.
-// We rather want them to turn around here and update their index, which, by that time, migth actually already be available.
+// We rather want them to turn around here and update their index, which, by that time, might actually already be available.
// If not, they would fail unable to load a pack or index they need, but that's preferred over returning wrong objects.
// Safety: can't race as we hold the lock, have to set the generation beforehand to help avoid others to observe the value.
slot.generation.store(generation, Ordering::SeqCst);
2 changes: 1 addition & 1 deletion git-odb/src/store_impls/dynamic/mod.rs
@@ -134,7 +134,7 @@ pub mod structure {
///
/// Note that this call is expensive as it gathers additional information about loose object databases.
/// Note that it may change as we collect information due to the highly volatile nature of the
-/// implementation. The likelyhood of actual changes is low though as these still depend on something
+/// implementation. The likelihood of actual changes is low though as these still depend on something
/// changing on disk and somebody reading at the same time.
pub fn structure(&self) -> Result<Vec<Record>, load_index::Error> {
let index = self.index.load();
2 changes: 1 addition & 1 deletion git-odb/src/store_impls/dynamic/types.rs
@@ -259,7 +259,7 @@ impl IndexAndPacks {
}
}

-/// If we are garbaged, put ourselve into the loaded state. Otherwise put ourselves back to unloaded.
+/// If we are garbaged, put ourselves into the loaded state. Otherwise put ourselves back to unloaded.
pub(crate) fn put_back(&mut self) {
match self {
IndexAndPacks::Index(bundle) => {
2 changes: 1 addition & 1 deletion git-odb/tests/odb/store/dynamic.rs
@@ -325,7 +325,7 @@ fn contains() {
unreachable_indices: 0,
unreachable_packs: 0
},
-"when asking for an object in the smallest pack, all inbetween packs are also loaded."
+"when asking for an object in the smallest pack, all in between packs are also loaded."
);

assert!(!new_handle.contains(hex_to_id("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa")));
2 changes: 1 addition & 1 deletion git-pack/src/cache/delta/mod.rs
@@ -44,7 +44,7 @@ enum NodeKind {
pub struct Tree<T> {
/// The root nodes, i.e. base objects
root_items: Vec<Item<T>>,
-/// The child nodes, i.e. those that rely a base object, like ref and ofs delta objets
+/// The child nodes, i.e. those that rely a base object, like ref and ofs delta objects
child_items: Vec<Item<T>>,
/// The last encountered node was either a root or a child.
last_seen: Option<NodeKind>,
