[docs][yugabyted][2024.1.2] Add Read Replica and xCluster examples #23289

Merged 14 commits on Sep 3, 2024
1 change: 1 addition & 0 deletions .github/vale-styles/Yugabyte/spelling-exceptions.txt
@@ -418,6 +418,7 @@ negatable
Netlify
nginx
Nokogiri
Northwind
noteable
noteables
npm
12 changes: 10 additions & 2 deletions docs/content/preview/architecture/key-concepts.md
@@ -49,6 +49,12 @@ DocDB is the underlying document storage engine of YugabyteDB and is built on to

A fault domain is a potential point of failure. Examples of fault domains include nodes, racks, zones, and entire regions. {{<link "../../explore/fault-tolerance/#fault-domains">}}

## Fault tolerance

YugabyteDB achieves resiliency by replicating data across fault domains using the Raft consensus protocol. The [fault domain](#fault-domain) can be at the level of individual nodes, availability zones, or entire regions.

Fault tolerance determines how resilient the cluster is to domain (that is, node, zone, or region) outages, whether planned or unplanned. Fault tolerance is achieved by adding redundancy, in the form of additional nodes, across the fault domain. Due to the way the Raft protocol works, providing a fault tolerance of `ft` requires replicating data across `2ft + 1` domains. This number is referred to as the [replication factor](#replication-factor-rf). For example, to survive the outage of 2 nodes, a cluster needs 2 * 2 + 1 = 5 nodes; that is, a replication factor of 5. While the 2 nodes are offline, the remaining 3 nodes can continue to serve reads and writes without interruption.
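
The relationship is simple arithmetic. A minimal sketch (the function name is hypothetical, for illustration only):

```python
def replication_factor(fault_tolerance: int) -> int:
    """Replicas needed to survive `fault_tolerance` domain outages:
    Raft requires a majority of replicas to stay online."""
    return 2 * fault_tolerance + 1

for ft in (1, 2, 3):
    rf = replication_factor(ft)
    print(f"ft={ft}: rf={rf}, majority={rf // 2 + 1}, survivors={rf - ft}")
# ft=2 -> rf=5: losing 2 nodes leaves 3, which is still a majority of 5.
```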

## Follower reads

Normally, only the [tablet leader](#tablet-leader) can process user-facing write and read requests. Follower reads allow you to lower read latencies by serving reads from the tablet followers. This is similar to reading from a cache, which can provide more read IOPS with low latency. The data might be slightly stale, but is timeline-consistent, meaning no out-of-order data is possible.
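
As a rough illustration, follower reads can be enabled per session in YSQL. A minimal Python sketch, assuming a local YSQL endpoint, the `psycopg2` driver, and a hypothetical `orders` table:

```python
import psycopg2

# Connect to the YSQL endpoint (host and credentials are placeholders).
conn = psycopg2.connect(host="127.0.0.1", port=5433,
                        dbname="yugabyte", user="yugabyte")
with conn.cursor() as cur:
    # Follower reads apply only to read-only transactions.
    cur.execute("SET default_transaction_read_only = true")
    cur.execute("SET yb_read_from_followers = true")
    # This read may be served by the closest follower; the result can
    # be slightly stale but is timeline-consistent.
    cur.execute("SELECT count(*) FROM orders")
    print(cur.fetchone())
conn.close()
```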
@@ -143,10 +149,12 @@ A region refers to a defined geographical area or location where a cloud provide

## Replication factor (RF)

The number of copies of data in a YugabyteDB universe. YugabyteDB replicates data across zones (or fault domains) in order to tolerate faults. Fault tolerance (FT) and RF are correlated. To achieve a FT of k nodes, the universe has to be configured with a RF of (2k + 1).
The number of copies of data in a YugabyteDB universe. YugabyteDB replicates data across [fault domains](#fault-domain) (for example, zones) in order to tolerate faults. [Fault tolerance](#fault-tolerance) (FT) and RF are correlated. To achieve a FT of k nodes, the universe has to be configured with a RF of (2k + 1).

The RF should be an odd number to ensure majority consensus can be established during failures. {{<link "../docdb-replication/replication/#replication-factor">}}
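
To see why an odd RF is preferred, consider how many failures each RF actually tolerates. A minimal sketch (the function name is hypothetical):

```python
def tolerated_failures(rf: int) -> int:
    """Outages survivable with `rf` replicas: a Raft majority must remain."""
    majority = rf // 2 + 1
    return rf - majority

for rf in range(1, 8):
    print(f"rf={rf}: tolerates {tolerated_failures(rf)} failure(s)")
# rf=3 and rf=4 both tolerate exactly 1 failure, so the extra replica in
# an even RF adds cost without adding fault tolerance.
```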

Each [read replica](#read-replica-cluster) cluster can also have its own replication factor. In this case, the replication factor determines how many copies of your primary data the read replica has; multiple copies ensure the availability of the replica in case of a node outage. Replicas *do not* participate in the primary cluster's Raft consensus, and do not affect the fault tolerance of the primary cluster or contribute to failover.

## Sharding

Sharding is the process of mapping a table row to a [tablet](#tablet). YugabyteDB supports two types of sharding: Hash and Range. {{<link "../docdb-sharding">}}
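
As a rough illustration of hash sharding, the sketch below maps row keys onto tablets by splitting a 16-bit hash space evenly; the hash function and tablet count are stand-ins, not DocDB's actual implementation:

```python
import hashlib

NUM_TABLETS = 4
HASH_SPACE = 0x10000  # 16-bit hash space, 0x0000-0xFFFF

def tablet_for(row_key: bytes) -> int:
    # Hash the key to a 16-bit value, then map it to the tablet whose
    # hash range contains it (ranges split the space evenly here).
    h = int.from_bytes(hashlib.sha256(row_key).digest()[:2], "big")
    return h * NUM_TABLETS // HASH_SPACE

for key in (b"user-1", b"user-2", b"user-3"):
    print(key.decode(), "-> tablet", tablet_for(key))
```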
Expand Down Expand Up @@ -191,7 +199,7 @@ The [YB-TServer](../yb-tserver) service is responsible for maintaining and manag
A YugabyteDB universe comprises one [primary cluster](#primary-cluster) and zero or more [read replica clusters](#read-replica-cluster) that collectively function as a resilient and scalable distributed database.

{{<note>}}
Sometimes the terms *universe* and *cluster* are used interchangeably. However, the two are not always equivalent, as a universe can contain one or more [clusters](#cluster).
Sometimes the terms *universe* and *cluster* are used interchangeably. The two are not always equivalent, as a universe can contain one or more [clusters](#cluster).
{{</note>}}

## xCluster