[docs] renamed references (#23212)
aishwarya24 committed Aug 19, 2024
1 parent 5b21f96 commit 5878518
Showing 6 changed files with 6 additions and 6 deletions.

docs/content/preview/faq/comparisons/amazon-aurora.md (2 changes: 1 addition & 1 deletion)
@@ -13,7 +13,7 @@ menu:
type: docs
---

- Generally available since 2015, Amazon Aurora is built on a proprietary distributed storage engine that automatically replicates 6 copies of data across 3 availability zones for high availability. From an API standpoint, Aurora is wire compatible with both PostgreSQL and MySQL. As described in ["Amazon Aurora under the hood: quorums and correlated failure"](https://aws.amazon.com/blogs/database/amazon-aurora-under-the-hood-quorum-and-correlated-failure/), Aurora uses a quorum write approach based on 6 replicas. This allows for significantly better availability and durability than traditional master-slave replication.
+ Generally available since 2015, Amazon Aurora is built on a proprietary distributed storage engine that automatically replicates 6 copies of data across 3 availability zones for high availability. From an API standpoint, Aurora is wire compatible with both PostgreSQL and MySQL. As described in ["Amazon Aurora under the hood: quorums and correlated failure"](https://aws.amazon.com/blogs/database/amazon-aurora-under-the-hood-quorum-and-correlated-failure/), Aurora uses a quorum write approach based on 6 replicas. This allows for significantly better availability and durability than traditional leader-follower replication.

## Horizontal write scalability

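For background on the quorum model that the changed paragraph above describes, the following is a minimal sketch of the 6-copy quorum arithmetic (a write quorum of 4 and a read quorum of 3, as described in the linked AWS post). The function names are illustrative only and are not Aurora code.

```python
# Illustrative sketch of quorum overlap rules for a 6-copy, 3-AZ layout
# (V = 6 copies, write quorum Vw = 4, read quorum Vr = 3).

V, VW, VR = 6, 4, 3

def quorums_are_safe(v: int, vw: int, vr: int) -> bool:
    """A read quorum must overlap every write quorum, and two write
    quorums must overlap each other, so every read and every new write
    sees the most recent committed write."""
    return (vr + vw > v) and (2 * vw > v)

def tolerated_losses(v: int, vw: int, vr: int) -> tuple:
    """Copies that can be lost while still reaching each quorum."""
    return v - vw, v - vr   # (write availability, read availability)

if __name__ == "__main__":
    print(quorums_are_safe(V, VW, VR))   # True: 3 + 4 > 6 and 2 * 4 > 6
    print(tolerated_losses(V, VW, VR))   # (2, 3): writes survive 2 lost copies, reads survive 3
```
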
docs/content/preview/faq/comparisons/postgresql.md (2 changes: 1 addition & 1 deletion)
@@ -21,7 +21,7 @@ There is a concept of "partitioned tables" in PostgreSQL that can make sharding

## Continuous availability

- The most common replication mechanism in PostgreSQL is that of asynchronous replication. Two completely independent database instances are deployed in a master-slave configuration in such a way that the slave instances periodically receive committed data from the master instance. The slave instance does not participate in the original writes to the master, thus making the latency of write operations low from an application client standpoint. However, the true cost is loss of availability (until manual failover to slave) as well as inability to serve recently committed data when the master instance fails (given the data lag on the slave). The less common mechanism of synchronous replication involves committing to two independent instances simultaneously. It is less common because of the complete loss of availability when one of the instances fail. Thus, irrespective of the replication mechanism used, it is impossible to guarantee always-on, strongly-consistent reads in PostgreSQL.
+ The most common replication mechanism in PostgreSQL is that of asynchronous replication. Two completely independent database instances are deployed in a leader-follower configuration in such a way that the follower instances periodically receive committed data from the master instance. The follower instance does not participate in the original writes to the master, thus making the latency of write operations low from an application client standpoint. However, the true cost is loss of availability (until manual failover to follower) as well as inability to serve recently committed data when the master instance fails (given the data lag on the follower). The less common mechanism of synchronous replication involves committing to two independent instances simultaneously. It is less common because of the complete loss of availability when one of the instances fail. Thus, irrespective of the replication mechanism used, it is impossible to guarantee always-on, strongly-consistent reads in PostgreSQL.

YugabyteDB is designed to solve the high availability need that monolithic databases such as PostgreSQL were never designed for. This inherently means committing the updates at 1 more independent failure domain than compared to PostgreSQL. There is no overall "leader" node in YugabyteDB that is responsible for handing updates for all the data in the database. There are multiple shards and those shards are distributed among the multiple nodes in the cluster. Each node has some shard leaders and some shard followers. Serving writes is the responsibility of a shard leader which then uses Raft replication protocol to commit the write to at least 1 more follower replica before acknowledging the write as successful back to the application client. When a node fails, some shard leaders will be lost but the remaining two follower replicas (on still available nodes) will elect a new leader automatically in a few seconds. Note that the replica that had the latest data gets the priority in such an election. This leads to extremely low write unavailability and essentially a self-healing system with auto-failover characteristics.

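To make the contrast in the two paragraphs above concrete, the following is a toy sketch of why an asynchronous follower can miss acknowledged writes when the leader fails, while a Raft-style majority commit at replication factor 3 acknowledges a write only once a second replica has it. This is an illustrative sketch, not PostgreSQL or YugabyteDB internals.

```python
# Purely illustrative: contrast asynchronous leader-follower acknowledgement
# with a Raft-style majority commit at replication factor 3.

def async_ack(leader_log: list, follower_log: list, entry: str) -> bool:
    """Asynchronous replication: the leader acknowledges as soon as it has the
    entry locally; the follower receives it some time later, so a leader crash
    in that window loses an acknowledged write."""
    leader_log.append(entry)
    return True  # follower_log is intentionally untouched at ack time

def majority_ack(replica_logs: list, entry: str) -> bool:
    """Raft-style commit: acknowledge only after a majority of replicas
    (the leader plus at least one follower for RF=3) have appended the entry;
    the remaining replica catches up asynchronously."""
    needed = len(replica_logs) // 2 + 1
    acks = 0
    for log in replica_logs:
        log.append(entry)        # in a real system this is an RPC that can fail
        acks += 1
        if acks >= needed:
            return True
    return False

if __name__ == "__main__":
    leader, follower = [], []
    async_ack(leader, follower, "txn-1")
    print(follower)              # [] -> the acked write is lost if the leader dies now

    replicas = [[], [], []]      # one leader and two followers
    majority_ack(replicas, "txn-1")
    print(sum("txn-1" in log for log in replicas))   # 2 -> survives any single node failure
```
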
docs/content/preview/faq/comparisons/vitess.md (2 changes: 1 addition & 1 deletion)
@@ -21,4 +21,4 @@ While Vitess presents a single logical SQL database to clients, it does not supp

## Lack of continuous availability

- Vitess does not make any enhancements to the asynchronous master-slave replication architecture of MySQL. For every shard in the Vitess cluster, another slave instance has to be created and replication has to be maintained. The end result is that Vitess cannot guarantee continuous availability during failures. Spanner-inspired distributed SQL databases like YugabyteDB solve this replication problem at the core using Raft distributed consensus at a per-shard level for both data replication and leader election.
+ Vitess does not make any enhancements to the asynchronous leader-follower replication architecture of MySQL. For every shard in the Vitess cluster, another follower instance has to be created and replication has to be maintained. The end result is that Vitess cannot guarantee continuous availability during failures. Spanner-inspired distributed SQL databases like YugabyteDB solve this replication problem at the core using Raft distributed consensus at a per-shard level for both data replication and leader election.

@@ -23,7 +23,7 @@ This section describes how replication works in DocDB. The data in a DocDB table

There are other advanced replication features in YugabyteDB. These include two forms of asynchronous replication of data:

- * **xCluster replication** Data is asynchronously replicated between different YugabyteDB clusters - both unidirectional replication (master-slave) or bidirectional replication across two clusters.
+ * **xCluster replication** Data is asynchronously replicated between different YugabyteDB clusters - both unidirectional replication (leader-follower) or bidirectional replication across two clusters.
* **Read replicas** The in-cluster asynchronous replicas are called read replicas.

<div class="row">
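
The unidirectional versus bidirectional distinction in the xCluster bullet above can be pictured with a generic sketch of asynchronous change shipping between two clusters. All class and function names here are made up for illustration and do not reflect the actual xCluster implementation.

```python
# Generic illustration of unidirectional vs bidirectional asynchronous
# replication between two clusters; not YugabyteDB's actual xCluster code.

class Cluster:
    def __init__(self, name: str):
        self.name = name
        self.data = {}        # committed key/value state
        self.pending = []     # local changes not yet shipped to the peer

    def write(self, key: str, value: str) -> None:
        self.data[key] = value
        self.pending.append((key, value))

def ship_changes(source: Cluster, target: Cluster) -> None:
    """Asynchronously apply the source's buffered changes on the target."""
    for key, value in source.pending:
        target.data[key] = value   # applied directly, so it is not re-shipped back
    source.pending.clear()

if __name__ == "__main__":
    a, b = Cluster("A"), Cluster("B")

    # Unidirectional: only changes from A are shipped to B.
    a.write("k1", "from-A")
    ship_changes(a, b)

    # Bidirectional: changes are shipped both ways, A -> B and B -> A.
    b.write("k2", "from-B")
    ship_changes(b, a)
    print(a.data, b.data)   # both clusters end up with k1 and k2
```
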
@@ -17,7 +17,7 @@ This section describes how replication works in DocDB. The data in a DocDB table

YugabyteDB also provides other advanced replication features. These include two forms of asynchronous replication of data:

- * **xCluster** Data is asynchronously replicated between different YugabyteDB universes - both unidirectional replication (master-slave) or bidirectional replication across two universes.
+ * **xCluster** Data is asynchronously replicated between different YugabyteDB universes - both unidirectional replication (leader-follower) or bidirectional replication across two universes.
* **Read replicas** The in-universe asynchronous replicas are called read replicas.

The YugabyteDB synchronous replication architecture is inspired by <a href="https://research.google.com/archive/spanner-osdi2012.pdf">Google Spanner</a>.

@@ -17,7 +17,7 @@ This section describes how replication works in DocDB. The data in a DocDB table

YugabyteDB also provides other advanced replication features. These include two forms of asynchronous replication of data:

- * **xCluster** - Data is asynchronously replicated between different YugabyteDB universes - both unidirectional replication (master-slave) or bidirectional replication across two universes.
+ * **xCluster** - Data is asynchronously replicated between different YugabyteDB universes - both unidirectional replication (leader-follower) or bidirectional replication across two universes.
* **Read replicas** - The in-universe asynchronous replicas are called read replicas.

The YugabyteDB synchronous replication architecture is inspired by <a href="https://research.google.com/archive/spanner-osdi2012.pdf">Google Spanner</a>.
