etcd: feedback on etcd-member doc.
pop committed May 16, 2017
1 parent 65f8ff7 commit eee8dcc
Showing 2 changed files with 37 additions and 28 deletions.
etcd/getting-started-with-etcd-manually.md (63 changes: 36 additions, 27 deletions)

# Manual Configuration of etcd3 on Container Linux

The etcd v3 binary is not slated to ship with Container Linux. With this in mind, you might be wondering, how do I run the newest etcd on my Container Linux node? The short answer: systemd and rkt.

**Before we begin:** if you are able to use Container Linux Configs [to provision your Container Linux nodes][easier-setup], you should go that route. Use this guide only if you must set up etcd the *hard* way.

This tutorial outlines how to set up the newest version of etcd on a Container Linux cluster using the `etcd-member` systemd service. This service spawns a rkt container which houses the etcd process.

This guide assumes some familiarity with etcd operations and that you have at least skimmed the [etcd clustering guide][etcd-clustering].

We will deploy a simple two-node etcd v3 cluster on two local virtual machines. This tutorial does not cover setting up TLS; however, the principles and commands in the [etcd clustering guide][etcd-clustering] carry over into this workflow.

| Node | IP address | etcd name |
| ---- | ---------- | --------- |
| 0 | 192.168.100.100 | my-etcd-0 |
| 1 | 192.168.100.101 | my-etcd-1 |

These IP addresses are visible from within the two machines as well as from the host machine. Once the VMs are set up, you should be able to `ping 192.168.100.100` and `ping 192.168.100.101` from the host.
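
For example, a quick reachability check from the host, using the example addresses above:

```sh
# a reply such as "64 bytes from 192.168.100.100 ..." means the VM is up
ping -c 3 192.168.100.100
ping -c 3 192.168.100.101
```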

SSH into your first node, run `systemctl edit etcd-member`, and paste the following code into the editor:

```ini
[Service]
Environment="ETCD_OPTS=\
--initial-cluster-state=\"new\""
```

This creates a systemd unit *override* and opens the new, empty file in `vi`. Paste the above code into the editor and type `:wq` to save it.
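
Under the hood, `systemctl edit` saves this text as a drop-in file named `override.conf`. If you want to double-check what was written, you can inspect it directly (the path below is the standard systemd drop-in location for this unit):

```sh
cat /etc/systemd/system/etcd-member.service.d/override.conf
```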

Replace:

| Variable | Value |
| -------- | ----- |
| `my-etcd-1` | The other node's name. |
| `f7b787ea26e0c8d44033de08c2f80632` | The discovery token obtained from https://discovery.etcd.io/new?size=2 (generate your own!). |

> To create a cluster of more than 2 nodes, set `size=#`, where `#` is the number of nodes you wish to create; otherwise, any extra nodes will become proxies.
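
For example, a fresh discovery URL for a three-node cluster can be requested from the public discovery service referenced in the table above:

```sh
# prints a URL of the form https://discovery.etcd.io/<token>
curl -w '\n' 'https://discovery.etcd.io/new?size=3'
```
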
1. Edit the service override.
2. Save the changes.
3. Run `systemctl daemon-reload`.
4. Do the same on the other node, swapping the names and IP addresses appropriately. It should look something like this:


```ini
[Service]
Environment="ETCD_OPTS=\
--initial-cluster-state=\"new\""
```

Note that the arguments used in this configuration file are the same as those passed to the etcd binary when starting a cluster. For help and sanity checks, refer to the [etcd clustering guide][etcd-clustering].
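
For instance, stripped of the systemd and rkt wrapping, node 0's settings correspond roughly to a direct invocation like the one below. The flags are standard etcd v3 clustering flags filled in with the placeholder values from the table above; treat this as an illustration, not a command taken verbatim from this guide (the token, for example, could just as well be wired up through `--discovery`).

```sh
etcd --name my-etcd-0 \
  --listen-client-urls http://192.168.100.100:2379 \
  --advertise-client-urls http://192.168.100.100:2379 \
  --listen-peer-urls http://192.168.100.100:2380 \
  --initial-advertise-peer-urls http://192.168.100.100:2380 \
  --initial-cluster my-etcd-0=http://192.168.100.100:2380,my-etcd-1=http://192.168.100.101:2380 \
  --initial-cluster-token f7b787ea26e0c8d44033de08c2f80632 \
  --initial-cluster-state new
```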

## Verification

1. To verify that the services have been configured, run `systemctl cat etcd-member` on the manually configured nodes. This prints the service file and its override conf to the screen. You should see the overrides on both nodes.

2. To enable the service on boot, run `systemctl enable etcd-member` on all nodes.

3. To start the service, run `systemctl start etcd-member`. This command may take a while to complete because it is downloading a rkt container and setting up etcd.

If the last command hangs for a very long time (10+ minutes), press Ctrl+C to exit it and run `journalctl -xef`. If the output contains something like `rafthttp: request cluster ID mismatch (got 7db8ba5f405afa8d want 5030a2a4c52d7b21)`, there is existing data on the nodes. Since we are starting completely new nodes, wipe the existing data and restart the service by running the following on both nodes:

```sh
$ rm -rf /var/lib/etcd
$ etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379"
true
```
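
To exercise the v3 API specifically, a quick write and read against the same endpoints looks like this (the key and value are arbitrary examples):

```sh
# "put" prints OK; "get" echoes the key and its value
ETCDCTL_API=3 etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" put foo bar
ETCDCTL_API=3 etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" get foo
```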

There you have it! You have now set up etcd v3 by hand. Pat yourself on the back. Take five.

## Troubleshooting

If you got your etcd cluster into a non-working state while setting it up, you have a few options:

* Reference the [runtime configuration guide][runtime-guide].
* Reset your environment.

Since etcd is running in a container, the second option is very easy.

Run the following commands on the Container Linux nodes:


1. `systemctl stop etcd-member` to stop the service.
2. `systemctl status etcd-member` to verify the service has exited. The output should look like:

```sh
$ systemctl stop etcd-member
$ systemctl status etcd-member
● etcd-member.service - etcd (System Application Container)
Loaded: loaded (/usr/lib/systemd/system/etcd-member.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/etcd-member.service.d
Docs: https://github.com/coreos/etcd
```

3. `rm -rf /var/lib/etcd2` to remove the etcd v2 data.
4. `rm -rf /var/lib/etcd` to remove the etcd v3 data.

> If you set a custom data directory for the etcd-member service, you will need to run a modified `rm` command.
5. Edit the etcd-member service with `systemctl edit etcd-member`.
6. Restart the etcd-member service with `systemctl start etcd-member`.
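
Taken together, a reset of one node looks roughly like the following sketch (it assumes the default data directories; adjust the paths if you configured a custom data directory):

```sh
systemctl stop etcd-member           # stop the service
rm -rf /var/lib/etcd2 /var/lib/etcd  # wipe the v2 and v3 data
systemctl edit etcd-member           # fix the override as needed
systemctl start etcd-member          # start etcd again
journalctl -u etcd-member -f         # watch the logs to confirm a clean start
```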

[runtime-guide]: https://coreos.com/etcd/docs/latest/op-guide/runtime-configuration.html
[etcd-clustering]: https://coreos.com/etcd/docs/latest/op-guide/clustering.html

etcd/getting-started-with-etcd.md (2 changes: 1 addition, 1 deletion)

```yaml
etcd:
  ...
  initial_cluster_state: new
```

If you are unable to provision your machine using Container Linux configs, refer to [Setting up etcd v3 on Container Linux "by hand"][by-hand].

## Reading and writing to etcd
