HArmadillium

Hardware:

Ubuntu

Required: Ubuntu repository #[Ubuntu 24.04 LTS Noble]

Wiring


note: the Deadsnakes PPA already supports Ubuntu 24.04 (Noble)

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.11
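To confirm the interpreter installed from the PPA, a quick version check can be run on each node (a minimal sketch; the exact patch version will vary):

python3.11 --version
#Python 3.11.x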
Alternative Python Setup:

High Availability Required Packages:

#[Ubuntu 24.04 LTS Noble]
sudo apt install corosync pacemaker fence-agents crmsh pcs* cluster-glue ufw nginx haveged heartbeat openssh-server openssh-client
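Optionally, verify that the core cluster tools were pulled in by the install above (a quick check; version numbers will differ by release):

corosync -v
pacemakerd --version
crm --version
pcs --version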

Overview:

Getting Wiki:

Static IP

Set up a static IP address on each node

Host

  • edit the hosts file on each node
sudo nano /etc/hosts

armadillium01 host

127.0.0.1       localhost
127.0.1.1       armadillium01

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.1.142 armadillium02
192.168.1.143 armadillium03
192.168.1.144 armadillium04
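The other nodes follow the same pattern: their own hostname on 127.0.1.1 and the remaining peers listed explicitly (IPv6 lines unchanged). For example, a matching /etc/hosts for armadillium02 might look like this, based on the addressing used above:

127.0.0.1       localhost
127.0.1.1       armadillium02

192.168.1.141 armadillium01
192.168.1.143 armadillium03
192.168.1.144 armadillium04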

host setup


SSH

SSH connection to communicate with all nodes

OpenSSH Server -- Docs -- Install the required packages on each node and check the Ubuntu repository

  • FROM armadillium01 TO armadillium02
ssh armadillium02@192.168.1.142
sudo apt install corosync pacemaker fence-agents crmsh pcs* cluster-glue ufw nginx haveged heartbeat openssh-server openssh-client
  • openssh-server: OpenSSH server application and related support files.
  • openssh-client: OpenSSH client applications on your Ubuntu system
-- #01 -- #02 -- #03 -- #04
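Optionally, passwordless SSH between the nodes makes the later scp steps smoother. A minimal sketch, assuming the same user accounts shown above:

#armadillium01
ssh-keygen -t ed25519
ssh-copy-id armadillium02@192.168.1.142
ssh-copy-id armadillium03@192.168.1.143
ssh-copy-id armadillium04@192.168.1.144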

UFW

Firewall Rules TO each node

-Description: The Uncomplicated FireWall is a front-end for iptables, to make managing a Netfilter firewall easier. It provides a command line interface with syntax similar to OpenBSD's Packet Filter. It is particularly well-suited as a host-based firewall.

sudo ufw allow from 192.168.1.141
sudo ufw allow from 192.168.1.142
sudo ufw allow from 192.168.1.143
sudo ufw allow from 192.168.1.144
sudo ufw allow ssh
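After adding the rules, the firewall can be enabled and inspected (make sure the ssh rule is in place first so the current session is not cut off):

sudo ufw enable
sudo ufw status numbered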

firewall rules


High Availability

Corosync

  • Corosync cluster engine daemon and utilities
The Corosync Cluster Engine is a Group Communication System with additional features for implementing high availability within applications.
The project provides four C Application Programming Interface features:
  • A closed process group communication model with virtual synchrony guarantees for creating replicated state machines.
  • A simple availability manager that restarts the application process when it has failed.
  • A configuration and statistics in-memory database that provide the ability to set, retrieve, and receive change notifications of information.
  • A quorum system that notifies applications when quorum is achieved or lost.

Corosync Configuration File: repeat this TO each node

sudo rm /etc/corosync/corosync.conf
sudo nano /etc/corosync/corosync.conf

corosync configuration file:

totem {
  version: 2
  cluster_name: HArmadillium
  transport: udpu
  interface {
   ringnumber: 0
   bindnetaddr: 192.168.1.140
   broadcast: yes
   mcastport: 5405
 }
}
nodelist {
  node {
    ring0_addr: 192.168.1.141
    name: armadillium01
    nodeid: 1
  }
  node {
    ring0_addr: 192.168.1.142
    name: armadillium02
    nodeid: 2
  }
  node {
    ring0_addr: 192.168.1.143
    name: armadillium03
    nodeid: 3
  }
  node {
    ring0_addr: 192.168.1.144
    name: armadillium04
    nodeid: 4
  }
}
logging {
  to_logfile: yes
  logfile: /var/log/corosync/corosync.log
  to_syslog: yes
  timestamp: on
}
service {
  name: pacemaker
  ver: 1
}
sudo service corosync start
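Once corosync is running, ring status and cluster membership can be checked with the bundled tools (a quick sanity check; output varies per node):

sudo corosync-cfgtool -s
sudo corosync-cmapctl | grep members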


Corosync-keygen Authorize

  • FROM armadillium01 create corosync key :
#armadillium01 
sudo corosync-keygen
  • secure copy (scp) the corosync authkey FROM armadillium01 TO #armadillium02 #armadillium03 #armadillium04 IN the /tmp directory
sudo scp /etc/corosync/authkey armadillium02@192.168.1.142:/tmp #02
sudo scp /etc/corosync/authkey armadillium03@192.168.1.143:/tmp #03
sudo scp /etc/corosync/authkey armadillium04@192.168.1.144:/tmp #04
  • connect via ssh and move the copied file FROM the /tmp directory TO the /etc/corosync directory
#connect(ssh) to armadillium02 
ssh armadillium02@192.168.1.142 #02
sudo mv /tmp/authkey /etc/corosync
sudo chown root: /etc/corosync/authkey
sudo chmod 400 /etc/corosync/authkey
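The same move/chown/chmod steps are repeated on armadillium03 and armadillium04, and corosync can then be restarted so each node picks up the key. For example on armadillium03 (a sketch mirroring the commands above):

ssh armadillium03@192.168.1.143 #03
sudo mv /tmp/authkey /etc/corosync
sudo chown root: /etc/corosync/authkey
sudo chmod 400 /etc/corosync/authkey
sudo service corosync restart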

corosync setup


CRM

Consider this configuration tool as an alternative to PCS.

Cluster Setup


PCS

  • PCS Pacemaker Configuration System

-Description: It permits users to easily view, modify and create pacemaker based clusters. pcs also provides pcsd, which operates as a GUI and remote server for PCS.

sudo service pcsd start

PCS Create Password and authenticate localhost

#armadillium01
sudo passwd hacluster

authenticate localhost

sudo pcs client local-auth
#Username: hacluster
#Password: 
#localhost: Authorized

PCS AUTH authorize/authenticate host

#armadillium01
sudo pcs host auth armadillium01 armadillium02 armadillium03 armadillium04
#Username: hacluster
#Password:
#armadillium01: Authorized
#armadillium02: Authorized
#armadillium03: Authorized
#armadillium04: Authorized

ClusterLabs (3.3.2. Enable pcs Daemon)

  • Disable STONITH
sudo pcs property set stonith-enabled=false
  • Ignore Quorum policy
sudo pcs property set no-quorum-policy=ignore
sudo apt install resource-agents-extra
sudo pcs resource create webserver ocf:heartbeat:nginx configfile=/etc/nginx/nginx.conf op monitor timeout="5s" interval="5s"

ClusterLabs Resource Agents

  • Create PCS FLOATING IP Resource:
sudo pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.140 cidr_netmask=32 op monitor interval=30s
#crm configure primitive virtual_ip ocf:heartbeat:IPaddr2 params ip="192.168.1.140" cidr_netmask="32" op monitor interval="10s" meta migration-threshold="10"
Constraint:
sudo pcs constraint colocation add webserver with virtual_ip INFINITY
sudo pcs constraint order webserver then virtual_ip
#Adding webserver virtual_ip (kind: Mandatory) (Options: first-action=start then-action=start)
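The resulting resources and constraints can be reviewed afterwards (a quick check; exact output depends on the pcs version):

sudo pcs status resources
sudo pcs constraint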

note:

sudo pcs cluster start --all
sudo pcs cluster enable --all
#armadillium01: Starting Cluster...
#armadillium02: Starting Cluster...
#armadillium03: Starting Cluster...
#armadillium04: Starting Cluster...
#armadillium01: Cluster Enabled
#armadillium02: Cluster Enabled
#armadillium03: Cluster Enabled
#armadillium04: Cluster Enabled
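A one-shot view of the cluster after starting it can be taken with crm_mon (part of Pacemaker); the four nodes should show as online, with the virtual_ip and webserver resources started on one of them:

sudo crm_mon -1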

Pacemaker

Cluster Resource Manager:

-Description: Pacemaker is a distributed finite state machine capable of co-ordinating the startup and recovery of inter-related services across a set of machines. Pacemaker understands many different resource types (OCF, SYSV, systemd) and can accurately model the relationships between them (colocation, ordering).

Run Pacemaker after corosync service: TO each node
sudo update-rc.d pacemaker defaults 20 01
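On a systemd-based release such as Ubuntu 24.04, the equivalent is to enable the units so Pacemaker is started after corosync at boot (a sketch of the systemd route; the unit ordering is handled by the packaged service files):

sudo systemctl enable corosync pacemaker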

PCMK

  • Create PCMK file
sudo mkdir /etc/corosync/service.d
sudo nano /etc/corosync/service.d/pcmk
  • add this
service {
  name: pacemaker
  ver: 1
}

Webserver

  • Nginx as Reverse Proxy
sudo apt install openssl nginx git -y

OpenSSL WebServer

[HTTPS]

self-signed certificate (HTTPS) with OpenSSL

git clone https://github.com/universalbit-dev/HArmadillium/
cd HArmadillium/ssl
sudo mkdir /etc/nginx/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/host.key -out /etc/nginx/ssl/host.cert --config distinguished.cnf
sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
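The generated certificate can be inspected to confirm its subject and validity window before wiring it into Nginx (a quick check):

sudo openssl x509 -in /etc/nginx/ssl/host.cert -noout -subject -dates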

Nginx Configuration File (default)

  • edit the Nginx default file
sudo rm /etc/nginx/sites-enabled/default
sudo nano /etc/nginx/sites-enabled/default
  • #armadillium01 Nginx configuration file:
#armadillium01 192.168.1.141
server {
    listen 80;
    listen [::]:80;
    server_name armadillium01;
    return 301 https://$host$request_uri;
}

server {
    listen 8001;
    server_name armadillium01;
    return 301 https://$host$request_uri;
}
    
upstream websocket {
    server 192.168.1.141;
    server 192.168.1.142;
    server 192.168.1.143;
    server 192.168.1.144;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name armadillium01;
    root /usr/share/nginx/html;
    ssl_certificate /etc/nginx/ssl/host.cert;
    ssl_certificate_key /etc/nginx/ssl/host.key;    

    location / {
            proxy_buffers 8 32k;
            proxy_buffer_size 64k;
            proxy_pass http://websocket;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 86400s;
            proxy_send_timeout 86400s;
    }
}
#armadillium01 systemd[1]: Started nginx.service - A high performance web server and a reverse proxy server.

webserver nginx node configuration file:

sudo service nginx start
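After editing the configuration on a node, it can be validated and reloaded without a full restart (standard Nginx tooling):

sudo nginx -t
sudo systemctl reload nginx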
Alternative to the Nginx webserver: (work in progress)

  • Troubleshooting:
Error:
Warning: Unable to read the known-hosts file: No such file or directory: '/var/lib/pcsd/known-hosts'
armadillium03: Unable to authenticate to armadillium03 - (HTTP error: 401)...
armadillium01: Unable to authenticate to armadillium01 - (HTTP error: 401)...
armadillium04: Unable to authenticate to armadillium04 - (HTTP error: 401)...
armadillium02: Unable to authenticate to armadillium02 - (HTTP error: 401)...
  • cause: PCSD service not started
  • fix: Start PCSD service
#armadillium02
ssh armadillium02@192.168.1.142
sudo service pcsd start
sudo service pcsd status
  • PCSD Status:
sudo pcs cluster status
  * armadillium03: Online
  * armadillium04: Online
  * armadillium02: Online
  * armadillium01: Online

  • Property List TO each node
sudo pcs property list
Example Working Output:
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: HArmadillium
dc-version: 2.0.5
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: false
HA cluster configured and ready to host something amazing

Resources: