Commit

Add
hapiman2 committed Sep 12, 2018
1 parent 7f17be6 commit f421f25
Showing 10 changed files with 196 additions and 40 deletions.
73 changes: 73 additions & 0 deletions elasticsearch/安装.md
```sh
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.tar.gz
tar -xvzf elasticsearch-6.4.0.tar.gz
mv elasticsearch-6.4.0 elasticsearch
```
### Configuration
- Master node configuration

```yml
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"

# Cluster name
cluster.name: gbl
# Node name
node.name: master
# Whether this node is master-eligible
node.master: true
network.host: 10.10.254.53
# HTTP port
http.port: 9200
# Data directory
path.data: /data/es
# Log directory
path.logs: /data/logs/es
```
Create the directories referenced above:
```sh
mkdir -p /data/es
mkdir -p /data/logs/es
```
- Slave node configuration
```yml
cluster.name: gbl
node.name: slave1
node.master: false
http.port: 9200
network.host: 10.10.254.54
discovery.zen.ping.unicast.hosts: ["10.10.254.53"]

# Data directory
path.data: /data/es
# Log directory
path.logs: /data/logs/es
```
Create the directories referenced above:
```sh
mkdir -p /data/es
mkdir -p /data/logs/es
```

### Start and stop
```sh
# Start as a daemon
./bin/elasticsearch -d
# Restart: find the process, kill it, then start again
ps -ef | grep elastic
kill -9 xxxx
./bin/elasticsearch -d
```
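A quick way to confirm a node is up and the cluster has formed (a minimal check against the HTTP port configured above; adjust the host to your own `network.host`):
```sh
# Basic node info
curl http://10.10.254.53:9200
# Cluster health; number_of_nodes should match the nodes you started
curl http://10.10.254.53:9200/_cluster/health?pretty
```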

### Problems
- Problem 1
`[1]: max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]`
Edit the file with `sudo vim /etc/security/limits.conf`:
```sh
hadoop soft nofile 65536
hadoop hard nofile 65536
```
Log out and log back in for the change to take effect.
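To confirm the new limit, check it as the user that runs elasticsearch after re-login:
```sh
# Should print 65536 (or higher)
ulimit -n
```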

- Problem 2
`[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]`
Edit the file with `sudo vi /etc/sysctl.conf`:
```sh
# Add this line
vm.max_map_count=655360
```
Apply the change with `sudo sysctl -p`.
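To confirm the kernel setting:
```sh
# Should print the value set above
sysctl vm.max_map_count
```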
26 changes: 25 additions & 1 deletion git/install.md
```sh
sudo vim /etc/profile
# Add the line export PATH=/usr/local/git/bin:$PATH to put git on the global PATH
source /etc/profile
```
If the change does not take effect, log out of the current session and back in.

### Problems
- Problem 1
```
Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at Makefile.PL line 3.
BEGIN failed--compilation aborted at Makefile.PL line 3.
make[1]: *** [perl.mak] Error 2
make: *** [perl/perl.mak] Error 2
```
Run the install command: `yum install perl-ExtUtils-MakeMaker`

- Problem 2
```
GIT_VERSION = 2.9.5
* new build flags
CC credential-store.o
In file included from cache.h:4:0,
from credential-store.c:1:
git-compat-util.h:280:25: fatal error: openssl/ssl.h: No such file or directory
#include <openssl/ssl.h>
^
compilation terminated.
make: *** [credential-store.o] Error 1
```
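The build stops because the OpenSSL headers are missing. On a yum-based system like the one above, installing the development package is the usual fix (a sketch, not taken from the original doc):
```sh
# Provides openssl/ssl.h, then re-run the build
sudo yum install -y openssl-devel
make
```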
20 changes: 15 additions & 5 deletions hadoop/hbase集群安装.md
```sh
export PATH=$PATH:$HBASE_HOME/bin
```
### Set up the `supergroup` group
```sh
sudo groupadd supergroup
sudo groupmems -g supergroup -a hbase
```
### Install hbase
```sh
sudo chown -R hbase.hbase /usr/local/hbase
# Create the log directory
sudo mkdir /data/logs/hbase
sudo chown -R hbase.hbase /data/logs/hbase
# Create the data directory
sudo mkdir /data/hbase
sudo chown -R hbase.hbase /data/hbase
```

### Copy the hadoop cluster's `hdfs-site.xml` and `core-site.xml` into `hbase/conf`

### Configure hbase-env.sh
```sh
# Do not let hbase manage the bundled zookeeper; the standalone cluster is used
export HBASE_MANAGES_ZK=false
export HBASE_LOG_DIR=/data/logs/hbase
```

```xml
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/data/hbase/zookeeper</value>
</property>
<property>
<!-- Distributed mode switch -->
<property>
<!-- zookeeper cluster addresses -->
<name>hbase.zookeeper.quorum</name>
<value>fjx-ofckv-72-238,fjx-ofckv-72-237,fjx-ofckv-72-236</value>
</property>
<property>
<name>hbase.table.sanity.checks</name>
<value>false</value>
</property>
</configuration>
```
**Note**: sync both files above to every machine in the cluster (see the sketch below).
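A minimal sketch of that sync, assuming the hostnames used elsewhere in this repo and passwordless ssh for the `hbase` user (both assumptions):
```sh
# Push hbase-env.sh and hbase-site.xml to the other nodes
for host in fjx-ofckv-72-237 fjx-ofckv-72-236; do
  scp /usr/local/hbase/conf/hbase-env.sh /usr/local/hbase/conf/hbase-site.xml \
      hbase@"$host":/usr/local/hbase/conf/
done
```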

### Manual start
```sh
# Start the Master
$HBASE_HOME/bin/hbase-daemon.sh start master
```
40 changes: 39 additions & 1 deletion hadoop/phoenix安装.md
```sh
# Download the build that matches your hbase version
wget http://archive.apache.org/dist/phoenix/apache-phoenix-4.13.1-HBase-1.2/bin/apache-phoenix-4.13.1-HBase-1.2-bin.tar.gz
# Extract and move it under the hbase directory
tar -xvzf apache-phoenix-4.13.1-HBase-1.2-bin.tar.gz
mv apache-phoenix-4.13.1-HBase-1.2-bin phoenix
mv phoenix /usr/local/hbase
```

### Copy files
Copy the `phoenix-*-server.jar` file into the `/usr/local/hbase/lib` directory on every hbase node (see the sketch below).
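A sketch of that copy step, again assuming the hostnames from this repo and passwordless ssh (both assumptions); restart hbase afterwards so the phoenix coprocessors are loaded:
```sh
# Distribute the phoenix server jar to every hbase node
for host in fjx-ofckv-72-238 fjx-ofckv-72-237 fjx-ofckv-72-236; do
  scp /usr/local/hbase/phoenix/phoenix-*-server.jar hbase@"$host":/usr/local/hbase/lib/
done
```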

### Start
```sh
cp /usr/local/hbase/conf/hbase-site.xml /usr/local/hbase/phoenix/
```

**Problem 1**

Caused by: java.net.UnknownHostException: mycluster

`mycluster` is the cluster name. If the cluster is unhealthy, `phoenix` cannot resolve `mycluster` and therefore cannot connect to `hbase`.
The fix is to copy `hadoop`'s `core-site.xml` and `hdfs-site.xml` into `hbase`'s `/usr/local/hbase/conf` directory.

**Problem 2**

org.apache.phoenix.exception.PhoenixIOException: SYSTEM.CATALOG
Caused by: org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CATALOG

Stop hbase first (both HMaster and the regionservers), run `./hbase clean --cleanAll`, then restart (see the sketch below).
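Spelled out as commands (a sketch; note that `--cleanAll` wipes hbase's data in ZooKeeper and HDFS, so only use it on a cluster you can afford to reset):
```sh
# Stop hbase, wipe its state, then start it again
/usr/local/hbase/bin/stop-hbase.sh
/usr/local/hbase/bin/hbase clean --cleanAll
/usr/local/hbase/bin/start-hbase.sh
```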

**Problem 3**

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set `hbase.table.sanity.checks` to false at conf or table descriptor if you want to bypass sanity checks

Add the following directly to `hbase-site.xml`:
```xml
<property>
<name>hbase.table.sanity.checks</name>
<value>false</value>
</property>
```
`hbase.table.sanity.checks` controls hbase's table-descriptor sanity checks; when it is `true`, the checks below run in this order:
```
1. check max file size (hbase.hregion.max.filesize), minimum 2MB
2. check flush size (hbase.hregion.memstore.flush.size), minimum 1MB
3. check that coprocessors and other specified plugin classes can be loaded
4. check compression can be loaded
5. check encryption can be loaded
6. verify compaction policy
7. check that we have at least 1 CF
8. check blockSize
9. check versions
10. check minVersions <= maxVersions
11. check replication scope
12. check data replication factor; it can be 0 (the default) when the user has not explicitly set a value, in which case the file system's default replication factor is used.
```
12 changes: 6 additions & 6 deletions hadoop/zk集群安装.md
```sh
ZOOKEEPER_PREFIX="${ZOOBINDIR}/.."
```
4. Create the directories
```sh
sudo mkdir -p /data/zookeeper
sudo chown zookeeper.zookeeper /data/zookeeper
sudo mkdir -p /data/logs/zookeeper
sudo chown zookeeper.zookeeper /data/logs/zookeeper
```
5. In each server's `dataDir`, create a `myid` file containing that server's `Zookeeper id`, a number between 1 and 255 (see the example below).
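For example, on the host configured as `server.238` in `zoo.cfg` (the id must match that host's `server.N` entry):
```sh
# Write this node's id into dataDir/myid
echo 238 | sudo tee /data/zookeeper/myid
sudo chown zookeeper.zookeeper /data/zookeeper/myid
```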
```sh
$ZOOKEEPER_HOME/bin/zkServer.sh stop
cd /data/zookeeper && ls | grep -v myid | xargs rm -rf   # delete everything except myid
rm -rf /data/logs/zookeeper/*
# Configure $ZOOKEEPER_HOME/conf/zoo.cfg
# fjx-ofckv-72-238 / 237 / 236 are your hostnames
server.238=fjx-ofckv-72-238:2888:3888
server.237=fjx-ofckv-72-237:2888:3888
server.236=fjx-ofckv-72-236:2888:3888
```

### Add to autostart
32 changes: 18 additions & 14 deletions hadoop/集群安装-HA.md
```xml
<property>
<!-- List of namenodes -->
<name>dfs.ha.namenodes.mycluster</name>
<value>fjx-ofckv-72-238,fjx-ofckv-72-237</value>
</property>
<property>
<!-- RPC addresses of the namenodes -->
<name>dfs.namenode.rpc-address.mycluster.fjx-ofckv-72-238</name>
<value>fjx-ofckv-72-238:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.fjx-ofckv-72-237</name>
<value>fjx-ofckv-72-237:8020</value>
</property>
<property>
<!-- HTTP addresses of the namenodes -->
<name>dfs.namenode.http-address.mycluster.fjx-ofckv-72-238</name>
<value>fjx-ofckv-72-238:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.fjx-ofckv-72-237</name>
<value>fjx-ofckv-72-237:50070</value>
</property>
<property>
<!-- Shared edits directory on the journalnode cluster -->
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://fjx-ofckv-72-238:8485;fjx-ofckv-72-237:8485;fjx-ofckv-72-236:8485/mycluster</value>
</property>
<property>
<!-- DNS client configuration -->
```

```xml
<property>
<!-- zookeeper quorum list -->
<name>ha.zookeeper.quorum</name>
<value>fjx-ofckv-72-238:2181,fjx-ofckv-72-237:2181,fjx-ofckv-72-236:2181</value>
</property>
<property>
<name>ha.zookeeper.session-timeout.ms</name>
```
```sh
# Create the journalnode data directory
su - hadoop
sudo mkdir /data/hadoop/hdfs/jn
sudo chown hadoop.hadoop /data/hadoop/hdfs/jn

# Start the journalnode
$HADOOP_PREFIX/sbin/hadoop-daemon.sh start journalnode
```
### Problems encountered
1. The namenode would not start; the log showed ` ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Got error reading edit log input stream http://fjx-ofckv-72-236:8480/getJournal?jid=mycluster&segmentTxId=1&storageInfo=-60%3A2130050496%3A0%3ACID-f77b10ed-4f80-4ad6-a9f1-1166fd67bcfc; failing over to edit log http://fjx-ofckv-72-237:8480/getJournal?jid=mycluster&segmentTxId=1&storageInfo=-60%3A2130050496%3A0%3ACID-f77b10ed-4f80-4ad6-a9f1-1166fd67bcfc`
Fix: run `hdfs namenode -bootstrapStandby` on the `namenode` machine.
2. If the `datanode` lists shown under the two `namenode`s differ, sync the namenode metadata with
`scp -r /data/hadoop/hdfs/nn/* hadoop@254-53:/data/hadoop/hdfs/nn/`;
if a `datanode` is also broken, remove its local data with `rm -rf /data/hadoop/hdfs/dn/current/` and restart it with `/usr/local/hadoop/sbin/hadoop-daemon.sh --script hdfs start datanode` (see the sketch below).
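The same recovery written as commands (a sketch; `254-53` is the shorthand host used above and the paths follow this repo's layout):
```sh
# On the namenode with good metadata: copy it to the other namenode
scp -r /data/hadoop/hdfs/nn/* hadoop@254-53:/data/hadoop/hdfs/nn/

# On a broken datanode: clear its local state and restart the daemon
rm -rf /data/hadoop/hdfs/dn/current/
/usr/local/hadoop/sbin/hadoop-daemon.sh --script hdfs start datanode
```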
17 changes: 8 additions & 9 deletions hadoop/集群安装-非HA.md

### Create the hadoop user
```
useradd hadoop && passwd hadoop
# Login keys
# Add hadoop to sudoers
chmod u+w /etc/sudoers
hadoop ALL=NOPASSWD:ALL
cd
ssh-keygen -t rsa
# After generating a key pair on each server,
chmod 700 ~/.ssh && cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
# Make sure authorized_keys is identical on every machine so all machines can reach each other over ssh (see the sketch below)
```
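Instead of copying `authorized_keys` around by hand, `ssh-copy-id` achieves the same thing (a sketch with placeholder hostnames):
```sh
# Run on every machine as the hadoop user, once per peer host
for host in host1 host2 host3; do
  ssh-copy-id hadoop@"$host"
done
```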

### Install java
Install `java` as the `hadoop` user.

Download: [jdk-8u181-linux-x64.tar.gz](http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz?AuthParam=1536580563_bd197c67d1ce939bb53030d979b2c591)

Install steps:
```
tar -xvzf jdk-8u181-linux-x64.tar.gz
mv jdk1.8.0_181 java
sudo mv java /usr/local
```
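The PATH/JAVA_HOME setup is collapsed in this diff; a minimal sketch of what it usually looks like (the exact variables the original uses are an assumption):
```sh
# Add to /etc/profile, then reload with `source /etc/profile`
export JAVA_HOME=/usr/local/java
export PATH=$JAVA_HOME/bin:$PATH
```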

```xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://fjx-ofckv-72-238:8020</value>
</property>
</configuration>
```
```sh
# Start the namenode; jps should show something like: 1490 Jps / 1382 NameNode
/usr/local/hadoop/sbin/hadoop-daemon.sh --script hdfs start namenode
/usr/local/hadoop/sbin/hadoop-daemon.sh --script hdfs stop namenode
# Start the datanode; jps should show something like: 13281 DataNode / 15530 Jps
/usr/local/hadoop/sbin/hadoop-daemon.sh --script hdfs start datanode
/usr/local/hadoop/sbin/hadoop-daemon.sh --script hdfs stop datanode
```

### Problems encountered
3 changes: 3 additions & 0 deletions mq/kafka/linux安装kafka集群.md
**Download**

wget http://archive.apache.org/dist/kafka/1.0.0/kafka_2.12-1.0.0.tgz
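The file stops after the download; a sketch of the follow-up steps in the same style as the other installs in this repo (the target directory is an assumption):
```sh
# Extract and move under /usr/local
tar -xvzf kafka_2.12-1.0.0.tgz
mv kafka_2.12-1.0.0 kafka
sudo mv kafka /usr/local/
```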
9 changes: 7 additions & 2 deletions nodejs/linux上安装node.md
### Download
```sh
wget https://nodejs.org/dist/v8.11.4/node-v8.11.4-linux-x64.tar.xz
```
### Extract
```sh

tar -xvJf node-v8.11.4-linux-x64.tar.xz
mv node-v8.11.4-linux-x64 node
sudo mv node /usr/local/
```

### Set environment variables
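The body of this section is collapsed in the diff; a minimal sketch of the usual setup, mirroring the git install doc above (the exact lines the author used are an assumption):
```sh
# Add node to the global PATH, then reload the profile
sudo vim /etc/profile
# append: export PATH=/usr/local/node/bin:$PATH
source /etc/profile
```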

