<?xml version="1.0" encoding="utf-8"?>
<search>
<entry>
<title><![CDATA[kube-scheduler]]></title>
<url>%2F2019%2F01%2F08%2Fkube-scheduler%2F</url>
<content type="text"><![CDATA[Preface: these notes exist only to help my own understanding and memory; most of the material is taken from https://feisky.xyz/kubernetes-handbook/

kube-scheduler
kube-scheduler is responsible for scheduling Pods onto nodes in the cluster. It watches kube-apiserver for Pods that have not yet been assigned a Node, and then assigns a node to each of them according to the scheduling policy (by updating the Pod's NodeName field).

The scheduler has to weigh many factors:
fair scheduling
efficient resource utilization
QoS
affinity and anti-affinity
data locality
inter-workload interference
deadlines

Scheduling a Pod onto a specified Node]]></content>
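The last heading above, scheduling a Pod onto a specified node, can be illustrated with a minimal sketch. Assuming kubectl access to the cluster and a node actually named node01 (both assumptions made for illustration), setting spec.nodeName pins the Pod to that node and bypasses kube-scheduler entirely:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-node01     # hypothetical Pod name
spec:
  nodeName: node01          # assumed node name; with this set the Pod skips the scheduler and binds here
  containers:
  - name: nginx
    image: nginx
EOF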
<categories>
<category>k8s学习笔记</category>
</categories>
<tags>
<tag>k8s</tag>
</tags>
</entry>
<entry>
<title><![CDATA[tomcat为什么会内存溢出]]></title>
<url>%2F2018%2F12%2F06%2Ftomcat%E4%B8%BA%E4%BB%80%E4%B9%88%E4%BC%9A%E5%86%85%E5%AD%98%E6%BA%A2%E5%87%BA%2F</url>
<content type="text"><![CDATA[Preface
1. OutOfMemoryError: Java heap space
This is probably the most common out-of-memory error.

The analogy: the dining hall of a Shaxian Snacks shop = heap space; a customer sitting down to eat = a normal business request; a customer finishing and leaving = the request completing; clearing a table = GC reclaiming memory.

Ground rules for the analogy:
The hall is a fixed size, so it holds only so many tables.
A customer cannot be seated at a table that has not been cleared since the previous customer left.
A table cannot be cleared while a customer is still eating at it.
Tables are cleared either on the waiter's schedule or when no table is free.
When no table is free, service stops until every table that can be cleared has been cleared.
Customers never clear their own tables on the way out.

With that settled, here is how the analogy explains a Java heap space overflow.

Normally: a customer arrives -> the owner seats them -> they order and eat -> they leave -> the owner clears tables on a schedule. Everything runs in an orderly way. Then one day the food turns out to be good, the shop suddenly becomes popular, and the number of customers jumps. The shop is still the same shop and the table count is still the same: one customer sits down, another sits down; a customer finishes and leaves, the table is cleared, the next customer is seated. But now too many people arrive at once, every table is taken, more customers keep coming, and there is nowhere to seat them. The only option is to stop seating people, make them wait, and start clearing tables; customers keep arriving, the queue keeps growing, the wait keeps getting longer, until it passes what customers will tolerate, which corresponds to Tomcat no longer responding.

The underlying problem is simple: the number of customers the shop can serve is lower than the number of customers who want to eat.

You might say: just enlarge the shop and add more tables. That is exactly what increasing heap space does. It is indeed one fix, but it brings three problems:
the cost goes up a lot;
the plot of land is only so big, so the shop cannot keep growing;
even with more tables it may still not be enough, and when they fill up and a full clean-up starts, having more tables (each the same size) means the clean-up takes longer, so the pause in service grows and the experience gets worse (a bigger heap means longer full-GC pauses).

Option two is to make the tables smaller. If there are only 8-seat tables, a single customer still occupies an 8-seat table, so fewer customers can be served. Replace some 8-seat tables with 4-seat and 2-seat tables and more customers fit (in Tomcat terms, reduce the memory each request holds). Eventually the tables cannot get any smaller, so then what? Open a branch! That is option three: with more branches, correspondingly more customers can be served (scale out to more instances).]]></content>
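If the fix chosen is the "bigger shop" one (a larger heap), it helps to also capture what filled the heap. A minimal sketch, assuming a standard Tomcat layout where bin/setenv.sh is sourced by catalina.sh, with sizes picked purely for illustration:

# $CATALINA_BASE/bin/setenv.sh  (create the file if it does not exist)
# Fix the heap size and dump the heap on OOM so the overflow can be analysed afterwards.
export CATALINA_OPTS="-Xms1024m -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/tomcat-heap.hprof"

Restart Tomcat after adding the file; the resulting .hprof can then show whether the heap is full of live request data (the restaurant really is too small) or of objects that should have been collected (a leak).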
<categories>
<category>答女朋友问</category>
</categories>
<tags>
<tag>tomcat</tag>
<tag>内存</tag>
</tags>
</entry>
<entry>
<title><![CDATA[oracle12.2rac新增节点后出现undo被占用]]></title>
<url>%2F2018%2F11%2F29%2Foracle12-2rac%E6%96%B0%E5%A2%9E%E8%8A%82%E7%82%B9%E5%90%8E%E5%87%BA%E7%8E%B0undo%E8%A2%AB%E5%8D%A0%E7%94%A8%2F</url>
<content type="text"><![CDATA[Description
After adding a new node to an Oracle 12.2 RAC, the PDBs on the new node failed to start. Below is a quick record of how the problem was resolved; I will write it up in detail when I have time.

Symptoms:
After the node was created, the 泰尔 and 海银 PDBs (PDB1 and PDB2 below) could not be opened; they failed with an error that a particular undo tablespace was in use.
The 拓邦 PDB (PDB3) started, but automatically fell into restricted mode.

Resolution steps:
Since undo was reported as in use, I first checked whether the undo being held was the new node's default undo tablespace; it was not, and every instance had its own default undo tablespace.
I then considered releasing the undo. The undo held by PDB1 was not the default undo of any node, so I tried to drop it, but it could not be dropped because of uncommitted transactions (this part was never resolved).
The undo held by PDB2 was the undo used by node 2; even after switching node 2 to a different undo tablespace, the occupied undo still could not be dropped.
That raised the real question: why would the PDB on node 3 use another node's undo tablespace at all, when in theory it should use its own?
Checking the existing undo tablespaces from a working node showed that node 3's undo tablespace did not exist.
The alert log showed that node 3's undo tablespace had failed to be created when the instance was created.
My suspicion is that because node 3's undo creation failed, it fell back to an existing undo tablespace. (This still needs verification; PDB3's undo creation also failed, yet it did not try to use another node's undo and instead started in restricted mode, retrying the undo creation on every startup and logging an error in the alert log each time the creation failed.)
Manually creating an undo tablespace for node 3 and assigning it to node 3 allowed the instance to start; problem solved.]]></content>
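The final step, creating an undo tablespace for the new instance and assigning it to that instance, corresponds roughly to the following sketch, run as SYSDBA from a working node. The tablespace name UNDOTBS3, the +DATA disk group and the instance name orcl3 are placeholders, not the names from this environment:

sqlplus / as sysdba <<'EOF'
-- create an undo tablespace for the third instance ...
create undo tablespace UNDOTBS3 datafile '+DATA' size 1g autoextend on;
-- ... and make only that instance use it
alter system set undo_tablespace='UNDOTBS3' sid='orcl3';
EOF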
<categories>
<category>oracle</category>
<category>undo</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>RAC</tag>
<tag>UNDO</tag>
<tag>ISSUE</tag>
</tags>
</entry>
<entry>
<title><![CDATA[python常见错误]]></title>
<url>%2F2018%2F08%2F29%2Fpython%E5%B8%B8%E8%A7%81%E9%94%99%E8%AF%AF%2F</url>
<content type="text"><![CDATA[Preface: a record of some odd problems.

Python list traversal problem
alist=[1,2,3,4,5,6]
for i in alist:
    print i
    alist.remove(i)

Output:
1
3
5

Cause
Traversing with "for i in alist" iterates by index, while remove() works by value. alist starts as [1,2,3,4,5,6]. On the first pass the index is 0 and 1 is printed; after remove(1) the list becomes [2,3,4,5,6]. On the second pass the index is 1, but the element now at index 1 is 3, so 3 is printed and 2 is skipped. The remaining elements follow the same pattern.]]></content>
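A common fix is to iterate over a snapshot copy of the list so that removals do not shift the positions the loop is walking. A minimal, paste-able sketch (python3 syntax assumed, unlike the python2 snippet above):

python3 - <<'EOF'
alist = [1, 2, 3, 4, 5, 6]
for i in list(alist):    # iterate over a copy
    print(i)             # now prints all of 1..6
    alist.remove(i)      # mutate the original safely
print(alist)             # []
EOF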
<categories>
<category>python</category>
</categories>
<tags>
<tag>python</tag>
</tags>
</entry>
<entry>
<title><![CDATA[brainwave]]></title>
<url>%2F2018%2F06%2F26%2Fbrainwave%2F</url>
<content type="text"><![CDATA[To research:
Flashback Table
UNDO_RETENTION
Flashback Drop
Flashback Database
Media Recovery]]></content>
<categories>
<category>brainwave</category>
</categories>
<tags>
<tag>brainwave</tag>
</tags>
</entry>
<entry>
<title><![CDATA[mysql存储引擎学习笔记]]></title>
<url>%2F2018%2F05%2F22%2Fmysql%E5%AD%98%E5%82%A8%E5%BC%95%E6%93%8E%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0%2F</url>
<content type="text"></content>
<tags>
<tag>mysql</tag>
</tags>
</entry>
<entry>
<title><![CDATA[delete oracle dg config]]></title>
<url>%2F2018%2F04%2F24%2Fdelete-oracle-dg-config%2F</url>
<content type="text"><![CDATA[简述反向操作,取消oracle dataguard,恢复数据库为非dg状态 操作主库取消日志传输在主库执行 123alter system set log_archive_dest_state_2=defer;alter system set log_archive_dest_2=''alter system set log_archive_dest_state_2 = 'enable'; 备注:在这里,我的dg日志传输的参数配置在log_archive_dest_state_2,具体,根据自己的配置。可使用如下命令查询 1show parameter log_archive_dest 找到VALUE 为service的 前面的NAME 就是该参数名 主库取消FORCE LOGGING123456SQL> ALTER DATABASE NO FORCE LOGGING;Database altered.SQL> select log_mode,force_logging from v$database;LOG_MODE FORCE_LOGGING------------------------------------ ---------------------------------------------------------------------------------------------------------------------ARCHIVELOG NO 主库修改standby_file_management 为MANUAL123456SQL> alter system set standby_file_management = 'MANUAL';System altered.SQL> show parameter standby_file_management;NAME TYPE VALUE------------------------------------ --------------------------------- ------------------------------standby_file_management string MANUAL 修改归档日志的参数值,删除角色和DB_UNIQUE_NAME的指定原值: 123456SQL> show parameter log_archive_dest_1NAME TYPE VALUE------------------------------------ --------------------------------- ------------------------------log_archive_dest_1 string LOCATION=+ARCHIVE_LOG VALID_FO R=(ALL_LOGFILES,ALL_ROLES) DB_ UNIQUE_NAME=srmcloud 改为: 1234SQL> show parameter log_archive_dest_1NAME TYPE VALUE------------------------------------ --------------------------------- ------------------------------log_archive_dest_1 string LOCATION=+ARCHIVE_LOG 命令: 1alter system set log_archive_dest_1='LOCATION=+ARCHIVE_LOG'; 注:其中的LOCATION根据具体情况修改 设置FAL_SERVER为空12SQL> alter system set FAL_SERVER='';System altered. 设置log_archive_config为空1alter system set log_archive_config=''; 删除standby日志组123456789101112131415SQL> select group#,thread#,sequence#,archived,status from v$standby_log; GROUP# THREAD# SEQUENCE# ARCHIVED STATUS---------- ---------- ---------- --------- ------------------------------ 11 0 0 YES UNASSIGNED 12 0 0 YES UNASSIGNED 13 0 0 YES UNASSIGNED 14 0 0 YES UNASSIGNEDSQL> ALTER DATABASE DROP STANDBY LOGFILE GROUP 11;Database altered.SQL> ALTER DATABASE DROP STANDBY LOGFILE GROUP 12;Database altered.SQL> ALTER DATABASE DROP STANDBY LOGFILE GROUP 13;Database altered.SQL> ALTER DATABASE DROP STANDBY LOGFILE GROUP 14;Database altered.]]></content>
<categories>
<category>oracle</category>
<category>dg</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>dg</tag>
</tags>
</entry>
<entry>
<title><![CDATA[oracle 11.2.0.4 RAC PSU PATCH]]></title>
<url>%2F2018%2F03%2F06%2Foracle-11-2-0-4-RAC-PSU-PATCH%2F</url>
<content type="text"><![CDATA[简介 Key Value 操作系统 centos 7.4 数据库版本 11.2.0.4 集群 RAC 下载相关补丁使用如下链接下载opatch: Patch 6880880 使用如下链接下载GI PSU和DB PSU: PSU 下载最新的GI PSU 下载最新的DB PSU 阅读GI PSU的README文档解压p27107360_112040_Linux-x86-64.zip 阅读27107360下的README.html 要求OPatch的版本为11.2.0.3.6或更高 使用如下命令查看OPatch的版本 $ORACLE_HOME/OPatch/opatch version 版本为11.2.0.3.4,因此需要升级OPatch 升级OPatch两个节点GRID用户和ORACLE用户都做如下操作 12mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch.pre6880880 unzip -d $ORACLE_HOME <OPATCH-ZIP_LOCATION>p6880880_112000_Linux-x86-64.zip 解压补丁文件1unzip p27107360_112040_Linux-x86-64.zip 检查补丁之间有无冲突需要注意grid用户的$ORACLE_HOME 的权限 使用grid用户和oracle用户执行如下命令检测有无冲突 12cd 27107360$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./ GRID用户执行结果如下 ORACLE用户执行结果如下 节点1打补丁需要关闭节点1 grid用户 1$ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/software/gi_psu/27107360/ oracle 用户 1$ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/software/gi_psu/27107360/ 报错,补丁22502505需要安装11.2.0.4.0的oracle.usm 这里只需要打26925576即可 1$ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/software/gi_psu/27107360/26925576 输出略 。。。 节点2打补丁与节点1打补丁的操作一模一样 打完之后启动数据库集群 升级数据字典12345678910cd $ORACLE_HOME/rdbms/adminsqlplus /nologSQL> CONNECT / AS SYSDBASQL> @catbundle.sql psu applySQL> QUITcd $ORACLE_HOME/rdbms/adminsqlplus /nologSQL> CONNECT / AS SYSDBASQL> @utlrp.sql 重启数据库集群 验证补丁是否打成功各节点的GI HOME和ORACLE HOME都执行验证。 12$ cd $ORACLE_HOME/OPatch$ ./opatch lsinventory 数据库的验证。 1SQL> select * from dba_registry_history;]]></content>
<categories>
<category>oracle</category>
<category>rac</category>
<category>patch</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>rac</tag>
<tag>patch</tag>
</tags>
</entry>
<entry>
<title><![CDATA[shell 编程]]></title>
<url>%2F2018%2F01%2F12%2Fshell-%E7%BC%96%E7%A8%8B%2F</url>
<content type="text"><![CDATA[描述记录shell编程出现的坑和一些常用的语法 SHELL 参数 ${parameter:?word} 当parameter指示的参数没有被设值的时候,将会通过标准错误的方式显示word中的语句。 shell中各种括号的使用记录小括号(又称圆括号)单小括号 新开一个进程来执行命令 如图所示,括号中的命令将会新开一个子shell顺序执行,所以括号中的变量不能够被脚本余下的部分使用。括号中多个命令之间用分号隔开,最后一个命令可以没有分号,各命令和括号之间不必有空格。 作为$()来使用用于将()中的命令执行的结果替换回原来的位置 用于初始化数组。如:array=(a b c d) 双小括号 整数扩展。这种扩展计算是整数型的计算,不支持浮点型。((exp))结构扩展并计算一个算术表达式的值,如果表达式的结果为0,那么返回的退出状态码为1,或者 是”假”,而一个非零值的表达式所返回的退出状态码将为0,或者是”true”。若是逻辑判断,表达式exp为真则为1,假则为0 123456789101112[root@zabbix-server ~]# NUM=0[root@zabbix-server ~]# if (($NUM));then echo true;else echo false;fifalse[root@zabbix-server ~]# NUM=1[root@zabbix-server ~]# if (($NUM));then echo true;else echo false;fitrue[root@zabbix-server ~]# if ((1>2));then echo true;else echo false;fifalse[root@zabbix-server ~]# if ((1<2));then echo true;else echo false;fitrue[root@zabbix-server ~]# if ((1==2));then echo true;else echo false;fifalse 只要括号中的运算符、表达式符合C语言运算规则,都可用在\$((exp))中,甚至是三目运算符。作不同进位(如二进制、八进制、十六进制)运算时,输出结果全都自动转化成了十进制。 12[root@zabbix-server ~]# echo $((16#10))16 做数学运算 12345678910111213[root@zabbix-server ~]# a=1[root@zabbix-server ~]# ((a++))[root@zabbix-server ~]# echo $a2[root@zabbix-server ~]# ((a--))[root@zabbix-server ~]# echo $a1[root@zabbix-server ~]# ((a=a+2))[root@zabbix-server ~]# echo $a3[root@zabbix-server ~]# ((a=a*2))[root@zabbix-server ~]# echo $a6 常用于算术运算比较,双括号中的变量可以不使用$符号前缀。括号内支持多个表达式用逗号分开。 只要括号中的表达式符合C语言运算规则,比如可以直接使用for((i=0;i<5;i++)) 特殊公式1echo ${ZABBIX_SOURCE_DIR:=zabbix-3.4.8} 作用是如果ZABBIX_SOURCE_DIR不为空则输出$ZABBIX_SOURCE_DIR,否则输出zabbix-3.4.8,同时把zabbix-3.4.8赋值给ZABBIX_SOURCE_DIR。 执行命令时不走alias 关于Linux的拷贝命令我们都知道cp的参数 -f的意思是: -f, –force if an existing destination file cannot be opened, remove it and try again 也就是说-f可以覆盖目的目录下有的文件, 但你有没有发现过这种情况,即使使用了-f也无法覆盖? 原因何在呢? 默认cp命令是有别名(alias cp=’cp -i’)的,无法强制覆盖,即使你用 -f 参数也无法强制覆盖文件。 可以使用\cp 执行cp命令时不走alias \cp * -rf ../../test 网上还有一种解决方法:临时取消cp的alias 123>#unalias cp>#cp a /test/a> 常用的判断 –b 当file存在并且是块文件时返回真 -c 当file存在并且是字符文件时返回真 -d 当pathname存在并且是一个目录时返回真 -e 当pathname指定的文件或目录存在时返回真 -f 当file存在并且是正规文件时返回真 -g 当由pathname指定的文件或目录存在并且设置了SGID位时返回为真 -h 当file存在并且是符号链接文件时返回真,该选项在一些老系统上无效 -k 当由pathname指定的文件或目录存在并且设置了“粘滞”位时返回真 -p 当file存在并且是命令管道时返回为真 -r 当由pathname指定的文件或目录存在并且可读时返回为真 -s 当file存在文件大小大于0时返回真 -u 当由pathname指定的文件或目录存在并且设置了SUID位时返回真 -w 当由pathname指定的文件或目录存在并且可执行时返回真。一个目录为了它的内容被访问必然是可执行的。 -o 当由pathname指定的文件或目录存在并且被子当前进程的有效用户ID所指定的用户拥有时返回真。 -eq 等于 -ne 不等于 -gt 大于 -lt 小于 -le 小于等于 -ge 大于等于 -z 空串 = 两个字符相等 != 两个字符不等 -n 非空串 常用的判断命令是否存在123$ command -v foo >/dev/null 2>&1 || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }$ type foo >/dev/null 2>&1 || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }$ hash foo 2>/dev/null || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }]]></content>
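The two expansions described at the top of this entry, ${parameter:?word} and ${parameter:=default}, can be exercised with a short self-contained sketch (REQUIRED_DIR is a variable made up for illustration; ZABBIX_SOURCE_DIR is the one used in the entry):

#!/bin/bash
# ${VAR:=default} returns the default and assigns it when VAR is unset or empty
echo "${ZABBIX_SOURCE_DIR:=zabbix-3.4.8}"    # prints zabbix-3.4.8
echo "$ZABBIX_SOURCE_DIR"                    # the assignment persisted

# ${VAR:?message} prints the message to stderr and aborts when VAR is unset or empty
echo "${REQUIRED_DIR:?REQUIRED_DIR must be set}"   # the script stops here unless REQUIRED_DIR is set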
<categories>
<category>shell</category>
</categories>
<tags>
<tag>shell</tag>
</tags>
</entry>
<entry>
<title><![CDATA[asmlib创建的磁盘在线扩容]]></title>
<url>%2F2018%2F01%2F11%2Fasmlib%E5%88%9B%E5%BB%BA%E7%9A%84%E7%A3%81%E7%9B%98%E5%9C%A8%E7%BA%BF%E6%89%A9%E5%AE%B9%2F</url>
<content type="text"><![CDATA[说明本环境的asm磁盘是通过asmlib创建的,不适用于scsi的磁盘。 新增磁盘为/dev/xvdd 操作过程分区1234567891011121314151617181920212223242526272829[root@srm-db ~]# fdisk /dev/xvdd Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabelBuilding a new DOS disklabel with disk identifier 0x36af11d0.Changes will remain in memory only, until you decide to write them.After that, of course, the previous content won't be recoverable.Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').Command (m for help): nCommand action e extended p primary partition (1-4)pPartition number (1-4): 1First cylinder (1-65270, default 1): Using default value 1Last cylinder, +cylinders or +size{K,M,G} (1-65270, default 65270): Using default value 65270Command (m for help): wThe partition table has been altered!Calling ioctl() to re-read partition table.Syncing disks.[root@srm-db ~]# 查看现有ASM磁盘123[root@srm-db ~]# oracleasm listdisksDATADATA2 添加新的asm磁盘12345678[root@srm-db ~]# oracleasm createdisk DATA3 /dev/xvdd1Writing disk header: doneInstantiating disk: done[root@srm-db ~]# oracleasm listdisksDATADATA2DATA3[root@srm-db ~]# 查看数据库中记录的asm磁盘12345678910111213141516171819202122232425262728293031323334353637383940[root@srm-db ~]# su - grid[grid@srm-db ~]$ sqlplus / as sysasm SQL*Plus: Release 12.2.0.1.0 Production on Thu Jan 11 19:42:07 2018Copyright (c) 1982, 2016, Oracle. All rights reserved.Connected to:Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit ProductionSQL> select name, path, mode_status, state, disk_number,failgroup from v$asm_disk;NAME------------------------------PATH--------------------------------------------------------------------------------MODE_ST STATE DISK_NUMBER FAILGROUP------- -------- ----------- ------------------------------/dev/oracleasm/disks/DATA3ONLINE NORMAL 0DATA_0000/dev/oracleasm/disks/DATAONLINE NORMAL 0 DATA_0000NAME------------------------------PATH--------------------------------------------------------------------------------MODE_ST STATE DISK_NUMBER FAILGROUP------- -------- ----------- ------------------------------DATA_0001/dev/oracleasm/disks/DATA2ONLINE NORMAL 1 DATA_0001SQL> 新增的asm磁盘为/dev/oracleasm/disks/DATA3 查看磁盘组名称12345678SQL> select group_number,name,TOTAL_MB, FREE_MB from v$asm_diskgroup 2 ;GROUP_NUMBER NAME TOTAL_MB FREE_MB------------ ------------------------------ ---------- ---------- 1 DATA 1023988 36588SQL> DATA磁盘组就是要扩容的磁盘组 扩容DATA磁盘组123456SQL> alter diskgroup DATA add disk '/dev/oracleasm/disks/DATA3' rebalance power 10;Diskgroup altered.SQL> 观察reblance1234567891011121314151617SQL> select * from v$asm_operation;GROUP_NUMBER OPERA PASS STAT POWER ACTUAL SOFAR EST_WORK------------ ----- --------- ---- ---------- ---------- ---------- ---------- EST_RATE EST_MINUTES ERROR_CODE CON_ID---------- ----------- -------------------------------------------- ---------- 1 REBAL COMPACT WAIT 10 10 0 0 0 0 0 1 REBAL REBALANCE RUN 10 10 5531 82304 2268 33 0 1 REBAL REBUILD DONE 10 10 0 0 0 0 0SQL> 当没有输出时,则表明reblance完成 修改reblance power1ALTER DISKGROUP DATA REBALANCE POWER 1;]]></content>
<categories>
<category>oracle</category>
<category>asm</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>asm</tag>
</tags>
</entry>
<entry>
<title><![CDATA[DG Broker配置]]></title>
<url>%2F2017%2F12%2F31%2FDG-Broker%E9%85%8D%E7%BD%AE%2F</url>
<content type="text"><![CDATA[前言Dgmgrl表示Data Guard Manager Command Line Interface,用来管理维护Dataguard,而且该命令系统自带不需要额外安装,命令简单易上手,容易学习,比sqlplus用来更加简单一些. 本文参考了如下几篇文档: https://community.oracle.com/docs/DOC-1007327 http://blog.csdn.net/u011364306/article/details/50523117 http://blog.csdn.net/u011364306/article/details/50534654 配置注:本文的配置是基于我的另一篇文章Oracle11g-dataguard配置搭建出来的dg环境进行的配置 配置监听主备库都需要配置监听,需添加如下监听内容 主库(srmcloud): 12345678910111213141516171819LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = srmcloud)(PORT = 1521)) ) )SID_LIST_LISTENER= (SID_LIST = (SID_DESC = (GLOBAL_DBNAME = srmcloud) (ORACLE_HOME = /home/oracle/opt/oracle/product/11.2.0/dbhome_1) (SID_NAME = srmcloud) ) (SID_DESC = (GLOBAL_DBNAME = srmcloud_dgmgrl) (ORACLE_HOME = /home/oracle/opt/oracle/product/11.2.0/dbhome_1) (SID_NAME = srmcloud) ) ) 其中 12345(SID_DESC = (GLOBAL_DBNAME = srmcloud_dgmgrl) (ORACLE_HOME = /home/oracle/opt/oracle/product/11.2.0/dbhome_1) (SID_NAME = srmcloud) ) 为新添加的代码,GLOBAL_DBNAME一定要以dgmgrl结尾,备库一样,修改为如下内容 备库(srmclouddg): 12345678910111213141516171819LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = srmclouddg)(PORT = 1521)) ) )SID_LIST_LISTENER= (SID_LIST = (SID_DESC = (GLOBAL_DBNAME = srmclouddg) (ORACLE_HOME = /home/oracle/opt/oracle/product/11.2.0/dbhome_1) (SID_NAME = srmcloud) ) (SID_DESC = (GLOBAL_DBNAME = srmclouddg_dgmgrl) (ORACLE_HOME = /home/oracle/opt/oracle/product/11.2.0/dbhome_1) (SID_NAME = srmcloud) ) ) 注:添加完成后,重启主备库的监听 配置DG Broker multiplexed configuration files如下操作在主备库都需要执行 12345SQL> alter system set dg_broker_config_file1='/home/oracle/opt/oracle/oradata/srmcloud/dg_config1.dat' scope=both;System altered.SQL> alter system set dg_broker_config_file2='/home/oracle/opt/oracle/oradata/srmcloud/dg_config2.dat' scope=both;System altered.SQL> start the Broker如下操作在主备库都需要执行 123SQL> alter system set dg_broker_start=true scope=both;System altered.SQL> 这时可以查看后台启动了一个dmon进程 1234[oracle@srmcloud ~]$ ps -ef | grep dmonoracle 20464 1 0 05:54 ? 00:00:00 ora_dmon_srmcloudoracle 20579 18643 0 06:16 pts/1 00:00:00 grep dmon[oracle@srmcloud ~]$ 配置Broker Configuration12345678[oracle@srmcloud ~]$ dgmgrl sys/handhandDGMGRL for Linux: Version 11.2.0.4.0 - 64bit ProductionCopyright (c) 2000, 2009, Oracle. 
All rights reserved.Welcome to DGMGRL, type "help" for information.Connected.DGMGRL> create configuration srmcloud_srmclouddg_config as primary database is srmcloudconnect identifier is 'srmcloud';Configuration "srmcloud_srmclouddg_config" created with primary database "srmcloud"DGMGRL> 注: 其中identifier后面的内容是tns中配置的tnsname 有文章说在oracle12c中,由于配置了LOG_ARCHIVE_DEST_n的参数,导致报如下错误,Error: ORA-16698: LOG_ARCHIVE_DEST_n parameter set for object to be added,这种问题在11g中不会出现,至于12c是否真的会出现,可自行测试,解决办法就是将LOG_ARCHIVE_DEST_n参数置为空,等配置好之后再改回去。 Add the standby to the configuration123456789101112DGMGRL> add database srmclouddg as connect identifier is 'srmclouddg';Database "srmclouddg" addedDGMGRL>show configurationConfiguration - srmcloud_srmclouddg_config Protection Mode: MaxPerformance Databases: srmcloud - Primary database srmclouddg - Physical standby databaseFast-Start Failover: DISABLEDConfiguration Status:DISABLEDDGMGRL> Enable that Configuration.123456789101112DGMGRL> enable configuration;Enabled.DGMGRL> show configurationConfiguration - srmcloud_srmclouddg_config Protection Mode: MaxPerformance Databases: srmcloud - Primary database srmclouddg - Physical standby databaseFast-Start Failover: DISABLEDConfiguration Status:SUCCESSDGMGRL> 验证验证切换12345678910111213141516171819202122DGMGRL> switchover to srmclouddg;Performing switchover NOW, please wait...Operation requires a connection to instance "srmcloud" on database "srmclouddg"Connecting to instance "srmcloud"...Connected.New primary database "srmclouddg" is opening...Operation requires startup of instance "srmcloud" on database "srmcloud"Starting instance "srmcloud"...ORACLE instance started.Database mounted.Database opened.Switchover succeeded, new primary is "srmclouddg"DGMGRL> show configuration;Configuration - srmcloud_srmclouddg_config Protection Mode: MaxPerformance Databases: srmclouddg - Primary database srmcloud - Physical standby databaseFast-Start Failover: DISABLEDConfiguration Status:SUCCESSDGMGRL> 验证手动关闭,并启动备机,自动启用日志应用之前需要手动启动日志应用才可以。 这里使用的是上一步切换过的主备库,主库为srmclouddg,备库为srmcloud 备库: 12345SQL> shutdown immediate;Database closed.Database dismounted.ORACLE instance shut down.SQL> 主库: 12345678910111213DGMGRL> show database srmcloudDatabase - srmcloud Role: PHYSICAL STANDBY Intended State: OFFLINE Transport Lag: (unknown) Apply Lag: (unknown) Apply Rate: (unknown) Real Time Query: OFF Instance(s): srmcloudDatabase Status:SHUTDOWNDGMGRL> 备库: 12345678910SQL> startup;ORACLE instance started.Total System Global Area 1553305600 bytesFixed Size 2253544 bytesVariable Size 1426066712 bytesDatabase Buffers 117440512 bytesRedo Buffers 7544832 bytesDatabase mounted.Database opened.SQL> 主库: 123456789101112tabase - srmcloud Role: PHYSICAL STANDBY Intended State: APPLY-ON Transport Lag: 0 seconds (computed 0 seconds ago) Apply Lag: 0 seconds (computed 0 seconds ago) Apply Rate: 0 Byte/s Real Time Query: ON Instance(s): srmcloudDatabase Status:SUCCESSDGMGRL> 可以看出,自动启用了日志应用,不需要在sqlplus中输入alter database recover managed standby database …类似的语句,并且备库自动启动为PHYSICAL STANDBY的角色,不需要手动指定,OPEN_MODE自动为READ ONLY WITH APPLY模式,也不需要手动指定]]></content>
<categories>
<category>oracle</category>
<category>dg</category>
<category>broker</category>
<category>dgmgrl</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>dg</tag>
<tag>broker</tag>
<tag>dgmgrl</tag>
</tags>
</entry>
<entry>
<title><![CDATA[Oracle11g dataguard配置]]></title>
<url>%2F2017%2F12%2F28%2FOracle11g-dataguard%E9%85%8D%E7%BD%AE%2F</url>
<content type="text"><![CDATA[前言DG的配置过程,参考了如下两篇文章: http://www.cnblogs.com/tippoint/archive/2013/04/18/3029019.html http://blog.itpub.net/29324876/viewspace-1246133/ 主库IP:10.211.55.6 备库IP:10.211.55.7 安装之前使用如下命令同步主备库的时间 1ntpdate -u 0.centos.pool.ntp.org 配置判断DG是否已安装12345SQL> select * from v$option where parameter = 'Oracle Data Guard';PARAMETER VALUE------------------- ---------Oracle Data Guard TRUESQL> 如果是true表示已经安装可以配置,否则需要安装相应组件。 设置主库为强制记录日志默认情况下数据库操作会记录redo log,但是在一些特定的情况下可以使用nologging来不生成redo信息 表的批量INSERT(通过/+APPEND /提示使用“直接路径插入“。或采用SQL*Loader直接路径加载)。表数据不生成redo,但是 所有索引修改会生成redo,但是所有索引修改会生成redo(尽管表不生成日志,但这个表上的索引却会生成redo!)。 LOB操作(对大对象的更新不必生成日志)。 通过CREATE TABLE AS SELECT创建表 各种ALTER TABLE操作,如MOVE和SPLIT 在一些表迁移和表空间迁移中,可以使用alter table a nologging;或者alter tablespace snk nologging;在操作完成后再修改回logging状态。 这里需要多说一句,如果你使用nologging导入大批量数据,以后对这些数据的修改会在redo或者archive log中,但是基准的数据是没有的,所以一旦介质损坏是无法完全恢复的,必须在使用nologging完成切换回logging后,做一次全备或者0级备份。 强制记录日志123SQL> alter database force logging;Database altered.SQL> 检查状态(YES为强制)12345SQL> select name,force_logging from v$database;NAME FORCE_LOG--------------------------- ---------SRMCLOUD YESSQL> 如果需要在主库添加或者删除数据文件时,这些文件也会在备份添加或删除,使用如下123SQL> alter system set standby_file_management = 'AUTO';System altered.SQL> 创建standby log files从库使用standby log files来保存从主库接收到的重做日志。既然主要是从库在使用,那为什么需要在主库上也建立standby log files ? 原因主要由两个:一是主库可能转换为备库,而备库是需要有standby log files的 二是如果主库建立了standby log files那备库会自动建立。 创建standby log files需要注意两点: standby log files的大小和redo log files一样 一般而言, standbyredo 日志文件组数要比 primary 数据库的 online redo 日志文件组数至少多一个。推荐 standbyredo 日志组数量基于 primary 数据库的线程数(这里的线程数可以理解为 rac 结构中的 rac节点数)。有一个推荐的公式可以做参考:(每线程的日志组数+1)最大线程数假设现在节点是1个,则=(3+1)1=4如果是双节点 则=(3+1)*2=8这里我们创建4个standby logfile: 查询redo日志大小1234567SQL> select group#,bytes/1024/1024 as M from v$log; GROUP# M---------- ---------- 1 100 2 100 3 100SQL> 这里是100M,三个 创建不建议组号group#紧挨着redo,因为后续redo有可能调整,这里我们从建立从11到14的standby logfile 123456789SQL> alter database add standby logfile group 11 '/home/oracle/opt/oracle/oradata/srmcloud/standby11.log' size 100M;Database altered.SQL> alter database add standby logfile group 12 '/home/oracle/opt/oracle/oradata/srmcloud/standby12.log' size 100M;Database altered.SQL> alter database add standby logfile group 13 '/home/oracle/opt/oracle/oradata/srmcloud/standby13.log' size 100M;Database altered.SQL> alter database add standby logfile group 14 '/home/oracle/opt/oracle/oradata/srmcloud/standby14.log' size 100M;Database altered.SQL> 创建密码文件并传输给备库一般数据库默认就有密码文件,存放在$ORACLE_HOME/dbs/orapwSID 这里为orapwsrmcloud 如果没有 1sql> orapwd file=$ORACLE_HOME/dbs/orapwsrmcloud password=oracle; 检查REMOTE_LOGIN_PASSWORDFILE值是否为 EXCLUSIVE: 1sql> show parameter REMOTE_LOGIN_PASSWORDFILE; 如果值不是EXCLUSIVE,则: 1sql> alter system set remote_login_passwordfile=exclusive scope=spfile; 如果存在或者创建完成,将密码文件传输到standby 库的对应目录,并授权 处理控制文件查看控制文件位置123456SQL> select name from v$controlfile;NAME--------------------------------------------------------------------------------/home/oracle/opt/oracle/oradata/srmcloud/control01.ctl/home/oracle/opt/oracle/oradata/srmcloud/control02.ctlSQL> 生成standby控制文件1234567891011121314151617SQL> shutdown immediateDatabase closed.Database dismounted.ORACLE instance shut down.SQL> startup mountORACLE instance started.Total System Global Area 1553305600 bytesFixed Size 2253544 bytesVariable Size 1426066712 bytesDatabase Buffers 117440512 bytesRedo Buffers 7544832 bytesDatabase mounted.SQL> alter database create standby controlfile as '/tmp/standby_control01.ctl';Database altered.SQL> alter database open;Database 
altered.SQL> 然后在备库建立对应的目录,并授权12[oracle@srmclouddg software]$ mkdir -p /home/oracle/opt/oracle/oradata/srmcloud/[oracle@srmclouddg software]$ chown oracle:oinstall /home/oracle/opt/oracle/oradata/srmcloud 拷贝主库的控制文件到备库 db_name和db_unique_name默认db_name和db_unique_name和实例名是一致的,这里是srmcloud。需要注意在DG中主库和从库的db_unique_name是不能一致的,需要区分开的。这里我们设置主库的db_unique_name为srmcloud,从库为srmclouddg。 查看db_unique_name12345SQL> show parameter db_unique_nameNAME TYPE VALUE-------------- --------------- ------------------ db_unique_name string srmcloudSQL> 设置db_unique_name123SQL> alter system set db_unique_name=srmcloud scope=spfile;System altered.SQL> 注意虽然默认db_unique_name和db_name是一致的,但是需要显式设置,否则在spfile中没有此参数 闪回略 SQL*NET设置配置主库监听(listener.ora)虽然可以通过netca来进行配置,但是除了这个默认的外,我们还需要一个静态注册SID_LIST_LISTENER,如果没有此从参数,而且dataguard启动顺序不正确,主库会报PING[ARC1]:Heartbeat failed to connect to standby ‘*‘.Error is 12514导致归档无法完成配置如下: 1234567891011121314LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = srmcloud)(PORT = 1521)) ) )SID_LIST_LISTENER= (SID_LIST = (SID_DESC = (GLOBAL_DBNAME = srmcloud) (ORACLE_HOME = /home/oracle/opt/oracle/product/11.2.0/dbhome_1) (SID_NAME = srmcloud) ) ) 配置tns配置如下: 123456789101112131415161718SRMCLOUD = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.211.55.6)(PORT = 1521)) ) (CONNECT_DATA = (SERVICE_NAME = srmcloud) ) )SRMCLOUDDG = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.211.55.7)(PORT = 1521)) ) (CONNECT_DATA = (SERVICE_NAME = srmclouddg) ) ) 传输到备库并修改listener.ora如下: 1234567891011121314LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = srmclouddg)(PORT = 1521)) ) )SID_LIST_LISTENER= (SID_LIST = (SID_DESC = (GLOBAL_DBNAME = srmclouddg) (ORACLE_HOME = /home/oracle/opt/oracle/product/11.2.0/dbhome_1) (SID_NAME = srmcloud) ) ) 日志传输配置查看是否启用归档1234567SQL> Archive log list;Database log mode No Archive ModeAutomatic archival DisabledArchive destination USE_DB_RECOVERY_FILE_DESTOldest online log sequence 35Current log sequence 37SQL> 启用归档并设置归档日志路径12345678910111213141516171819SQL> alter system set log_archive_dest_1='LOCATION=/home/oracle/opt/oracle/archive1 valid_for=(all_logfiles,primary_role) db_unique_name=srmcloud' scope=spfile;System altered.SQL> shutdown immediate;Database closed.Database dismounted.ORACLE instance shut down.SQL> startup mount;ORACLE instance started.Total System Global Area 1553305600 bytesFixed Size 2253544 bytesVariable Size 1426066712 bytesDatabase Buffers 117440512 bytesRedo Buffers 7544832 bytesDatabase mounted.SQL> alter database archivelog;Database altered.SQL> alter database open;Database altered.SQL> 配置归档日志到备份库123SQL> alter system set log_archive_dest_2='SERVICE=srmclouddg lgwr sync valid_for=(online_logfile,primary_role) db_unique_name=srmclouddg';System altered.SQL> 要注意STANDBY_ARCHIVE_DEST 参数不需要,已经被官方弃用。设置此参数后启动数据库,只会报 ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance 错。 配置FAL_SERVER这个参数指定当日志传输出现问题时,备库到哪里去找缺少的归档日志。它用在备库接收的到的重做日志间有缺口的时候。这种情况会发生在日志传输出现中断时,比如你需要对备库进行维护操作。在备库维护期间,没有日志传输过来,这时缺口就出现了。设置了这个参数,备库就会主动去寻找那些缺少的日志,并要求主库进行传输。你是主库,就填写:fal_server=从库从库上就反过来:fal_server=主库注意:FAL_CLIENT在11g中已经废弃,虽然可以配置但是已经不起作用了 12SQL> alter system set FAL_SERVER=srmclouddg;System altered. Data Guard 配置里的另外一个库的名字12SQL> alter system set log_archive_config = 'dg_config=(srmcloud,srmclouddg)';System altered. 
1SQL> create pfile from spfile; 手工修改pfile,如下: 123456789101112131415161718192021222324252627282930313233343536srmcloud.__db_cache_size=687865856srmcloud.__java_pool_size=16777216srmcloud.__large_pool_size=33554432srmcloud.__oracle_base='/home/oracle/opt/oracle'#ORACLE_BASE set from environmentsrmcloud.__pga_aggregate_target=469762048srmcloud.__sga_target=1090519040srmcloud.__shared_io_pool_size=0srmcloud.__shared_pool_size=335544320srmcloud.__streams_pool_size=0*.audit_file_dest='/home/oracle/opt/oracle/admin/srmcloud/adump'*.audit_trail='db'*.compatible='11.2.0.4.0'*.control_files='/home/oracle/opt/oracle/oradata/srmcloud/control01.ctl','/home/oracle/opt/oracle/oradata/srmcloud/control02.ctl'*.db_block_size=32768*.db_domain=''*.db_name='srmcloud'*.db_recovery_file_dest='/home/oracle/opt/oracle/fast_recovery_area'*.db_recovery_file_dest_size=5218762752*.db_unique_name='SRMCLOUD'*.diagnostic_dest='/home/oracle/opt/oracle'*.dispatchers='(PROTOCOL=TCP) (SERVICE=srmcloudXDB)'*.fal_server='SRMCLOUDDG'*.job_queue_processes=1000*.log_archive_config='dg_config=(srmcloud,srmclouddg)'*.log_archive_dest_1='LOCATION=/home/oracle/opt/oracle/archive1 valid_for=(all_logfiles,primary_role) db_unique_name=srmcloud'*.log_archive_dest_2='SERVICE=srmclouddg lgwr sync valid_for=(online_logfile,primary_role) db_unique_name=srmclouddg'*.log_archive_dest_state_1=enable*.log_archive_dest_state_2=enable*.log_archive_format='srmcloud_%t_%s_%r.dbf'*.memory_target=1546649600*.open_cursors=300*.processes=1500*.remote_login_passwordfile='EXCLUSIVE'*.sessions=1655*.standby_file_management='AUTO'*.undo_tablespace='UNDOTBS1' 12345678910111213141516SQL> shutdown immediate;Database closed.Database dismounted.ORACLE instance shut down.SQL> create spfile from pfile;File created.SQL> startup;ORACLE instance started.Total System Global Area 1553305600 bytesFixed Size 2253544 bytesVariable Size 1426066712 bytesDatabase Buffers 117440512 bytesRedo Buffers 7544832 bytesDatabase mounted.Database opened.SQL> 传输pfile到备库并修改只需修改如下项即可: 123456*.db_name='srmcloud'*.db_unique_name='SRMCLOUDDG'*.fal_server='SRMCLOUD'*.log_archive_config='dg_config=(srmclouddg,srmcloud)'*.log_archive_dest_1='LOCATION=/home/oracle/opt/oracle/archive1 valid_for=(all_logfiles,primary_role) db_unique_name=srmclouddg'*.log_archive_dest_2='SERVICE=srmcloud lgwr sync valid_for=(online_logfile,primary_role) db_unique_name=srmcloud' 建立相关目录 1234567891011[root@srmclouddg srmcloud]# mkdir -p /home/oracle/opt/oracle/admin/srmcloud/[root@srmclouddg srmcloud]# cd /home/oracle/opt/oracle/admin/srmcloud/[root@srmclouddg srmcloud]# pwd/home/oracle/opt/oracle/admin/srmcloud[root@srmclouddg srmcloud]# mkdir adump dpdump pfile[root@srmclouddg srmcloud]# lltotal 12drwxr-xr-x 2 root root 4096 Dec 28 21:38 adumpdrwxr-xr-x 2 root root 4096 Dec 28 21:38 dpdumpdrwxr-xr-x 2 root root 4096 Dec 28 21:38 pfile[root@srmclouddg srmcloud]# chown -R oracle:oinstall /home/oracle/opt/oracle/admin 1[oracle@srmclouddg dbs]$ mkdir -p /home/oracle/opt/oracle/fast_recovery_area 使用pfile启动备库到nomount12345678SQL> startup nomount pfile='/home/oracle/opt/oracle/product/11.2.0/dbhome_1/dbs/initsrmcloud.ora';ORACLE instance started.Total System Global Area 1553305600 bytesFixed Size 2253544 bytesVariable Size 1426066712 bytesDatabase Buffers 117440512 bytesRedo Buffers 7544832 bytesSQL> 注意:要启动备库监听 
Duplicate复制主库到备库123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115116117118119120121122123124125126127128129130131132133134[oracle@srmcloud ~]$ rman target sys/handhand@SRMCLOUD auxiliary sys/handhand@SRMCLOUDDGRecovery Manager: Release 11.2.0.4.0 - Production on Thu Dec 28 23:11:34 2017Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.connected to target database: SRMCLOUD (DBID=3572656870)connected to auxiliary database: SRMCLOUD (not mounted)RMAN> duplicate target database for standby from active database spfile set db_unique_name 'SRMCLOUD' nofilenamecheck;Starting Duplicate Db at 28-DEC-17using target database control file instead of recovery catalogallocated channel: ORA_AUX_DISK_1channel ORA_AUX_DISK_1: SID=10 device type=DISKcontents of Memory Script:{ backup as copy reuse targetfile '/home/oracle/opt/oracle/product/11.2.0/dbhome_1/dbs/orapwsrmcloud' auxiliary format '/home/oracle/opt/oracle/product/11.2.0/dbhome_1/dbs/orapwsrmcloud' targetfile '/home/oracle/opt/oracle/product/11.2.0/dbhome_1/dbs/spfilesrmcloud.ora' auxiliary format '/home/oracle/opt/oracle/product/11.2.0/dbhome_1/dbs/spfilesrmcloud.ora' ; sql clone "alter system set spfile= ''/home/oracle/opt/oracle/product/11.2.0/dbhome_1/dbs/spfilesrmcloud.ora''";}executing Memory ScriptStarting backup at 28-DEC-17allocated channel: ORA_DISK_1channel ORA_DISK_1: SID=1157 device type=DISKFinished backup at 28-DEC-17sql statement: alter system set spfile= ''/home/oracle/opt/oracle/product/11.2.0/dbhome_1/dbs/spfilesrmcloud.ora''contents of Memory Script:{ sql clone "alter system set db_unique_name = ''SRMCLOUD'' comment= '''' scope=spfile"; shutdown clone immediate; startup clone nomount;}executing Memory Scriptsql statement: alter system set db_unique_name = ''SRMCLOUD'' comment= '''' scope=spfileOracle instance shut downconnected to auxiliary database (not started)Oracle instance startedTotal System Global Area 1553305600 bytesFixed Size 2253544 bytesVariable Size 1426066712 bytesDatabase Buffers 117440512 bytesRedo Buffers 7544832 bytescontents of Memory Script:{ backup as copy current controlfile for standby auxiliary format '/home/oracle/opt/oracle/oradata/srmcloud/control01.ctl'; restore clone controlfile to '/home/oracle/opt/oracle/oradata/srmcloud/control02.ctl' from '/home/oracle/opt/oracle/oradata/srmcloud/control01.ctl';}executing Memory ScriptStarting backup at 28-DEC-17using channel ORA_DISK_1channel ORA_DISK_1: starting datafile copycopying standby control fileoutput file name=/home/oracle/opt/oracle/product/11.2.0/dbhome_1/dbs/snapcf_srmcloud.f tag=TAG20171228T231151 RECID=3 STAMP=963961911channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01Finished backup at 28-DEC-17Starting restore at 28-DEC-17allocated channel: ORA_AUX_DISK_1channel ORA_AUX_DISK_1: SID=9 device type=DISKchannel ORA_AUX_DISK_1: copied control file copyFinished restore at 28-DEC-17contents of Memory Script:{ sql clone 'alter database mount standby database';}executing Memory Scriptsql statement: alter database mount standby databaseRMAN-05538: WARNING: implicitly using DB_FILE_NAME_CONVERTcontents of Memory Script:{ set newname for tempfile 1 to "/home/oracle/opt/oracle/oradata/srmcloud/temp01.dbf"; switch clone tempfile all; set newname for datafile 1 to "/home/oracle/opt/oracle/oradata/srmcloud/system01.dbf"; set newname 
for datafile 2 to "/home/oracle/opt/oracle/oradata/srmcloud/sysaux01.dbf"; set newname for datafile 3 to "/home/oracle/opt/oracle/oradata/srmcloud/undotbs01.dbf"; set newname for datafile 4 to "/home/oracle/opt/oracle/oradata/srmcloud/users01.dbf"; backup as copy reuse datafile 1 auxiliary format "/home/oracle/opt/oracle/oradata/srmcloud/system01.dbf" datafile 2 auxiliary format "/home/oracle/opt/oracle/oradata/srmcloud/sysaux01.dbf" datafile 3 auxiliary format "/home/oracle/opt/oracle/oradata/srmcloud/undotbs01.dbf" datafile 4 auxiliary format "/home/oracle/opt/oracle/oradata/srmcloud/users01.dbf" ; sql 'alter system archive log current';}executing Memory Scriptexecuting command: SET NEWNAMErenamed tempfile 1 to /home/oracle/opt/oracle/oradata/srmcloud/temp01.dbf in control fileexecuting command: SET NEWNAMEexecuting command: SET NEWNAMEexecuting command: SET NEWNAMEexecuting command: SET NEWNAMEStarting backup at 28-DEC-17using channel ORA_DISK_1channel ORA_DISK_1: starting datafile copyinput datafile file number=00002 name=/home/oracle/opt/oracle/oradata/srmcloud/sysaux01.dbfoutput file name=/home/oracle/opt/oracle/oradata/srmcloud/sysaux01.dbf tag=TAG20171228T231202channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15channel ORA_DISK_1: starting datafile copyinput datafile file number=00003 name=/home/oracle/opt/oracle/oradata/srmcloud/undotbs01.dbfoutput file name=/home/oracle/opt/oracle/oradata/srmcloud/undotbs01.dbf tag=TAG20171228T231202channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15channel ORA_DISK_1: starting datafile copyinput datafile file number=00001 name=/home/oracle/opt/oracle/oradata/srmcloud/system01.dbfoutput file name=/home/oracle/opt/oracle/oradata/srmcloud/system01.dbf tag=TAG20171228T231202channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07channel ORA_DISK_1: starting datafile copyinput datafile file number=00004 name=/home/oracle/opt/oracle/oradata/srmcloud/users01.dbfoutput file name=/home/oracle/opt/oracle/oradata/srmcloud/users01.dbf tag=TAG20171228T231202channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01Finished backup at 28-DEC-17sql statement: alter system archive log currentcontents of Memory Script:{ switch clone datafile all;}executing Memory Scriptdatafile 1 switched to datafile copyinput datafile copy RECID=3 STAMP=963961972 file name=/home/oracle/opt/oracle/oradata/srmcloud/system01.dbfdatafile 2 switched to datafile copyinput datafile copy RECID=4 STAMP=963961972 file name=/home/oracle/opt/oracle/oradata/srmcloud/sysaux01.dbfdatafile 3 switched to datafile copyinput datafile copy RECID=5 STAMP=963961972 file name=/home/oracle/opt/oracle/oradata/srmcloud/undotbs01.dbfdatafile 4 switched to datafile copyinput datafile copy RECID=6 STAMP=963961972 file name=/home/oracle/opt/oracle/oradata/srmcloud/users01.dbfFinished Duplicate Db at 28-DEC-17RMAN> quit 关闭,并使用pfile重启数据库,并创建spfile1234567891011121314151617181920212223SQL> shutdown immediate;ORA-01109: database not openDatabase dismounted.ORACLE instance shut down.SQL> startup nomount pfile='/home/oracle/opt/oracle/product/11.2.0/dbhome_1/dbs/initsrmcloud.ora';ORACLE instance started.Total System Global Area 1553305600 bytesFixed Size 2253544 bytesVariable Size 1426066712 bytesDatabase Buffers 117440512 bytesRedo Buffers 7544832 bytesSQL> alter database mount standby database;Database altered.SQL> show parameter db_unique_nameNAME TYPE------------------------------------ 
---------------------------------VALUE------------------------------db_unique_name stringSRMCLOUDDGSQL> create spfile from pfile;File created.SQL> 关闭数据库,使用spfile重启数据库12345678910111213141516SQL> shutdown immediate;ORA-01109: database not openDatabase dismounted.ORACLE instance shut down.SQL> startup nomount;ORACLE instance started.Total System Global Area 1553305600 bytesFixed Size 2253544 bytesVariable Size 1426066712 bytesDatabase Buffers 117440512 bytesRedo Buffers 7544832 bytesSQL> alter database mount standby database;Database altered.SQL> alter database open read only;Database altered.SQL> 查看日志123456789SQL> select SEQUENCE#,FIRST_TIME,NEXT_TIME ,APPLIED from v$archived_log order by 1; SEQUENCE# FIRST_TIME NEXT_TIME APPLIED---------- --------------- --------------- --------------------------- 40 28-DEC-17 28-DEC-17 NO 41 28-DEC-17 28-DEC-17 NO 42 28-DEC-17 28-DEC-17 NO 43 28-DEC-17 28-DEC-17 NO 44 28-DEC-17 28-DEC-17 NOSQL> 已接收,但未应用 开启备库日志应用1234567891011SQL> alter database recover managed standby database using current logfile disconnect from session;Database altered.SQL> select SEQUENCE#,FIRST_TIME,NEXT_TIME ,APPLIED from v$archived_log order by 1; SEQUENCE# FIRST_TIME NEXT_TIME APPLIED---------- --------------- --------------- --------------------------- 40 28-DEC-17 28-DEC-17 YES 41 28-DEC-17 28-DEC-17 YES 42 28-DEC-17 28-DEC-17 YES 43 28-DEC-17 28-DEC-17 YES 44 28-DEC-17 28-DEC-17 YESSQL> 验证数据传输验证在主库创建用户123456SQL> create user test identified by test;User created.SQL> select username from dba_users where username='TEST';USERNAME--------------------------------------------------------------------------------TEST 在备库查询TEST用户是否存在12345SQL> select username from dba_users where username='TEST';USERNAME--------------------------------------------------------------------------------TESTSQL> dg数据传输正常 主备切换验证#####主库切换为备库 123456789101112131415SQL> alter database commit to switchover to physical standby with session shutdown;Database altered.SQL> startup mount;ORACLE instance started.Total System Global Area 1553305600 bytesFixed Size 2253544 bytesVariable Size 1426066712 bytesDatabase Buffers 117440512 bytesRedo Buffers 7544832 bytesDatabase mounted.SQL> select database_role from v$database;DATABASE_ROLE------------------------------------------------PHYSICAL STANDBYSQL> 备库切换为主库12345678910111213141516171819202122232425SQL> select database_role from v$database;DATABASE_ROLE------------------------------------------------PHYSICAL STANDBYSQL> select open_mode from v$database;OPEN_MODE------------------------------------------------------------READ ONLY WITH APPLYSQL> alter database commit to switchover to primary with session shutdown;Database altered.SQL> select database_role from v$database;DATABASE_ROLE------------------------------------------------PRIMARYSQL> select open_mode from v$database;OPEN_MODE------------------------------------------------------------MOUNTEDSQL> alter database open;Database altered.SQL> select open_mode from v$database;OPEN_MODE------------------------------------------------------------READ WRITESQL> 启动原主库(现在为备库)为read only模式,并启用日志自动应用12345SQL> alter database open read only;Database altered.SQL> alter database recover managed standby database using current logfile disconnect from session;Database altered.SQL> 至此,DG配置完成 管理dataguard启动关闭顺序 监听先启从库再起主库 1lsnrctl start 启动先起从库1234sql>startup nomountsql>alter database mount standby database;sql>alter database open read only;sql>alter database recover managed standby database using current logfile disconnect from session; 
再启主库1sql>startup 关闭(和开启正好相反)先关主库:1sql>shutdown immediate 再关从库:12sql>alter database recover managed standby database cancel;sql>shutdown immediate; 后续介绍一篇dg文章,写的比我好 https://community.oracle.com/docs/DOC-1007058]]></content>
<categories>
<category>oracle</category>
<category>dataguard</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>dataguard</tag>
</tags>
</entry>
<entry>
<title><![CDATA[oracle 12C 单实例PDB插入到RAC环境下出现的问题]]></title>
<url>%2F2017%2F12%2F24%2Foracle-12C-%E5%8D%95%E5%AE%9E%E4%BE%8BPDB%E6%8F%92%E5%85%A5%E5%88%B0RAC%E7%8E%AF%E5%A2%83%E4%B8%8B%E5%87%BA%E7%8E%B0%E7%9A%84%E9%97%AE%E9%A2%98%2F</url>
<content type="text"><![CDATA[前言本文记录在oracle 12C,通过pdb插拔的方式将单实例转换为rac环境出现的问题。 undo表空间被占用问题描述在迁移过程中,由于修改过源pdb的默认undo表空间,导致插入到rac环境的时候,出现undo表空间被占用的情况。 注:这里,使用的是local undo的模式 问题重现切换至测试pdb12SQL> alter session set container = TEST;Session altered. 查询默认undo表空间,及数据文件位置123456789SQL> select tablespace_name, file_name, AUTOEXTENSIBLE, bytes / 1024 / 1024 size_mb from dba_data_files where tablespace_name like '%UNDO%'; TABLESPACE_NAME FILE_NAME AUTOEXTEN SIZE_MB--------------- --------------------------------------------------- --------- ----------UNDOTBS1 /u01/app/oracle/oradata/SRMCLOUD/test/undotbs01.dbf YES 1515 修改默认undo表空间12345678910111213SQL> create undo tablespace undotbs2 datafile '/u01/app/oracle/oradata/SRMCLOUD/test/undotbs02.dbf' size 100m autoextend on;Tablespace created.SQL> alter system set undo_tablespace='UNDOTBS2' scope=both;System altered.SQL> drop tablespace UNDOTBS1 including contents and datafiles;Tablespace dropped.SQL> show parameter undoNAME TYPE VALUE------------------------------------ --------------------------------- ------------------------------temp_undo_enabled boolean FALSEundo_management string AUTOundo_retention integer 900undo_tablespace string UNDOTBS2 切换回CDB1SQL> alter session set container = CDB$ROOT 拔出测试PDB123456SQL> alter pluggable database TEST close immediate;Pluggable database altered.SQL> alter pluggable database TEST unplug into '/u01/app/oracle/oradata/SRMCLOUD/test.xml';Pluggable database altered.SQL> drop pluggable database TEST KEEP DATAFILES;Pluggable database dropped. 在rac下插入pdb12SQL> CREATE PLUGGABLE DATABASE test USING '/u01/app/oracle/test.xml' SOURCE_FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/SRMCLOUD/test/', '/u01/app/oracle/test/') MOVE FILE_NAME_CONVERT = ('/u01/app/oracle/test/', '+DATA/srmcloud/test/');Pluggable database created. 启动新插入的pdb123456SQL> alter pluggable database TEST open instances=all;alter pluggable database TEST open instances=all*ERROR at line 1:ORA-65107: Error encountered when processing the current task on instance:1ORA-30013: undo tablespace 'UNDOTBS2' is currently in use 问题描述2以上就重现了错误过程。 刚开始只是以为更改了默认undo表空间的名称,导致在插入oracle rac环境下的时候,oracle自动添加新的undo失败。(oracle 12c rac下,每个实例上的pdb使用的是不同的undo表空间) 针对以上猜想,尝试更改undo表空间为默认的名称后,重新插入,仍然失败 123456SQL> alter pluggable database TEST open instances=all;alter pluggable database TEST open instances=all*ERROR at line 1:ORA-65107: Error encountered when processing the current task on instance:1ORA-30013: undo tablespace 'UNDOTBS1' is currently in use 发现原因通过对比修改前与修改后的拔出文件test.xml发现,修改后比修改前,在拔出文件中多了一行如下参数 1<spfile>*.undo_tablespace='UNDOTBS1'</spfile> 由于这个参数的原因,导致pdb在插入的时候,统一把undo表空间指向为’UNDOTBS1’导致的。 解决方法就是,删除这一行数据即可。 问题解决12SQL> alter pluggable database TEST open instances=all;Pluggable database altered.]]></content>
<categories>
<category>oracle 12C</category>
<category>RAC</category>
<category>PDB</category>
</categories>
<tags>
<tag>oracle 12C</tag>
<tag>RAC</tag>
<tag>PDB</tag>
</tags>
</entry>
<entry>
<title><![CDATA[RAC删除节点]]></title>
<url>%2F2017%2F12%2F05%2FRAC%E5%88%A0%E9%99%A4%E8%8A%82%E7%82%B9%2F</url>
<content type="text"><![CDATA[目的删除节点node03 其中: 数据库名为orcl node03的实例名为orcl3 停止node03的数据库实例在任意一个可用节点上,这里用node01,grid用户执行如下命令 1srvctl stop instance -d orcl -i orcl3 卸载node03上的数据库实例在任意一个可用节点上,这里用node01,oracle用户执行如下命令 1dbca -silent -deleteInstance -nodeList node03 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword Hand1234 结果如下: 1234567891011121314151617[oracle@node01 ~]$ dbca -silent -deleteInstance -nodeList node03 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword Hand1234Deleting instance1% complete2% complete6% complete13% complete20% complete26% complete33% complete40% complete46% complete53% complete60% complete66% completeCompleting instance management.100% completeLook at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl0.log" for further details. node01上,grid用户,执行如下命令验证是否删除 1srvctl config database -d orcl 结果如下: 1234567891011121314151617181920212223242526272829[grid@node01 ~]$ srvctl config database -d orclDatabase unique name: orclDatabase name: orclOracle home: /u01/app/oracle/product/12.2.0/dbhome_1Oracle user: oracleSpfile: +DATA/ORCL/PARAMETERFILE/spfile.277.961430953Password file: +DATA/ORCL/PASSWORD/pwdorcl.256.961427153Domain: myCluster.comStart options: openStop options: immediateDatabase role: PRIMARYManagement policy: AUTOMATICServer pools:Disk Groups: DATAMount point paths:Services:Type: RACStart concurrency:Stop concurrency:OSDBA group: dbaOSOPER group: operDatabase instances: orcl1,orcl2Configured nodes: node01,node02CSS critical: noCPU count: 0Memory target: 0Maximum memory: 0Default network number for database services:Database is administrator managed 重点关注 Database instance,发现没有orcl3 停止node03的lisenter在任意一个可用节点上,这里用node01,grid用户执行如下命令 12srvctl disable listener -l LISTENER -n node03srvctl stop listener -l LISTENER -n node03 在node1上更新inventorynode01,oracle用户,执行如下命令 123456789[oracle@node01 ~]$ echo $ORACLE_HOME/u01/app/oracle/product/12.2.0/dbhome_1[oracle@node01 bin]$ cd $ORACLE_HOME/oui/bin[oracle@node01 bin]$ ./runInstaller -updatenodelist ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 "CLUSTER_NODES={node01,node02}"Starting Oracle Universal Installer...Checking swap space: must be greater than 500 MB. Actual 8191 MB PassedThe inventory pointer is located at /etc/oraInst.loc'UpdateNodeList' was successful. 
在node03运行deinstallnode03,oracle用户,执行如下命令 12[oracle@node03 ~]$ cd $ORACLE_HOME/deinstall [oracle@node03 deinstall]$ ./deinstall -local GRID层面删除node03检查node01上,grid用户,执行如下面命令检查 1234[grid@node01 ~]$ olsnodes -s -tnode01 Active Unpinnednode02 Active Unpinnednode03 Active Unpinned 如果为node03pinned,使用如下命令设为Unpinned 1crsctl unpin css-n node03 在node03节点执行以root用户执行deconfig12[root@node03 ~]# cd /u01/app/12.2.0/grid/crs/install/[root@node03 install]# ./rootcrs.sh -deconfig -deinstall -force 结果如下: 123456789101112131415161718192021222324252627282930313233343536373839404142[root@node03 install]# ./rootcrs.sh -deconfig -deinstall -forceUsing configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_paramsThe log of current session can be found at: /u01/app/grid/crsdata/node03/crsconfig/crsdeconfig_node03_2017-12-05_05-23-51PM.logCRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node03'CRS-2673: Attempting to stop 'ora.crsd' on 'node03'CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'node03'CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node03'CRS-2673: Attempting to stop 'ora.OCR_VOT_GIMR.dg' on 'node03'CRS-2673: Attempting to stop 'ora.chad' on 'node03'CRS-2677: Stop of 'ora.OCR_VOT_GIMR.dg' on 'node03' succeededCRS-2677: Stop of 'ora.DATA.dg' on 'node03' succeededCRS-2673: Attempting to stop 'ora.asm' on 'node03'CRS-2677: Stop of 'ora.asm' on 'node03' succeededCRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node03'CRS-2677: Stop of 'ora.chad' on 'node03' succeededCRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node03' succeededCRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node03' has completedCRS-2677: Stop of 'ora.crsd' on 'node03' succeededCRS-2673: Attempting to stop 'ora.asm' on 'node03'CRS-2673: Attempting to stop 'ora.crf' on 'node03'CRS-2673: Attempting to stop 'ora.gpnpd' on 'node03'CRS-2673: Attempting to stop 'ora.mdnsd' on 'node03'CRS-2677: Stop of 'ora.crf' on 'node03' succeededCRS-2677: Stop of 'ora.gpnpd' on 'node03' succeededCRS-2677: Stop of 'ora.mdnsd' on 'node03' succeededCRS-2677: Stop of 'ora.asm' on 'node03' succeededCRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node03'CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node03' succeededCRS-2673: Attempting to stop 'ora.ctssd' on 'node03'CRS-2673: Attempting to stop 'ora.evmd' on 'node03'CRS-2677: Stop of 'ora.ctssd' on 'node03' succeededCRS-2677: Stop of 'ora.evmd' on 'node03' succeededCRS-2673: Attempting to stop 'ora.cssd' on 'node03'CRS-2677: Stop of 'ora.cssd' on 'node03' succeededCRS-2673: Attempting to stop 'ora.gipcd' on 'node03'CRS-2677: Stop of 'ora.gipcd' on 'node03' succeededCRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node03' has completedCRS-4133: Oracle High Availability Services has been stopped.2017/12/05 17:24:42 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.2017/12/05 17:24:55 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.2017/12/05 17:24:57 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node 验证结果node01上,grid用户,执行如下面命令检查 1234[grid@node01 ~]$ olsnodes -s -tnode01 Active Unpinnednode02 Active Unpinnednode03 Inactive Unpinned node03为Inactive 在node01上删除节点node03的节点信息node01上,root用户 123[root@node01 ~]# cd /u01/app/12.2.0/grid/bin/[root@node01 bin]# ./crsctl delete node -n node03CRS-4661: Node node03 successfully deleted. 
node01上,grid用户 123[grid@node01 ~]$ olsnodes -s -tnode01 Active Unpinnednode02 Active Unpinned 更新node03节点列表node03上,grid用户,执行如下命令 123456[grid@node03 ~]$ cd /u01/app/12.2.0/grid/oui/bin/[grid@node03 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0/grid/ "CLUSTER_NODES={node03}" CRS=TRUE -localStarting Oracle Universal Installer...Checking swap space: must be greater than 500 MB. Actual 8191 MB PassedThe inventory pointer is located at /etc/oraInst.loc'UpdateNodeList' was successful. 在node03上删除gridnode03上,grid用户,执行如下命令 12[grid@node03 bin]$ cd /u01/app/12.2.0/grid/deinstall/[grid@node03 deinstall]$ ./deinstall -local node01节点上更新节点列表node01上,grid用户 123456[grid@node01 ~]$ cd /u01/app/12.2.0/grid/oui/bin/[grid@node01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0/grid "CLUSTER_NODES={node01,node02}" CRS=TRUE -silentStarting Oracle Universal Installer...Checking swap space: must be greater than 500 MB. Actual 8191 MB PassedThe inventory pointer is located at /etc/oraInst.loc'UpdateNodeList' was successful. 验证是否删除成功12345678910[grid@node01 bin]$ cluvfy stage -post nodedel -n node03Verifying Node Removal ... Verifying CRS Integrity ...PASSED Verifying Clusterware Version Consistency ...PASSEDVerifying Node Removal ...PASSEDPost-check for node removal was successful.CVU operation performed: stage -post nodedelDate: Dec 5, 2017 6:23:06 PMCVU home: /u01/app/12.2.0/grid/User: grid 附录如果出现vip没有删除的情况,可使用如下命令删除vip 123cd /u01/app/12.2.0/grid/bin[root@bin]# srvctl stop vip ora.node03.vip[root@bin]# srvctl remove vip ora.node03.vip]]></content>
<categories>
<category>oracle</category>
<category>rac</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>rac</tag>
</tags>
</entry>
<entry>
<title><![CDATA[Oracle RAC 12CR2添加新节点]]></title>
<url>%2F2017%2F12%2F01%2FOracle-RAC-12CR2%E6%B7%BB%E5%8A%A0%E6%96%B0%E8%8A%82%E7%82%B9%2F</url>
<content type="text"><![CDATA[修改记录列出最近扩展节点遇到的一些问题。见文章最后。 1、扩展完节点之后,实例启动不成功,报ORA-01618: redo thread 3 is not enabled - cannot mount 2、扩展完节点之后,实例启动不成功,报ORA-30013: undo tablespace 'UNDOTBS1' is currently in use 网段声明在之前安装的rac基础上添加新节点,启动node01、node02为原rac,node03为新添加节点 Name node01 node02 node03 Public IP 192.168.1.14 192.168.1.15 192.168.1.16 Private IP 192.168.6.2 192.168.6.3 192.168.6.4 Virtual IP 192.168.1.17 192.168.1.18 192.168.1.22 SCAN IP 192.168.1.19 192.168.1.120 192.168.1.21 NET IP 192.168.2.95 192.168.2.96 192.168.2.55 准备参考hwcloud上的Oracle RAC 12cR2安装手册(1)—环境准备做如下准备: IP分配 安全组 修改主机名 配置host解析 关闭libvirt 禁用ZEROCONF 安装依赖 配置swap 配置内核 关闭NTP 配置PAM 修改limit文件 关闭SELinux 创建用户,用户组和文件夹 配置grid和oracle用户的环境变量 配置ssh互信 挂载共享存储到node03上,安装asmlib,使用oracleasm scandisks扫描磁盘组 安装Grid环境检查node01上,grid用户,执行如下命令进行node03的环境检查 1cluvfy stage -pre nodeadd -n node03 -fixup -verbose 执行结果如下: 12345678910111213CVU operation performed: stage -pre nodeaddDate: Dec 1, 2017 3:40:21 PMCVU home: /u01/app/12.2.0/grid/User: grid******************************************************************************************Following is the list of fixable prerequisites selected to fix in this session******************************************************************************************-------------- --------------- ----------------Check failed. Failed on nodes Reboot required?-------------- --------------- ----------------Package: cvuqdisk-1.0.10-1 node03 noExecute "/tmp/CVU_12.2.0.1.0_grid/runfixup.sh" as root user on nodes "node03" to perform the fix up operations manuallyPress ENTER key to continue after execution of "/tmp/CVU_12.2.0.1.0_grid/runfixup.sh" has completed on nodes "node03" 根据提示,在node03上用root用户执行修复脚本,结果如下: 12345678Press ENTER key to continue after execution of "/tmp/CVU_12.2.0.1.0_grid/runfixup.sh" has completed on nodes "node03"Fix: Package: cvuqdisk-1.0.10-1 Node Name Status ------------------------------------ ------------------------ node03 successfulResult:"Package: cvuqdisk-1.0.10-1" was successfully fixed on all the applicable nodes[grid@node01 ~]$ 临时禁用dns解析由于grid安装时会校验dns中是否能检查到节点信息,图形化安装时可以手动忽略,这里静默添加,不能忽略,会报错退出,如下: 1234567INFO: [Dec 1, 2017 4:46:51 PM] resolv.conf Integrity: resolv.conf IntegrityINFO: [Dec 1, 2017 4:46:51 PM] Severity:CRITICALINFO: [Dec 1, 2017 4:46:51 PM] OverallStatus:VERIFICATION_FAILEDINFO: [Dec 1, 2017 4:46:51 PM] *********************************************INFO: [Dec 1, 2017 4:46:51 PM] (Linux) resolv.conf Integrity: This task checks consistency of file /etc/resolv.conf file across nodesINFO: [Dec 1, 2017 4:46:51 PM] Severity:CRITICALINFO: [Dec 1, 2017 4:46:51 PM] OverallStatus:VERIFICATION_FAILED 所以临时禁用,如下: node01, node02, node03节点,root用户 1mv /etc/resolv.conf /etc/resolv.conf.bak 安装node01上,grid用户,执行如下命令: 12cd $ORACLE_HOME/addnode./addnode.sh -silent "CLUSTER_NEW_NODES={node03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node03-vip.myCluster.com}" "CLUSTER_NEW_PUBLIC_HOSTNAMES={node03.myCluster.com}" "CLUSTER_NEW_NODE_ROLES={HUB}" 注: 以上参数都参考图形化安装时的配置 结果如下: 12345678910111213As a root user, execute the following script(s): 1. /u01/app/oraInventory/orainstRoot.sh 2. /u01/app/12.2.0/grid/root.shExecute /u01/app/oraInventory/orainstRoot.sh on the following nodes: [node03]Execute /u01/app/12.2.0/grid/root.sh on the following nodes: [node03]The scripts can be executed in parallel on all the nodes................................................... 100% Done.Successfully Setup Software. 
根据提示,用root用户在node03执行/u01/app/oraInventory/orainstRoot.sh,/u01/app/12.2.0/grid/root.sh即可 安装oracle软件node01上,oracle用户,执行如下命令: 12cd $ORACLE_HOME/addnode./addnode.sh -silent "CLUSTER_NEW_NODES={node03}" 结果如下: 123456789As a root user, execute the following script(s): 1. /u01/app/oracle/product/12.2.0/dbhome_1/root.shExecute /u01/app/oracle/product/12.2.0/dbhome_1/root.sh on the following nodes: [node03].................................................. 100% Done.Successfully Setup Software. 根据提示,使用root用户,在node03上执行/u01/app/oracle/product/12.2.0/dbhome_1/root.sh即可 安装instancenode01上,oracle用户,执行如下命令: 1dbca -silent -addInstance -nodeName node03 -gdbName orcl.myCluster.com -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword Hand1234 结果如下: 1234567891011121314151617[oracle@node01 ~]$ dbca -silent -addInstance -nodeName node03 -gdbName orcl.myCluster.com -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword Hand1234Adding instance1% complete2% complete6% complete13% complete20% complete26% complete33% complete40% complete46% complete53% complete66% completeCompleting instance management.76% complete100% completeLook at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details. 注: 安装过程中,出现找不到磁盘组的情况 后把u01文件夹按之前的授权重新授权,重启服务器出现原来的RAC无法启动的问题,报错为CRS-0184 Cannot Communicate With The Crs Daemon in Oracle 通过在三个节点(其实只需在无法启动的节点)执行安装grid时和oracle生成的root.sh文件之后,能够正常启动数据库 安装第三个节点的实例能找到ASM磁盘 验证集群层 crsctl check crs crsctl check cluster -all crsctl status res -t 应用层 srvctl status nodeapps srvctl status asm srvctl status listener srvctl status instance -d orcl -i orcl1,orcl2,orcl3 最后记得恢复/etc/resolv.conf !!! 问题总结ORA-01618: redo thread 3 is not enabled - cannot mount在可以使用的节点上检查redo日志组12345678SQL> set lines 200SQL> select * from v$Log; GROUP# THREAD# SEQUENCE# BYTES BLOCKSIZE MEMBERS ARCHIVED STATUS FIRST_CHANGE# FIRST_TIME NEXT_CHANGE# NEXT_TIME CON_ID---------- ---------- ---------- ---------- ---------- ---------- --------- ------------------------------------------------ ------------- --------------- ------------ --------------- ---------- 1 1 4429 209715200 512 1 NO CURRENT 3711801002 21-MAR-18 1.8447E+19 0 2 1 4428 209715200 512 1 NO ACTIVE 3711526576 21-MAR-18 3711801002 21-MAR-18 0 3 2 4623 209715200 512 1 NO CURRENT 3711753735 21-MAR-18 1.8447E+19 0 4 2 4622 209715200 512 1 NO INACTIVE 3711406786 21-MAR-18 3711753735 21-MAR-18 0 在可以使用的节点上添加thread 3日志12345678910111213141516SQL> alter database add logfile thread 3('+DATA','+DATA') size 200m;Database altered.SQL> alter database add logfile thread 3('+DATA','+DATA') size 200m;Database altered.SQL> select * from v$Log; GROUP# THREAD# SEQUENCE# BYTES BLOCKSIZE MEMBERS ARCHIVED STATUS FIRST_CHANGE# FIRST_TIME NEXT_CHANGE# NEXT_TIME CON_ID---------- ---------- ---------- ---------- ---------- ---------- --------- ------------------------------------------------ ------------- --------------- ------------ --------------- ---------- 1 1 4429 209715200 512 1 NO CURRENT 3711801002 21-MAR-18 1.8447E+19 0 2 1 4428 209715200 512 1 NO INACTIVE 3711526576 21-MAR-18 3711801002 21-MAR-18 0 3 2 4623 209715200 512 1 NO CURRENT 3711753735 21-MAR-18 1.8447E+19 0 4 2 4622 209715200 512 1 NO INACTIVE 3711406786 21-MAR-18 3711753735 21-MAR-18 0 5 3 0 209715200 512 2 YES UNUSED 0 0 0 6 3 0 209715200 512 2 YES UNUSED 0 0 06 rows selected.SQL> alter database enable thread 3;Database altered. 
ORA-30013: undo tablespace ‘UNDOTBS1’ is currently in use在可用节点查看现有undo表空间123456select tablespace_name, file_name, AUTOEXTENSIBLE, bytes / 1024 / 1024 size_mb from dba_data_files where tablespace_name like '%UNDO%'; 123456789101112131415161718192021TABLESPACE_NAME------------------------------------------------------------------------------------------FILE_NAME--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------AUTOEXTEN SIZE_MB--------- ----------UNDOTBS1+DATA/SRMCLOUD/DATAFILE/undotbs1.278.963232783YES 3805UNDOTBS2+DATA/SRMCLOUD/DATAFILE/undotbs2.273.963232811YES 3075TABLESPACE_NAME------------------------------------------------------------------------------------------FILE_NAME--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------AUTOEXTEN SIZE_MB--------- ----------UNDOTBS3+DATA/SRMCLOUD/DATAFILE/undotbs3.579.971361293YES 3805 发现是有undotbs3的表空间,因此只需要修改第三节点的undo即可 在可用节点修改第三节点的undo表空间123SQL> alter system set undo_tablespace='UNDOTBS3' sid='srmcloud3';System altered.SQL>]]></content>
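补充:把上文提到的集群层和应用层验证命令串成一个小脚本,便于扩容后一次性核对(仅是示例,数据库名 orcl、实例 orcl1/orcl2/orcl3 均沿用正文的假设;集群层命令用 grid 用户执行,应用层命令用 oracle 用户执行):

```bash
#!/bin/bash
# 扩容新节点后的验证脚本(示例):汇总正文中的检查命令

# 集群层(grid 用户)
crsctl check cluster -all                              # 各节点 CRS/CSS/EVM 状态
crsctl status res -t                                   # 集群资源总览

# 应用层(oracle 用户)
srvctl status nodeapps                                 # VIP、network、ONS 等
srvctl status asm                                      # ASM 实例状态
srvctl status listener                                 # 监听状态
srvctl status instance -d orcl -i orcl1,orcl2,orcl3    # 三个实例状态

# 最后确认 /etc/resolv.conf 已经从 resolv.conf.bak 恢复
ls -l /etc/resolv.conf
```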
<categories>
<category>oracle</category>
<category>rac</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>rac</tag>
</tags>
</entry>
<entry>
<title><![CDATA[hwcloud上的Oracle-RAC-12cR2安装手册(4)--数据库实例的创建]]></title>
<url>%2F2017%2F11%2F30%2Fhwcloud%E4%B8%8A%E7%9A%84Oracle-RAC-12cR2%E5%AE%89%E8%A3%85%E6%89%8B%E5%86%8C-4-%E6%95%B0%E6%8D%AE%E5%BA%93%E5%AE%9E%E4%BE%8B%E7%9A%84%E5%88%9B%E5%BB%BA%2F</url>
<content type="text"><![CDATA[node01上,oracle用户,执行dbca,如下: 1[oracle@node01 ~]$ dbca 安装过程如下: 注:此处的SID前缀需要与环境变量里的ORACLE_SID对应:例如,SID前缀设置orcl,ORACLE_SID需要设置为orcl1和orcl2。 node01、node02上,oracle用户,执行lsnrctl status查看实例运行状态 node01: 12345678910111213141516171819202122232425262728293031[oracle@node01 ~]$ lsnrctl statusLSNRCTL for Linux: Version 12.2.0.1.0 - Production on 30-NOV-2017 17:27:08Copyright (c) 1991, 2016, Oracle. All rights reserved.Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))STATUS of the LISTENER------------------------Alias LISTENERVersion TNSLSNR for Linux: Version 12.2.0.1.0 - ProductionStart Date 29-NOV-2017 18:45:30Uptime 0 days 22 hr. 41 min. 37 secTrace Level offSecurity ON: Local OS AuthenticationSNMP OFFListener Parameter File /u01/app/12.2.0/grid/network/admin/listener.oraListener Log File /u01/app/grid/diag/tnslsnr/node01/listener/alert/log.xmlListening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.14)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.17)(PORT=1521)))Services Summary...Service "+ASM" has 1 instance(s). Instance "+ASM1", status READY, has 1 handler(s) for this service...Service "+ASM_DATA" has 1 instance(s). Instance "+ASM1", status READY, has 1 handler(s) for this service...Service "+ASM_OCR_VOT_GIMR" has 1 instance(s). Instance "+ASM1", status READY, has 1 handler(s) for this service...Service "orcl.myCluster.com" has 1 instance(s). Instance "orcl1", status READY, has 1 handler(s) for this service...Service "orclXDB.myCluster.com" has 1 instance(s). Instance "orcl1", status READY, has 1 handler(s) for this service...The command completed successfully node02: 12345678910111213141516171819202122232425262728293031[oracle@node02 ~]$ lsnrctl statusLSNRCTL for Linux: Version 12.2.0.1.0 - Production on 30-NOV-2017 17:29:46Copyright (c) 1991, 2016, Oracle. All rights reserved.Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))STATUS of the LISTENER------------------------Alias LISTENERVersion TNSLSNR for Linux: Version 12.2.0.1.0 - ProductionStart Date 29-NOV-2017 18:57:02Uptime 0 days 22 hr. 32 min. 43 secTrace Level offSecurity ON: Local OS AuthenticationSNMP OFFListener Parameter File /u01/app/12.2.0/grid/network/admin/listener.oraListener Log File /u01/app/grid/diag/tnslsnr/node02/listener/alert/log.xmlListening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.15)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.18)(PORT=1521)))Services Summary...Service "+ASM" has 1 instance(s). Instance "+ASM2", status READY, has 1 handler(s) for this service...Service "+ASM_DATA" has 1 instance(s). Instance "+ASM2", status READY, has 1 handler(s) for this service...Service "+ASM_OCR_VOT_GIMR" has 1 instance(s). Instance "+ASM2", status READY, has 1 handler(s) for this service...Service "orcl.myCluster.com" has 1 instance(s). Instance "orcl2", status READY, has 1 handler(s) for this service...Service "orclXDB.myCluster.com" has 1 instance(s). Instance "orcl2", status READY, has 1 handler(s) for this service...The command completed successfully node01、node02上,oracle用户,执行sqlplus / as sysdba尝试登录数据库,并执行show pdbs;查看pdb状态 node01: 12345678910[oracle@node01 ~]$ sqlplus / as sysdbaSQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 30 17:37:41 2017Copyright (c) 1982, 2016, Oracle. 
All rights reserved.Connected to:Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit ProductionSQL> show pdbs; CON_ID CON_NAME OPEN MODE RESTRICTED---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NOSQL> node02: 12345678910[oracle@node02 ~]$ sqlplus / as sysdbaSQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 30 17:39:15 2017Copyright (c) 1982, 2016, Oracle. All rights reserved.Connected to:Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit ProductionSQL> show pdbs; CON_ID CON_NAME OPEN MODE RESTRICTED---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NOSQL> 至此,完成了数据库实例的创建,ORACLE RAC安装完成,后面测试ORACLE RAC个节点分开调整服务器配置,敬请期待。]]></content>
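除了分别在两个节点上看 lsnrctl 和 show pdbs,也可以在任意一个节点上一次性确认两个实例的状态(示例,oracle 用户执行,数据库名 orcl 沿用正文 dbca 创建时的假设):

```bash
#!/bin/bash
# 验证两个实例是否都已注册并处于 OPEN 状态(示例)
srvctl status database -d orcl

sqlplus -S / as sysdba <<'EOF'
set lines 120
select inst_id, instance_name, host_name, status
  from gv$instance
 order by inst_id;
EOF
```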
<categories>
<category>oracle</category>
<category>rac</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>rac</tag>
</tags>
</entry>
<entry>
<title><![CDATA[hwcloud上的Oracle-RAC-12cR2安装手册(3)--oracle软件的安装]]></title>
<url>%2F2017%2F11%2F29%2Fhwcloud%E4%B8%8A%E7%9A%84Oracle-RAC-12cR2%E5%AE%89%E8%A3%85%E6%89%8B%E5%86%8C-3-oracle%E8%BD%AF%E4%BB%B6%E7%9A%84%E5%AE%89%E8%A3%85%2F</url>
<content type="text"><![CDATA[node01上,oracle用户 12cd /u01/software/database/./runInstaller 安装过程如下: node01、node02上,root用户,执行/u01/app/oracle/product/12.2.0/dbhome_1/root.sh如下: node01: 12345678910111213[root@node01 ~]# /u01/app/oracle/product/12.2.0/dbhome_1/root.shPerforming root user operation.The following environment variables are set as: ORACLE_OWNER= oracle ORACLE_HOME= /u01/app/oracle/product/12.2.0/dbhome_1Enter the full pathname of the local bin directory: [/usr/local/bin]:The contents of "dbhome" have not changed. No need to overwrite.The contents of "oraenv" have not changed. No need to overwrite.The contents of "coraenv" have not changed. No need to overwrite.Entries will be added to the /etc/oratab file as needed byDatabase Configuration Assistant when a database is createdFinished running generic part of root script.Now product-specific root actions will be performed. node02: 12345678910111213[root@node02 ~]# /u01/app/oracle/product/12.2.0/dbhome_1/root.shPerforming root user operation.The following environment variables are set as: ORACLE_OWNER= oracle ORACLE_HOME= /u01/app/oracle/product/12.2.0/dbhome_1Enter the full pathname of the local bin directory: [/usr/local/bin]:The contents of "dbhome" have not changed. No need to overwrite.The contents of "oraenv" have not changed. No need to overwrite.The contents of "coraenv" have not changed. No need to overwrite.Entries will be added to the /etc/oratab file as needed byDatabase Configuration Assistant when a database is createdFinished running generic part of root script.Now product-specific root actions will be performed. 至此,完成了数据库软件的安装 接着看hwcloud上的Oracle-RAC-12cR2安装手册(4)--数据库实例的创建]]></content>
<categories>
<category>oracle</category>
<category>rac</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>rac</tag>
</tags>
</entry>
<entry>
<title><![CDATA[Centos6安装ASMLIB]]></title>
<url>%2F2017%2F11%2F27%2Fabc%2F</url>
<content type="text"><![CDATA[Centos6安装ASMLIB我这里操作系统用的是Centos 6.8,Centos 6系列和Redhat 6系列都可以按下面的方式安装,至于其他版本的下载连接,参考地址:http://www.oracle.com/technetwork/server-storage/linux/asmlib/index-101839.html?ssSourceSiteId=ocomen 注:以下操作需要在node01和node02两个节点上操作 下载相关软件12[root@node01 ~]# wget http://oss.oracle.com/projects/oracleasm-support/dist/files/RPMS/rhel6/amd64/2.1.8/oracleasm-support-2.1.8-1.el6.x86_64.rpm[root@node01 ~]# wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.4-1.el6.x86_64.rpm 使用yum自动安装相关依赖1[root@node01 ~]# yum install oracleasm 使用yum本地安装下载好的两个安装包12[root@node01 ~]# yum localinstall oracleasmlib-2.0.4-1.el6.x86_64.rpm[root@node01 ~]# yum localinstall oracleasm-support-2.1.8-1.el6.x86_64.rpm 配置oracleasm服务123456789101112[root@node01 ~]# oracleasm configure -iConfiguring the Oracle ASM library driver.This will configure the on-boot properties of the Oracle ASM librarydriver. The following questions will determine whether the driver isloaded on boot and what permissions it will have. The current valueswill be shown in brackets ('[]'). Hitting <ENTER> without typing ananswer will keep that current value. Ctrl-C will abort.Default user to own the driver interface []: gridDefault group to own the driver interface []: asmadminStart Oracle ASM library driver on boot (y/n) [n]: yScan for Oracle ASM disks on boot (y/n) [y]: yWriting Oracle ASM library driver configuration: done 启动oracleasm1[root@node01 ~]# /etc/init.d/oracleasm enable]]></content>
<categories>
<category>oracle</category>
<category>ASM</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>ASM</tag>
</tags>
</entry>
<entry>
<title><![CDATA[centos6.8调整swap空间]]></title>
<url>%2F2017%2F11%2F27%2Fcentos6-8%E8%B0%83%E6%95%B4swap%E7%A9%BA%E9%97%B4%2F</url>
<content type="text"><![CDATA[起因安装数据库时,要求swap空间至少为8G大小,由于操作系统已经安装完成,因此需要用如下方式调整swap空间 创建的swap文件这里启用的swap会与默认的swap空间叠加使用,默认的是4G大小,因此这里只需要再创建4Gswap即可 123fallocate -l 4g /var/swapfilechmod 600 /var/swapfile mkswap /var/swapfile 启用创建的swap文件1swapon /var/swapfile 添加至fstab文件编辑/etc/fstab文件,添加如下内容 1/var/swapfile swap swap defaults 0 0]]></content>
<categories>
<category>LINUX</category>
<category>swap</category>
</categories>
<tags>
<tag>LINUX</tag>
<tag>swap</tag>
</tags>
</entry>
<entry>
<title><![CDATA[hwcloud上的Oracle RAC 12cR2安装手册(2)--grid的安装]]></title>
<url>%2F2017%2F11%2F27%2Fhwcloud%E4%B8%8A%E7%9A%84Oracle-RAC-12cR2%E5%AE%89%E8%A3%85%E6%89%8B%E5%86%8C-2-grid%E7%9A%84%E5%AE%89%E8%A3%85%2F</url>
<content type="text"><![CDATA[安装CVUnode01上,root用户 执行如下命令: 12cd /u01/app/12.2.0/grid/cv/rpm/rpm -ivh cvuqdisk-1.0.10-1.rpm node01上,grid用户 执行如下命令: 12cd /u01/app/12.2.0/grid/./runcluvfy.sh stage -pre crsinst -n node01,node02 -fixup -verbose 返回结果如下,则表示配置正确 12345Pre-check for cluster services setup was successful.CVU operation performed: stage -pre crsinstDate: Nov 27, 2017 6:26:58 PMCVU home: /u01/app/12.2.0/grid/User: grid 如果有报错,则按照提示修复错误即可 安装网格基础设施(grid)注: 安装前需保证node02上ORACLE_BASE和ORACLE_HOME目录为空,如果报错,可以手动清理掉 安装前保证node01上的ORACLE_BASE的目录为空,如果报错,可以手动清理掉 node01上,grid用户 12cd /u01/app/12.2.0/grid/./gridSetup.sh 安装完成后,点击close关闭即可。 验证使用grid用户分别在node01和node02上执行crs_stat -v -t查看服务运行状态,如下: 创建其他磁盘组node01上,grid用户 执行asmca,结果如下: 至此,完成Grid安装 接着看hwcloud上的Oracle RAC 12cR2安装手册(3)—oracle软件的安装]]></content>
<categories>
<category>oracle</category>
<category>rac</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>rac</tag>
</tags>
</entry>
<entry>
<title><![CDATA[Centos6修改Hostname]]></title>
<url>%2F2017%2F11%2F24%2FCentos6%E4%BF%AE%E6%94%B9Hostname%2F</url>
<content type="text"><![CDATA[查看当前hostname12[root@node01 ~]# hostnamenode01.myCluster.com 临时修改hostname命令格式为: hostname newhostname 如下: 123[root@node01 ~]# hostname node02.myCluster.com[root@node01 ~]# hostnamenode02.myCluster.com 通过以上修改,只能临时生效,重启服务器后失效 修改/etc/sysconfig/network通过修改/etc/sysconfig/network中的hostname,实现永久生效 12NETWORKING=yesHOSTNAME=node02.myCluster.com]]></content>
<categories>
<category>LINUX</category>
<category>hostname</category>
</categories>
<tags>
<tag>LINUX</tag>
<tag>hostname</tag>
</tags>
</entry>
<entry>
<title><![CDATA[hwcloud上的Oracle RAC 12cR2安装手册(1)--环境的准备]]></title>
<url>%2F2017%2F11%2F23%2Fhwcloud%E4%B8%8A%E7%9A%84Oracle-RAC-12cR2%E5%AE%89%E8%A3%85%E6%89%8B%E5%86%8C-1-%E7%8E%AF%E5%A2%83%E7%9A%84%E5%87%86%E5%A4%87%2F</url>
<content type="text"><![CDATA[网络网卡分配 网卡名 作用 网段 eth0 Public IP 192.168.1.0/24 eth1 NET IP 192.168.2.0/24 eth2 Private IP 192.168.6.0/24 这里我用了三块网卡: eth0做Public IP的网卡,同时会绑定Virtual IP 和 SCAN IP eth1做NET IP的网卡,用来提供访问互联网的服务 eth2做Private的网卡,用来做节点之间的数据交互 注:在华为云控制台关闭node01节点和node02节点的Public IP网卡的源/目的检查,否则虚拟ip是不通的 IP段分配 Name node01 node02 Public IP 192.168.1.14 192.168.1.15 Private IP 192.168.6.2 192.168.6.3 Virtual IP 192.168.1.16 192.168.1.17 SCAN IP 192.168.1.18 192.168.1.19 192.168.1.20 NET IP 192.168.2.95 192.168.2.96 安全组修改华为云安全组,放行169.254.0.0/16 网段、放行Public IP、放心Virtual IP、放行SCAN IP、Private IP 修改主机名参考Centos6修改Hostname 分别将两个节点的主机名改为node01.myCluster.com、node02.myCluster.com 配置host解析修改node1和node2的hosts文件,添加如下内容 12345678910111213# Public192.168.1.14 node01.myCluster.com node01192.168.1.14 node02.myCluster.com node02# Private192.168.6.2 node01-priv.myCluster.com node01-priv192.168.6.3 node02-priv.myCluster.com node02-priv# Virtual192.168.1.16 node01-vip.myCluster.com node01-vip192.168.1.17 node02-vip.myCluster.com node02-vip# SCAN192.168.1.18 nodes-scan.myCluster.com nodes-scan192.168.1.19 nodes-scan.myCluster.com nodes-scan192.168.1.20 nodes-scan.myCluster.com nodes-scan 关闭libvirt的虚拟网卡功能如果服务器安装了libvirt,则需要关闭其自带的虚拟网卡功能,方法如下: 1234virsh net-listvirsh net-destroy defaultvirsh net-undefine defaultservice libvirtd restart (注:华为云提供的Centos6.8镜像不需要做此操作) 禁用ZEROCONF分别编辑node01和node02上的/etc/sysconfig/network,添加如下内容 1NOZEROCONF=yes 操作系统配置安装依赖1yum -y install binutils compat-libcap1 compat-libstdc++ compat-libstdc++-33 e2fsprogs e2fsprogs-libs glibc glibc glibc-devel glibc-devel ksh libgcc libgcc libstdc++ libstdc++ libstdc++-devel libstdc++-devel libaio libaio libaio-devel libaio-devel libXtst libXtst libX11 libX11 libXau libXau libxcb libxcb libXi libXi make net-tools nfs-utils sysstat smartmontools gcc gcc-c++ gcc-devel 配置swaporacle要求swap空间至少为8G,我用的这个镜像默认swap为4G,因此需要调整,参考centos6.8调整swap空间 调整即可 配置内核参数我服务器配置为4C/8G 在node01和node02上编辑/etc/sysctl.conf,添加或修改如下内容 123456789101112kernel.shmall = 4294967296kernel.shmmax = 8589934591fs.aio-max-nr = 1048576fs.file-max = 6815744kernel.shmmni = 4096kernel.sem = 250 32000 100 128net.ipv4.ip_local_port_range = 9000 65500net.core.rmem_default = 262144net.core.rmem_max = 4194304net.core.wmem_default = 262144net.core.wmem_max = 1048576kernel.panic_on_oops=1 关于以上部分参数的介绍: kernel.shmmax: 用来定义共享内存单个共享内存段可使用的内存大小,最好设置改值得大小能容下整个SGA的大小,否则会导致性能下降,原因是因为在实例启动以及ServerProcess创建的时候,多个小的共享内存段可能会导致当时轻微的系统性能的降低(在启动的时候需要去创建多个虚拟地址段,在进程创建的时候要让进程对多个段进行“识别”,会有一些影响),但是其他时候都不会有影响。 以上未经过验证 基于以上的说法,在设置改参数的时候需要考虑SGA的大小。在我的数据库中,一般设置MEMORY_TARGET为物理内存80%,SGA又占MEMORY_TARGET的80%,因此可将kernel.shmmax设置为物理内存的64%,一般情况下大于SGA即可,这里为了方便计算,我设置为物理-1byte,既8589934591byte kernel.shmall: 该参数控制可以使用的共享内存的总页数。Linux共享内存页大小为4KB,共享内存段的大小都是共享内存页大小的整数倍。一个共享内存段的最大大小是16G,那么需要共享内存页数是16GB/4KB=16777216KB /4KB=4194304(页),也就是64Bit系统下16GB物理内存,设置kernel.shmall = 4194304才符合要求(几乎是原来设置2097152的两倍)。这时可以将shmmax参数调整到16G了,同时可以修改SGA_MAX_SIZE和SGA_TARGET为12G(您想设置的SGA最大大小,当然也可以是2G~14G等,还要协调PGA参数及OS等其他内存使用,不能设置太满,比如16G) 以上的说法来自于网络,这里贴出来仅供参考 其实,该参数是用来控制可用的共享内存数的,单位是page(通常在linux中1page=4KB),只要可用的共享内存书大于SGA的大小即可,一般默认的大小就可以满足大部分oracle的需求了,这里我没有修改默认值 kernel.shmmni: 该参数是共享内存段的最大数量。shmmni缺省值4096,一般肯定是够用了。 fs.file-max: 该参数决定了系统中所允许的文件句柄最大数目,文件句柄设置代表linux系统中可以打开的文件的数量。 fs.aio-max-nr: 此参数限制并发未完成的请求,应该设置避免I/O子系统故障。 kernel.sem: 以kernel.sem = 250 32000 100 128为例: 250是参数semmsl的值,表示一个信号量集合中能够包含的信号量最大数目。 32000是参数semmns的值,表示系统内可允许的信号量最大数目。 100是参数semopm的值,表示单个semopm()调用在一个信号量集合上可以执行的操作数量。 128是参数semmni的值,表示系统信号量集合总数。 net.ipv4.ip_local_port_range: 表示应用程序可使用的IPv4端口范围。 
net.core.rmem_default: 表示套接字接收缓冲区大小的缺省值。 net.core.rmem_max: 表示套接字接收缓冲区大小的最大值。 net.core.wmem_default: 表示套接字发送缓冲区大小的缺省值。 net.core.wmem_max: 表示套接字发送缓冲区大小的最大值。 执行sysctl -p使其生效 关闭NTPOracle Rac有自带的Oracle Cluster Time Synchronization Service来保证节点间的时间同步,因此,关闭自带的NTP服务 命令如下: 1234/sbin/service ntpd stopchkconfig ntpd offmv /etc/ntp.conf /etc/ntp.conf.orgrm /var/run/ntpd.pid 配置PAM在node01和node02节点上编辑/etc/pam.d/login,添加如下内容: 1session required pam_limits.so limit文件在node01和node02节点上编辑/etc/security/limits.conf,添加如下内容: 12345678@oinstall hard nofile 65536@oinstall soft nofile 10240@oinstall hard nproc 16384@oinstall soft nproc 16384@oinstall hard stack 32768@oinstall soft stack 10240@oinstall soft memlock 475188563@oinstall hard memlock 475188563 关闭SELinux在node01和node02上分别执行 1setenforce 0 在node01和node02节点上编辑文件/etc/sysconfig/selinux,修改如下内容: 1SELINUX=disabled 创建用户、组、文件夹在node01和node02上分别执行如下命令: 123456789101112131415groupadd -g 1000 oinstallgroupadd -g 1020 asmadmingroupadd -g 1021 asmdbagroupadd -g 1022 asmopergroupadd -g 1031 dbagroupadd -g 1032 operuseradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba griduseradd -u 1101 -g oinstall -G dba,asmdba,oper oraclemkdir -p /u01/softwaremkdir -p /u01/app/12.2.0/gridmkdir -p /u01/app/gridmkdir -p /u01/app/oraclechown -R grid:oinstall /u01chown oracle:oinstall /u01/app/oraclechmod -R 775 /u01/ 将下面的两个文件分别上传至node01的/u01/software文件夹下 linuxx64_12201_grid_home.zip linuxx64_12201_database.zip 解压并修改对应权限 12345cd /u01/softwareunzip linuxx64_12201_grid_home.zip -d /u01/app/12.2.0/grid/chown -R grid:oinstall /u01/app/12.2.0/grid/unzip linuxx64_12201_database.zipchown oracle:oinstall database 配置环境变量配置grid用户的环境变量在node01上,grid用户 编辑/home/grid/.bash_profile,添加如下内容: 12345678export ORACLE_SID=+ASM1export TMP=/tmpexport ORACLE_BASE=/u01/app/gridexport ORACLE_HOME=/u01/app/12.2.0/gridexport PATH=$ORACLE_HOME/bin:/usr/sbin:$PATHexport LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/libexport CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlibexport LC_ALL=en_US.UTF-8 在node02上,grid用户 编辑/home/grid/.bash_profile,添加如下内容: 12345678export ORACLE_SID=+ASM2export TMP=/tmpexport ORACLE_BASE=/u01/app/gridexport ORACLE_HOME=/u01/app/12.2.0/gridexport PATH=$ORACLE_HOME/bin:/usr/sbin:$PATHexport LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/libexport CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlibexport LC_ALL=en_US.UTF-8 配置oracle的环境变量在node01上,oracle用户 编辑/home/oracle/.bash_profile,添加如下内容: 123456789export TMP=/tmpexport ORACLE_BASE=/u01/app/oracleexport ORACLE_HOME=$ORACLE_BASE/product/12.2.0/dbhome_1export ORACLE_SID=orcl1export PATH=$ORACLE_HOME/bin:/usr/sbin:$ORACLE_HOME/OPatch:$PATHexport LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/libexport CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlibexport NLS_LANG=AMERICAN_AMERICA.UTF8export LC_ALL=en_US.UTF-8 在node02上,oracle用户 编辑/home/oracle/.bash_profile,添加如下内容: 123456789export TMP=/tmpexport ORACLE_BASE=/u01/app/oracleexport ORACLE_HOME=$ORACLE_BASE/product/12.2.0/dbhome_1export ORACLE_SID=orcl2export PATH=$ORACLE_HOME/bin:/usr/sbin:$ORACLE_HOME/OPatch:$PATHexport LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/libexport CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlibexport NLS_LANG=AMERICAN_AMERICA.UTF8export LC_ALL=en_US.UTF-8 配置SSH互信在node01上,grid用户,分别为grid用户和oracle用户配置SSH互信。 注:配置过程中需要输入grid和oracle用户的密码,请提前设置好 123cd /u01/app/12.2.0/grid/oui/prov/resources/scripts/./sshUserSetup.sh -hosts "node01 node02" -user grid -advanced -noPromptPassphrase./sshUserSetup.sh -hosts "node01 
node02" -user oracle -advanced -noPromptPassphrase 在node01上,grid用户,尝试登录node02验证 1234[grid@node01 ~]$ ssh node02 hostnamenode02.myCluster.com[grid@node01 ~]$ ssh node02-priv hostnamenode02.myCluster.com 在node02上,grid用户,尝试登录node01验证 1234[grid@node02 ~]$ ssh node01 hostnamenode01.myCluster.com[grid@node02 ~]$ ssh node01-priv hostnamenode01.myCluster.com 在node01上,oracle用户,尝试登录node02验证 1234[oracle@node01 ~]$ ssh node02 hostnamenode02.myCluster.com[oracle@node01 ~]$ ssh node02-priv hostnamenode02.myCluster.com 在node02上,oracle用户,尝试登录node01验证 1234[oracle@node02 ~]$ ssh node01 hostnamenode01.myCluster.com[oracle@node02 ~]$ ssh node01-priv hostnamenode01.myCluster.com 配置存储在华为云控制台选购磁盘,共享模式选择共享,磁盘模式选择VBD(这里使用的是VBD的磁盘,使用oracleasm来配置共享磁盘) 安装ASMLIB参考Centos6.8安装ASMLIB 创建分区注: 我的共享磁盘挂载点为/dev/xvdc OCR_VOT_GIMR磁盘组和DATA磁盘组的冗余类型都选择为External /dev/sdb共100G空间,创建1个40G分区分配给OCR_VOT_GIMR磁盘组使用,创建一个60G分区分配至DATA磁盘组 ASM磁盘要求参考官方文档 在node01上执行,仅分区即可 1234567891011121314151617181920212223242526272829303132[root@node01 ~]# fdisk /dev/xvdcDevice contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabelBuilding a new DOS disklabel with disk identifier 0x3c07d9cd.Changes will remain in memory only, until you decide to write them.After that, of course, the previous content won't be recoverable.Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').Command (m for help): nCommand action e extended p primary partition (1-4)pPartition number (1-4): 1First cylinder (1-13054, default 1):Using default value 1Last cylinder, +cylinders or +size{K,M,G} (1-13054, default 13054): +40GCommand (m for help): nCommand action e extended p primary partition (1-4)pPartition number (1-4): 2First cylinder (5224-13054, default 5224):Using default value 5224Last cylinder, +cylinders or +size{K,M,G} (5224-13054, default 13054):Using default value 13054Command (m for help): wThe partition table has been altered!Calling ioctl() to re-read partition table.Syncing disks. 分区结果如下: 12345678910[root@node01 ~]# fdisk -l /dev/xvdcDisk /dev/xvdc: 107.4 GB, 107374182400 bytes255 heads, 63 sectors/track, 13054 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x3c07d9cd Device Boot Start End Blocks Id System/dev/xvdc1 1 5223 41953716 83 Linux/dev/xvdc2 5224 13054 62902507+ 83 Linux 创建磁盘组node01执行如下命令创建磁盘组: 123456[root@node01 ~]# oracleasm createdisk OCR_VOT_GIMR /dev/xvdc1Writing disk header: doneInstantiating disk: done[root@node01 ~]# oracleasm createdisk DATA /dev/xvdc2Writing disk header: doneInstantiating disk: done 使用oracleasm listdisks查看已创建的磁盘组,如下: 123[root@node01 ~]# oracleasm listdisksDATAOCR_VOT_GIMR 在node02上扫描node01上创建的磁盘组,如下: 123456[root@node02 ~]# oracleasm scandisksReloading disk partitions: doneCleaning any stale ASM disks...Scanning system for ASM disks...Instantiating disk "OCR_VOT_GIMR"Instantiating disk "DATA" 使用oracleasm listdisks查看扫描到的磁盘组,如下: 123[root@node02 ~]# oracleasm listdisksDATAOCR_VOT_GIMR 至此,安装环境准备完成。 接着看hwcloud上的Oracle RAC 12cR2安装手册(2)--grid的安装]]></content>
<categories>
<category>oracle</category>
<category>rac</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>rac</tag>
</tags>
</entry>
<entry>
<title><![CDATA[Performing RMAN Recovery]]></title>
<url>%2F2017%2F11%2F17%2FPerforming-RMAN-Recovery%2F</url>
<content type="text"><![CDATA[安装数据库软件(不需要安装实例)略 创建对应路径(与原库路径一致)123456789101112131415161718mkdir -p /u01/app/oracle/oradata/TESTmkdir -p /u01/app/oracle/fast_recovery_areamkdir -p /u01/app/oracle/admin/TESTmkdir /u01/app/oracle/admin/TEST/adumpmkdir /u01/app/oracle/admin/TEST/bdumpmkdir /u01/app/oracle/admin/TEST/cdumpmkdir /u01/app/oracle/admin/TEST/ddumpmkdir -p /u01/app/oracle/oradata/TEST/19A9B2E4ED836CFCE0537680A8C03E8F/DATAFILEmkdir -p /u01/app/oracle/oradata/TEST/1A325228C22837A9E0537680A8C0AB5A/DATAFILEmkdir -p /u01/app/oracle/oradata/TEST/1B5D770684A12982E0537680A8C00768/DATAFILEmkdir -p /u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILEmkdir -p /u01/app/oracle/oradata/TEST/268AC0DAD4617497E0537680A8C0152A/DATAFILEmkdir -p /u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILEmkdir -p /u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILEmkdir -p /u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/DATAFILEmkdir -p /u01/app/oracle/oradata/TEST/BCG/DATAFILEmkdir /u01/app/oracle/oradata/TEST/CONTROLFILEmkdir /u01/app/oracle/oradata/TEST/DATAFILE 开始恢复 rman target / nocatalog 设置DBID可通过原库查询,也可通过控制文件的备份名称获得(备份名称c-DBID-date-num.bak) 1SET DBID 999753383; 恢复spfile 和 pfile123STARTUP NOMOUNT;RESTORE SPFILE FROM '/u01/rmanbak/c-999753383-20170925-00.bak';RESTORE SPFILE TO PFILE '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initTEST.ora' FROM '/u01/rmanbak/c-999753383-20170925-00.bak'; 结合当前服务器的配置信息修改pfile文件123456789101112131415161718192021222324252627TEST.__data_transfer_cache_size=0TEST.__db_cache_size=1010827260TEST.__java_pool_size=20971520TEST.__large_pool_size=8388608TEST.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environmentTEST.__pga_aggregate_target=289406976TEST.__sga_target=1333788670TEST.__shared_io_pool_size=46137344TEST.__shared_pool_size=234881024TEST.__streams_pool_size=8388608*.audit_file_dest='/u01/app/oracle/admin/TEST/adump'*.audit_trail='db'*.compatible='12.1.0.2.0'*.control_files='/u01/app/oracle/oradata/TEST/CONTROLFILE/current.257.883692833','/u01/app/oracle/oradata/TEST/CONTROLFILE/current.258.883692835'*.db_block_size=32768*.db_domain=''*.db_name='TEST'*.db_recovery_file_dest_size=30g*.diagnostic_dest='/u01/app/oracle'*.dispatchers='(PROTOCOL=TCP) (SERVICE=TESTXDB)'*.enable_pluggable_database=true*.open_cursors=300*.pga_aggregate_target=1310m*.processes=1500*.remote_login_passwordfile='exclusive'*.resource_manager_plan=''*.sga_target=5242m 使用pfile文件启动数据库到NOMOUNT状态1STARTUP FORCE NOMOUNT PFILE='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initTEST.ora'; 恢复控制文件1RESTORE CONTROLFILE FROM '/u01/rmanbak/c-999753383-20170925-00.bak'; 切换数据库为MOUNT状态1sql 'alter database mount'; 执行恢复以下之所以rename,因为新库的数据文件路径发生改变,所以需要rename,可通过再rman中执行report schema查看所有的数据文件名和文件编号 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115116117118119120121122123124125126127128129130131132CATALOG START WITH '/u01/rmanbak';RUN{ ALLOCATE CHANNEL c1 TYPE DISK; SET NEWNAME FOR DATAFILE 1 TO '/u01/app/oracle/oradata/TEST/DATAFILE/system.263.883692841'; SET NEWNAME FOR DATAFILE 2 TO '/u01/app/oracle/oradata/TEST/19A9B2E4ED836CFCE0537680A8C03E8F/DATAFILE/system.264.883692851'; SET NEWNAME FOR DATAFILE 3 TO '/u01/app/oracle/oradata/TEST/DATAFILE/sysaux.265.883692857'; SET NEWNAME FOR DATAFILE 4 TO 
'/u01/app/oracle/oradata/TEST/19A9B2E4ED836CFCE0537680A8C03E8F/DATAFILE/sysaux.266.883692865'; SET NEWNAME FOR DATAFILE 5 TO '/u01/app/oracle/oradata/TEST/DATAFILE/undotbs1.267.883692869'; SET NEWNAME FOR DATAFILE 6 TO '/u01/app/oracle/oradata/TEST/DATAFILE/undotbs2.270.883692925'; SET NEWNAME FOR DATAFILE 7 TO '/u01/app/oracle/oradata/TEST/DATAFILE/users.271.883692927'; SET NEWNAME FOR DATAFILE 15 TO '/u01/app/oracle/oradata/TEST/1A325228C22837A9E0537680A8C0AB5A/DATAFILE/system.282.884366019'; SET NEWNAME FOR DATAFILE 16 TO '/u01/app/oracle/oradata/TEST/1A325228C22837A9E0537680A8C0AB5A/DATAFILE/sysaux.281.884366019'; SET NEWNAME FOR DATAFILE 20 TO '/u01/app/oracle/oradata/TEST/1A325228C22837A9E0537680A8C0AB5A/DATAFILE/fcprod_01.dbf'; SET NEWNAME FOR DATAFILE 21 TO '/u01/app/oracle/oradata/TEST/1A325228C22837A9E0537680A8C0AB5A/DATAFILE/fcprod_02.dbf'; SET NEWNAME FOR DATAFILE 22 TO '/u01/app/oracle/oradata/TEST/1B5D770684A12982E0537680A8C00768/DATAFILE/system.288.885650835'; SET NEWNAME FOR DATAFILE 23 TO '/u01/app/oracle/oradata/TEST/1B5D770684A12982E0537680A8C00768/DATAFILE/sysaux.289.885650835'; SET NEWNAME FOR DATAFILE 24 TO '/u01/app/oracle/oradata/TEST/1B5D770684A12982E0537680A8C00768/DATAFILE/sugontra_01.dbf'; SET NEWNAME FOR DATAFILE 25 TO '/u01/app/oracle/oradata/TEST/1B5D770684A12982E0537680A8C00768/DATAFILE/sugontra_02.dbf'; SET NEWNAME FOR DATAFILE 33 TO '/u01/app/oracle/oradata/TEST/1B5D770684A12982E0537680A8C00768/DATAFILE/sgtread.dbf'; SET NEWNAME FOR DATAFILE 38 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/system.365.897157457'; SET NEWNAME FOR DATAFILE 39 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/sysaux.376.897157457'; SET NEWNAME FOR DATAFILE 42 TO '/u01/app/oracle/oradata/TEST/268AC0DAD4617497E0537680A8C0152A/DATAFILE/system.886.898112773'; SET NEWNAME FOR DATAFILE 43 TO '/u01/app/oracle/oradata/TEST/268AC0DAD4617497E0537680A8C0152A/DATAFILE/sysaux.896.898112773'; SET NEWNAME FOR DATAFILE 44 TO '/u01/app/oracle/oradata/TEST/268AC0DAD4617497E0537680A8C0152A/DATAFILE/absenpro_01.dbf'; SET NEWNAME FOR DATAFILE 45 TO '/u01/app/oracle/oradata/TEST/268AC0DAD4617497E0537680A8C0152A/DATAFILE/absenpro_02.dbf'; SET NEWNAME FOR DATAFILE 50 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/sugonpro_01.dbf'; SET NEWNAME FOR DATAFILE 51 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/sugonpro_02.dbf'; SET NEWNAME FOR DATAFILE 63 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/sugonpro_data.2127.902684869'; SET NEWNAME FOR DATAFILE 64 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/system.4371.904906001'; SET NEWNAME FOR DATAFILE 65 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/sysaux.4372.904906011'; SET NEWNAME FOR DATAFILE 66 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data.4376.904906531'; SET NEWNAME FOR DATAFILE 67 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data.4377.904906561'; SET NEWNAME FOR DATAFILE 68 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data.4378.904906591'; SET NEWNAME FOR DATAFILE 69 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data.4379.904906621'; SET NEWNAME FOR DATAFILE 70 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data.4380.904906651'; SET NEWNAME FOR DATAFILE 71 TO 
'/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data.4381.904906683'; SET NEWNAME FOR DATAFILE 72 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data.4382.904906713'; SET NEWNAME FOR DATAFILE 73 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data.4383.904906741'; SET NEWNAME FOR DATAFILE 74 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data.4384.904906771'; SET NEWNAME FOR DATAFILE 75 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/system.9415.906672919'; SET NEWNAME FOR DATAFILE 76 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/sysaux.11560.906672919'; SET NEWNAME FOR DATAFILE 77 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_01.dbf'; SET NEWNAME FOR DATAFILE 78 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_02.dbf'; SET NEWNAME FOR DATAFILE 79 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/sugonpro_03.dbf'; SET NEWNAME FOR DATAFILE 80 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_03.dbf'; SET NEWNAME FOR DATAFILE 81 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_04.dbf'; SET NEWNAME FOR DATAFILE 82 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_05.dbf'; SET NEWNAME FOR DATAFILE 83 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_06.dbf'; SET NEWNAME FOR DATAFILE 84 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/sugonpro_04.dbf'; SET NEWNAME FOR DATAFILE 85 TO '/u01/app/oracle/oradata/TEST/268AC0DAD4617497E0537680A8C0152A/DATAFILE/absenpro_03.dbf'; SET NEWNAME FOR DATAFILE 86 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data_10.dbf'; SET NEWNAME FOR DATAFILE 87 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/sugonpro_data_6.dbf'; SET NEWNAME FOR DATAFILE 88 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_7.dbf'; SET NEWNAME FOR DATAFILE 89 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data_11.dbf'; SET NEWNAME FOR DATAFILE 90 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_8.dbf'; SET NEWNAME FOR DATAFILE 91 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_9.dbf'; SET NEWNAME FOR DATAFILE 92 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data_12.dbf'; SET NEWNAME FOR DATAFILE 93 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_10.dbf'; SET NEWNAME FOR DATAFILE 94 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_11.dbf'; SET NEWNAME FOR DATAFILE 95 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_12.dbf'; SET NEWNAME FOR DATAFILE 96 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_13.dbf'; SET NEWNAME FOR DATAFILE 116 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data_13.dbf'; SET NEWNAME FOR DATAFILE 117 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_14.dbf'; SET NEWNAME FOR DATAFILE 118 TO 
'/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_15.dbf'; SET NEWNAME FOR DATAFILE 119 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_16.dbf'; SET NEWNAME FOR DATAFILE 120 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data_14.dbf'; SET NEWNAME FOR DATAFILE 121 TO '/u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/DATAFILE/system.6863.913669707'; SET NEWNAME FOR DATAFILE 122 TO '/u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/DATAFILE/sysaux.6861.913669707'; SET NEWNAME FOR DATAFILE 123 TO '/u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/DATAFILE/shacpro_01.dbf'; SET NEWNAME FOR DATAFILE 124 TO '/u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/DATAFILE/shacpro_02.dbf'; SET NEWNAME FOR DATAFILE 125 TO '/u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/DATAFILE/shacpro_03.dbf'; SET NEWNAME FOR DATAFILE 126 TO '/u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/DATAFILE/shacpro_04.dbf'; SET NEWNAME FOR DATAFILE 127 TO '/u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/DATAFILE/shacpro_05.dbf'; SET NEWNAME FOR DATAFILE 128 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data_15.dbf'; SET NEWNAME FOR DATAFILE 129 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_17.dbf'; SET NEWNAME FOR DATAFILE 130 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/sugonpro_data_7.dbf'; SET NEWNAME FOR DATAFILE 131 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_18.dbf'; SET NEWNAME FOR DATAFILE 132 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_19.dbf'; SET NEWNAME FOR DATAFILE 133 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data_16.dbf'; SET NEWNAME FOR DATAFILE 134 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_20.dbf'; SET NEWNAME FOR DATAFILE 135 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_21.dbf'; SET NEWNAME FOR DATAFILE 136 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data_17.dbf'; SET NEWNAME FOR DATAFILE 137 TO '/u01/app/oracle/oradata/TEST/268AC0DAD4617497E0537680A8C0152A/DATAFILE/absenpro_data_4.dbf'; SET NEWNAME FOR DATAFILE 142 TO '/u01/app/oracle/oradata/TEST/268AC0DAD4617497E0537680A8C0152A/DATAFILE/absenpro_data_5.dbf'; SET NEWNAME FOR DATAFILE 147 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/sugonpro_data_8.dbf'; SET NEWNAME FOR DATAFILE 149 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data_18.dbf'; SET NEWNAME FOR DATAFILE 150 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_data_19.dbf'; SET NEWNAME FOR DATAFILE 151 TO'/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_22.dbf'; SET NEWNAME FOR DATAFILE 152 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/sugonpro_data_9.dbf'; SET NEWNAME FOR DATAFILE 153 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_23.dbf'; SET NEWNAME FOR DATAFILE 154 TO '/u01/app/oracle/oradata/TEST/268AC0DAD4617497E0537680A8C0152A/DATAFILE/absenpro_data_6.dbf'; SET NEWNAME FOR DATAFILE 155 TO 
'/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_24.dbf'; SET NEWNAME FOR DATAFILE 156 TO '/u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/DATAFILE/shacpro_06.dbf'; SET NEWNAME FOR DATAFILE 157 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_25.dbf'; SET NEWNAME FOR DATAFILE 169 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/system01.dbf'; SET NEWNAME FOR DATAFILE 170 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/sysaux01.dbf'; SET NEWNAME FOR DATAFILE 171 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/clouddemo_01.dbf'; SET NEWNAME FOR DATAFILE 172 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/clouddemo_02.dbf'; SET NEWNAME FOR DATAFILE 173 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/clouddemo_03.dbf'; SET NEWNAME FOR DATAFILE 174 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/clouddemo_04.dbf'; SET NEWNAME FOR DATAFILE 175 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/clouddemo_05.dbf'; SET NEWNAME FOR DATAFILE 176 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/clouddemo_06.dbf'; SET NEWNAME FOR DATAFILE 177 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/clouddemo_07.dbf'; SET NEWNAME FOR DATAFILE 178 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/clouddemo_08.dbf'; SET NEWNAME FOR DATAFILE 179 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/clouddemo_09.dbf'; SET NEWNAME FOR DATAFILE 180 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_data_26.dbf'; SET NEWNAME FOR DATAFILE 181 TO '/u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/DATAFILE/shacpro_07.dbf'; SET NEWNAME FOR TEMPFILE 1 TO '/u01/app/oracle/oradata/TEST/TEMPFILE/temp.268.883692873'; SET NEWNAME FOR TEMPFILE 2 TO '/u01/app/oracle/oradata/TEST/19A9B2E4ED836CFCE0537680A8C03E8F/TEMPFILE/temp.269.883692873'; SET NEWNAME FOR TEMPFILE 3 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_temp.dbf'; SET NEWNAME FOR TEMPFILE 4 TO '/u01/app/oracle/oradata/TEST/1A325228C22837A9E0537680A8C0AB5A/TEMPFILE/temp.283.884366053'; SET NEWNAME FOR TEMPFILE 5 TO '/u01/app/oracle/oradata/TEST/1A325228C22837A9E0537680A8C0AB5A/DATAFILE/fcprod_temp.dbf'; SET NEWNAME FOR TEMPFILE 6 TO '/u01/app/oracle/oradata/TEST/1B5D770684A12982E0537680A8C00768/TEMPFILE/temp.290.885650869'; SET NEWNAME FOR TEMPFILE 7 TO '/u01/app/oracle/oradata/TEST/1B5D770684A12982E0537680A8C00768/DATAFILE/sugontra_temp.dbf'; SET NEWNAME FOR TEMPFILE 8 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/TEMPFILE/temp.9418.906672955'; SET NEWNAME FOR TEMPFILE 9 TO '/u01/app/oracle/oradata/TEST/2E2B9591B4C13CC9E0537680A8C09CCF/DATAFILE/zzmetro_temp.dbf'; SET NEWNAME FOR TEMPFILE 10 TO '/u01/app/oracle/oradata/TEST/1B5D770684A12982E0537680A8C00768/DATAFILE/sgtread_temp.dbf'; SET NEWNAME FOR TEMPFILE 11 TO '/u01/app/oracle/oradata/TEST/BCG/DATAFILE/temp01.dbf'; SET NEWNAME FOR TEMPFILE 13 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/TEMPFILE/temp.462.897157491'; SET NEWNAME FOR TEMPFILE 14 TO '/u01/app/oracle/oradata/TEST/25C07163596B0AD0E0537680A8C0EAE1/DATAFILE/sugonpro_temp.dbf'; SET NEWNAME FOR TEMPFILE 15 TO '/u01/app/oracle/oradata/TEST/268AC0DAD4617497E0537680A8C0152A/TEMPFILE/temp.898.898112805'; SET NEWNAME FOR TEMPFILE 16 TO '/u01/app/oracle/oradata/TEST/268AC0DAD4617497E0537680A8C0152A/DATAFILE/absenpro_temp.dbf'; SET NEWNAME FOR TEMPFILE 19 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/ljbb_temp_01.dbf'; SET NEWNAME FOR TEMPFILE 20 TO 
'/u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/TEMPFILE/temp.6857.913669745'; SET NEWNAME FOR TEMPFILE 21 TO '/u01/app/oracle/oradata/TEST/347488B0A2DA5C21E0537680A8C0E0CA/DATAFILE/shacpro_temp.dbf'; SET NEWNAME FOR TEMPFILE 25 TO '/u01/app/oracle/oradata/TEST/2CB869B96579626EE0530C30A8C0E46C/DATAFILE/temp001.dbf'; RESTORE DATABASE; switch DATAFILE all; switch tempfile all; RECOVER DATABASE;} 由于归档日志缺失,只能部分恢复数据库,也就是只能恢复到rman备份的那一刻,根据报错显示的SCN号,恢复到指定SCN 1RECOVER DATABASE UNTIL SCN 10144773006017; 根据pfile创建spfile1Create spfile from pfile='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initTEST.ora'; 修改日志文件路径1234567891011121314151617select member from v$logfile;mkdir -p /u01/app/oracle/oradata/TEST/ONLINELOG;mkdir -p /u01/app/oracle/oradata/TEST/STANDBYLOG;alter database rename file '+DATA/TEST/ONLINELOG/group1.259.883692835' to '/u01/app/oracle/oradata/TEST/ONLINELOG/group1.259.883692835';alter database rename file '+DATA/TEST/ONLINELOG/group1.260.883692837' to '/u01/app/oracle/oradata/TEST/ONLINELOG/group1.260.883692837';alter database rename file '+DATA/TEST/ONLINELOG/group2.261.883692837' to '/u01/app/oracle/oradata/TEST/ONLINELOG/group2.261.883692837';alter database rename file '+DATA/TEST/ONLINELOG/group2.262.883692839' to '/u01/app/oracle/oradata/TEST/ONLINELOG/group2.262.883692839';alter database rename file '+DATA/TEST/ONLINELOG/group3.272.883704715' to '/u01/app/oracle/oradata/TEST/ONLINELOG/group3.272.883704715';alter database rename file '+DATA/TEST/ONLINELOG/group3.273.883704717' to '/u01/app/oracle/oradata/TEST/ONLINELOG/group3.273.883704717';alter database rename file '+DATA/TEST/ONLINELOG/group4.274.883704719' to '/u01/app/oracle/oradata/TEST/ONLINELOG/group4.274.883704719';alter database rename file '+DATA/TEST/ONLINELOG/group4.275.883704721' to '/u01/app/oracle/oradata/TEST/ONLINELOG/group4.275.883704721';alter database rename file '+DATA/TEST/STANDBYLOG/standby_group11.log' to '/u01/app/oracle/oradata/TEST/STANDBYLOG/standby_group11.log';alter database rename file '+DATA/TEST/STANDBYLOG/standby_group12.log' to '/u01/app/oracle/oradata/TEST/STANDBYLOG/standby_group12.log';alter database rename file '+DATA/TEST/STANDBYLOG/standby_group13.log' to '/u01/app/oracle/oradata/TEST/STANDBYLOG/standby_group13.log';alter database rename file '+DATA/TEST/STANDBYLOG/standby_group14.log' to '/u01/app/oracle/oradata/TEST/STANDBYLOG/standby_group14.log';alter database rename file '+DATA/TEST/STANDBYLOG/standby_group15.log' to '/u01/app/oracle/oradata/TEST/STANDBYLOG/standby_group15.log';alter database rename file '+DATA/TEST/STANDBYLOG/standby_group16.log' to '/u01/app/oracle/oradata/TEST/STANDBYLOG/standby_group16.log'; 启动数据库由于修改了数据库日志路径,第一次启动需要resetlogs 1alter database open resetlogs; 创建lisenter1netca /silent /responsefile /u01/software/database/response/netca.rsp]]></content>
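数据库 resetlogs 打开之后,可以用下面的查询确认数据文件、临时文件、日志文件都已经指向新路径,没有遗留在原来的 +DATA 磁盘组下(示例,oracle 用户执行):

```bash
#!/bin/bash
# 恢复完成后的路径核对(示例)
sqlplus -S / as sysdba <<'EOF'
set lines 200 pages 100
select open_mode, name from v$database;
-- 以下三个查询应不返回任何行:所有文件都应位于 /u01/app/oracle/oradata/TEST 下
select name   from v$datafile where name   like '+%';
select name   from v$tempfile where name   like '+%';
select member from v$logfile  where member like '+%';
EOF
```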
<categories>
<category>oracle</category>
<category>rman</category>
</categories>
<tags>
<tag>oracle</tag>
<tag>rman</tag>
</tags>
</entry>
<entry>
<title><![CDATA[rebuild oracle password file]]></title>
<url>%2F2017%2F10%2F19%2Frebuild-oracle-password-file%2F</url>
<content type="text"><![CDATA[起因之前通过rman将一台生产库恢复到了另外一台服务器上。今天使用sys用户连接数据库时,一直报用户名密码错误,后台通过sys / as sysdba登录服务器之后,使用alter user sys identified by newpassword修改密码之后依然不能使用密码登录数据库,因此才有如下解决方式。 原因是因为数据库回复后并没有生成对应的密码文件,因此sys用户无法作为sysdba登录,在开始之前需要先检查数据库的remote_login_passwordfile参数为exclusive,这也是oracle数据库的默认值。 检查方式如下: 1SQL> show parameter remote_login_passwordfile VALUE值为EXCLUSIVE即可 关于密码文件: oracle官网解释说在用DBCA创建数据库对的时候会自动创建密码文件,所以用DBCA创建的数据库一般是没问题的,但是我这次是用rman恢复的实例,不知道是否我原库是RAC的原因,恢复完之后并没有密码文件 具有管理员权限的角色(SYSDBA, SYSOPER, SYSBACKUP, SYSDG, or SYSKM)都会通过密码文件进行验证,而不是通过数据库来进行验证,但前提是该用户在数据库中存在。 密码文件中保存的密码是区分大小写的 RAC的密码文件是保存在ASM磁盘中的 解决方案重建密码文件即可,其中密码文件的路径为$ORACLE_HOME/dbs/orapw$instance_name 其中最后的文件名必须是orapw与instance_name拼接起来的名字,假设这里的instance_name为TEST 使用如下命令重建密码文件 1$ orapwd file="$ORACLE_HOME/dbs/orapwTEST" password=testtest 重建之后即可正常使用密码登录sys了,其中password指定的就是sys的密码 验证密码文件是否生效可登陆数据库后同若如下命令验证 1SQL> select * from v$pwfile_users; 如果密码文件没生效,则查出的结果为空,如果密码生效了,则查出的是记录在密码文件中的用户,默认仅有SYS用户。]]></content>
<categories>
<category>oracle</category>
</categories>
<tags>
<tag>oracle</tag>
</tags>
</entry>
<entry>
<title><![CDATA[create LVM disk]]></title>
<url>%2F2017%2F07%2F19%2Fcreate-LVM-disk%2F</url>
<content type="text"><![CDATA[创建物理分区查看是否有未分区的磁盘1234567891011121314151617181920212223[root@srm-fs ~]# fdisk -lDisk /dev/xvda: 42.9 GB, 42949672960 bytes255 heads, 63 sectors/track, 5221 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x000d6814 Device Boot Start End Blocks Id System/dev/xvda1 1 523 4194304 82 Linux swap / SolarisPartition 1 does not end on cylinder boundary./dev/xvda2 * 523 5222 37747712 83 LinuxDisk /dev/xvdb: 536.9 GB, 536870912000 bytes255 heads, 63 sectors/track, 65270 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x00000000Disk /dev/xvdc: 536.9 GB, 536870912000 bytes255 heads, 63 sectors/track, 65270 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x00000000 发现有两个未使用的磁盘,这里以xvdb为例 在磁盘xvdb上创建xvdb1分区12345678910111213141516171819202122232425262728[root@srm-fs ~]# fdisk /dev/xvdbDevice contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabelBuilding a new DOS disklabel with disk identifier 0xf72f347d.Changes will remain in memory only, until you decide to write them.After that, of course, the previous content won't be recoverable.Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').Command (m for help): nCommand action e extended p primary partition (1-4)pPartition number (1-4): 1First cylinder (1-65270, default 1):Using default value 1Last cylinder, +cylinders or +size{K,M,G} (1-65270, default 65270):Using default value 65270添加磁盘系统表id为8e(LVM的标识)Command (m for help): tSelected partition 1Hex code (type L to list codes): 8eChanged system type of partition 1 to 8e (Linux LVM)保存分区表Command (m for help): wThe partition table has been altered!Calling ioctl() to re-read partition table. 将分区划分为物理卷,主要是添加LVM属性信息并划分PE存储单元.1234567891011121314151617[root@srm-fs ~]# pvcreate /dev/xvdb1 Physical volume "/dev/xvdb1" successfully created[root@srm-fs ~]# pvs PV VG Fmt Attr PSize PFree /dev/xvdb1 lvm2 a-- 499.99g 499.99g[root@srm-fs ~]# pvdisplay "/dev/xvdb1" is a new physical volume of "499.99 GiB" --- NEW Physical volume --- PV Name /dev/xvdb1 VG Name PV Size 499.99 GiB Allocatable NO PE Size 0 Total PE 0 Free PE 0 Allocated PE 0 PV UUID 5KBToI-5jzg-P0ra-kdZO-3bt2-MRS2-Hbbjk3 创建卷组vg-atm, 并将分区xvdb1加入卷组其中PE为卷组的最小存储单元,可通过-s命令修改1234567891011121314151617181920212223242526[root@srm-fs ~]# vgcreate vg-atm /dev/xvdb1 Volume group "vg-atm" successfully created[root@srm-fs ~]# vgs VG #PV #LV #SN Attr VSize VFree vg-atm 1 0 0 wz--n- 499.99g 499.99g[root@srm-fs ~]# vgdisplay --- Volume group --- VG Name vg-atm System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 1 VG Access read/write VG Status resizable MAX LV 0 Cur LV 0 Open LV 0 Max PV 0 Cur PV 1 Act PV 1 VG Size 499.99 GiB PE Size 4.00 MiB Total PE 127998 Alloc PE / Size 0 / 0 Free PE / Size 127998 / 499.99 GiB VG UUID 8Enzk5-pm35-z6OQ-ee2v-RW4X-vp1n-00Z2Ua 从vg-atm卷组创建逻辑分区lv-atm11234567891011121314151617181920212223242526[root@srm-fs ~]# lvcreate -l 127998 -n lv-atm1 vg-atm Logical volume "lv-atm1" created其中,指定创建的逻辑分区大小有两种方式:-L 后面指定大小(e.g. 
499G)不能超过卷组的free size,可通过vgdisplay查看-l 后面指定PE大小(e.g. 127998)不能超过卷组的free PE大小,可通过vgdisplay查看[root@srm-fs ~]# lvs LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert lv-atm1 vg-atm -wi-a----- 499.99g[root@srm-fs ~]# lvdisplay --- Logical volume --- LV Path /dev/vg-atm/lv-atm1 LV Name lv-atm1 VG Name vg-atm LV UUID U8nc8o-aa2p-NeNw-JMie-mjoh-eRfo-wKdQuS LV Write Access read/write LV Creation host, time srm-fs, 2017-06-07 10:42:57 +0800 LV Status available # open 0 LV Size 499.99 GiB Current LE 127998 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0 在逻辑分区lv-atm上创建文件系统1234567891011121314151617181920212223[root@srm-fs ~]# mkfs.ext4 /dev/vg-atm/lv-atm1mke2fs 1.41.12 (17-May-2010)Filesystem label=OS type: LinuxBlock size=4096 (log=2)Fragment size=4096 (log=2)Stride=0 blocks, Stripe width=0 blocks32768000 inodes, 131069952 blocks6553497 blocks (5.00%) reserved for the super userFirst data block=0Maximum filesystem blocks=42949672964000 block groups32768 blocks per group, 32768 fragments per group8192 inodes per groupSuperblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000Writing inode tables: done Creating journal (32768 blocks): doneWriting superblocks and filesystem accounting information: doneThis filesystem will be automatically checked every 25 mounts or180 days, whichever comes first. Use tune2fs -c or -i to override. 挂载文件系统1234567891011[root@srm-fs ~]# mount /dev/vg-atm/lv-atm1 /u01[root@srm-fs ~]# df -hFilesystem Size Used Avail Use% Mounted on/dev/xvda2 36G 2.4G 32G 7% /tmpfs 487M 0 487M 0% /dev/shm/dev/mapper/vg--atm-lv--atm1 493G 198M 467G 1% /u01[root@srm-fs ~]# mount | grep u01/dev/mapper/vg--atm-lv--atm1 on /u01 type ext4创建好之后,会在/dev/mapper/生成一个软连接名字为”卷组-逻辑卷”从上面的命令也可以看出来。[root@srm-fs ~]# ll /dev/vg-atm/lv-atm1lrwxrwxrwx 1 root root 7 Jun 7 13:32 /dev/vg-atm/lv-atm1 -> ../dm-0 配置开机自动挂载12345678查看新增分区的UUID[root@srm-fs ~]# blkid/dev/xvda1: UUID="25ec3bdb-ba24-4561-bcdc-802edf42b85f" TYPE="swap"/dev/xvda2: UUID="1a1ce4de-e56a-4e1f-864d-31b7d9dfb547" TYPE="ext4"/dev/xvdb1: UUID="5KBToI-5jzg-P0ra-kdZO-3bt2-MRS2-Hbbjk3" TYPE="LVM2_member"/dev/mapper/vg--atm-lv--atm1: UUID="eb80318b-480f-486c-8b7c-6093974139f8" TYPE="ext4"编辑/etc/fstab,添加如下内容UUID=eb80318b-480f-486c-8b7c-6093974139f8" TYPE="ext4 /u01 ext4 defaults 1 1 添加新的物理分区至卷组vg-atm同样需要先创建pv,具体步骤省略1vgextend vg-atm /dev/xvdc1 扩展逻辑分区lv-atm12lvextend -L +500M /dev/vg-atm/lv-atm1路径为逻辑分区的路径,可通过lvdisplay查看]]></content>
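两点补充(示例):一是 lvextend 只扩大逻辑卷本身,ext4 文件系统还需要 resize2fs 才能用到新增空间;二是写入 /etc/fstab 时,UUID 后面直接跟挂载点即可,不要把 blkid 输出里的引号和 TYPE= 字段一起抄进去。

```bash
#!/bin/bash
# 扩容逻辑卷后让 ext4 文件系统跟着变大(示例)
lvextend -L +500M /dev/vg-atm/lv-atm1
resize2fs /dev/vg-atm/lv-atm1        # ext4 支持在线扩大,无需先卸载 /u01

# /etc/fstab 开机自动挂载条目的格式示例(UUID 以 blkid 实际输出为准):
# UUID=eb80318b-480f-486c-8b7c-6093974139f8  /u01  ext4  defaults  1 1
```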
<categories>
<category>LINUX</category>
<category>LVM</category>
</categories>
<tags>
<tag>LINUX</tag>
<tag>LVM</tag>
<tag>DISK</tag>
</tags>
</entry>
<entry>
<title><![CDATA[OpenVpn4Centos]]></title>
<url>%2F2017%2F07%2F11%2FOpenVpn4Centos%2F</url>
<content type="text"><![CDATA[声明 OpenVpn版本: 2.4.3 Centos版本: 6.8 本机IP: 192.168.5.3 实现功能: 客户端需通过用户名密码登录 用户名密码为系统用户 不同用户分为不同网段 通过VPN服务器的iptables来限制不同网段的访问,间接限制用户先实现证书登录,再实现用户名密码登录,也可以跳过证书登录的配置,直接实现用户名密码登录 用户 获得网段 可访问网段 普通用户 10.0.2.0/24 192.168.3.0/24 192.168.4.0/24 192.168.5.0/24 特殊用户 10.0.1.0/24 192.168.0.0/24 192.168.1.0/24 192.168.5.0/24 管理员 10.0.0.0/24 192.168.0.0/16 安装安装epel库12wget http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpmrpm -Uvh epel-release-6-8.noarch.rpm 安装openvpn1yum install openvpn 安装easy-rs该包用来制作ca证书,服务端证书,客户端证书。最新的为easy-rsa31234wget https://github.com/OpenVPN/easy-rsa/archive/master.zipunzip master.zipmv easy-rsa-mater/ easy-rsa/cp -R easy-rsa/ /etc/openvpn/ 编辑vars文件 先进入/etc/openvpn/easy-rsa/easyrsa3目录 1cd /etc/openvpn/easy-rsa/easyrsa3/ 复制vars.example 为vars 1cp vars.example vars 修改vars文件的如下字段 123456set_var EASYRSA_REQ_COUNTRY “CN” //根据自己情况更改set_var EASYRSA_REQ_PROVINCE “ShangHai”set_var EASYRSA_REQ_CITY “ShangHai”set_var EASYRSA_REQ_ORG “Test”set_var EASYRSA_REQ_EMAIL “test@test.com”set_var EASYRSA_REQ_OU “TestOpenVpn” 初始化easyrsa12cd /etc/openvpn/easy-rsa/easyrsa3/./easyrsa init-pki 创建CA(根)证书123456789101112131415161718192021./easyrsa build-caGenerating a 2048 bit RSA private key…………………………………….+++……+++writing new private key to ‘/root/easy-rsa/easyrsa3/pki/private/ca.key’Enter PEM pass phrase:Verifying – Enter PEM pass phrase:—–You are about to be asked to enter information that will be incorporatedinto your certificate request.What you are about to enter is what is called a Distinguished Name or a DN.There are quite a few fields but you can leave some blankFor some fields there will be a default value,If you enter ‘.’, the field will be left blank.—–Common Name (eg: your user, host, or server name) [Easy-RSA CA]:TestCA creation complete and you may now import and sign cert requests.Your new CA certificate file for publishing is at:/root/easy-rsa/easyrsa3/pki/ca.crt 注意:在上述部分需要输入PEM密码 PEM pass phrase,输入两次,此密码必须记住,不然以后不能为证书签名。还需要输入common name 通用名,这个你自己随便设置个独一无二的。 创建服务器端证书12345678910111213141516171819./easyrsa gen-req server nopassGenerating a 2048 bit RSA private key……………………………………………………………………..+++……………………+++writing new private key to ‘/root/easy-rsa/easyrsa3/pki/private/server.key’—–You are about to be asked to enter information that will be incorporatedinto your certificate request.What you are about to enter is what is called a Distinguished Name or a DN.There are quite a few fields but you can leave some blankFor some fields there will be a default value,If you enter ‘.’, the field will be left blank.—–Common Name (eg: your user, host, or server name) [server]:TestServerKeypair and certificate request completed. Your files are:req: /root/easy-rsa/easyrsa3/pki/reqs/server.reqkey: /root/easy-rsa/easyrsa3/pki/private/server.key 该过程中需要输入common name,随意但是不要跟之前的根证书的一样 签约服务端证书1234567891011121314151617181920212223./easyrsa sign server serverYou are about to sign the following certificate.Please check over the details shown below for accuracy. Note that this requesthas not been cryptographically verified. 
Please be sure it came from a trustedsource or that you have verified the request checksum with the sender.Request subject, to be signed as a server certificate for 3650 days:subject=commonName = TestServerType the word ‘yes’ to continue, or any other input to abort.Confirm request details: yesUsing configuration from /root/easy-rsa/easyrsa3/openssl-1.0.cnfEnter pass phrase for /root/easy-rsa/easyrsa3/pki/private/ca.key:Check that the request matches the signatureSignature okThe Subject’s Distinguished Name is as followscommonNameRINTABLE:’TestServer’Certificate is to be certified until Apr 20 06:02:10 2024 GMT (3650 days)Write out database with 1 new entriesData Base UpdatedCertificate created at: /root/easy-rsa/easyrsa3/pki/issued/server.crt 该命令中.需要你确认生成,要输入yes,还需要你提供我们当时创建CA时候的密码。如果你忘记了密码,那你就重头开始再来一次吧。 创建Diffie-Hellman12345678./easyrsa gen-dhNote: using Easy-RSA configuration from: ./varsGenerating DH parameters, 2048 bit long safe prime, generator 2This is going to take a long time……..+……………………………….+..+…………………………………………………………………………………………………………………………….DH parameters of size 2048 created at /etc/openvpn/easy-rsa/easyrsa3/pki/dh.pem 创建客户端证书以下是为了实现证书登录而做的操作。为了避免与服务器端证书冲突,因此另外复制一份easyrsa到另一个文件夹下操作。进入root目录新建client文件夹(也可以是任意目录,只要有权限访问),文件夹可随意命名,然后拷贝前面解压得到的easy-ras文件夹到client文件夹,进入下列目录,并初始化12345cd /root/mkdir clientcp -R easy-rsa/ client/cd client/easy-rsa/easyrsa3/./easyrsa init-pki 创建客户端key及生成证书(记住生成是自己输入的密码) 1./easyrsa gen-req client (名字自定义) 将的到的client.req导入然后签约证书 123cd /etc/openvpn/easy-rsa/easyrsa3/./easyrsa import-req /root/client/easy-rsa/easyrsa3/pki/reqs/client.req client./easyrsa sign client client(后面这个为客户端证书的名字,前面的为固定格式) 过程跟签约server类似,需要输入CA证书的密码 现在说一下我们上面都生成了什么东西服务端:(/cetc/openvpn/easy-rsa/文件夹) /etc/openvpn/easy-rsa/easyrsa3/pki/ca.crt /etc/openvpn/easy-rsa/easyrsa3/pki/reqs/server.req /etc/openvpn/easy-rsa/easyrsa3/pki/reqs/client.req /etc/openvpn/easy-rsa/easyrsa3/pki/private/ca.key /etc/openvpn/easy-rsa/easyrsa3/pki/private/server.key /etc/openvpn/easy-rsa/easyrsa3/pki/issued/server.crt /etc/openvpn/easy-rsa/easyrsa3/pki/issued/client.crt /etc/openvpn/easy-rsa/easyrsa3/pki/dh.pem 客户端:(root/client/easy-rsa文件夹) /root/client/easy-rsa/easyrsa3/pki/private/client.key /root/client/easy-rsa/easyrsa3/pki/reqs/client.req //这个文件被我们导入到了服务端,所以那里也有 将生成的证书拷贝到指定文件夹(/etc/openvpn/keys)1234cp /etc/openvpn/easy-rsa/easyrsa3/pki/ca.crt /etc/openvpn/keyscp /etc/openvpn/easy-rsa/easyrsa3/pki/private/server.key /etc/openvpn/keyscp /etc/openvpn/easy-rsa/easyrsa3/pki/issued/server.crt /etc/openvpn/keyscp /etc/openvpn/easy-rsa/easyrsa3/pki/dh.pem /etc/openvpn/keys 生成ta.key文件并移动到/etc/openvpn/keys12openvpn --genkey --secret ta.keymv ta.key /etc/openvpn/keys 编辑server的配置文件当你安装好了openvpn时候,他会提供一个server配置的文件例子,在/usr/share/doc/openvpn-2.4.3/sample/sample-config-files下会有一个server.conf文件,我们将这个文件复制到/etc/openvpn1cp /usr/share/doc/openvpn-2.4.3/sample/sample-config-files/server.conf /etc/openvpn 修改配置文件为如下内容 12345678910111213141516171819202122232425262728293031323334local 192.168.5.3 #本地IP,既服务器的IP地址port 1194 #vpn端口proto udp #使用UDP协议,也可以选择tcp协议dev tun #相对应的tab,tab是桥接模式,tun为虚拟网卡模式ca /etc/openvpn/keys/ca.crt #ca证书cert /etc/openvpn/keys/server.crt #服务器端证书key /etc/openvpn/keys/server.key #服务器端的key,需保密dh /etc/openvpn/keys/dh.pem #dh证书server 10.0.2.0 255.255.255.0 #普通vpn用户虚拟IP的网段ifconfig-pool-persist /etc/openvpn/ipp.txt #虚拟IP记录文件,防止重复分发IPpush "route 192.168.0.0 255.255.255.0" #网客户端注入route规则,用来实现部分网段走vpn, 其他网段仍然走客户端电脑的默认链路push "route 192.168.1.0 255.255.255.0"push "route 192.168.2.0 255.255.255.0"push "route 192.168.3.0 
255.255.255.0"push "route 192.168.4.0 255.255.255.0"push "route 192.168.5.0 255.255.255.0"client-config-dir /etc/openvpn/ccd #用户个性化配置目录,特殊用户与管理员用户 的网段就是通过这个文件夹下的配置实现的route 10.0.0.0 255.255.255.0 #添加本地route,将特殊用户与管理员用户 的网段加入vpn服务器的路由表中route 10.0.1.0 255.255.255.0push "dhcp-option DNS 114.114.114.114" #为客户端电脑得到的虚拟IP推送DNSduplicate-cn #允许同一个证书或同一个用户同一时间多次登录keepalive 10 120 #客户端与服务器的心跳包,互相知道对方是否断开tls-auth /etc/openvpn/keys/ta.key 0 #ta证书,需保密cipher AES-256-CBC #加密规则comp-lzo #兼容旧的客户端max-clients 100 #客户端数量persist-keypersist-tunstatus /etc/openvpn/logs/openvpn-status.log #日志文件log /etc/openvpn/logs/openvpn.log #日志文件verb 3 #日志等级 创建 /etc/openvpn/logs文件夹创建 /etc/openvpn/ccd文件夹 配置特殊用户的个性化配置文件名必须与客户端证书名或者与下面即将配置的用户名密码登录的用户名一致,一个用户一个对应的文件,如果没有,则按默认配置这里假设client为特殊用户,既获得的网段在192.168.1.0/24。需要创建client对应的客户端证书(如果使用用户名密码登录,需要创建对应用户)123456789101112131415161718192021222324252627touch clientvi clientifconfig-push 10.0.1.1 10.0.1.2# 这里是指定该用户的IP,既直接固定IP,而不是随机分配。暂时没找到指定IP段的方法。因此,通过固定IP的方式来达到特殊用户属于特殊网段# 这里指定的是一对IP,表示虚拟客户端和服务器的IP端点。它们必须从连续的/30子网网段中获取(这里是/30表示xxx.xxx.xxx.xxx/30,即子网掩码位数为30),以便于与Windows客户端和TAP-Windows驱动兼容。暂时还没理解。明确地说,每个端点的IP地址对的最后8位字节必须取自下面的集合:[ 1, 2] [ 5, 6] [ 9, 10] [ 13, 14][ 17, 18] [ 21, 22] [ 25, 26] [ 29, 30] [ 33, 34] [ 37, 38] [ 41, 42] [ 45, 46][ 49, 50] [ 53, 54] [ 57, 58] [ 61, 62] [ 65, 66] [ 69, 70] [ 73, 74] [ 77, 78][ 81, 82] [ 85, 86] [ 89, 90] [ 93, 94] [ 97, 98] [101,102] [105,106] [109,110] [113,114] [117,118] [121,122] [125,126] [129,130] [133,134] [137,138] [141,142] [145,146] [149,150] [153,154] [157,158][161,162] [165,166] [169,170] [173,174] [177,178] [181,182] [185,186] [189,190] [193,194] [197,198] [201,202] [205,206] [209,210] [213,214] [217,218] [221,222] [225,226] [229,230] [233,234] [237,238][241,242] [245,246] [249,250] [253,254] 下载openvpn客户端,并进行配置用sftp将我们在vpn服务器上生成的客户端证书和key下载到客户端电脑,包括如下四个文件 /etc/openvpn/easy-rsa/easyrsa3/pki/ca.crt /etc/openvpn/easy-rsa/easyrsa3/pki/issued/client.crt /root/client/easy-rsa/easyrsa3/pki/private/client.key /etc/openvpn/keys/ta.key去官网下载openvpn客户端进行安装,然后安装目录找到simple-config,默认为C:\Program Files\OpenVPN\sample-config\client.ovpn。将client.ovpn 复制到openvpn的config目录下,默认为C:\Program Files\OpenVPN\config将下载到的四个文件同样放入config目录下,默认为C:\Program Files\OpenVPN\config 修改配置文件为如下内容:12345678910111213141516client #标记为客户端dev tun #与服务器端配置一致proto udp #与服务器端配置一致remote 122.112.219.248 1194 #服务器端IP与端口resolv-retry infinitenobindpersist-keypersist-tunca ca.crt #ca证书cert client.crt #客户端证书key client.key #客户端证书的keyremote-cert-tls server #服务器证书的名字tls-auth ta.key 1 #ta证书,如果服务器端配置,则客户端必须配置cipher AES-256-CBC #与服务器端配置一致comp-lzo #与服务器端配置一致verb 3 #日志级别 经过以上配置,客户端就能连接上vpn服务器了(服务器端的1194端口要开放),但是并不能访问内网服务器,因为vpn服务器没有配置虚拟ip的转发。 配置服务器端iptables通过如下命令来达到将某个网段的IP转发到主网卡上,以达到与内网服务器通信的目的。1iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -d 192.168.0.0/24 -o eth0 -j MASQUERADE 为了实现不同用户不同权限,需要做如下操作来限制不同网段的用户的访问权限1234567iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADEiptables -t nat -A POSTROUTING -s 10.0.1.0/24 -d 192.168.0.0/24 -o eth0 -j MASQUERADEiptables -t nat -A POSTROUTING -s 10.0.1.0/24 -d 192.168.1.0/24 -o eth0 -j MASQUERADEiptables -t nat -A POSTROUTING -s 10.0.1.0/24 -d 192.168.5.0/24 -o eth0 -j MASQUERADEiptables -t nat -A POSTROUTING -s 10.0.2.0/24 -d 192.168.3.0/24 -o eth0 -j MASQUERADEiptables -t nat -A POSTROUTING -s 10.0.2.0/24 -d 192.168.4.0/24 -o eth0 -j MASQUERADEiptables -t nat -A POSTROUTING -s 10.0.2.0/24 -d 192.168.5.0/24 -o eth0 -j MASQUERADE 经过如此配置,即可访问内网 如果不能访问,使用如下命令检查服务器是否允许ip_forward 1/sbin/sysctl -a | grep net.ipv4.ip_forward 
确保net.ipv4.ip_forward的值为1,如果不为1,修改/etc/sysctl.conf文件,内容如下: 1net.ipv4.ip_forward = 1 使用sysctl -p命令使修改生效 配置完成后,使用如下命令保存iptables 规则到文件 1service iptables save 实现用户名密码登录在经过以上配置后,再来实现用户名密码登录就简单多了,只需要改一些配置即可。 修改服务器端配置编辑/etc/openvpn/server.conf,添加如下内容123plugin /usr/lib64/openvpn/plugin/lib/openvpn-auth-pam.so login #我是64位操作系统client-cert-not-required #只需验证用户名密码,不要求客户端证书username-as-common-name #用户名做common-name,既用户名相当于客户端名,个性化的时候使用用户名即可。 编辑客户端配置文件,既C:\Program Files\OpenVPN\sample-config\client.ovpn注释掉如下内容12;cert client.crt;key client.key 添加如下内容1auth-user-pass 创建服务器用户,由于仅作为vpn用户登录,因此建议创建不带home目录,没有登录权限的用户12useradd -M user1 -s /sbin/nologinecho "password" | passwd user1 --stdin]]></content>
<categories>
<category>OpenVpn</category>
</categories>
<tags>
<tag>OpenVpn</tag>
</tags>
</entry>
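<!-- A companion sketch for the OpenVPN entry above: the post only shows the ccd file for the special user on 10.0.1.0/24; an admin account (here the hypothetical username "admin") can be pinned to 10.0.0.0/24 the same way, using one of the /30 endpoint pairs listed in the post. Note also that OpenVPN 2.4 reports the client-cert-not-required directive used later in the post as deprecated in favour of verify-client-cert none:
# /etc/openvpn/ccd/admin  (the file name must match the client certificate CN or the PAM username)
ifconfig-push 10.0.0.1 10.0.0.2
-->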
<entry>
<title><![CDATA[s3fs on obs]]></title>
<url>%2F2017%2F06%2F21%2Fs3fs-on-obs%2F</url>
<content type="text"><![CDATA[s3fs简介s3fs 是一款用于 Amazon S3 在 Linux 环境下的命令行工具,并为商业和私人免费使用,只需支付对应的存储服务的费用。它的主要功能是能够把桶(Bucket)挂载到本地目录,进行文件、文件夹的拷入、拷出和删除等操作,即完成对桶内文件、文件夹的上传、下载和删除操作。 下载s3fs登录网址 http://code.google.com/p/s3fs/downloads/list 进入下载界面,选择相应版本,单击“Download”进行下载。本文档以s3fs-1.74.tar.gz为例 安装s3fs安装依赖12345678910111213141516yum install fuse.x86_64 fuse-devel.x86_64 fuse-libs.x86_64 libconfuse-devel.x86_64 libconfuse.x86_64 gcc-c++.noarch curl.x86_64 libcurl-devel.x86_64 libcurl.x86_64 libxml2.x86_64 libxml2-devel.x86_64 openssl-devel.x86_64# 检查fuse版本(必须要大于2.8.4)rpm -qa | grep fuse# 若小于2.8.4,则需要用源码安装新版的fuse,如下yum remove fuse fuse-develtar -xvf fuse-2.8.4.tarcd fuse-2.8.4./configure --prefix=/usr/local/fusemakemake installecho /usr/local/fuse/lib >> /etc/ld.so.confecho "export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64/pkgconfig/:/usr/local/fuse/lib/pkgconfig/" >> /etc/profilesource /etc/profileldconfigmodprobe fusepkg-config --modversion fuse 解压1tar -zxvf s3fs-1.74.tar.gz 安装123cd s3fs-1.74./configuremake && make install 配置秘钥12345vi /etc/passwd-s3fs# 添加秘钥,格式如下AK:SK# 更改权限chmod 640 /etc/passwd-s3fs 创建obs用户12groupadd obsuseradd -g obs obs 用户obs用户创建cache文件夹1mkdir .obs_cache 挂载1s3fs www.going-link.com /obs -o host=http://obs.myhwclouds.com -o umask=0022 -o uid=501 -o gid=501 -o use_cache=/home/obs/.obs_cache -o allow_other]]></content>
<categories>
<category>华为云</category>
<category>obs</category>
</categories>
<tags>
<tag>华为云</tag>
<tag>obs</tag>
</tags>
</entry>
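<!-- A hedged sketch for the s3fs entry above: to remount the bucket automatically at boot, s3fs releases of the 1.74 era document an fstab line of the form "s3fs#bucket mountpoint fuse options"; reusing the exact options from the post it would look roughly like this (verify the syntax against the s3fs version actually installed):
s3fs#www.going-link.com /obs fuse _netdev,allow_other,host=http://obs.myhwclouds.com,umask=0022,uid=501,gid=501,use_cache=/home/obs/.obs_cache 0 0
-->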
<entry>
<title><![CDATA[duplicity install]]></title>
<url>%2F2017%2F06%2F20%2Fduplicity-install%2F</url>
<content type="text"><![CDATA[安装环境依赖123yum install -y gcc-c++ librsync python-lockfile python-urllib3 python-setuptools python-devel librsync-devel python-pippip install oss2 安装123tar zxvf duplicity.tar.gzcd duplicitypython setup.py install]]></content>
<categories>
<category>duplicity</category>
<category>install</category>
</categories>
<tags>
<tag>duplicity</tag>
</tags>
</entry>
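<!-- A usage sketch for the duplicity entry above. The oss2 dependency suggests the real backup target is an object-storage backend; to avoid guessing backend URL syntax, the example sticks to a local file:// target and a hypothetical /data source directory ("changeme" is a placeholder passphrase):
export PASSPHRASE=changeme                                # passphrase for duplicity's GPG encryption
duplicity full /data file:///backup/data                  # initial full backup
duplicity /data file:///backup/data                       # later runs are incremental by default
duplicity restore file:///backup/data /tmp/data_restore   # restore into a scratch directory
unset PASSPHRASE
-->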
<entry>
<title><![CDATA[redis sentinel]]></title>
<url>%2F2017%2F06%2F15%2Fredis-sentinel%2F</url>
<content type="text"><![CDATA[更新日志 修改redis-sentinel的配置文件, 添加sentinel auth-pass mymaster 123456, 解决新版redis, master宕机后不能自动切换的问题前言该文章只是简单的记录redis sentinel的安装过程,并没有深度优化,只能保证安装完成后可以使用。后续再继续优化。master ip: 100.65.9.250slave1 ip: 100.65.16.158slave2 ip: 100.65.16.162 Redis安装redis的安装包去redis官网下载安装依赖1yum -y install gcc gcc-c++ tcl 编译123tar -zxvf redis-3.2.9.tar.gzcd redis-3.2.9make 这一步可能出现如下错误123456789cd src && make allmake[1]: Entering directory `/root/redis-3.2.9/src' CC adlist.oIn file included from adlist.c:34:zmalloc.h:50:31: error: jemalloc/jemalloc.h: No such file or directoryzmalloc.h:55:2: error: #error "Newer version of jemalloc required"make[1]: *** [adlist.o] Error 1make[1]: Leaving directory `/root/redis-3.2.9/src'make: *** [all] Error 2 执行如下命令解决1make distclean 再重新make 安装12345cd srcmake install PREFIX=/usr/local/rediscd ..cp redis.conf /usr/local/redis/cp sentinel.conf /usr/local/redis/ 配置master和slave的配置文件编辑/usr/local/redis/redis.conf修改如下内容123456789101112131415bind 0.0.0.0# 访问master的密码 由于master和slave会互换,所以两个密码都最好配上masterauth 123456# redis的访问密码requirepass 123456# 开启守护模式 daemonize yes # 指定数据存储目录 dir /data/redis # 打开aof持久化 appendonly yes # 每秒一次aof写 appendfsync everysec# 修改日志路径logfile "/var/log/redis.log" 配置slave的配置文件在以上配置文件的基础上添加如下内容1234# 指定所属的主机 slaveof 100.65.9.250 6379 # 指定从机"只读" slave-read-only yes 配置sentinel配置文件编辑所有服务器的/usr/local/redis/sentinel.conf文件添加或修改如下内容12345678910111213141516bind 0.0.0.0daemonize yeslogfile "/var/log/sentinel.log"# sentinel需要监控的master/slaver信息,格式为sentinel monitor <mastername> <masterIP> <masterPort> <quorum> # 其中<quorum>应该小于集群中slave的个数,当失效的节点数超过了<quorum>,则认为整个体系结构失效sentinel monitor mymaster 100.65.9.250 6379 2sentinel auth-pass mymaster 123456# master被当前sentinel实例认定为失效的间隔时间,格式为sentinel down-after-milliseconds <mastername> <milliseconds>sentinel down-after-milliseconds mymaster 10000# 当新master产生时,同时进行“slaveof”到新master并进行同步复制的slave个数 # 在salve执行salveof同步时,将会终止客户端请求。 # 此值较大,意味着“集群”终止客户端请求的时间总和和较大。 # 此值较小,意味着“集群”在故障转移期间,多个salve向客户端提供服务时仍然使用旧数据。sentinel parallel-syncs mymaster 1# failover过期时间。当failover开始后,在此时间内仍然没有触发任何failover操作,当前sentinel将会认为此次failoer失败。sentinel failover-timeout mymaster 60000 启动三台服务器的redis1/usr/local/redis/bin/redis-server /usr/local/redis/redis.conf 启动三台服务器的sentinel先确保redis已经启动1/usr/local/redis/bin/redis-sentinel /usr/local/redis/sentinel.conf 测试在任意一个节点上查看主从机的复制信息查看master节点信息1234567891011[root@redis ~]# /usr/local/redis/bin/redis-cli -h 100.65.9.250 -p 6379 -a 123456 info Replication# Replicationrole:masterconnected_slaves:2slave0:ip=100.65.16.158,port=6379,state=online,offset=2437,lag=0slave1:ip=100.65.16.162,port=6379,state=online,offset=2437,lag=0master_repl_offset:2437repl_backlog_active:1repl_backlog_size:1048576repl_backlog_first_byte_offset:2repl_backlog_histlen:2436 查看slave节点信息1234567891011121314151617[root@redis ~]# /usr/local/redis/bin/redis-cli -h 100.65.16.158 -p 6379 -a 123456 info Replication# Replicationrole:slavemaster_host:100.65.9.250master_port:6379master_link_status:upmaster_last_io_seconds_ago:9master_sync_in_progress:0slave_repl_offset:2605slave_priority:100slave_read_only:1connected_slaves:0master_repl_offset:0repl_backlog_active:0repl_backlog_size:1048576repl_backlog_first_byte_offset:0repl_backlog_histlen:0 测试数据客户端连接master set一条测试数据1234[root@redis ~]# /usr/local/redis/bin/redis-cli -h 100.65.9.250 -p 6379 -a 123456100.65.9.250:6379> set test thisistestOK100.65.9.250:6379> 客户端连接任意slave查看数据1234[root@redis ~]# /usr/local/redis/bin/redis-cli -h 100.65.16.158 -p 6379 -a 123456100.65.16.158:6379> get 
test"thisistest"100.65.16.158:6379> 测试主从检测关闭一台slave节点,然后检测节点信息12345678910[root@redis ~]# /usr/local/redis/bin/redis-cli -h 100.65.9.250 -p 6379 -a 123456 info Replication# Replicationrole:masterconnected_slaves:1slave0:ip=100.65.16.162,port=6379,state=online,offset=4082,lag=0master_repl_offset:4082repl_backlog_active:1repl_backlog_size:1048576repl_backlog_first_byte_offset:2repl_backlog_histlen:4081 发现少了一个slave节点再启动关闭的slave节点,并检测1234567891011[root@redis ~]# /usr/local/redis/bin/redis-cli -h 100.65.9.250 -p 6379 -a 123456 info Replication# Replicationrole:masterconnected_slaves:2slave0:ip=100.65.16.162,port=6379,state=online,offset=4362,lag=0slave1:ip=100.65.16.158,port=6379,state=online,offset=4362,lag=0master_repl_offset:4362repl_backlog_active:1repl_backlog_size:1048576repl_backlog_first_byte_offset:2repl_backlog_histlen:4361 发现检测到slave节点又变为了2关闭master,然后检测节点信息123456789101112131415161718[root@redis ~]# /usr/local/redis/bin/redis-cli -h 100.65.16.162 -p 6379 -a 123456 info Replication# Replicationrole:slavemaster_host:100.65.9.250master_port:6379master_link_status:downmaster_last_io_seconds_ago:-1master_sync_in_progress:0slave_repl_offset:4586master_link_down_since_seconds:27slave_priority:100slave_read_only:1connected_slaves:0master_repl_offset:0repl_backlog_active:0repl_backlog_size:1048576repl_backlog_first_byte_offset:0repl_backlog_histlen:0 发现并没有马上选举出新的master,还是就得master,这段时间是不可写的。过一段时间后,重新检测,发现选举出了新的master。]]></content>
<categories>
<category>redis</category>
</categories>
<tags>
<tag>redis</tag>
<tag>sentinel</tag>
<tag>HA</tag>
</tags>
</entry>
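<!-- A small verification sketch for the redis sentinel entry above: instead of polling "info Replication" on each node, the sentinels themselves can be asked who the current master is (assuming the default sentinel port 26379, since the post's sentinel.conf does not override it):
/usr/local/redis/bin/redis-cli -p 26379 sentinel get-master-addr-by-name mymaster   # prints the current master IP and port
/usr/local/redis/bin/redis-cli -p 26379 info sentinel                               # shows monitored masters and known sentinels
-->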
<entry>
<title><![CDATA[hwcloud user guide]]></title>
<url>%2F2017%2F06%2F13%2Fhwcloud-user-guid%2F</url>
<content type="text"><![CDATA[前言该文章旨在记录在使用华为云的过程中遇到的各种坑以及解决方案。 无公网 IP 的云服务器访问 Internet操作场景华为官方的说法如下:为保证安全和节省公网IP资源,通常只为特定的云服务器配置公网IP,可直接访问 Internet,其他云服务器只配置私网IP,无法直接访问Internet。因此,当只配置了私网 IP的云服务器需要访问Internet,执行软件升级、给系统打补丁或者其它需求时,可选 择一台绑定了公网IP的云服务器作为代理云服务器,为其他无公网IP的云服务器提供访 问通道,正常访问Internet。为了让无公网 IP 的云服务器访问 Internet 需要手动配置网关。 前提条件 已拥有一台绑定了公网IP的云服务器作为代理云服务器。 本节操作中,以代理云服务器的操作系统是CentOS 6.5为例。 代理云服务器和其他需要访问Internet的云服务器需要访问外网的网卡均处于同一网段,并且在同一安 全组内。 我代理服务器的网段为192.168.2.0/24 网关为192.168.2.1 代理服务器内网ip为192.168.2.254 我的无公网 IP 的服务器的主网卡网段为192.168.0.0/24 网关为192.168.0.1。 无公网 IP 服务器的主网卡如果与代理服务器不在一个网段,则可以再添加一个与代理服务器在一个网段的网卡。 配置代理服务器 登录管理控制台将代理服务器的网卡的“源/目的检查”选项 设置为“OFF”。 登录代理服务器,ping 下百度或谷歌测试网络连通性 查看代理云服务器的 IP 转发功能是否开启 1cat /proc/sys/net/ipv4/ip_forward 回显为“0”表示关闭,请执行步骤4。 回显为“1”表示开启,请执行步骤5。 配置 IP 转发编辑 /etc/sysctl.conf ,将 net.ipv4.ip_forward 的值改为 “1”,并执行如下命令使配置文件生效 1sysctl -p /etc/sysctl.conf 清除 IPTABLES 规则 1iptables -F 配置SNAT 1iptables -t nat -A POSTROUTING -o eth0 -s 192.168.2.0/24 -j SNAT --to 192.168.2.254 验证 1iptables -t nat --list 回显类似下图,表示配置成功 保存 IPTABLES 配置1service iptables save 配置无公网 IP 的云服务器这里直接设置静态路由表来实现,重启服务器仍然生效 配置默认网关 1[root@srm-app ~]# echo "GATEWAY=192.168.2.254" >> /etc/sysconfig/network 配置静态路由表 1vi /etc/sysconfig/static-routes 添加如下内容12345any net 169.254.169.254/32 gw 192.168.2.1 eth1any net 169.254.169.254/32 gw 192.168.0.1 eth0any net 192.168.2.0/24 gw 192.168.2.1 eth1any net 192.168.0.0/16 gw 192.168.0.1 eth0any net 100.125.0.0/16 gw 192.168.0.1 eth0 其中 169.254.169.254/32 为华为官方文档特别说明要添加的 192.168.2.0/24 这个网段必须要添加 192.168.0.0/16 添加这个网段是我所有内网服务器的网段,添加这个网段是为了让我内网之间互相访问的时候不走代理服务器 100.125.0.0/16 这个网段是华为云的负载均衡的内网网段,添加这个网段是为了让负载均衡在访问服务器的时候不走代理 重启网络服务1service network restart ping外网来验证]]></content>
<categories>
<category>华为云</category>
<category>ELB</category>
</categories>
<tags>
<tag>华为云</tag>
<tag>网络</tag>
<tag>ELB</tag>
</tags>
</entry>
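<!-- A quick verification sketch for the proxy/SNAT entry above, assuming the addressing from the post (proxy 192.168.2.254, private servers on 192.168.0.0/24 and 192.168.2.0/24; 192.168.0.10 below stands for any intranet host):
# on the private server: Internet-bound traffic should leave via the proxy, intranet traffic via the local gateway
ip route get 114.114.114.114
ip route get 192.168.0.10
# on the proxy: the packet counters on the SNAT rule should increase while the private server pings an external host
iptables -t nat -L POSTROUTING -n -v
-->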
<entry>
<title><![CDATA[first blog from atom]]></title>
<url>%2F2017%2F06%2F09%2Ffirst-blog-from-atom%2F</url>
<content type="text"><![CDATA[comming soon…]]></content>
<categories>
<category>test</category>
</categories>
<tags>
<tag>test</tag>
</tags>
</entry>
</search>