Verify that each hardware component on the Avamar nodes is healthy
(use the mapall command to check the hardware status on all nodes)
root@ava0-util:~/#: omreport chassis
Health

Main System Chassis

SEVERITY : COMPONENT
Ok       : Fans
Ok       : Intrusion
Ok       : Memory
Ok       : Power Supplies
Ok       : Processors
Ok       : Temperatures
Ok       : Voltages
Ok       : Hardware Log
Ok       : Batteries

For further help, type the command followed by -?
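When run across all nodes with mapall, the output above is long enough that a non-Ok line is easy to miss. A minimal filtering sketch (the sample output is inlined here for illustration; in practice you would pipe the mapall/omreport output into the same awk filter):

```shell
#!/bin/sh
# Sample omreport-style "SEVERITY : COMPONENT" lines; the second line is a
# deliberately injected failure to show what the filter catches.
sample='Ok : Fans
Critical : Power Supplies
Ok : Processors'

# Print the component name for every line whose severity is not "Ok".
bad=$(printf '%s\n' "$sample" | awk -F' : ' '$1 != "Ok" {print $2}')
echo "$bad"
```

An empty result means every component reported Ok.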
Check the disk status of each Avamar node
(use the mapall command to check the disk status on all nodes)
root@ava0-util:~/#: omreport storage pdisk controller=0
List of Physical Disks on Controller PERC 6/i Integrated (Embedded)

Controller PERC 6/i Integrated (Embedded)
ID                        : 0:0:0
Status                    : Ok
Name                      : Physical Disk 0:0:0
State                     : Online
Failure Predicted         : No
Progress                  : Not Applicable
Type                      : SAS
Capacity                  : 278.88 GB (299439751168 bytes)
Used RAID Disk Space      : 278.88 GB (299439751168 bytes)
Available RAID Disk Space : 0.00 GB (0 bytes)
Hot Spare                 : No
Vendor ID                 : DELL(tm)
Product ID                : ST3300656SS
Revision                  : HS0A
Serial No.                : 3QP132--
Negotiated Speed          : Not Available
Capable Speed             : Not Available
Manufacture Day           : 01
Manufacture Week          : 04
Manufacture Year          : 2009
SAS Address               : 5000C5000F97EED1

ID                        : 0:0:1
Status                    : Ok
Name                      : Physical Disk 0:0:1
State                     : Online
Failure Predicted         : No
Progress                  : Not Applicable
Type                      : SAS
Capacity                  : 278.88 GB (299439751168 bytes)
Used RAID Disk Space      : 278.88 GB (299439751168 bytes)
Available RAID Disk Space : 0.00 GB (0 bytes)
Hot Spare                 : No
Vendor ID                 : DELL(tm)
Product ID                : ST3300656SS
Revision                  : HS0A
Serial No.                : 3QP132--
Negotiated Speed          : Not Available
Capable Speed             : Not Available
Manufacture Day           : 01
Manufacture Week          : 04
Manufacture Year          : 2009
SAS Address               : 5000C5000F97EE65
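The two fields that matter for this check are State and Failure Predicted. A rough filter, assuming the key/value layout shown above, that prints the ID of any disk that is not Online or is predicted to fail (sample data is inlined; the second disk is an injected failure for illustration):

```shell
#!/bin/sh
# Abbreviated omreport pdisk output; disk 0:0:1 is a fabricated bad example.
sample='ID : 0:0:0
State : Online
Failure Predicted : No
ID : 0:0:1
State : Failed
Failure Predicted : Yes'

# Remember the most recent ID, then report it when a bad State or a
# positive Failure Predicted value follows.
bad=$(printf '%s\n' "$sample" | awk -F' : ' '
  $1 == "ID" {id=$2}
  $1 == "State" && $2 != "Online" {print id}
  $1 == "Failure Predicted" && $2 == "Yes" {print id}' | sort -u)
echo "$bad"
```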
Verify that the most recent checkpoints were created successfully and validated
root@ava0-util:~/#: cplist
cp.20130307030013 Thu Mar  7 12:00:13 2013 valid rol ---  nodes 13/13 stripes 15039
cp.20130307074458 Thu Mar  7 16:44:58 2013 valid --- ---  nodes 13/13 stripes 15063
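A quick sanity sketch for this step: count checkpoint lines that do not carry the "valid" flag (sample lines are inlined for illustration; a non-zero count would warrant a closer look at the checkpoint):

```shell
#!/bin/sh
# cplist-style sample lines mirroring the output above.
sample='cp.20130307030013 Thu Mar  7 12:00:13 2013 valid rol --- nodes 13/13 stripes 15039
cp.20130307074458 Thu Mar  7 16:44:58 2013 valid --- --- nodes 13/13 stripes 15063'

# Count lines that lack the " valid " flag (0 means all checkpoints are valid).
invalid=$(printf '%s\n' "$sample" | grep -cv ' valid ')
echo "$invalid"
```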
Verify that every node is ONLINE and check whether any stripes are offline
root@ava0-util:~/#: status.dpn
Thu Mar  7 18:09:02 KST 2013  [AVA0-UTIL] Thu Mar  7 09:09:02 2013 UTC (Initialized Wed Feb 27 09:50:52 2013 UTC)
Node  IP Address  Version   State   Runlevel    Srvr+Root+User Dis Suspend Load  UsedMB  Errlen %Full  Percent Full and Stripe Status by Disk
0.0   10.0.0.131  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.08  3940400 141086 21.4%  22%(onl:287) 20%(onl:289) 21%(onl:287) 21%(onl:288)
0.1   10.0.0.132  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.03  6004420 143390 20.4%  21%(onl:291) 20%(onl:295) 20%(onl:293) 20%(onl:292)
0.2   10.0.0.133  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.58  3939484 143466 21.8%  23%(onl:290) 21%(onl:290) 21%(onl:293) 21%(onl:293)
0.3   10.0.0.134  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.08  3935416 147986 22.0%  23%(onl:297) 21%(onl:288) 21%(onl:292) 21%(onl:293)
0.4   10.0.0.135  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.22  3936032 138754 20.8%  21%(onl:287) 20%(onl:278) 20%(onl:278) 20%(onl:283)
0.5   10.0.0.136  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.23  3945020 144696 20.4%  21%(onl:295) 20%(onl:292) 20%(onl:294) 20%(onl:291)
0.6   10.0.0.137  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  1  false   0.05  3940528 141408 20.0%  21%(onl:290) 19%(onl:288) 19%(onl:290) 19%(onl:287)
0.7   10.0.0.138  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.02  5999696 141607 20.1%  21%(onl:288) 19%(onl:290) 19%(onl:287) 19%(onl:290)
0.8   10.0.0.139  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.03  5998768 141663 20.0%  21%(onl:287) 19%(onl:287) 19%(onl:285) 19%(onl:289)
0.9   10.0.0.140  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.03  5997128 143490 20.4%  21%(onl:294) 20%(onl:287) 19%(onl:296) 20%(onl:295)
0.A   10.0.0.141  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.22  5997392 144134 20.8%  21%(onl:295) 20%(onl:297) 20%(onl:293) 20%(onl:297)
0.B   10.0.0.142  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.03  6000868 140330 19.8%  21%(onl:287) 19%(onl:285) 19%(onl:285) 19%(onl:284)
0.C   10.0.0.143  5.0.4-30  ONLINE  fullaccess  mhpu+0hpu+0hpu  0  false   0.21  5999676 144497 20.4%  21%(onl:293) 20%(onl:292) 20%(onl:288) 20%(onl:291)
Srvr+Root+User Modes = migrate + hfswriteable + persistwriteable + useraccntwriteable
All reported states=(ONLINE), runlevels=(fullaccess), modes=(mhpu+0hpu+0hpu)
System-Status: ok
Access-Status: full
Last checkpoint: cp.20130307074458 finished Thu Mar  7 16:45:17 2013 after 00m 18s (OK)
Last GC: finished Thu Mar  7 09:18:58 2013 after 03m 58s >> recovered 13.55 GB (OK)
Last hfscheck: finished Thu Mar  7 12:22:57 2013 after 22m 25s >> checked 6521 of 6521 stripes (OK)
Maintenance windows scheduler capacity profile is active.
The backup window is currently running.
Next backup window start time: Fri Mar  8 17:00:00 2013 KST
Next blackout window start time: Fri Mar  8 09:00:00 2013 KST
Next maintenance window start time: Fri Mar  8 12:00:00 2013 KST
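In healthy output, every per-disk stripe status reads "(onl:N)". A sketch of an automated check, assuming that format, that counts stripe-status entries whose state is anything other than onl (the sample includes one fabricated "off" entry to show a hit):

```shell
#!/bin/sh
# Abbreviated status.dpn node rows; the "(off:1)" entry is an injected
# example of an offline stripe group.
sample='0.0 10.0.0.131 ONLINE 22%(onl:287) 20%(onl:289)
0.1 10.0.0.132 ONLINE 21%(onl:291) 20%(off:1)'

# Extract every "(state:" token, then count those that are not "(onl:".
offline=$(printf '%s\n' "$sample" | grep -o '([a-z]*:' | grep -cv '(onl:')
echo "$offline"
```

A count of 0, together with every node row showing ONLINE, is the expected healthy result.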
Check the status of the Avamar services
root@ava0-util:~/#: dpnctl status
dpnctl: INFO: gsan status: ready
dpnctl: INFO: MCS status: up.
dpnctl: INFO: EMS status: up.
dpnctl: INFO: Backup scheduler status: up.
dpnctl: INFO: dtlt status: up.
dpnctl: INFO: Maintenance windows scheduler status: enabled.
dpnctl: INFO: Maintenance cron jobs status: enabled.
dpnctl: INFO: Unattended startup status: disabled.
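Note that healthy values differ per service (gsan reports "ready", others "up." or "enabled."). A simple sketch that flags obviously unhealthy lines, assuming "down" or "suspended" in a status line indicates a problem (the MCS line in the sample is a fabricated failure):

```shell
#!/bin/sh
# Sample dpnctl output with one injected bad status line.
sample='dpnctl: INFO: gsan status: ready
dpnctl: INFO: MCS status: down.
dpnctl: INFO: Backup scheduler status: up.'

# Count status lines reporting "down" or "suspended" (0 is the healthy result).
down=$(printf '%s\n' "$sample" | grep -Ec 'status: (down|suspended)')
echo "$down"
```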
Verify that filesystem usage on each Avamar node is below 90% (data01 through data04 only)
root@ava0-util:~/#: mapall --parallel df -h
Using /usr/local/avamar/var/probe.xml
(0.0) ssh -x root@10.0.0.131 'df -h'
(0.1) ssh -x root@10.0.0.132 'df -h'
(0.2) ssh -x root@10.0.0.133 'df -h'
(0.3) ssh -x root@10.0.0.134 'df -h'
(0.4) ssh -x root@10.0.0.135 'df -h'
(0.5) ssh -x root@10.0.0.136 'df -h'
(0.6) ssh -x root@10.0.0.137 'df -h'
(0.7) ssh -x root@10.0.0.138 'df -h'
(0.8) ssh -x root@10.0.0.139 'df -h'
(0.9) ssh -x root@10.0.0.140 'df -h'
(0.10) ssh -x root@10.0.0.141 'df -h'
(0.11) ssh -x root@10.0.0.142 'df -h'
(0.12) ssh -x root@10.0.0.143 'df -h'

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.8G  2.6G  4.9G  35% /
/dev/sda1   122M   13M  103M  11% /boot
/dev/sda9   316G   72G  228G  24% /data01
/dev/sdb1   338G   69G  252G  22% /data02
/dev/sdc1   338G   70G  251G  22% /data03
/dev/sdd1   338G   69G  252G  22% /data04
none        2.0G     0  2.0G   0% /dev/shm
/dev/sda8   1.5G   88M  1.3G   7% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.8G  2.3G  5.2G  31% /
/dev/sda1   122M   13M  103M  11% /boot
/dev/sda9   316G   73G  226G  25% /data01
/dev/sdb1   338G   70G  251G  22% /data02
/dev/sdc1   338G   71G  250G  22% /data03
/dev/sdd1   338G   71G  250G  23% /data04
none        2.0G     0  2.0G   0% /dev/shm
/dev/sda8   1.5G   88M  1.3G   7% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.9G  2.1G  5.5G  28% /
/dev/sda1   122M   13M  103M  12% /boot
/dev/sda9   321G   72G  249G  23% /data01
/dev/sdb1   344G   71G  273G  21% /data02
/dev/sdc1   344G   71G  273G  21% /data03
/dev/sdd1   344G   71G  273G  21% /data04
none        3.0G     0  3.0G   0% /dev/shm
/dev/sda8   1.5G   84M  1.4G   6% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.8G  2.3G  5.2G  31% /
/dev/sda1   122M   13M  103M  11% /boot
/dev/sda9   316G   74G  226G  25% /data01
/dev/sdb1   338G   71G  250G  23% /data02
/dev/sdc1   338G   71G  250G  23% /data03
/dev/sdd1   338G   71G  250G  22% /data04
none        2.0G     0  2.0G   0% /dev/shm
/dev/sda8   1.5G   94M  1.3G   7% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.8G  2.3G  5.2G  31% /
/dev/sda1   122M   13M  103M  11% /boot
/dev/sda9   316G   70G  230G  24% /data01
/dev/sdb1   338G   67G  254G  21% /data02
/dev/sdc1   338G   67G  254G  21% /data03
/dev/sdd1   338G   67G  254G  21% /data04
none        2.0G     0  2.0G   0% /dev/shm
/dev/sda8   1.5G   88M  1.3G   7% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.9G  2.3G  5.3G  30% /
/dev/sda1   122M   13M  103M  11% /boot
/dev/sda9   321G   70G  251G  22% /data01
/dev/sdb1   344G   70G  274G  21% /data02
/dev/sdc1   344G   70G  274G  21% /data03
/dev/sdd1   344G   70G  274G  21% /data04
none        2.0G     0  2.0G   0% /dev/shm
/dev/sda8   1.5G   86M  1.4G   6% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.9G  2.3G  5.3G  30% /
/dev/sda1   122M   13M  103M  11% /boot
/dev/sda9   321G   74G  248G  23% /data01
/dev/sdb1   344G   70G  274G  21% /data02
/dev/sdc1   344G   71G  273G  21% /data03
/dev/sdd1   344G   71G  273G  21% /data04
none        2.0G     0  2.0G   0% /dev/shm
/dev/sda8   1.5G   90M  1.4G   7% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.9G  2.2G  5.4G  29% /
/dev/sda1   122M   13M  103M  12% /boot
/dev/sda9   321G   70G  252G  22% /data01
/dev/sdb1   344G   71G  273G  21% /data02
/dev/sdc1   344G   70G  274G  21% /data03
/dev/sdd1   344G   70G  274G  21% /data04
none        3.0G     0  3.0G   0% /dev/shm
/dev/sda8   1.5G   83M  1.4G   6% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.9G  2.2G  5.4G  29% /
/dev/sda1   122M   13M  103M  12% /boot
/dev/sda9   321G   70G  251G  22% /data01
/dev/sdb1   344G   70G  274G  21% /data02
/dev/sdc1   344G   69G  275G  21% /data03
/dev/sdd1   344G   70G  274G  21% /data04
none        3.0G     0  3.0G   0% /dev/shm
/dev/sda8   1.5G   83M  1.4G   6% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.9G  2.2G  5.4G  29% /
/dev/sda1   122M   13M  103M  12% /boot
/dev/sda9   321G   71G  251G  22% /data01
/dev/sdb1   344G   70G  274G  21% /data02
/dev/sdc1   344G   70G  274G  21% /data03
/dev/sdd1   344G   71G  274G  21% /data04
none        3.0G     0  3.0G   0% /dev/shm
/dev/sda8   1.5G   83M  1.4G   6% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.9G  2.2G  5.4G  29% /
/dev/sda1   122M   13M  103M  12% /boot
/dev/sda9   321G   73G  249G  23% /data01
/dev/sdb1   344G   72G  272G  21% /data02
/dev/sdc1   344G   72G  272G  21% /data03
/dev/sdd1   344G   72G  272G  21% /data04
none        3.0G     0  3.0G   0% /dev/shm
/dev/sda8   1.5G   83M  1.4G   6% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.9G  2.2G  5.4G  29% /
/dev/sda1   122M   13M  103M  12% /boot
/dev/sda9   321G   71G  250G  23% /data01
/dev/sdb1   344G   71G  273G  21% /data02
/dev/sdc1   344G   71G  273G  21% /data03
/dev/sdd1   344G   72G  272G  21% /data04
none        3.0G     0  3.0G   0% /dev/shm
/dev/sda8   1.5G   83M  1.4G   6% /var

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda5   7.9G  2.2G  5.4G  29% /
/dev/sda1   122M   13M  103M  12% /boot
/dev/sda9   321G   69G  252G  22% /data01
/dev/sdb1   344G   69G  275G  20% /data02
/dev/sdc1   344G   69G  275G  21% /data03
/dev/sdd1   344G   69G  275G  20% /data04
none        3.0G     0  3.0G   0% /dev/shm
/dev/sda8   1.5G   83M  1.4G   6% /var
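Scanning thirteen df listings for a Use% at or above 90 is error-prone by eye. A sketch of an automated threshold check, assuming the standard six-column df layout above (sample data is inlined, with one fabricated over-threshold partition):

```shell
#!/bin/sh
# Sample df rows; /data02 is a fabricated example of a nearly full partition.
sample='/dev/sda9 316G 72G 228G 24% /data01
/dev/sdb1 338G 310G 28G 92% /data02'

# Print the mount point of any /data01-/data04 partition at >= 90% use.
full=$(printf '%s\n' "$sample" | awk '
  $6 ~ /^\/data0[1-4]$/ { use=$5; sub(/%/, "", use); if (use+0 >= 90) print $6 }')
echo "$full"
```

An empty result means all data partitions are below the 90% threshold.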