Linux mdadm software RAID1 vs. multipath speed test
/dev/md0: configured as RAID1
/dev/md1: configured as multipath
Personalities : [raid0] [multipath] [raid1]
md1 : active multipath xvda8[0] xvda7[1]
104320 blocks [2/2] [UU]
md0 : active raid1 xvda6[1] xvda5[0]
104320 blocks [2/2] [UU]
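For reference, a minimal sketch of how a layout like this can be created with mdadm (the device names are taken from the mdstat output above; the exact options are an assumption, since the original creation commands are not shown):
# assumed creation commands, not from the original post
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvda5 /dev/xvda6
mdadm --create /dev/md1 --level=multipath --raid-devices=2 /dev/xvda7 /dev/xvda8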
1. hdparm test
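The figures below match hdparm's cached/buffered read timing test; a typical invocation (assumed, since the exact command line is not shown) is:
hdparm -tT /dev/md0   # raid1 array
hdparm -tT /dev/md1   # multipath array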
raid1
Run 1:
Timing cached reads: 18036 MB in 1.99 seconds = 9048.53 MB/sec
Timing buffered disk reads: 100 MB in 0.79 seconds = 126.93 MB/sec
Run 2:
Timing cached reads: 15768 MB in 1.99 seconds = 7906.37 MB/sec
Timing buffered disk reads: 100 MB in 0.92 seconds = 108.52 MB/sec
Run 3:
Timing cached reads: 15724 MB in 1.99 seconds = 7885.65 MB/sec
Timing buffered disk reads: 100 MB in 0.86 seconds = 116.37 MB/sec
multipath
Run 1:
Timing cached reads: 17672 MB in 1.99 seconds = 8864.20 MB/sec
Timing buffered disk reads: 100 MB in 0.47 seconds = 212.10 MB/sec
Run 2:
Timing cached reads: 16460 MB in 1.99 seconds = 8256.26 MB/sec
Timing buffered disk reads: 100 MB in 0.40 seconds = 248.29 MB/sec
Run 3:
Timing cached reads: 16076 MB in 1.99 seconds = 8062.19 MB/sec
Timing buffered disk reads: 100 MB in 0.44 seconds = 228.45 MB/sec
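Averaging the three runs, buffered disk reads come out to roughly 117 MB/sec on the raid1 array versus roughly 230 MB/sec on the multipath array, i.e. multipath reads were about twice as fast in this test.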
2. Data copy speed
20M of data
raid1
real 0m3.290s
user 0m0.004s
sys 0m0.036s
multipath
real 0m0.305s
user 0m0.020s
sys 0m0.040s
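A hedged sketch of the kind of command these timings correspond to (the source directory and the mapping of /data1 to md0 and /data2 to md1 are assumptions; the original command line is not shown):
time cp -a /path/to/20M-test-data /data1/   # filesystem on /dev/md0 (raid1), assumed
time cp -a /path/to/20M-test-data /data2/   # filesystem on /dev/md1 (multipath), assumed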
3. Disk fail test
Mark one disk in each array as failed
Initial state
[root@www local]# cat /proc/mdstat
Personalities : [raid0] [multipath] [raid1]
md1 : active multipath xvda8[0] xvda7[1]
104320 blocks [2/2] [UU]
md0 : active raid1 xvda6[1] xvda5[0]
104320 blocks [2/2] [UU]
unused devices: <none>
Fail one member in each array
[root@www local]# mdadm /dev/md0 -f /dev/xvda5
mdadm: set /dev/xvda5 faulty in /dev/md0
[root@www local]# mdadm /dev/md1 -f /dev/xvda7
mdadm: set /dev/xvda7 faulty in /dev/md1
Personalities : [raid0] [multipath] [raid1]
md1 : active multipath xvda8[0] xvda7[2](F)
104320 blocks [2/1] [U_]
md0 : active raid1 xvda6[1] xvda5[2](F)
104320 blocks [2/1] [_U]
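For reading the output above: in /proc/mdstat, [2/2] means two member devices configured and two active, [2/1] means one of the two has dropped out; [UU], [U_] and [_U] show per-slot status (U = up, _ = down), and the (F) suffix marks the member that was set faulty.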
Disk usage and file count
[root@www data1]# du -sh
20M .
[root@www data1]# find ./ -name "*" | wc -l
593
[root@www data2]# du -sh
20M .
[root@www data2]# find ./ -name "*" | wc -l
593
Copy another 6M of data
[root@www data2]# du -sh /data1
25M /data1
[root@www data2]# du -sh /data2
25M /data2
Hot-remove the failed disks
[root@www data2]# mdadm /dev/md0 -r /dev/xvda5
mdadm: hot removed /dev/xvda5
[root@www data2]# mdadm /dev/md1 -r /dev/xvda7
mdadm: hot removed /dev/xvda7
md1 : active multipath xvda8[0]
104320 blocks [2/1] [U_]
md0 : active raid1 xvda6[1]
104320 blocks [2/1] [_U]
Re-format the disks that were marked failed
mkfs -j /dev/xvda5
mkfs -j /dev/xvda7
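Note that for the RAID1 member this reformat is not strictly needed: when /dev/xvda5 is re-added below, md resynchronizes it from the surviving mirror (/dev/xvda6), overwriting the freshly created filesystem.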
Hot-add the reformatted disks back
[root@www data2]# mdadm /dev/md0 -a /dev/xvda5
mdadm: added /dev/xvda5
[root@www data2]# mdadm /dev/md1 -a /dev/xvda7
mdadm: added /dev/xvda7
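Recovery progress after the hot add can be followed with, for example:
cat /proc/mdstat
mdadm --detail /dev/md0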
Check disk usage and file count
[root@www data2]# du -sh /data1
25M /data1
[root@www data2]# find /data1 -name "*" | wc -l
768
[root@www data2]# du -sh /data2
25M /data2
[root@www data2]# find /data2 -name "*" | wc -l
768
Conclusion:
multipath was faster than raid1 in both the hdparm and the copy tests.
There was no data loss in either case.
Differences:
md1 : active multipath xvda7[2](S) xvda8[0]
104320 blocks [2/1] [U_]
---> State : clean, degraded
md0 : active raid1 xvda5[0] xvda6[1]
104320 blocks [2/2] [UU]
State of /dev/md1 (mdadm --detail /dev/md1):
Number Major Minor RaidDevice State
0 202 8 0 active sync /dev/xvda8
1 0 0 1 removed
2 202 7 - spare /dev/xvda7
After a reboot, running mdadm -a /dev/md1 /dev/xvda7 returns it to a normal state.
[root@www data2]# cat /proc/mdstat
Personalities : [raid1] [multipath]
md1 : active multipath xvda7[2] xvda8[0]
104320 blocks [2/2] [UU]
md0 : active raid1 xvda5[0] xvda6[1]
104320 blocks [2/2] [UU]
[root@www data2]# du -sh /data1 ; find /data1 -name "*" | wc -l
25M /data1
768
[root@www data2]# du -sh /data2 ; find /data1 -name "*" | wc -l
25M /data2
768
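A common housekeeping step (an assumption here, not something the original test shows) is to record the arrays in /etc/mdadm.conf so they are assembled consistently at boot; whether this alone avoids the degraded multipath state after a reboot was not verified in this test:
mdadm --detail --scan >> /etc/mdadm.conf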