GlusterFS deployment and usage
Volume – GlusterFS is a mainstream distributed storage cluster. It keeps data reliable and improves performance, and is typically used by enterprises for big data and other workloads that need high performance.
If any single node fails, another replica keeps serving the data, so the impact is small.
GlusterFS deployment: http://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
Kubernetes GlusterFS volume example: https://github.com/kubernetes/kubernetes/tree/8fd414537b5143ab039cb910590237cabf4af783/examples/volumes/glusterfs
# kubectl create -f glusterfs-endpoints.json
# kubectl create -f glusterfs-service.json
GlusterFS deployment steps
A cluster needs at least two nodes; this example uses three (server1, server2, server3).
Each node needs a dedicated partition for the GlusterFS brick, separate from the OS partition.
Create a new disk, /dev/sdb, on all three nodes:
[root@maste ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x18a85a8b.
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4194303, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-4194303, default 4194303):
Using default value 4194303
Partition 1 of type Linux and of size 2 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
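The same partition can also be created non-interactively; a minimal sketch, assuming /dev/sdb is a fresh, empty disk on every node:
# label the disk and create one primary XFS partition spanning the whole disk
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary xfs 1MiB 100%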
Configure hosts
# There are three machines (two masters, one active and one standby, plus one client); configure /etc/hosts on all three.
# GlusterFS server nodes (the last one is the client node); run the following on all three machines:
cat <<EOF>> /etc/hosts
192.168.224.142 server1
192.168.224.143 server2
192.168.224.144 server3
EOF
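A quick sanity check that every machine resolves the names configured above (hostnames as in /etc/hosts; run on each node):
for h in server1 server2 server3; do ping -c 1 $h; done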
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /data/brick1
echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount
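To confirm the brick filesystem is mounted as expected (path as configured above):
df -hT /data/brick1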
yum install -y bash-completion
yum install centos-release-gluster -y
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
systemctl start glusterd.service
systemctl enable glusterd.service
service glusterd status
iptables -I INPUT -p all -s 192.168.224.142 -j ACCEPT
iptables -I INPUT -p all -s 192.168.224.143 -j ACCEPT
iptables -I INPUT -p all -s 192.168.224.144 -j ACCEPT
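The same accept rules can be applied with a short loop (node IPs as above; run on each GlusterFS server):
for ip in 192.168.224.142 192.168.224.143 192.168.224.144; do iptables -I INPUT -p all -s $ip -j ACCEPT; done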
From "server1":
gluster peer probe server2
gluster peer probe server3
From "server2":
gluster peer probe server1
Check the peer status on server1:
gluster peer status
Step 6 - Set up a GlusterFS volume
On all servers:
mkdir -p /data/brick1/gv0
From any single server:
gluster volume create gv0 replica 3 server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0
gluster volume start gv0
Confirm that the volume shows "Started":
gluster volume info
Step 7 - Test the GlusterFS volume
For this step, we will use one of the servers to mount the volume. Typically, you would do this from an external machine known as a "client". Since that method requires additional packages to be installed on the client machine, we use one of the servers as a simple place to test first, as if it were a "client".
mount -t glusterfs server1:/gv0 /mnt
touch a
for i in `seq -w 1 100`; do cp -rp a /mnt/copy-test-$i; done
First, check the client mount point:
ls -lA /mnt/copy* | wc -l
You should see 100 files returned. Next, check the GlusterFS brick mount point on each server:
ls -lA /data/brick1/gv0/copy*
Using the method listed here, you should see 100 files on each server. Without replication, on a distribute-only volume (not detailed here), you would instead see about 33 files on each brick.
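A quick way to verify this per server is to count the copies directly on each brick (run on server1, server2 and server3; with replica 3 each should report 100):
ls -lA /data/brick1/gv0/copy* | wc -l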
Preparation, on the first GlusterFS node (server1): clear the volume and create a test index.html
cd /data/brick1/gv0/ && rm -rf * && echo Gluster > index.html
[root@glusterfs01 gv0]# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: 6bf89f23-13c1-42c5-bf10-46425a7b1f5a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/data/brick1/gv0
Brick2: server2:/data/brick1/gv0
Brick3: server3:/data/brick1/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@glusterfs01 gv0]# cat /etc/hosts
192.168.224.142 server1
192.168.224.143 server2
192.168.224.144 server3
Add the following entries to /etc/hosts on every server in the k8s cluster:
cat << EOF >> /etc/hosts
192.168.224.142 server1
192.168.224.143 server2
192.168.224.144 server3
EOF
Install the GlusterFS client tools on every k8s node:
yum install -y glusterfs-fuse
On k8s node Node03:
[root@Node03 ~]# mount -t glusterfs server1:/gv0 /mnt
[root@Node03 ~]# ls /mnt
a copy-test-017 copy-test-034 copy-test-051 copy-test-068 copy-test-085
copy-test-001 copy-test-018 copy-test-035 copy-test-052 copy-test-069 copy-test-086
...
These are the files generated by the earlier GlusterFS test; they now appear under the locally mounted directory as well.
Integrate GlusterFS into k8s with a Service and an Endpoints object:
[root@Master01 volume]# mkdir glusterfs ; cd glusterfs
cat << EOF > glusterfs-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}
EOF
cat << EOF > glusterfs-pod.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "glusterfs"
  },
  "spec": {
    "containers": [
      {
        "name": "glusterfs",
        "image": "kubernetes/pause",
        "volumeMounts": [
          {
            "mountPath": "/mnt/glusterfs",
            "name": "glusterfsvol"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "glusterfsvol",
        "glusterfs": {
          "endpoints": "glusterfs-cluster",
          "path": "kube_vol",
          "readOnly": true
        }
      }
    ]
  }
}
EOF
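Note: the pod spec above is taken from the upstream Kubernetes example and references the volume path "kube_vol"; in this environment the volume created earlier is gv0, so adjust the path before actually creating this pod, for example:
sed -i 's/kube_vol/gv0/' glusterfs-pod.json
kubectl create -f glusterfs-pod.json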
cat << EOF > glusterfs-endpoints.json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "192.168.224.142"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "192.168.224.143"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "192.168.224.144"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}
EOF
cat << EOF > nginx-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: glusterfsvo1
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: glusterfsvo1
        glusterfs:
          endpoints: glusterfs-cluster
          path: gv0
          readOnly: false
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
EOF
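Note: extensions/v1beta1 Deployments were removed in later Kubernetes releases (apps/v1 is required from v1.16 on). A minimal sketch of the same Deployment rewritten for apps/v1, which additionally requires an explicit selector (the file name here is just an example):
cat << EOF > nginx-deployment-apps-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: glusterfsvo1
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: glusterfsvo1
        glusterfs:
          endpoints: glusterfs-cluster
          path: gv0
          readOnly: false
EOF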
[root@Master01 gclusterfs]# kubectl create -f glusterfs-endpoints.json
endpoints "glusterfs-cluster" created
[root@Master01 gclusterfs]# kubectl get ep
NAME ENDPOINTS AGE
glusterfs-cluster 192.168.224.142:1,192.168.224.143:1,192.168.224.144:1 3m
[root@Master01 gclusterfs]# kubectl create -f glusterfs-service.json
service "glusterfs-cluster" created
[root@Master01 gclusterfs]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
glusterfs-cluster ClusterIP 10.254.118.211 <none> 1/TCP 11s
Delete the resources left over from the earlier NFS test:
[root@Master01 volume]# kubectl delete -f ../nginx-deployment.yaml
deployment "nginx-deployment" deleted
[root@Master01 volume]# kubectl delete -f ../nginx-service.yaml
service "nginx-service" deleted
[root@Master01 glusterfs]# kubectl create -f nginx-deployment.yaml
deployment "nginx-deployment" created
service "nginx-service" created
[root@Master01 glusterfs]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-deployment-b55b5b655-58b9v 1/1 Running 0 29s
nginx-deployment-b55b5b655-gftm2 1/1 Running 0 29s
nginx-deployment-b55b5b655-m24h4 1/1 Running 0 29s
[root@Master01 glusterfs]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
glusterfs-cluster ClusterIP 10.254.118.211 <none> 1/TCP 59m
nginx-service NodePort 10.254.84.255 <none> 80:24858/TCP 58s
[root@Master01 glusterfs]# kubectl exec -it nginx-deployment-b55b5b655-m24h4 bash
root@nginx-deployment-b55b5b655-m24h4:/# mount | grep gluster
192.168.224.142:gv0 on /usr/share/nginx/html type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
The mount goes through 192.168.224.142; now stop GlusterFS on the 192.168.224.142 server:
[root@glusterfs01 gv0]# ip a | grep 192
inet 192.168.224.142/24 brd 192.168.224.255 scope global ens33
[root@glusterfs01 gv0]# systemctl stop glusterd
[root@glusterfs01 gv0]# gluster peer status
Connection failed. Please check if gluster daemon is operational.
Refresh the page (F5); it is still accessible, because the remaining replicas keep serving the data.
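The same check from the command line instead of a browser (NodePort 24858 as shown in the kubectl get svc output above; <node-ip> is a placeholder for any k8s node's address):
curl http://<node-ip>:24858
# expected to still return the index.html containing "Gluster" while glusterd on server1 is stopped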
[root@Master01 glusterfs]# kubectl exec -it nginx-deployment-b55b5b655-gftm2 bash
root@nginx-deployment-b55b5b655-gftm2:/# mount | grep gluster
192.168.224.142:gv0 on /usr/share/nginx/html type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@glusterfs01 gv0]# systemctl start glusterd
[root@glusterfs01 gv0]# gluster peer status
Number of Peers: 2
Hostname: server2
Uuid: d9147123-e428-4769-9208-4b40444a602f
State: Peer in Cluster (Connected)
Hostname: server3
Uuid: 219524a5-db2f-4762-a9f7-53a1b825434a
State: Peer in Cluster (Connected)
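After glusterd on server1 comes back, a standard way to check whether any files still need to be synchronized between the replicas:
gluster volume heal gv0 info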
For more volume types supported by k8s, see:
https://kubernetes.io/docs/concepts/storage/volumes/