Commit 3ccb91d

update from SparkLearning

xubo245 committed Mar 21, 2017
1 parent ba2de0b commit 3ccb91d

Showing 205 changed files with 18,687 additions and 296,695 deletions.
296,337 changes: 0 additions & 296,337 deletions adam.log

This file was deleted.

Binary file removed docs/Issues/~$报错情况汇报.docx
Binary file not shown.
22 changes: 22 additions & 0 deletions docs/LinuxLearning/Ubuntu学习100之模版.md
@@ -0,0 +1,22 @@

For more code, see: https://github.com/xubo245

Ubuntu Learning

# 1. Explanation #



# 2. Code: #



# 3. Result: #



References

【1】https://github.com/xubo245
【2】http://blog.csdn.net/xubo245/
@@ -0,0 +1,93 @@

For more code, see: https://github.com/xubo245

Ubuntu Learning

# 1. Explanation #

The original system would not boot because of its RAID setup: it reported a "Non-RAID disk" error and still failed to boot after the RAID was redone. DHCP assigned an IP address, but TFTP timed out, so the disk was moved to a healthy host to recover the data.


# 2. Code: #
Use the parted command (parted is the partitioning tool; see reference 【3】):

```
hadoop@Mcnode7:~$ sudo parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) rm 1
Error: Can't have a partition outside the disk!
(parted) print
Error: Can't have a partition outside the disk!
(parted) mklabel gpt
(parted) print
Model: ATA Hitachi HDT72105 (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

(parted) mkpart primary 0% 100%
(parted) print
Model: ATA Hitachi HDT72105 (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End    Size   File system  Name     Flags
 1      1049kB  500GB  500GB  ext4         primary
```
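The interactive session above can be condensed into a non-interactive sketch. This is an assumption-laden outline, not the author's script: the `DISK` variable and the `DRY_RUN` guard are made up for illustration, and actually applying it (DRY_RUN=0, as root) destroys the partition table on `$DISK`.

```shell
#!/bin/sh
# Sketch only: relabel a disk as GPT and create one full-size partition,
# mirroring the interactive parted session above. DRY_RUN defaults to 1,
# so the commands are printed rather than executed; set DRY_RUN=0 and run
# as root to apply them for real.
DISK=${DISK:-/dev/sdb}
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run parted -s "$DISK" mklabel gpt
run parted -s "$DISK" mkpart primary ext4 0% 100%
run parted -s "$DISK" print
```

`parted -s` (script mode) suppresses the interactive prompts, which is what makes the one-liner form possible.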


# 3. Result: #


Check: it turns out that the data from the original Mcnode5 is all still there and still accessible:

```
hadoop@Mcnode7:~$ sudo fdisk -lu
[sudo] password for hadoop:

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000db939

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   951625727   475811840   83  Linux
/dev/sda2       951627774   976771071    12571649    5  Extended
/dev/sda5       951627776   976771071    12571648   82  Linux swap / Solaris

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1   976773167   488386583+  ee  GPT
hadoop@Mcnode7:~$ cd /media/hadoop/7c63f77d-11b0-4769-bf60-5380d8ae7b9e
hadoop@Mcnode7:/media/hadoop/7c63f77d-11b0-4769-bf60-5380d8ae7b9e$ ls
bin  boot  cdrom  dev  etc  home  initrd.img  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  vmlinuz
hadoop@Mcnode7:/media/hadoop/7c63f77d-11b0-4769-bf60-5380d8ae7b9e$ cd usr/
hadoop@Mcnode7:/media/hadoop/7c63f77d-11b0-4769-bf60-5380d8ae7b9e/usr$ ls
bin  games  include  lib  local  sbin  share  src
hadoop@Mcnode7:/media/hadoop/7c63f77d-11b0-4769-bf60-5380d8ae7b9e/usr$ cd ../home/
hadoop@Mcnode7:/media/hadoop/7c63f77d-11b0-4769-bf60-5380d8ae7b9e/home$ ls
hadoop  uftp
hadoop@Mcnode7:/media/hadoop/7c63f77d-11b0-4769-bf60-5380d8ae7b9e/home$ cd hadoop/
hadoop@Mcnode7:/media/hadoop/7c63f77d-11b0-4769-bf60-5380d8ae7b9e/home/hadoop$ ls
backup223  cloud    disk.txt   Downloads         logs        Music     Public     redis1492.sh  rmq_bk_gc.log  store      test     Videos  zookeeper.out
Chng       Desktop  Documents  examples.desktop  middleware  Pictures  recommend  redis149.sh   site           Templates  test.sh  xubo
```


References

【1】https://github.com/xubo245
【2】http://blog.csdn.net/xubo245/
【3】http://www.cnblogs.com/net2012/archive/2013/02/01/2888453.html
193 changes: 193 additions & 0 deletions docs/LinuxLearning/Ubuntu学习2之Ubuntu系统增加磁盘.md
@@ -0,0 +1,193 @@

For more code, see: https://github.com/xubo245

Ubuntu Learning

# 1. Explanation #

The machine currently has 500 GB, which is not enough. Spare disks are on hand, so how do we mount a new disk to bring the total to 1 TB?

1) Partition
2) Format
3) Mount


# 2. Code: #



## (0) Before partitioning: ##

```
hadoop@Mcnode7:~$ sudo fdisk -lu
[sudo] password for hadoop:

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000db939

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   951625727   475811840   83  Linux
/dev/sda2       951627774   976771071    12571649    5  Extended
/dev/sda5       951627776   976771071    12571648   82  Linux swap / Solaris

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00004386

   Device Boot      Start         End      Blocks   Id  System
hadoop@Mcnode7:~$ sudo parted
```

We can see that /dev/sdb has no partitions yet, so it must be partitioned first.
## (1) Partitioning ##

Start the partitioning tool:

```
hadoop@Mcnode7:~$ sudo fdisk /dev/sdb
```

List the available commands:

```
Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)
```

Operation: n creates a new partition, p makes it a primary partition, 1 is the partition number, and the start and end sectors that follow are left at their defaults.

```
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-976773167, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-976773167, default 976773167):
Using default value 976773167
```

View it: p prints the partition table.

```
Command (m for help): p

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00004386

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   976773167   488385560   83  Linux
```

Make the new partition take effect in the system:

```
partprobe
```
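The keystrokes above (n, p, 1, two defaults, then w to write) can also be fed to fdisk non-interactively. The sketch below only prints the keystroke script; piping it into `sudo fdisk /dev/sdb` is the usual trick, but do that only once you are certain of the target disk, since w writes the table immediately.

```shell
#!/bin/sh
# Sketch: print the keystroke script matching the interactive fdisk session
# above (new partition, primary, number 1, default start, default end, write).
# To apply it for real: printf ... | sudo fdisk /dev/sdb
printf '%s\n' n p 1 '' '' w
```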

## (2) Formatting ##
Format the new partition:

```
hadoop@Mcnode7:~$ sudo mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
30531584 inodes, 122096390 blocks
6104819 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
3727 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
```
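As a quick cross-check of that mkfs output, the block count times the block size should come out near the 500 GB disk size fdisk reported:

```shell
#!/bin/sh
# Sanity check: 122096390 blocks of 4096 bytes each (from the mkfs.ext4
# output above) should be close to the 500,107,862,016-byte disk reported
# by fdisk; the small gap is the space before the partition plus metadata
# rounding.
blocks=122096390
block_size=4096
echo $((blocks * block_size))   # 500106813440 bytes, i.e. ~500.1 GB
```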

## (3) Mounting ##
Check the UUID:

```
hadoop@Mcnode7:~$ sudo blkid -p /dev/sdb1
/dev/sdb1: UUID="0da52ea9-623e-45fa-80b9-f5982c7f63ac" VERSION="1.0" TYPE="ext4" USAGE="filesystem" PART_ENTRY_SCHEME="dos" PART_ENTRY_TYPE="0x83" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="976771120" PART_ENTRY_DISK="8:16"
```

Or:

```
hadoop@Mcnode4:~$ sudo blkid /dev/sdb1
/dev/sdb1: UUID="0da52ea9-623e-45fa-80b9-f5982c7f63ac" TYPE="ext4"
```

Edit fstab: add a new line to /etc/fstab:

```
UUID=0da52ea9-623e-45fa-80b9-f5982c7f63ac /home/hadoop/disk2 ext4 defaults 0 2
```
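Typing that line by hand invites typos, so here is a small sketch that assembles it from a UUID and a mount point. The helper name `make_fstab_line` is made up for illustration; in practice the UUID can be read with `blkid -s UUID -o value /dev/sdb1`.

```shell
#!/bin/sh
# Hypothetical helper: compose an fstab entry (ext4, default options,
# dump 0, fsck pass 2) from a filesystem UUID and a mount point.
make_fstab_line() {
  printf 'UUID=%s %s ext4 defaults 0 2\n' "$1" "$2"
}

# Example with the UUID from the blkid output above; append the result
# to /etc/fstab as root once it looks right.
make_fstab_line 0da52ea9-623e-45fa-80b9-f5982c7f63ac /home/hadoop/disk2
```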

Session log: the /home/hadoop/disk2 mount point must be created first.

```
hadoop@Mcnode4:~$ sudo vi /etc/fstab
[sudo] password for hadoop:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=75b77b05-3660-4d44-b914-cf6233aa18f6 /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=d04b6d14-feef-4b22-8744-15d2360cfd61 none            swap    sw              0       0

UUID=0da52ea9-623e-45fa-80b9-f5982c7f63ac /home/hadoop/disk2 ext4 defaults 0 2
```

Apply it:

```
hadoop@Mcnode4:~/disk2$ sudo mount -a
```

Check the result:

```
hadoop@Mcnode4:~/disk2$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       453G  368G   63G  86% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            3.0G  8.0K  3.0G   1% /dev
tmpfs           597M  1.3M  595M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.0G   80K  3.0G   1% /run/shm
none            100M   36K  100M   1% /run/user
/dev/sdb1       459G   70M  435G   1% /home/hadoop/disk2
```

The mount succeeded, and it persists across reboots.
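The same check can be scripted: `mountpoint` (from util-linux) reports whether a path is an active mount. A minimal sketch; the `is_mounted` wrapper is hypothetical:

```shell
#!/bin/sh
# Hypothetical wrapper around util-linux's mountpoint(1): prints "mounted"
# if the given path is an active mount point, "not mounted" otherwise.
is_mounted() {
  if mountpoint -q "$1"; then echo mounted; else echo "not mounted"; fi
}

# The root filesystem is always a mount point, so this is a safe demo;
# after `sudo mount -a`, try it on /home/hadoop/disk2.
is_mounted /
```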


# 3. Result: #

The configuration succeeded.

References

【1】https://github.com/xubo245
【2】http://blog.csdn.net/xubo245/
44 changes: 44 additions & 0 deletions docs/LinuxLearning/Ubuntu学习3之设置静态ip地址.md
@@ -0,0 +1,44 @@

For more code, see: https://github.com/xubo245

Ubuntu Learning

# 1. Explanation #

When building a cluster, static IP addresses are needed to avoid the connection problems that changing IPs would cause.

# 2. Code: #

(1) Modify the interfaces file:

```
hadoop@Master:~$ sudo vi /etc/network/interfaces
[sudo] password for hadoop:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto eth0
iface eth0 inet static
address 192.168.1.149
gateway 192.168.1.1
netmask 255.255.255.0
```

(2) Set the DNS server:

```
hadoop@Master:~$ vi /etc/resolvconf/resolv.conf.d/base

nameserver 192.168.223.10
```

(3) Apply the changes:

```
sudo ifdown eth0
sudo ifup eth0
```

Or reboot with sudo reboot.
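A typo in the stanza above (a missing netmask, say) only shows up when ifup runs. As a small sketch, the hypothetical `check_iface` helper below greps a config file for the three keys this section uses; it demonstrates against a sample stanza written to a temp file, not the real /etc/network/interfaces.

```shell
#!/bin/sh
# Hypothetical sanity check: confirm a static-IP stanza contains the three
# keys used above (address, gateway, netmask). Prints "ok", or names the
# first missing key and returns non-zero.
check_iface() {
  for key in address gateway netmask; do
    grep -q "^[[:space:]]*$key " "$1" || { echo "missing $key"; return 1; }
  done
  echo ok
}

# Demonstrate against a sample stanza like the one in this section.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
auto eth0
iface eth0 inet static
address 192.168.1.149
gateway 192.168.1.1
netmask 255.255.255.0
EOF
check_iface "$tmp"
rm -f "$tmp"
```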

# 3. Result: #

The static IP was set successfully.


References

【1】https://github.com/xubo245
【2】http://blog.csdn.net/xubo245/