![](https://blog.kakaocdn.net/dn/rcuM2/btrn6O4iU1C/0OCtJGakKUjKrkKZseeWr0/img.jpg)
mkdir /hadoop/temp
mkdir /hadoop/namenode_home
mkdir /hadoop/datanode_home
![](https://blog.kakaocdn.net/dn/u1xrD/btrn7rnvWeN/kHjnSDo8MdggaN76qSY44k/img.png)
cd /hadoop
ls
![](https://blog.kakaocdn.net/dn/ca0JWw/btrn7R0vnq2/pW6bxCVhzGyyFVI6lnF9K1/img.png)
The temp, namenode_home, and datanode_home directories have been created successfully.
cd $HADOOP_CONFIG_HOME
ls
![](https://blog.kakaocdn.net/dn/cPQhS0/btrn447wNMV/YsHfYKxr3X8sDZFYMLHXYk/img.png)
There are quite a lot of configuration files in here!
We'll copy mapred-site.xml.template to create mapred-site.xml.
cp mapred-site.xml.template mapred-site.xml
![](https://blog.kakaocdn.net/dn/cNpo4M/btrn6zsPWJe/nIPNoUWNKDKemM1NFKoHcK/img.png)
The file mapred-site.xml.template has now been copied to mapred-site.xml.
vim mapred-site.xml
![](https://blog.kakaocdn.net/dn/bzZtSY/btrn8ICfgDy/bjHO5ARicO38se0x76HkK1/img.png)
![](https://blog.kakaocdn.net/dn/pl98D/btrn84yCybX/Oka4XEXmokRfsfPTvp7cTK/img.png)
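The screenshots above show the edited mapred-site.xml. For reference, a minimal single-node configuration along these lines would look roughly like the following; the property name and the port 9001 are assumptions based on typical Hadoop 2.x Docker tutorials, so use whatever actually appears in the screenshot:

<configuration>
  <property>
    <!-- Assumed: classic MR1-style job tracker address; adjust to match your own setup -->
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>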
vim core-site.xml
![](https://blog.kakaocdn.net/dn/Nvokp/btrobVgBmRH/AjqTNVqgnKrN69hK9TQgK0/img.png)
![](https://blog.kakaocdn.net/dn/GSOQF/btrn66wPIu3/VrTKWRKwhVPZilYoctf5O1/img.png)
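For core-site.xml, the idea is to point the default filesystem at HDFS on this node and to use the /hadoop/temp directory we made earlier. A hedged sketch of what the file typically contains (the port 9000 is an assumption):

<configuration>
  <property>
    <!-- Default filesystem URI; on a single node this points at localhost -->
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <!-- Temporary working directory created above -->
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/temp</value>
  </property>
</configuration>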
vim hdfs-site.xml
![](https://blog.kakaocdn.net/dn/mZUFP/btroaseWAYn/hxf6Bkeq49XrZicygpyIE1/img.png)
![](https://blog.kakaocdn.net/dn/bnD4mY/btrn205r93a/2KYqTpTW1LbkFiZJz1W1K0/img.png)
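hdfs-site.xml is where the namenode_home and datanode_home directories come in. A rough example; the replication factor here is an assumption, so check the screenshot for the exact value:

<configuration>
  <property>
    <!-- Number of block replicas; 1 is enough while everything runs on a single node -->
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <!-- NameNode metadata directory created above -->
    <name>dfs.namenode.name.dir</name>
    <value>/hadoop/namenode_home</value>
  </property>
  <property>
    <!-- DataNode block storage directory created above -->
    <name>dfs.datanode.data.dir</name>
    <value>/hadoop/datanode_home</value>
  </property>
</configuration>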
hadoop namenode -format
![](https://blog.kakaocdn.net/dn/oRdvO/btrn4qXtEXd/VbMmKmR5kioKA3AMrNubVK/img.png)
start-all.sh
yes
yes
![](https://blog.kakaocdn.net/dn/zykeK/btrn6O4iWZ7/Csn5Tz6GNOkn79ZBX1SQLK/img.png)
jps
![](https://blog.kakaocdn.net/dn/w4MxL/btrn6iR2FJ3/cgN08W3Ut23QYt8rB50PO0/img.png)
jps lists the running Hadoop Java processes, so you can check that the daemons started by start-all.sh are actually up.
hadoop fs -mkdir -p /test
cd $HADOOP_HOME
hadoop fs -put LICENSE.txt /test
hadoop fs -ls /test
![](https://blog.kakaocdn.net/dn/1Bmzl/btrobVnmTGL/Lw5VJPdZnVBOiD5GsvTAPk/img.png)
This uploads the LICENSE.txt file into the /test directory on HDFS.
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.1.jar wordcount /test /test_out
![](https://blog.kakaocdn.net/dn/brh3ER/btrn7r17h1R/U2EI7wpfTUurSkpUAfgQrK/img.png)
hadoop fs -cat /test_out/*
![](https://blog.kakaocdn.net/dn/daXris/btroarNRSkc/dTZGOztThuBKbEYMAdo880/img.png)
The wordcount example counts how many times each word appears in the input files.
Up to this point, a single virtual machine (container) has acted as both master and slave. Next, we'll rebuild it as a multi-node cluster, so first clear out the NameNode and DataNode directories.
rm -rf /hadoop/namenode_home
rm -rf /hadoop/datanode_home
mkdir /hadoop/namenode_home
mkdir /hadoop/datanode_home
![](https://blog.kakaocdn.net/dn/k3uRb/btrn7QAx5qZ/bkJHYtl2M0mLuuFoxZwAK0/img.png)
cd $HADOOP_CONFIG_HOME
vim core-site.xml
![](https://blog.kakaocdn.net/dn/OAAh3/btrn7rujc3W/BkJq3YMk3XV8ok4qs3WTFK/img.png)
![](https://blog.kakaocdn.net/dn/J9Tq8/btrn6GZsllp/X0LPq0fLW08m7BJQgpHVjk/img.png)
Change localhost to master.
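In other words, assuming the port-9000 setting from earlier, the default filesystem property now changes roughly like this:

<property>
  <name>fs.default.name</name>
  <!-- was hdfs://localhost:9000 on the single-node setup -->
  <value>hdfs://master:9000</value>
</property>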
vim mapred-site.xml
![](https://blog.kakaocdn.net/dn/NFKHU/btrobOIHFgE/OYRkVrA2mJrwnZ2jrrwmek/img.png)
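The same localhost-to-master change applies here; assuming the mapred.job.tracker property sketched earlier, the entry would now read something like:

<property>
  <name>mapred.job.tracker</name>
  <!-- was localhost:9001 on the single-node setup -->
  <value>master:9001</value>
</property>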
exit
docker ps
docker stop hadoop
docker commit hadoop myhadoop
![](https://blog.kakaocdn.net/dn/bCVVgI/btrobU9PHBC/acKP5KNdBfxnPHrKC1Rcuk/img.png)
If the container is still running, stop it with docker stop hadoop; if it is not running, you can skip that command.
docker commit hadoop myhadoop saves the container named hadoop as a new image called myhadoop.
docker run -it -h master --name master -p 5070:50070 myhadoop
The external (host) port is 5070 and the internal (container) port is 50070, the default NameNode web UI port.
![](https://blog.kakaocdn.net/dn/3uW6x/btrobU24bHX/oi3qR3FRnBQwkoFcuS57A0/img.png)
exit
docker ps
![](https://blog.kakaocdn.net/dn/bLFzCK/btrobGqtqAv/yLzHPDdBGrkjHJTkENNCP0/img.png)
Leave the container with exit; docker ps then shows that master is no longer running.
docker run -it -h slave1 --name slave1 --link master:master myhadoop
docker run -it -h slave2 --name slave2 --link master:master myhadoop
docker run -it -h slave3 --name slave3 --link master:master myhadoop
![](https://blog.kakaocdn.net/dn/euNsRG/btrn83mbXvc/U5KOJ7NMjFsCTl9CY0LPZ1/img.png)
docker start master
docker start slave1
docker start slave2
docker start slave3
docker inspect master | find "IPAddress"
![](https://blog.kakaocdn.net/dn/bCzFnz/btrobTJRhiR/NfKwBe2etEaxlTvHg9KSR0/img.png)
On macOS, run docker inspect master | grep IPAddress instead.
172.17.0.2 is the master.
docker inspect slave1 | find "IPAddress"
docker inspect slave2 | find "IPAddress"
docker inspect slave3 | find "IPAddress"
![](https://blog.kakaocdn.net/dn/bxwp1Z/btrn7qoBUDN/FnXLOanqszRfgofhSmlBnk/img.png)
Running the same command for slave1, slave2, and slave3 returns each container's IP address in turn.
172.17.0.3 is slave1, 172.17.0.4 is slave2, and 172.17.0.5 is slave3.
docker exec -it master bash
vim /etc/hosts
![](https://blog.kakaocdn.net/dn/ehsgKn/btrobH3YPGK/znvPVLbZ01scCoFHp76Qi1/img.png)
Opening the file with vim shows that master is already mapped to 172.17.0.2.
Let's add the slaves as well.
![](https://blog.kakaocdn.net/dn/kCObJ/btrobMYp7o5/Uc3q1LPKRWlekpibyEY761/img.png)
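With the IP addresses found above, the relevant part of /etc/hosts ends up looking roughly like this (the existing default entries such as 127.0.0.1 localhost stay as they are):

172.17.0.2      master
172.17.0.3      slave1
172.17.0.4      slave2
172.17.0.5      slave3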
cd $HADOOP_CONFIG_HOME
ls
![](https://blog.kakaocdn.net/dn/bJJQH3/btrn7QAx5oR/RkR7n0KlfkfkbackKTWqGk/img.png)
vim masters
![](https://blog.kakaocdn.net/dn/wyP0e/btrn1YNzdFs/V3kKCmo6cO6QucY9vsiFLK/img.png)
The masters file is empty when you first open it; write master in it.
vim slaves
![](https://blog.kakaocdn.net/dn/bdYhia/btrobTpym1A/lupdb4cpF9gdsDCKR29aK0/img.png)
The slaves file contains localhost; delete that line and list the slave hostnames instead.
![](https://blog.kakaocdn.net/dn/IxJ3T/btrn45FrcfW/Atr1hCkt8TnB4IkqfEKORK/img.png)
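Based on the hostnames used in this post, the slaves file would contain one slave per line, something like:

slave1
slave2
slave3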
ssh slave1
yes
exit
ssh slave2
yes
exit
ssh slave3
yes
exit
![](https://blog.kakaocdn.net/dn/brAeHd/btrobUILatA/KFomMoO79r0jRov9IBK5eK/img.png)
After doing this once for each slave, SSH will no longer prompt you when hopping back and forth between the nodes.
hadoop namenode -format
Y
hadoop datanode -format
start-dfs.sh
yes
start-yarn.sh
![](https://blog.kakaocdn.net/dn/MF2PL/btroar1oWJj/xRrhk2UTvC7GtdUbRKYCX0/img.png)
![](https://blog.kakaocdn.net/dn/baVxxJ/btrn8GLc0uH/ck4mGj30mukmbbOZm2i3g0/img.png)
Entering localhost:5070 in the browser shows this screen, the HDFS NameNode web UI.
stop-dfs.sh
stop-yarn.sh
rm -rf /hadoop/namenode_home
rm -rf /hadoop/datanode_home
mkdir /hadoop/namenode_home
mkdir /hadoop/datanode_home
ssh slave1
exit
ssh slave2
exit
ssh slave3
exit
![](https://blog.kakaocdn.net/dn/p51n7/btrn43Ol7Zc/JquJZUfJV4nellVe36kxx0/img.png)
start-dfs.sh
start-yarn.sh
![](https://blog.kakaocdn.net/dn/dcNX3g/btroarNRSha/DzD4PPm9y8lO697DhmWV7k/img.png)