How to study Hadoop on macOS

Instructions

Install

brew install hadoop
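
To confirm the install, `hadoop version` prints the installed release:

hadoop version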

Setup

Environment Variables

  • find HADOOP_HOME
    brew info hadoop

  • open your shell rc file
vi ~/.zshrc # run `echo $0` first to check which shell you use
  • insert the following lines
export HADOOP_HOME="/opt/homebrew/Cellar/hadoop/3.4.0/libexec"
# export PATH="${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH"
export PATH="${HADOOP_HOME}/bin:$PATH"

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar # tools.jar exists only on JDK 8; it was removed in JDK 9+

alias hstart="${HADOOP_HOME}/sbin/start-all.sh"
alias hstop="${HADOOP_HOME}/sbin/stop-all.sh"
  • reload the rc file
source ~/.zshrc
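
A quick sanity check that the variables took effect:

echo $HADOOP_HOME
which hadoop # should resolve to ${HADOOP_HOME}/bin/hadoop because of the PATH prepend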

Configuration

  1. Hadoop configuration files

| file | description |
| --- | --- |
| etc/hadoop/core-site.xml | settings for the Hadoop daemons that run on the NameNode of the cluster |
| etc/hadoop/hdfs-site.xml | settings for the Hadoop file system (HDFS) |
| etc/hadoop/yarn-site.xml | settings for the ResourceManager |
| etc/hadoop/mapred-site.xml | settings for MapReduce |
ll /opt/homebrew/Cellar/hadoop/3.4.0/libexec/etc/hadoop
  2. hadoop-env.sh
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
export JAVA_HOME="/Library/Java/JavaVirtualMachines/zulu-17.jdk/Contents/Home"
  • JAVA_HOME should be set as above, to a JDK compatible with your Hadoop version.
  • list the installed JDKs to check compatibility:
/usr/libexec/java_home -V
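
If several JDKs are installed, /usr/libexec/java_home can also pin a major version directly; for example, assuming a JDK 17 is installed:

export JAVA_HOME="$(/usr/libexec/java_home -v 17)"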
  3. core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/Users/user/hadoop/hdfs/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>fs.checkpoint.dir</name>
        <value>/Users/user/hadoop/dfs/namesecondary</value>
    </property>
</configuration>
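
The directories referenced above can be created up front to avoid permission surprises (paths taken from the snippet; `user` stands in for your account):

mkdir -p /Users/user/hadoop/hdfs/tmp
mkdir -p /Users/user/hadoop/dfs/namesecondary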
  4. hdfs-site.xml
  • dfs.namenode.name.dir: where the NameNode keeps the namespace and transaction log
  • dfs.datanode.data.dir: where DataNodes store their blocks
  • dfs.namenode.checkpoint.dir: where the SecondaryNameNode keeps checkpoints
  • dfs.replication: number of replicas kept per block
  The first snippet below shows typical multi-node values; the single-node configuration actually used here comes after it.
```xml
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/datanode</value>
</property>
<property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///data/namesecondary</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
```
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/Users/user/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/Users/user/hadoop/dfs/data</value>
    </property>
</configuration>
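
After editing, the effective value can be confirmed with the standard hdfs getconf utility:

hdfs getconf -confKey dfs.replication # should print 1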
  5. yarn-site.xml
  • yarn.resourcemanager.hostname: ResourceManager host (determines the web UI URL)
  • yarn.nodemanager.local-dirs / yarn.nodemanager.log-dirs: NodeManager local and log directories
  The first snippet below shows typical multi-node values; the single-node configuration actually used here comes after it.
```xml
<property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:///data/yarn/local</value>
</property>
<property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>file:///data/yarn/logs</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hmng</value>
</property>
```
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>
  6. mapred-site.xml
  • set the default MapReduce framework to YARN
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/Users/user/hadoop</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/Users/user/hadoop</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=/Users/user/hadoop</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:/opt/homebrew/Cellar/hadoop/3.4.0/libexec/etc/hadoop:/opt/homebrew/Cellar/hadoop/3.4.0/libexec/share/hadoop/common/lib/*:/opt/homebrew/Cellar/hadoop/3.4.0/libexec/share/hadoop/common/*:/opt/homebrew/Cellar/hadoop/3.4.0/libexec/share/hadoop/hdfs:/opt/homebrew/Cellar/hadoop/3.4.0/libexec/share/hadoop/hdfs/lib/*:/opt/homebrew/Cellar/hadoop/3.4.0/libexec/share/hadoop/hdfs/*:/opt/homebrew/Cellar/hadoop/3.4.0/libexec/share/hadoop/mapreduce/*:/opt/homebrew/Cellar/hadoop/3.4.0/libexec/share/hadoop/yarn:/opt/homebrew/Cellar/hadoop/3.4.0/libexec/share/hadoop/yarn/lib/*:/opt/homebrew/Cellar/hadoop/3.4.0/libexec/share/hadoop/yarn/*:/Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/lib/tools.jar</value>
    </property>
</configuration>
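
Instead of hand-maintaining that long value, the hadoop classpath command prints the classpath of the installed release, which can be pasted into mapreduce.application.classpath:

hadoop classpath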

Format

cd /opt/homebrew/Cellar/hadoop/3.4.0/libexec/bin
hdfs namenode -format

Start

cd /opt/homebrew/Cellar/hadoop/3.4.0/libexec/sbin
./start-all.sh

Check

jps
29171 NodeManager
28644 NameNode
29255 Jps
28745 DataNode
28206 ResourceManager
28879 SecondaryNameNode
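
If all six daemons appear, the web UIs should be reachable as well (Hadoop 3.x default ports):

open http://localhost:9870 # NameNode web UI
open http://localhost:8088 # YARN ResourceManager web UI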

Stop

cd /opt/homebrew/Cellar/hadoop/3.4.0/libexec/sbin
./stop-all.sh

Tip

vi /opt/homebrew/Cellar/hadoop/3.4.0/libexec/sbin/start-all.sh
# in vi, search with /sleep and comment out the sleep line to skip the startup delay

vi /opt/homebrew/Cellar/hadoop/3.4.0/libexec/sbin/stop-all.sh
# likewise: search with /sleep and comment out the sleep line

Troubleshooting

Connection refused

  1. Connection refused
    • Symptoms

      ***: ssh: connect to host *** port 22: Connection refused

    • Solution

      System Settings > General > Sharing > Remote Login > On

  2. Permission denied
    • Symptoms

      ***: Warning: Permanently added '***' (ED25519) to the list of known hosts.

      ***: user@***: Permission denied (publickey,password,keyboard-interactive).

    • Solution
        ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/id_rsa.pub
        chmod 600 ~/.ssh/authorized_keys
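
Afterwards, confirm that passwordless login works, since the start/stop scripts depend on it:

ssh localhost # should log in without a password prompt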
      

ResourceManager or NodeManager won’t start

  • The Hadoop version and the JDK version in JAVA_HOME may not be compatible.
    Change JAVA_HOME to a compatible version.

WordCount won’t work

  • https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
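For context, the tutorial linked above builds and runs WordCount roughly as follows (a sketch; the jar name and HDFS paths are placeholders, and the `hadoop com.sun.tools.javac.Main` compile step needs tools.jar, i.e. JDK 8):

hadoop com.sun.tools.javac.Main WordCount.java # compile against Hadoop's classpath
jar cf wc.jar WordCount*.class
hadoop fs -mkdir -p input
hadoop fs -put *.txt input # stage some local text files
hadoop jar wc.jar WordCount input output
hadoop fs -cat output/part-r-00000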
  1. ClassNotFoundException
Exception in thread "main" java.lang.ClassNotFoundException: WordCount
  • Check your jar file
jar xf your.jar # a jar is a zip archive; `tar xvf your.jar` also works with macOS bsdtar
  • Check MANIFEST.MF and directory
# Main-Class: WordCount is needed
cat MANIFEST.MF
# WordCount.class MUST BE in root folder
# Otherwise, it won't work.
ll
  2. Safemode
  • log
Exception in thread "main" org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-yarn/staging/user/.staging/job_1722181699888_0001. Name node is in safe mode.
The reported blocks 7 has reached the threshold 0.9990 of total blocks 7. The minimum number of live datanodes is not required. In
  • check
hdfs dfsadmin -safemode get
2024-07-29 00:51:28,675 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Safe mode is OFF
  • if it stays ON, enter and then leave safe mode:
$ hdfs dfsadmin -safemode enter
Safe mode is ON
$ hdfs dfsadmin -safemode leave
Safe mode is OFF

Logs

ll /opt/homebrew/Cellar/hadoop/3.4.0/libexec/logs
cat /opt/homebrew/Cellar/hadoop/3.4.0/libexec/logs/...
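
Daemon log files follow the default naming hadoop-<user>-<daemon>-<hostname>.log, so the NameNode log can be followed with a glob (a sketch assuming the default log directory):

tail -f /opt/homebrew/Cellar/hadoop/3.4.0/libexec/logs/hadoop-*-namenode-*.log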
