Setting up a single-node Spark environment

First, configure passwordless SSH to localhost.

Make sure SSH is installed:

$ sudo apt-get update
$ sudo apt-get install openssh-server
$ sudo /etc/init.d/ssh start

Generate a key pair and add the public key to the authorized keys:

$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys

If you have already generated a key pair, only the last two commands are needed.
Test that ssh localhost works:

$ ssh localhost
$ exit

Pay attention to version compatibility between the components: Spark and Hadoop must be used in matching versions. The pre-built package used below, spark-1.6.0-bin-hadoop2.6, is built against Hadoop 2.6.x, which is why Hadoop 2.6.0 is installed in step (3).
(1) Install the JDK

# Download the JDK 1.8 rpm package
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u112-b15/jdk-8u112-linux-x64.rpm 
rpm -ivh jdk-8u112-linux-x64.rpm 

# Add JAVA_HOME
vim /etc/profile

# Add the following line:
# Java home
export JAVA_HOME=/usr/java/jdk1.8.0_112/

# Reload the configuration:
source /etc/profile  # a reboot also works
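
To confirm the JDK is picked up from the updated environment, check the version (the exact build number may differ):

$ java -version   # should report java version "1.8.0_112"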

(2) Install Scala

# Download the Scala package:
wget https://downloads.lightbend.com/scala/2.12.2/scala-2.12.2.rpm

# Install the rpm package:
rpm -ivh scala-2.12.2.rpm

# Add SCALA_HOME
vim /etc/profile

# Add the following lines:
# Scala home
export SCALA_HOME=/usr/share/scala

# Reload the configuration
source /etc/profile
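
As a quick check, the version banner should now match the installed package:

$ scala -version   # should report Scala 2.12.2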

(3) Install Hadoop

Download and extract Hadoop 2.6.0 to any directory:

$ cd /home/tom
$ wget http://apache.claz.org/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
$ tar -xzvf hadoop-2.6.0.tar.gz

Edit the /etc/profile file and append the Hadoop environment variables at the end:

export HADOOP_HOME=/home/tom/hadoop-2.6.0
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

Edit the $HADOOP_HOME/etc/hadoop/hadoop-env.sh file:

$ vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh

Append at the end:

export JAVA_HOME=/usr/java/jdk1.8.0_112/

Modify the configuration files:

$ cd $HADOOP_HOME/etc/hadoop

Edit core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

Edit hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/tom/hadoopdata/hdfs/namenode</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/tom/hadoopdata/hdfs/datanode</value>
  </property>
</configuration>

The first property is the HDFS replication factor; 1 is enough for a single node. The other two set the local directories where the NameNode and the DataNode store their data.
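
The storage directories are not necessarily created automatically, so it is safest to create both up front, matching the paths configured above:

$ mkdir -p /home/tom/hadoopdata/hdfs/namenode
$ mkdir -p /home/tom/hadoopdata/hdfs/datanode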

Edit mapred-site.xml (if only mapred-site.xml.template exists, copy it to mapred-site.xml first):

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Edit yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

Initialize Hadoop by formatting the NameNode:

$ hdfs namenode -format

Start Hadoop:

$ $HADOOP_HOME/sbin/start-all.sh

Stop Hadoop:

$ $HADOOP_HOME/sbin/stop-all.sh

Check the web UIs in a browser, starting with http://localhost:8088:

port 8088: cluster and all applications

port 50070: Hadoop NameNode

port 50090: Secondary NameNode

port 50075: DataNode

Once Hadoop is running, the jps command shows the running daemons; the output should look like this (process IDs will differ):

10057 Jps
9611 ResourceManager
9451 SecondaryNameNode
9260 DataNode
9102 NameNode
9743 NodeManager
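
As an optional smoke test (the target directory /user/tom is just an example), verify that HDFS accepts writes and reads:

$ hdfs dfs -mkdir -p /user/tom
$ hdfs dfs -put $HADOOP_HOME/etc/hadoop/core-site.xml /user/tom/
$ hdfs dfs -ls /user/tom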
(4) Install Spark

Extract the Spark package to any directory:

$ cd /home/tom
$ tar -xzvf spark-1.6.0-bin-hadoop2.6.tgz
$ mv spark-1.6.0-bin-hadoop2.6 spark-1.6.0
$ sudo vim /etc/profile
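
If spark-1.6.0-bin-hadoop2.6.tgz is not already on the machine, it can be fetched first; the URL below assumes the standard Apache archive layout:

$ wget https://archive.apache.org/dist/spark/spark-1.6.0/spark-1.6.0-bin-hadoop2.6.tgz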

Add the environment variables at the end of /etc/profile:

export SPARK_HOME=/home/tom/spark-1.6.0
export PATH=$SPARK_HOME/bin:$PATH

Save and reload /etc/profile:

$ source /etc/profile

In the conf directory, copy spark-env.sh.template to spark-env.sh:

$ cd $SPARK_HOME/conf
$ cp spark-env.sh.template spark-env.sh
$ vim spark-env.sh

Add the following to spark-env.sh:

export JAVA_HOME=/usr/java/jdk1.8.0_112/
export SCALA_HOME=/usr/share/scala
export SPARK_MASTER_IP=localhost
export SPARK_WORKER_MEMORY=4G

Start Spark:

$ $SPARK_HOME/sbin/start-all.sh

Stop Spark:

$ $SPARK_HOME/sbin/stop-all.sh

Test that Spark is installed correctly:

$ $SPARK_HOME/bin/run-example SparkPi

The output should contain a line like:

Pi is roughly 3.14716
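
For a slightly more end-to-end check, a minimal word count can be run in the interactive shell (reading /etc/profile here is just an example; any local text file works):

$ $SPARK_HOME/bin/spark-shell
scala> sc.textFile("file:///etc/profile").flatMap(_.split("\\s+")).map((_, 1)).reduceByKey(_ + _).take(5)
scala> :quit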

Check the Spark master web UI by opening http://localhost:8080 in a browser.
