Hadoop+Hive+HBase+Spark Cluster Deployment (Part 3)


2. Spark

spark-env.sh

export SCALA_HOME=/opt/soft/scala-2.12.6               # note: the prebuilt Spark 2.3.0 bundles its own Scala 2.11.8
export JAVA_HOME=/usr/java/jdk1.8.0_162
export HADOOP_HOME=/opt/soft/hadoop-2.8.3
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop         # lets Spark find the HDFS/YARN configuration
export SPARK_HOME=/opt/soft/spark-2.3.0-bin-hadoop2.7
export SPARK_MASTER_IP=node                            # master hostname (deprecated in Spark 2.x in favor of SPARK_MASTER_HOST)
export SPARK_EXECUTOR_MEMORY=4G                        # memory per executor

slaves

node1
node2
node3
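Each worker listed in `slaves` needs the same configuration. A typical way to push it out, assuming passwordless SSH to the workers and the same install path everywhere (both assumptions, not stated in the original):

```shell
# Sync the Spark conf/ directory to every worker listed in slaves
for host in node1 node2 node3; do
  scp -r /opt/soft/spark-2.3.0-bin-hadoop2.7/conf/ $host:/opt/soft/spark-2.3.0-bin-hadoop2.7/
done
```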

Start / stop commands
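Spark's standard scripts under `$SPARK_HOME/sbin` handle this. A sketch assuming the layout above (run on the master; workers are read from `conf/slaves`):

```shell
# Start the master and all workers in one shot
$SPARK_HOME/sbin/start-all.sh

# Or start components individually
$SPARK_HOME/sbin/start-master.sh    # master only
$SPARK_HOME/sbin/start-slaves.sh    # all workers listed in conf/slaves

# Stop everything
$SPARK_HOME/sbin/stop-all.sh
```

After starting, `jps` should show a `Master` process on node and a `Worker` process on node1–node3.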

Spark web UI port
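For a standalone cluster the default ports are: Master UI on 8080, each Worker UI on 8081, and each running application on 4040 (the spark-shell transcript below shows the 4040 UI). If 8080 is already taken, the Master UI port can be changed in `spark-env.sh`:

```shell
# Default web UIs for this cluster:
#   Master       http://node:8080
#   Worker       http://node1:8081  (and node2, node3)
#   Application  http://node:4040   (e.g. a running spark-shell)
# To move the Master UI off 8080:
export SPARK_MASTER_WEBUI_PORT=8090
```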

shell

[root@node ~]# spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://node:4040
Spark context available as 'sc' (master = local[*], app id = local-1525334225269).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_162)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 8*8
res0: Int = 64
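A slightly more substantial sanity check exercises the cluster rather than just the local Scala interpreter, using the `sc` context the shell already provides:

```scala
scala> sc.parallelize(1 to 10).reduce(_ + _)
res1: Int = 55
```

The job and its stages should also appear on the application web UI at http://node:4040 while the shell is running.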