This article's environment continues from "Hadoop, ZooKeeper, HBase cluster configuration": http://blog.chinaunix.net/uid-16361381-id-5769123.html
The Spark build must match your Hadoop version. After downloading, extract Spark to any directory: tar -xzf spark-2.1.0-bin-hadoop2.7.tgz -C /usr/local/
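For reference, the complete fetch-and-extract step might look like this (a minimal sketch; the archive.apache.org mirror URL is an assumption, substitute whichever mirror you actually downloaded from):

# download Spark 2.1.0 prebuilt for Hadoop 2.7, then unpack it under /usr/local
wget https://archive.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz
tar -xzf spark-2.1.0-bin-hadoop2.7.tgz -C /usr/local/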
Next, configure Spark's environment file, spark-env.sh.
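It is created by copying the template that ships with Spark, as the template's own header suggests:

cd /usr/local/spark-2.1.0-bin-hadoop2.7/conf
cp spark-env.sh.template spark-env.sh

My finished spark-env.sh, with my settings appended at the bottom, looks like this: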
#!/usr/bin/env bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append

# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos

# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_INSTANCES, Number of executors to start (Default: 2)
# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)

# Options for the daemons used in the standalone deploy mode
# - SPARK_MASTER_HOST, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_DAEMON_MEMORY, to allocate to the master, worker and history server themselves (default: 1g).
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers

# Generic options for the daemons used in the standalone deploy mode
# - SPARK_CONF_DIR      Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - SPARK_LOG_DIR       Where log files are stored. (Default: ${SPARK_HOME}/logs)
# - SPARK_PID_DIR       Where the pid file is stored. (Default: /tmp)
# - SPARK_IDENT_STRING  A string representing this instance of spark. (Default: $USER)
# - SPARK_NICENESS      The scheduling priority for daemons. (Default: 0)
# - SPARK_NO_DAEMONIZE  Run the proposed command in the foreground. It will not output a PID file.

export JAVA_HOME=/usr/local/lib/jdk1.8.0_144
export SCALA_HOME=/usr/local/scala-2.13.0-M1
export HADOOP_HOME=/usr/local/hadoop

export STANDALONE_SPARK_MASTER_HOST=test
export SPARK_MASTER_IP=$STANDALONE_SPARK_MASTER_HOST

### Let's run everything with JVM runtime, instead of Scala
export SPARK_LAUNCH_WITH_SCALA=0
export SPARK_LIBRARY_PATH=${SPARK_HOME}/lib
export SCALA_LIBRARY_PATH=${SPARK_HOME}/lib
export SPARK_MASTER_WEBUI_PORT=18080
#export SPARK_MASTER_PORT=7077
#export SPARK_WORKER_PORT=7078
#export SPARK_WORKER_WEBUI_PORT=18081
#export SPARK_WORKER_DIR=/var/run/spark/work
#export SPARK_LOG_DIR=/var/log/spark
#export SPARK_PID_DIR=/var/run/spark
Add the worker nodes to the slaves file, copy the whole Spark directory to every node, and then you can start the cluster with /usr/local/spark-2.1.0-bin-hadoop2.7/sbin/start-all.sh (the three steps are sketched below).
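A sketch of those steps, run from the master node, assuming two hypothetical worker hostnames slave1 and slave2 (substitute your own):

# register the workers in conf/slaves, one hostname per line
echo slave1 >> /usr/local/spark-2.1.0-bin-hadoop2.7/conf/slaves
echo slave2 >> /usr/local/spark-2.1.0-bin-hadoop2.7/conf/slaves

# copy the whole Spark directory to every node
for host in slave1 slave2; do
  scp -r /usr/local/spark-2.1.0-bin-hadoop2.7 ${host}:/usr/local/
done

# start the master and all registered workers
/usr/local/spark-2.1.0-bin-hadoop2.7/sbin/start-all.sh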
Each node locates the master according to the line export STANDALONE_SPARK_MASTER_HOST=test configured in spark-env.sh.
Log in to the Spark shell to verify: /usr/local/spark-2.1.0-bin-hadoop2.7/bin/spark-shell
If the Spark shell comes up successfully, Spark has started successfully!
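For a slightly stronger check than just opening the shell, you can submit the bundled SparkPi example to the standalone master (a sketch, assuming the master hostname test from the config above and the default master port 7077; run-example forwards such options to spark-submit):

# compute an approximation of pi on the cluster
/usr/local/spark-2.1.0-bin-hadoop2.7/bin/run-example --master spark://test:7077 SparkPi 10

Afterwards, the master web UI (port 18080, as set by SPARK_MASTER_WEBUI_PORT above) should list the registered workers and the completed application.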
That's all for now; I'll update this post later!