


How To Install Apache Spark On Ubuntu

Apache Spark - Installation


Spark is a sub-project of Hadoop. Therefore, it is better to install Spark on a Linux-based system. The following steps show how to install Apache Spark.

Step 1: Verifying Java Installation

Java is a prerequisite for installing Spark. Try the following command to verify the Java version.

$java -version        

If Java is already installed on your system, you will see the following response −

java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b13)
Java HotSpot(TM) Client VM (build 25.0-b02, mixed mode)

In case you do not have Java installed on your system, install Java before proceeding to the next step.
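On Ubuntu, a minimal sketch for installing Java from the package manager is shown below. The package name is an assumption (the tutorial's sample output shows Java 7; on current releases `default-jdk` is the usual choice), so adjust it for your release.

```shell
# Install a JDK on Ubuntu (package name is an assumption; use the one
# available on your release, e.g. openjdk-7-jdk or default-jdk)
sudo apt-get update
sudo apt-get install -y default-jdk

# Confirm the installation
java -version
```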

Step 2: Verifying Scala Installation

You should have the Scala language installed to implement Spark. So let us verify the Scala installation using the following command.

$scala -version        

If Scala is already installed on your system, you will see the following response −

Scala code runner version 2.11.6 -- Copyright 2002-2013, LAMP/EPFL

In case you don't have Scala installed on your system, then proceed to the next step for Scala installation.

Step 3: Downloading Scala

Download the latest version of Scala by visiting the following link Download Scala. For this tutorial, we are using the scala-2.11.6 version. After downloading, you will find the Scala tar file in the download folder.
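If you prefer to fetch the archive from the command line instead of a browser, a sketch using wget is shown below. The mirror URL is an assumption and may have moved; verify it on the Scala downloads page.

```shell
# Download the Scala 2.11.6 archive from the command line
# (URL is an assumption; check scala-lang.org if the link has moved)
wget https://downloads.lightbend.com/scala/2.11.6/scala-2.11.6.tgz
```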

Step 4: Installing Scala

Follow the below given steps for installing Scala.

Extract the Scala tar file

Type the following command for extracting the Scala tar file.

$ tar xvf scala-2.11.6.tgz

Move Scala software files

Use the following commands for moving the Scala software files to their respective directory (/usr/local/scala).

$ su -
Password:
# cd /home/Hadoop/Downloads/
# mv scala-2.11.6 /usr/local/scala
# exit

Set PATH for Scala

Use the following command for setting PATH for Scala.

$ export PATH=$PATH:/usr/local/scala/bin
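Note that `export` on the command line only affects the current shell session. To make the change survive new shells, you can append the same line to `~/.bashrc`, mirroring what this tutorial later does for Spark. This is a sketch; the `/usr/local/scala` location comes from the previous step.

```shell
# Persist the Scala PATH change for future shell sessions
echo 'export PATH=$PATH:/usr/local/scala/bin' >> ~/.bashrc
source ~/.bashrc
```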

Verifying Scala Installation

After installation, it is better to verify it. Use the following command to verify the Scala installation.

$scala -version        

If Scala is installed successfully, you will see the following response −

Scala code runner version 2.11.6 -- Copyright 2002-2013, LAMP/EPFL        

Step 5: Downloading Apache Spark

Download the latest version of Spark by visiting the following link Download Spark. For this tutorial, we are using the spark-1.3.1-bin-hadoop2.6 version. After downloading it, you will find the Spark tar file in the download folder.
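As with Scala, the Spark archive can also be fetched directly with wget. The URL below points at the Apache release archive and is an assumption; check archive.apache.org/dist/spark/ if it has moved.

```shell
# Download Spark 1.3.1 pre-built for Hadoop 2.6 from the Apache archive
# (URL is an assumption; verify it if the link has moved)
wget https://archive.apache.org/dist/spark/spark-1.3.1/spark-1.3.1-bin-hadoop2.6.tgz
```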

Step 6: Installing Spark

Follow the steps given below for installing Spark.

Extracting Spark tar

Use the following command for extracting the Spark tar file.

$ tar xvf spark-1.3.1-bin-hadoop2.6.tgz

Moving Spark software files

Use the following commands for moving the Spark software files to their respective directory (/usr/local/spark).

$ su -
Password:
# cd /home/Hadoop/Downloads/
# mv spark-1.3.1-bin-hadoop2.6 /usr/local/spark
# exit

Setting up the environment for Spark

Add the following line to the ~/.bashrc file. It adds the location where the Spark software files are located to the PATH variable.

export PATH=$PATH:/usr/local/spark/bin

Use the following command for sourcing the ~/.bashrc file.

$ source ~/.bashrc        

Step 7: Verifying the Spark Installation

Write the following command for opening the Spark shell.

$spark-shell        

If Spark is installed successfully, then you will see the following output.

Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/06/04 15:25:22 INFO SecurityManager: Changing view acls to: hadoop
15/06/04 15:25:22 INFO SecurityManager: Changing modify acls to: hadoop
15/06/04 15:25:22 INFO SecurityManager: SecurityManager: authentication disabled;
   ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
15/06/04 15:25:22 INFO HttpServer: Starting HTTP Server
15/06/04 15:25:23 INFO Utils: Successfully started service 'HTTP class server' on port 43292.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_71)
Type in expressions to have them evaluated.
Spark context available as sc

scala>
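Beyond seeing the banner, you can run a quick non-interactive smoke test by piping a one-line job into spark-shell. This is a sketch that assumes spark-shell is on your PATH as set above; it sums the integers 1 to 100 on a local RDD (the mathematical result is 5050).

```shell
# Run a trivial Spark job without entering the interactive shell:
# sum the integers 1..100 on an RDD using the pre-created SparkContext sc
echo 'println(sc.parallelize(1 to 100).sum)' | spark-shell
```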


Source: https://www.tutorialspoint.com/apache_spark/apache_spark_installation.htm

Posted by: beasonunth1951.blogspot.com
