Tuesday, July 14, 2020

Load delimited data into a Spark DataFrame using Scala

In this article, we will load pipe-delimited data from a text file into a Spark DataFrame using Scala.

The pipe-delimited file (C:\data\delimited.txt) is shown below for reference.

Id|Name|Age
1|Name-1|53
2|Name-2|42
3|Name-3|37

The SBT library dependencies are shown below for reference.

scalaVersion := "2.11.12"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.0"

The Scala program is provided below.

import org.apache.spark.sql.SparkSession

object DelimitedReader extends App {

  // Create a local SparkSession for this example
  val spark = SparkSession.builder()
    .master("local")
    .appName("DelimitedFileReader")
    .getOrCreate()

  // Read the pipe-delimited file, treating the first line as the header
  val df = spark.read
    .format("csv")
    .option("header", "true")
    .option("delimiter", "|")
    .load("C:\\data\\delimited.txt")

  df.show()

  spark.stop()
}

Here is the output after running the program.

+---+------+---+
| Id|  Name|Age|
+---+------+---+
|  1|Name-1| 53|
|  2|Name-2| 42|
|  3|Name-3| 37|
+---+------+---+
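
A quick note: by default, the CSV reader loads every column as a string. Below is a minimal sketch, reusing the same SparkSession and sample file from above, that passes an explicit schema so that Id and Age come back as integers. Setting the inferSchema option to "true" achieves a similar result, at the cost of an extra pass over the data.

import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Declare the column names and types up front instead of inferring them
val schema = StructType(Seq(
  StructField("Id", IntegerType, nullable = false),
  StructField("Name", StringType, nullable = true),
  StructField("Age", IntegerType, nullable = true)
))

val typedDf = spark.read
  .format("csv")
  .option("header", "true")
  .option("delimiter", "|")
  .schema(schema) // skip inference; use the declared types
  .load("C:\\data\\delimited.txt")

typedDf.printSchema()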

Thanks. That is all for now!
