Thursday, July 30, 2020

Azure Databricks - Explode Column in Spark Dataframe

In this post, we will see how to explode a column containing an array into multiple rows in a Spark DataFrame, using Scala.
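The source file itself isn't reproduced in this post, so treat the record shape below as an assumption that matches the column names used later. To follow along without the file, an equivalent DataFrame can be built inline:

import spark.implicits._

// Hypothetical records mirroring the fields selected below; all values are made up
val sample = Seq(
  """{"source_id":"device-01","data":{"sensor1":{"c02_level":[1250,1310,1405]},"sensor2":{"c02_level":[980,1020]}}}""",
  """{"source_id":"device-02","data":{"sensor1":{"c02_level":[900,955]},"sensor2":{"c02_level":[870]}}}"""
).toDS
val sampleDf = spark.read.json(sample)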

First, load the file from the Databricks File System (DBFS) into a DataFrame.

import org.apache.spark.sql.functions._
// "multiline" lets Spark parse JSON records that span multiple lines in the file
val df = spark.read.option("multiline", "true").json("dbfs:/FileStore/files/json/json_file_1.txt")
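Before selecting anything, it is worth checking the inferred schema. With the record shape assumed above, the output looks roughly like this:

df.printSchema()
// root
//  |-- data: struct (nullable = true)
//  |    |-- sensor1: struct (nullable = true)
//  |    |    |-- c02_level: array (nullable = true)
//  |    |    |    |-- element: long (containsNull = true)
//  |    |-- sensor2: struct (same shape as sensor1)
//  |-- source_id: string (nullable = true)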

Next, issue the following commands one by one.

display(df.select("source_id", "data"))

Here data comes back as a single struct column, so the whole nested payload for each source_id sits in one cell.

display(df.select("source_id", "data.sensor1", "data.sensor2"))

Dot notation reaches one level into the struct: sensor1 and sensor2 now appear as separate struct columns.

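Both sensors carry a c02_level field, so when selecting the two arrays side by side it helps to alias them to keep the column names distinct (a small sketch against the assumed schema; the aliases are hypothetical):

display(df.select(
  col("source_id"),
  col("data.sensor1.c02_level").as("sensor1_c02_level"),
  col("data.sensor2.c02_level").as("sensor2_c02_level")))
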
display(df.select("source_id", "data.sensor1.c02_level"))

Going one level deeper pulls out just the c02_level array from sensor1. Note that the selected column keeps only the leaf name, c02_level, which is why the next command can refer to it directly.

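To confirm the column really is an array, and to see how many readings each row holds, size() from the already imported functions works well (the n_readings alias is hypothetical):

display(df.select(col("source_id"), size(col("data.sensor1.c02_level")).as("n_readings")))
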
display(df.select("source_id", "data.sensor1.c02_level").withColumn("c02_level", explode(col("c02_level"))))

explode produces one output row per array element, repeating source_id on each row, so the nested readings end up flattened into plain tabular form.

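The same result can be written in one step by applying explode inside the select and aliasing the output column:

display(df.select(
  col("source_id"),
  explode(col("data.sensor1.c02_level")).as("c02_level")))

Note that explode drops rows whose array is empty or null; explode_outer keeps such rows with a null c02_level.
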
Cheers!
