When IntelliJ runs your Scala program, it looks for the HADOOP_HOME directory in your computer's environment variables. If you don't have Hadoop installed on your local machine, IntelliJ will throw an error.
You can bypass this behavior with the following steps.
First, download winutils.exe from https://github.com/steveloughran/winutils/blob/master/hadoop-2.7.1/bin/winutils.exe
Place this winutils.exe file in "C:\intellij.winutils\bin".
Then, add the following line at the beginning of your program's entry point:
System.setProperty("hadoop.home.dir","C:\\intellij.winutils")
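As a minimal sketch of where that line goes, here is a hypothetical entry object (the name SparkApp and the commented-out SparkSession setup are illustrative, not from the original post). The key point is that the property must be set before any Spark or Hadoop classes are initialized:

```scala
object SparkApp {
  def main(args: Array[String]): Unit = {
    // Tell Hadoop's native-library loader where winutils.exe lives,
    // before any Spark/Hadoop code runs.
    System.setProperty("hadoop.home.dir", "C:\\intellij.winutils")

    // A SparkSession would typically be created after this point, e.g.:
    // val spark = org.apache.spark.sql.SparkSession.builder()
    //   .master("local[*]")
    //   .appName("demo")
    //   .getOrCreate()

    println(System.getProperty("hadoop.home.dir"))
  }
}
```

Note the doubled backslashes: in a Scala string literal, `\\` escapes to a single backslash in the actual path.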
That's all. You are now ready to run Hadoop and Spark programs on your local computer.