I am trying to connect to S3 provided by MinIO using Spark, but it fails saying the bucket minikube does not exist, even though I have already created that bucket.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("AliceProcessingTwentyDotTwo")
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .master("local[1]")
  .getOrCreate()
val sc = spark.sparkContext

// Point the S3A connector at the local MinIO server.
sc.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
sc.hadoopConfiguration.set("fs.s3a.endpoint", "http://localhost:9000")
sc.hadoopConfiguration.set("fs.s3a.access.key", "minioadmin")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "minioadmin")
sc.hadoopConfiguration.set("fs.s3a.path.style.access", "true") // MinIO requires path-style access
sc.hadoopConfiguration.set("fs.s3a.connection.ssl.enabled", "false")

sc.textFile("s3a://minikube/data.json").collect()
I am using the following guide to connect.
https://github.com/minio/cookbook/blob/master/docs/apache-spark-with-minio.md
These are the dependencies I used in Scala (sbt):

"org.apache.spark" %% "spark-core" % "2.4.0",
"org.apache.spark" %% "spark-sql" % "2.4.0",
"com.amazonaws" % "aws-java-sdk" % "1.11.712",
"org.apache.hadoop" % "hadoop-aws" % "2.7.3"
Try Spark 2.4.3 built without Hadoop and use Hadoop 2.8.2 or 3.1.2. After following the steps in the link below, I was able to connect to MinIO using the CLI.
https://www.jitsejan.com/setting-up-spark-with-minio-as-object-storage.html
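If you go that route, a rough sbt sketch of what the answer suggests might look like the following. The exact artifact versions are assumptions based on the Hadoop releases mentioned above, and hadoop-aws should pull the matching AWS SDK artifacts transitively; verify against the hadoop-aws POM for the release you pick.

// Sketch assuming the "Spark without bundled Hadoop" approach from the answer:
// supply Hadoop 2.8.2 (or 3.1.2) yourself and let hadoop-aws bring its own AWS SDK.
libraryDependencies ++= Seq(
  "org.apache.spark"  %% "spark-core"    % "2.4.3",
  "org.apache.spark"  %% "spark-sql"     % "2.4.3",
  "org.apache.hadoop"  % "hadoop-client" % "2.8.2",
  "org.apache.hadoop"  % "hadoop-aws"    % "2.8.2" // brings its matching aws-java-sdk artifacts transitively
)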