TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
#72
Open
nguacon90 opened this issue on Dec 9, 2021 · 0 comments
I'm trying a simple application with Scala Spark, as follows:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setMaster("spark://192.168.20.108:7077")
  .setAppName("BI-SERVICE")
val sc = new SparkContext(conf) // context built from the conf above
val numbersRdd = sc.parallelize((1 to 10000).toList)
numbersRdd.saveAsTextFile("hdfs://192.168.20.108:8020/numbers-as-text02")
However, the Spark job keeps running, nothing is written to HDFS, and the log shows this warning:
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Can anyone help me resolve this?
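From what I understand, this warning usually means either that no workers are registered with the master, or that the application is asking for more cores or memory per executor than any registered worker can offer. A minimal sketch of explicitly capping the request, using standard Spark config keys; the values here are placeholders and would need to match what the workers actually advertise in the master UI:

import org.apache.spark.{SparkConf, SparkContext}

// Placeholder caps: set these at or below what a single worker
// advertises in the master UI (by default http://192.168.20.108:8080).
val conf = new SparkConf()
  .setMaster("spark://192.168.20.108:7077")
  .setAppName("BI-SERVICE")
  .set("spark.cores.max", "2")        // total cores the app may claim
  .set("spark.executor.memory", "1g") // memory per executor
val sc = new SparkContext(conf)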
nguacon90 changed the title from "Cannot write to hdfs" to "TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources" on Dec 11, 2021