hadoop - spark-submit, spark-shell, or pyspark not taking default environment values
I have installed Hadoop and Spark on Google Cloud using Click to Deploy, and I am trying to run spark-shell and spark-submit to test the installation. When I try
spark-shell --master yarn-client
I get this error:

Caused by: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=6383, maxMemory=5999
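For what it's worth, I believe the maxMemory in that message is YARN's yarn.scheduler.maximum-allocation-mb setting from yarn-site.xml; a sketch of what I assume the deployment wrote (the 5999 just mirrors the error, I have not confirmed the actual file on the cluster):

<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>5999</value>
</property>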
The problem is that when I don't provide --executor-memory 1g (or some other value), it doesn't pick up the default 1g value, and I don't know why YARN then requests more than the maximum memory for the executor. Here are the commands with and without the arguments:
pyspark --master yarn-client --executor-memory 1g --num-executors 1 --verbose

Parsed arguments:
  master            yarn-client
  deployMode        null
  executorMemory    1g
  numExecutors      1

pyspark --master yarn --verbose

Parsed arguments:
  master            yarn
  deployMode        null
  executorMemory    null
  numExecutors      null
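As far as I understand from the docs, when --executor-memory is missing Spark should fall back to spark.executor.memory, then to the SPARK_EXECUTOR_MEMORY environment variable, and only then to the built-in 1g default, so I would have expected a line like this in conf/spark-env.sh to act as a default (untested on this deployment):

export SPARK_EXECUTOR_MEMORY=1g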
Is this a Spark bug or a Google Cloud configuration issue? Is there any way I can set the default values?
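For reference, my understanding is that cluster-wide defaults can go in conf/spark-defaults.conf; the property names below are from the Spark configuration docs, and the values are only examples since I have not verified what the Click to Deploy image ships:

spark.master             yarn-client
spark.executor.memory    1g
spark.executor.instances 1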