Apache Spark: Setting Executor Instances Does Not Change The Executors
Answer: Increase yarn.nodemanager.resource.memory-mb in yarn-site.xml.

With 12g per node you can only launch the driver (3g) and 2 executors (11g each):

Node1 - driver 3g (+7% overhead)
Node2 - executor1 11g (+7% overhead)
Node3 - executor2 11g (+7% overhead)

You are now requesting a third executor of 11g, and no node has 11g of memory left. For the 7% overhead, refer to spark.yarn.executor.memoryOverhead and spark.yarn.driver.memoryOverhead in https://spark.apache.org/docs/1.2.0/running-on-yarn.html

Note that yarn.nodemanager.resource.memory-mb is the total memory that a single NodeManager can allocate across all containers on one node. In your case, since yarn.nodemanager.resource.memory-mb = 12G, the memory allocated to all YARN containers on any single node cannot exceed 12G in total.

You have requested 11G (--executor-memory 11G) for each Spark executor container. Although 11G is less than 12G, this still won't work. Why? Because you also have to account for spark.yarn.executor.memoryOverhead, which is added on top of the requested executor memory for each container.
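To make the arithmetic concrete, here is a minimal sketch (Python, purely illustrative) of the bookkeeping YARN does under the assumptions above: Spark 1.2's default overhead of max(384 MB, 7% of the requested memory), 12 GB per NodeManager, a 3g driver, and 11g executors. The helper name and the print statements are hypothetical, not part of any Spark or YARN API:

    def container_size_mb(heap_mb, overhead_fraction=0.07, min_overhead_mb=384):
        """Memory YARN must set aside for one Spark container: heap plus memory overhead."""
        return heap_mb + max(min_overhead_mb, int(heap_mb * overhead_fraction))

    node_capacity_mb = 12 * 1024                 # yarn.nodemanager.resource.memory-mb = 12G

    driver_mb   = container_size_mb(3 * 1024)    # 3072 + 384 (minimum)  =  3456 MB
    executor_mb = container_size_mb(11 * 1024)   # 11264 + 788 (7%)      = 12052 MB

    # Free memory left on each node after the driver and first two executors are placed:
    free_per_node = [
        node_capacity_mb - driver_mb,    # Node1: 8832 MB left
        node_capacity_mb - executor_mb,  # Node2:  236 MB left
        node_capacity_mb - executor_mb,  # Node3:  236 MB left
    ]

    # The third 11g executor also needs ~12052 MB, but no node has that much left,
    # so YARN never grants the container and the application stays at 2 executors.
    print(free_per_node)                                       # [8832, 236, 236]
    print(any(free >= executor_mb for free in free_per_node))  # False

Raising yarn.nodemanager.resource.memory-mb (or lowering --executor-memory) changes the outcome of that last check, which is why the fix above gives you the third executor.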