I have faced a problem where I submit a Spark SQL application in cluster mode, and the processes on the worker nodes cannot be stopped. Some of the parameters in my spark-submit command are as follows:
```
--master yarn --deploy-mode cluster --conf spark.executor.memory=7000m --conf spark.driver.memory=2000m
```
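For context, the full submit command looks roughly like the sketch below; the main class and jar path are placeholders, not my real ones:

```shell
# Illustrative sketch only: com.example.MySparkSQLApp and the jar path
# are placeholders, the --conf values are the ones I actually use
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.executor.memory=7000m \
  --conf spark.driver.memory=2000m \
  --class com.example.MySparkSQLApp \
  /path/to/my-app.jar
```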
Here are the methods I tried:
- Firstly, I tried to stop YARN by running `hadoop/yarn-stop.sh` on the master node, but it does not help.
- Then I tried to kill the application with `yarn application -kill <appid>`. However, maybe because I had restarted YARN, I cannot find the appID using `bin/yarn application -list`. I found the appID in HDFS and ran the kill command, but it does not help either.
- Finally, I tried to use `kill -9` to kill the dhclient process on the worker nodes, but it continuously restarts.