Cannot stop spark application process on worker node in YARN mode

I have run into a problem: I submitted a Spark SQL application in cluster mode, and the processes it starts on the worker nodes cannot be stopped.

Some of the parameters in my spark-submit command are as follows: --master yarn --deploy-mode cluster --conf spark.executor.memory=7000m --conf spark.driver.memory=2000m
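For reference, a minimal sketch of what the full submission might look like; the jar name my-app.jar and main class com.example.MyApp are placeholder assumptions, since the question does not name them:

    # Placeholder jar and main class; only the flags above come from the question.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --conf spark.executor.memory=7000m \
      --conf spark.driver.memory=2000m \
      --class com.example.MyApp \
      my-app.jar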

Here are the methods I have tried so far:

  • First, I tried to stop YARN by running stop-yarn.sh on the master node, but it didn't help.
  • Then I tried to kill the application with yarn application -kill <appid>; however, perhaps because I had restarted YARN, I could not find the application ID with bin/yarn application -list. I found the appID in HDFS and ran the kill command, but that didn't help either.
  • Finally, I tried kill -9 on the dhclient process on the worker nodes, but it continuously restarts. (A consolidated sketch of this cleanup sequence follows the list.)
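For anyone hitting the same symptom, a consolidated sketch of the usual YARN-side cleanup sequence. The <application-id> and <pid> values are placeholders; the commands themselves (yarn application -list/-kill, ps, jps, kill) are standard Hadoop/Linux tooling:

    # 1. List applications in every state; after a ResourceManager restart
    #    the app may not appear under the default RUNNING filter.
    yarn application -list -appStates ALL

    # 2. Ask YARN to kill the application, so the ResourceManager stops
    #    scheduling new attempts (kill -9 on a container alone will not).
    yarn application -kill <application-id>

    # 3. If executor JVMs still linger on a worker node, find them by the
    #    application ID embedded in each container's command line, or list
    #    JVMs directly (Spark executors run as CoarseGrainedExecutorBackend).
    ps -ef | grep <application-id>
    jps -l

    # 4. Only then kill any leftover container process by PID.
    kill -9 <pid>

One possible explanation for the restarts: if work-preserving recovery is enabled (yarn.resourcemanager.recovery.enabled=true), a restarted ResourceManager recovers and resubmits running applications, so stopping YARN or killing individual container processes does not terminate the application itself.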
Answers

No answers yet



