Submitting a Spark job from IDEA in yarn-client mode fails! (cluster built with Ambari)

I've read several articles online about this kind of problem, but while debugging I ran into the following issues:

  • Even after setting the relevant properties, the client still uploads the assembly jar:

conf.set("spark.yarn.preserve.staging.files", "false")
// sc.addJar("/LINYUN/spark-sql-1.0.0.jar")
conf.set("spark.yarn.jar", "/hdp/apps/2.5.0.0-1245/spark/spark-hdp-assembly.jar")
// conf.set("HDP_VERSION", "2.5.0.0-1245")
conf.set("spark.yarn.dist.files", "yarn-site.xml")
// conf.set("yarn.resourcemanager.hostname", "10.50.1.152")
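One thing worth noting: the working spark-shell log further down prints "Using the spark assembly jar on HDFS ... hdfs://mycluster/hdp/apps/2.5.0.0-1245/spark/spark-hdp-assembly.jar", while the value above has no scheme, so the client resolves it as a local file and uploads it. A minimal sketch (assuming the same cluster name and HDP version as in that log; adjust for your environment):

```scala
import org.apache.spark.SparkConf

// Sketch: point spark.yarn.jar at the assembly that already lives on HDFS.
// With an explicit hdfs:// scheme, the YARN client should see that source and
// destination file systems are the same and skip the upload step, as the
// spark-shell log below shows ("Source and destination file systems are the
// same. Not copying ...").
val conf = new SparkConf()
  .setAppName("spark-sql")
  .setMaster("yarn-client")
  .set("spark.yarn.jar",
    "hdfs://mycluster/hdp/apps/2.5.0.0-1245/spark/spark-hdp-assembly.jar")
```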
  • Relevant log:

16/12/07 08:48:29 INFO Client: Setting up the launch environment for our AM container
16/12/07 08:48:29 INFO Client: Preparing resources for our AM container
16/12/07 08:48:30 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because UNIX Domain sockets are not available on Windows.
16/12/07 08:48:30 INFO Client: Uploading resource file:/E:/Development/code/spark-sql/src/lib/spark-hdp-assembly.jar -> hdfs://mycluster/user/hdfs/.sparkStaging/application_1480651479504_0028/spark-hdp-assembly.jar
16/12/07 08:48:47 INFO Client: Uploading resource file:/C:/Users/JiKai Lin/AppData/Local/Temp/spark-493d656d-b638-40cc-a09c-216036b202c8/__spark_conf__3311390895407233878.zip -> hdfs://mycluster/user/hdfs/.sparkStaging/application_1480651479504_0028/__spark_conf__3311390895407233878.zip
16/12/07 08:48:47 WARN Client: 
hdp.version is not found,
Please set HDP_VERSION=xxx in spark-env.sh,
or set -Dhdp.version=xxx in spark.{driver|yarn.am}.extraJavaOptions
or set SPARK_JAVA_OPTS="-Dhdp.verion=xxx" in spark-env.sh
If you're running Spark under HDP.
        
16/12/07 08:48:47 INFO SecurityManager: Changing view acls to: JiKai Lin,hdfs
16/12/07 08:48:47 INFO SecurityManager: Changing modify acls to: JiKai Lin,hdfs
16/12/07 08:48:47 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(JiKai Lin, hdfs); users with modify permissions: Set(JiKai Lin, hdfs)
  • After connecting to the ResourceManager, why does PROXY_HOST change to the dn4 node? The application stays in ACCEPTED state and never moves to RUNNING. In the ResourceManager I can see the log below, reporting connection failures: the AM clearly can't find the RM anymore, which is really strange.

16/12/07 08:48:50 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:51 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:52 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
16/12/07 08:48:52 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> dn4, PROXY_URI_BASES -> http://dn4:8088/proxy/application_1480651479504_0028), /proxy/application_1480651479504_0028
16/12/07 08:48:52 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/12/07 08:48:53 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
2016-12-07 08:48:50,500 INFO  [main] yarn.ApplicationMaster (SignalLogger.scala:register(47)) - Registered signal handlers for [TERM, HUP, INT]
2016-12-07 08:48:51,134 INFO  [main] yarn.ApplicationMaster (Logging.scala:logInfo(58)) - ApplicationAttemptId: appattempt_1480651479504_0028_000001
2016-12-07 08:48:51,449 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(58)) - Changing view acls to: yarn,hdfs
2016-12-07 08:48:51,449 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(58)) - Changing modify acls to: yarn,hdfs
2016-12-07 08:48:51,450 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(58)) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hdfs); users with modify permissions: Set(yarn, hdfs)
2016-12-07 08:48:51,825 INFO  [main] yarn.ApplicationMaster (Logging.scala:logInfo(58)) - Waiting for Spark driver to be reachable.
2016-12-07 08:48:51,830 INFO  [main] yarn.ApplicationMaster (Logging.scala:logInfo(58)) - Driver now available: myIp:55275
2016-12-07 08:48:52,072 INFO  [dispatcher-event-loop-1] yarn.ApplicationMaster$AMEndpoint (Logging.scala:logInfo(58)) - Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> dn4, PROXY_URI_BASES -> http://dn4:8088/proxy/application_1480651479504_0028),/proxy/application_1480651479504_0028)
2016-12-07 08:48:52,089 INFO  [main] client.RMProxy (RMProxy.java:newProxyInstance(125)) - Connecting to ResourceManager at /0.0.0.0:8030
2016-12-07 08:48:52,132 INFO  [main] yarn.YarnRMClient (Logging.scala:logInfo(58)) - Registering the ApplicationMaster
2016-12-07 08:48:53,180 INFO  [main] ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-07 08:48:54,181 INFO  [main] ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-07 08:48:55,183 INFO  [main] ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-07 08:48:56,184 INFO  [main] ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-07 08:48:57,186 INFO  [main] ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-07 08:48:58,187 INFO  [main] ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-07 08:48:59,190 INFO  [main] ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-07 08:49:00,192 INFO  [main] ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-07 08:49:01,193 INFO  [main] ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-07 08:49:02,194 INFO  [main] ipc.Client (Client.java:handleConnectionFailure(904)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-07 08:49:02,197 WARN  [main] ipc.Client (Client.java:handleConnectionFailure(886)) - Failed to connect to server: 0.0.0.0/0.0.0.0:8030: retries get failed due to exceeded maximum allowed retries number: 10
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206
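The AM log above falls back to `0.0.0.0:8030`, which is the Hadoop default for `yarn.resourcemanager.scheduler.address`; that suggests the RM scheduler address never reached the AM container on dn4. Since properties prefixed with `spark.hadoop.` are copied into the Hadoop `Configuration` that the client ships to the AM, one possibility worth trying (a sketch, assuming `rm.hadoop` resolves to 10.50.1.152 and the default HDP ports shown in the client log) is:

```scala
import org.apache.spark.SparkConf

// Sketch: push the RM addresses into the Hadoop Configuration that the
// client serializes into __spark_conf__.zip for the AM. Hostname and ports
// are assumptions based on the client-side log (rm.hadoop:8050) and the
// YARN default scheduler port 8030; check yarn-site.xml on the cluster.
val conf = new SparkConf()
  .set("spark.hadoop.yarn.resourcemanager.hostname", "rm.hadoop")
  .set("spark.hadoop.yarn.resourcemanager.address", "rm.hadoop:8050")
  .set("spark.hadoop.yarn.resourcemanager.scheduler.address", "rm.hadoop:8030")
```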
  • Full log from IDEA:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/12/07 08:48:22 INFO SparkContext: Running Spark version 1.6.2
16/12/07 08:48:24 INFO SecurityManager: Changing view acls to: JiKai Lin,hdfs
16/12/07 08:48:24 INFO SecurityManager: Changing modify acls to: JiKai Lin,hdfs
16/12/07 08:48:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(JiKai Lin, hdfs); users with modify permissions: Set(JiKai Lin, hdfs)
16/12/07 08:48:26 INFO Utils: Successfully started service 'sparkDriver' on port 55275.
16/12/07 08:48:26 INFO Slf4jLogger: Slf4jLogger started
16/12/07 08:48:26 INFO Remoting: Starting remoting
16/12/07 08:48:27 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 55288.
16/12/07 08:48:27 INFO SparkEnv: Registering MapOutputTracker
16/12/07 08:48:27 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.10.16.61:55288]
16/12/07 08:48:27 INFO SparkEnv: Registering BlockManagerMaster
16/12/07 08:48:27 INFO DiskBlockManager: Created local directory at C:\Users\JiKai Lin\AppData\Local\Temp\blockmgr-d07d132c-3726-498a-8910-549fbffda2ee
16/12/07 08:48:27 INFO MemoryStore: MemoryStore started with capacity 1098.0 MB
16/12/07 08:48:27 INFO SparkEnv: Registering OutputCommitCoordinator
16/12/07 08:48:28 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/12/07 08:48:28 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.10.16.61:4040
16/12/07 08:48:29 INFO RMProxy: Connecting to ResourceManager at rm.hadoop/10.50.1.152:8050
16/12/07 08:48:29 INFO Client: Requesting a new application from cluster with 5 NodeManagers
16/12/07 08:48:29 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (5120 MB per container)
16/12/07 08:48:29 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/12/07 08:48:29 INFO Client: Setting up container launch context for our AM
16/12/07 08:48:29 INFO Client: Setting up the launch environment for our AM container
16/12/07 08:48:29 INFO Client: Preparing resources for our AM container
16/12/07 08:48:30 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because UNIX Domain sockets are not available on Windows.
16/12/07 08:48:30 INFO Client: Uploading resource file:/E:/Development/code/spark-sql/src/lib/spark-hdp-assembly.jar -> hdfs://mycluster/user/hdfs/.sparkStaging/application_1480651479504_0028/spark-hdp-assembly.jar
16/12/07 08:48:47 INFO Client: Uploading resource file:/C:/Users/JiKai Lin/AppData/Local/Temp/spark-493d656d-b638-40cc-a09c-216036b202c8/__spark_conf__3311390895407233878.zip -> hdfs://mycluster/user/hdfs/.sparkStaging/application_1480651479504_0028/__spark_conf__3311390895407233878.zip
16/12/07 08:48:47 WARN Client: 
hdp.version is not found,
Please set HDP_VERSION=xxx in spark-env.sh,
or set -Dhdp.version=xxx in spark.{driver|yarn.am}.extraJavaOptions
or set SPARK_JAVA_OPTS="-Dhdp.verion=xxx" in spark-env.sh
If you're running Spark under HDP.
        
16/12/07 08:48:47 INFO SecurityManager: Changing view acls to: JiKai Lin,hdfs
16/12/07 08:48:47 INFO SecurityManager: Changing modify acls to: JiKai Lin,hdfs
16/12/07 08:48:47 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(JiKai Lin, hdfs); users with modify permissions: Set(JiKai Lin, hdfs)
16/12/07 08:48:47 INFO Client: Submitting application 28 to ResourceManager
16/12/07 08:48:47 INFO YarnClientImpl: Submitted application application_1480651479504_0028
16/12/07 08:48:47 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1480651479504_0028 and attemptId None
16/12/07 08:48:48 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:48 INFO Client: 
     client token: N/A
     diagnostics: AM container is launched, waiting for AM container to Register with RM
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1481071727608
     final status: UNDEFINED
     tracking URL: http://rm.hadoop:8088/proxy/application_1480651479504_0028/
     user: hdfs
16/12/07 08:48:49 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:50 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:51 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:52 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
16/12/07 08:48:52 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> dn4, PROXY_URI_BASES -> http://dn4:8088/proxy/application_1480651479504_0028), /proxy/application_1480651479504_0028
16/12/07 08:48:52 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/12/07 08:48:53 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:54 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:55 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:56 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:57 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:58 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:48:59 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:00 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:01 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:02 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:03 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:04 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:05 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:06 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:07 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:08 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:09 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:10 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:11 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:12 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:13 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:14 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:15 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:16 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:17 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:18 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:19 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:20 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:21 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:22 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:23 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:24 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:25 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:26 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
16/12/07 08:49:27 INFO Client: Application report for application_1480651479504_0028 (state: ACCEPTED)
……
  • Full log of spark-shell --master yarn-client on Linux, which runs normally:

[spark@dn5 ~]$ spark-shell --master yarn-client
16/12/07 08:53:08 INFO SecurityManager: Changing view acls to: spark
16/12/07 08:53:08 INFO SecurityManager: Changing modify acls to: spark
16/12/07 08:53:08 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/12/07 08:53:09 INFO HttpServer: Starting HTTP Server
16/12/07 08:53:09 INFO Server: jetty-8.y.z-SNAPSHOT
16/12/07 08:53:09 INFO AbstractConnector: Started SocketConnector@0.0.0.0:41608
16/12/07 08:53:09 INFO Utils: Successfully started service 'HTTP class server' on port 41608.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
Type in expressions to have them evaluated.
Type :help for more information.
16/12/07 08:53:12 INFO SparkContext: Running Spark version 1.6.2
16/12/07 08:53:12 INFO SecurityManager: Changing view acls to: spark
16/12/07 08:53:12 INFO SecurityManager: Changing modify acls to: spark
16/12/07 08:53:12 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/12/07 08:53:12 INFO Utils: Successfully started service 'sparkDriver' on port 35156.
16/12/07 08:53:12 INFO Slf4jLogger: Slf4jLogger started
16/12/07 08:53:13 INFO Remoting: Starting remoting
16/12/07 08:53:13 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.50.1.158:54202]
16/12/07 08:53:13 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 54202.
16/12/07 08:53:13 INFO SparkEnv: Registering MapOutputTracker
16/12/07 08:53:13 INFO SparkEnv: Registering BlockManagerMaster
16/12/07 08:53:13 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-8abd39d4-f3e0-43f2-ab0d-d081ebb26837
16/12/07 08:53:13 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/12/07 08:53:13 INFO SparkEnv: Registering OutputCommitCoordinator
16/12/07 08:53:13 INFO Server: jetty-8.y.z-SNAPSHOT
16/12/07 08:53:13 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/12/07 08:53:13 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/12/07 08:53:13 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.50.1.158:4040
spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
16/12/07 08:53:14 INFO TimelineClientImpl: Timeline service address: http://rm.hadoop:8188/ws/v1/timeline/
16/12/07 08:53:14 INFO RMProxy: Connecting to ResourceManager at rm.hadoop/10.50.1.152:8050
16/12/07 08:53:14 INFO AHSProxy: Connecting to Application History server at rm.hadoop/10.50.1.152:10200
16/12/07 08:53:15 INFO Client: Requesting a new application from cluster with 5 NodeManagers
16/12/07 08:53:15 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (5120 MB per container)
16/12/07 08:53:15 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/12/07 08:53:15 INFO Client: Setting up container launch context for our AM
16/12/07 08:53:15 INFO Client: Setting up the launch environment for our AM container
16/12/07 08:53:15 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://mycluster/hdp/apps/2.5.0.0-1245/spark/spark-hdp-assembly.jar
16/12/07 08:53:15 INFO Client: Preparing resources for our AM container
16/12/07 08:53:15 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://mycluster/hdp/apps/2.5.0.0-1245/spark/spark-hdp-assembly.jar
16/12/07 08:53:15 INFO Client: Source and destination file systems are the same. Not copying hdfs://mycluster/hdp/apps/2.5.0.0-1245/spark/spark-hdp-assembly.jar
16/12/07 08:53:16 INFO Client: Uploading resource file:/tmp/spark-9d05b084-4096-4f4a-8a0d-d34e5eb47502/__spark_conf__1891090391836780524.zip -> hdfs://mycluster/user/spark/.sparkStaging/application_1480651479504_0029/__spark_conf__1891090391836780524.zip
16/12/07 08:53:16 INFO SecurityManager: Changing view acls to: spark
16/12/07 08:53:16 INFO SecurityManager: Changing modify acls to: spark
16/12/07 08:53:16 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/12/07 08:53:16 INFO Client: Submitting application 29 to ResourceManager
16/12/07 08:53:16 INFO YarnClientImpl: Submitted application application_1480651479504_0029
16/12/07 08:53:16 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1480651479504_0029 and attemptId None
16/12/07 08:53:17 INFO Client: Application report for application_1480651479504_0029 (state: ACCEPTED)
16/12/07 08:53:17 INFO Client: 
     client token: N/A
     diagnostics: AM container is launched, waiting for AM container to Register with RM
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1481071996338
     final status: UNDEFINED
     tracking URL: http://rm.hadoop:8088/proxy/application_1480651479504_0029/
     user: spark
16/12/07 08:53:18 INFO Client: Application report for application_1480651479504_0029 (state: ACCEPTED)
16/12/07 08:53:19 INFO Client: Application report for application_1480651479504_0029 (state: ACCEPTED)
16/12/07 08:53:20 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
16/12/07 08:53:20 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> rm.hadoop, PROXY_URI_BASES -> http://rm.hadoop:8088/proxy/application_1480651479504_0029), /proxy/application_1480651479504_0029
16/12/07 08:53:20 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/12/07 08:53:20 INFO Client: Application report for application_1480651479504_0029 (state: ACCEPTED)
16/12/07 08:53:21 INFO Client: Application report for application_1480651479504_0029 (state: RUNNING)
16/12/07 08:53:21 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 10.50.1.158
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1481071996338
     final status: UNDEFINED
     tracking URL: http://rm.hadoop:8088/proxy/application_1480651479504_0029/
     user: spark
16/12/07 08:53:21 INFO YarnClientSchedulerBackend: Application application_1480651479504_0029 has started running.
16/12/07 08:53:21 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 40062.
16/12/07 08:53:21 INFO NettyBlockTransferService: Server created on 40062
16/12/07 08:53:21 INFO BlockManagerMaster: Trying to register BlockManager
16/12/07 08:53:21 INFO BlockManagerMasterEndpoint: Registering block manager 10.50.1.158:40062 with 511.1 MB RAM, BlockManagerId(driver, 10.50.1.158, 40062)
16/12/07 08:53:21 INFO BlockManagerMaster: Registered BlockManager
16/12/07 08:53:22 INFO EventLoggingListener: Logging events to hdfs:///spark-history/application_1480651479504_0029
16/12/07 08:53:24 INFO YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (dn2.hadoop:35283) with ID 1
16/12/07 08:53:24 INFO BlockManagerMasterEndpoint: Registering block manager dn2.hadoop:50521 with 511.1 MB RAM, BlockManagerId(1, dn2.hadoop, 50521)
16/12/07 08:53:24 INFO YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (dn5.hadoop:35323) with ID 2
16/12/07 08:53:24 INFO BlockManagerMasterEndpoint: Registering block manager dn5.hadoop:44573 with 511.1 MB RAM, BlockManagerId(2, dn5.hadoop, 44573)
16/12/07 08:53:24 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
16/12/07 08:53:24 INFO SparkILoop: Created spark context..
Spark context available as sc.

Hoping someone can come and help me quickly!

16/12/09 16:10:49 WARN Client: 
hdp.version is not found,
Please set HDP_VERSION=xxx in spark-env.sh,
or set -Dhdp.version=xxx in spark.{driver|yarn.am}.extraJavaOptions
or set SPARK_JAVA_OPTS="-Dhdp.verion=xxx" in spark-env.sh
If you're running Spark under HDP.
        
16/12/09 16:10:49 INFO SecurityManager: Changing view acls to: JiKai Lin,spark

How do I set hdp.version from IDEA?

I've tried all kinds of approaches:
1. In code, via SparkConf:

conf.set("spark.driver.extraJavaOptions","-Dhdp.version=2.5.0.0-1245")
conf.set("spark.yarn.am.extraJavaOptions","-Dhdp.version=2.5.0.0-1245")

2. In spark-env.sh:

export HDP_VERSION=${hdp.verson:-2.5.0.0-1245}

3. In spark-defaults.conf:

spark.driver.extraJavaOptions -Dhdp.version=2.5.0.0-1245
spark.yarn.am.extraJavaOptions -Dhdp.version=2.5.0.0-1245

None of them work. I really don't understand how to configure this thing. How does the Ambari cluster manage to set it?
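One likely reason attempts 1 and 3 fail from IDEA: in yarn-client mode the driver JVM is the IDEA process itself, already running by the time SparkConf is read, so `spark.driver.extraJavaOptions` comes too late for the local side. A sketch of a workaround (the version string is the one from this cluster; treat the exact property handling as an assumption and verify against your HDP build):

```scala
// Sketch: set hdp.version before the SparkContext is created. Alternatively,
// add -Dhdp.version=2.5.0.0-1245 to the IDEA run configuration's VM options,
// which has the same effect for the driver process.
System.setProperty("hdp.version", "2.5.0.0-1245")

// The ApplicationMaster runs on the cluster, so it still needs the flag
// passed through spark.yarn.am.extraJavaOptions.
val conf = new org.apache.spark.SparkConf()
  .set("spark.yarn.am.extraJavaOptions", "-Dhdp.version=2.5.0.0-1245")
```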

I've already solved this problem; answering my own question.


I ran into the same error as you. How did you solve it? Could you share?

Buddy, once you've fixed it please share your experience so everyone can learn. On my side I got it working by adding export HDP_VERSION=3.0.1.0-187 to the startup script (.sh file) after packaging.
