Redirecting Apache Spark driver logs to a directory in both cluster and client mode
Apache Spark driver logs need to be directed to a directory in both cluster and client mode, so that application users can capture the useful information logged in the driver class. There are two scenarios to cover:
1) Spark application running in yarn-client mode
2) Spark application running in yarn-cluster mode
Spark application running in yarn-client mode
When running a job in yarn-client mode, the driver logs are printed to the console. This is not useful for long-running jobs, because the terminal session may be closed or aborted, so it is always a good approach to log the driver output to a definite location.
The following approach for yarn-client mode is discussed in this Hortonworks Community article: https://community.hortonworks.com/articles/138849/how-to-capture-spark-driver-and-executor-logs-in-y.html. Here are the steps:
1. Place a driver_log4j.properties file in a certain location (say /tmp) on the machine where you will be submitting the job in yarn-client mode.
Contents of driver_log4j.properties:
#Set everything to be logged to the file
log4j.rootCategory=INFO,FILE
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File=/tmp/SparkDriver.log
log4j.appender.FILE.ImmediateFlush=true
log4j.appender.FILE.Threshold=debug
log4j.appender.FILE.Append=true
log4j.appender.FILE.MaxFileSize=500MB
log4j.appender.FILE.MaxBackupIndex=10
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
#Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
Change the value of log4j.appender.FILE.File as needed.
2. Add the following to the spark-submit command so that it picks up the above log4j properties and makes the driver log to a file:
--driver-java-options "-Dlog4j.configuration=file:/tmp/driver_log4j.properties"
Example:
spark-submit --driver-java-options "-Dlog4j.configuration=file:/tmp/driver_log4j.properties" --class org.apache.spark.examples.JavaSparkPi --master yarn-client --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1 spark-examples*.jar 10
3. Now, once you submit this new command, the Spark driver will log to the location specified by log4j.appender.FILE.File in driver_log4j.properties, in this case /tmp/SparkDriver.log.
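For reference, here is a minimal sketch of a driver class whose log statements would land in /tmp/SparkDriver.log under the configuration above (the class name and message are illustrative, not from the article):
import org.apache.log4j.Logger;
import org.apache.spark.sql.SparkSession;

public class DriverLogDemo {
    private static final Logger LOG = Logger.getLogger(DriverLogDemo.class);

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("DriverLogDemo").getOrCreate();
        // Anything logged through log4j in the driver process goes to the FILE
        // appender, i.e. /tmp/SparkDriver.log, rather than the console.
        LOG.info("Driver started; default parallelism = " + spark.sparkContext().defaultParallelism());
        spark.stop();
    }
}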
Spark application running in yarn-cluster mode
In cluster mode the driver runs on a data node, so we can enable it to log to a file on that node's local disk using the configuration below.
--conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=driver_log4j.properties" 
Please note: the log4j properties file needs to be available on the driver classpath. This can be arranged in multiple ways, for example by building an uber JAR and placing it in the node lib directories, or by shipping the file with the --files option as shown below.
--files file:/tmp/driver_log4j.properties (change the path as necessary to point to your exact log4j properties file)
Using these two options together, the driver logs are pushed to the local disk of the data node where the driver runs.
Example:
spark-submit --files file:/tmp/driver_log4j.properties --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=driver_log4j.properties"  --class org.apache.spark.examples.JavaSparkPi --master yarn --deploy-mode cluster --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1 /usr/hdp/current/spark2-client/examples/jars/spark-examples*.jar 10
Note: Spark applications running in different modes use different properties to enable driver logging:
In client mode, use --driver-java-options.
In cluster mode, use spark.driver.extraJavaOptions.
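If jobs are submitted programmatically rather than via spark-submit, the same cluster-mode settings can be applied through Spark's launcher API. Below is a minimal sketch using org.apache.spark.launcher.SparkLauncher (the wrapper class name is illustrative; the JAR path is the one used in the example above):
import org.apache.spark.launcher.SparkLauncher;

public class SubmitPiWithDriverLogging {
    public static void main(String[] args) throws Exception {
        Process submit = new SparkLauncher()
            .setMaster("yarn")
            .setDeployMode("cluster")
            .setAppResource("/usr/hdp/current/spark2-client/examples/jars/spark-examples_2.11-2.1.1.2.6.1.0-129.jar")
            .setMainClass("org.apache.spark.examples.JavaSparkPi")
            // Ship the properties file into the driver container's working directory
            .addFile("/tmp/driver_log4j.properties")
            // Cluster mode: reference the localized file by its bare name
            .setConf(SparkLauncher.DRIVER_EXTRA_JAVA_OPTIONS, "-Dlog4j.configuration=driver_log4j.properties")
            .addAppArgs("10")
            .launch();
        submit.waitFor();
    }
}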
Sample Output:
<**.***.***.***3120.**.***.***.***> REMOTE_MODULE command sudo cat /tmp/SparkDriver.log #USE_SHELL
**.***.***.***3120.**.***.***.*** | success | rc=0 >>
17/11/27 10:59:56 INFO SignalUtils: Registered signal handler for TERM
17/11/27 10:59:56 INFO SignalUtils: Registered signal handler for HUP
17/11/27 10:59:56 INFO SignalUtils: Registered signal handler for INT
17/11/27 10:59:57 INFO ApplicationMaster: Preparing Local resources
17/11/27 10:59:58 WARN Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
17/11/27 10:59:59 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1524603292751_2668_000001
17/11/27 10:59:59 INFO SecurityManager: Changing view acls to: userptt
17/11/27 10:59:59 INFO SecurityManager: Changing modify acls to: userptt
17/11/27 10:59:59 INFO SecurityManager: Changing view acls groups to: 
17/11/27 10:59:59 INFO SecurityManager: Changing modify acls groups to: 
17/11/27 10:59:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(userptt); groups with view permissions: Set(); users  with modify permissions: Set(userptt); groups with modify permissions: Set()
17/11/27 10:59:59 INFO ApplicationMaster: Starting the user application in a separate Thread
17/11/27 10:59:59 INFO ApplicationMaster: Waiting for spark context initialization...
17/11/27 10:59:59 INFO SparkContext: Running Spark version 2.1.1.2.6.1.0-129
17/11/27 10:59:59 WARN SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
17/11/27 10:59:59 INFO SecurityManager: Changing view acls to: userptt
17/11/27 10:59:59 INFO SecurityManager: Changing modify acls to: userptt
17/11/27 10:59:59 INFO SecurityManager: Changing view acls groups to: 
17/11/27 10:59:59 INFO SecurityManager: Changing modify acls groups to: 
17/11/27 10:59:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(userptt); groups with view permissions: Set(); users  with modify permissions: Set(userptt); groups with modify permissions: Set()
17/11/27 10:59:59 INFO Utils: Successfully started service 'sparkDriver' on port 43627.
17/11/27 10:59:59 INFO SparkEnv: Registering MapOutputTracker
17/11/27 10:59:59 INFO SparkEnv: Registering BlockManagerMaster
17/11/27 10:59:59 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/11/27 10:59:59 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/11/27 10:59:59 INFO DiskBlockManager: Created local directory at /ngs1/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/blockmgr-16bb497f-b218-486f-a57e-db06df6c5f9a
17/11/27 10:59:59 INFO DiskBlockManager: Created local directory at /ngs2/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/blockmgr-ce3910c4-93e5-43d1-98c7-6e9e076f9180
17/11/27 10:59:59 INFO DiskBlockManager: Created local directory at /ngs3/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/blockmgr-615ba90d-5856-4561-9735-1485f5b68e12
17/11/27 10:59:59 INFO DiskBlockManager: Created local directory at /ngs4/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/blockmgr-f43d840f-a5dd-43b0-8c04-35a5f6a2d64f
17/11/27 10:59:59 INFO DiskBlockManager: Created local directory at /ngs5/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/blockmgr-06bf20ec-d5f9-410e-96ee-d116ca5c83df
17/11/27 10:59:59 INFO DiskBlockManager: Created local directory at /ngs6/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/blockmgr-ed7370e1-c841-4bd8-b83f-bbfcf226b314
17/11/27 10:59:59 INFO DiskBlockManager: Created local directory at /ngs7/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/blockmgr-bfadcac0-049c-4b3f-ba37-cd8f490fa306
17/11/27 10:59:59 INFO DiskBlockManager: Created local directory at /ngs8/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/blockmgr-419e6cf1-bfd0-4211-8cdb-5a67b3a559d3
17/11/27 10:59:59 INFO MemoryStore: MemoryStore started with capacity 114.6 MB
17/11/27 10:59:59 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/27 11:00:00 INFO log: Logging initialized @3981ms
17/11/27 11:00:00 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
17/11/27 11:00:00 INFO Server: jetty-9.2.z-SNAPSHOT
17/11/27 11:00:00 INFO Server: Started @4122ms
17/11/27 11:00:00 INFO ServerConnector: Started ServerConnector@1f6dd073{HTTP/1.1}{0.0.0.0:33403}
17/11/27 11:00:00 INFO Utils: Successfully started service 'SparkUI' on port 33403.
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5208b11e{/jobs,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@36cbafc3{/jobs/json,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5fc9d050{/jobs/job,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@415edbff{/jobs/job/json,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@67e82540{/stages,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1b267ddf{/stages/json,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@73bd4585{/stages/stage,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5f872bcf{/stages/stage/json,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@29c7eb53{/stages/pool,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@d664208{/stages/pool/json,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7cad2eea{/storage,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3ac7f1b{/storage/json,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7b3d7c63{/storage/rdd,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2e087a4b{/storage/rdd/json,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@32703943{/environment,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7d438814{/environment/json,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4a99ba6c{/executors,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2f0d0a57{/executors/json,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5a88d2a9{/executors/threadDump,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@6fb73bde{/executors/threadDump/json,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@29558943{/static,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@592bff88{/,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@11bb1d98{/api,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@6518d2bb{/jobs/job/kill,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@31bb0e70{/stages/stage/kill,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://**.***.***.***:33403
17/11/27 11:00:00 INFO YarnClusterScheduler: Created YarnClusterScheduler
17/11/27 11:00:00 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_15246132313151_2668 and attemptId Some(appattempt_1524603292751_2668_000001)
17/11/27 11:00:00 INFO Utils: Using initial executors = 3, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
17/11/27 11:00:00 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41204.
17/11/27 11:00:00 INFO NettyBlockTransferService: Server created on **.***.***.***.100:41204
17/11/27 11:00:00 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/11/27 11:00:00 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver,**.***.***.*** 41204, None)
17/11/27 11:00:00 INFO BlockManagerMasterEndpoint: Registering block manager **.***.***.***:41204 with 114.6 MB RAM, BlockManagerId(driver, **.***.***.***.100, 41204, None)
17/11/27 11:00:00 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, **.***.***.***.100, 41204, None)
17/11/27 11:00:00 INFO BlockManager: external shuffle service port = 7447
17/11/27 11:00:00 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, **.***.***.***.100, 41204, None)
17/11/27 11:00:00 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@419b6540{/metrics/json,null,AVAILABLE,@Spark}
17/11/27 11:00:00 INFO EventLoggingListener: Logging events to hdfs:///spark-history2/application_15246132313151_2668_1
17/11/27 11:00:00 INFO Utils: Using initial executors = 3, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
17/11/27 11:00:00 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
17/11/27 11:00:00 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@**.***.***.***.100:43627)
17/11/27 11:00:01 INFO ApplicationMaster: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {#PWD#}<CPS>{#PWD#}/__spark_conf__<CPS>{#PWD#}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>/usr/hdp/current/hadoop-client/*<CPS>/usr/hdp/current/hadoop-client/lib/*<CPS>/usr/hdp/current/hadoop-hdfs-client/*<CPS>/usr/hdp/current/hadoop-hdfs-client/lib/*<CPS>/usr/hdp/current/hadoop-yarn-client/*<CPS>/usr/hdp/current/hadoop-yarn-client/lib/*<CPS>/etc/hadoop/conf/:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/2.6.1.0-129/hadoop/lib/hadoop-lzo-0.6.0.2.6.1.0-129.jar:/etc/hadoop/conf
    SPARK_YARN_STAGING_DIR -> hdfs://graven23/user/userptt/.sparkStaging/application_15246132313151_2668
    SPARK_USER -> userptt
    SPARK_YARN_MODE -> true
  command:
    LD_LIBRARY_PATH="/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/current/hadoop-client/lib/native/:$LD_LIBRARY_PATH" \ 
      {#JAVA_HOME#}/bin/java \ 
      -server \ 
      -Xmx512m \ 
      '-XX:+UseG1GC' \ 
      '-XX:InitiatingHeapOccupancyPercent=45' \ 
      '-XX:ConcGCThreads=4' \ 
      '-Djava.net.preferIPv4Stack=true' \ 
      '-XX:MaxPermSize=512m' \ 
      '-XX:+PrintGCDetails' \ 
      '-XX:+PrintGCTimeStamps' \ 
      '-XX:+UseCompressedOops' \ 
      '-Dhdp.version=2.2.9.26-3' \ 
      -Djava.io.tmpdir={#PWD#}/tmp \ 
      '-Dspark.ssl.historyServer.protocol=TLS' \ 
      '-Dspark.history.ui.port=18080' \ 
      '-Dspark.ssl.historyServer.enabled=true' \ 
      '-Dspark.ssl.historyServer.trustStore=/tmp/app/hdfs/security/node.ts' \ 
      '-Dspark.ssl.historyServer.trustStoreType=JKS' \ 
      '-Dspark.ssl.historyServer.keyStore=/tmp/app/hdfs/security/node.ks' \ 
      '-Dspark.ssl.historyServer.keyStoreType=JKS' \ 
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \ 
      -XX:OnOutOfMemoryError='kill %p' \ 
      org.apache.spark.executor.CoarseGrainedExecutorBackend \ 
      --driver-url \ 
      spark://CoarseGrainedScheduler@**.***.***.***.100:43627 \ 
      --executor-id \ 
      <executorId> \ 
      --hostname \ 
      <hostname> \ 
      --cores \ 
      1 \ 
      --app-id \ 
      application_15246132313151_2668 \ 
      --user-class-path \ 
      file:$PWD/__app__.jar \ 
      1><LOG_DIR>/stdout \ 
      2><LOG_DIR>/stderr
  resources:
    __app__.jar -> resource { scheme: "hdfs" host: "graven23" port: -1 file: "/user/userptt/.sparkStaging/application_15246132313151_2668/spark-examples_2.11-2.1.1.2.6.1.0-129.jar" } size: 1977149 timestamp: 1524826792781 type: FILE visibility: PRIVATE
    __spark_libs__ -> resource { scheme: "hdfs" host: "graven23" port: -1 file: "/user/spark/spark2.1.1/archive/spark-2.1.1.zip" } size: 182900368 timestamp: 1502843495278 type: ARCHIVE visibility: PUBLIC
    __spark_conf__ -> resource { scheme: "hdfs" host: "graven23" port: -1 file: "/user/userptt/.sparkStaging/application_15246132313151_2668/__spark_conf__.zip" } size: 107758 timestamp: 1524826792973 type: ARCHIVE visibility: PRIVATE
    driver_log4j.properties -> resource { scheme: "hdfs" host: "graven23" port: -1 file: "/user/userptt/.sparkStaging/application_15246132313151_2668/driver_log4j.properties" } size: 1077 timestamp: 1524826792836 type: FILE visibility: PRIVATE
===============================================================================
17/11/27 11:00:01 INFO YarnRMClient: Registering the ApplicationMaster
17/11/27 11:00:01 INFO ConfiguredRMFailoverProxyProvider: Failing over to rm2
17/11/27 11:00:01 INFO Utils: Using initial executors = 3, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
17/11/27 11:00:01 INFO YarnAllocator: Will request 3 executor container(s), each with 1 core(s) and 896 MB memory (including 384 MB of overhead)
17/11/27 11:00:01 INFO YarnAllocator: Submitted 3 unlocalized container requests.
17/11/27 11:00:01 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
17/11/27 11:00:01 INFO AMRMClientImpl: Received new token for : **.***.***.***3059.**.***.***.***:45454
17/11/27 11:00:01 INFO YarnAllocator: Launching container container_e48_1524603292751_2668_01_000002 on host **.***.***.***3059.**.***.***.***
17/11/27 11:00:01 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
17/11/27 11:00:01 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
17/11/27 11:00:01 INFO ContainerManagementProtocolProxy: Opening proxy : **.***.***.***3059.**.***.***.***:45454
17/11/27 11:00:01 INFO AMRMClientImpl: Received new token for : **.***.***.***3058.**.***.***.***:45454
17/11/27 11:00:01 INFO AMRMClientImpl: Received new token for : **.***.***.***3063.**.***.***.***:45454
17/11/27 11:00:01 INFO YarnAllocator: Launching container container_e48_1524603292751_2668_01_000003 on host **.***.***.***3058.**.***.***.***
17/11/27 11:00:01 INFO YarnAllocator: Launching container container_e48_1524603292751_2668_01_000004 on host **.***.***.***3063.**.***.***.***
17/11/27 11:00:01 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
17/11/27 11:00:01 INFO YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them.
17/11/27 11:00:01 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
17/11/27 11:00:01 INFO ContainerManagementProtocolProxy: Opening proxy : **.***.***.***3058.**.***.***.***:45454
17/11/27 11:00:01 INFO ContainerManagementProtocolProxy: Opening proxy : **.***.***.***3063.**.***.***.***:45454
17/11/27 11:00:04 INFO AMRMClientImpl: Received new token for : **.***.***.***:45454
17/11/27 11:00:04 INFO AMRMClientImpl: Received new token for : **.***.***.***:45454
17/11/27 11:00:04 INFO YarnAllocator: Received 2 containers from YARN, launching executors on 0 of them.
17/11/27 11:00:07 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (**.***.***.***.39:46740) with ID 1
17/11/27 11:00:07 INFO ExecutorAllocationManager: New executor 1 has registered (new total is 1)
17/11/27 11:00:07 INFO BlockManagerMasterEndpoint: Registering block manager **.***.***.***3059.**.***.***.***:44551 with 127.2 MB RAM, BlockManagerId(1, **.***.***.***3059.**.***.***.***, 44551, None)
17/11/27 11:00:07 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (**.***.***.***.38:37132) with ID 2
17/11/27 11:00:07 INFO ExecutorAllocationManager: New executor 2 has registered (new total is 2)
17/11/27 11:00:07 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (**.***.***.***.43:33090) with ID 3
17/11/27 11:00:07 INFO ExecutorAllocationManager: New executor 3 has registered (new total is 3)
17/11/27 11:00:07 INFO BlockManagerMasterEndpoint: Registering block manager **.***.***.***3063.**.***.***.***:38315 with 127.2 MB RAM, BlockManagerId(3, **.***.***.***3063.**.***.***.***, 38315, None)
17/11/27 11:00:07 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
17/11/27 11:00:07 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done
17/11/27 11:00:07 INFO BlockManagerMasterEndpoint: Registering block manager **.***.***.***3058.**.***.***.***:46356 with 127.2 MB RAM, BlockManagerId(2, **.***.***.***3058.**.***.***.***, 46356, None)
17/11/27 11:00:07 INFO SharedState: Warehouse path is 'file:/ngs2/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/container_e48_1524603292751_2668_01_000001/spark-warehouse'.
17/11/27 11:00:07 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7741d482{/SQL,null,AVAILABLE,@Spark}
17/11/27 11:00:07 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@37a55ec{/SQL/json,null,AVAILABLE,@Spark}
17/11/27 11:00:07 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5b91ad4e{/SQL/execution,null,AVAILABLE,@Spark}
17/11/27 11:00:07 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@a749d0{/SQL/execution/json,null,AVAILABLE,@Spark}
17/11/27 11:00:07 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@292f8639{/static/sql,null,AVAILABLE,@Spark}
17/11/27 11:00:08 INFO SparkContext: Starting job: reduce at JavaSparkPi.java:52
17/11/27 11:00:08 INFO DAGScheduler: Got job 0 (reduce at JavaSparkPi.java:52) with 10 output partitions
17/11/27 11:00:08 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at JavaSparkPi.java:52)
17/11/27 11:00:08 INFO DAGScheduler: Parents of final stage: List()
17/11/27 11:00:08 INFO DAGScheduler: Missing parents: List()
17/11/27 11:00:08 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at JavaSparkPi.java:52), which has no missing parents
17/11/27 11:00:08 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.3 KB, free 114.6 MB)
17/11/27 11:00:08 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1355.0 B, free 114.6 MB)
17/11/27 11:00:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on **.***.***.***.100:41204 (size: 1355.0 B, free: 114.6 MB)
17/11/27 11:00:08 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
17/11/27 11:00:08 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at JavaSparkPi.java:52)
17/11/27 11:00:08 INFO YarnClusterScheduler: Adding task set 0.0 with 10 tasks
17/11/27 11:00:08 WARN TaskSetManager: Stage 0 contains a task of very large size (390 KB). The maximum recommended task size is 100 KB.
17/11/27 11:00:08 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, **.***.***.***3063.**.***.***.***, executor 3, partition 0, PROCESS_LOCAL, 399560 bytes)
17/11/27 11:00:08 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, **.***.***.***3058.**.***.***.***, executor 2, partition 1, PROCESS_LOCAL, 407856 bytes)
17/11/27 11:00:08 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, **.***.***.***3059.**.***.***.***, executor 1, partition 2, PROCESS_LOCAL, 407856 bytes)
17/11/27 11:00:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on **.***.***.***3063.**.***.***.***:38315 (size: 1355.0 B, free: 127.2 MB)
17/11/27 11:00:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on **.***.***.***3059.**.***.***.***:44551 (size: 1355.0 B, free: 127.2 MB)
17/11/27 11:00:09 INFO YarnAllocator: Driver requested a total number of 4 executor(s).
17/11/27 11:00:09 INFO YarnAllocator: Will request 1 executor container(s), each with 1 core(s) and 896 MB memory (including 384 MB of overhead)
17/11/27 11:00:09 INFO YarnAllocator: Submitted 1 unlocalized container requests.
17/11/27 11:00:09 INFO ExecutorAllocationManager: Requesting 1 new executor because tasks are backlogged (new desired total will be 4)
17/11/27 11:00:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on **.***.***.***3058.**.***.***.***:46356 (size: 1355.0 B, free: 127.2 MB)
17/11/27 11:00:09 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, **.***.***.***3063.**.***.***.***, executor 3, partition 3, PROCESS_LOCAL, 407856 bytes)
17/11/27 11:00:09 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 986 ms on **.***.***.***3063.**.***.***.*** (executor 3) (1/10)
17/11/27 11:00:09 INFO AMRMClientImpl: Received new token for : **.***.***.***3083.**.***.***.***:45454
17/11/27 11:00:09 INFO YarnAllocator: Launching container container_e48_1524603292751_2668_01_000007 on host **.***.***.***3083.**.***.***.***
17/11/27 11:00:09 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
17/11/27 11:00:09 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
17/11/27 11:00:09 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, **.***.***.***3059.**.***.***.***, executor 1, partition 4, PROCESS_LOCAL, 407856 bytes)
17/11/27 11:00:09 INFO ContainerManagementProtocolProxy: Opening proxy : **.***.***.***3083.**.***.***.***:45454
17/11/27 11:00:09 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 877 ms on **.***.***.***3059.**.***.***.*** (executor 1) (2/10)
17/11/27 11:00:09 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, **.***.***.***3063.**.***.***.***, executor 3, partition 5, PROCESS_LOCAL, 407856 bytes)
17/11/27 11:00:09 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, **.***.***.***3058.**.***.***.***, executor 2, partition 6, PROCESS_LOCAL, 407856 bytes)
17/11/27 11:00:09 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 124 ms on **.***.***.***3063.**.***.***.*** (executor 3) (3/10)
17/11/27 11:00:09 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 951 ms on **.***.***.***3058.**.***.***.*** (executor 2) (4/10)
17/11/27 11:00:09 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, **.***.***.***3059.**.***.***.***, executor 1, partition 7, PROCESS_LOCAL, 407856 bytes)
17/11/27 11:00:09 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 113 ms on **.***.***.***3059.**.***.***.*** (executor 1) (5/10)
17/11/27 11:00:09 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, **.***.***.***3063.**.***.***.***, executor 3, partition 8, PROCESS_LOCAL, 407856 bytes)
17/11/27 11:00:09 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 92 ms on **.***.***.***3063.**.***.***.*** (executor 3) (6/10)
17/11/27 11:00:09 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, **.***.***.***3058.**.***.***.***, executor 2, partition 9, PROCESS_LOCAL, 407856 bytes)
17/11/27 11:00:09 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 110 ms on **.***.***.***3058.**.***.***.*** (executor 2) (7/10)
17/11/27 11:00:09 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 94 ms on **.***.***.***3063.**.***.***.*** (executor 3) (8/10)
17/11/27 11:00:09 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 116 ms on **.***.***.***3059.**.***.***.*** (executor 1) (9/10)
17/11/27 11:00:09 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 116 ms on **.***.***.***3058.**.***.***.*** (executor 2) (10/10)
17/11/27 11:00:09 INFO YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 
17/11/27 11:00:09 INFO DAGScheduler: ResultStage 0 (reduce at JavaSparkPi.java:52) finished in 1.285 s
17/11/27 11:00:09 INFO DAGScheduler: Job 0 finished: reduce at JavaSparkPi.java:52, took 1.672201 s
17/11/27 11:00:09 INFO ServerConnector: Stopped Spark@1f6dd073{HTTP/1.1}{0.0.0.0:0}
17/11/27 11:00:09 INFO SparkUI: Stopped Spark web UI at http://**.***.***.***.100:33403
17/11/27 11:00:09 INFO YarnAllocator: Driver requested a total number of 0 executor(s).
17/11/27 11:00:09 INFO YarnClusterSchedulerBackend: Shutting down all executors
17/11/27 11:00:09 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
17/11/27 11:00:09 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
17/11/27 11:00:09 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/11/27 11:00:09 INFO MemoryStore: MemoryStore cleared
17/11/27 11:00:09 INFO BlockManager: BlockManager stopped
17/11/27 11:00:09 INFO BlockManagerMaster: BlockManagerMaster stopped
17/11/27 11:00:09 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/11/27 11:00:09 INFO SparkContext: Successfully stopped SparkContext
17/11/27 11:00:09 INFO ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
17/11/27 11:00:09 INFO ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
17/11/27 11:00:09 INFO AMRMClientImpl: Waiting for application to be successfully unregistered.
17/11/27 11:00:10 INFO ApplicationMaster: Deleting staging directory hdfs://graven23/user/userptt/.sparkStaging/application_15246132313151_2668
17/11/27 11:00:10 INFO ShutdownHookManager: Shutdown hook called
17/11/27 11:00:10 INFO ShutdownHookManager: Deleting directory /ngs8/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/spark-451051fa-e4bd-41f4-b655-43417b2e5b27
17/11/27 11:00:10 INFO ShutdownHookManager: Deleting directory /ngs3/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/spark-2ed6c58b-0989-45a6-a0db-93d7c40b0efa
17/11/27 11:00:10 INFO ShutdownHookManager: Deleting directory /ngs4/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/spark-ab10687c-c65f-4fc1-b02b-f0dca7a27997
17/11/27 11:00:10 INFO ShutdownHookManager: Deleting directory /ngs1/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/spark-8ed3551e-2905-4aa8-a752-9c6a11fecaa2
17/11/27 11:00:10 INFO ShutdownHookManager: Deleting directory /ngs5/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/spark-3852fefb-0cac-4388-9724-572ee0758367
17/11/27 11:00:10 INFO ShutdownHookManager: Deleting directory /ngs2/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/spark-7ca91c7a-3af4-4bb0-83db-7589a1f28909
17/11/27 11:00:10 INFO ShutdownHookManager: Deleting directory /ngs7/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/spark-c314ccd5-214a-4c35-a291-ed7c7e44d1e3
17/11/27 11:00:10 INFO ShutdownHookManager: Deleting directory /ngs6/app/yarn/local/usercache/userptt/appcache/application_15246132313151_2668/spark-3498dc8a-a98f-46f5-900c-8ebee8c742f3