strange error #72

Open · Arnold1 opened this issue Dec 8, 2017 · 0 comments

Arnold1 commented Dec 8, 2017

Hi, I get the following error (it seems to happen randomly; sometimes it works). Any idea why?

$ docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -h sandbox sequenceiq/spark:2.1.0 bash
/
Starting sshd:                                             [  OK  ]
Starting namenodes on [sandbox]
sandbox: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-sandbox.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-sandbox.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-sandbox.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-sandbox.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-sandbox.out
chown: missing operand after `/usr/local/hadoop/logs'
Try `chown --help' for more information.
starting historyserver, logging to /usr/local/hadoop/logs/mapred--historyserver-sandbox.out
bash-4.1# 
bash-4.1# 
bash-4.1# spark-submit --class org.apache.spark.examples.SparkPi \
>              --master yarn \
>              --driver-memory 1g \
>              --executor-memory 1g \
>              --executor-cores 1 \
>              $SPARK_HOME/examples/jars/spark-examples*.jar
17/12/08 11:05:57 INFO spark.SparkContext: Running Spark version 2.1.0
17/12/08 11:05:57 WARN spark.SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
17/12/08 11:05:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/12/08 11:05:58 INFO spark.SecurityManager: Changing view acls to: root
17/12/08 11:05:58 INFO spark.SecurityManager: Changing modify acls to: root
17/12/08 11:05:58 INFO spark.SecurityManager: Changing view acls groups to: 
17/12/08 11:05:58 INFO spark.SecurityManager: Changing modify acls groups to: 
17/12/08 11:05:58 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
17/12/08 11:05:59 INFO util.Utils: Successfully started service 'sparkDriver' on port 32857.
17/12/08 11:05:59 INFO spark.SparkEnv: Registering MapOutputTracker
17/12/08 11:05:59 INFO spark.SparkEnv: Registering BlockManagerMaster
17/12/08 11:05:59 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/12/08 11:05:59 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/12/08 11:05:59 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-9ef99bf3-adf8-4299-b473-1f220542b8d2
17/12/08 11:05:59 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
17/12/08 11:05:59 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/12/08 11:05:59 INFO util.log: Logging initialized @2813ms
17/12/08 11:05:59 INFO server.Server: jetty-9.2.z-SNAPSHOT
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@29968f06{/jobs,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5b87e83e{/jobs/json,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@37a06d64{/jobs/job,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@56ddcc4{/jobs/job/json,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6fb8caa4{/stages,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4d000e49{/stages/json,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3eaa021d{/stages/stage,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@b70de0f{/stages/stage/json,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1f02b0a7{/stages/pool,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@699bb3d8{/stages/pool/json,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6d3c6012{/storage,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@16c775c5{/storage/json,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@104e432{/storage/rdd,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@68218f23{/storage/rdd/json,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@733c783d{/environment,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6fa27e6{/environment/json,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1151709e{/executors,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@79b89df3{/executors/json,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4789faf3{/executors/threadDump,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@33ba8c36{/executors/threadDump/json,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1c4b47c2{/static,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@12542011{/,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5105457d{/api,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@31153b19{/jobs/job/kill,null,AVAILABLE}
17/12/08 11:05:59 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@68daff7b{/stages/stage/kill,null,AVAILABLE}
17/12/08 11:05:59 INFO server.ServerConnector: Started ServerConnector@433c482{HTTP/1.1}{0.0.0.0:4040}
17/12/08 11:05:59 INFO server.Server: Started @3037ms
17/12/08 11:05:59 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
17/12/08 11:05:59 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://172.17.0.2:4040
17/12/08 11:05:59 INFO spark.SparkContext: Added JAR file:/usr/local/spark/examples/jars/spark-examples_2.11-2.1.0.jar at spark://172.17.0.2:32857/jars/spark-examples_2.11-2.1.0.jar with timestamp 1512749159790
17/12/08 11:06:00 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/12/08 11:06:01 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
17/12/08 11:06:01 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
17/12/08 11:06:01 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
17/12/08 11:06:01 INFO yarn.Client: Setting up container launch context for our AM
17/12/08 11:06:01 INFO yarn.Client: Setting up the launch environment for our AM container
17/12/08 11:06:01 INFO yarn.Client: Preparing resources for our AM container
17/12/08 11:06:02 WARN yarn.Client: Failed to cleanup staging dir hdfs://sandbox:9000/user/root/.sparkStaging/application_1512749152998_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /user/root/.sparkStaging/application_1512749152998_0001. Name node is in safe mode.
The reported blocks 238 has reached the threshold 0.9990 of total blocks 238. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 9 seconds.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1327)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3674)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:946)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:611)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

	at org.apache.hadoop.ipc.Client.call(Client.java:1475)
	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy12.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy13.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2044)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.spark.deploy.yarn.Client.cleanupStagingDir(Client.scala:194)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:180)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:156)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
	at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
	at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
	at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/12/08 11:06:02 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/root/.sparkStaging/application_1512749152998_0001. Name node is in safe mode.
The reported blocks 238 has reached the threshold 0.9990 of total blocks 238. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 9 seconds.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1327)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3855)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:977)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

	at org.apache.hadoop.ipc.Client.call(Client.java:1475)
	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy12.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
	at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
	at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1881)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:600)
	at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:432)
	at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:868)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:170)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:156)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
	at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
	at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
	at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/12/08 11:06:02 INFO server.ServerConnector: Stopped ServerConnector@433c482{HTTP/1.1}{0.0.0.0:4040}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@68daff7b{/stages/stage/kill,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@31153b19{/jobs/job/kill,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5105457d{/api,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@12542011{/,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1c4b47c2{/static,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@33ba8c36{/executors/threadDump/json,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4789faf3{/executors/threadDump,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@79b89df3{/executors/json,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1151709e{/executors,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6fa27e6{/environment/json,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@733c783d{/environment,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@68218f23{/storage/rdd/json,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@104e432{/storage/rdd,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@16c775c5{/storage/json,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6d3c6012{/storage,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@699bb3d8{/stages/pool/json,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1f02b0a7{/stages/pool,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@b70de0f{/stages/stage/json,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3eaa021d{/stages/stage,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4d000e49{/stages/json,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6fb8caa4{/stages,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@56ddcc4{/jobs/job/json,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@37a06d64{/jobs/job,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5b87e83e{/jobs/json,null,UNAVAILABLE}
17/12/08 11:06:02 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@29968f06{/jobs,null,UNAVAILABLE}
17/12/08 11:06:02 INFO ui.SparkUI: Stopped Spark web UI at http://172.17.0.2:4040
17/12/08 11:06:02 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
17/12/08 11:06:02 INFO cluster.YarnClientSchedulerBackend: Stopped
17/12/08 11:06:02 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/12/08 11:06:02 INFO memory.MemoryStore: MemoryStore cleared
17/12/08 11:06:02 INFO storage.BlockManager: BlockManager stopped
17/12/08 11:06:02 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/12/08 11:06:02 WARN metrics.MetricsSystem: Stopping a MetricsSystem that is not running
17/12/08 11:06:02 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/12/08 11:06:02 INFO spark.SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/root/.sparkStaging/application_1512749152998_0001. Name node is in safe mode.
The reported blocks 238 has reached the threshold 0.9990 of total blocks 238. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 9 seconds.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1327)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3855)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:977)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

	at org.apache.hadoop.ipc.Client.call(Client.java:1475)
	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy12.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
	at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
	at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1881)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:600)
	at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:432)
	at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:868)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:170)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:156)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
	at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
	at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
	at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/12/08 11:06:02 INFO util.ShutdownHookManager: Shutdown hook called
17/12/08 11:06:02 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-cb8c0672-ec3a-482a-aad4-fa64cbee6b49
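
Note on the failure mode: every stack trace above is a SafeModeException — the HDFS NameNode was still in safe mode when spark-submit ran (the log even says "Safe mode will be turned off automatically in 9 seconds"). Safe mode is a normal, transient startup phase, which would explain why the job only fails intermittently: it fails when submitted too soon after the container boots. A minimal workaround sketch, assuming the stock hdfs CLI (under /usr/local/hadoop/bin in this image) is on the PATH:

# Block until the NameNode leaves safe mode, then submit the job.
hdfs dfsadmin -safemode wait

# Alternatively, force safe mode off (only if the filesystem is known healthy):
# hdfs dfsadmin -safemode leave

spark-submit --class org.apache.spark.examples.SparkPi \
             --master yarn \
             --driver-memory 1g \
             --executor-memory 1g \
             --executor-cores 1 \
             $SPARK_HOME/examples/jars/spark-examples*.jar

The "chown: missing operand" warning during startup looks like a separate, cosmetic quirk of the image's init scripts and does not appear to be what aborts the job.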