Hello,
I am able to successfully pull and run the container, and I can confirm that Hadoop is working. However, any attempt to start Spark (spark-shell or otherwise) hangs, and I cannot find any logs with information indicating what might be going wrong. Not sure if any of the details below help:
I have created a 4 GB VM and am using it for the container
My physical machine is Windows 7
I am using DockerToolbox-1.9.1b
/etc/hosts has 'sandbox' resolving to the IP of the 4 GB VM
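In case it helps, these are the checks I can think of running inside the container to pin down where the hang happens (assuming the JDK command-line tools and getent are available in the image):

    getent hosts sandbox          # confirm 'sandbox' resolves to the VM's IP
    jps -l                        # list running JVMs and their main classes
    jstack <pid-of-spark-shell>   # thread dump to see where the shell is stuck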
In another experiment, I tried running start-all.sh and saw the following exception in the log:
Exception in thread "main" java.lang.ClassFormatError: org.apache.spark.launcher.Main (unrecognized class file version)
at java.lang.VMClassLoader.defineClass(libgcj.so.10)
at java.lang.ClassLoader.defineClass(libgcj.so.10)
at java.security.SecureClassLoader.defineClass(libgcj.so.10)
at java.net.URLClassLoader.findClass(libgcj.so.10)
at java.lang.ClassLoader.loadClass(libgcj.so.10)
at java.lang.ClassLoader.loadClass(libgcj.so.10)
at gnu.java.lang.MainThread.run(libgcj.so.10)
This made me think Spark might be launching under the wrong JVM (the libgcj frames above suggest GNU Java, roughly Java 1.5, which would not recognize class files compiled for a newer JDK), but the Hadoop processes are definitely running on Java 1.7.
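For reference, these are the commands I would run inside the container to see which JVM the Spark launcher scripts actually pick up (the spark-env.sh location under $SPARK_HOME is just a guess for this image):

    which java
    java -version                          # if this reports gij/libgcj, that would explain the ClassFormatError
    readlink -f "$(which java)"            # follow any alternatives symlink to the real binary
    echo $JAVA_HOME
    cat "$SPARK_HOME/conf/spark-env.sh"    # check whether JAVA_HOME is set for Spark (path is an assumption)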
Any help, or advice on where to look to see what might be going wrong, would be much appreciated!
Thanks.