Hi - I am using CalliopeServer 2 compiled for Hadoop 1 with Cassandra 2.1. It works fine when I issue a simple select query such as "SELECT x from ks.cf".
However, when I issue the following query from a Beeline client connected to CalliopeServer 2:
"SELECT * from Keyspace.ColumnFamily"
I get the following exception:
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 1.0 failed 4 times, most recent failure: Lost task 2.3 in stage 1.0 (TID 6, gbd-cass-02): java.lang.ClassCastException: com.tuplejump.calliope.hadoop.MultiRangeSplit cannot be cast to com.tuplejump.calliope.hadoop.ColumnFamilySplit
Here are the error messages printed by CalliopeServer:
spark.jars --> file:/ephemeral/pkgs/calliope-server-assembly-1.1.0-CTP-U2.jar
spark.driver.port --> 44996
spark.master --> spark://gbd-is-02:7077
14/11/21 02:25:28 WARN TaskSetManager: Lost task 2.0 in stage 1.0 (TID 2, gbd-cass-02): java.lang.ClassCastException: com.tuplejump.calliope.hadoop.MultiRangeSplit cannot be cast to com.tuplejump.calliope.hadoop.ColumnFamilySplit
com.tuplejump.calliope.hadoop.cql3.CqlRecordReader.initializeWithColumnFamilySplit(CqlRecordReader.java:93)
com.tuplejump.calliope.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:87)
com.tuplejump.calliope.native.NativeCassandraRDD$$anon$1.<init>(NativeCassandraRDD.scala:65)
com.tuplejump.calliope.native.NativeCassandraRDD.compute(NativeCassandraRDD.scala:50)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
14/11/21 02:25:29 ERROR TaskSetManager: Task 2 in stage 1.0 failed 4 times; aborting job
14/11/21 02:25:29 ERROR CalliopeSQLOperationManager: Error executing query:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 1.0 failed 4 times, most recent failure: Lost task 2.3 in stage 1.0 (TID 6, gbd-cass-02): java.lang.ClassCastException: com.tuplejump.calliope.hadoop.MultiRangeSplit cannot be cast to com.tuplejump.calliope.hadoop.ColumnFamilySplit
com.tuplejump.calliope.hadoop.cql3.CqlRecordReader.initializeWithColumnFamilySplit(CqlRecordReader.java:93)
com.tuplejump.calliope.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:87)
com.tuplejump.calliope.native.NativeCassandraRDD$$anon$1.<init>(NativeCassandraRDD.scala:65)
com.tuplejump.calliope.native.NativeCassandraRDD.compute(NativeCassandraRDD.scala:50)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
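Looking at the trace, CqlRecordReader.initializeWithColumnFamilySplit (CqlRecordReader.java:93) appears to cast whatever InputSplit it receives to ColumnFamilySplit, while the "SELECT *" path seems to hand it MultiRangeSplit instances instead. Below is a minimal, self-contained Java sketch of that failure mode; the class names are invented stand-ins for illustration, not the actual Calliope or Hadoop types.

// Hypothetical illustration only: the types below are made up; just the
// unconditional cast and the resulting ClassCastException mirror the trace above.
abstract class InputSplitSketch {}                         // stands in for Hadoop's InputSplit
class ColumnFamilySplitSketch extends InputSplitSketch {}  // analogue of ColumnFamilySplit
class MultiRangeSplitSketch extends InputSplitSketch {}    // analogue of MultiRangeSplit

public class CastFailureDemo {
    // Mirrors what the record reader appears to do: assume every incoming
    // split is a ColumnFamilySplit and cast it unconditionally.
    static void initialize(InputSplitSketch split) {
        ColumnFamilySplitSketch cfSplit = (ColumnFamilySplitSketch) split; // throws for any other subclass
        System.out.println("initialized with " + cfSplit.getClass().getSimpleName());
    }

    public static void main(String[] args) {
        initialize(new ColumnFamilySplitSketch()); // fine: matches the simple single-column query path
        initialize(new MultiRangeSplitSketch());   // throws ClassCastException, like the "SELECT *" path
    }
}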