
"select * from ks.cf" causes an exception #43

Open
mohammedguller opened this issue Nov 21, 2014 · 0 comments
Hi - I am using CalliopeServer2 compiled for Hadoop 1 with Cassandra 2.1. It works fine when I issue a simple select query such as "SELECT x from ks.cf".

However, when I issue the following query from a beeline client connected to CalliopeServer2:

"SELECT * from Keyspace.ColumnFamily"

I get the following exception:

Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 1.0 failed 4 times, most recent failure: Lost task 2.3 in stage 1.0 (TID 6, gbd-cass-02): java.lang.ClassCastException: com.tuplejump.calliope.hadoop.MultiRangeSplit cannot be cast to com.tuplejump.calliope.hadoop.ColumnFamilySplit

Here are the error messages printed by CalliopeServer2:
spark.jars --> file:/ephemeral/pkgs/calliope-server-assembly-1.1.0-CTP-U2.jar
spark.driver.port --> 44996
spark.master --> spark://gbd-is-02:7077
14/11/21 02:25:28 WARN TaskSetManager: Lost task 2.0 in stage 1.0 (TID 2, gbd-cass-02): java.lang.ClassCastException: com.tuplejump.calliope.hadoop.MultiRangeSplit cannot be cast to com.tuplejump.calliope.hadoop.ColumnFamilySplit
com.tuplejump.calliope.hadoop.cql3.CqlRecordReader.initializeWithColumnFamilySplit(CqlRecordReader.java:93)
com.tuplejump.calliope.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:87)
com.tuplejump.calliope.native.NativeCassandraRDD$$anon$1.&lt;init&gt;(NativeCassandraRDD.scala:65)
com.tuplejump.calliope.native.NativeCassandraRDD.compute(NativeCassandraRDD.scala:50)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
14/11/21 02:25:29 ERROR TaskSetManager: Task 2 in stage 1.0 failed 4 times; aborting job
14/11/21 02:25:29 ERROR CalliopeSQLOperationManager: Error executing query:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 1.0 failed 4 times, most recent failure: Lost task 2.3 in stage 1.0 (TID 6, gbd-cass-02): java.lang.ClassCastException: com.tuplejump.calliope.hadoop.MultiRangeSplit cannot be cast to com.tuplejump.calliope.hadoop.ColumnFamilySplit
com.tuplejump.calliope.hadoop.cql3.CqlRecordReader.initializeWithColumnFamilySplit(CqlRecordReader.java:93)
com.tuplejump.calliope.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:87)
com.tuplejump.calliope.native.NativeCassandraRDD$$anon$1.&lt;init&gt;(NativeCassandraRDD.scala:65)
com.tuplejump.calliope.native.NativeCassandraRDD.compute(NativeCassandraRDD.scala:50)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
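
From the trace, the failure point appears to be CqlRecordReader.initialize (CqlRecordReader.java:87), which delegates to initializeWithColumnFamilySplit (CqlRecordReader.java:93) and so seems to assume every input split is a ColumnFamilySplit. Below is my guess at the shape of the failing code, reconstructed purely from the class and method names in the trace; it is a sketch, not the actual Calliope source:

```java
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

import com.tuplejump.calliope.hadoop.ColumnFamilySplit;

// Hypothetical reconstruction from the stack trace above, NOT the actual
// Calliope source: initialize() hands the generic InputSplit straight to a
// helper that casts it to ColumnFamilySplit.
class CqlRecordReaderSketch {

    public void initialize(InputSplit split, TaskAttemptContext context) {
        // Corresponds to CqlRecordReader.java:87 in the trace.
        initializeWithColumnFamilySplit(split, context);
    }

    private void initializeWithColumnFamilySplit(InputSplit split,
                                                 TaskAttemptContext context) {
        // Corresponds to CqlRecordReader.java:93 in the trace: this cast
        // throws ClassCastException when the input format hands the reader
        // a com.tuplejump.calliope.hadoop.MultiRangeSplit instead of a
        // ColumnFamilySplit, which appears to happen for an unrestricted
        // "SELECT *" scan (a projected query like "SELECT x" works).
        ColumnFamilySplit cfSplit = (ColumnFamilySplit) split;
        // ... reader setup using cfSplit would follow here ...
    }
}
```

If that reading is right, either the input format should emit only ColumnFamilySplit on this query path, or initialize() needs a branch that handles MultiRangeSplit as well.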
