
Commit c577ae7

the-sakthi authored and MaxGekk committed
[SPARK-51423][SQL] Add the current_time() function for TIME datatype
### What changes were proposed in this pull request?

This PR adds support for a new function current_time() which returns the current time at the start of query evaluation.

```bash
# happy cases
scala> spark.sql("SELECT current_time(0);").show()
+---------------+
|current_time(0)|
+---------------+
|       17:11:26|
+---------------+

scala> spark.sql("SELECT current_time(3);").show()
+---------------+
|current_time(3)|
+---------------+
|   17:11:50.225|
+---------------+

scala> spark.sql("SELECT current_time(6);").show()
+---------------+
|current_time(6)|
+---------------+
|17:12:00.734735|
+---------------+

# no parentheses and empty parentheses
scala> spark.sql("SELECT current_time;").show()
+---------------+
|current_time(6)|
+---------------+
|17:12:23.132088|
+---------------+

scala> spark.sql("SELECT current_time();").show()
+---------------+
|current_time(6)|
+---------------+
|17:12:26.718602|
+---------------+

# foldability
## nested arithmetic
scala> spark.sql("SELECT current_time((4 - 2) * (1 + 1));").show()
+---------------------------------+
|current_time(((4 - 2) * (1 + 1)))|
+---------------------------------+
|                    17:13:04.4647|
+---------------------------------+

## casting string literals
scala> spark.sql("SELECT current_time(CAST(' 0005 ' AS INT));").show()
+---------------------------------+
|current_time(CAST( 0005  AS INT))|
+---------------------------------+
|                   17:13:26.28039|
+---------------------------------+

scala> spark.sql("SELECT current_time('5');").show()
+---------------+
|current_time(5)|
+---------------+
| 22:34:07.65007|
+---------------+

## combining cast and arithmetic
scala> spark.sql("SELECT current_time(CAST('4' AS INT) * CAST('1' AS INT));").show()
+-----------------------------------------------+
|current_time((CAST(4 AS INT) * CAST(1 AS INT)))|
+-----------------------------------------------+
|                                  17:14:06.7151|
+-----------------------------------------------+

# failure cases
scala> spark.sql("SELECT current_time(-1);").show()
org.apache.spark.sql.catalyst.ExtendedAnalysisException: [DATATYPE_MISMATCH.VALUE_OUT_OF_RANGE] Cannot resolve "current_time(-1)" due to data type mismatch: The `precision` must be between [0, 6] (current value = -1). SQLSTATE: 42K09; line 1 pos 7;
'Project [unresolvedalias(current_time(-1))]
+- OneRowRelation

scala> spark.sql("SELECT current_time('foo');").show()
org.apache.spark.SparkNumberFormatException: [CAST_INVALID_INPUT] The value 'foo' of the type "STRING" cannot be cast to "INT" because it is malformed. Correct the value as per the syntax, or change its target type. Use `try_cast` to tolerate malformed input and return NULL instead. SQLSTATE: 22018
== SQL (line 1, position 8) ==
SELECT current_time('foo');

scala> spark.sql("SELECT current_time(2,2);").show()
org.apache.spark.sql.AnalysisException: [WRONG_NUM_ARGS.WITHOUT_SUGGESTION] The `current_time` requires [0, 1] parameters but the actual number is 2. Please, refer to 'https://spark.apache.org/docs/latest/sql-ref-functions.html' for a fix. SQLSTATE: 42605; line 1 pos 7

# All calls of current_time within the same query should return the same value.
scala> val df = spark.sql("""
     |   SELECT
     |     current_time    AS col1,
     |     current_time()  AS col2,
     |     current_time(0) AS col3,
     |     current_time(1) AS col4,
     |     current_time(2) AS col5,
     |     current_time(3) AS col6,
     |     current_time(4) AS col7,
     |     current_time(5) AS col8,
     |     current_time(6) AS col9,
     |     current_time    AS col10
     |   """)
val df: org.apache.spark.sql.DataFrame = [col1: time(6), col2: time(6) ... 8 more fields]

scala> df.show()
+---------------+---------------+--------+----------+-----------+-----------+-------------+--------------+---------------+---------------+
|           col1|           col2|    col3|      col4|       col5|       col6|         col7|          col8|           col9|          col10|
+---------------+---------------+--------+----------+-----------+-----------+-------------+--------------+---------------+---------------+
|17:15:47.680648|17:15:47.680648|17:15:47|17:15:47.6|17:15:47.68|17:15:47.68|17:15:47.6806|17:15:47.68064|17:15:47.680648|17:15:47.680648|
+---------------+---------------+--------+----------+-----------+-----------+-------------+--------------+---------------+---------------+
```

### Why are the changes needed?

Adds a built-in current_time([n]) function returning just the time portion (as a TIME(n) type). This aligns Spark with other SQL systems that offer a native time function, improves convenience for time-only queries, and complements existing functions like current_date and current_timestamp.

### Does this PR introduce _any_ user-facing change?

Yes, it adds a new function. Users can now get the current time using this function.

### How was this patch tested?

Manual testing as shown above, plus the unit tests added in this PR:

```bash
$ build/sbt "test:testOnly *TimeExpressionsSuite"
$ build/sbt "test:testOnly *ComputeCurrentTimeSuite"
$ build/sbt "test:testOnly *ResolveInlineTablesSuite"
$ build/sbt "test:testOnly *AnalysisSuite"
```

### Was this patch authored or co-authored using generative AI tooling?

No

Closes #50336 from the-sakthi/SPARK-51162.

Authored-by: Sakthi <[email protected]>
Signed-off-by: Max Gekk <[email protected]>
1 parent 01c16af commit c577ae7

File tree

15 files changed: +357 -21 lines changed

sql/api/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4

Lines changed: 1 addition & 1 deletion
@@ -1155,7 +1155,7 @@ datetimeUnit
     ;

 primaryExpression
-    : name=(CURRENT_DATE | CURRENT_TIMESTAMP | CURRENT_USER | USER | SESSION_USER) #currentLike
+    : name=(CURRENT_DATE | CURRENT_TIMESTAMP | CURRENT_USER | USER | SESSION_USER | CURRENT_TIME) #currentLike
     | name=(TIMESTAMPADD | DATEADD | DATE_ADD) LEFT_PAREN (unit=datetimeUnit | invalidUnit=stringLit) COMMA unitsAmount=valueExpression COMMA timestamp=valueExpression RIGHT_PAREN #timestampadd
     | name=(TIMESTAMPDIFF | DATEDIFF | DATE_DIFF | TIMEDIFF) LEFT_PAREN (unit=datetimeUnit | invalidUnit=stringLit) COMMA startTimestamp=valueExpression COMMA endTimestamp=valueExpression RIGHT_PAREN #timestampdiff
     | CASE whenClause+ (ELSE elseExpression=expression)? END #searchedCase

sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/SparkDateTimeUtils.scala

Lines changed: 33 additions & 0 deletions
@@ -133,6 +133,39 @@ trait SparkDateTimeUtils {
     }
   }

+  /**
+   * Gets the number of microseconds since midnight using the session time zone.
+   */
+  def instantToMicrosOfDay(instant: Instant, timezone: String): Long = {
+    val zoneId = getZoneId(timezone)
+    val localDateTime = LocalDateTime.ofInstant(instant, zoneId)
+    localDateTime.toLocalTime.getLong(MICRO_OF_DAY)
+  }
+
+  /**
+   * Truncates a time value (in microseconds) to the specified fractional precision `p`.
+   *
+   * For example, if `p = 3`, we keep millisecond resolution and discard any digits beyond the
+   * thousand-microsecond place. So a value like `123456` microseconds (12:34:56.123456) becomes
+   * `123000` microseconds (12:34:56.123).
+   *
+   * @param micros
+   *   The original time in microseconds.
+   * @param p
+   *   The fractional second precision (range 0 to 6).
+   * @return
+   *   The truncated microsecond value, preserving only `p` fractional digits.
+   */
+  def truncateTimeMicrosToPrecision(micros: Long, p: Int): Long = {
+    assert(
+      p >= TimeType.MIN_PRECISION && p <= TimeType.MICROS_PRECISION,
+      s"Fractional second precision $p out" +
+        s" of range [${TimeType.MIN_PRECISION}..${TimeType.MICROS_PRECISION}].")
+    val scale = TimeType.MICROS_PRECISION - p
+    val factor = math.pow(10, scale).toLong
+    (micros / factor) * factor
+  }
+
   /**
    * Converts the timestamp `micros` from one timezone to another.
    *
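For intuition, here is a self-contained sketch of the arithmetic these two helpers perform, using only the JDK. The names `microsOfDay` and `truncate` are stand-ins for this illustration (not the Spark API), with the TimeType bounds written out as 0 and 6:

```scala
import java.time.{Instant, LocalDateTime, ZoneId}
import java.time.temporal.ChronoField.MICRO_OF_DAY

object CurrentTimeSketch {
  // Microseconds since midnight of `instant` in `zone`, as in instantToMicrosOfDay.
  def microsOfDay(instant: Instant, zone: String): Long =
    LocalDateTime.ofInstant(instant, ZoneId.of(zone)).toLocalTime.getLong(MICRO_OF_DAY)

  // Keep only `p` fractional-second digits, as in truncateTimeMicrosToPrecision:
  // integer division by 10^(6 - p) discards the digits, multiplication restores the scale.
  def truncate(micros: Long, p: Int): Long = {
    require(p >= 0 && p <= 6, s"precision $p out of range [0..6]")
    val factor = math.pow(10, 6 - p).toLong
    (micros / factor) * factor
  }

  def main(args: Array[String]): Unit = {
    val t = microsOfDay(Instant.parse("2025-03-20T17:13:26.280390Z"), "UTC")
    println(t)              // 62006280390  (17:13:26.280390)
    println(truncate(t, 3)) // 62006280000  (17:13:26.280)
    println(truncate(t, 0)) // 62006000000  (17:13:26)
  }
}
```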

sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala

Lines changed: 1 addition & 0 deletions
@@ -627,6 +627,7 @@ object FunctionRegistry {
     expression[CurrentDate]("current_date"),
     expressionBuilder("curdate", CurDateExpressionBuilder, setAlias = true),
     expression[CurrentTimestamp]("current_timestamp"),
+    expression[CurrentTime]("current_time"),
     expression[CurrentTimeZone]("current_timezone"),
     expression[LocalTimestamp]("localtimestamp"),
     expression[DateDiff]("datediff"),
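Registering the class under the name `current_time` makes the call forms (`current_time()`, `current_time(n)`) resolvable and wires the `ExpressionDescription` shown later in this commit into `DESCRIBE FUNCTION`. A hedged spark-shell check, assuming a build containing this patch (output abbreviated):

```scala
// The Function/Class/Usage lines are populated from the ExpressionDescription
// on CurrentTime and may differ slightly in wording.
spark.sql("DESCRIBE FUNCTION current_time").show(truncate = false)
// Function: current_time
// Class: org.apache.spark.sql.catalyst.expressions.CurrentTime
// Usage: current_time([precision]) - Returns the current time at the start of query evaluation. ...
```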

sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/LiteralFunctionResolution.scala

Lines changed: 4 additions & 11 deletions
@@ -17,16 +17,7 @@

 package org.apache.spark.sql.catalyst.analysis

-import org.apache.spark.sql.catalyst.expressions.{
-  Alias,
-  CurrentDate,
-  CurrentTimestamp,
-  CurrentUser,
-  Expression,
-  GroupingID,
-  NamedExpression,
-  VirtualColumn
-}
+import org.apache.spark.sql.catalyst.expressions.{Alias, CurrentDate, CurrentTime, CurrentTimestamp, CurrentUser, Expression, GroupingID, NamedExpression, VirtualColumn}
 import org.apache.spark.sql.catalyst.util.toPrettySQL

 /**
@@ -47,10 +38,12 @@ object LiteralFunctionResolution {
     }
   }

-  // support CURRENT_DATE, CURRENT_TIMESTAMP, CURRENT_USER, USER, SESSION_USER and grouping__id
+  // support CURRENT_DATE, CURRENT_TIMESTAMP, CURRENT_TIME,
+  // CURRENT_USER, USER, SESSION_USER and grouping__id
   private val literalFunctions: Seq[(String, () => Expression, Expression => String)] = Seq(
     (CurrentDate().prettyName, () => CurrentDate(), toPrettySQL(_)),
     (CurrentTimestamp().prettyName, () => CurrentTimestamp(), toPrettySQL(_)),
+    (CurrentTime().prettyName, () => CurrentTime(), toPrettySQL(_)),
     (CurrentUser().prettyName, () => CurrentUser(), toPrettySQL),
     ("user", () => CurrentUser(), toPrettySQL),
     ("session_user", () => CurrentUser(), toPrettySQL),

sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/timeExpressions.scala

Lines changed: 112 additions & 2 deletions
@@ -19,13 +19,17 @@ package org.apache.spark.sql.catalyst.expressions

 import java.time.DateTimeException

-import org.apache.spark.sql.catalyst.analysis.ExpressionBuilder
+import org.apache.spark.sql.catalyst.analysis.{ExpressionBuilder, TypeCheckResult}
+import org.apache.spark.sql.catalyst.analysis.TypeCheckResult.{DataTypeMismatch, TypeCheckSuccess}
+import org.apache.spark.sql.catalyst.expressions.Cast.{toSQLExpr, toSQLId, toSQLType, toSQLValue}
 import org.apache.spark.sql.catalyst.expressions.objects.{Invoke, StaticInvoke}
+import org.apache.spark.sql.catalyst.trees.TreePattern.{CURRENT_LIKE, TreePattern}
 import org.apache.spark.sql.catalyst.util.DateTimeUtils
 import org.apache.spark.sql.catalyst.util.TimeFormatter
+import org.apache.spark.sql.catalyst.util.TypeUtils.{ordinalNumber}
 import org.apache.spark.sql.errors.{QueryCompilationErrors, QueryExecutionErrors}
 import org.apache.spark.sql.internal.types.StringTypeWithCollation
-import org.apache.spark.sql.types.{AbstractDataType, IntegerType, ObjectType, TimeType, TypeCollection}
+import org.apache.spark.sql.types.{AbstractDataType, DataType, IntegerType, ObjectType, TimeType, TypeCollection}
 import org.apache.spark.unsafe.types.UTF8String

 /**
@@ -349,3 +353,109 @@ object SecondExpressionBuilder extends ExpressionBuilder {
   }
 }

+/**
+ * Returns the current time at the start of query evaluation.
+ * There is no code generation since this expression should get constant folded by the optimizer.
+ */
+// scalastyle:off line.size.limit
+@ExpressionDescription(
+  usage = """
+    _FUNC_([precision]) - Returns the current time at the start of query evaluation.
+    All calls of current_time within the same query return the same value.
+
+    _FUNC_ - Returns the current time at the start of query evaluation.
+  """,
+  arguments = """
+    Arguments:
+      * precision - An optional integer literal in the range [0..6], indicating how many
+                    fractional digits of seconds to include. If omitted, the default is 6.
+  """,
+  examples = """
+    Examples:
+      > SELECT _FUNC_();
+       15:49:11.914120
+      > SELECT _FUNC_;
+       15:49:11.914120
+      > SELECT _FUNC_(0);
+       15:49:11
+      > SELECT _FUNC_(3);
+       15:49:11.914
+      > SELECT _FUNC_(1+1);
+       15:49:11.91
+  """,
+  group = "datetime_funcs",
+  since = "4.1.0"
+)
+case class CurrentTime(child: Expression = Literal(TimeType.MICROS_PRECISION))
+  extends UnaryExpression with FoldableUnevaluable with ImplicitCastInputTypes {
+
+  def this() = {
+    this(Literal(TimeType.MICROS_PRECISION))
+  }
+
+  final override val nodePatterns: Seq[TreePattern] = Seq(CURRENT_LIKE)
+
+  override def nullable: Boolean = false
+
+  override def checkInputDataTypes(): TypeCheckResult = {
+    // Check foldability
+    if (!child.foldable) {
+      return DataTypeMismatch(
+        errorSubClass = "NON_FOLDABLE_INPUT",
+        messageParameters = Map(
+          "inputName" -> toSQLId("precision"),
+          "inputType" -> toSQLType(child.dataType),
+          "inputExpr" -> toSQLExpr(child)
+        )
+      )
+    }
+
+    // Evaluate
+    val precisionValue = child.eval()
+    if (precisionValue == null) {
+      return DataTypeMismatch(
+        errorSubClass = "UNEXPECTED_NULL",
+        messageParameters = Map("exprName" -> "precision"))
+    }
+
+    // Check numeric range
+    precisionValue match {
+      case n: Number =>
+        val p = n.intValue()
+        if (p < TimeType.MIN_PRECISION || p > TimeType.MICROS_PRECISION) {
+          return DataTypeMismatch(
+            errorSubClass = "VALUE_OUT_OF_RANGE",
+            messageParameters = Map(
+              "exprName" -> toSQLId("precision"),
+              "valueRange" -> s"[${TimeType.MIN_PRECISION}, ${TimeType.MICROS_PRECISION}]",
+              "currentValue" -> toSQLValue(p, IntegerType)
+            )
+          )
+        }
+      case _ =>
+        return DataTypeMismatch(
+          errorSubClass = "UNEXPECTED_INPUT_TYPE",
+          messageParameters = Map(
+            "paramIndex" -> ordinalNumber(0),
+            "requiredType" -> toSQLType(IntegerType),
+            "inputSql" -> toSQLExpr(child),
+            "inputType" -> toSQLType(child.dataType))
+        )
+    }
+    TypeCheckSuccess
+  }
+
+  // Because checkInputDataTypes ensures the argument is foldable & valid,
+  // we can directly evaluate here.
+  lazy val precision: Int = child.eval().asInstanceOf[Number].intValue()

+  override def dataType: DataType = TimeType(precision)
+
+  override def prettyName: String = "current_time"
+
+  override protected def withNewChildInternal(newChild: Expression): Expression = {
+    copy(child = newChild)
+  }
+
+  override def inputTypes: Seq[AbstractDataType] = Seq(IntegerType)
+}
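Because `checkInputDataTypes` only requires a foldable integer in [0, 6], the precision, and with it the result type, can come from a constant expression rather than a bare literal. A hedged spark-shell sketch (column name as shown in the PR description; schema line format approximate):

```scala
// (4 - 2) * (1 + 1) folds to 4 during analysis, so precision = 4 and
// dataType is TIME(4); nullable is false per the override above.
spark.sql("SELECT current_time((4 - 2) * (1 + 1))").printSchema()
// root
//  |-- current_time(((4 - 2) * (1 + 1))): time(4) (nullable = false)
```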

sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/finishAnalysis.scala

Lines changed: 6 additions & 0 deletions
@@ -31,6 +31,7 @@ import org.apache.spark.sql.catalyst.trees.TreePattern._
 import org.apache.spark.sql.catalyst.trees.TreePatternBits
 import org.apache.spark.sql.catalyst.util.DateTimeUtils
 import org.apache.spark.sql.catalyst.util.DateTimeUtils.{convertSpecialDate, convertSpecialTimestamp, convertSpecialTimestampNTZ, instantToMicros, localDateTimeToMicros}
+import org.apache.spark.sql.catalyst.util.SparkDateTimeUtils.{instantToMicrosOfDay, truncateTimeMicrosToPrecision}
 import org.apache.spark.sql.catalyst.util.TypeUtils.toSQLExpr
 import org.apache.spark.sql.connector.catalog.CatalogManager
 import org.apache.spark.sql.types._
@@ -113,6 +114,7 @@ object ComputeCurrentTime extends Rule[LogicalPlan] {
     val instant = Instant.now()
     val currentTimestampMicros = instantToMicros(instant)
     val currentTime = Literal.create(currentTimestampMicros, TimestampType)
+    val currentTimeOfDayMicros = instantToMicrosOfDay(instant, conf.sessionLocalTimeZone)
     val timezone = Literal.create(conf.sessionLocalTimeZone, StringType)
     val currentDates = collection.mutable.HashMap.empty[ZoneId, Literal]
     val localTimestamps = collection.mutable.HashMap.empty[ZoneId, Literal]
@@ -129,6 +131,10 @@ object ComputeCurrentTime extends Rule[LogicalPlan] {
           Literal.create(
             DateTimeUtils.microsToDays(currentTimestampMicros, cd.zoneId), DateType)
         })
+      case currentTimeType : CurrentTime =>
+        val truncatedTime = truncateTimeMicrosToPrecision(currentTimeOfDayMicros,
+          currentTimeType.precision)
+        Literal.create(truncatedTime, TimeType(currentTimeType.precision))
       case CurrentTimestamp() | Now() => currentTime
       case CurrentTimeZone() => timezone
       case localTimestamp: LocalTimestamp =>
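The rule captures one `Instant` per query, so every `CurrentTime` node is rewritten to a literal derived from the same micros-of-day value and differs only by truncation. Replaying the multi-precision row from the PR description with the `truncate` stand-in sketched earlier:

```scala
// 17:15:47.680648 expressed as microseconds since midnight.
val micros = (17L * 3600 + 15 * 60 + 47) * 1000000L + 680648
(0 to 6).foreach(p => println(s"TIME($p) -> ${CurrentTimeSketch.truncate(micros, p)}"))
// TIME(0) -> 62147000000  (17:15:47)
// TIME(2) -> 62147680000  (17:15:47.68)
// TIME(6) -> 62147680648  (17:15:47.680648)
```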

sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala

Lines changed: 3 additions & 1 deletion
@@ -2888,12 +2888,14 @@ class AstBuilder extends DataTypeAstBuilder
         CurrentDate()
       case SqlBaseParser.CURRENT_TIMESTAMP =>
         CurrentTimestamp()
+      case SqlBaseParser.CURRENT_TIME =>
+        CurrentTime()
       case SqlBaseParser.CURRENT_USER | SqlBaseParser.USER | SqlBaseParser.SESSION_USER =>
         CurrentUser()
     }
   } else {
     // If the parser is not in ansi mode, we should return `UnresolvedAttribute`, in case there
-    // are columns named `CURRENT_DATE` or `CURRENT_TIMESTAMP`.
+    // are columns named `CURRENT_DATE` or `CURRENT_TIMESTAMP` or `CURRENT_TIME`
     UnresolvedAttribute.quoted(ctx.name.getText)
   }
 }
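The non-ANSI branch matters when a relation really does have a column named after one of these keywords; a hedged sketch of the disambiguation (behavior inferred from the comment above, assuming `spark.sql.ansi.enabled=false`):

```scala
// With ANSI mode off, the bare name parses as an UnresolvedAttribute, so an
// existing column called current_time shadows the literal function.
spark.sql("SET spark.sql.ansi.enabled=false")
spark.sql(
  "SELECT current_time FROM VALUES ('a column, not the function') AS t(current_time)"
).show(truncate = false)
// Expected: the string value above, not a TIME literal.
```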

sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala

Lines changed: 19 additions & 0 deletions
@@ -819,6 +819,25 @@ class AnalysisSuite extends AnalysisTest with Matchers {
     }
   }

+  test("CURRENT_TIME should be case insensitive") {
+    withSQLConf(SQLConf.CASE_SENSITIVE.key -> "true") {
+      val input = Project(Seq(
+        // The user references "current_time" or "CURRENT_TIME" in the query
+        UnresolvedAttribute("current_time"),
+        UnresolvedAttribute("CURRENT_TIME")
+      ), testRelation)
+
+      // The analyzer should resolve both to the same expression: CurrentTime()
+      val expected = Project(Seq(
+        Alias(CurrentTime(), toPrettySQL(CurrentTime()))(),
+        Alias(CurrentTime(), toPrettySQL(CurrentTime()))()
+      ), testRelation).analyze
+
+      checkAnalysis(input, expected)
+    }
+  }
+
+
   test("CTE with non-existing column alias") {
     assertAnalysisErrorCondition(parsePlan("WITH t(x) AS (SELECT 1) SELECT * FROM t WHERE y = 1"),
       "UNRESOLVED_COLUMN.WITH_SUGGESTION",

sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/ResolveInlineTablesSuite.scala

Lines changed: 28 additions & 2 deletions
@@ -21,11 +21,11 @@ import org.scalatest.BeforeAndAfter

 import org.apache.spark.sql.AnalysisException
 import org.apache.spark.sql.catalyst.EvaluateUnresolvedInlineTable
-import org.apache.spark.sql.catalyst.expressions.{Alias, Cast, CurrentTimestamp, Literal, Rand}
+import org.apache.spark.sql.catalyst.expressions.{Alias, Cast, CurrentTime, CurrentTimestamp, Literal, Rand}
 import org.apache.spark.sql.catalyst.expressions.aggregate.Count
 import org.apache.spark.sql.catalyst.optimizer.{ComputeCurrentTime, EvalInlineTables}
 import org.apache.spark.sql.catalyst.plans.logical.LocalRelation
-import org.apache.spark.sql.types.{LongType, NullType, TimestampType}
+import org.apache.spark.sql.types.{LongType, NullType, TimestampType, TimeType}

 /**
  * Unit tests for [[ResolveInlineTables]]. Note that there are also test cases defined in
@@ -113,6 +113,32 @@ class ResolveInlineTablesSuite extends AnalysisTest with BeforeAndAfter {
     }
   }

+  test("cast and execute CURRENT_TIME expressions") {
+    val table = UnresolvedInlineTable(
+      Seq("c1"),
+      Seq(
+        Seq(CurrentTime()),
+        Seq(CurrentTime())
+      )
+    )
+    val resolved = ResolveInlineTables(table)
+    assert(resolved.isInstanceOf[ResolvedInlineTable],
+      "Expected an inline table to be resolved into a ResolvedInlineTable")
+
+    val transformed = ComputeCurrentTime(resolved)
+    EvalInlineTables(transformed) match {
+      case LocalRelation(output, data, _, _) =>
+        // expect default precision = 6
+        assert(output.map(_.dataType) == Seq(TimeType(6)))
+        // Should have 2 rows
+        assert(data.size == 2)
+        // Both rows should have the *same* microsecond value for current_time
+        assert(data(0).getLong(0) == data(1).getLong(0),
+          "Both CURRENT_TIME calls must yield the same value in the same query")
+    }
+  }
+
   test("convert TimeZoneAwareExpression") {
     val table = UnresolvedInlineTable(Seq("c1"),
       Seq(Seq(Cast(lit("1991-12-06 00:00:00.0"), TimestampType))))

sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/TimeExpressionsSuite.scala

Lines changed: 48 additions & 1 deletion
@@ -19,8 +19,10 @@ package org.apache.spark.sql.catalyst.expressions

 import org.apache.spark.{SPARK_DOC_ROOT, SparkDateTimeException, SparkFunSuite}
 import org.apache.spark.sql.AnalysisException
+import org.apache.spark.sql.catalyst.analysis.TypeCheckResult.{DataTypeMismatch, TypeCheckSuccess}
+import org.apache.spark.sql.catalyst.expressions.Cast.{toSQLId, toSQLValue}
 import org.apache.spark.sql.catalyst.util.DateTimeTestUtils._
-import org.apache.spark.sql.types.{StringType, TimeType}
+import org.apache.spark.sql.types.{IntegerType, StringType, TimeType}

 class TimeExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
   test("ParseToTime") {
@@ -226,4 +228,49 @@ class TimeExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkConsistencyBetweenInterpretedAndCodegen(
       (child: Expression) => SecondsOfTime(child).replacement, TimeType())
   }
+
+  test("CurrentTime") {
+    // test valid precision
+    var expr = CurrentTime(Literal(3))
+    assert(expr.dataType == TimeType(3), "Should produce TIME(3) data type")
+    assert(expr.checkInputDataTypes() == TypeCheckSuccess)
+
+    // test default constructor => TIME(6)
+    expr = CurrentTime()
+    assert(expr.precision == 6, "Default precision should be 6")
+    assert(expr.dataType == TimeType(6))
+    assert(expr.checkInputDataTypes() == TypeCheckSuccess)
+
+    // test no value => TIME()
+    expr = CurrentTime()
+    assert(expr.precision == 6, "Default precision should be 6")
+    assert(expr.dataType == TimeType(6))
+    assert(expr.checkInputDataTypes() == TypeCheckSuccess)
+
+    // test foldable value
+    expr = CurrentTime(Literal(1 + 1))
+    assert(expr.precision == 2, "Precision should be 2")
+    assert(expr.dataType == TimeType(2))
+    assert(expr.checkInputDataTypes() == TypeCheckSuccess)
+
+    // test out of range precision => checkInputDataTypes fails
+    expr = CurrentTime(Literal(2 + 8))
+    assert(expr.checkInputDataTypes() ==
+      DataTypeMismatch(
+        errorSubClass = "VALUE_OUT_OF_RANGE",
+        messageParameters = Map(
+          "exprName" -> toSQLId("precision"),
+          "valueRange" -> s"[${TimeType.MIN_PRECISION}, ${TimeType.MICROS_PRECISION}]",
+          "currentValue" -> toSQLValue(10, IntegerType)
+        )
+      )
+    )
+
+    // test non number value should fail since we skip analyzer here
+    expr = CurrentTime(Literal("2"))
+    val failure = intercept[ClassCastException] {
+      expr.precision
+    }
+    assert(failure.getMessage.contains("cannot be cast to class java.lang.Number"))
+  }
 }
