Commit 038528b
Fix toolcalling timestamp (#22)
* Fix tool-calling timestamp bug: the 0.1.4 server started returning `completed_at` and `started_at` timestamps, but they are not compliant with the `OffsetDateTime` format the SDK expects. This causes `agentTurnResponseStepComplete` to be missing or incorrectly type-cast to an unknown type. This patch ignores the timestamps for now, at least until the 0.1.5 release next week, when the format is corrected on the server side.
* Update readme and bump version
1 parent 53103b3 commit 038528b
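The failure mode described above comes down to `java.time.OffsetDateTime.parse`, which accepts only ISO-8601 values that carry a UTC offset. A minimal sketch of the behavior (the non-compliant input below is a hypothetical example; the exact format the 0.1.4 server returned is not shown in this commit):

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeParseException;

public class TimestampParseDemo {
    public static void main(String[] args) {
        // ISO-8601 with a UTC offset: the format the SDK's OffsetDateTime fields expect.
        OffsetDateTime ok = OffsetDateTime.parse("2019-12-27T18:11:19.117Z");
        System.out.println("parsed: " + ok);

        // A timestamp with no offset (hypothetical non-compliant server value)
        // cannot be parsed as an OffsetDateTime and throws.
        try {
            OffsetDateTime.parse("2019-12-27T18:11:19.117");
        } catch (DateTimeParseException e) {
            System.out.println("rejected: " + e.getParsedString());
        }
    }
}
```

When such a parse fails inside response deserialization, the whole step event can end up dropped or surfaced with the wrong type, which matches the `agentTurnResponseStepComplete` symptom in the commit message.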

File tree

4 files changed: +14 additions, -12 deletions

README.md

Lines changed: 7 additions & 5 deletions

@@ -8,7 +8,9 @@ Features:
 - Remote Inferencing: Perform inferencing tasks remotely with Llama models hosted on a remote connection (or serverless localhost).
 - Simple Integration: With easy-to-use APIs, a developer can quickly integrate Llama Stack in their Android app. The difference with local vs remote inferencing is also minimal.

-Latest Release Notes: [v0.1.4](https://github.com/meta-llama/llama-stack-client-kotlin/releases/tag/v0.1.4)
+Latest Release Notes: [v0.1.4.1](https://github.com/meta-llama/llama-stack-client-kotlin/releases/tag/v0.1.4.1)
+
+Note: The currently recommended combination is a 0.1.4 Llama Stack server with the 0.1.4.1 Kotlin client SDK. Kotlin SDK 0.1.4 has a known bug in tool calling, which will be fixed in an upcoming Llama Stack server release.

 *Tagged releases are stable versions of the project. While we strive to maintain a stable main branch, it's not guaranteed to be free of bugs or issues.*

@@ -24,7 +26,7 @@ The key files in the app are `ExampleLlamaStackLocalInference.kt`, `ExampleLlama
 Add the following dependency in your `build.gradle.kts` file:
 ```
 dependencies {
-    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.1.4")
+    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.1.4.1")
 }
 ```
 This will download jar files in your gradle cache in a directory like `~/.gradle/caches/modules-2/files-2.1/com.llama.llamastack/`

@@ -60,7 +62,7 @@ Start a Llama Stack server on localhost. Here is an example of how you can do th
 ```
 conda create -n stack-fireworks python=3.10
 conda activate stack-fireworks
-pip install llama-stack=0.1.4
+pip install llama-stack=0.1.4.1
 llama stack build --template fireworks --image-type conda
 export FIREWORKS_API_KEY=<SOME_KEY>
 llama stack run /Users/<your_username>/.llama/distributions/llamastack-fireworks/fireworks-run.yaml --port=5050

@@ -99,7 +101,7 @@ client = LlamaStackClientLocalClient
 client = LlamaStackClientOkHttpClient
     .builder()
     .baseUrl(remoteURL)
-    .headers(mapOf("x-llamastack-client-version" to listOf("0.1.4")))
+    .headers(mapOf("x-llamastack-client-version" to listOf("0.1.4.1")))
     .build()
 ```
 </td>

@@ -286,7 +288,7 @@ The purpose of this section is to share more details with users that would like
 ### Prerequisite

 You must complete the following steps:
-1. Clone the repo (`git clone https://github.com/meta-llama/llama-stack-client-kotlin.git -b release/0.1.4`)
+1. Clone the repo (`git clone https://github.com/meta-llama/llama-stack-client-kotlin.git -b release/0.1.4.1`)
 2. Port the appropriate ExecuTorch libraries over into your Llama Stack Kotlin library environment.
 ```
 cd llama-stack-client-kotlin-client-local

build.gradle.kts

Lines changed: 1 addition & 1 deletion

@@ -4,5 +4,5 @@ plugins {

 allprojects {
     group = "com.llama.llamastack"
-    version = "0.1.4"
+    version = "0.1.4.1"
 }

llama-stack-client-kotlin-core/src/main/kotlin/com/llama/llamastack/models/InferenceStep.kt

Lines changed: 2 additions & 2 deletions

@@ -50,9 +50,9 @@ private constructor(

     fun turnId(): String = turnId.getRequired("turn_id")

-    fun completedAt(): OffsetDateTime? = completedAt.getNullable("completed_at")
+    fun completedAt(): OffsetDateTime? = null

-    fun startedAt(): OffsetDateTime? = startedAt.getNullable("started_at")
+    fun startedAt(): OffsetDateTime? = null

     /** A message containing the model's (assistant) response in a chat conversation. */
     @JsonProperty("model_response")
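Hard-coding `null` discards the timestamps entirely until the server format is fixed. A less lossy alternative, sketched below and not part of this commit (the `parseOrNull` helper name is hypothetical), would be to attempt the parse and fall back to `null` only when the value is non-compliant:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeParseException;

public class SafeTimestamp {
    // Hypothetical helper: return the parsed value when the server sends a
    // compliant ISO-8601 offset timestamp, and null otherwise, instead of
    // letting DateTimeParseException corrupt the whole step event.
    static OffsetDateTime parseOrNull(String raw) {
        if (raw == null) return null;
        try {
            return OffsetDateTime.parse(raw);
        } catch (DateTimeParseException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrNull("2019-12-27T18:11:19.117Z"));
        System.out.println(parseOrNull("not-a-timestamp"));
    }
}
```

With this approach, clients talking to a fixed 0.1.5 server would get real timestamps immediately, without another SDK release.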

llama-stack-client-kotlin-core/src/test/kotlin/com/llama/llamastack/models/InferenceStepTest.kt

Lines changed: 4 additions & 4 deletions

@@ -56,9 +56,9 @@ class InferenceStepTest {
         )
         assertThat(inferenceStep.stepId()).isEqualTo("step_id")
         assertThat(inferenceStep.turnId()).isEqualTo("turn_id")
-        assertThat(inferenceStep.completedAt())
-            .isEqualTo(OffsetDateTime.parse("2019-12-27T18:11:19.117Z"))
-        assertThat(inferenceStep.startedAt())
-            .isEqualTo(OffsetDateTime.parse("2019-12-27T18:11:19.117Z"))
+        // assertThat(inferenceStep.completedAt())
+        //     .isEqualTo(OffsetDateTime.parse("2019-12-27T18:11:19.117Z"))
+        // assertThat(inferenceStep.startedAt())
+        //     .isEqualTo(OffsetDateTime.parse("2019-12-27T18:11:19.117Z"))
     }
 }
