@@ -1,6 +1,6 @@
 Status: published
 Date: 2023-05-06 13:28:33
-Modified: 2025-06-02 12:16:52
+Modified: 2025-06-02 23:12:26
 Author: Benjamin Du
 Slug: tips-on-large-language-models
 Title: Tips on Large Language Models
@@ -200,38 +200,63 @@ Tags: Computer Science, programming, AI, machine learning, LLM, large language m
 it acts as a "ChatGPT for people," allowing you to search for individuals based on your network and desired qualifications.
 </td>
 </tr>
-</tbody>
-</table>
-
-
-
 <tr>
 <td class="tg-0pky">
-    <a href="https://www.emergentmind.com/">Emergent Mind</a>
+    <a href="https://github.com/jmorganca/ollama">ollama</a>
 </td>
 <td class="tg-0pky">
-    Research
+    Deploy
 </td>
 <td class="tg-0pky">
 </td>
 <td class="tg-0pky">
-    AI Research Assistant for Computer Scientists
+    Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1
+    and other large language models.
 </td>
 <tr>
+<tr>
+<td class="tg-0pky">
+    <a href="https://github.com/OpenBMB/ToolBench">ToolBench</a>
+</td>
+<td class="tg-0pky">
+    Benchmark
+</td>
+<td class="tg-0pky">
+</td>
+<td class="tg-0pky">
+    An open platform for training, serving, and evaluating large language models for tool learning.
+</td>
+</tr>
+</tbody>
+</table>
+
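+A minimal sketch of calling the local ollama REST API from Python (assuming
+ollama is installed and serving on its default port 11434; the model name
+`llama3` is just a placeholder for whatever model you have pulled):
+
+```python
+import json
+import urllib.request
+
+# Request a single (non-streamed) completion from the local ollama server.
+payload = {
+    "model": "llama3",  # placeholder; use any model you have pulled locally
+    "prompt": "Explain what a large language model is in one sentence.",
+    "stream": False,
+}
+req = urllib.request.Request(
+    "http://localhost:11434/api/generate",
+    data=json.dumps(payload).encode("utf-8"),
+    headers={"Content-Type": "application/json"},
+)
+with urllib.request.urlopen(req) as resp:
+    print(json.loads(resp.read())["response"])
+```
+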
 
 
-- [LLama - Local](https://github.com/jmorganca/ollama)
 
-- [ToolBench](https://github.com/OpenBMB/ToolBench)
 
-    [ToolBench](https://github.com/OpenBMB/ToolBench)
-    aims to construct open-source, large-scale, high-quality instruction
-    tuning SFT data to facilitate the construction of powerful LLMs with general tool-use capability.
-    We aim to empower open-source LLMs to master thousands of diverse real-world APIs.
-    We achieve this by collecting a high-quality instruction-tuning dataset.
-    It is constructed automatically using the latest ChatGPT (gpt-3.5-turbo-16k),
-    which is upgraded with enhanced function call capabilities.
-    We provide the dataset, the corresponding training and evaluation scripts, and a capable model ToolLLaMA fine-tuned on ToolBench.
 
 ## Tutorials
 