{"Sysmsg":"You are a technical writer.","Version":"3.0.6"}
USER:
Write user stories for grokker based on the README.
File: README.md
```
# grokker
A swiss-army-knife tool for natural language processing, code and file
interpretation and generation, and AI-based research and development.
Uses OpenAI API services for backend.
- Interactive conversation with one or more documents and/or code
- Design, research, and rapid learning
- Local vector database with several useful interfaces
- Easy VIM integration
- Chat client with named file I/O
- Able to accept and generate file content in natural language, code,
structured messages, or a combination
- LLM tooling including system message inputs, token counting and
embedding subcommands
Grokker helped create and maintain this document in a VIM session by
reading its own source code along with iterations of this document.
## Roadmap
Here's where it appears this project is going:
- Multi-agent collaboration (with human, AI, and algorithmic agents)
- working on this right now
- Decentralized consensus tool
- a useful side-effect of multi-agent collaboration
- Web Assembly (WASM/WASI) execution
- enables easy use and distribution
- Web UI (while keeping CLI)
- enabled by WASM/WASI
- Plugin architecture
- enabled by WASM/WASI
- Decentralized storage
- enabled by WASM/WASI
- Decentralized virtual machine
- enabled by WASM/WASI
- Decentralized vector database
- enabled by decentralized storage/VM
- Decentralized neural nets
- enabled by decentralized computing/storage/VM
- Decentralized LLM/AI
- enabled by all of the above
- Decentralized community consensus and governance
- enabled by all of the above
## Example Usage
Getting started, using grokker's own source code as example documents:
```
$ grok init
Initialized a new .grok file in the current directory.
$ grok add README.md cmd/grok/main.go grokker.go TODO.md
Creating .grok file... adding README.md... adding cmd/grok/main.go...
adding grokker.go... adding TODO.md... done!
```
Make a one-time query without storing chat history:
```
$ grok q "What is grokker?"
Grokker is a tool for interactive conversation with one or more
documents, used for research, training, and rapid learning. It
utilizes OpenAI API services for the backend. Essentially, you can
input documents into Grokker and it will use natural language
processing to analyze the text and help you answer questions about
those documents.
```
Same thing, but providing the question on stdin:
```
$ echo 'What is the `qi` subcommand for?' | grok qi
The 'qi' subcommand allows you to ask a question by providing it on
standard input rather than passing it as a command-line argument. This
subcommand is especially useful in editor sessions and when writing
plugins -- more about this below.
```
## Installation
```
go install github.com/stevegt/grokker/cmd/grok
```
You'll need an account with OpenAI and an API key. You can create and
manage your API keys in your OpenAI dashboard.
Before using Grokker, you'll need to set your API key as an
environment variable in your terminal session or .bashrc with the
following command:
```
export OPENAI_API_KEY=<your_api_key>
```
## Tell me more about the `chat` subcommand
The `chat` subcommand allows you to interact with the system's
knowledge base in a conversational manner. It accepts a `chatfile` as
a mandatory argument, which is where the chat history is stored. The
`-s` flag is optional and can be used to pass a system message, which
controls the behavior of the OpenAI API. In usage, it might look like
`grok chat chatfile -s sysmsg`, where `sysmsg` is an optional system
message, and `chatfile` is the required text file where the chat
history is stored. There are several other optional flags that can be
used with the `chat` subcommand, such as a `-m` flag so you can
provide the prompt on the command line instead of on stdin, and `-i`
and `-o` flags to specify input and output files. There are also
flags to control context sources. See `grok chat -h` for more
details.
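For example, a session might look like this (the chat filename and
prompts here are illustrative; see `grok chat -h` for the full flag
list):
```
# design.chat is an arbitrary file where this chat's history accumulates;
# provide the prompt with -m instead of stdin
$ grok chat design.chat -s "You are a code reviewer." -m "Review main.go for error handling."
# or pipe the prompt in, continuing the same chat history
$ echo "Summarize the decisions so far." | grok chat design.chat
```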
## Tell me more about the `qi` subcommand
The `qi` subcommand allows you to ask a question by providing it on
stdin. It's a way to generate quick answers to questions without
having to provide the question as a command-line argument.
The `qi` subcommand enables you to use grokker as a chat client in an
editor session by typing questions directly in your document and
receiving answers inline after the question.
## Using grokker as a chat client in an editor session
Using Grokker as a chat client in an editor session can help you
quickly find and summarize information from a set of documents in a
local directory tree, including the documents you are editing. This
can make your development, research, or learning process more
efficient and streamlined. Additionally, having the context of your
editor session available as part of the chat history can help you
better keep track of and synthesize information as you work.
It's a quick way to build a document and was used to build this one.
Using grokker as a chat client in an editor session is also a way to
access the backend servers used by ChatGPT without being constrained
by the ChatGPT web frontend, all while maintaining your own chat
history and any additional context in your own local files,
versionable in git.
### How can I use grokker in a VIM editor session?
To use the `qi` subcommand in a VIM editor session, you can add a
keyboard mapping to your vimrc file. Here's an example mapping:
```
:map <leader>g vap:!grok qi<CR>
```
This mapping will allow you to ask a question by typing it in VIM and
then pressing `<leader>g`. The `vap` causes the current paragraph to be
highlighted, and the `:!grok qi` causes it to be sent as input to the
`qi` subcommand of Grokker. The answer will be inserted into the
buffer below the current paragraph. Note that the mapping assumes that
Grokker's `grok` command is installed and in your system path.
You will get better results if you `:set autowrite` so the current
file's most recent content will be included in the question context.
Experiment with variations on these mappings -- you might emphasize
more recent context by including the previous two paragraphs as part
of the query, or the most recent 50 lines, or the output of `git
diff`, etc. (Future versions of grokker may help with this
by timestamping individual document chunks and prioritizing more
recent edits.)
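As one untested sketch of such a variation, this mapping filters the
most recent 50 lines through `qi` instead of a single paragraph:
```
" sketch only: send the previous 50 lines (including this one) to grok qi;
" as with the vap mapping, the filtered text is replaced by grok's output
:map <leader>G :-49,.!grok qi<CR>
```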
In practice, as of this writing I either hit `<leader>g` to highlight
and use the current paragraph as the GPT query, or I use `<Shift-V>` to
highlight several paragraphs for more context, and then run
`:'<,'>!grok qi`. Works.
## Tell me more about the `-g` flag
The `-g` flag is an optional parameter that you can include when
running the `q` subcommand. It stands for "global" and when included,
Grokker will provide answers not only from the local documents that
you've added but also from OpenAI's global knowledge base. This means
that you'll get a wider range of potentially useful answers, but it
may take longer to receive your results as the global knowledge base
is larger and may take more time to search through. If you don't
include the `-g` flag, Grokker will prefer the local documents that
you've added.
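For example (the question shown is illustrative):
```
# prefer the locally added documents
$ grok q "How are embeddings stored?"
# also draw on the model's broader built-in knowledge
$ grok q -g "How are embeddings stored?"
```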
## What are the `models` and `model` subcommands?
The `models` subcommand is used to list all the available OpenAI
models for text processing in Grokker, including their name and
maximum token limit.
The `model` subcommand is used to set the default GPT model for use in
queries. This default is stored in the local .grok db. (I haven't
added a flag to override this default for a single query, but this
would be doable.)
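For example (the model name shown is illustrative; `grok models`
prints the real list):
```
# list available models and their token limits
$ grok models
# set the default model recorded in the local .grok db
$ grok model gpt-4
```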
## About the words `grokker` and `grok`
The word `grok` is from Robert Heinlein's [Stranger in a Strange
Land](https://en.wikipedia.org/wiki/Stranger_in_a_Strange_Land) --
there's a good summary of the word's meaning and history in
[Wikipedia](https://en.wikipedia.org/wiki/Grok). Roughly translated,
it means "to understand something so thoroughly that the observer
becomes a part of the observed".
It's a popular word in science fiction and computer science and the
namespace is crowded.
The name `grokker` is used by the company grokker.com, though the
problem domains are different. We are not affiliated with
grokker.com.
The folks at xAI released and filed a trademark application for the
`grok` online AI tool several months after we were already using the
word in this project. We're not affiliated with xAI, but we wish them
well.
Jordan Sissel's log file analysis tool also uses a `grok` command. If
you want to install grokker on the same machine, you can install it
using an alternate command name. Here's an example of installing
grokker as `grokker` instead of `grok`:
```
cd /tmp
git clone https://github.com/stevegt/grokker
cd grokker/cmd/grok/
go build -o grokker
cp grokker $GOPATH/bin
```
## Is grokker done?
Grokker is not done, but I use it extensively every day. See
[TODO.md](./TODO.md) for a pretty long list of wishlist and brainstorm
items. At this time, refactoring the storage for text chunks and
embeddings is likely the most important -- that .grok file can get
pretty big. So far I haven't seen performance problems even when
grokking several dozen documents or source code files, but I want to
be able to grok an entire tree of hundreds of files without concern.
In all of the following use cases, I'd say my own productivity has
increased by an order of magnitude -- I'm finding myself finishing
projects in days that previously would have taken weeks. What's
really nice is that I'm finally making progress on years-old complex
projects that were previously stalled.
### What are some use cases grokker already supports?
In all of the following use cases, I tend to create and `grok add` a
`context.md` file that I use as a scratchpad, writing and refining
questions and answers as I work on other files in the same directory
or repository. This file is my interactive, animated [rubber
duck](https://en.wikipedia.org/wiki/Rubber_duck_debugging). This
technique has worked well. I'm considering switching to using
something like `grok.md`, `grokker.md`, `groktext.md`, or `gpt.md` for
this filename and proposing it as a best practice.
Grokker has been a huge help in its original use case -- getting up to
speed quickly on complex topics, documents, and code bases. It's
particularly good at translating the unique terminology that tends to
exist in specialized papers and code. The large language models
backing grokker are optimized for inferring meaning from context;
this allows them to expand terms into more general language even in
cases where the original author was unable to make that difficult
leap.
I've been pleasantly surprised by how much grokker has also helped
translate my own ideas into documents and code. I can describe things
in my own terms in one or more files, and just as with others' works,
the language models do a better job than I can of translating my
terminology into more-general human language and executable code.
Another useful technique I've found is to prompt the model to ask me
questions about a concept I'm having trouble getting out of my own
head into text; I then ask the model to answer its own questions, then
I manually edit the answers to reflect what I'm actually thinking.
This clears writer's block, reduces my own typing workload by moving
me into the role of editor, and helps to more quickly detect and
resolve uncertainties. Because grokker will include my edited text as
context in future model queries, this provides feedback to the model,
causing future answers to converge toward my intent. (See
[RLHF](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback)
for one possible way of formalizing this.)
# Important disclaimer regarding sensitive and confidential information
Using OpenAI's API services to analyze documents means that any
document you `grok add`, and any question you ask of grokker, will be
broken into chunks and sent to OpenAI's servers twice -- first to
generate context embedding vectors, and again as relevant context when
you run `grok q` or `grok qi`.
If any of your document's content is sensitive or confidential, you'll
want to review OpenAI's policies regarding data usage and retention.
Additionally, some topics are banned by OpenAI's policies, so be sure
to review and comply with their guidelines in order to prevent your
API access from being suspended or revoked.
As always, it's a good idea to review the terms and conditions of any
API service you're considering using to ensure that you're comfortable
with how your data will be handled.
```
AI:
File: user_stories.md
```markdown
# User Stories for Grokker
### As a software developer
- I want to have interactive conversations with my code and documentation so that I can better understand complex systems quickly.
- I want to integrate Grokker with my VIM editor so that I can ask questions and get answers directly within my coding sessions.
- I need a tool that helps me to rapidly learn new technologies and programming languages by allowing me to ask questions and receive detailed explanations.
### As a researcher
- I want to use Grokker to design research questions and hypotheses by synthesizing information from various document sources.
- I need a tool that can help me quickly summarize and understand extensive research papers or reports related to my field.
### As a team working on a project
- We want to utilize the multi-agent collaboration feature to enhance our teamwork, combining human, AI, and algorithmic insights for our project.
- We need a decentralized consensus tool that allows us to collectively make decisions and document them effectively.
### As a developer interested in AI and machine learning
- I want to use Grokker's LLM tooling for system message inputs, token counting, and embedding subcommands to enhance my understanding of AI models.
- I'm interested in experimenting with decentralized neural nets and AI, and Grokker's roadmap promises capabilities that align with my interests.
### As a content creator
- I want to use Grokker to help me translate my ideas into written documents or code by having a conversation with the AI about my concepts.
- I need a tool that can assist me in overcoming writer's block by suggesting questions about my topic and helping me answer them to refine my content.
### As a software engineer concerned about privacy
- I want to ensure that sensitive and confidential information in my documents is handled securely, and I need clarity on data usage and retention policies before using Grokker.
- I'm looking for a tool that allows me to process my company's proprietary code and documents without risking the exposure of sensitive data.
### As a new user
- I need an easy and straightforward way to install Grokker and set it up, including integrating it with OpenAI's API services.
- I want clear examples of how to use Grokker for my specific use cases, such as interacting with documents or using it within an editor.
### As a community organizer
- I'm interested in the decentralized community consensus and governance features foreseen in Grokker's roadmap and want to explore how they can be used for community-led projects.
```
EOF_user_stories.md
USER:
Write user stories for grokker based on the README.
File: README.md
```
# grokker
A swiss-army-knife tool for natural language processing, code and file
interpretation and generation, and AI-based research and development.
Uses OpenAI API services for backend.
- Interactive conversation with one or more documents and/or code
- Design, research, and rapid learning
- Local vector database with several useful interfaces
- Easy VIM integration
- Chat client with named file I/O
- Able to accept and generate file content in natural language, code,
structured messages, or a combination
- LLM tooling including system message inputs, token counting and
embedding subcommands
Grokker helped create and maintain this document in a VIM session by
reading its own source code along with iterations of this document.
## Roadmap
Here's where it appears this project is going:
- Multi-agent collaboration (with human, AI, and algorithmic agents)
- working on this right now
- Decentralized consensus tool
- a useful side-effect of multi-agent collaboration
- Web Assembly (WASM/WASI) execution
- enables easy use and distribution
- Web UI (while keeping CLI)
- enabled by WASM/WASI
- Plugin architecture
- enabled by WASM/WASI
- Decentralized storage
- enabled by WASM/WASI
- Decentralized virtual machine
- enabled by WASM/WASI
- Decentralized vector database
- enabled by decentralized storage/VM
- Decentralized neural nets
- enabled by decentralized computing/storage/VM
- Decentralized LLM/AI
- enabled by all of the above
- Decentralized community consensus and governance
- enabled by all of the above
## Example Usage
Getting started, using grokker's own source code as example documents:
```
$ grok init
Initialized a new .grok file in the current directory.
$ grok add README.md cmd/grok/main.go grokker.go TODO.md
Creating .grok file... adding README.md... adding cmd/grok/main.go...
adding grokker.go... adding TODO.md... done!
```
Make a one-time query without storing chat history:
```
$ grok q "What is grokker?"
Grokker is a tool for interactive conversation with one or more
documents, used for research, training, and rapid learning. It
utilizes OpenAI API services for the backend. Essentially, you can
input documents into Grokker and it will use natural language
processing to analyze the text and help you answer questions about
those documents.
```
Same thing, but providing the question on stdin:
```
$ echo 'What is the `qi` subcommand for?' | grok qi
The 'qi' subcommand allows you to ask a question by providing it on
standard input rather than passing it as a command-line argument. This
subcommand is especially useful in editor sessions and when writing
plugins -- more about this below.
```
## Installation
```
go install github.com/stevegt/grokker/cmd/grok
```
You'll need an account with OpenAI and an API key. You can create and
manage your API keys in your OpenAI dashboard.
Before using Grokker, you'll need to set your API key as an
environment variable in your terminal session or .bashrc with the
following command:
```
export OPENAI_API_KEY=<your_api_key>
```
## Tell me more about the `chat` subcommand
The `chat` subcommand allows you to interact with the system's
knowledge base in a conversational manner. It accepts a `chatfile` as
a mandatory argument, which is where the chat history is stored. The
`-s` flag is optional and can be used to pass a system message, which
controls the behavior of the OpenAI API. In usage, it might look like
`grok chat chatfile -s sysmsg`, where `sysmsg` is an optional system
message, and `chatfile` is the required text file where the chat
history is stored. There are several other optional flags that can be
used with the `chat` subcommand, such as a `-m` flag so you can
provide the prompt on the command line instead of on stdin, and `-i`
and `-o` flags to specify input and output files. There are also
flags to control context sources. See `grok chat -h` for more
details.
## Tell me more about the `qi` subcommand
The `qi` subcommand allows you to ask a question by providing it on
stdin. It's a way to generate quick answers to questions without
having to provide the question as a command-line argument.
The `qi` subcommand enables you to use grokker as a chat client in an
editor session by typing questions directly in your document and
receiving answers inline after the question.
## Using grokker as a chat client in an editor session
Using Grokker as a chat client in an editor session can help you
quickly find and summarize information from a set of documents in a
local directory tree, including the documents you are editing. This
can make your development, research, or learning process more
efficient and streamlined. Additionally, having the context of your
editor session available as part of the chat history can help you
better keep track of and synthesize information as you work.
It's a quick way to build a document and was used to build this one.
Using grokker as a chat client in an editor session is also a way to
access the backend servers used by ChatGPT without being constrained
by the ChatGPT web frontend, all while maintaining your own chat
history and any additional context in your own local files,
versionable in git.
### How can I use grokker in a VIM editor session?
To use the `qi` subcommand in a VIM editor session, you can add a
keyboard mapping to your vimrc file. Here's an example mapping:
```
:map <leader>g vap:!grok qi<CR>
```
This mapping will allow you to ask a question by typing it in VIM and
then pressing `<leader>g`. The `vap` causes the current paragraph to be
highlighted, and the `:!grok qi` causes it to be sent as input to the
`qi` subcommand of Grokker. The answer will be inserted into the
buffer below the current paragraph. Note that the mapping assumes that
Grokker's `grok` command is installed and in your system path.
You will get better results if you `:set autowrite` so the current
file's most recent content will be included in the question context.
Experiment with variations on these mappings -- you might emphasize
more recent context by including the previous two paragraphs as part
of the query, or the most recent 50 lines, or the output of `git
diff`, etc. (Future versions of grokker may help with this
by timestamping individual document chunks and prioritizing more
recent edits.)
In practice, as of this writing I either hit `<leader>g` to highlight
and use the current paragraph as the GPT query, or I use `<Shift-V>` to
highlight several paragraphs for more context, and then run
`:'<,'>!grok qi`. Works.
## Tell me more about the `-g` flag
The `-g` flag is an optional parameter that you can include when
running the `q` subcommand. It stands for "global" and when included,
Grokker will provide answers not only from the local documents that
you've added but also from OpenAI's global knowledge base. This means
that you'll get a wider range of potentially useful answers, but it
may take longer to receive your results as the global knowledge base
is larger and may take more time to search through. If you don't
include the `-g` flag, Grokker will prefer the local documents that
you've added.
## What are the `models` and `model` subcommands?
The `models` subcommand is used to list all the available OpenAI
models for text processing in Grokker, including their name and
maximum token limit.
The `model` subcommand is used to set the default GPT model for use in
queries. This default is stored in the local .grok db. (I haven't
added a flag to override this default for a single query, but this
would be doable.)
## About the words `grokker` and `grok`
The word `grok` is from Robert Heinlein's [Stranger in a Strange
Land](https://en.wikipedia.org/wiki/Stranger_in_a_Strange_Land) --
there's a good summary of the word's meaning and history in
[Wikipedia](https://en.wikipedia.org/wiki/Grok). Roughly translated,
it means "to understand something so thoroughly that the observer
becomes a part of the observed".
It's a popular word in science fiction and computer science and the
namespace is crowded.
The name `grokker` is used by the company grokker.com, though the
problem domains are different. We are not affiliated with
grokker.com.
The folks at xAI released and filed a trademark application for the
`grok` online AI tool several months after we were already using the
word in this project. We're not affiliated with xAI, but we wish them
well.
Jordan Sissel's log file analysis tool also uses a `grok` command. If
you want to install grokker on the same machine, you can install it
using an alternate command name. Here's an example of installing
grokker as `grokker` instead of `grok`:
```
cd /tmp
git clone https://github.com/stevegt/grokker
cd grokker/cmd/grok/
go build -o grokker
cp grokker $GOPATH/bin
```
## Is grokker done?
Grokker is not done, but I use it extensively every day. See
[TODO.md](./TODO.md) for a pretty long list of wishlist and brainstorm
items. At this time, refactoring the storage for text chunks and
embeddings is likely the most important -- that .grok file can get
pretty big. So far I haven't seen performance problems even when
grokking several dozen documents or source code files, but I want to
be able to grok an entire tree of hundreds of files without concern.
In all of the following use cases, I'd say my own productivity has
increased by an order of magnitude -- I'm finding myself finishing
projects in days that previously would have taken weeks. What's
really nice is that I'm finally making progress on years-old complex
projects that were previously stalled.
### What are some use cases grokker already supports?
In all of the following use cases, I tend to create and `grok add` a
`context.md` file that I use as a scratchpad, writing and refining
questions and answers as I work on other files in the same directory
or repository. This file is my interactive, animated [rubber
duck](https://en.wikipedia.org/wiki/Rubber_duck_debugging). This
technique has worked well. I'm considering switching to using
something like `grok.md`, `grokker.md`, `groktext.md`, or `gpt.md` for
this filename and proposing it as a best practice.
Grokker has been a huge help in its original use case -- getting up to
speed quickly on complex topics, documents, and code bases. It's
particularly good at translating the unique terminology that tends to
exist in specialized papers and code. The large language models
backing grokker are optimized for inferring meaning from context;
this allows them to expand terms into more general language even in
cases where the original author was unable to make that difficult
leap.
I've been pleasantly surprised by how much grokker has also helped
translate my own ideas into documents and code. I can describe things
in my own terms in one or more files, and just as with others' works,
the language models do a better job than I can of translating my
terminology into more-general human language and executable code.
Another useful technique I've found is to prompt the model to ask me
questions about a concept I'm having trouble getting out of my own
head into text; I then ask the model to answer its own questions, then
I manually edit the answers to reflect what I'm actually thinking.
This clears writer's block, reduces my own typing workload by moving
me into the role of editor, and helps to more quickly detect and
resolve uncertainties. Because grokker will include my edited text as
context in future model queries, this provides feedback to the model,
causing future answers to converge toward my intent. (See
[RLHF](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback)
for one possible way of formalizing this.)
# Important disclaimer regarding sensitive and confidential information
Using OpenAI's API services to analyze documents means that any
document you `grok add`, and any question you ask of grokker, will be
broken into chunks and sent to OpenAI's servers twice -- first to
generate context embedding vectors, and again as relevant context when
you run `grok q` or `grok qi`.
If any of your document's content is sensitive or confidential, you'll
want to review OpenAI's policies regarding data usage and retention.
Additionally, some topics are banned by OpenAI's policies, so be sure
to review and comply with their guidelines in order to prevent your
API access from being suspended or revoked.
As always, it's a good idea to review the terms and conditions of any
API service you're considering using to ensure that you're comfortable
with how your data will be handled.
```
AI:
File: STORIES.md
```markdown
# User Stories for Grokker
## General Users
- **As a technology enthusiast**, I want to use grokker to have interactive conversations with documents and code so that I can understand complex texts and programming concepts more easily.
- **As a researcher**, I need to synthesize information from various sources quickly, using grokker to facilitate design, research, and rapid learning.
## Developers and Software Engineers
- **As a software developer**, I want to integrate grokker with my VIM editor seamlessly for real-time assistance during coding sessions, leveraging its easy VIM integration.
- **As a development team member**, I am interested in utilizing grokker for multi-agent collaboration that includes human, AI, and algorithmic agents to improve project outcomes.
- **As an AI developer**, I am eager to explore grokker’s LLM tooling capabilities, including system message inputs and token counting, to advance my projects on AI research and development.
## Specialized Use Cases
- **As a content creator**, I aim to use grokker's chat client with named file I/O to generate creative content quickly, navigating through ideas with ease.
- **As someone interested in decentralized systems**, I am excited about grokker's roadmap towards decentralized storage, virtual machines, and neural nets, and want to experiment with these technologies as they become available.
- **As a community organizer**, I see great potential in using grokker's decentralized community consensus and governance tools for more inclusive decision-making processes.
## Security and Privacy Concerned Users
- **As a security-conscious user**, I need to clearly understand grokker's data handling practices, especially regarding sensitive and confidential information, before using it extensively.
- **As a privacy advocate**, I am concerned about the policies regarding data usage and retention by OpenAI’s API services and want assurance that my data is handled responsibly.
## Installation and Usage
- **As a new user**, I want a straightforward installation process for grokker, complete with documentation on configuring OpenAI API services, so I can start using it without hassle.
- **As an intermediate user**, I'm looking for documentation or examples of using various grokker commands like `chat` and `qi` to maximize my productivity.
- **As an advanced user**, I aim to contribute to grokker’s development, especially focusing on enhancing plugins and decentralized features as per the roadmap.
## Enhancing Development Experience
- **As a coder**, I want to use grokker's feature to accept and generate file content in natural languages or code, making it easier to document and review my projects.
- **As an open source contributor**, I am keen on using grokker’s potential for facilitating multi-agent collaboration in open-source project development.
## Seeking Efficiency and Growth
- **As a lifelong learner**, I am interested in using grokker's capabilities for rapid learning, allowing me to pick up new skills and knowledge efficiently.
- **As a professional aiming for productivity**, I see grokker as a multifaceted tool that could significantly boost my daily work efficiency, whether through enhancing research or streamlining development workflows.
```
EOF_STORIES.md