Major LLM services and UI improvement #23

Merged: 144 commits, merged May 23, 2024 (changes shown from 136 commits)

Commits (144)

3b241a4
Implement RequestsLogger
GlebSolovev Apr 18, 2024
dec8c1e
Log requests & handle errors in llm services
GlebSolovev Apr 19, 2024
dab15f4
Take file logic out from RequestsLogger
GlebSolovev Apr 19, 2024
6a2eef5
Test RequestsLogger properly
GlebSolovev Apr 19, 2024
24cad6e
Implement LLMService-s availability UI via events
GlebSolovev Apr 23, 2024
49e4045
Move LLMService-s availability UI out of CoqPilot
GlebSolovev Apr 23, 2024
58e45ab
Move out code from CoqPilot into separate files
GlebSolovev Apr 23, 2024
0d5837f
Implement `estimateTimeToBecomeAvailable`
GlebSolovev Apr 23, 2024
ffeed59
Move out code from LLMService
GlebSolovev Apr 23, 2024
3c51d90
Show availability estimation to user
GlebSolovev Apr 23, 2024
8492d2e
Test and fix time utils
GlebSolovev Apr 25, 2024
c697666
Test and fix RequestsLogger
GlebSolovev Apr 26, 2024
46837a1
Test and improve SyncFile
GlebSolovev Apr 26, 2024
1474ae1
Improve code style
GlebSolovev Apr 26, 2024
fc7ea0d
Extend LLMIterator tests
GlebSolovev Apr 26, 2024
f81cab6
Implement MockLLMService
GlebSolovev Apr 26, 2024
6b159bf
Test parsing of UserModelParams from JSON
GlebSolovev Apr 26, 2024
4d3209e
Better test and improve ChatTokensFitter
GlebSolovev Apr 26, 2024
b84058a
Update prefix of LLMService-s utils tests
GlebSolovev Apr 27, 2024
b3e7868
Test and improve chat builders
GlebSolovev Apr 27, 2024
7566ff9
Move llm test utils to separate folder
GlebSolovev Apr 27, 2024
09f69a9
Test parameters resolution
GlebSolovev Apr 27, 2024
4b10fa4
Dispose LLMServices
GlebSolovev Apr 28, 2024
c3f0893
Make LLMService code better to use and test
GlebSolovev Apr 28, 2024
2f79e1a
Implement PredefinedProofsService properly
GlebSolovev Apr 28, 2024
0334b3a
Support debug mode in LLMService-s
GlebSolovev Apr 28, 2024
77a93f3
Improve, fix and document MockLLMService
GlebSolovev Apr 28, 2024
eab3d7a
Test `generateFromChat` and `GeneratedProof`
GlebSolovev Apr 28, 2024
787cd69
Test generateProof with integration testing
GlebSolovev Apr 29, 2024
6e9b3ae
Test generateProof with async stress, improve MockLLMService
GlebSolovev Apr 29, 2024
60b0cf9
Improve common test functions, test OpenAIService
GlebSolovev Apr 30, 2024
caf05e0
Refactor tests
GlebSolovev Apr 30, 2024
f6bcf3e
Test GrazieService, improve conditional tests
GlebSolovev Apr 30, 2024
3416abf
Test LMStudioService
GlebSolovev Apr 30, 2024
edea619
Test PredefinedProofsService, improve code style
GlebSolovev Apr 30, 2024
a09ca1b
Test and improve default availability estimator
GlebSolovev Apr 30, 2024
54e713b
Refactor test utils
GlebSolovev May 1, 2024
a0f38ab
Add npm test tasks
GlebSolovev May 1, 2024
9608028
Fix no-successful-logs bug of availability estimator
GlebSolovev May 1, 2024
a036f6b
Improve availability warning message
GlebSolovev May 1, 2024
0469170
Find directory for services logs
GlebSolovev May 1, 2024
0b24ddd
Override availability estimator of predefined proofs
GlebSolovev May 3, 2024
6bbdd25
Introduce LLMServiceInternal, improve LLMService ctor
GlebSolovev May 3, 2024
22cd5a1
Introduce new error handling to LLMService, document it
GlebSolovev May 4, 2024
c299066
Add more wrappers, docs, refactor to LLMService package
GlebSolovev May 4, 2024
430131b
Update tests according to LLMService changes
GlebSolovev May 4, 2024
2e3c9c6
Cover new errors handling with tests, refactor
GlebSolovev May 4, 2024
4804d09
Improve LLMService specification
GlebSolovev May 5, 2024
5bce2cf
Distinguish `modelName` and `modelId`
GlebSolovev May 5, 2024
936bbc8
Rename `newMessageMaxTokens` to `maxTokensToGenerate`
GlebSolovev May 5, 2024
939bcb6
Update README and package.json with new settings
GlebSolovev May 5, 2024
04361a4
Validate unique models' id-s
GlebSolovev May 5, 2024
ba265bc
Rework settings validation
GlebSolovev May 5, 2024
aee93a7
Validate models are present, refactor errors
GlebSolovev May 5, 2024
7236080
Make order of services coherent
GlebSolovev May 5, 2024
09f4af0
Improve settings description in package.json
GlebSolovev May 5, 2024
80fcb11
Introduce, support, test `LLMServiceRequest`
GlebSolovev May 6, 2024
72cd1fd
Update & refactor LLMServices events handling in UI
GlebSolovev May 7, 2024
af27f50
Improve `choices` validation
GlebSolovev May 7, 2024
6b9cb0b
Link to setting name on configuration error
GlebSolovev May 7, 2024
4c85d6a
Refactor "new Error" away
GlebSolovev May 7, 2024
55cc569
Show error's content on service failed warning
GlebSolovev May 7, 2024
a3fc1b4
Improve error messages
GlebSolovev May 8, 2024
6cac0ea
Improve `PredefinedProofs` params validation
GlebSolovev May 8, 2024
a4709c9
Validate `OpenAi` params, test it
GlebSolovev May 8, 2024
bb78922
Specify thrown error in tests
GlebSolovev May 8, 2024
8a42d25
Fix "finally with async" bug
GlebSolovev May 8, 2024
b9ee6b6
Log and display Coqpilot error better
GlebSolovev May 9, 2024
e94a273
Implement draft ParamsResolver
GlebSolovev May 10, 2024
f9246cc
Implement ModelParamsResolver-s
GlebSolovev May 10, 2024
f1e1e98
Make `LLMService` generic, use `ParamsResolver`
GlebSolovev May 10, 2024
fb302b9
Remove redundant `ConfigurationError`-s
GlebSolovev May 10, 2024
ad51635
Support no-default-value help message
GlebSolovev May 10, 2024
90cd8a3
Solve `choices` problem: add them into `ModelParams`
GlebSolovev May 10, 2024
7d0192c
Move params resolution to UI, show feedback to user
GlebSolovev May 11, 2024
1448460
Support parameter insertion in `ParamsResolver`
GlebSolovev May 11, 2024
2637688
Improve `LLMService` typing: `GeneratedProofImpl`
GlebSolovev May 11, 2024
a487595
Parameterize with `LLMServiceInternalType`
GlebSolovev May 11, 2024
0ed8d5d
Return `GeneratedProof` from `LLMService` facade methods
GlebSolovev May 11, 2024
a09f4e1
Improve `BasicModelParamsResolver`
GlebSolovev May 11, 2024
037a180
Refactor and improve chat-factory tests
GlebSolovev May 11, 2024
9fcb68c
Complete `LLMService` typing with recursive generics
GlebSolovev May 11, 2024
035468c
Fix and refactor params resolvers
GlebSolovev May 12, 2024
d0818c0
Design and implement powerful resolvers interfaces
GlebSolovev May 13, 2024
f1f95ae
Fix `tmp.dirSync` bug
GlebSolovev May 15, 2024
f2b2fc7
Refactor params resolvers, note their critical issues
GlebSolovev May 15, 2024
7e3c7aa
Fix models params resolvers
GlebSolovev May 15, 2024
7fbef4c
Update & refactor test utils according to latest changes
GlebSolovev May 16, 2024
6083980
Update & refactor tests according to latest changes
GlebSolovev May 16, 2024
6f63ef9
Test parameters resolution
GlebSolovev May 16, 2024
1b891d7
Rework LLM-services' tests, expand with configuration ones
GlebSolovev May 16, 2024
8e56486
Improve params-resolution UI messages
GlebSolovev May 16, 2024
cfd0beb
Fix resolvers by specifying schemas
GlebSolovev May 17, 2024
7456d6e
Move resolvers to services, test valid params
GlebSolovev May 18, 2024
7712fe4
Test params are read correctly
GlebSolovev May 18, 2024
a97af3b
Fix empty lists failing LoggerRecord bug
GlebSolovev May 18, 2024
7bc77ae
Support censorship in GenerationsLogger
GlebSolovev May 18, 2024
1928e2e
Refactor settings for `GenerationsLogger`
GlebSolovev May 18, 2024
9e358d4
Tweak LLM services tests timeouts
GlebSolovev May 18, 2024
0c256aa
Support context length error in OpenAiService
GlebSolovev May 18, 2024
0e37775
Support connection error in OpenAiService
GlebSolovev May 18, 2024
42ec5c7
Support and use rich messages in resolvers
GlebSolovev May 19, 2024
7f3787d
Add default tokens params to OpenAiService
GlebSolovev May 19, 2024
b20efb2
Fix messages, defaults in settings
GlebSolovev May 19, 2024
27ff5f7
Make code safer: get rid of any-s
GlebSolovev May 19, 2024
ffa8ec4
Move `LLMServiceInternal` to a separate file
GlebSolovev May 19, 2024
c4cec91
Improve config parsers names
GlebSolovev May 19, 2024
88e8286
Improve eslint naming-convention
GlebSolovev May 19, 2024
d08cd33
Make `estimatedTokens` rich and clear
GlebSolovev May 19, 2024
ee4131d
Introduce and support `RemoteConnectionError`
GlebSolovev May 19, 2024
c7b33eb
Setup debugging for CI
GlebSolovev May 19, 2024
21fbcc5
Make all tests run on CI
GlebSolovev May 19, 2024
a3a289c
Make possible to run CI manually
GlebSolovev May 19, 2024
9adb81a
Update coqpilot version
GlebSolovev May 19, 2024
384f2bb
Fix "override the same value" bug
GlebSolovev May 19, 2024
8dacf40
Introduce & use stringify util for consistent messages
GlebSolovev May 19, 2024
f30d7c9
Show overriden message only if user specified value
GlebSolovev May 19, 2024
8b10d2f
Improve top-level errors handling
GlebSolovev May 20, 2024
9851280
Forbid additional properties in schemas
GlebSolovev May 20, 2024
bb2f5d3
Refactor editor messages: move all in one place
GlebSolovev May 20, 2024
23c3ebd
Handle errors from Ajv
GlebSolovev May 20, 2024
e53e388
Minor package.json updates
GlebSolovev May 20, 2024
d1c9b84
Improve tests for user settings schemas
GlebSolovev May 20, 2024
90c0ded
Provide type checks for resolver's properties
GlebSolovev May 20, 2024
bab6377
Support `ValidParamsResolverImpl` type check
GlebSolovev May 20, 2024
a032df7
Spread type checks to single parameter resolvers
GlebSolovev May 20, 2024
57b9252
Test resolution of hidden wrong type
GlebSolovev May 20, 2024
c64d181
Document parameter resolvers
GlebSolovev May 22, 2024
6cb8fb9
Improve OpenAI tokens params default resolution
GlebSolovev May 22, 2024
b90100f
Mention defaults in OpenAI settings description
GlebSolovev May 22, 2024
2cb6632
Merge branch 'v2.2.0-dev' into llm-services-improvement
GlebSolovev May 22, 2024
9f6bb61
Update benchmarks: params resolution, muted mode
GlebSolovev May 22, 2024
85f85b2
Test and fix proof extraction in `GeneratedProof`
GlebSolovev May 22, 2024
ae0fefc
Improve English in CHANGELOG.md, items honesty
GlebSolovev May 22, 2024
aa817fd
Present 2.2.0 changes
GlebSolovev May 22, 2024
0bc235a
Trigger CI on development PR-s
GlebSolovev May 22, 2024
1ca0983
Comment on text format used by `GenerationsLogger`
GlebSolovev May 23, 2024
71270b1
Manage node version via `.nvmrc`
GlebSolovev May 23, 2024
873c667
Cache opam on CI
GlebSolovev May 23, 2024
eb83834
Remove redundant CI step
GlebSolovev May 23, 2024
a3e32cd
Enable `dune-cache` on CI
GlebSolovev May 23, 2024
995d89c
Try to make opam cache work on CI
GlebSolovev May 23, 2024
97422f6
Set back opam path to cache
GlebSolovev May 23, 2024
8f9081a
Set up opam caches on CI properly
GlebSolovev May 23, 2024
46 changes: 35 additions & 11 deletions .eslintrc.json
@@ -5,23 +5,47 @@
"ecmaVersion": 6,
"sourceType": "module"
},
"extends": [ "prettier" ],
"plugins": [
"@typescript-eslint",
"prettier"
],
"extends": ["prettier"],
"plugins": ["@typescript-eslint", "prettier"],
"rules": {
"@typescript-eslint/naming-convention": "warn",
/* Most of the rules here are just the default ones,
* the main changes are:
* - required UPPER_CASE for enum members;
* - allowed UPPER_CASE in general.
*/
"@typescript-eslint/naming-convention": [
"warn",
{ "selector": "default", "format": ["camelCase"] },
{
"selector": "variable",
"format": ["camelCase", "UPPER_CASE"],
"leadingUnderscore": "allow"
},
{
"selector": "property",
"format": ["camelCase", "UPPER_CASE"],
"leadingUnderscore": "allow"
},
{
"selector": "parameter",
"format": ["camelCase"],
"leadingUnderscore": "allow"
},
{
"selector": "typeLike",
"format": ["PascalCase"]
},
{
"selector": "enumMember",
"format": ["UPPER_CASE"]
}
],
"@typescript-eslint/semi": "warn",
"curly": "warn",
"eqeqeq": "warn",
"no-throw-literal": "warn",
"semi": "off",
"prettier/prettier": "warn"
},
"ignorePatterns": [
"out",
"dist",
"**/*.d.ts"
]
"ignorePatterns": ["out", "dist", "**/*.d.ts"]
}
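
For concreteness, here is a minimal TypeScript sketch (illustrative only, not code from the PR) of identifiers that satisfy the tightened naming-convention rules above:

```typescript
// Illustrative sketch only: identifiers that pass the new rule set.

// "variable": camelCase or UPPER_CASE, leading underscore allowed
const requestTimeoutMillis = 5000;
const MAX_RETRIES = 3;

// "typeLike": PascalCase
interface GenerationRequest {
    // "property": camelCase or UPPER_CASE
    modelId: string;
    MAX_CHOICES?: number;
}

// "enumMember": UPPER_CASE is now required
enum AvailabilityStatus {
    AVAILABLE,
    TEMPORARILY_UNAVAILABLE,
}

// "parameter": camelCase; `_attempt` shows the allowed leading underscore
function logRequest(request: GenerationRequest, _attempt: number): void {
    console.log(request.modelId);
}
```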
25 changes: 19 additions & 6 deletions .github/workflows/coqpilot.yml
Review comment (Collaborator): Need to add .nvmrc file and fix the node version

Reply (Collaborator, Author): Yup, made it pretty much the same as you did in ai_agents_server and also used it on CI

@@ -2,9 +2,14 @@ name: Build and Test

on:
push:
branches: [ main ]
branches:
- main
pull_request:
branches: [ main ]
branches:
- main
- 'v[0-9]+.[0-9]+.[0-9]+-dev'
workflow_dispatch:

env:
coqlsppath: "coq-lsp"

@@ -40,7 +45,7 @@ jobs:
- name: Install Node.js
uses: actions/setup-node@v3
with:
node-version: 16.x
node-version: ">=20.5.0"

- run: npm ci

@@ -65,14 +70,22 @@
- name: Test on Linux
env:
COQ_LSP_PATH: ${{ env.coqlsppath }}
run: xvfb-run -a npm run test-ci
run: |
eval $(opam env)
xvfb-run -a npm run clean-test
if: runner.os == 'Linux'

- name: Test not on Linux
env:
COQ_LSP_PATH: ${{ env.coqlsppath }}
run: npm run test-ci
run: |
eval $(opam env)
npm run clean-test
if: runner.os != 'Linux'

- name: Setup tmate session
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3

- name: Package Extension
id: packageExtension
@@ -95,7 +108,7 @@
- uses: actions/checkout@v4
- uses: actions/setup-node@v3
with:
node-version: 16.x
node-version: ">=20.5.0"
- name: Install Dependencies
run: npm ci
- name: Download Build Artifact
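
A note on the test-step change above: `eval $(opam env)` exports the active opam switch's environment (most importantly `PATH`) into the shell, which is presumably what lets `npm run clean-test` find the Coq and coq-lsp binaries installed through opam on the CI machine.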
149 changes: 100 additions & 49 deletions CHANGELOG.md
@@ -1,95 +1,146 @@
# Change Log
# Changelog

## 2.2.0

### Public changes

- Support time estimation for LLM services to become available after failure via logging proof-generation requests. This information is shown to the user.
- Set up interaction between `LLMService`-s and UI to report errors that happened during proof-generation.
- Improve LLM services' parameters: their naming, transparency, and description.
- Introduce `modelId` to distinguish a model identifier from the name of an OpenAI / Grazie model.
- Rename `newMessageMaxTokens` to `maxTokensToGenerate` for greater clarity.
- Update the settings description, and make it more user-friendly.
- Significantly improve settings validation.
- Use parameters resolver framework to resolve parameters (with overrides and defaults) and validate them.
- Support messages about parameter resolution: both failures and unexpected overrides.
- Clarify existing error messages.
- Add some general checks: input models have unique `modelId`-s, there are models to generate proofs with.
- Improve interaction with OpenAI.
- Notify the user of configuration errors (invalid model name, incorrect API key, maximum context length exceeded) and connection errors.
- Support resolution of `tokensLimit` and `maxTokensToGenerate` with recommended defaults for known models.
- Fix minor bugs and make minor improvements detected by thorough testing.
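
To make the renames above concrete, here is a sketch of a plausible user model entry after this change (field names follow the changelog; the interface itself is an illustration, not the extension's actual settings schema):

```typescript
// Illustrative only: not CoqPilot's actual schema.
interface UserModelParamsSketch {
    modelId: string; // unique user-chosen identifier (checked for uniqueness)
    modelName?: string; // the OpenAI / Grazie model name, now distinct from modelId
    maxTokensToGenerate?: number; // formerly `newMessageMaxTokens`
    tokensLimit?: number; // resolvable to recommended defaults for known models
    choices?: number; // number of proofs requested per generation
}
```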

### Internal changes

- Rework and document LLMService architecture: `LLMServiceInternal`, better facades, powerful typing.
- Introduce hierarchy for LLMService errors. Support proper logging and error handling inside `LLMService`-s.
- Rework settings validation.
- Refactor `SettingsValidationError`, move all input parameters validation to one place and make it coherent.
- Design and implement a powerful and well-documented parameters resolver framework.
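
A minimal TypeScript sketch of the "overrides and defaults" resolution idea described above (the names `resolveParam` and `ResolutionResult` are invented for illustration and are not claimed to match CoqPilot's actual resolver API):

```typescript
// Sketch only: resolve one parameter from a user value, an optional override,
// and an optional default, then validate the result.
type ResolutionResult<T> =
    | { ok: true; value: T; overrideNote?: string }
    | { ok: false; message: string };

function resolveParam<T>(
    userValue: T | undefined,
    override: T | undefined,
    defaultValue: T | undefined,
    validate: (value: T) => string | undefined
): ResolutionResult<T> {
    // Overrides win; mention them only when the user actually specified a
    // different value (cf. commit f30d7c9 above).
    if (override !== undefined) {
        return {
            ok: true,
            value: override,
            overrideNote:
                userValue !== undefined && userValue !== override
                    ? `user value ${JSON.stringify(userValue)} was overridden`
                    : undefined,
        };
    }
    const value = userValue ?? defaultValue;
    if (value === undefined) {
        return { ok: false, message: "no value specified and no default is available" };
    }
    const validationError = validate(value);
    return validationError === undefined
        ? { ok: true, value }
        : { ok: false, message: validationError };
}
```

Resolving `maxTokensToGenerate`, for example, would pass the user's setting, no override, a known-model default, and a positivity check.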

### Testing infrastructure changes

- Test the LLM Services module thoroughly.
- Improve test infrastructure in general by introducing and structuring utils.
- Fix the issue with building test resources on CI. Set up CI debugging, and enable launching CI manually.

## 2.1.0

### 2.1.0
Major:
- Create a (still in development and improvement) benchmarking system. A guide on how to use it is in the README.
- Conduct an experiment on the performance of different LLMs, using the developed infrastructure. Benchmarking report is located in the [docs folder](etc/docs/benchmarking_report01.md).
- Correctly handle and display settings which occur when the user settings are not correctly set.
- Conduct an experiment on the performance of different LLMs, using the developed infrastructure. The benchmarking report is located in the [docs folder](etc/docs/benchmarking_report01.md).

Minor:
- Set order of contributed settings.
- Set the order of contributed settings.
- Add a comprehensive user settings guide to the README.
- Fix issue with Grazie service not being able to correctly accept coq ligatures.
- Fix issue that occured when generated proof contained `Proof using {...}.` construct.
- Fix the issue with Grazie service not being able to correctly accept coq ligatures.
- Fix the issue that occurred when the generated proof contained the `Proof using {...}.` construct.

## 2.0.0

### 2.0.0
- Added multiple strategies for ranking theorems from the working file. As LLM context window is limited, we sometimes should somehow choose a subset of theorems we want to provide as references to the LLM. Thus, we have made a few strategies for ranking theorems. Now there are only 2 of them, but there are more to come. Now we have a strategy that randomly picks theorems, and also the one that ranks them depending on the distance from the hole.
- Added multiple strategies for ranking theorems from the working file. As the LLM context window is limited, we sometimes should somehow choose a subset of theorems we want to provide as references to the LLM. Thus, we have made a few strategies for ranking theorems. Now there are only 2 of them, but there are more to come. Now we have a strategy that randomly picks theorems, and also the one that ranks them depending on the distance from the hole.
- Now different holes are solved in parallel. This is a huge improvement in terms of performance.
- Implemented multi-round fixing procedure for the proofs from the LLM. It can now be configured in the settings. One can set the amount of attempts for the consequtive proof fixing with compiler feedback.
- Implemented multi-round fixing procedure for the proofs from the LLM. It can now be configured in the settings. One can set the number of attempts for the consecutive proof fixing with compiler feedback.
- Added an opportunity to use LM Studio as a language model provider.
- More accurate token count. Tiktoken is now used for open-ai models.
- Different logging levels now supported.
- More accurate token count. Tiktoken is now used for OpenAI models.
- Different logging levels are now supported.
- The LLM iterator now supports adding a sequence of models for each service. This brings freedom to the user to experiment with different model parameters.
- Now temperature, prompt, and other parameters are configurable in the settings.
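
An illustrative sketch of the "distance from the hole" ranking strategy mentioned above (the types and the line-distance metric are assumptions, not the plugin's code):

```typescript
// Sketch only: rank theorems so the ones nearest the hole come first.
interface TheoremSketch {
    name: string;
    startLine: number;
}

function rankByDistanceFromHole(
    theorems: TheoremSketch[],
    holeLine: number
): TheoremSketch[] {
    // Closest theorems come first, so truncating the list to fit the LLM
    // context window keeps the most relevant references.
    return [...theorems].sort(
        (a, b) =>
            Math.abs(a.startLine - holeLine) - Math.abs(b.startLine - holeLine)
    );
}
```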

### 1.9.0
- Huge refactoring done. Project re organized.
## 1.9.0

- Huge refactoring is done. Project reorganized.

## 1.5.3

### 1.5.3
- Fix Grazie service request headers and endpoint.

### 1.5.2
## 1.5.2

- Fix issue with double document symbol provider registration (@Alizter, [#9](https://github.com/JetBrains-Research/coqpilot/issues/9))

### 1.5.1
- Add support of the Grazie platform as an llm provider.
## 1.5.1

- Add support for the Grazie platform as an LLM provider.  

### 1.5.0
- Now when the hole can be solved by a single tactic solver, using predefined tactics, gpt will not be called, LLMs are now fetched consequently.
## 1.5.0

- Now when the hole can be solved by a single tactic solver, using predefined tactics, OpenAI and Grazie will not be called, LLMs are now fetched consequently.
- Parallel hole completion is unfortunately postponed due to the implementation complexity. Yet, hopefully, will still be implemented in milestone `2.0.0`.

### 1.4.6
- Fix issue with plugin breaking after parsing a file containing theorem without `Proof.` keyword.
## 1.4.6

- Fix the issue with the plugin breaking after parsing a file containing theorem without the `Proof.` keyword.

## 1.4.5

### 1.4.5
- Fix formatting issues when inserting the proof into the editor.

### 1.4.4
- Do not require a theorem to be `Admitted.` for coqpilot to prove holes in it.
- Correctly parse theorems that are declared with `Definition` keyword.
## 1.4.4

- Do not require a theorem to be `Admitted.` for CoqPilot to prove holes in it.
- Correctly parse theorems that are declared with the `Definition` keyword.

## 1.4.3

- Tiny patch with the shuffling of the hole array.

### 1.4.3
- Tiny patch with shuffling of the hole array.
## 1.4.2

### 1.4.2
- Now no need to add dot in the end of the tactic, when configuring a single tactic solver.
- Now no need to add a dot at the end of the tactic, when configuring a single tactic solver.
- Automatic reload settings on change in the settings file. Not all settings are reloaded automatically,
but the most ones are. The ones that are not automatically reloaded: `useGpt`, `coqLspPath`, `parseFileOnInit`.
- Added command that solves admits in selected region. Also added that command to the context menu (right click in the editor).
but most ones are. The ones that are not automatically reloaded: `useGpt`, `coqLspPath`, `parseFileOnInit`.
- Added a command that solves admits in a selected region. Also added that command to the context menu (right-click in the editor).
- Fix toggle extension.

### 1.4.1
## 1.4.1

- Add a possibility to configure a single tactic solver.

### 1.4.0
- Add command to solve all admitted holes in the file.
## 1.4.0

- Add a command to solve all admitted holes in the file.
- Fixing bugs with coq-lsp behavior.

### 1.3.1
## 1.3.1

- Test coverage increased.
- Refactoring client and ProofView.
- Refactoring client and ProofView.  
- Set up CI.

### 1.3.0
## 1.3.0

- Fix bug while parsing regarding the updated Fleche doc structure in coq-lsp 0.1.7.
- When GPT generated a response containing three consecutive backtick symbols it tended to
break the typecheking of the proof. Now solved.
- Speed up the file parsing process.

### 1.2.1
## 1.2.1

- Add clearing of aux file after parsing.

### 1.2.0
- Fix error with llm silently failing. Now everything that comes from llm that is not handled inside plugin is presented to user as a message (i.e. incorrect apiKey exception).
- Fix toggle button.
- Fix diagnostics being shown to non coq-lsp plugin coq users.
- Add output stream for the logs in vscode output panel.
## 1.2.0

- Fix error with llm silently failing. Now everything that comes from LLM that is not handled inside the plugin is presented to the user as a message (i.e. incorrect apiKey exception).
- Fix the toggle button.
- Fix diagnostics being shown to non-`coq-lsp` plugin coq users.
- Add output stream for the logs in the vscode output panel.

### 1.1.0
## 1.1.0

Now proof generation could be run in any position inside the theorem. There is no need to retake file snapshot after each significant file change.
More communication with `coq-lsp` is added. Saperate package `coqlsp-client` no longer used.
Now proof generation could be run in any position inside the theorem. There is no need to retake a file snapshot after each significant file change.
More communication with `coq-lsp` is added. The separate package `coqlsp-client` is no longer used.

### 0.0.1
## 0.0.1

Initial release of coqpilot.
The initial release of CoqPilot.