fix bug TypeError, SyntaxError in readme #45

Closed
wants to merge 1 commit into from
10 changes: 3 additions & 7 deletions README.md
@@ -59,20 +59,17 @@ As soon as you want to use a linear mixed effects model, you have to use the `st


Let's set up a minimal working example using a LinearRegression estimator and some randomly generated regression data.

```python
# import the interface class, a data generator and our model
from flexcv import CrossValidation
from flexcv.synthesizer import generate_regression
from flexcv.models import LinearModel

# generate some random sample data that is clustered
X, y, group, _ = generate_regression(10, 100, n_slopes=1, noise_level=9.1e-2, random_seed=42)
```

The `CrossValidation` class is the core of this package. It holds all the information about the data, the models, the cross validation splits, and the results, and it is responsible for performing the cross validation and logging the results. Setting up the `CrossValidation` object is easy: we use method chaining, a pattern you might know from `pandas` and other packages, to set up our configuration and perform the cross validation. The set-methods all return the `CrossValidation` object itself, so we can chain them together. The `perform` method runs the cross validation and again returns the `CrossValidation` object. The `get_results` method returns a `CrossValidationResults` object holding all the results; its `summary` property returns a `pandas.DataFrame`, whose `to_excel` method we can use to save the results to an Excel file.
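The chaining works because every set-method returns `self`. A minimal, self-contained sketch of that pattern (the class and method names here are illustrative, not flexcv's actual code):

```python
class Pipeline:
    """Minimal illustration of the method-chaining (fluent) pattern."""

    def __init__(self):
        self.config = {}

    def set_data(self, name):
        self.config["data"] = name
        return self  # returning self is what enables chaining

    def set_splits(self, split_out):
        self.config["split_out"] = split_out
        return self

    def perform(self):
        # in flexcv this would run the cross validation; here we just record it
        self.result = f"ran with {self.config}"
        return self


p = Pipeline().set_data("ExampleData").set_splits("GroupKFold").perform()
print(p.result)
```

Because each call returns the same object, the configuration reads top to bottom as a single expression, exactly like the `results = (cv. ...)` block below.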

```python
# instantiate our cross validation class
cv = CrossValidation()

results = (
cv
.set_data(X, y, group, dataset_name="ExampleData")
# configure our split strategies. Lets go for a GroupKFold since our data is clustered
    .set_splits(split_out="GroupKFold")
# add the model class
.add_model(LinearModel)
.perform()
)
```
15 changes: 15 additions & 0 deletions test/test_readme.py
@@ -0,0 +1,15 @@
import pathlib

import pytest
from mktestdocs import check_md_file
from neptune.exceptions import NeptuneInvalidApiTokenException


# @pytest.mark.xfail(raises=OSError)
@pytest.mark.xfail(raises=NeptuneInvalidApiTokenException)
@pytest.mark.parametrize("fpath", pathlib.Path(".").glob("*.md"), ids=str)
def test_readme_codeblocks_valid(fpath):
    # Check that the code blocks in the Markdown files execute.
    # A NeptuneInvalidApiTokenException is expected when no API token
    # is configured, hence the xfail marker above.
check_md_file(fpath=fpath)
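For context, `check_md_file` works roughly by pulling the fenced Python code blocks out of the Markdown file and executing them in order, in one shared namespace. A simplified sketch of that idea (this is not mktestdocs' actual implementation):

```python
import re

FENCE = "`" * 3  # avoids writing a literal fence inside this example


def extract_python_blocks(markdown: str) -> list[str]:
    """Collect the bodies of fenced ``python`` code blocks."""
    pattern = re.compile(FENCE + r"python\n(.*?)" + FENCE, re.DOTALL)
    return [m.group(1) for m in pattern.finditer(markdown)]


def run_blocks(markdown: str) -> None:
    """Execute all blocks in one shared namespace, like a single script."""
    namespace: dict = {}
    for block in extract_python_blocks(markdown):
        exec(block, namespace)


doc = "Some prose.\n\n" + FENCE + "python\nx = 2 + 2\n" + FENCE + "\n"
run_blocks(doc)  # raises if any block fails, which is what the test relies on
```

This explains why the test above only needs `check_md_file` plus an `xfail` for the one exception the README's code is expected to raise in an environment without credentials.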