
[MRG] cleaned tests #20

Open · wants to merge 9 commits into master

Conversation

@QB3 (Owner) commented Nov 21, 2019

solves #4

@codecov-io commented Nov 21, 2019

Codecov Report

Merging #20 into master will increase coverage by 20.8%.
The diff coverage is 97.77%.

@@            Coverage Diff             @@
##           master      #20      +/-   ##
==========================================
+ Coverage   65.78%   86.59%   +20.8%     
==========================================
  Files           9        6       -3     
  Lines         681      552     -129     
==========================================
+ Hits          448      478      +30     
+ Misses        233       74     -159
Impacted Files                 Coverage Δ
clar/duality_gap.py            100% <ø> (+66.66%)
clar/utils.py                  55.43% <ø> (+15.56%)
clar/tests/test_solvers.py     100% <100%> (ø)
clar/data/artificial.py        96.29% <66.66%> (+24.23%)
clar/solvers.py                86.75% <88.88%> (+8.49%)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 4fb53df...d93b9bd.

@QB3 (Owner, Author) commented Nov 21, 2019

ready for merging IMO

@QB3 changed the title from "cleaned tests 1" to "[MRG] cleaned tests" on Nov 21, 2019
@QB3 (Owner, Author) commented Nov 21, 2019

I think this is ready to merge @mathurinm

B_mtl, _, E, gaps = solver(
X, Y, alpha, sigma_min, B0=None,
tol=tol, pb_name=pb_name, n_iter=10000)
gap = gaps[-1]
Collaborator:

@QB3 you can directly assert using gaps[-1]
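The suggestion, as a minimal sketch (the `gaps` values below are made up; in the test they come from `solver`):

```python
import numpy as np

# Illustrative values: in the test, gaps comes from
# _, _, E, gaps = solver(X, Y, alpha, sigma_min, tol=tol, pb_name=pb_name)
gaps = np.array([1.2e-1, 3.0e-4, 5.0e-9])
tol = 1e-7

# No intermediate `gap = gaps[-1]` variable: assert on gaps[-1] directly.
np.testing.assert_array_less(gaps[-1], tol)
```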

X, Y, alpha, sigma_min, B0=B_mtl,
tol=tol, pb_name=pb_name, n_iter=10000)
np.testing.assert_equal(len(E), 2)
gap = gaps[-1]
Collaborator:

ditto

tol = 1e-7

X, all_epochs, _, _ = get_data_me(
dictionary_type="Gaussian", noise_type="Gaussian_iid",
Collaborator:

maybe remove those which are default values

sigma_min = get_sigma_min(Y)
alpha_max = get_alpha_max(X, Y, sigma_min, "SGCL")
alpha = alpha_max * 0.9
print("alpha = %.2e" % alpha)
Collaborator:

rm

log_gap = np.log10(gaps[-1])
log_gap_accel = np.log10(gaps_accel[-1])
np.testing.assert_array_less(
np.minimum(log_gap, log_gap_accel), np.log10(tol) * E[0])
Collaborator:

why taking log everywhere?
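A log-free version of the same check, as a hedged sketch (all values here are illustrative, not from the test suite):

```python
import numpy as np

# Illustrative values: in the test these come from the plain and
# accelerated runs of the solver.
gaps = np.array([1e-1, 1e-5, 3e-10])
gaps_accel = np.array([1e-1, 1e-6, 8e-11])
tol = 1e-7
E = [2.5]  # E[0]: first energy value, used to scale the tolerance

# Direct comparison, no logs: at least one of the two runs reached
# the scaled tolerance.
best_gap = min(gaps[-1], gaps_accel[-1])
assert best_gap < tol * E[0]
```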

n_iter=10**4, tol=10**-4)[0]

old_size_supp = 0
for supp in dict_masks.values():
Collaborator:

what are we testing here?

Owner Author (@QB3):

That the size of the support is growing. What would you like to test?

Collaborator:

Although it's a valid intuition, we can't conclude anything if the test breaks: there is no proof that the supports have growing sizes.

tol=tol, pb_name=pb_name, n_iter=1000,
alpha_Sigma_inv=alpha_Sigma_inv)[-2]

assert (Es[-1] - Es[-2]) <= 1e-10
Collaborator:

use numpy
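What "use numpy" could look like here, assuming `Es` is the array of energies returned by the solver (the values below are illustrative):

```python
import numpy as np

# Illustrative values: in the test, Es is the list of energies returned
# by the solver with alpha_Sigma_inv.
Es = np.array([10.0, 4.2, 4.2 + 1e-12])

# numpy version of `assert (Es[-1] - Es[-2]) <= 1e-10`, with an
# informative message on failure:
np.testing.assert_array_less(Es[-1] - Es[-2], 1e-10)
# or, asserting the last two energies coincide up to the tolerance:
np.testing.assert_allclose(Es[-1], Es[-2], atol=1e-10)
```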

alpha_Sigma_inv = 0.01

Y = np.mean(all_epochs, axis=0)
sigma_min = get_sigma_min(Y)
Collaborator:

can you check that the duality gap is 0 at alpha_max and that the solver returns 0?
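A sketch of the suggested check, illustrated on a plain Lasso rather than the actual CLaR/SGCL objective (so `alpha_max`, the primal, and the dual point below are the standard Lasso ones, not clar's functions): at alpha = alpha_max, B = 0 is optimal and its duality gap is exactly 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 5
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# At alpha = alpha_max, B = 0 is optimal: the residual-based dual point
# theta = y / n is feasible (||X.T @ theta||_inf == alpha_max), and the
# primal-dual gap at B = 0 vanishes.
alpha_max = np.max(np.abs(X.T @ y)) / n

B = np.zeros(p)
primal = np.sum((y - X @ B) ** 2) / (2 * n) + alpha_max * np.abs(B).sum()
theta = (y - X @ B) / n
dual = np.sum(y ** 2) / (2 * n) - n / 2 * np.sum((theta - y / n) ** 2)
gap = primal - dual

np.testing.assert_allclose(gap, 0, atol=1e-12)
```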

@@ -185,19 +185,3 @@ def get_toeplitz_dictionary(
covar = toeplitz(vect, vect)
Collaborator:

you can fix this too

_, _, E, gaps = solver(
X, Y, alpha, sigma_min, B0=B_mtl,
tol=tol, pb_name=pb_name, n_iter=10000)
np.testing.assert_equal(len(E), 2)
Collaborator:

is it normal that convergence happens so fast?

Owner Author (@QB3):

It seems fair, since we are passing B0=B_mtl with the same arguments?
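A toy illustration of why a warm start makes the energy list so short, using plain gradient descent on a quadratic (not clar's solver): started at the previous solution, the stopping criterion fires on the first check.

```python
import numpy as np

def gd(A, b, x0, tol=1e-10, max_iter=1000):
    """Gradient descent on f(x) = x'Ax/2 - b'x; E records f at each check."""
    x = x0.copy()
    E = []
    step = 1 / np.linalg.norm(A, 2)  # 1 / Lipschitz constant of the gradient
    for _ in range(max_iter):
        E.append(0.5 * x @ A @ x - b @ x)
        grad = A @ x - b
        if np.linalg.norm(grad) < tol:
            break
        x -= step * grad
    return x, E

A = np.diag([1.0, 2.0, 3.0])
b = np.ones(3)
x_cold, E_cold = gd(A, b, np.zeros(3))  # cold start: many iterations
x_warm, E_warm = gd(A, b, x_cold)       # warm start at the solution
assert len(E_warm) <= 2                 # stops right away, like len(E) == 2
```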

n_sources=n_sources, n_active=3, rho_noise=rho_noise,
SNR=SNR)

Y = np.mean(all_epochs, axis=0)
Collaborator:

can we avoid defining this variable?


alpha_max = get_alpha_max(X, all_epochs, sigma_min, pb_name=pb_name)

alpha_div = 1.1
Collaborator:

maybe it's better to test on a more difficult problem?
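One way to build a harder instance, sketched with numpy-generated data and a Lasso-style alpha_max (the names `X`, `y`, `rho` below are illustrative, not from the test suite): correlate the features and take alpha far below alpha_max instead of alpha_max / 1.1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, rho = 50, 30, 0.9

# Toeplitz-correlated design: covar[i, j] = rho ** |i - j|
idx = np.arange(p)
covar = rho ** np.abs(idx[:, None] - idx[None, :])
X = rng.multivariate_normal(np.zeros(p), covar, size=n)
y = rng.standard_normal(n)

# Lasso-style alpha_max for illustration, and an alpha much smaller
# than alpha_max (a harder problem than alpha_max / 1.1)
alpha_max = np.max(np.abs(X.T @ y)) / n
alpha = alpha_max / 20
```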

alpha_max = get_alpha_max(X, all_epochs, sigma_min, pb_name=pb_name)

alpha_div = 1.1
alpha = alpha_max / alpha_div
Collaborator:

directly divide by the value you want? (say 20?)

Y = all_epochs.mean(axis=0)
sigma_min = get_sigma_min(Y)
alpha_max = get_alpha_max(X, Y, sigma_min, "SGCL")
alpha = alpha_max * 0.9
Collaborator:

again a very easy problem


B_mtl, _, E, gaps = solver(
X, Y, alpha, sigma_min, B0=None,
tol=tol, pb_name=pb_name, n_iter=10000)
Collaborator:

maybe rm n_iter?

alpha = alpha_max / alpha_div

B_mtl, _, E, gaps = solver(
X, Y, alpha, sigma_min, B0=None,
Collaborator:

B0 = None is default so not needed

all_epochs[0] = Y

_, _, E, (gaps, gaps_accel) = solver(
X, Y, alpha, sigma_min, B0=None, n_iter=n_iter,
Collaborator:

rm parameters having their default value

@mathurinm (Collaborator):

One last pass @QB3 :) !

@mathurinm (Collaborator):

Good to merge @QB3 ?
