Tests into Develop #30

Merged: 66 commits merged into develop from tests on Oct 18, 2024
Conversation

cebarboza (Collaborator):

Working version.

AniekMarkus and others added 30 commits starting July 12, 2024, including:
BranchBound, PrintCutoff, and binary reduction to settings
AniekMarkus (Contributor):

When OutputMethod = "EVERY", the trainExplore function becomes very slow now that resultsExplore is called at the end of trainExplore. One option would be to add a 'fast return of results' mode to resultsExplore, e.g. returning only model, candidateModels, and countCombinations (see the sketch below).

I have now updated the default of OutputMethod to "BEST" (instead of "EVERY").
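
A minimal sketch of how such a 'fast return of results' mode could behave; the function name resultsExplore_fast and its arguments are hypothetical and only illustrate the idea of returning just model, candidateModels, and countCombinations, since the real resultsExplore() signature in Explore may differ.

# Hypothetical sketch only: the actual resultsExplore() API may differ.
# The idea is to skip the expensive post-processing and return just the
# three cheap elements mentioned above.
resultsExplore_fast <- function(full_results, fastReturn = TRUE) {
  if (fastReturn) {
    return(full_results[c("model", "candidateModels", "countCombinations")])
  }
  full_results  # otherwise return everything (the slow path)
}

# Illustrative usage with a dummy results list:
res <- list(model = "rule set", candidateModels = letters[1:5],
            countCombinations = 42, otherOutput = "expensive extras")
str(resultsExplore_fast(res))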

cebarboza (Collaborator, PR author) commented Oct 17, 2024:

@AniekMarkus Tests are passing for binary_3, binary_10, continuous_4_small, and mix_4 with the following config:

result <- trainExplore(train_data = train_data,
                       settings_path = NULL,
                       output_path = output_path,
                       file_name = file_name,
                       OutputFile = NULL,
                       StartRulelength = StartRulelength,
                       EndRulelength = EndRulelength,
                       OperatorMethod = "EXHAUSTIVE",
                       CutoffMethod = "ALL", #"RVAC",
                       ClassFeature = config$class_feature,
                       PositiveClass = config$positive_class,
                       FeatureInclude = "",
                       Maximize = "BALANCEDACCURACY",#"ACCURACY",
                       Accuracy = 0,
                       BalancedAccuracy = 0,
                       Specificity = 0,
                       PrintSettings = TRUE,
                       PrintPerformance = TRUE,
                       Subsumption = TRUE,
                       BranchBound = TRUE,
                       Parallel = FALSE,
                       PrintCutoffSets = TRUE,
                       Sorted = "none",
                       OutputMethod = "BEST", #"EVERY",
                       BinaryReduction = BinaryReduction)
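
For context, a hedged sketch of how the inputs assumed by that call (train_data, output_path, file_name, the rule lengths, BinaryReduction, and config) might be set up; the paths and values below are illustrative assumptions, not the actual test harness.

# Illustrative setup only; paths and values are assumptions, not the real test config.
library(Explore)

train_data <- farff::readARFF("inst/examples/complexity/binary_3.arff")  # assumed path in the repo
output_path <- tempdir()
file_name <- "binary_3"
StartRulelength <- 1
EndRulelength <- 3
BinaryReduction <- TRUE

# Assumed config fields: the outcome column name and the positive class label.
config <- list(class_feature = "class", positive_class = "1")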

AniekMarkus (Contributor) commented Oct 17, 2024:

@cebarboza

continuous_4:
-> the expected values were incorrect and should be:
   Length 1: 80
   Length 2: 5160
   Length 3: 235608 (?) -> this value still needs to be checked; let's fix it later (remove this test for now or leave it commented out)
-> the setting Subsumption = TRUE should be FALSE; that gives the correct number for rule length 1
-> if it still crashes, it may be because candidateModels is (very) large, so it might be better to use countRulesWithoutConstraints instead of length(candidateModels) here (see the sketch below)
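
A hedged sketch of what the suggested test change could look like; it assumes the suite uses testthat and that countRulesWithoutConstraints() accepts the trainExplore result plus a rule length, both of which are assumptions about the Explore API rather than confirmed signatures.

# Hedged sketch; the countRulesWithoutConstraints() arguments are assumed, not confirmed.
library(testthat)
library(Explore)

test_that("continuous_4 candidate counts match the corrected expected values", {
  result <- trainExplore(train_data = train_data,  # same inputs as the config above
                         StartRulelength = 1,
                         EndRulelength = 2,
                         Subsumption = FALSE,      # TRUE gave the wrong count for rule length 1
                         OutputMethod = "BEST")
  # Avoid length(candidateModels): candidateModels can be very large here.
  expect_equal(countRulesWithoutConstraints(result, rule_length = 1), 80)
  expect_equal(countRulesWithoutConstraints(result, rule_length = 2), 5160)
  # Length 3 (235608?) still needs to be verified, so it stays commented out:
  # expect_equal(countRulesWithoutConstraints(result, rule_length = 3), 235608)
})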
 
categorical_4:
-> these expected values are correct
-> the problem is reading the data file into R when the columns are of type 'factor':
   train_data <- farff::readARFF("~/Documents/Git_Projects/Explore/inst/examples/complexity/categorical_4.arff")
-> let's fix this later (remove these tests for now or leave them commented out); one possible workaround is sketched below
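
A hedged sketch of one possible workaround for the factor-column issue; converting factor columns to character before training is an assumption about what trainExplore expects, not the agreed fix.

# Possible workaround sketch; whether trainExplore wants character (or numeric)
# columns here is an assumption.
train_data <- farff::readARFF("inst/examples/complexity/categorical_4.arff")  # path relative to the repo root

# readARFF() returns nominal attributes as factors; convert them explicitly.
factor_cols <- vapply(train_data, is.factor, logical(1))
train_data[factor_cols] <- lapply(train_data[factor_cols], as.character)

str(train_data)  # check the column types before calling trainExplore()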
 
categorical_4_large:
-> can be removed completely (I don't have the expected values for this yet)

cebarboza merged commit 1eee1be into develop on Oct 18, 2024 (1 of 6 checks passed).
cebarboza deleted the tests branch on October 18, 2024 at 12:24.