100% test coverage #246

Merged · 18 commits · Feb 14, 2022
Conversation

@jiegillet (Contributor) commented on Dec 18, 2021:

In #229, there was a discussion about automated dependency updates being safer with better test coverage, so I took it as a learning opportunity, chipped away at it, and reached full coverage.

The types of changes I made were:

  • Adding tests
  • Removing dead code
  • Refactoring
  • Fixing bugs discovered by the extra tests
  • Duplicating some tests for feature, assert_call and check_source, because for some reason modules with use ElixirAnalyzer.ExerciseTest are not always taken into account by the coverage tool if they are not defined in a *_test.exs file, even when they are actually run. This is probably due to the heavy macro use. These duplicates are a bit unfortunate.

I will add comments a bit later on the parts where I have questions.

I used excoveralls to find the details of what was or wasn't covered, but I didn't commit that change so as not to add a dependency. I could do it though, and we could use https://coveralls.io/ to get a badge. I'd be happy to add that (in a different PR, I guess).
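
For reference, the usual setup is only a couple of lines in mix.exs (sketched here, not committed in this PR; the version constraint is just an example):

# Sketch of a typical excoveralls setup (not part of this PR):
# point mix's coverage tool at ExCoveralls and add the test-only dependency.
def project do
  [
    # ...existing project config...
    test_coverage: [tool: ExCoveralls],
    preferred_cli_env: [coveralls: :test, "coveralls.html": :test]
  ]
end

defp deps do
  [
    # ...existing deps...
    {:excoveralls, "~> 0.14", only: :test}
  ]
end

With that in place, mix coveralls.html generates a per-line HTML report of what is and isn't covered.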

@jiegillet added the labels x:action/improve (Improve existing functionality/content), x:module/analyzer (Work on Analyzers), x:type/coding (Write code that is not student-facing content, e.g. test-runners, generators, but not exercises) and x:size/large (Large amount of work) on Dec 18, 2021
lib/elixir_analyzer.ex (comment thread resolved)

* `:path` - path to the submitted solution, defaults to the `path` parameter

* `:output_path` - path to write file output, defaults to the `path` parameter

Contributor (Author):

I did not understand these options; all they do is overwrite the required arguments. They were passed from the CLI module, so I got rid of them.

You may also test only individual files :
(assuming analyzer tests are compiled for the named module)

$ exercism_analyzer --analyze-file <full-path-to-.ex>:<module-name>

Contributor (Author):

This was an option in the CLI, but it could not actually be passed to the analyzer, so I got rid of it.


You may also pass the following options:
--skip-analysis flag skips running the static analysis

Contributor (Author):

I don't understand the purpose of skipping analysis except for debugging, so I got rid of that option too. I think it still exists somewhere in feature though, although it's never used.

Member:

> I think it still exists somewhere in feature though, although it's never used.

Do you mean the ability to skip a single check? Because I don't see anything related to skipping analysis as a whole anywhere else 🤔

Contributor (Author):

Yes, I suppose that's what I meant. There's probably no relation beyond the name.
Anyway, I don't see the point of skipping anything; just remove the code :)

With depth provided to a function
"""
@spec prewalk(Macro.t(), (Macro.t(), non_neg_integer -> Macro.t())) :: Macro.t()
def prewalk(ast, fun) when is_function(fun, 2) do

Contributor (Author):

I was a bit hesitant to get rid of this. It seems like it could be useful, but it's not actually used anywhere. I'm assuming at this point we won't need to add another complex analysis tool.

Member:

No mercy for dead code! Even if it looks potentially useful, we will be able to recreate it when we need it. Making the code base easier to understand by removing trash is the better option every time, IMO.
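
(If we ever do need a depth-aware prewalk again, it is only a few lines on top of Macro.traverse/4 anyway; something roughly like this, a sketch rather than the removed implementation:)

# Sketch: a prewalk that also passes the current nesting depth to the callback.
# The pre step hands the node and its depth to fun and increments the depth for
# the children; the post step restores the depth when leaving the node.
def prewalk(ast, fun) when is_function(fun, 2) do
  {new_ast, _depth} =
    Macro.traverse(
      ast,
      0,
      fn node, depth -> {fun.(node, depth), depth + 1} end,
      fn node, depth -> {node, depth - 1} end
    )

  new_ast
end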

@@ -121,4 +121,89 @@ defmodule ElixirAnalyzer.ExerciseTest.CheckSourceTest do
"""
]
end

test "full check_source definition in a test file for coverage" do
defmodule Coverage do

Contributor (Author):

This is one of those cases where, even though these things are used somewhere else, somehow the coverage tool doesn't see it. It's a bit messy, but I suppose it doesn't hurt to have redundant tests.

@@ -150,6 +170,111 @@ defmodule ElixirAnalyzerTest do
end
end

describe "different failures" do
test "summary for a submission that did not run" do

Contributor (Author):

This one is a bit artificial: I needed a test that produces a report with an incomplete analysis, but could not really find a branch in the code that would lead to one. From the point of view of Summary.summary/2, it makes sense to have that formatting option though...


feature "skip" do
comment "skip"
status :skip

Contributor (Author):

Do we really need to keep that status field? It might have been useful for debugging, but we don't actually use it anywhere in the code. I think it's commented out in two-fer.

Member:

My guess is that it would be safe to remove status. @neenjaw do you have any thoughts about this?


feature "meta" do
comment "meta"
meta(keep_meta(true))

Contributor (Author):

Same here: do we need this meta field?
I can't imagine an application where we would want to keep the meta information. In fact, I'm pretty sure that setting keep_meta(true) is equivalent to "never match this form".

Member:

I think we can delete it. Forcing meta to match would mean forcing line numbers to match, which doesn't seem useful to me for anything. Adding an empty new line anywhere would mess up the whole analysis? Nah.
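
(For illustration, this is just standard Elixir AST behaviour rather than anything analyzer-specific: the metadata on a quoted form is essentially its line number, so two otherwise identical forms on different lines would never be considered equal.)

iex> Code.string_to_quoted!("1 + 1")
{:+, [line: 1], [1, 1]}
iex> Code.string_to_quoted!("\n1 + 1")
{:+, [line: 2], [1, 1]}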

Contributor (Author):

Ok, I'll get rid of it. In a different PR :)

@angelikatyborska (Member) left a comment:

Excellent job and very valuable changes 👏. Only one more fix to README.md is necessary before merging.

If you want to keep removing stuff (skip and metadata for example), can you open a new PR for that?

> I could do it though, and we could use coveralls.io to get a badge. I'd be happy to add that (in a different PR, I guess).

Totally, go for it!

lib/elixir_analyzer/cli.ex (comment thread resolved)
@@ -256,7 +250,7 @@ defmodule ElixirAnalyzer do

submission =
submission
|> submission.analysis_module.analyze(submission.source)
|> submission.analysis_module.analyze()

Member:

Nice getting rid of a duplicated argument 👍



@@ -60,7 +60,7 @@ defmodule ElixirAnalyzer.ExerciseTest.CommonChecks.FunctionCapture do
functions
end

{node, %{capture_depth: depth - 1, functions: functions}}
{node, %{capture_depth: depth, functions: functions}}

Member:

How did you discover this bug? When I bring it back, no tests fail, so it wasn't by writing tests 🤔

Contributor (Author):

Sorry about that! It took me a while to remember.
I think it was from this test:

# https://github.com/exercism/elixir/blob/main/exercises/practice/sgf-parsing/.meta/example.ex
defmacrop lazy(parser) do
quote do
fn string -> unquote(parser).(string) end
end
end

At some point I was checking whether actual_function? gave the right result. I was printing some values and expecting that test case to pop up, but it never did, so I realized that depth was becoming negative.

I've added a failing test now. Let that be a valuable lesson in never fixing bugs without context.
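
(To make the depth bookkeeping concrete, here is a rough standalone sketch of the idea, not the actual FunctionCapture module: the counter should only change when entering or leaving a capture node, so an unconditional decrement like the one fixed here can push it below zero.)

# Sketch of capture-depth tracking with Macro.traverse/4: increment only when
# entering a capture (&) node, decrement only when leaving one, and leave the
# counter untouched for every other node so it can never go negative.
defmodule CaptureDepthSketch do
  def max_capture_depth(ast) do
    {_ast, acc} =
      Macro.traverse(
        ast,
        %{depth: 0, deepest: 0},
        fn
          {:&, _, _} = node, acc ->
            depth = acc.depth + 1
            {node, %{acc | depth: depth, deepest: max(acc.deepest, depth)}}

          node, acc ->
            {node, acc}
        end,
        fn
          {:&, _, _} = node, acc -> {node, %{acc | depth: acc.depth - 1}}
          node, acc -> {node, acc}
        end
      )

    acc.deepest
  end
end

For a capture-free anonymous function like the lazy/1 example above, the counter should never move at all, which is presumably why the unconditional decrement drove it negative.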

Comment on lines +56 to +58
unless Keyword.has_key?(feature_data, :comment) do
raise "Comment must be defined for each feature test"
end

Member:

Awesome 🎉 I didn't notice before that this one macro was missing the comment check.


@jiegillet (Contributor, Author):

OK, I think I've addressed everything; please check my last 2 commits.
I'll wait for @neenjaw's input before removing :status, but that would be in a different PR anyway.

As always, thank you for your incredible thoroughness in your reviews, especially for this huge one!

Yay 100%!
