Allow expectThat to return True/False if the condition passed or failed. #239
Hi @blakev, I don't understand the need for this change. Perhaps you could show an example test where the existing functionality makes things awkward and this feature would help? Generally speaking, I think adding flow control (if/else/etc.) statements to a test is a bad idea: it becomes harder to reason about what was tested by looking at the code. If you find yourself needing to do this, there's almost certainly a better way - perhaps by splitting the tests into two or more tests that test each path separately. Cheers,
The updated return statement doesn't break anything, because you wouldn't be calling the method expecting a return value in the first place.
Hi @blakev, You are correct about the return value - that's what happens when you review code before your first coffee in the morning, sorry! I still don't understand the rationale for wanting this. You say "The test has no conditional logic, instead the logging/reporting metrics need to know which expects passed and which failed." - which reporting mechanisms, exactly? Perhaps you could show us what you're ultimately trying to achieve here? Some example code would help, I think.
Consider a per-test-method tear down that logs externally:

def expectThat(self, matchee, matcher, message='', verbose=False):
    """ An assertion helper method.

    Args:
        matchee: An object to match with matcher.
        matcher: An object meeting the testtools.Matcher protocol.
        message (str)
        verbose (bool)

    Returns:
        None
    """
    self.logger.debug('called expectThat, ' + message)
    # broad wrapper around testtools' `expectThat` method for additional
    # conditions and logging.
    super(CouncilTestCase, self).expectThat(matchee, matcher, message, verbose)

def tearDown(self):
    """ TearDown a test method defined in a TestCase class. """
    self.logger.debug('test-case teardown')
    interfaces = self._result_interfaces()
    if interfaces:
        self.logger.debug('logging pass/fail details to (%d) interfaces' % len(interfaces))
        # if we have a results instance class defined, then we want to log
        # these test results to that database using its interface.
        details = self.formatted_details(misc=self._misc_to_dict())
        if self.failed:
            row_id = self.__log_failed(details, interfaces)
            self.push_status('Failed', {'done': True, 'passed': False}, 100)
        else:
            row_id = self.__log_success(details, interfaces)
            self.push_status('Passed', {'done': True, 'passed': True}, 1)
        self.push_status('Logged to database', {'database_rowid': row_id})
    self._auto_teardown(None)
    self._with_driver = False
    self._clear_misc(delete_persist=False)
    super(CouncilTestCase, self).tearDown()

You know whether or not the whole method passed using the standard machinery. It would be much easier to know, per expectThat call, which checks passed and which failed. By simply returning True/False, we could avoid parsing the MismatchErrors back out of the details, which currently looks something like this:

for key, value in detail_items:
    # iterate through all of the details,
    # trying to assign them into buckets with like-typed objects
    if key.lower().startswith(config.mismatch_error_str):
        # filter out all of the MismatchErrors, logged by a call
        # to :func:`.expectThat`.
        split_msg = value.as_text().split(':', 2)
        if len(split_msg) == 2:
            # no message parameter passed, fill with a blank
            split_msg.append('')
        # strip off all of the whitespace
        split_msg = list(map(lambda x: unicode(x).strip(), split_msg))
        _, err_str, msg_str = split_msg
        mismatches.append('%s,%s' % (err_str, msg_str))
        continue
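For illustration only, here is a minimal sketch of what such a return value could look like in a wrapper like the one above. It is not the actual testtools implementation; it just assumes (as the parsing loop above does) that testtools records each failed expectThat as a new detail, and the class name is hypothetical:

import testtools


class BoolCheckTestCase(testtools.TestCase):
    """Hypothetical sketch: an expectThat wrapper that also reports its outcome."""

    def expectThat(self, matchee, matcher, message='', verbose=False):
        # A failed expectThat leaves a new mismatch detail behind, so comparing
        # the detail count before and after the call tells us whether this
        # particular check passed.
        before = len(self.getDetails())
        super(BoolCheckTestCase, self).expectThat(matchee, matcher, message, verbose)
        return len(self.getDetails()) == before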
If I may summarise to see if I understand the crux of the issue:
Sure, you can do that with something like your patch, and I'm OK with something along those lines being added - though we should take the time to consider the impact on async tests, where futures are a big part of the API. I'm not sure if you are aware, but tearDown is not guaranteed to run - if setUp fails, tearDown does not run. Only cleanUps are guaranteed to run. The reason for this is fundamentally backwards compatibility, though cleanUps can provide stronger guarantees in part due to Python's cooperative subclassing behaviour (we can't guarantee that someone remembers to upcall for tearDown). So I wonder if it would be useful to decouple the two considerations you have: capturing per-check data on both success and failure, and reporting that data out to an external system.
For the former, addDetails will capture arbitrary (and in principle arbitrarily-sized, though I'm sure we have not got it streaming and/or buffering everywhere as yet) attachments on both success and failure (the default UI just doesn't show anything if the test case succeeds). For the latter, a custom StreamResult object would do the job.

Anyhow, I don't intend to try and force you to solve the problem the way I would ;) - so let's focus on the review at hand.
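As an illustration of the StreamResult route (my own sketch, not code from this thread, assuming testtools' StreamResult.status() event hook is the right extension point and that the logger name is a stand-in for a real reporting backend):

import logging

import testtools

log = logging.getLogger('external-reporter')  # stand-in for a real reporting backend


class ExternalLoggingResult(testtools.StreamResult):
    """Sketch: forward per-test status events (and attached details) to an
    external sink as they arrive, instead of mining them in tearDown."""

    def status(self, test_id=None, test_status=None, test_tags=None,
               runnable=True, file_name=None, file_bytes=None, eof=False,
               mime_type=None, route_code=None, timestamp=None):
        if test_status is not None:
            # e.g. 'inprogress', 'success', 'fail', 'skip'
            log.info('%s -> %s', test_id, test_status)
        if file_name and file_bytes:
            # attachments (such as the MismatchError details) arrive as file events
            log.debug('%s detail %s: %r', test_id, file_name, file_bytes[:200])

Wiring such a result into a test runner is runner-specific, so that part is omitted here.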
@blakev is the summary that Robert provided a good representation of the issue at hand? If it is, would you be happy to go down that path (although it surely requires more work on your side), or would you like to move forward with the PR attached to this issue? In the latter case, could you please address the review comments from Robert?
@freeekanayaka I implemented his suggestion in the project I was working on and forgot to go back and apply it upstream. I'll work on it this week and resubmit a pull request basically mirroring @rbtcollins' outline.
Because expectThat does a deferred assertion, it would be nice to be able to know, while the TestCase is running, whether or not the condition passed, for advanced logging or logic. Our use case is using expectThat matchers in long-running tests that need to log the successes AND failures of those checks, not just the ones that failed at the end of the run (returned in getDetails()).
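For context, a hypothetical sketch of the usage this request would enable. Equals is a real testtools matcher; check_service() and report_externally() are placeholder names, and the boolean return value is the proposed behaviour (current testtools' expectThat returns None):

import testtools
from testtools.matchers import Equals


def check_service(step):
    """Placeholder for whatever long-running thing is being exercised."""
    return 'ok'


def report_externally(step, passed):
    """Placeholder for an external logging/reporting hook."""
    print('step %d: %s' % (step, 'passed' if passed else 'failed'))


class LongRunningExample(testtools.TestCase):

    def test_service_stays_healthy(self):
        for step in range(10):
            # Proposed behaviour: the deferred check reports its own outcome...
            passed = self.expectThat(check_service(step), Equals('ok'), 'step %d' % step)
            # ...so successes AND failures can be logged as they happen, rather
            # than only being recovered from getDetails() after the run.
            report_externally(step, passed)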