
TypeErrors from Results do not propagate through list creation #1523

Merged

Conversation

aucampia
Member

@aucampia aucampia commented Dec 21, 2021

Creating a list from a query result that generates a TypeError
succeeds and returns an empty list.

The correct behaviour would be for the TypeError to propagate through
the list constructor so that no list is created, the caller is aware
that an error occurred, and the caller does not get a seemingly okay
list object which does not contain the result of the executed query.

This patch adds some tests for SPARQL, including expected-failure tests
that demonstrate the problem of TypeErrors not propagating through the
list constructor.
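In plain Python, an exception raised while iterating does surface from the list() constructor; the defect described here is that it gets swallowed instead. A minimal illustration of the expected behaviour, using a plain generator as a hypothetical stand-in for a query result (generic Python, not rdflib):

```python
def failing_rows():
    # Stand-in for a query result whose iteration hits a bad binding.
    yield 1
    raise TypeError("bad binding")

try:
    rows = list(failing_rows())
except TypeError:
    rows = None  # the caller learns that the query did not complete

assert rows is None  # the TypeError propagated through list()
```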

For clarity: this adds an xfail for a known bug. Once the bug is fixed the tests will be reported as xpass (unexpected pass), and someone should then remove the xfail marker.

@nicholascar
Member

Still working my way through understanding all this code... Looking good so far but I just want to ensure I'm across it all before approving!

@aucampia
Member Author

aucampia commented Jan 2, 2022

No worries, I will look at fixing it when I get time. The problem is quite weird and I'm not entirely sure what the best approach to fixing it is; I think it may be to add some state to the result object, but there may be more elegant fixes.

@nicholascar
Member

So I guess this will appear in the testing logs as an xfail, but how will we indicate, in testing logs, which xfails are expected to always fail versus things which are known failures but which should be fixed?

@aucampia
Member Author

> So I guess this will appear in the testing logs as an xfail, but how will we indicate, in testing logs, which xfails are expected to always fail versus things which are known failures but which should be fixed?

xfail should only ever be used to annotate tests for known defects or deficiencies. Once the defect is fixed the test should pass, be reported as xpass, and we should then go remove the xfail annotation. As far as I know, all xfails currently in our codebase are used like this, at least all the ones I have seen and added.

This is in line with pytest's intent as well; from the pytest docs: Skip and xfail: dealing with tests that cannot succeed

> An xfail means that you expect a test to fail for some reason. A common example is a test for a feature not yet implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with pytest.mark.xfail), it’s an xpass and will be reported in the test summary.

The name is a bit unfortunate, because in practice it is more a known failure than an expected failure; it is actually quite the opposite of an expected failure, but I guess the name is somewhat of a legacy carryover.

In cases where the correct behaviour is that an exception is raised, we should use pytest.raises(). In this case the expected behaviour is that an exception is raised, but it is a known defect that no exception is raised, so the xfail is there because pytest.raises() does not get the exception that it should get.
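The combination described above can be sketched as follows. Here `broken_query_result` is a hypothetical stand-in for the defective result object, whose iteration should raise TypeError but instead ends silently (this is not the actual rdflib test code):

```python
import pytest

def broken_query_result():
    # Simulates the defect: iteration that should raise TypeError
    # instead ends silently, as if the result were simply empty.
    return iter([])

# The correct behaviour is asserted with pytest.raises(); the xfail
# marker records that this assertion currently fails (known defect).
@pytest.mark.xfail(reason="TypeError does not propagate through list()")
def test_typeerror_propagates_through_list():
    with pytest.raises(TypeError):
        list(broken_query_result())
```

Run under pytest, this is reported as xfail while the defect exists, and as xpass once it is fixed.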

Once this problem of exceptions not propagating through list creation is fixed this will report as xpass unless the xfail marker is removed from it.
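As a side note (not stated in the thread): pytest can enforce the removal of stale markers by treating an xpass as a hard failure, via `strict=True` on the marker (or `xfail_strict = true` in the ini file). A minimal sketch with a placeholder assertion:

```python
import pytest

# strict=True turns an unexpected pass (xpass) into a test failure,
# so a stale xfail marker cannot linger unnoticed once the bug is fixed.
@pytest.mark.xfail(reason="known defect: TypeError swallowed", strict=True)
def test_known_defect():
    assert 1 + 1 == 3  # placeholder for the currently-failing assertion
```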

The benefit of having tests marked with xfail for known issues is:

  • It is easy to show someone the expected behaviour as it is expressed in the form of a test which has decent semantics for expressing what something should be doing.
  • It can help us notice if we inadvertently fix a known issue by reporting xfails that pass as xpass.
  • It is useful for writing conformance reports, as it can be mapped to earl:failed. Currently we skip many tests from test suites because they failed at the time the suites were added, but some may have since started passing; when reporting we cannot actually report a skipped test as failing because we don't run it at all, so skip can only really be mapped to earl:untested. If instead the test ran with an xfail marker, it would be reported as failed rather than just skipped.

@aucampia aucampia force-pushed the iwana-20211221T0151-xfail_typeerror branch from 4e8b595 to 38c0ff5 Compare March 16, 2022 21:44
@aucampia aucampia force-pushed the iwana-20211221T0151-xfail_typeerror branch from 38c0ff5 to d7d2df4 Compare April 4, 2022 19:44
@aucampia
Member Author

aucampia commented Apr 4, 2022

Will re-check these when I have some time, the tests fail now but they really should not, so may be a regression.

@aucampia aucampia force-pushed the iwana-20211221T0151-xfail_typeerror branch from d7d2df4 to 6ccd022 Compare April 5, 2022 17:58
@aucampia aucampia force-pushed the iwana-20211221T0151-xfail_typeerror branch from 6ccd022 to aab1ea8 Compare April 5, 2022 17:59
@aucampia aucampia requested review from ashleysommer, a user and white-gecko April 5, 2022 18:00
@aucampia
Member Author

aucampia commented Apr 5, 2022

> Will re-check these when I have some time, the tests fail now but they really should not, so may be a regression.

Problem was due to how CUSTOM_EVAL was referenced, passes now.

@aucampia aucampia added the bug Something isn't working label Apr 9, 2022
@aucampia aucampia merged commit 8fb71bc into RDFLib:master Apr 13, 2022
@aucampia aucampia deleted the iwana-20211221T0151-xfail_typeerror branch April 14, 2022 09:31