GNOME Bugzilla – Bug 754052
validate: launcher: Add support to skip already passed tests
Last modified: 2018-11-03 11:07:19 UTC
If a test has already passed, it would be good to skip it and continue with only the failed tests. Since the test status is saved in the log file, the launcher can read that file and skip a test case if it already passed. Added a new argument --skip-passed for this scenario... This might be better to run by default; in that case we could add a --force argument to force all test cases to be run irrespective of the previous result.
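(A minimal sketch, in Python like the launcher itself, of the idea described above; the function and argument names are hypothetical and this is not the attached patch:)

def tests_to_run(all_tests, previous_results, force=False):
    # all_tests: iterable of test names
    # previous_results: dict mapping test name -> "passed" / "failed" / ...
    # force: when True, ignore previous results and run everything
    if force:
        return list(all_tests)
    return [t for t in all_tests if previous_results.get(t) != "passed"]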
Created attachment 309939 [details] [review] skip already passed tests
ping :)
Review of attachment 309939 [details] [review]: What is your use case for that exactly?
I have around 1000+ media files, and running all the test cases for these contents takes a lot of time. Unless the code is changed drastically or the media info is updated, it makes sense to check only the failed test cases again. It so happens that there will be around 20-30 failed test cases out of, say, 1000 test cases. After fixing those failed test cases, instead of running each test case individually or running the whole set of 1000 test cases, it would be better to run only the 20-30 failed test cases.

The idea started when I had to shut down the PC when almost 80% of the test cases had already been executed, and after restarting it had to run all the test cases again. I felt an option like this would be better :)
Review of attachment 309939 [details] [review]: The way of doing it is not really nice; could you rather do --skip-passed previous_result.xml (passing an Xunit file) and use the information from it about whether a test passed or not? It would be much cleaner :)
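(A rough sketch of what reading such an xUnit file could look like, assuming the usual JUnit-style layout where a <testcase> with no <failure>/<error>/<skipped> child counts as passed; this is just an illustration, not the attached patch:)

import xml.etree.ElementTree as ET

def passed_tests(xunit_path):
    # Collect "classname.name" identifiers of test cases that passed,
    # i.e. that have no failure/error/skipped child element.
    passed = set()
    for case in ET.parse(xunit_path).iter("testcase"):
        if all(case.find(tag) is None for tag in ("failure", "error", "skipped")):
            passed.add("%s.%s" % (case.get("classname", ""), case.get("name", "")))
    return passed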
(In reply to Vineeth from comment #4)
> I have around 1000+ media files, and running all the test cases for these
> contents takes a lot of time. Unless the code is changed drastically or the
> media info is updated, it makes sense to check only the failed test cases
> again. It so happens that there will be around 20-30 failed test cases out
> of, say, 1000 test cases. After fixing those failed test cases, instead of
> running each test case individually or running the whole set of 1000 test
> cases, it would be better to run only the 20-30 failed test cases.
>
> The idea started when I had to shut down the PC when almost 80% of the test
> cases had already been executed, and after restarting it had to run all the
> test cases again. I felt an option like this would be better :)

OK, that makes sense :) Any way we could put your test suite in the open in gst-validate-integration-testsuites? :)
(In reply to Thibault Saunier from comment #5)
> Review of attachment 309939 [details] [review] [review]:
>
> The way of doing it is not really nice; could you rather do --skip-passed
> previous_result.xml (passing an Xunit file) and use the information from it
> about whether a test passed or not? It would be much cleaner :)

I tried this method. One major issue with it: the first time we run the tests, we create previous_results.xml; say out of 100 cases there are 60 passes and 40 failures. For the 2nd run, I make fixes for some of the failed cases, check the xml file, see 60 passed results, skip those, and run only the 40 failed cases. Out of those 40, 30 now succeed and 10 still fail, so the new previous_results.xml contains only 30 passed cases and 10 failed cases. For the 3rd run, I make some more fixes for the 10 failed cases. When I check the xml file this time, there are only 30 passed cases, so only those 30 are skipped. That means the 60 cases that passed during the first run will be run again. So instead of running only the 10 failed cases, we run the 60 passes from the first run plus the 10 failures from the 2nd run... Not sure if I explained it properly :)

One solution for this could be: when running the tests for the 2nd time and skipping the 60 passed tests, even though we don't run them, we copy their results into the newly generated xunit.xml file, so that after the 2nd run the xml file contains 90 passed tests and 10 failed tests. Please guide me if this is the right way, or if you have a better solution :)

(In reply to Thibault Saunier from comment #6)
> Any way we could put your test suite in the open in
> gst-validate-integration-testsuites? :)

Very sorry about this. I won't be able to share the media files due to company policies :(
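(To illustrate the workaround described above: a sketch, assuming the usual JUnit layout, of copying the <testcase> entries of skipped, previously passed tests into the new report so the pass count accumulates across runs. File and function names are hypothetical; this is not the attached patch.)

import xml.etree.ElementTree as ET

def carry_over_passes(prev_path, new_path, out_path):
    prev = ET.parse(prev_path)
    new = ET.parse(new_path)
    suite = new.getroot()  # assume <testsuite> is the document root
    seen = {(c.get("classname"), c.get("name")) for c in suite.iter("testcase")}
    for case in prev.getroot().iter("testcase"):
        key = (case.get("classname"), case.get("name"))
        passed = case.find("failure") is None and case.find("error") is None
        if passed and key not in seen:
            suite.append(case)  # keep the old pass in the new report
    # (a real implementation would also update the testsuite's
    # tests/failures counter attributes)
    new.write(out_path)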
I am not sure I understood the problem you describe here, but if I did, the solution is simply to always replace last_run.xml with the new test run results at the end of each run.

> Very sorry about this. I won't be able to share the media files due to
> company policies :(

I see, no problem.
Created attachment 317769 [details] [review] skip already passed tests

Changed it so that an xunit.xml file can be passed as a parameter. The passed tests are copied into the output xunit.xml, which helps in working out which tests to skip on the next run.
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-devtools/issues/18.