Hi devs and QA,
I’d like to follow up on the Manual tests vs Automated tests analysis.
@ilie.andriuta It would be interesting to review the results since December 2020 and check whether or not the strategy is useful.
- How many manual tests have we been able to mark as automated, and thus exclude from manual testing, since the beginning of this strategy?
- How many Jira issues were created for automated tests that need improvement?
** Answer: 41 as of today, with only 3 closed unfortunately.
** BTW @ilie.andriuta, has the manual test status been updated for these 3 tests?
- How fast are you adding new tests for new features/improvements? It would be great to have a history of new or updated tests per month.
** I don’t remember many cases where QA noticed a new feature, improvement, or even an important bug fix in the release notes, asked whether there are automated tests for it, and reviewed them to decide whether a manual test needs to be added.
I don’t have the figures yet, but my guess is that the conclusion will probably be:
- Yes, the strategy works, as it has saved a lot of time for the manual testers
- We’re probably not progressing fast enough on identifying manual tests that can be marked as automated
- We’re definitely not spending enough time on closing these Jira issues. Should we do an XWiki Day for that?
- I’m not sure that QA reviews new automated tests for new features/improvements/important bugs often enough to let devs know what’s missing or to add new manual tests to compensate.
Thanks