Test script facility: a new approach to testing and reporting bugs
Hello everyone.
We are pleased to announce the new testing facility, which allows recording, replaying, and automatically running tests on Travis. The new facility is based on a self-developed script engine which stores user actions in a human-readable format. The new engine makes creating tests easier for both developers and testers. Moreover, it extends the number of scenarios which can be captured, recorded, and automated on Travis.
Quick tour
We recorded a video to show you a quick tour of the new testing facility: https://youtu.be/Ua35MB0qnRc.
How to add a new test to the repository (developers)
- Open script recorder tool (Tools → Script Recorder)
- Change current directory to /mtest/testscript/scripts
- Open a file from the testscript/scripts/init folder (if the file you need is not there yet, just put it there)
- Enter a name for the script (without an extension, e.g. “quarter_note_input_failed”)
- Do necessary operations in the score (see “Limitations of the script engine” section)
- Press “Stop recording”
- .script and .mscx files should appear in the scripts/ directory. They are required by the automated testing facility, so add them to the Git repository (along with the init file, if you added a new one).
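The commit step above can be sketched as follows. The file names here are hypothetical examples (a script called “quarter_note_input” recorded against an init file “empty_score.mscx”), and the demonstration uses a throwaway repository rather than an actual MuseScore checkout:

```shell
# Create a throwaway repo standing in for a MuseScore checkout.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
mkdir -p mtest/testscript/scripts/init

# Placeholders standing in for the files the script recorder produced:
# the init score, the recorded .script, and the resulting .mscx.
touch mtest/testscript/scripts/init/empty_score.mscx
touch mtest/testscript/scripts/quarter_note_input.script
touch mtest/testscript/scripts/quarter_note_input.mscx

# Stage all three so they end up in the pull request.
git add mtest/testscript/scripts
git status --short
```

In a real checkout you would of course skip the `git init` and `touch` lines and simply `git add` the files the recorder wrote.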
How to use scripts when reporting bugs (testers)
- Open script recorder tool (Tools → Script Recorder)
- Open the file which is affected by the issue
- Choose any folder where you want to write the script and enter a name.
- Press “Record”
- Do the operations to reproduce the issue (see “Limitations of the script engine” section)
- Press “Stop recording” (not needed if the issue is a crash; the script will still be written in that case)
- Attach an archive containing the original file and the “.script” file which appeared in the chosen folder to the issue description. If you want to demonstrate the final state of the score, you can also attach the corresponding “.mscx” file (found alongside the “.script” file, provided there was no crash during recording)
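Bundling the attachment can be done with any archiver; a minimal sketch using `tar`, with hypothetical file names (the `touch` lines merely create stand-ins for the real score and script):

```shell
# Work in a scratch directory with stand-in files.
tmp=$(mktemp -d); cd "$tmp"
touch broken_score.mscx crash_repro.script   # stand-ins for the real files

# Bundle the original score and the recorded script into one archive
# suitable for attaching to the issue.
tar czf issue_attachment.tar.gz broken_score.mscx crash_repro.script

# List the archive contents to verify both files made it in.
tar tzf issue_attachment.tar.gz
```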
Limitations of the script engine
Currently, we support the following actions in the scripts:
- keyboard shortcuts
- menu actions which do not open additional dialogs
We are planning to improve the scripting facility to support applying palette elements to score elements and to capture properties changed via the Inspector panel. Text editing is also not recorded yet.
The new facility works alongside the mtests and vtests frameworks, but significantly simplifies writing tests.
We need your feedback
- Use test scripts each time you report issues.
- Add tests to your Pull Requests to lock in fixed behavior.
Please let us know about any inconsistencies you encounter while using the tool. Follow this thread to be notified about the improvements we are introducing to the facility.
Comments
This sounds very exciting for development! Just curious, would there be any way to leverage this to create any sort of user-accessible "macro" facility?
In reply to This sounds very exciting… by Marc Sabatella
Yes, I see no reason why this could not be made available to users one day if there is such a need. However, this script-recording framework still lacks support for most user actions, and some other issues are currently being set aside to speed up its development. It is useful for development purposes right now, but it will hardly fit a regular user's needs yet.
In reply to Yes, I see no reasons why… by dmitrio95
Sure, just thinking aloud about future possibilities :-)
I've discovered that you can run these test scripts from your IDE. First build a debug version of MuseScore, then:
for QtCreator: in the Projects tab -> Run -> Run Configuration, set "Command line arguments" to:
--run-test-script {name of your test script}
and set the working directory to your MuseScore source directory plus mtest/testscript/scripts
for MSVC: right-click mscore in the Solution Explorer and choose Properties. Under "Configuration Properties" -> "Debugging", similarly set "Command Arguments" to
--run-test-script {name of your test script}
and the "Working Directory" to your MuseScore dir plusmtest\testscript\scripts
This way, when investigating a bug, you can create a test script that automatically runs the steps to hit the bug. Then you can set a breakpoint near where the bug occurs and step through the code to see exactly what is happening.
In reply to I've discovered that can run… by ericfontainejazz
If this is the new way going forward, could we migrate Travis-CI to use these scripts instead of the older mtests?
In reply to If this is the new way going… by Michael Froelich
All new scripts are run during the testing step along with the mtests. Choose whichever way fits your changes and add an mtest or a script.
Mtests are mainly unit tests, used to check data structures or the behaviour of particular classes.
Scripts, by contrast, are purely integration tests, covering interactions between different classes and GUI scenarios.
I had no idea this existed until now. Maybe we should have more stickies? It seems like the kind of info new people coming to the forum would want to have, and that returning people would want to reference.
In reply to I had no idea this existed… by Laurelin
Right, it could go into the developer handbook then? https://musescore.org/en/handbook/developers-handbook
In reply to Right, it could go to the… by [DELETED] 5
Oops, I remember one guy asked me about appending the info to the page... Okay, he couldn't handle it.
Hi!
I added a script test and tried to build several times, but all builds failed with the same error:
https://travis-ci.org/musescore/MuseScore/jobs/613114344?utm_medium=not…
"No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#build-times-out-…"
Is this because of me or really the build settings? The PR is https://github.com/musescore/MuseScore/pull/5475.
In reply to Hi! I added a script test… by Howard-C
Hello! I have answered in a comment to the pull request.