Score with countless Smart Layout problems in 3.0-alpha

• Sep 23, 2018 - 22:36

The attached score, created in MuseScore 2, exhibits many serious problems when it is opened with 3.0-alpha and all positions are reset. I believe it may be a useful test case, revealing multiple examples of where Smart Layout can go haywire.

These are serious problems: not only cases where Smart Layout fails to fix collisions (such as dynamics overlapping barlines), but also cases where it imagines collisions that don't exist and pushes things out of line. (It is worth noting in passing that Format / Style / Score / Autoplace / Vertical align range clearly doesn't work.)

I hope the developers will mine this score for the treasure trove of layout bugs it reveals—and, most importantly, not ignore them! The new team seems to care a lot about making releases quickly. Quality control must be the higher priority. Don't set a beta deadline you can't meet (the beta is supposed to work, too: https://simplyrobert.wordpress.com/2015/02/17/musescore-2-0-beta/).

Don't leave the layout bugs unfixed.

Attachment Size
Excellence.mscz 498.08 KB

Comments

Isaac, thank you for reporting this score. Btw, the score looks almost completely correct in current master (2018-09-24). That is why I want to update the package available under the alpha label.
I see a few inconsistencies with long dynamic names, staff distances are sometimes incorrect, and voltas seem to import their positioning values from 2.3.2 incorrectly. All other elements are positioned correctly.
So, could you please describe in detail the inconsistencies you find with the latest nightly?

To sum up, current master is definitely better than 3.0-alpha, so I will update the package under the link on the download page. We are not hurrying, but we do have issues to work on in order to meet deadlines. Your help is appreciated and helps us do more with better quality!

In reply to by Anatoly-os

We don't hurry, but we have issues we should work on to meet deadlines.

Deadline... Sounds like we can look forward to the continued downward spiral of quality that has occurred since Ultimate Guitar took over MuseScore. Your statement is self-contradictory: either you don't hurry or you have a deadline; you can't have it both ways.

Previously, someone would ask when the next version would come out, and we would tell them "when it's ready." And when it came out, it had few bugs that weren't obscure or that didn't require an odd sequence of events to reproduce.

Now we can tell them that something, usable or not, will come out on the date Anatoly-os has predetermined (or been told; it makes no difference to us customers). This is the corporate mentality I was concerned about. In an open source community project, meeting deadlines does nothing but force you to push something out even if it has not been properly tested and fixed, as happened starting with version 2.2. The "emergency" releases that came out quickly after 2.2 and 2.3 are evidence of this.

I want MuseScore to be a quality product. I use it all the time, and I very much look forward to version 3 coming out. However, I can wait for the long list of reported bugs and the longer list of not-yet-reported bugs to be fixed. I can wait as long as it takes to get it right. This used to be the mentality of the people driving the project, and I hope that mentality will return. I realize this is an alpha release, but it still needs a lot of work.

In reply to by mike320

Thank you for your input, Mike. I want to share my vision of how we keep quality high. I hope it will make things as transparent as possible.

An ideal situation is one in which the software has an automated test infrastructure and a comprehensive set of tests that run automatically and show the actual state of the application. All tests cover as many user scenarios as possible and flag the changes happening during development.

MuseScore is not in that ideal situation. The trade-off is shifted toward performance and speed of development, which allows changes to be introduced relatively quickly. In the meantime, the question of MuseScore's quality relies almost entirely on the community's expertise. MuseScore's community consists of expert users and software developers who love the product and want to make it the number one music notation software in the world. That is why the testing process relies fully on feedback.

The only way we have to estimate the quality of the product is the number of issues in the tracker. That is why I suggest a transparent and clear way of estimating the quality of the MuseScore Editor: we count the percentage of active issues and bug reports opened under version 3.0-dev. We have about 19% of such issues right now. To me, quality means that we are ready to make an official public release when there are 0 (zero) critical issues, less than 0.5% major issues, and less than 1% "normal" issues. When there are 0 (zero) critical issues, less than 2% major issues, and less than 5% "normal" issues, we are ready to make an official beta release.
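For concreteness, the thresholds described above can be sketched as a tiny decision function. This is purely illustrative; the function name and structure are my own, not actual project tooling:

```python
# Sketch of the proposed release-readiness thresholds (illustrative only,
# not part of any real MuseScore infrastructure).

def readiness(critical_pct, major_pct, normal_pct):
    """Return which release stage the issue metrics permit.

    Each argument is the share (in percent) of active 3.0-dev issues
    of that severity.
    """
    if critical_pct == 0 and major_pct < 0.5 and normal_pct < 1.0:
        return "official release"
    if critical_pct == 0 and major_pct < 2.0 and normal_pct < 5.0:
        return "beta release"
    return "not ready"

print(readiness(0, 0.4, 0.9))   # official release
print(readiness(0, 1.5, 4.0))   # beta release
print(readiness(1, 0.1, 0.2))   # not ready: any critical issue blocks both
```

Note that under this scheme a single critical issue blocks both the beta and the official release, regardless of the other percentages.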

Making releases attracts people to try the software, test it, report bugs, and contribute to the project. The more issues created in the tracker, the more issues get fixed by contributors and the core team. This approach will cause the metrics to rise while users are testing actively, which is good because it shows the real state of the software.

This is the only criterion we use to prepare releases. It is clear and transparent. I'm sharing the current state of the metrics in the following picture. Please take a look at the proposed quality assurance percentage system and let me know whether you think it lets us achieve better product quality.

2018-09-24_issues_status.PNG

In reply to by Anatoly-os

For the record, I am a big fan of MuseScore and have been for a few years, since I discovered version 2.0.3. I contribute in the only way I know how to make it the best music notation software in the world. In many people's opinions it's better than Finale and Sibelius. But it is neither of those programs, so you will never get everyone to think it's better; I've never used either myself.

First of all, as you should know very well, I have been testing the alpha release and giving feedback. I open issues when I'm reasonably sure they are not already reported, and make comments in the forum when I'm not sure. That is usually because I remember seeing an issue report on the subject but don't remember which words to use to find it. Face it, the search feature on this site is sorry. Discussion on those comments would be most welcome. There are, no doubt, some major issues that I don't know how to reproduce on demand. I don't want my issues to be put into "Needs more info" status and forgotten, which is the usual result.

I did very little programming in C 30 years ago, so I remember some things, but I'm in no condition to learn C++ to help in development, so I contribute the best way I can.

Automated testing can only go so far, as you have stated. The ultimate test is to have a human try to break the software and make sense of it. Music being its own language, as you said, greatly limits any automated testing you can do. Getting testers to actually post results on the site is an entirely different problem. I monitor several of the language forums besides the ones where I have seen you, and there is very little feedback from testing on the non-English forums. Actually, apart from the issues that are mostly created by the usual users, few people actually create bug reports. I don't believe these testers are finding no bugs; I believe that for some reason they are simply not reporting them, either in their own forum or in the issue tracker.

As far as the critical and major categories are concerned, perhaps there needs to be some rethinking or clarification. Here's how I loosely classify them. If it crashes, it's critical, and there's no gray area. If a problem makes the score unusable, it is at least major, and becomes critical if you can't open the score at all; if an average user can easily work around the problem, it stays major, otherwise it is critical. If it has to do with how items are displayed, it is typically normal, but I can see it becoming major if, for a hypothetical example, the notes are displayed on the wrong line or some similar situation. If the problem requires a huge zoom to see, I consider it minor; otherwise it's normal. If something just doesn't work the way it's supposed to, I decide between minor, normal, or critical based on how easily you can accomplish the same thing another way. If I don't know what else to call it, it's normal.
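The classification above amounts to a small decision procedure. As a purely illustrative sketch (the function and parameter names are my own, not anything in the actual tracker or its policy):

```python
# Loose severity triage sketched from the rules in the comment above.
# Illustrative only; not the tracker's actual policy or code.

def triage(crashes=False, score_unopenable=False, score_unusable=False,
           easy_workaround=True, display_only=False, needs_huge_zoom=False):
    if crashes or score_unopenable:
        return "critical"   # crash or can't open the score: no gray area
    if score_unusable:
        # Unusable score is at least major; no easy workaround makes it critical.
        return "major" if easy_workaround else "critical"
    if display_only:
        # Display-only problems are normal, or minor if a huge zoom is needed.
        return "minor" if needs_huge_zoom else "normal"
    return "normal"         # default bucket when nothing else fits

print(triage(crashes=True))                                   # critical
print(triage(score_unusable=True))                            # major
print(triage(score_unusable=True, easy_workaround=False))     # critical
print(triage(display_only=True, needs_huge_zoom=True))        # minor
```

The sketch omits the "could become major if notes land on the wrong line" escalation, which requires human judgment rather than a flag.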

There are things you, as the program manager, need to do to ensure these are accurately tracked. First, don't leave an issue in a status with the word "patch" in it. You need to either accept or reject the patch, explain the problem if it's rejected (either in the issue tracker or on GitHub, whichever is more appropriate), and change the status to an appropriate state (like active, needs info, won't fix, or by design). There are some (looking only at 3.0/master) that have been in such a state for two years, and some of them are major or critical. Does anyone know whether these still need to be fixed?

You need to look at the 2.x issues and decide whether they apply to 3.0. If not, they need to be closed; otherwise, they need to be fixed by someone. Bugs from 2.1 and earlier should not exist in 3.0. Bugs from 2.2 and later should probably also be fixed in version 3, but it is a bit more understandable for those to still be open. I have noticed that you have been working on this.

You have a paid staff that works on some of the programming for 3.0. It seems that for the most part they do not create issues; they simply program, and that's fine. They obviously work on what you want them to work on. If there is an issue you would like a volunteer to look at, perhaps you could post a request for someone to look at it, and no doubt someone will respond that they are on it. You have volunteers who put in a lot of work and take on major projects, like Figured Bass, much of the tablature, many of the accessibility features, and currently revamping the jump system and creating a shamisen tablature, among others.

If you are using the same metrics you used to release versions 2.2 and 2.3, then you need to rethink your metrics. There's something you're not taking into consideration. I was honestly shocked when those were pushed out. I won't say more than that your last iOS release appears to have been a total disaster, and people pay for that. Ultimate Guitar is also responsible for both the .com and .org sites. Support for these has been far inferior to when a single person was responsible for most of the support for both sites. The same bugs, from a user's point of view, seem to come and go constantly, especially on the .com site.

If there is something I can do to help further improve MuseScore, do not hesitate to ask. I'll let you know if I'm capable of fulfilling your request.

Speaking as a volunteer contributor, I think it is safe to say that everybody wants each release of MuseScore to be the most stable release ever. And it is true that not enough care has been taken recently to make sure that this goal is met. I don’t know anything about any deadlines, but I do understand the desire to periodically bring the latest and greatest MuseScore to the public in the form of a new release.

We are constantly finding and fixing bugs in the code. Some of these bugs have been present from the beginning, and others have been introduced more recently. Sometimes a bug fix has unforeseen and unintended side effects, which results in a decrease in the stability of the software. When this is discovered, we attempt to fix it right away. An “emergency” release is an indication that there was a last-minute change to the code just before the previous release that did more harm than good. There is usually a “code freeze” period before the release that is intended to prevent such last-minute changes from happening.

I was sad when I learned that there would not be any more 2.x releases to address the issues that are present in 2.3.2. I did not think that 3.0 was anywhere near ready to call an end to development on 2.x. But it has forced us to focus our attention on 3.0, and we have made much progress since. I still think that MuseScore 2 is a great product, and it still deserves to be maintained. This involves continuing to issue releases that contain bug fixes that increase the stability of the software.

In reply to by mattmcclinch

A quick 'emergency' fix (like 2.3.2) for a minor release (like 2.3) is not a problem at all, IMHO. On the contrary: IIRC, after 2.1 we had a huge number of bug reports regarding some marching band bug (I've forgotten the details; I just remember being very puzzled about how many users apparently score for marching bands), and there was no fix for it until 2.2, about a year later.
And there's always the option for users to go back to the previous release, without losing any score.
And, again IIRC, 2.1 was already supposed to be the last 2.x version (at first it was planned to be named 2.0.4, while master was supposed to become 2.1). As was 2.2...

For a major release though we indeed need to be more careful, as the way back to a previous release, for scores created with the new version, is blocked. And that is exactly why an Alpha release was put out, and Beta releases will be put out, until we can be reasonably sure to not have any major bug left.
But there will never be a 100% bug free release...

In reply to by Jojo-Schmitz

To me, the fact that there was no "emergency" release a week after 2.1 to fix the several extremely critical bugs that were reported almost instantly is nothing to be proud of or hold up as a model of what we should strive for. The fact that we did have these quick point releases after 2.2 and 2.3 to address bugs is, to me, a sign of responsiveness and a Good Thing. I'd rather see significant regressions fixed immediately than languish for a year, and I suspect most other users would as well. So to me, the fact that we have been moving in this direction is one of the best things to have come from the additional resources now available.

In reply to by Marc Sabatella

I completely agree. Many developers regularly push out updates to their apps. These updates do not necessarily bring new features, but do resolve issues that occur in the previous version. Whenever my apps update, I am encouraged that the developers care enough to deliver bug fixes to their users. This is the model that I would like to see MuseScore adopt, and it looks like we are indeed moving in that direction.

In reply to by Anatoly-os

You either pushed 2.3 out too fast, or you decided that the bug was unacceptable and were forced to release a version you had no intention of releasing. In either case, that makes 2.3.1 and 2.3.2 emergency releases. What is very clear is that when UG took over MuseScore, their priority was to close the door on version 2 and get version 3 released.

In reply to by mike320

That is not true. Please look through the forum history from before the acquisition and you will see that 2.2 was scheduled as the last step in the 2.x releases. Btw, UG initiated development to push forward notation and playback for percussion; that's why we released 2.3.

Let's concentrate on making 3.0 alive and great. Discussing the past in such a way doesn't help much :)
