GSoC 2019: Chord Symbol Playback - Week 7
We're past halfway through the official period!
So far
I spent the last day flying and had some issues with airport wifi, so I apologize for the late blog post (I'm home now though! :)). This week, I've been full speed ahead on voicing algorithms. I have working versions of a few different algorithms, and you can see them in action here:
And here is a look at the actual notes produced by a few of the algorithms for sparse and dense chords:
This is of course not perfect, and you can see some interesting note placements here. In the video there are spots where a voicing clashes with the melody and others where the result is muddy. Certain chords are better voiced by different algorithms, which is expected. I'll polish this in the coming week.
One realization I've had is that there isn't a substantial audible difference between the different voicings. Of course, if we compare 3-note voicings to close-position voicings for a big chord there will be a difference, but in general the difference between voicings is fairly small. In any case, it was good to experiment with different ideas and see how they pan out.
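To make the idea concrete, here is a minimal sketch of what a close-position voicing could look like in code. It is only an illustration under simple assumptions (chord tones given as semitone offsets from the root, output as MIDI pitches), not the actual implementation in my branch:

```cpp
// A minimal sketch (not the real implementation) of a "close position"
// voicing: stack each chord tone in the first available octave above the
// previous note, with the bass kept an octave below the root.
// Pitches are MIDI note numbers; chord tones are semitone offsets from the root.
#include <iostream>
#include <vector>

std::vector<int> closeVoicing(int rootMidi, const std::vector<int>& chordTones)
{
    std::vector<int> notes;
    notes.push_back(rootMidi - 12);      // bass an octave below the root
    int prev = rootMidi - 1;             // start just below the root
    for (int tone : chordTones) {
        int pitch = rootMidi + (tone % 12);
        while (pitch <= prev)            // lift each tone above the previous one
            pitch += 12;
        notes.push_back(pitch);
        prev = pitch;
    }
    return notes;
}

int main()
{
    // C7 above middle C (MIDI 60): root, major third, fifth, flat seventh
    for (int p : closeVoicing(60, { 0, 4, 7, 10 }))
        std::cout << p << ' ';           // prints: 48 60 64 67 70
    std::cout << '\n';
}
```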
This week
This week the plan is to clean up the backend and revisit tests and TODOs. I believe the project is fairly functional now, but it needs to be cleaned up so that it can be expanded more easily in the future and is easier to use. I'll also need to test the code more intensively to prepare it for users. There's still a lot that can be done, but at this point I want to make sure it's presentable and clean before moving on to "non-essential" features. Finally, I will look into the auto feature, which could pick different voicings for different chords and take the melody into account when voicing a chord.
See you all next week!
Comments
Nice! I think you are right to observe that the difference in sound between voicings isn't that great, and the conclusion, I think, is that you can probably stop exploring new algorithms and instead focus on refining the ones you have. The exception is the suggestion I made about an option to automatically add appropriate extensions; that is definitely worth doing. Short version of my rule: always add a ninth, and flat it for dominants resolving down a fifth, except for II7. If you don't want to consider surrounding context (and I wouldn't blame you a bit), just use b9 for V7 as well as I7, III7, VI7, and VII7 (these are the secondary dominants for IV, vi, ii, and iii). Then, if you want another note, add a 13 for chords that include a major seventh and for dominant seventh chords - again, flatting the 13 in the same cases where you'd flat the 9. If it works better for whatever reason to take the 13 first, fine. Oh, and feel free to use #11 on the dominant chords that don't take b9/b13 (and aren't explicitly written as "sus" or 11).
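As a rough sketch, assuming the chord's scale degree in the key is already known, that rule might translate to something like the following; the function name, degree strings, and semitone offsets are purely illustrative and not part of any existing code:

```cpp
// A rough sketch of the extension rule described above; the degree strings
// and the function itself are illustrative only, not an existing API.
#include <set>
#include <string>
#include <vector>

// Returns the extensions to add, as semitone offsets above the root.
// 'degree' is the chord's Roman-numeral root in the current key ("I".."VII").
std::vector<int> autoExtensions(const std::string& degree, bool isDominant7, bool hasMajor7)
{
    // Dominants on these degrees resolve down a fifth (V7, plus the secondary
    // dominants I7, III7, VI7, and VII7), so they take b9/b13; II7 does not.
    static const std::set<std::string> flatDegrees = { "I", "III", "V", "VI", "VII" };
    const bool flatten = isDominant7 && flatDegrees.count(degree) > 0;

    std::vector<int> extensions;
    extensions.push_back(flatten ? 13 : 14);        // always add a ninth (b9 or 9)
    if (hasMajor7 || isDominant7)
        extensions.push_back(flatten ? 20 : 21);    // add a 13 (or b13) for maj7/dom7 chords
    return extensions;
}
```

For example, autoExtensions("V", true, false) would yield a b9 and b13, while a IVmaj7 chord would get a natural 9 and 13.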
BTW, if you need to make a choice, I'll take top-heavy clusters like your "A13 Close" over muddy ones like "A 4 Note" any day of the week :-). In other words, when in doubt, aim high.
Now that I have your branch building and am playing with it more, I'm sure I'll have other suggestions regarding code style etc. But I'm a fan of getting something basically working first (not perfect, just enough to convince yourself the approach is viable and hopefully get excited about it) before worrying about the more boring details, so we're right on track, I'd say!
I will be thinking this week about what I mentioned earlier about Nashville notation and Roman numeral analysis, and we can see if there is a way of refactoring the class hierarchy in a way that works for what you are doing and what I am doing.
In reply to Nice! I think you are right… by Marc Sabatella
- Adding a checkbox to the Play panel to play with/without chords would be appreciated.
- I fully support the idea of having something functional and SIMPLE rather than complex voicing algorithms; once you start refining the small details of a voicing, you are taking control back from the machine anyway.
And to take that control, rather than a complex interface for tuning chord generation, I would rather have an option to write the generated chords onto a staff and work from there manually.
In reply to Nice! I think you are right… by Marc Sabatella
Sounds good! I'll focus on refining everything + auto extend for now!
Yes I fully support that. The voicing algorithm should be simple.
I think you should always expect the user to write 7 (and/or Maj7) for chords with four or more voices (this indicates the chord type/function). *1
With the exception of the bass, there should be no more than an octave between adjacent voices.
Always add the third and the seventh, plus any altered degrees the chord specifies.
Preferably, write the bass to the additional staff (or make that an option). Then, if the chord contains many elements (for example, four or more voices), you can omit the root and the fifth in the upper staff.
*1 : For example:
In reply to I think you should always… by Ziya Mete Demircan
Maybe it's just a cultural difference, but those two points aren't really applicable in the USA. Most lead sheets I play use C9 rather than C7(9). Also, C13 for me means I should be playing at least C, Bb, E, and A, omitting the 11 (or 4), which is of course different from a C7sus4. That said, cultural differences are important to take into account. It might be worth adding a more customizable interpretation system so that we can account for users around the world, although we might not get to it until quite a bit later. I do agree with the other points, though!
In reply to Maybe it's just a cultural… by Peter Hieu Vu
If there is a cultural difference in performing these chords, it is not between the USA and another country.
It is between one music genre (or style) and another (e.g. Classical, Jazz, and Pop).
I'm just passing along the information.
It is up to you to evaluate and decide.