Automatic panning - interesting effect!

• Aug 29, 2023 - 13:35

This suggestion is probably too far out for immediate attention, but might be of future interest.

I was trying to test out some new sample libraries - and one was a 909 sample emulation from Samplescience - recently on offer.

I took one of my files, assigned the 909 to the trombone part, and listened to see what would happen. Turns out the library does work - and there are some reasonable instruments - various drums, toms and cymbals. Partly I bought this library because it was cheap and I wanted to see what the interface is like.

While testing - with headphones on - I wanted to check the effect of Left/Right panning, so changed the pan settings while the music was playing. The effect was definitely of an instrumentalist moving around in front of me.

In the synth world, in which some parameters are controlled by LFOs, it would probably be possible to get this done automatically. It would be great if MuseScore - or a future version - could have a feature such as LFO controlled panning. Then one could have whole orchestras moving around in different random or sinusoidal etc. patterns.
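For anyone curious what that could look like under the hood, here is a rough Python sketch - not a MuseScore feature, just an illustration - of a sine LFO sweeping a mono track across the stereo field with an equal-power pan law. The numpy and soundfile libraries are assumed, and the file names are placeholders.

```python
# Rough sketch of LFO-driven panning (not a MuseScore feature): a sine LFO
# sweeps a mono signal between the left and right channels using an
# equal-power pan law. File names are placeholders.
import numpy as np
import soundfile as sf  # assumed third-party dependency

mono, sr = sf.read("trombone.wav")          # placeholder input file
if mono.ndim > 1:
    mono = mono.mean(axis=1)                # fold to mono if needed

lfo_hz = 0.25                               # one full left-right cycle every 4 s
t = np.arange(len(mono)) / sr
pan = np.sin(2 * np.pi * lfo_hz * t)        # -1 = hard left, +1 = hard right

theta = (pan + 1) * np.pi / 4               # map pan to 0..pi/2
left = mono * np.cos(theta)                 # equal-power gains
right = mono * np.sin(theta)

sf.write("trombone_panned.wav", np.column_stack([left, right]), sr)
```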

Ah well - probably nobody is interested, but I think it would be fun!


Comments

Musescore lacks so many of the very basic features of good music software that automated panning is far, far down the list of features that would be of interest to me. Of course, as the Musescore developers have stated elsewhere, Musescore's goal is to produce simple tools for the "average" non-musician. Anyone who aspires to be a musician, or at least, better than "average" is going to choose a stronger package, of which there are many, for their endeavors. Most DAWs, and even other notation software packages, already allow you to automate parameters, including panning if you wish to do so.

In reply to by karolcpm

Panning is not my thing. Most of the libraries that I use have the panning and ambiance recorded in the samples so I seldom mess with the panning. If I am using samples that do not have any spatial reference, then I may use a slight bit of pan to separate the instruments, but there is no need to automate that.

If I were writing for antiphonal ensembles, or perhaps wanted to simulate a performance where spatial movement was an important aspect, then I can see how panning could be a more important aspect. More sophisticated applications allow for "stage" position as well as simple left-right pans.

In reply to by bobjp

So to speak, yes. Spitfire's BBCSO libraries, both the free Discovery and the Core version, position each section according to the setup in their studio. So the first violins are on the left, the basses on the right, etc. They record the studio ambiance as well so, although there are adjustments for pan and reverb, I do not find that I need to use them. If anything, I find them a bit too wet.

Reaper has some very sophisticated "sound stage" controls. You can group instruments and then place that group where you want it on the sound stage. For example, if I am doing a pop arrangement that has a rhythm section, a horn section, a solo vocal, and backup vocals, I can place the solo vocal front center stage, then create a group for the horns with the tenor in the middle, the trumpet to the left of the tenor, and the trombone to the right of the tenor. I can create a rhythm section group with the bass in the center, the drumset to the left of the bass, and the piano to the right of the bass. I can then move the horn section in relation to the rhythm section. Say, place the horns behind the rhythm section and the vocals in front of the rhythm section, or to any other place on the sound stage. So you are not limited to moving sections only right and left, you can move them forward and back on the stage.

Dorico has the same sort of adjustments and I imagine that Cubase and other DAWs offer something similar. You can get very creative with your spatial arrangement if you choose to do so. I can see it coming in handy if you are writing for antiphonal choirs, etc.

For simple, run-of-the-mill arrangements, about the only time I fool with the pan adjustment is when digital artifacts mess with the timbre of the instruments. For instance, if you are writing for a brass quintet, the 1st and 2nd trumpet parts playing at the same time can create digital artifacts that would not occur with actual trumpets. Sometimes I can play with the pan controls and put a bit of separation between the instruments. It sometimes helps but not always.

In reply to by Jim Ivy

I had BBC for a short time. I got rid of it because the string sections tended to swell on consecutive half notes. I do remember the setup showing different placement ideas. But I didn't keep it long enough to hear if the first violins were on the left.

In reply to by bobjp

The BBCSO Discovery strings are very much like the Muse Sounds strings. In fact, Musescore got some of their samples from Spitfire and I would not be surprised to find that they are the same samples. But, of course, the end product will be different. The Spitfire libraries are not limited and will work in DAWs and other notation programs while the Musescore library works only with MU4.

The more advanced Spitfire packages allow you to control the attack so that you have more flexibility. The advantage of the Spitfire products are immediately apparent in more advanced notation packages like Dorico. After working in Dorico, I have trouble going back to Musescore even though some things are actually easier in Musescore.

In reply to by Jim Ivy

Yes, Muse strings tend to do the same thing. Which is why I use viola sounds for violins, on occasion. Gasp!

I have Sibelius. There are far more variations of sounds available. And slurs work. And note lengths are definable. As well as stage depth. But sounds tend to be slightly dull. For example, when I first got MU4 working, I loaded a score from Sibelius for orchestra that had a big, big climax. I was surprised how powerful MU4 sounded. Far more so than Sibelius. That doesn't make one better than the other. Just different.

In reply to by bobjp

It has been a while since I used Sibelius. I know that it ships with a Sibelius sound set but will also use the Garritan Personal Orchestra or general midi. Which are you using?

I have been thinking about getting the Garritan Jazz and Big Band set which will also work with Musescore 3 and 4.

In reply to by bobjp

I tend not to get rid of things - unless I'm really short of space. You probably mean the BBC SO Discovery package - the other BBC Spitfire packages may not have the issue you mention, but cost more. The Steinberg HALion orchestra pack - currently on sale at a lower than regular price - is not necessarily any better, but does have some neat features. The instruments in that collection, like some others, also seem often to be ensemble instruments, rather than solo. I use the Spitfire BBC SO set as well as MU4 - layering is always possible.

Regarding panning, I notice that some people have suggested taking several solo patches, detuning them slightly relative to each other, and panning them to slightly different locations in order to get an ensemble effect. That is possible in DAWs, and also in the Steinberg HALion plugin which will also work with MU4.
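As a rough illustration of that detune-and-spread idea (only a sketch, not how any particular plugin does it - numpy and soundfile are assumed, the file names are placeholders, and crude resampling stands in for a proper pitch shifter):

```python
# Sketch of the "detune and spread" trick: make a few slightly detuned copies
# of a solo sample and pan each one to a different position. Detuning is done
# here by crude resampling; a real implementation would use a proper pitch shifter.
import numpy as np
import soundfile as sf  # assumed dependency

solo, sr = sf.read("solo_violin.wav")        # placeholder mono sample
if solo.ndim > 1:
    solo = solo.mean(axis=1)

cents = [-7, 0, 6]                           # small detune amount per copy
pans = [-0.5, 0.0, 0.5]                      # -1 = hard left, +1 = hard right
mix = np.zeros((len(solo), 2))

for c, p in zip(cents, pans):
    ratio = 2 ** (c / 1200)                  # cents -> playback-rate ratio
    idx = np.arange(0, len(solo), ratio)
    copy = np.interp(idx, np.arange(len(solo)), solo)[: len(solo)]
    theta = (p + 1) * np.pi / 4              # equal-power pan law
    mix[: len(copy), 0] += copy * np.cos(theta)
    mix[: len(copy), 1] += copy * np.sin(theta)

sf.write("ensemble_ish.wav", mix / len(cents), sr)
```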

In reply to by dave2020X

You can also do that with the Plogue Sforzando and Aria players, but the problem is that there are not a lot of good solo instrument patches to work with. So it is really a moot point.

Ensemble patches are much more common than solo patches. I am not sure why that is. Perhaps a solo instrument is more difficult to record and requires more playing techniques.

I am still using the SWAM solo strings, but they are difficult for me. They are wonderful but it takes some good keyboard technique, which I do not have. Someone who can really use the keyboard controls can play them so that most folks cannot tell that they are not actual string instruments. But for me, I sound like a beginning string player just learning how to use the bow! The French Violin does a pretty good job when I need a solo violin. It is not perfect, but it is better than using an ensemble patch when you need a solo violin.

In reply to by dave2020X

The "several solo patches" idea sounds good on the surface. But players don't play the same in a solo situation as they do when they are in an ensemble situation.

I'm not really interested in paying for anything. I could, but I can't really justify it to myself. My Sibelius is almost 10 years old and was bought by my wife when she was a music teacher. I've been trying to learn MuseScore because I suspect that Sibelius won't run on some future version of Windows. I just got it to run on my W11 machine. There are things I like about both programs. For one, Sibelius has an excellent PDF reader. MU4 orchestra sounds tend to have a bit more punch. But the drum palette drives me crazy. No matter how I edit it. This is going to sound funny, but I only write for playback, and just as a hobby. So no DAW. No paid anything. Even so, I am surprised at the results I get. I've learned to try to get the best out of what I have to work with.

In reply to by bobjp

If you are writing for playback only, and not using a DAW, then you are missing out. If you are a Windows person, and want a free DAW, there are several available. Cakewalk is free and one of the oldest DAWs out there. I was using Cakewalk back in the 80s and it still gets the job done today. You can do things with Cakewalk that you will never be able to do in Musescore. If you like to work in notation, Cakewalk will do it. It is not good for printing parts for live musicians, but it will even do that if absolutely necessary. Or you can export from Cakewalk and open the file in Musescore and have both. Another good choice no matter what OS you use is Reaper. Reaper is not free, but it is very reasonable and has an unlimited, fully functional trial period. With either Cakewalk or Reaper you would be able to do things you can only dream about using Musescore 4.

In reply to by Jim Ivy

Yes I know. The problem I have is the learning curve with the PRE. It makes no sense to me; I'm not interested in endless possibilities for adjusting notes. Everyone's idea of good playback is different. I just want sounds that respond to the notation I write, and that don't try to force their own interpretation on that notation. Slurs are a concern for me. I don't use them anymore because MuseScore doesn't read them. Solo instruments in MU4 tend to be sloppy. But I spend so much time trying to adjust Sine instruments (and still not getting the simple results I want) that it is hardly worth it. And on, and on.

In reply to by bobjp

That is where we have different approaches. I approach creating an audio file the same way I approach working with live musicians. Any ensemble can start the notes together, but the thing that makes a first-rate ensemble is how they handle the endings of the notes. So I am very exacting about what happens at the end and between the notes. That is one reason I do not use MU4. With MU3, or a DAW, slurs and legato styles are easy to simulate and I am perfectly willing to take the time and make the effort to get as close as possible to the sound I have in my head. And, yes, everyone's interpretation is different, but the only interpretation that I want to achieve is my own interpretation. Someone else's interpretation may be as valid, or better, than my interpretation, but I have a specific idea in mind and that is what I want to produce.

I have not tried it, but you might be able to get the slur effect in MU4 by increasing the gatetime for the default articulation from the standard 100 to 101 or 102. That might work with some instruments but not with others depending upon the attack envelope.

In reply to by bobjp

I agree that using solo patches to try to get the impression of more players might not always work - and one of the reasons is indeed that solo players tend to play differently, though it's perhaps not the only reason.

I guess it works sometimes - otherwise people would discover pretty quickly that it doesn't work. It may also depend on the instruments - perhaps brass instruments would fare better than strings - so if you wanted a fanfare ....

In reply to by dave2020X

I was surprised by the comments re "not doing panning" - and that at least in the Spitfire libraries the panning is "built in". So I did some very quick, and not very accurate or scientific tests. It does indeed seem that there is a very slight spatial bias in the Spitfire libraries, with violins to the left indicated by a slightly higher level on the LH channel. I haven't looked at waveforms to discover whether there are any timing features - delays etc.
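For anyone who wants to repeat that quick level check on their own exports, here is a minimal sketch (numpy and soundfile assumed, placeholder file name):

```python
# A quick, unscientific version of the level check described above: compare
# RMS levels of the left and right channels of an exported stereo file.
import numpy as np
import soundfile as sf  # assumed dependency

audio, sr = sf.read("violins_export.wav")   # placeholder stereo export
left, right = audio[:, 0], audio[:, 1]

def rms(x):
    return np.sqrt(np.mean(x ** 2))

bias_db = 20 * np.log10(rms(left) / rms(right))
print(f"L/R level bias: {bias_db:+.2f} dB")  # positive = louder on the left
```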

However, whatever effect is baked in is small, and instruments can be repositioned by volume panning. Other methods may also give better effects - if anyone wants to tinker.

If it is desired to shift the violins [or at least the 2nd violins] to the right - which some conductors have preferred in the past - it may make sense to swap the channels for the violins [L->R and R->L] - and also apply volume based panning, otherwise any in-built pan effect from the Spitfire libraries will be fighting against the quite strong shifts due to levels coming from Musescore. DAW users may find other ways of panning and getting better spatial localisation.

Some people may want to convert individual instruments to mono only, and then apply whatever panning approach they like.
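A sketch of both repositioning options on an exported stereo stem - again just an illustration, with assumed libraries and placeholder file names:

```python
# Sketch of the repositioning ideas above: swap L and R on an exported stem,
# or fold it to mono and re-pan it with a simple equal-power law.
import numpy as np
import soundfile as sf  # assumed dependency

stem, sr = sf.read("violins_stem.wav")      # placeholder stereo stem

# 1) Swap channels (L->R, R->L) to mirror any baked-in positioning.
sf.write("violins_swapped.wav", stem[:, ::-1], sr)

# 2) Fold to mono, then place it wherever you like.
mono = stem.mean(axis=1)
pan = 0.6                                    # -1 = hard left, +1 = hard right
theta = (pan + 1) * np.pi / 4
sf.write("violins_repanned.wav",
         np.column_stack([mono * np.cos(theta), mono * np.sin(theta)]), sr)
```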

Probably most of us won't bother, as we only want something approximate anyway.

In reply to by dave2020X

Well, the thing is that when you buy Spitfire samples you are paying for the room, the microphones and the engineers with years of experience recording and mixing orchestras. It would be a bit insane to throw all of that expertise out the window and start screwing around with the panning. It would be sort of like buying a new Mercedes and then having some local teenager weld on fenders from the junkyard! You could do that, but why? The Spitfire libraries have plenty of adjustments, but when you choose a Spitfire product you are specifically choosing a specific sound of a specific room. Spitfire's literature makes a big deal about the room where the samples are recorded. Their goal is to furnish samples that sound as though they were recorded in a specific room at a specific studio. They are not trying to create generic samples that could have been recorded in someone's clothes closet and that you need to process to death to have a decent sound.

In reply to by Jim Ivy

I ran a correlation meter test on some excerpts played on Spitfire BBC SO instruments. So far there has been no indication of any serious difference between the L and R sides of a stereo source, so I am sceptical of a view that they are already panned into position. They can of course be panned using level panning in tools such as the Logic DAW. I'll install a vectorscope or goniometer to see if there are any more indications of built-in spatial attributes in Spitfire's libraries.
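For reference, a minimal software stand-in for a correlation meter is just the correlation of the two channels - a sketch, assuming numpy and soundfile and a placeholder file name:

```python
# Minimal software stand-in for a correlation meter: Pearson correlation of the
# two channels. +1 = effectively mono, 0 = decorrelated, negative = out of phase.
import numpy as np
import soundfile as sf  # assumed dependency

audio, sr = sf.read("bbcso_excerpt.wav")    # placeholder stereo excerpt
left, right = audio[:, 0], audio[:, 1]
corr = np.corrcoef(left, right)[0, 1]
print(f"L/R correlation: {corr:+.3f}")
```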

In reply to by dave2020X

I would think that you would be able to hear the difference with your ears. I can certainly hear it.
Here is an mp3 of BBCSO Discovery that clearly allows you to hear that some sections are on the right of the stage and others on the left of the stage. NOTE: This is an mp3 file but I had to use the txt extension because this forum will not allow mp3 files. Download the file and change the extension to mp3 and you should be able to hear it. Can You Hear the Difference-.txt

You might be interested in the way that Dorico handles stage positioning. Check this out:
https://steinberg.help/dorico_pro/v5/en/dorico/topics/mixer/mixer_edit_…

In reply to by Jim Ivy

I can certainly hear the spatial effects in your example. I also can see the effects using a correlation meter. What I'm not sure about is whether the effects are coming from the Spitfire library, or from something you have set up in Dorico.

I didn't get anything like that pronounced effect for a simple test of the Spitfire library. If there is an effect from the library presumably it should switch over if the L-R channels are swapped on either individual instruments or the master channel if the BBC SO library is used. What you are getting is definitely more pronounced. I'm not disagreeing with that, but we need to be sure of the reason for that.

In reply to by dave2020X

Well, I made certain that it was not something added by Dorico by creating a blank template that has no panning whatsoever. You can do the same test in MU4 or Logic. The horns are on the far left, the tuba and bass trombone on the far right. The first violins are front left and the trumpets, trombones, and bass trombone are on the right. When you assign the instruments in BBCSO Discovery, it shows a seating chart, so it is fairly easy to write a couple of measures for the horns, followed by a couple of measures for the tuba, etc.

I did not keep the test I set up, but you can easily reproduce it in either MU4 or Logic. Incidentally the orchestra sounds you just bought from Steinberg do the same thing - violin 1 on the left, etc.

In reply to by Jim Ivy

Well, against my better judgment, I broke out and updated my old laptop that has the free BBC sounds. They continue to remind me why I haven't included them on my new computer. Yes there is a little panning in the setup. But I don't always like the choices they make. I prefer 2nd violins on the right. Plus, I just can't totally get rid of the highly un-musical string swells. Even some of the brass have strange artifacts on longer notes. I think it was the trombones.

Sibelius will often have sounds labeled 1 or 2. One being recorded close to the mic and two being recorded farther away.

As to why, if I'm writing for playback, I don't buy the things that would make that happen. I enjoy the challenge. Plus it makes me really delve into the software. Does it affect what I write? Sure. But look at it this way. Years ago I had to write at the piano (I'm not a piano player). Then try to guess at some orchestration. And never really hear any of my work anyway. I don't happen to own an orchestra. So I write in software and tweak it the best I can. Perfect? No. But neither is a live performance. Though, of course I would prefer live.

In reply to by bobjp

I know the "swell" you are talking about. It's because strings have a slow attack time. There is a "tightness" adjustment that takes care of that when you need a faster attack. There is also a muse-fx plugin that helps quite a bit but I don't remember exactly which one it is. A lot depends upon the style you are writing in. I used the BBCSO Discovery sounds in this and I do not notice any real problem with the strings "swelling".
https://audio.com/jim-ivy/evangeline-michael-the-fidler

In reply to by bobjp

Well, the individual parts sound fine using Discovery. However, when you combine them, the voicing makes it sound muddy. Even though the individual parts sound fine, they need more space to breathe. The BBCSO libraries are intended for a more "cinematic" idiom. They are not going to work for something like this example - the effect is something like Leonardo da Vinci using a paint roller to paint the Mona Lisa. BBCSO is simply the wrong paint brush for the tight and intimate trio score you have written.

Here is your stonetest piece with a different orchestration that is more appropriate for the BBCSO library. Change the "txt" extension to "mp3" after you download.

Attachment Size
stone2test-cinematic.txt 1.8 MB

In reply to by dave2020X

I had this response to a query from Spitfire:

All our libraries are recorded "in situ", meaning the players are sitting in their normal spots within the orchestra. The diagram does accurately reflect the seating position.

You'll find this helpful for sure: Where were the players and microphones located during the recording of BBC Symphony Orchestra?

BBC Piano was recorded to blend in with the orchestra naturally, as well as provide a solo piano sound when using the closer microphones.

If you're trying to blend a different piano into the library, there's all sorts of different ways to approach it. I would start by making sure you use the "stereo panners" in your DAW, which allow for moving the entire stereo field while maintaining both channels, as opposed to a standard L/R panner.

This may also be worth a watch for you: In Depth Tutorial - How To Mix Orchestral Samples Recorded In Different Locations - https://www.youtube.com/watch?v=6ApcBIfESHU


If you want any instruments on the opposite side from the Spitfire setting, then you could try swapping the L-R channels for those instruments.

In reply to by Jim Ivy

One issue is surely to do with what have been called core competencies. There are some musicians who are very competent in many fields - composing, playing an instrument, and also computer hardware and software technology. This is not a given though, and there are likely to be many who either find technology difficult, or who just can't find the time to mess with it. Some musicians are good enough that other people will do work for them, so a very good violinist/composer may be able to get others to do recordings, make backing tracks, write out scores, publish pieces etc. Some other musicians are complacent enough to think that they will be able to command such services once they hit the international stage. Only a few will make it.

Essentially what is missing from Musescore is the ability to control and "modulate" features using dynamic patterns or controllers. There may not actually be many features which can be controlled easily. The most obvious one is loudness/volume. Another is pitch, while spatial location is another one which is not usually available to musicians playing in a static environment. Some features are difficult to describe - for example "tone". Instrumentalists may be able to vary the tone of what they are playing dynamically, though they might not be able to give more than a rough outline of what or how they do that. Any sonic characteristic which can be controlled by a performer might be used for effect, though often the effects may be specific to individual players, and as such difficult to replicate or control in hardware or software. Some effects may be so unusual that they would be of very limited use in a system such as a notation package or DAW.

Panning is just one effect - and usually it is handled statically in packages - instruments have a fixed location in space, but since at least 2D panning can be specified by a number, there is no reason why panning could not be a controllable effect. 3D panning would require two numbers, and three numbers would be needed to also specify distance from the listener - 3D depth.

Simple 2D panning already exists in Musescore - but what is missing is the ability to control that dynamically. In DAWs many parameters can be varied by automation, and in modular synthesis there are ways to change the nature of sounds dynamically. So the essence of the suggestion here is firstly for Musescore to provide a generic method of changing one or more parameters in time - a form of automation - and in particular then to apply this to spatial location.
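To make the suggestion concrete, here is a rough sketch of what such an automation lane could compute: timed breakpoints interpolated into a pan curve and applied with an equal-power law. This is not an existing MuseScore feature - numpy and soundfile are assumed, and the file names are placeholders.

```python
# Sketch of a generic automation lane (not a MuseScore feature): breakpoints of
# (time, value) are interpolated to a per-sample curve and applied to pan.
import numpy as np
import soundfile as sf  # assumed dependency

mono, sr = sf.read("instrument.wav")        # placeholder mono stem
if mono.ndim > 1:
    mono = mono.mean(axis=1)

# Automation lane: at 0 s hard left, at 4 s centre, at 8 s hard right.
times = np.array([0.0, 4.0, 8.0])
values = np.array([-1.0, 0.0, 1.0])

t = np.arange(len(mono)) / sr
pan = np.interp(t, times, values)           # piecewise-linear automation curve
theta = (pan + 1) * np.pi / 4               # equal-power pan law
stereo = np.column_stack([mono * np.cos(theta), mono * np.sin(theta)])
sf.write("instrument_automated.wav", stereo, sr)
```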

In reply to by dave2020X

You can currently control panning as Musescore plays by using your mouse wheel. So you can have an instrument ping-ponging from left to right as you listen, but you cannot record that to the track in Musescore. Marc keeps saying that they are going to add "lanes", so perhaps they will create a lane that will allow you to record CC#10, which is the control for pan. It is not like it is rocket science. Every DAW that I know of has this feature. It is just that I do not suspect that many Musescore types would use it.
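For reference, this is roughly what recorded pan automation looks like at the MIDI level - just a stream of CC#10 messages over time. The sketch below writes one with the third-party mido library (an assumed dependency, not anything MuseScore ships with):

```python
# What recorded pan automation looks like at the MIDI level: a stream of CC#10
# messages. Written with the third-party mido library (assumed dependency).
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('program_change', program=57, time=0))  # GM trombone
track.append(mido.Message('note_on', note=60, velocity=80, time=0))

# Sweep pan from hard left (0) to hard right (127) over roughly two beats.
for value in range(0, 128, 8):
    track.append(mido.Message('control_change', control=10, value=value,
                              time=60))      # 60 ticks between CC events

track.append(mido.Message('note_off', note=60, velocity=0, time=0))
mid.save('pan_sweep.mid')
```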

In reply to by Jim Ivy

I know people who do surround sound for art installations, and other situations. I think there is a hall in Birmingham [UK] which has at least 60 channel spatial sound, and there are others with at least as many - maybe more. People also write music for those, which definitely have sounds rotating around, and overhead and maybe even underneath the listener.

Conventional panning is moderately simple, but there are several approaches. One is relative amplitude/volume, another is phase shifts, and another - related to phase shifts - is delays. Also impressions of depth are provided - if needed - by adjusting the ambience. It makes a difference also whether the listener is wearing headphones or a VR headset, as other spatial clues can be a change in relative frequencies between the ears. If the intention is to produce a soundfield similar to a "real" one, then something like the aforementioned Birmingham hall is likely to be the best, but for personal listening either binaural or VR approaches may be better - if people don't mind or actually like wearing headphones.
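A minimal sketch of the delay approach, assuming numpy and soundfile and a placeholder mono source file:

```python
# Sketch of delay-based panning: the same mono signal in both channels, with a
# sub-millisecond inter-channel delay. Level panning would instead change gains.
import numpy as np
import soundfile as sf  # assumed dependency

mono, sr = sf.read("source.wav")            # placeholder mono source
if mono.ndim > 1:
    mono = mono.mean(axis=1)

delay_ms = 0.5                               # delay the right channel slightly
delay_samples = int(sr * delay_ms / 1000)
right = np.concatenate([np.zeros(delay_samples), mono])[: len(mono)]

# Equal levels, but the earlier (left) channel pulls the image to the left.
sf.write("delay_panned.wav", np.column_stack([mono, right]), sr)
```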

Some DAWs - such as Logic - provide several different panning modes, though in honesty I don't know enough about them to be sure that some are better than others. I first became aware of the effectiveness of the phase/delay approach when trying to adjust the azimuth angle on a cassette tape deck. Before that I was only aware of the theoretical differences/advantages of phase and delays, but as I changed the azimuth angle the violin soloist very clearly moved from left to right. The differences in amplitude between the channels would have been minimal, so it took me a few minutes to realise that the effect was pretty much as you would expect from incorporating delays into the channels.

Getting moderately accurate panning probably requires at least 4 speakers in front and a four channel system, as well as some non-linear recording and replay techniques.

In reply to by dave2020X

I think the problem is going to be finding a way to make the instrument sound like it is farther away from the listener, and being able to pan it left and right as well. Remember, the further away from the mic, the more the environment affects the sound. Is it in an empty room? Large or small? Or surrounded by other musicians?
