High-quality OSS (opensound.com) driver for Linux as a replacement for ALSA
If you want high-quality sound you may try OSS. Recently opensound.com/forum has been captured by the illuminati, who have set up a redirect to another forum owned by them where posts are moderated. They want to kill this alternative to the ALSA sound system. I will try to keep OSS alive and will publish new hacks about how to use OSS on Linux, because the fascist moderators do not allow me to post on the new forum. So... let's start with some abstracts by Hannu Savolainen, the OSS developer: Why is sound quality bad?
I have been following the discussion on many Linux audio related forums and mailing lists for years. Users keep complaining that sound is garbled. This has continued for years and it will continue for decades if nothing is changed.
Why is sound garbled under Linux while it works under Windows/Mac?
There are a few reasons for this. The first one is specific to OSSv4. The others are application design issues common to all audio subsystems (OSS, OSS/Free and ALSA). I have said all this many times before, so it may sound like I'm repeating myself. However …
• Practically all Linux audio applications open the audio device with the O_NDELAY/O_NONBLOCK flags. However, after this they don't handle non-blocking reads and writes as defined by POSIX. OSSv4 implements POSIX-compatible non-blocking I/O, and applications that don't expect non-blocking behavior will run at fast-forward speed. (A sketch of the safe pattern follows this list.)
• Applications try to get lower latencies than necessary. This makes them less tolerant of the CPU load caused by other applications running on the same system. Lower latencies mean shorter buffers. Shorter buffers mean that the application has less time to spend waiting for the CPU to become available to it. Beyond some limit the application will fail to write/read new audio fast enough, which causes a pause/gap/click in the signal. If this happens too often, the sound will be completely garbled. This is an application-level bug and the sound system (be it ALSA or OSS) has no way to fix it.
• For some reason most Linux audio applications try to avoid blocking on the audio device. They don't use normal blocking reads/writes. Instead they use asynchronous timers (usleep/nanosleep/poll/select/whatever) to wait until they can read/write without blocking. Unfortunately this method is not reliable. The application may work just fine most of the time, but sooner or later it will run out of luck. This is more likely to happen if there is any other CPU activity in the system.
• Using asynchronous timers together with short buffers (low latencies) is potentially dangerous. Short buffers mean that the application has less spare time to tolerate the poor precision of the system timer. If asynchronous timing is used, the buffer size (latency) should be significantly larger than the resolution of the system timer. Typical Linux/Unix systems use a 100 Hz system timer, which means fragment sizes must be much larger than 1/100th of a second (10 ms). I don't know what is safe, but buffer sizes shorter than 30 to 50 ms are likely to be unreliable. Systems with a 1000 Hz system timer can work down to 3 to 5 ms latencies.
• The current trend is to layer different sound systems on top of each other. In the worst case there is also a server running in the background, and audio streams to/from all applications get looped through it. These top-level sound systems are supposed to fix problems in the lower-level APIs. IMHO this is pretty much impossible. Bugs should be fixed in the original software. Upper layers can add some workarounds, but at the same time they add their own bugs to the soup. Having a sound server running in the background makes timing more unreliable and adds extra context switches to the system.
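To make the safe pattern concrete, here is a minimal sketch (my own illustration, not Hannu's code) of what the list above recommends: a blocking open, a conservative fragment size chosen with SNDCTL_DSP_SETFRAGMENT, and plain blocking write(2) instead of timer-driven I/O. Device path and parameters are only examples:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <stdint.h>

int main(void)
{
    /* Open WITHOUT O_NONBLOCK, so write(2) blocks until buffer space
     * is free and the device itself paces the application. */
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0) return 1;

    /* Ask for 2 fragments of 4096 bytes (2 to the 12th). At 44100 Hz,
     * 16-bit mono, that is about 46 ms per fragment, inside the
     * 30 to 50 ms range suggested above for a 100 Hz system timer. */
    int frag = (2 << 16) | 12;
    ioctl(fd, SNDCTL_DSP_SETFRAGMENT, &frag);

    int fmt = AFMT_S16_LE, ch = 1, rate = 44100;
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
    ioctl(fd, SNDCTL_DSP_CHANNELS, &ch);
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);

    int16_t buf[2048];                 /* exactly one fragment of silence */
    memset(buf, 0, sizeof buf);
    for (int i = 0; i < 100; i++)
        write(fd, buf, sizeof buf);    /* blocks; no usleep() or select() needed */

    close(fd);
    return 0;
}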
As I said, these problems are caused mostly by application design errors. If the application doesn't work properly, no sound subsystem can fix that. The situation is different only if the sound subsystem enforces the use of dangerous techniques like the ones mentioned above.
These problems must be fixed at the application level. The current trend is that the sound subsystem is blamed for the problems. Then a new sound system is developed and the faulty applications get ported to it. The new system inherits the problems of the earlier ones and the circle starts again.
Comments
This is the full guide _http://ossnext.trueinstruments.com/forum/viewtopic.php?f=3&t=5801&p=213… to OSS installation and use with ALSA modules for MIDI input. This manual is for Gentoo/Funtoo Linuxes. Installation for deb-based distros is not covered there, but search their new forum for the Debian way.
In reply to This is the full guide by ossdriver
OSS with winehq
1. If you start winecfg and don't see the OSS driver in the Audio tab, it means that when Wine was compiled your operating system did not have OSS installed. You need to reinstall OSS (from the funtoo-media overlay) and recompile Wine with the +oss flag activated. When you see the OSS driver in winecfg and can choose OSS devices, Wine has been successfully compiled with OSS support.
2. Don't turn vmix off, otherwise you won't get any sound from games running in Wine! Vmix must be ON.
3. The standalone NI Kontakt may crash if vmix is activated.
Below is the way to hack Linux to get MIDI input support into Windows apps running in Wine.
GUIDE
MIDI input into programs running in Wine is now possible thanks to a hack by one comrade. You need to compile Wine in a special way. The following procedure describes how to compile Wine properly on Gentoo/Funtoo (I have succeeded with wine 1.9.9). First of all, you should have the following USE flags activated (those without a minus):
app-emulation/wine-1.9.9::gentoo USE="X alsa dos fontconfig gecko jpeg lcms mono mp3 ncurses nls openal opengl osmesa oss perl pipelight png realtime run-exes s3tc samba scanner ssl staging threads truetype udisks vaapi xcomposite xinerama xml -capi -cups -custom-cflags -d3d9 -gphoto2 -gsm -gstreamer -ldap -netapi -odbc -opencl -pcap (-prelink) -pulseaudio (-selinux) {-test} -v4l" ABI_X86="32 64 (-x32)" LINGUAS="-ar -bg -ca -cs -da -de -el -en -en_US -eo -es -fa -fi -fr -he -hi -hr -hu -it -ja -ko -lt -ml -nb_NO -nl -or -pa -pl -pt_BR -pt_PT -rm -ro -ru -sk -sl -sr_RS@cyrillic -sr_RS@latin -sv -te -th -tr -uk -wa -zh_CN -zh_TW"
Now let's issue the following commands:
1. cd /usr/portage/app-emulation/wine && ebuild wine-1.9.9.ebuild unpack
It will unpack the source into the folder /var/tmp/portage/app-emulation/wine-1.9.9/work
2. ebuild wine-1.9.9.ebuild prepare
3. cp /var/tmp/portage/app-emulation/wine-1.9.9/work/wine-1.9.9/dlls/winealsa.drv/midi.c /var/tmp/portage/app-emulation/wine-1.9.9/work/wine-1.9.9/dlls/wineoss.drv/
This takes the midi.c file from the ALSA driver and puts it into the OSS driver to fool Wine.
4. sed -i 's/ALSA_midMessage/OSS_midMessage/g' /var/tmp/portage/app-emulation/wine-1.9.9/work/wine-1.9.9/dlls/wineoss.drv/midi.c && sed -i 's/ALSA_modMessage/OSS_modMessage/g' /var/tmp/portage/app-emulation/wine-1.9.9/work/wine-1.9.9/dlls/wineoss.drv/midi.c && sed -i 's/ALSA_DriverProc/OSS_DriverProc/g' /var/tmp/portage/app-emulation/wine-1.9.9/work/wine-1.9.9/dlls/wineoss.drv/midi.c
This applies three substitutions to midi.c, renaming the ALSA entry points to the names the OSS driver expects.
5. Open the file /var/tmp/portage/app-emulation/wine-1.9.9/work/wine-1.9.9/dlls/wineoss.drv/Makefile.in and, after the line EXTRAINCL = $(OSS4_CFLAGS), add a new line: EXTRALIBS = $(ALSA_LIBS)
6. ebuild wine-1.9.9.ebuild compile
7. ebuild wine-1.9.9.ebuild install
8. ebuild wine-1.9.9.ebuild qmerge
If all the above steps succeeded, you now have a Wine build that supports MIDI input via ALSA modules in apps running under Wine. To test it, download the MIDI-OX program and install it with:
wine /path/to/midi-ox-setup.exe
Now run midi-ox:
wine ~/.wine/drive_c/Program\ Files\ \(x86\)/MIDIOX/midiox.exe
In MIDI-OX go to Options > MIDI Devices and you will see all your ALSA sequencer devices, such as virmidi (if you compiled the kernel with virmidi support) and USB devices (a MIDI keyboard or a MIDI-to-USB adapter like the UM-ONE). Choose your MIDI keyboard or adapter as the input device and close the window. Now press keys on the keyboard and you will see messages scrolling in the MIDI-OX monitor window. Congratulations! You have done the job for the lazy/reluctant Linux developers!
Unfortunately I have not managed to use the Kontakt 5.3.0 sampler, because it just does not work properly in Wine due to Wine's bugs. But Sibelius 7 works. I bet Finale and Reaper will also work. So, comrades, use OSS for sure and for pleasure. Beware that here you use ALSA modules for MIDI input, since the OSS project was seized by the illuminati and is no longer being developed, and therefore has no MIDI input support of its own. Read other topics to find the guide to using OSS with ALSA MIDI modules.
Any developers are welcome to PM me about OSS development!
In reply to OSS with winehq 1. If you by ossdriver
Hi,
As I said to you a couple of times on IRC, if PortAudio supports OSS in its next version, then MuseScore will support it as well. Also, to be clear, PortAudio is not like ALSA, Pulse or OSS; it's a library that uses the same API to access all these backends. So I would suggest filing a feature request with PortAudio at https://www.assembla.com/spaces/portaudio/tickets (and keeping the illuminati and fascists out of the discussion... it will help to get meaningful answers)
In reply to OSS with winehq 1. If you by ossdriver
USING OSS WITH JACKD2
I prefer jackd1 but those who need jackd2 can follow the manual below:
MANUAL
The new way to install jackd2 on Gentoo is as follows:
layman -a proaudio
emerge --sync
Open the ebuild in a text editor and make the changes below:
leafpad /var/lib/layman/proaudio/media-sound/jack-audio-connection-kit/jack-audio-connection-kit-1.9.10.ebuild
Change
src_prepare() {
default
multilib_copy_sources
}
to
src_prepare() {
default
epatch "${FILESDIR}"/patch.patch
cd "${S}"/linux
ln -s ../solaris/oss
cd "${S}"
multilib_copy_sources
}
Put the jackd2 patch.patch file from the post _http://opensound.com/forum/viewtopic.php?f=3&t=5291&p=21457&sid=a5bbc19… (also attached here) into the required directory:
cp patch.patch /var/lib/layman/proaudio/media-sound/jack-audio-connection-kit/files/
Add the following lines to the /etc/portage/package.use/package.use file:
=media-sound/jack-audio-connection-kit-1.9.10::proaudio abi_x86_32 -libsamplerate -dbus
>=media-libs/libsndfile-1.0.26 abi_x86_32 sqlite static-libs
>=media-libs/flac-1.3.1-r1 abi_x86_32 static-libs
>=media-libs/libvorbis-1.3.4 abi_x86_32 static-libs
>=media-libs/libogg-1.3.1 abi_x86_32 static-libs
Add the following line to the /etc/portage/package.accept_keywords file:
=media-sound/jack-audio-connection-kit-1.9.10 ~amd64
Now:
cd /var/lib/layman/proaudio/media-sound/jack-audio-connection-kit
repoman manifest
And finally install the multilib jackd2 package! (You can't build a multilib install manually from the sources at jackaudio.org.)
emerge -av =jack-audio-connection-kit-1.9.10::proaudio
Now you can start jackd2 by the command:
/usr/bin/jackd -S -R -P80 -v -doss -r44100 -P/dev/dsp -p1024 -n2 -w32
or if the above command does not work, by
/usr/bin/jackd -S -v -doss -r44100 -P/dev/dsp -p1024 -n3 -w16
or if VMIX is ENABLED:
/usr/local/bin/jackd -S -v -doss -r44100 -C/dev/dsp -P/dev/dsp -p1024 -n2 -w16
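For reference, here is my reading of those jackd options (worth verifying against jackd --help on your build): -S selects synchronous mode, -R requests realtime scheduling, -P80 sets the realtime priority, and -v is verbose. Everything after -doss goes to the OSS backend: -r is the sample rate, -p the frames per period, -n the number of periods, -w the sample word length in bits, and -C/-P the capture and playback devices.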
Remember that if you update Portage or layman (with emerge --sync, layman -S, or eix-sync), your custom-modified ebuild can be replaced by the ebuild from Portage and you will have to repeat the entire procedure above to reinstall jackd2 1.9.10.
The patch can be downloaded here: patch.zip, or here: _http://ossnext.trueinstruments.com/forum/download/file.php?id=173&sid=2…
In reply to USING OSS WITH JACKD2 I by ossdriver
Why is Linux meant to lag behind Windows?
Here is the answer: _https://web.archive.org/web/20121004094228/http://linuxmusicians.com/vi…
Let me quote the developer of the MusiKernel DAW:
"I don't have the patience to deal with all of the half-witted, no talent hacks that develop the rest of the Linux stack. I swear some of you are being hired by Steinberg and Native Instruments to lead the Linux audio community right off a cliff, your words and actions defy all reason.
So if I understand the other devs:
1. Linux will never be as good as Windows, yet they spend 23 hours a day working on the stack???
2. Plugins should use the default OS theme, even though Windows plugins don't, because the Windows95 look is so hott. Plus, Linux users are all metrosexuals who value a color-coordinated desktop.
3. It's a sin to even suggest your project could ever possibly compete with top Windows plugins. (Geez, wouldn't want anybody to think that, would we? You'd surely all cry if somebody used Linux plugins without the notion that they suck. That is very suspicious, covert propaganda straight from Steinberg headquarters.)
4. A command-line-only Linux sampler (or one with the lame Fantasia UI) is far more advanced than Euphoria, as are the lame and outdated Specimen and Petri-Foo. (Hey, 1996 called, they want LinuxSampler back.)
5. Only proprietary sampler instruments are worth using, because that's somehow totally different than .wav files plus an open instrument file format. Besides, your friends won't think you're cool if you just load your own wavs without using $500 uber-drums4gigasampler....
This, my friends, is why you can't have nice things. Every Linux plugin becomes abandonware for these same reasons.
CALF is a great plugin pack by Linux standards, while being totally sub-par by Windows standards. It had so much potential; why did they stop developing it? For the same reason every plugin before and after it stopped being developed. CALF conceivably could've been great with a couple more years of development, but they obviously hit the same wall that I and everybody else did."
So as you may understand, the agents of corporations are controlling the open-source community. That's why I STRONGLY recommend against any products promoted on the Freenode IRC server, because those trolls hide all information about MusiKernel and other possibly existing DAWs. It's not difficult for them to hide that info, because they own all mass media and the major internet servers and sites. That's why I recommend staying away from Ardour, Qtractor and Carla if you can. You may try installing and testing MusiKernel. Here it is: _https://github.com/j3ffhubb/musikernel
I could compile it on Gentoo/Funtoo and it runs amazingly fast. Reaper, Qtractor, Ardour, etc. all suck and WILL suck because they are meant to. With MusiKernel I understood that Linux is not a slow thing; it's deliberately made slow by the malicious agents. Get in touch with that Jeff developer and help the development if you can. As for my testing of MusiKernel, since I use the OSS driver and a non-deb-based Linux, it will take some time.
In reply to Why is linux meant to lag by ossdriver
ossdriver: although we're not at all against OSS, this is not the place to repost your whole previous forum/blog.
If you wish MuseScore to support OSS, then by all means, either contribute or do what lasconic suggested and help the guys at PortAudio out.
If you're looking for webspace to post your case and blog posts, do that: register at Blogger/WordPress and post your stuff. As it will then be your blog, you won't have to worry about 'others taking over'. Leaving out the insults would improve your argument as well.
In reply to Why is linux meant to lag by ossdriver
Good grief, I wish I understood all that :)
I have tried numerous times to set up a Linux music workstation, with little or no success. Some of it may be down to the fact that my internal soundcard was ruined by a repair shop, and I use an external USB soundcard device like this
https://www.pcworldbusiness.co.uk/catalogue/item/N066758W?cidp=Froogle&…
I tried Ubuntu Studio, AVLinux, and KXStudio. I only had success with KXStudio, and that was limited.
I searched the internet for hours looking for solutions, trying to understand Jack, ALSA, etc. to no avail :(
The conspiracy side of me thinks that misinformation may be out there from Microsoft, in various forms, but it would be difficult to prove. Any evidence of this, anyone?
I suppose the only way, from my understanding, to set up an effective music workstation using Linux and OSS(?) would be to invest in a computer with Linux pre-installed, so that the hardware is compatible.
Anyone have any more info on moving forward with this?
PS. I do use LXLE Ubuntu on a daily basis. I have had to return to using Windows (XP) because of problems with Skype :(
In reply to Good grief, I wish I by stupot101
Currently there are only two OSS-friendly Linuxes: Arch Linux and Funtoo. All the rest are not even worth trying. Stick to Arch for a faster installation. Funtoo is better at blocking spyware. Beware that PulseAudio is spyware; disable it by all means. You can use oss_skype_wrapper to use Skype with OSS. Skype itself is spyware: it reads your HDD and sends your files out, encrypted, to Microsoft for analysis. Follow the Arch Linux wiki Skype page to stop the spying.
Below is the guide to installing OSS under Gentoo. Don't use Gentoo; it's hostile now. Install on Funtoo from the funtoo-media overlay, or better, use Arch. PM me for details. You won't get any support from the new OSS forum; it no longer belongs to the OSS devs.
In reply to Good grief, I wish I by stupot101
Here _http://www.gnu.org/software/guix/ seems to be an interesting project. You can try installing OSS on it. Get the sources and patches for OSS from the oss-git page on aur.archlinux.org.
In reply to Here by oss_user
Advocates of the slow ALSA+JACK demonstrate their feelings towards alternative professional audio systems: https://community.ardour.org/pd_on_klang Unfortunately KLANG, the potential successor of OSS, was never finished, and people will have to continue using slow ALSA+JACK with all the accompanying torture and pain.
About KLANG
KLANG is a new open source audio system in development. Its target platforms are the Linux and FreeBSD kernels. KLANG offers professional-grade audio: the lowest possible latency, latency compensation and bit-exact precision at a very low CPU load.
KLANG has been designed as a signal routing system, supporting seamless and transparent signal transport between all endpoints. In practice this means that there's no distinction between hardware and process endpoints. Each endpoint is either a signal source or a sink, allowing for versatile signal routing topologies.
All connections are fully latency compensated. A metronome system synchronizes the signal processing to a configurable set of system internal and external clock sources. This greatly simplifies tasks like audio/video synchronization.
Why an audio system in the kernel?
Because it's the only reasonable thing to do. Audio is one of the few applications where time is of the essence and things cannot wait. Audio samples may not glitch; if they do, the result is a very noticeable "pop". Latency is of the essence: our visual system permits latencies of up to 20 ms without us noticing, whereas audio latencies as low as 4 ms are very noticeable.
Low latencies mean short buffer lengths. But the shorter a buffer, the more critical the scheduling of a process gets. Current userspace-based audio systems put audio processes at realtime or near-realtime priority. This works fine as long as there are only a few audio processes involved and the system is under little CPU load. Placing the audio system in the kernel instantly resolves this problem, since processes can easily be rescheduled.
Also, one must not forget that the OS kernel is the ultimate hardware abstraction layer. Ideally a process running in user space is liberated from taking care of all the gritty details. Nobody would expect a networking program to implement TCP by hand or to track the buffer status of the network interface. Nobody would expect to implement the details of a filesystem just to write a file.
Yet audio APIs like ALSA merely expose the hardware to user space. Imagine a network driver implemented like this: exposing raw network packets to user space and having some daemon take care of rerouting them to other processes, or a single process taking exclusive use of a network interface.
If a network system were implemented that way, you'd ditch it immediately. However, with current open source audio systems, this inane approach is accepted as perfectly normal.
Audio is a routing problem
Back in the good old days of analog audio, one would jockey around patch cables between signal sources and sinks. Matrix mixers would take in a signal and redistribute it to its destinations. In a well-equipped, well-set-up studio, any audio source could be routed at the press of a button or the turn of a knob.
This kind of signal handling is matched by the design of audio systems like JACK. However, JACK is ultimately flawed by running in user space. XRuns haunt, and are dreaded by, every JACK user.
Great, another API to replace them all. Now we've got N+1 APIs...
A new audio system is worth nothing without a base of applications that can make use of it. It is hence not interfaced through an entirely new API. Instead it builds on an existing API that provides audio the right way:
open(2) an endpoint
ioctl(2) the endpoint to configure it
write(2) to the endpoint to insert signal into the audio system
read(2) from the endpoint to retrieve signal from the audio system
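To make that concrete, here is a minimal playback example in exactly this style, using the classic OSS interface. This is my own sketch, not KLANG code; the device path and parameters are just examples (compile with -lm):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <math.h>
#include <stdint.h>

int main(void)
{
    int fd = open("/dev/dsp", O_WRONLY);         /* open(2) an endpoint */
    if (fd < 0) return 1;

    int fmt = AFMT_S16_LE, ch = 1, rate = 44100;
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);          /* ioctl(2) to configure it */
    ioctl(fd, SNDCTL_DSP_CHANNELS, &ch);
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);

    int16_t buf[44100];                          /* one second of a 440 Hz tone */
    for (int i = 0; i < rate; i++)
        buf[i] = (int16_t)(32767 * 0.3 * sin(2 * M_PI * 440.0 * i / rate));

    write(fd, buf, sizeof buf);                  /* write(2) to insert signal */
    close(fd);
    return 0;
}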
If this reminds you of OSS, well, yes: KLANG exposes a fully OSS-compatible API to user space. Where flexible signal routing and synchronization need to be configured, the OSS API is expanded upon. However, only very few applications will actually need to tap into the extended API. So if a program speaks OSS, it also speaks KLANG.
There will be no entirely new API. Actually KLANG reuses an existing API and extends it.
Wait, doesn't OSS4 already provide all this?
OSS4 provides mixing, resampling and splicing a signal from a source to a process. But OSS4 still treats audio hardware and the client processes accessing it as different kinds of citizens. In addition, OSS4 has no support for power management or sequencer data (= MIDI).
KLANG is different. KLANG has full support for the available power management primitives. KLANG provides transport for non-sample data, like MIDI. And KLANG will integrate into the kernel's namespacing/container primitives.
Sounds great! So where can I get it?
So far KLANG is in a very unstable and experimental state, at which it simply is not yet ready for an initial release. There is just too much in-situ cruft in it, which might mislead early adopters or even steer development in an undesirable direction if released too early.
The first release of the KLANG sources will happen as soon as the routing system is stable and a functional native driver for Intel HD-Audio chipsets is finished.
Yawn... sounds boring
If you say so. This project was started to scratch a few personal itches and pet peeves:
No proper mixing support in ALSA. No, dmix does not do the job. Also there's more to audio than just mixing.
User space audio systems are annoying
JACK has the right design, but consumes just too much CPU and kills power efficiency. Also, it doesn't cope well with coarse scheduling.
PulseAudio... meh, but I'm hearing it's getting better recently. Does it still require RealtimeKit to work properly?
ESD? Yeah, it makes for a great audio-disintegrator effect synthesizer, and it can be abused as a networked tracker-music backend. But not for glitchlessly playing a continuous stream of audio samples.
OSS4 does not support sequencers and drains your battery in no time. But it has a great API.
Advocates of ALSA say: Recently this page surfaced with a description of KLANG, which aims/claims to be a new framework for audio on Linux. It contains a number of troubling errors, mistakes and wrong-headed thinking.
As an opening note, I find it a bit odd that this announcement of KLANG is so anonymous. No name, no email addresses. Of course, we do know who the author is; it's just strange that this page has no contact information, no links, nothing.
Just in case nobody realized: audio is already done in the kernel. That's where device drivers live, and device drivers are how all I/O with audio interfaces takes place (theoretically, it might be possible to do user-space I/O with USB, but in practice it doesn't work well).
Dropouts ("xruns") are not really caused by being in user space. They arise from two different types of events:
poor kernel scheduling
incorrect application implementation
You can't solve the first without just doing everything in an interrupt handler. This is 2012, and this is not how you write general purpose operating systems anymore. You can't solve the second with a kernel framework, and incorrect application implementation will remain an issue with any audio API. Remember: most of what an audio app does happens in user space regardless of how the kernel side of things operates.
The Unix calls (they are not "OSS calls") open/read/write/ioctl are NOT the right API for streaming media, not because they don't work but because they allow sloppy developers to write code with the wrong design. Every piece of software on Linux that does audio I/O ultimately calls these functions, but they are wrapped in ways that (gently) encourage developers to structure their software in the right way(s) to avoid dropouts and other issues.
KLANG as documented does not appear to offer any approach to inter-application audio, which JACK does in a way that is completely seamless with actual device audio i/o. This is not a small thing.
open/read/write/ioctl are also the wrong API because no interposition is possible without grotesque hacks like LD_PRELOAD. By using system calls, you basically mandate that everything on the other side of them is done in the kernel. This makes it very inefficient to implement inter-application audio, since everything has to make extra transfers across the kernel/user-space boundary.
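For those wondering what such interposition looks like, here is a minimal LD_PRELOAD shim that wraps open(2) and logs accesses to /dev/dsp. This is only an illustrative sketch, not code from either project:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <fcntl.h>

/* Build: gcc -shared -fPIC shim.c -o shim.so -ldl
 * Use:   LD_PRELOAD=./shim.so some_oss_app */
int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    mode_t mode = 0;
    if (flags & O_CREAT) {                   /* mode argument only with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }

    if (strcmp(path, "/dev/dsp") == 0)
        fprintf(stderr, "shim: intercepted open(%s)\n", path);

    return real_open(path, flags, mode);
}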
KLANG is talking about putting mixing (and possibly some other kinds of processing) into the kernel. Since such processing is almost always done in floating-point format by almost every piece of software on the planet, this is problematic: there is no floating point allowed in the kernel.
JACK consumes barely any more CPU than an in-kernel design would. If you don't understand why this is true, then you don't understand enough to be crafting a replacement.
JACK's impact on power consumption is a function of low-latency realtime audio, not of JACK (i.e. there is no way to do low-latency realtime audio without affecting power consumption). If a user or an application wants to handle audio data with 1 ms of latency, you cannot avoid the CPU staying active. You cannot buffer your way out of the requirement to handle very frequent interrupts from the audio interface.
The KLANG "documentation" suggests the need to reimplement kernel-side drivers for every audio interface, which is just an absurd amount of effort.
Finally, I would note that ESD is irrelevant and has been for nearly a decade - I don't know why anyone would even mention it.
The KLANG team answers:
I'd like to address some points given here:
As an opening note, I find it a bit odd that this announcement of KLANG is so anonymous. No name, no email addresses. Of course, we do know who the author is, its just strange that this page has no contact information, no links, nothing.
That is because the project was not to be officially announced for a long time. This page was meant as a placeholder, so that the people I discuss this with had something to bookmark that doesn't 404. It's still far too early for a release, and this was sort of a FAQ for those who kept asking.
Somebody posting this to Reddit preempted that.
Dropouts ("xruns") are not really caused by being in user space. They arise from two different types of events:
poor kernel scheduling
incorrect application implementation
Exactly. Scheduling is indeed the point here. You can't influence scheduling from user space, but you can from kernel space. If the buffers of a process doing audio are running low, you can reschedule the process for early continuation. If the buffers of a reading process are getting full, you can likewise reschedule it for early continuation.
KLANG as documented does not appear to offer any approach to inter-application audio, which JACK does in a way that is completely seamless with actual device audio i/o. This is not a small thing.
Actually, KLANG is all about inter-application audio. You open /dev/dsp and your FD becomes an endpoint in the KLANG routing system. FDs opened O_RDONLY are sinks, FDs opened O_WRONLY are sources, and FDs opened O_RDWR create a sink and a source. You can connect any sink with any source in KLANG, just as you can with JACK. You can route process endpoints to process endpoints, or hardware endpoints to hardware endpoints.
A lot of design decisions in KLANG were directly influenced by JACK. Two of the key points in KLANG's design were:
- Everything that goes with JACK should be possible with KLANG.
- KLANG should not replace JACK but actually provide a nice environment for it to live in.
In fact I had planned to get in touch with the JACK developers rather soon, so that we could implement a libjack.so that does all routing via KLANG instead of going through a user-space server. Routing through a user-space server adds some additional expensive context switches. Yes, I know that JACK uses floating point, but this is not a problem, because mixing is simple addition, and that can be done without the help of an FPU, and rather efficiently too.
This makes it very inefficient to implement inter-application audio, since everything has to make extra transfers across the kernel/user space boundary.
Sorry, but this is just FUD. How do you think data is exchanged between user-space processes? You cross the userspace/kernel boundary twice doing so. Any sort of IPC always involves system calls, even if it goes over shared memory, because shared memory is actually shared address space and all sorts of "kernel hell" breaks loose when touching it.
Since such processing is almost always done in floating point format by almost every piece of software on the planet, this is problematic, since there is no floating point allowed in the kernel.
KLANG uses integer arithmetic for its whole signal chain. You can do everything with integers just fine, and in fact with higher precision. The main reason to use floating-point numbers in audio is space efficiency when storing large-dynamic-range audio. A 32-bit float has 24 bits of effective precision in the mantissa; the exponent is "just" a gain factor (so to speak).
KLANG's internal stream format gives at least 8 bits of foot- and headroom for all samples in it. Gain/attenuation is applied by factoring the multiplier into the closest power of two and a remainder: a bit shift is applied, followed by a multiplication by the remainder.
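Roughly, in code, the scheme reads like this (an illustrative sketch of the description above, not KLANG's actual code; a kernel implementation would precompute the factors without any floating point):

#include <stdint.h>
#include <math.h>

/* Apply gain g to a sample by splitting g into 2^k times a remainder
 * in [1.0, 2.0): one bit shift plus one integer multiply (Q1.15). */
static int32_t apply_gain(int32_t sample, double g)
{
    int k = (int)floor(log2(g));                  /* closest power of two */
    int32_t rem_q15 = (int32_t)((g / exp2(k)) * (1 << 15));

    int64_t x = sample;
    x = (k >= 0) ? (x << k) : (x >> -k);          /* the bit shift */
    x = (x * rem_q15) >> 15;                      /* multiply by the remainder */
    return (int32_t)x;
}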
JACK consumes barely any more CPU than an in-kernel design would. If you don't understand why this is true, then you don't understand enough to be crafting a replacement.
JACK itself, no. But the added context switches between applications do. That's the main problem here.
If a user or an application wants to handle audio data with 1msec of latency, you cannot avoid the CPU staying active.
I thought so too, for a long time. But then Lennart Poettering (re-)discovered a rather old method (this is one of the few cases where I think he did something good) for getting low latency even when operating with large buffer sizes. This might sound impossible, but only as long as you assume that a filled buffer is untouchable. If you accept that one may actually perform updates on an already-submitted buffer, just slightly ahead of where it's currently being read from (for example in a DMA transfer), you can get latency down even with larger buffers. Lennart implemented this in PA when they were moving to very long buffers (256 ms and longer) on mobile devices but still needed low latency for audio events.
The only drawback of PA's implementation is that it uses a rather crude scheme to estimate the position of the "readhead", which is prone to phase oscillations (in the readhead position). Basically one wants to use a PLL for this, but PA uses something more like an FLL.
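The rewrite-ahead idea itself is simple. A rough sketch (my own illustration; the buffer layout and names are assumptions, not PA's or KLANG's actual code):

#include <stdint.h>
#include <stddef.h>

typedef struct {
    int16_t *data;     /* ring of queued samples */
    size_t   len;      /* ring length in samples */
    size_t   read_pos; /* estimated position the hardware reads from */
} ring_t;

/* Insert fresh samples just ahead of the estimated readhead,
 * overwriting audio that was queued but not yet played. 'margin'
 * must cover the estimation error of read_pos; if it is too small
 * we write behind the readhead and glitch. */
static void rewrite_ahead(ring_t *rb, const int16_t *src, size_t n, size_t margin)
{
    size_t pos = (rb->read_pos + margin) % rb->len;
    for (size_t i = 0; i < n; i++)
        rb->data[(pos + i) % rb->len] = src[i];
}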
The KLANG "documentation" suggests the need to reimplement kernel side drivers for every audio interface, which is just an absurd effort.
True, and this is actually the biggest roadblock. But if you look at the state of many of the sound drivers, many, if not every single one of them, require a major overhaul.
Finally, I would note that ESD is irrelevant and has been for nearly a decade - I don't know why anyone would even mention it.
This was meant more as a joke and was there for completeness. I added it to mention the problems you run into when designing audio multiplexing systems. Also, I recently actually used the ESD protocol in a crazy hack where I had an Atmel AVR with the Ethersex network stack playing tracker music via ESD (it generated and uploaded samples into a PA esd-protocol module, then triggered playback). Totally crazy and useless, but fun.
Please don't think that I want to replace JACK. JACK has its proper place in the Linux ecosystem; it just doesn't fit as a kitchen-sink audio system (although I know plenty of people who actually use JACK as their universal audio backend on their boxes). Actually my intention was to build a healthy relationship with the JACK community for mutual benefit. KLANG's design has been heavily influenced by JACK and its API.
The Unix calls (they are not "OSS calls") open/read/write/ioctl are NOT the right API for streaming media, not because they don't work but because they allow sloppy developers to write code with the wrong design.
Actually, I'm very interested in what you mean by this. I know a certain coder (he goes by the handle mazzoo) who does things like realtime signal processing for very-low-frequency SDR and exclusively uses the OSS compatibility API, because native ALSA is a drag to use. I don't see any problems there.
Playing audio: write(2) the samples to the FD, and the write call returns when the buffer is almost finished playing. Almost, because the process must be given enough time to prepare the next buffer.
For asynchronous operation you can use either mmap or the async I/O syscalls (on Linux).
Reading is even simpler: you request a certain number of samples, and read returns either having read that amount or maybe a bit early, buffering further samples so that nothing (beyond a certain timeout) is lost.
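For example, a capture loop in this style might look as follows (an illustrative sketch; parameters are arbitrary):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <stdint.h>

int main(void)
{
    int fd = open("/dev/dsp", O_RDONLY);
    if (fd < 0) return 1;

    int fmt = AFMT_S16_LE, ch = 1, rate = 44100;
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
    ioctl(fd, SNDCTL_DSP_CHANNELS, &ch);
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);

    int16_t buf[44100];                      /* one second of audio */
    size_t want = sizeof buf, got = 0;
    while (got < want) {                     /* read(2) blocks until data arrives */
        ssize_t n = read(fd, (char *)buf + got, want - got);
        if (n <= 0) break;
        got += (size_t)n;
    }
    close(fd);
    return 0;
}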
I'm very interested in examples of bad audio program design, though. If you have them, I'd like to read them, to learn what not to do or encourage.
As you can see, the ALSA team impudently lies so that we cannot have a professional, speedy driver. Even jackd itself is configured for slow operation. That's why you are encouraged to buy RME and other hardware stuff. Or PM me for help with the OSS driver.
In reply to Advocates of slow alsa+jack by oss_user
oss_user: I'm seriously considering reporting this post as spam, for being nothing but advertising that has nothing to do with MuseScore.
I'll provide you with the same words already given above:
There is no reason for anyone here to PM you for help with an OSS driver. If you know how to add it to PortAudio, then do it and MuseScore will benefit equally. If you don't know how to do it, PM'ing you won't help anyone else either.