I have an Akai MPK-25 keyboard that I have been wanting to try for some time. I'm using Qsynth with QjackCtl on a Dell D630 on the current Ubuntu Studio. I finally got it to work after figuring out the connections that needed to be made in QjackCtl. I have the general soundfont package that you listed. I understand how to assign each soundfont to a different channel, but I'm not entirely clear on how the channels are used. In order to change soundfonts with my setup, I have to change the soundfont listed on channel 1. My keyboard is listed as using MIDI channel 1. Is that why it uses the soundfont on channel 1? If I changed to a different MIDI channel, would Qsynth use the soundfont for that channel?
maybe yes to all accounts :)
Dating from the late previous century ;) the SoundFont specification is a multi-timbral one, which means you can set or map presets to MIDI channels on a one-to-one basis; each preset, patch, or instrument, whichever you like to call it, is addressed by its corresponding MIDI channel, address, or slot (1-16); each slot or preset, mapped to a particular MIDI channel, may be changed on the wire by a MIDI Program Change message (PC#) or via Qsynth's GUI, Channels > Edit... (button or right-click context menu).
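For reference, a Program Change is just a two-byte message: a status byte carrying the channel in its low nibble, then the program number. A minimal Python sketch (the function name is mine, not from any MIDI library):

```python
# Sketch: building a MIDI Program Change (PC#) message by hand.
# The status byte is 0xC0 plus the zero-based channel number; the
# single data byte is the zero-based program (preset) number.

def program_change(channel: int, program: int) -> bytes:
    """Build a Program Change message for a 1-based MIDI channel (1-16)."""
    if not 1 <= channel <= 16:
        raise ValueError("MIDI channel must be 1-16")
    if not 0 <= program <= 127:
        raise ValueError("program must be 0-127")
    return bytes([0xC0 | (channel - 1), program])

# Select program 42 on MIDI channel 1:
msg = program_change(1, 42)
print(msg.hex())  # -> c02a
```

Sending such a message to a receiver like FluidSynth switches the preset on that one channel only; the other 15 slots are untouched.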
From your MPK25 (of which, btw, I also have one specimen :)) you can either change the current MIDI channel setting, or else try the "Program Change" command to, yes, change the instrument preset/patch that is active on the current MIDI channel.
hth.
cheers
each and any MIDI device, port, or cable inherently conveys all 16 MIDI channels, because these are not physical but merely logical entities: a MIDI channel is a number (1, 2, ..., 16) or address that is stamped onto a MIDI message at the transmitting device (e.g. a MIDI keyboard controller) and eventually read at a receiving device (e.g. Qsynth/FluidSynth); it is at the receiver port that the MIDI message is filtered and routed according to its MIDI channel address (e.g. to the respective 1-16 instrument slot or preset in Qsynth/FluidSynth, if any).
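The "stamp" is literally the low nibble of the status byte of each channel voice message; a hedged Python sketch of reading it back (illustrative values, no MIDI library assumed):

```python
# Sketch: reading the channel tag off an incoming MIDI message.
# Channel voice messages (0x80-0xEF status bytes) carry their channel
# in the low nibble of the status byte; 0x90 is Note On.

def channel_of(message: bytes) -> int:
    """Return the 1-based MIDI channel of a channel voice message."""
    status = message[0]
    if status < 0x80 or status >= 0xF0:
        raise ValueError("not a channel voice message")
    return (status & 0x0F) + 1

note_on_ch3 = bytes([0x92, 60, 100])  # Note On, channel 3, middle C, velocity 100
print(channel_of(note_on_ch3))  # -> 3
```

So a single port carries every channel at once; the receiver decides what to do with each tag.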
Yes Rui, but how do I physically achieve this in Qsynth and Qjackctl?
I am just using two engines at the moment and they are not separating the MIDI.
If there was a button on each Qsynth engine that said MIDI input CH= *, then I could
get it.
Could you give me a concrete example of, say, three engines and how to do
the complex routing through QjackCtl to Qsynth?
Thanks for replying BTW :)
- each Qsynth engine presents you with a separate and unique MIDI input port;
- you don't connect MIDI channels, you do connect MIDI ports instead;
- each MIDI port transmits or receives messages for all and any of the 16 MIDI channels;
- remember: a MIDI channel is just a number or tag on a MIDI message that is transmitted from an output port through to an input port;
- each MIDI channel is logically auto-assigned to a Qsynth engine instrument slot on a one-to-one basis (exactly 16 slots, as seen in the Qsynth > Channels window, one for each MIDI channel number);
you just have to connect (possibly via QjackCtl) the MIDI device or application that emits or produces MIDI notes or messages (as source) to the desired Qsynth engine MIDI input port (as target); the soundfont instrument that will be played back is exactly the one in the instrument slot assigned to the MIDI channel being transmitted.
you can tell which MIDI channel is being transmitted only at the source device or application (nb. if it's a MIDI keyboard controller, for instance, it's usually configured to emit on MIDI channel 1 by default, but it can be anything else if set up properly).
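To make the port-vs-channel distinction above concrete, here is a hedged Python sketch of what the receiver does internally: one input port sees every message, and the channel nibble in each status byte selects the per-channel preset slot (the preset names below are invented for illustration, not any real soundfont mapping):

```python
# One "port" receives everything; the channel tag in each status byte
# picks the instrument slot, exactly one slot per channel (1-16).

presets = {1: "Grand Piano", 2: "Strings", 10: "Drum Kit"}  # slot -> preset

def dispatch(message: bytes) -> str:
    """Route one channel voice message to its per-channel preset slot."""
    channel = (message[0] & 0x0F) + 1  # 1-based channel tag
    return presets.get(channel, "(empty slot)")

stream = [
    bytes([0x90, 60, 100]),  # Note On, channel 1
    bytes([0x91, 64, 100]),  # Note On, channel 2
    bytes([0x99, 38, 100]),  # Note On, channel 10
]
for msg in stream:
    print(dispatch(msg))  # Grand Piano / Strings / Drum Kit
```

That is why you never "connect a channel" in QjackCtl: the connection is per port, and the channel fan-out happens inside the receiving engine.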
Thanks for annotating.
It seems as though, if the "24: USB Midi Cable" device could have some way of making virtual ports, with one MIDI channel going into each Qsynth engine instance,
that would achieve the result. When expanded, it just has one socket.
Interestingly, your Qsampler app does it perfectly (one MIDI channel per channel slot), but Program Change is non-functional on that. Which is a bummer!
Thanks for your attention Rui.
Just been reading the detailed explanations, figuratively explained as well. Thank you for all the "Cues" ;-) Forgive me if I'm asking a silly question. I've read a lot of the Qtractor documentation and about using it alongside Qsynth and QjackCtl. I've managed to educate myself about MIDI channels, ports, triggering the right channel, and routing the channel MIDI data to the right assigned channel of Qsynth.
Question 1: Is there a way Qsynth can give separate outputs for the synthesised audio? All my audio buses are triggered on my return input into Qtractor, hence forcing me to record the audio generated by Qsynth separately, one track at a time. The objective here, vs. creating three separate sound engines, is to save on CPU resources by utilizing all the sounds generated from one sound engine, two soundfonts, and channel assignment. Note: this is not for recording; it's for playing back recorded MIDI from Qtractor through Qsynth (as external sound engine).
The best you can get is turning Qsynth > Setup > Audio > Multiple JACK Outputs on and setting up Audio Channels and/or Audio Groups... but I really can't tell you much about that because I've never used it myself... note that all of those are in fact FluidSynth-provided options, not quite a Qsynth-specific feature at all.
It has been mentioned on occasion that you can use Rosegarden without jackd. I have already written a howto on configuring TiMidity to use a nicer soundfont than the standard issue, so that it can be used to make sound in Rosegarden without using jackd; see _up_the_fluidr3_gm.sf2_for_timidity
This page is intended to show how to set up fluidsynth (or rather its GUI version Qsynth) so that Rosegarden can use that as its sound engine. It is perhaps a more versatile option as you can have many instances of qsynth using different soundfonts. I am sure you can do this with timidity but it is not as clear how to do it as with qsynth.
You might notice that I have tabs with Qsynth1, 2, 3, etc. at the bottom of the window. These are different instances of Qsynth that I have configured to use different soundfonts. Also, you will notice that somewhere I do not have the Qsynth settings right. If someone knows how I can improve my settings, I would appreciate it.
There is a small green + sign in the bottom left corner of the Qsynth dialog. Click on that to open the Qsynth: Setup dialog. It has a default name of Qsynth+n, where n is the number of your last Qsynth instance. You can change this if you want, and if you want to know which soundfont each instance is using, that is not a bad idea.
Now, in the main window, you can right-click on the track and choose which device you want to produce which sound. If you have used a general soundfont like Fluid, you will also be able to choose which patch you want to use in the Track Parameters panel at the bottom left-hand side of the Rosegarden window.
The keyboard works fine connected to a Windows machine running Sekaiju. So it is sending CCs.
Sometimes, if I start my Linux setup fresh in the correct order (Qsynth, Rosegarden, etc.)
and then very quickly start playing on the Prokeys, it will play correctly through the speakers and then suddenly stop.
The playing period (if it works at all) is about 10-30 seconds before it shuts off.
I revisited it to see when I last had it working, as I had a problem just playing MIDI files today. I had forgotten about the problem with Rosegarden starting jackd (not required by us). I noticed that Qsynth has been updated about seven times by Packman since the beginning of March! That may not have helped your testing.
I connected my Zoom effects processor via USB, to see what 12.2 with KDE + Rosegarden/Qsynth/PulseAudio made of it, if anything. It has been a very long time and several openSUSE releases since I had it connected and working as an audio interface (not MIDI), and that was with JACK. So far, KDE/Phonon has detected the device and KMix has it as a playback and capture device. However, I think Rosegarden will need JACK (IIRC configured for duplex operation) to handle the audio side.
If we consider the output side to be completely independent of the input setup, then Rosegarden works fine under PulseAudio. I have it working regularly and reliably to play MIDI files, as long as jackd is not loaded. On openSUSE the output sound quality is way higher than on the Windows machine with its GM synth sounds, and it is much easier to manage the various soundfonts on Linux.
Just wanted to add that this had a very beneficial effect on screen recordings with xvidcap.
Without these commands in place I would get about a 1 sec. offset between audio and video.
With these commands in place the sync of audio and video was much more acceptable.
So I bypassed the extender, plugged the keyboard directly into the CPU box and applied your suggestion.
The aconnect thing worked. This time I could play directly to Qsynth from the keyboard.
I loaded Rosegarden and found it would record while playing through speakers.
Problem solved.
Last thing to check was whether the system would need the aconnect command each time, so I rebooted, started Qsynth and then Rosegarden, and the keyboard started playing immediately.
As far as I can tell it is Rosegarden that takes care of the aconnect instruction, and as long as that darned USB connector is not in the way it does so effectively.
I found out that Qsynth depends on Qt6, which apparently is installed, but while the Qt5 libraries can be found under /usr/lib/qt/, the Qt6 libraries are in /usr/lib/qt6/. All Qt packages on my system are installed as dependencies from the official repositories.
I don't think any version of Qsynth has ever worked under Win98, due to FluidSynth's requirement for newer GLib versions. It's possible to strip GLib out of FluidSynth, but you can also substitute older Win98-friendly GLib versions for 1.x.x versions of libfluidsynth.dll. I found this out when I compiled my own Win98-compatible DOSBox ECE build (with the FluidSynth patch) from this post here: Re: DOSBox ECE (for Windows & Linux)