The Holy Grail? (second reprise)


#81

Man, you guys go deep! Yeah, I forgot about the formant sequencing in the FS1R. So it applied both principles (changing the oscillator output AND the filtering). Even without too much detail (normally I am the champion of that, but you guys talk technobabble like this is Star Trek Voyager): the aforementioned thinking in the three main elements of music (pitch, amplitude and timbre) has always helped me to get a fix on where a certain synthesis system fits into the bigger picture. The fact that systems tend to be software-based nowadays does however make the job a lot more difficult, since the approaches become ever more fluid, as your discussions prove.


#82

For me, one question that relates to "the big picture" is:

Is the musician

  • the master of the instrument
  • an active part of the instrument?

These two points of view lead to very different approaches and designs.


#83

in fact i personally would go as far as to say that if those two points are not met, you cannot really call someone a musician, but more an arranger, programmer, whatever...


#84

I did not mean to define what is a "musician" and what is not.

Most electronic instruments are designed to be directly controlled: any action will lead to the exact same result. Full control is easy, as the player is external to the system.

On the other hand, some acoustic instruments, such as wind instruments, have richer relationships with the player...

[Star Trek Voyager mode on]
The player is part of a multi-feedback-loop system with a rich phase space with poles, hysteresis and even chaotic overtones...
[Star Trek Voyager mode off]

Such a system has a richer dynamic behaviour as it depends on articulation/timing details.

Physical modelling, as suggested by @brasso (point 2), can lead to this kind of interaction.
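
To make that a bit more concrete, here is a rough Python sketch (just an illustration, not an Axoloti object) of a Karplus-Strong style string. The point is only that the same note sounds different depending on how it is excited, so the articulation details end up in the timbre:

```python
import numpy as np

def pluck(freq, hardness, sr=44100, dur=1.0):
    """Toy Karplus-Strong string: 'hardness' shapes the excitation burst."""
    n = int(sr / freq)                       # delay-line length sets the pitch
    burst = np.random.uniform(-1, 1, n)      # excitation noise
    for _ in range(int((1.0 - hardness) * 8)):
        burst = 0.5 * (burst + np.roll(burst, 1))   # softer pluck = duller burst
    out = np.zeros(int(sr * dur))
    line = burst.copy()
    for i in range(len(out)):
        j = i % n
        out[i] = line[j]
        # averaging filter in the feedback loop acts as string damping
        line[j] = 0.996 * 0.5 * (line[j] + line[(j + 1) % n])
    return out

soft = pluck(220.0, hardness=0.2)   # mellow attack
hard = pluck(220.0, hardness=0.9)   # brighter attack, same note
```

In a real instrument the player is also inside the feedback loop, reacting to what he hears; that part is exactly what a one-shot render like this cannot capture.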


#85

hmm, I think there are quite a few electronic instruments that don't conform to this.
it's largely down to the interface. sure, keyboards (and grids) are, as roger linn says, on/off switches - but move beyond this to expressive controllers (continuums/seaboards/linnstruments etc.), theremins, or any non-quantised input (ribbons/ondes) and then a musician's subtle movements become important again, and with this they need and rely on feedback.

I do think that synthesis plays a role, but only once you have captured the nuances.

listen to a theremin: it has simple synthesis, yet it's quite unique. I've found the same with the soundplane/eigenharp - often simple sounds come to life, because the player's actions all make subtle changes to the sound... you don't need so many envelopes/lfos to create the 'movement'.

the 'next generation' is the Haken Continuum and EaganMatrix, where the expressive input is combined with a sound 'engine' that was built from the ground up for it (and a great sound designer). with each of its (highly crafted) sounds, every note you play is slightly different.
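
to make the 'movement comes from the player' point concrete, here's a rough python sketch (purely illustrative, the control curves below are made-up stand-ins for real sensor data from a surface or antenna) of a single sine oscillator whose pitch and level just follow two continuous control signals, no envelopes or lfos involved:

```python
import numpy as np

sr = 44100
t = np.arange(0, 2.0, 1.0 / sr)

# pretend these came from hand-position / pressure sensors, sampled at audio rate
pitch_hz = 440.0 + 25.0 * np.sin(2 * np.pi * 0.6 * t) + 3.0 * np.sin(2 * np.pi * 4.7 * t)
level = 0.5 + 0.4 * np.sin(2 * np.pi * 0.3 * t + 1.0)

phase = 2 * np.pi * np.cumsum(pitch_hz) / sr   # integrate frequency to get phase
audio = level * np.sin(phase)                  # one sine, but it never stands still
```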


#86

one point to add is:

even if you have the most perfect physical model of an instrument, it will never be the same as the original instrument to play. for example pianoteq's modelling is great (for a piano), but you cannot touch the strings or put things inside the piano like on a real piano. that sort of interaction is not going to happen :slight_smile: but i think some of the strongest synthesiser sounds/songs are actually the ones that are very different from human beings playing real instruments. it's the artificial part, the fact that every note is equally loud or always has the same attack, that can be very intriguing as well.

but i love physical modelling sounds as well, don't get me wrong.


#87

That's a rather "Kraftwerkian" view of deterministic machines. Even the so-called classic analog waveforms - saw, triangle and square - are much like perfect geometrical Platonic solids. There is a form of crystallisation around the ADSR-controlled VCO/VCF/VCA synthesis scheme.

Electronic instruments can be much less deterministic. As stated by @thetechnobear, new sensitive controllers are appearing. I can't afford to buy one at the moment, I simply have a little QuNexus here... but even so, polyphonic aftertouch is a real plus if the patches are adjusted to take care of pressure nuances.

On the patch side, I think that there are a lot of concepts that can be explored to enhance articulation, i.e. timbral variations that take account of context and actions.
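
As a rough illustration of the kind of mapping I mean (the voice handling below is made up, only the mido MIDI library and its message types are real), per-note pressure from polyphonic aftertouch could steer a per-voice filter cutoff:

```python
import mido   # assumes the mido MIDI library and a connected controller

voices = {}              # note number -> per-voice filter cutoff in Hz (illustrative)
BASE_CUTOFF = 800.0      # fairly closed filter at rest
PRESSURE_RANGE = 4000.0  # how far pressure can open it

with mido.open_input() as port:              # default MIDI input port
    for msg in port:
        if msg.type == 'note_on' and msg.velocity > 0:
            voices[msg.note] = BASE_CUTOFF
        elif msg.type == 'polytouch':        # polyphonic aftertouch message
            if msg.note in voices:
                voices[msg.note] = BASE_CUTOFF + PRESSURE_RANGE * msg.value / 127.0
        elif msg.type == 'note_off' or msg.type == 'note_on':
            voices.pop(msg.note, None)       # note_on with velocity 0 == note off
```

The same context idea could be pushed further, for example by letting the response also depend on how the previous note was played.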


#88

sure, and i love that! but it will still NEVER be the same as an acoustic instrument (why should it be, really?). the word controller says it all, actually: new sensitive controllers. you control some patch or integrated synth with it. the difference being that on a guitar or violin you actually play the very string that vibrates. that is always going to be different.


#89

Haptic controllers can bring some tactile feedback; even if it is a different kind of feedback, maybe it can be useful in a musical instrument...
Maybe this one could be used with an Axoloti:

To me the main interest is not to imitate acoustic instruments, but rather to go beyond the Minimoog-ish synths.


#90

we are very much in the same boat there. cool. it's just a different concept really. one thing that is also very different is that on an acoustic instrument you send the sound waves out of the instrument directly into the air, whereas with an electronic instrument you have at least one "barrier" (a loudspeaker or similar). this alone makes a big difference in sound perception. but enough of this :slight_smile:


#91

I think the issue you mention with speakers is that generally they are point sources - but that's not fixed, you can set up multiple speakers to output sound in multiple directions.
(I seem to remember reading about new 3d speakers too)

back on topic though, the holy grail (for electronic instruments), what is it you look for?

I agree, I'm generally not after acoustic emulation... I do tend to look at acoustic instruments for inspiration though: why is it that a piano/guitar still feels special? what qualities make them rewarding to play?

  • an interface which is nuanced, one that rewards practice, where there are depths to master
  • a sound that is entangled with the interface, where slight changes in playing mean something; a simple but effective interface.

I think there are lots of very different ways to achieve this; in the right hands, this can range from a theremin to a modular synth with live patching.

one thing I do wonder though, with synthesis: because we have so many choices, do we develop too many 'patches', and not spend enough time on one to nail down the interface subtleties of it, or learn to play it?
I've heard Continuum players say that each preset is like learning a new instrument, since the way the surface and sound interact is different for each...

thoughts? what do you look for?


#92

Ah, we are getting into very interesting territory now.

Let's be honest. Most instruments we use nowadays are basically consumer-oriented and not musician-oriented. Therefore their interfaces are trigger-oriented and not performance-oriented. In itself there is nothing wrong with that. It's basically the ultimate tool to democratise music making.

The revolution of the analog subtractive synthesizer was not, as is so often suggested, pure real-time voltage control as such. Voltages were also used to control many earlier instruments like the Theremin, Ondes Martenot, Trautonium and, who has ever heard of it, Hugh Le Caine's brilliant Electronic Sackbut (you guys might!). I personally think the real breakthrough in the Moog school of thinking was the ADSR. For the first time it was possible to preprogram a time-variant function.
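
To put that in concrete terms: an ADSR is nothing more than a pre-programmed time function that gets applied to whatever it controls. A toy linear version in Python (real envelopes are usually exponential, and the names here are only illustrative) would look like this:

```python
import numpy as np

def adsr(attack, decay, sustain, release, gate_time, sr=44100):
    """Linear ADSR: attack/decay/release in seconds, sustain as a level 0..1."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)
    d = np.linspace(1.0, sustain, int(decay * sr), endpoint=False)
    s = np.full(max(0, int(gate_time * sr) - len(a) - len(d)), sustain)
    r = np.linspace(sustain, 0.0, int(release * sr))
    return np.concatenate([a, d, s, r])   # multiply this onto an oscillator's output

env = adsr(attack=0.01, decay=0.2, sustain=0.6, release=0.5, gate_time=1.0)
```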

But there is nothing wrong with that as such. Even players of "real" instruments depend on such "preprogrammed" functions. As an acoustic player you know that different approaches to twanging or bowing your strings will lead to different responses (like, for instance, a different formant content and the corresponding decay).

So I have no qualms with any programmable function. What does however bug me is that even the use of ADSRs is nowadays underrated. Start any YouTube video on a new analog machine and the envelopes are hardly touched. On the other hand, people are almost pulling the filter controls off. So there is a need for direct control after all, but the real problem is that it remains rudimentary.

I started my "career" going the preprogramming way because it gave me/us the wide palette of all the different synthesis methods. Then I bought my CS80 (when still just about affordable) and learned the restrictions of the trigger-oriented school, simply by learning to appreciate its real-time control options, which are just as much part of its secret as its sound is. I have since learned to use both aspects in a better balance. I have, for instance, by now "built" myself a system with CS80-type expression and modelling versatility. I even constructed my own uniform keyboard adapters to make it more natural to play than the conventional piano (http://www.brassee.com/instruments.html#starship_one). The next step will be to integrate the wonderful Axoloti into the setup.

Isn't that just the beautiful thing about this technology anyway? We can all search for our perfect balance between all the possibilities. There is so much stuff out there that one can indeed almost "construct" his own ideal system. Never were the times for making music better.

But as always: these are the best of times, these are the worst of times. As so often, the popular output produced with all this technology only uses a fraction of its possibilities. That is the real fight, but alas one that can hardly be won. Luckily the whole Haken etc. trend is a long overdue antidote. Now the price only has to come down.

To end on yet another positive note: after yet another false start by cocking up the PayPal process, I at last received my personal Axoloti yesterday. Hurray!


#93

Dear Lokki,

I understand what you mean, but I actually do not fully agree. We are getting so near to perfect emulations that it becomes ever more difficult to determine whether the source is real or synthetic. Expert use of virtual instruments makes it more and more difficult to detect the remaining giveaways.

One very important aspect will however remain: whether the performer is able to think in the terms of the instrument he is trying to emulate. That is actually the reason why ever better emulations can put us back on the trail to real performance. That is basically what happened to me.

The next step however is to integrate an instrument's response behaviour into the instrument's algorithm. Roland has already added such tricks to the V-Synth. When you play the next note, the instrument adds aspects of the original acoustic instrument's response as an automated response (one could call it a sort of performance response envelope).

So alas, soon everybody will sound like the same soulless but perfect duduk player or whatever, and the cycle will start again.


#94

yeah, i agree that if you are listening to a recording it can be hard to distinguish, but it is still very possible. i would put it this way:

you can perfectly emulate a drummer playing a pop song nowadays, sound-wise but also groove-wise.

now try a jazz-drummer. much more versatile. try to fool me there, it will be difficult.

now try a free drummer who makes some noise on his instrument: impossible to get even close with anything synthetic.

this can be applied to any instrument.

imagine a concert: even the best emulation will not fool somebody into thinking the keyboard is actually a grand piano. the energy travelling directly from musician into instrument, you can actually hear that sometimes, and that ain't gonna happen.

but for me that is not the point. emulating instruments can be interesting to get an idea of sound/timbre, but synthetic sounds have much more to give than this. i personally do not understand why you would want to emulate anything for real, except for monetary reasons, which sucks. (i know this is exaggerated, but i tend to think this way)

it makes me sad and angry that people like hans zimmer compose big scores for big movies and use almost exclusively samples/emulations, what the fuck?? that music could be so great played by a real orchestra. the reason here is of course also money. if you listen to zimmer and then to williams, you can hear the difference pretty clearly!


#95

yeah, and then why not learn the real instrument instead? i know, i know, it can be hard to obtain all those instruments :slight_smile:


#96

the holy grail would probably be a brain/computer/instrument/whatever bridge that plays what you think...
so if somebody wants to send me his eeg set, i'll cook something up... :slight_smile:


#97

i think you are spot on here. put a musician in front of a modular, and music will come out at some point.
put a musician in front of a spoon and a can, and you guess... music!
but at some point you have to learn to be a musician, and no instrument/tool should/could ever take that away from you.


#98

That's quite impressive

Since I have tasted polyphonic aftertouch patches on the Axoloti with my two-octave rubber-key QuNexus... I simply want more keys :yum:

I see that you used GEM S keyboards. I was not aware that these keyboards have polyphonic aftertouch and even the extremely rare release velocity. I think that the Elka MK55 and MK88 have similar capabilities...

I wonder if there is enough room in these keyboards to incorporate one or two Axolotis?


#99

Whistling into a pitch detector can do the trick without electrodes...
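
A rough sketch of what I mean (illustrative only; a real-time tracker would also need windowing, a voicing threshold and some smoothing): a basic autocorrelation pitch estimate over a mono buffer of whistling, in Python:

```python
import numpy as np

def estimate_pitch(buf, sr=44100, fmin=200.0, fmax=2000.0):
    """Crude autocorrelation pitch estimate over one mono buffer."""
    buf = buf - buf.mean()
    corr = np.correlate(buf, buf, mode='full')[len(buf) - 1:]   # lags 0..N-1
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(corr[lo:hi])       # strongest periodicity in whistle range
    return sr / lag

# quick self-test with a synthetic 880 Hz "whistle"
t = np.arange(0, 0.1, 1.0 / 44100)
print(estimate_pitch(np.sin(2 * np.pi * 880.0 * t)))   # prints roughly 880
```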


#100

that's of course not what i mean. i can already play what i imagine/sing on my double bass.
i was thinking more along the lines of a huge modular synth with interconnections to my brain, basically an electronic orchestra at my will.