There are several ways to work with audio in blender. Since audio is mainly a part of the gameengine, this design focuses on use in the gameengine and on presetting sounds for it. This functional design is intended for practical use of audio: the way a user can load, edit and preset a sound for use in a game.
The road from loading a sample to use in the gameengine consists of three parts:
loading the sample
presetting the sound
placing the sound in the logic brick window
Loading the sample is actually a simple part of handling sound. Just like any other data that has to be loaded into blender, there should be an easy and intuitive way of loading and unloading a sample. The current implementation has several flaws: samples can be loaded into blender, but they are thrown away again if the sound isn't used by the logic. If blender saves the file and reopens it, the sample won't be reloaded (because it wasn't saved in the first place).
the sound menu as it is now
Why is this wrong?
Because a user may want to listen to and test many files before deciding to use a sound. He may or may not be done with the sound and may not even have decided whether he will use it at all. But the fact that he wants to save his work shouldn't interfere with his future plans for the sound. For example, the user prepares a large number of sounds. All the sounds are given the right gain, pitch, attenuation etc., but he hasn't decided yet where and how he will connect them to the gamelogic. If he saves his work and quits, all his work is lost, because blender thinks there was no use for all of these sounds.
What should be done is make blender do less thinking for the user: everything the user does in the sound windows (loading and presetting) should be saved. Period. If the user wants to get rid of a sound, he alone can take that action: a remove sound button. Just like the user decides which sounds to load, he must be able to decide which sounds to unload. Even if this is not consistent with the rest of blender, I think this basic functionality is a must for easy and intuitive working with blender.
the sound menu as it should be, with a delete and copy button
So what functionality is required?
Sound creation (implemented, but to be enhanced)
The creation of a sound. A sound is a sample loaded into memory plus a set of control parameters. These parameters could be controls like gain, pitch, attenuation, looping, orientation, cone size, doppler scaling, doppler factor, minimum distance, maximum distance and fixed positioning.
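As a sketch, such a sound could be represented as a sample reference plus its control parameters. The structure below is illustrative python, not actual blender code; all names are assumptions:

```python
# Illustrative sketch of a sound: one loaded sample plus control
# parameters. None of these names are actual blender identifiers.
from dataclasses import dataclass

@dataclass
class Sample:
    filename: str       # where the sample came from on disk
    data: bytes = b""   # the raw sample kept in memory (and saved with the file)

@dataclass
class Sound:
    name: str
    sample: Sample              # the sample this sound plays
    gain: float = 1.0
    pitch: float = 1.0
    attenuation: float = 1.0
    looping: bool = False
    orientation: tuple = (0.0, 0.0, 1.0)
    cone_size: float = 360.0    # degrees; 360 means omnidirectional
    doppler_scaling: float = 1.0
    doppler_factor: float = 1.0
    min_distance: float = 1.0
    max_distance: float = 100.0
    fixed_position: bool = False  # play from a fixed point instead of an object
```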
Prelistening the sample (to be implemented)
As soon as a sound is created, a window opens for loading the sample that is to be used by the sound. This load window should also have a prelisten button: when browsing through a large list of samples, you want to listen to a sample before actually opening it.
Deleting a sound (to be implemented)
The deletion of a created sound. If the user decides not to use a sound, he must be able to delete it. This does not delete the sample on the hard disk; it merely throws away the collection of control parameters and the reference to the sample.
Copying a sound (to be implemented)
A created sound can be copied. If the user wants to use a sound in several ways with small variations, he doesn't want to reopen the sample every time and then tweak the settings for each copy. He wants to copy the original sound and work on the newly made copy.
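Together with deletion, the semantics could be sketched like this: deleting only drops the parameter set and the sample reference (never the file on disk), and copying duplicates the parameters while sharing the already-loaded sample. Names are again illustrative:

```python
# Illustrative delete/copy semantics for sounds. A Sound holds its
# parameters and a reference to a shared Sample (see the sketch above);
# here both are reduced to a minimum.
import copy
from dataclasses import dataclass

@dataclass
class Sample:
    filename: str

@dataclass
class Sound:
    name: str
    sample: Sample
    gain: float = 1.0
    pitch: float = 1.0

sounds = {}  # the sounds currently loaded, by name

def delete_sound(name):
    # Throws away the control parameters and the reference to the
    # sample; the sample file on the hard disk is untouched.
    del sounds[name]

def copy_sound(name, new_name):
    # Copies the parameter set but keeps pointing at the same sample,
    # so the sample doesn't have to be reopened for every variation.
    duplicate = copy.copy(sounds[name])   # shallow copy: sample is shared
    duplicate.name = new_name
    sounds[new_name] = duplicate
    return duplicate
```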
Playback (implemented, but to be enhanced)
If you want to enable the user to preset the sounds, he must of course be able to play back (and stop!) the sound. A playback button should suffice; while the sound is playing, the button could be toggled to a stop button. The current implementation uses the ESC key to stop a playing sound, but I don't think this is intuitive.
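The toggle could be as simple as the following sketch, where play_sound and stop_sound stand in for the real playback calls:

```python
# Sketch of a play button that turns into a stop button while playing,
# replacing the current ESC-to-stop behaviour. play_sound/stop_sound
# are placeholders for the real playback calls.
def play_sound(sound):
    print(f"playing {sound}")

def stop_sound(sound):
    print(f"stopping {sound}")

class PlaybackButton:
    def __init__(self, sound):
        self.sound = sound
        self.playing = False

    def label(self):
        return "Stop" if self.playing else "Play"

    def press(self):
        if self.playing:
            stop_sound(self.sound)
        else:
            play_sound(self.sound)
        self.playing = not self.playing
```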
Info box
Information. There is a box with technical information about the sample: the bitrate, the samplerate, mono/stereo, tags. The question remains whether we want blender to be able to change things like samplerate and bitrate. Other, more specialized programs already do this really well. I think functionality like this may be something for the future (blender 3.0 or even later).
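For an uncompressed WAV sample, everything such a box shows can be read straight from the file header; a sketch using python's standard wave module:

```python
# Read the technical information the info box would display from a
# WAV file, using only the python standard library.
import wave

def sample_info(path):
    with wave.open(path, "rb") as w:
        channels = w.getnchannels()         # 1 = mono, 2 = stereo
        samplerate = w.getframerate()       # frames per second, e.g. 44100
        bits = w.getsampwidth() * 8         # bits per sample
        frames = w.getnframes()
    return {
        "channels": "mono" if channels == 1 else "stereo",
        "samplerate": samplerate,
        "bits per sample": bits,
        "bitrate": samplerate * channels * bits,  # bits per second
        "length in seconds": frames / samplerate,
    }
```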
Looppoints (to be implemented)
The wave window with looppoint markers
Right now I don't think a lot is gained by the visual representation of the wave file: nothing can be done with it, only information can be read from it. As soon as looppoint functionality is implemented, this representation will become more important: markers could be drawn to set the start and end looppoints. Looppoint functionality is a feature where the sample gets divided into two or three parts. When the sound starts, the first part, the preloop, is played; this part is not looped. The second part of the sample is the looping part: as long as the sample is to be looped, this part is repeated. The third part of the playback is the postloop. When looping is turned off, the playback finishes the current loop and proceeds to the postloop. As soon as the postloop is finished, playback stops.
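A sketch of this playback logic, with the sample split at two looppoint markers (play_chunk and keep_looping are illustrative callbacks):

```python
# Play a sample divided by two looppoints into preloop, loop and
# postloop. keep_looping() is polled once per pass through the loop
# part; play_chunk() outputs one chunk of audio and blocks until done.
def play_with_looppoints(sample, loop_start, loop_end, keep_looping, play_chunk):
    preloop = sample[:loop_start]
    loop = sample[loop_start:loop_end]
    postloop = sample[loop_end:]

    play_chunk(preloop)        # played once, never looped
    while keep_looping():      # the current loop always finishes
        play_chunk(loop)
    play_chunk(postloop)       # played once after looping is turned off
```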
Interactive audio (to be implemented)
Another feature will be interactive audio. It will be mainly used for music playback. Interactive audio has two modes: static and dynamic.
Static: first you compose a playlist, a list of N loopable elements. When triggered, the playlist finishes the current loop and then proceeds to the next loop.
Dynamic: the playlist is composed in realtime. It is also a list of N loopable elements, and when triggered, the playlist finishes the current loop and then proceeds to the next loop, but the trigger decides what this next loop will be, depending on circumstances.
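Both modes could be sketched as follows (play_loop, triggered and next_element are illustrative callbacks):

```python
# Static mode: a precomposed playlist of N loopable elements. Each
# element keeps looping until the trigger fires; the current loop
# always finishes before the playlist proceeds.
def play_static(playlist, triggered, play_loop):
    for element in playlist:
        play_loop(element)
        while not triggered():
            play_loop(element)

# Dynamic mode: the playlist is composed in realtime. The trigger
# decides what the next loop will be, depending on circumstances;
# playback stops when it returns None.
def play_dynamic(first, next_element, play_loop):
    element = first
    while element is not None:
        play_loop(element)
        element = next_element(element)
```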
Parameter setting in the logic window (implemented, but to be enhanced)
Placing the sound in the logic brick window is the final part of the road. Now that the sound is prepared, it must be put in an actuator and connected to a controller (and a sensor) so it can be used in an interactive 3D environment. Sound is not something that is simply triggered once and that's it: you want to change its parameters in realtime, such as the pitch, gain and attenuation. The way to do this right now is by means of a python script. The user can write a script to start the sound, pause it, stop it, and get and set the gain, pitch, attenuation, looping, position, velocity and orientation. A problem you run into is that you want to test the sound in a game: you press 'p' and listen. If it's not just the way you want it, you want to change it. But do you want to change it in the sound window? I don't think so. Even though you prepared the sound there, you want to tweak it where it is used: the logic window, for that is the place where you're managing *that* sound in *that* place. For example, if you have a couple of machines running in your game, you may not want to use the exact same sound for every machine, but you may want to use the same basic sound for all of them. You have to use an actuator anyway to hang the sound onto a sensor/controller, so that is *the* spot to do some last tweaking.
The soundactuator with parameters
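Such a script could look roughly like the sketch below; the method names follow the get/set pattern described above and are illustrative rather than a fixed API:

```python
# Runs inside the gameengine, attached to a python controller.
# The actuator name and method names are illustrative.
import GameLogic

cont = GameLogic.getCurrentController()
act = cont.getActuator("machine_sound")   # the sound actuator on this machine

# Tweak *this* machine's sound without touching the basic sound
# prepared in the sound window.
act.setGain(0.8)     # this machine is a bit quieter
act.setPitch(1.1)    # and runs a bit faster than the others

act.startSound()     # the script can also pause or stop it again
```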
Sounddevice setup (to be implemented)
A sounddevice is a kind of environment in which you can play your sounds. On windows you can think of devices like DirectSound, DirectSound3D or the windows mmsystem. On linux there are devices like the native device (operating system native), sdl (Simple DirectMedia Layer), arts, esd (esound), alsa and waveout (WAVE file output).
The device will be selectable from a menu which only shows the devices available. The chosen device will be stored and be the default device from then on.
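A sketch of that selection logic, assuming a hypothetical probe function that tests whether a device can be opened:

```python
# Show only the devices that are actually available on this system,
# and remember the user's choice as the default from then on.
KNOWN_DEVICES = [
    "DirectSound3D", "DirectSound", "mmsystem",        # windows
    "native", "sdl", "arts", "esd", "alsa", "waveout", # linux
]

def available_devices(probe):
    # probe(name) -> True if the device can be opened here (hypothetical)
    return [name for name in KNOWN_DEVICES if probe(name)]

def select_device(name, settings):
    settings["sound_device"] = name   # stored; the default device from now on
    return settings
```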