Ketsji unveiled

module owner: Janco Verduin

Version: $Id: sound.html,v 1.4 2001/05/28 08:58:28 janco Exp $

The sound module

What's a game without sound? It adds a whole new dimension to the visual aspect of a game; without it, the experience feels strangely muted.

The sound module (SND) adds audio to the game engine (gee, is that true?). Sound in the game engine is used for several things: dynamic, interactive audio (like sound effects) and static audio (like background music or environmental sound). You can also speak in terms of 3D audio and 2D audio: 3D audio consists of sounds positioned in the 3D world, while 2D audio has more of an 'overlay' function.

Basically, the sound module works like this: there is a sound object, positioned somewhere in the 3D environment of the game. It has all kinds of parameters which can be changed by internal and external sources (the player, in-game characters/objects/scripts). And then there is the listener; usually this is the camera. A sound made in the environment is processed relative to the listener: closer sounds sound louder, sounds further away sound quieter. Sounds passing the listener get something extra: a Doppler effect. Besides that, the pitch can be changed, the volume can be adjusted (gain), and reflections can be added (this way you can make a bathroom sound like a bathroom or, if you want to, a hallway).
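
This relative processing is done by the underlying sound library, but to give a feel for it, here is a simplified sketch of how perceived gain and pitch could be derived from the source and listener state. The function names, the inverse-distance roll-off and the Doppler formula are illustrative assumptions, not the actual SND or OpenAL code.

    // Simplified sketch: distance-based gain and Doppler shift.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float Distance(const Vec3& a, const Vec3& b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Closer sources sound louder; 'attenuation' controls how fast the gain
    // rolls off once the source is further away than 'refDistance'.
    float PerceivedGain(float gain, const Vec3& source, const Vec3& listener,
                        float refDistance, float attenuation)
    {
        float d = Distance(source, listener);
        if (d < refDistance)
            d = refDistance;
        return gain * refDistance / (refDistance + attenuation * (d - refDistance));
    }

    // A source moving towards the listener is shifted up in pitch, a source
    // moving away is shifted down. 'radialSpeed' is positive when the source
    // moves away from the listener.
    float PerceivedPitch(float pitch, float radialSpeed, float speedOfSound)
    {
        return pitch * speedOfSound / (speedOfSound + radialSpeed);
    }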



The sound module in detail

Audio can be thought of as sounds (objects, or sources of sound) and listeners (the perceivers of the sound, like the camera in graphics). In order to control the nature of these sounds, they must have parameters. Fortunately, they do.

Some parameters of a sound:

  • Pitch: the frequency.
  • Gain (or volume): the loudness of a sound.
  • Position: the 3D position of the source of a sound.
  • Velocity: the speed by which the source of the sound is moving.
  • Conesize: a source can emit its sound omnidirectionally or in a specific direction. Because you can think of a source as a point in space, the emission of the sound will be cone-shaped. The cone size is the parameter controlling the size of that cone.
  • Orientation: the direction the cone of the sound is pointed at.
  • Attenuation: the scaling factor applied to the gain as the distance between source and listener grows.
  • Fixed gain/3D gain: whether or not the gain will always be the same to the listener, no matter where the source of the sound is.
  • Fixed panning/3D panning: whether or not the positioning will be the same, relative to the listener, no matter where the source of the sound is.

Just like sound, the listener can be controlled too. In a 3D world a listener has properties like position, velocity and even gain (the listener could be quite deaf).

Some parameters of a listener are the following:

  • Gain: an overall setting of the perception of the loudness of the sounds. Think of it as a 'mastermixer'.
  • Position: the 3D position of the listener, currently only used in the GameEngine. Usually this is the position of the camera.
  • Velocity: the speed by which the listener is moving, currently only used in the GameEngine.
  • Orientation: the direction the listener's ears are pointed at, currently only used in the GameEngine.

These parameters get bundled in an object.

So now we have two kinds of objects: SoundObjects and a ListenerObject. A SoundObject is connected to a SoundActuator. This actuator can get triggered and then change the parameters of its SoundObject. If a SoundObject gets modified, it sets a flag. Others are now able to check whether something new has happened to the SoundObject and act on it accordingly.
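
As an illustration of that flag mechanism (a sketch only, not the actual SoundObject interface, and it assumes the MT_Point3 math type used in the API listing further down), a SoundObject could bundle its parameters roughly like this:

    // Illustrative sketch only; the real SoundObject class is richer.
    class SoundObject
    {
    public:
        SoundObject(int id) : m_id(id), m_pitch(1.0f), m_gain(1.0f), m_modified(false) {}

        void SetPitch(float pitch) { m_pitch = pitch; m_modified = true; }
        void SetGain(float gain)   { m_gain = gain;   m_modified = true; }
        void SetPosition(const MT_Point3& pos) { m_position = pos; m_modified = true; }

        int   GetId() const    { return m_id; }
        float GetPitch() const { return m_pitch; }
        float GetGain() const  { return m_gain; }

        // Others (the SoundScene, for instance) poll this flag to see whether
        // the object changed and the sound library has to be updated.
        bool IsModified() const { return m_modified; }
        void ClearModified()    { m_modified = false; }

    private:
        int       m_id;
        float     m_pitch;
        float     m_gain;
        MT_Point3 m_position;
        bool      m_modified;
    };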

If you want to use audio in Blender, several other aspects arise: the sources must have ways of generating sound. Usually this is done by the use of samples, but other ways are also conceivable (like FM synthesis). Finally, samples and parameters must be handed to a sound library (like OpenAL, or FMOD) which does its own thing with them.

These samples have to be managed, and so do the SoundObjects themselves. For managing the samples there is a SND_WaveCache class. The WaveCache also initializes the sound library, because during a Blender session there is only one instance of a WaveCache: *the* WaveCache. When Blender exits, it is the WaveCache that shuts the sound library down.

Blender and Ketsji can tell the WaveCache what samples they want to use and the WaveCache does the loading for them. In the case of OpenAL the WaveCache also gives them back a ticket, corresponding to the buffer the sample was loaded into. Blender and Ketsji then just have to remember which ticket belongs to which sample name. This management is done by the SoundScene. The SoundScene is an abstract scene. It talks to the sound library using an API: SND_LibApi. Every sound library's API has to comply with this SND_LibApi. This way the scene stays abstract and talks to a library it has no knowledge of. Currently we use two APIs: an OpenAL-Api and a Dummy-Api. The Dummy-Api is nothing more than an empty API with no library behind it. It is just a stub.
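
The ticket idea boils down to a lookup from sample name to buffer number. A minimal sketch (illustrative only, not the real SND_WaveCache, which actually loads the sample data into a library buffer):

    #include <map>
    #include <string>

    // Illustrative sketch of the ticket mechanism: load a sample once and hand
    // back a ticket (the buffer number) that callers remember by sample name.
    class WaveCache
    {
    public:
        WaveCache() : m_nextBuffer(0) {}

        unsigned int GetSample(const std::string& name)
        {
            std::map<std::string, unsigned int>::iterator it = m_tickets.find(name);
            if (it != m_tickets.end())
                return it->second;              // already loaded: reuse the buffer
            unsigned int ticket = m_nextBuffer++;
            // ...here the real WaveCache reads the file and fills the buffer...
            m_tickets[name] = ticket;
            return ticket;
        }

    private:
        std::map<std::string, unsigned int> m_tickets;
        unsigned int m_nextBuffer;
    };
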
The SoundScene is the heart of the SND module. It contains the SoundListener, connects the SoundObjects with the information given to it by the WaveCache, checks what must be done with them and presents that to the sound library. When you start Blender a WaveCache gets instantiated. Right after this, a scene follows. This scene stays present until you quit Blender. But when you start a game within Blender, a new scene gets created. This way, nothing can get mixed up: every context has its own SoundScene, and everybody is happy.
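
The "check what must be done and present it to the sound library" step amounts to polling the modified flags and pushing the changed values through the API. A sketch of that update step, reusing the illustrative SoundObject from above and a few of the SND_Lib* calls listed in the next section (the interface class name SND_ILibApi is an assumption taken from the header name; the real code may differ):

    #include <vector>

    // Illustrative update step: push every changed object to the sound library.
    void ProceedObjects(SND_ILibApi* device, std::vector<SoundObject*>& objects)
    {
        for (size_t i = 0; i < objects.size(); ++i)
        {
            SoundObject* obj = objects[i];
            if (!obj->IsModified())
                continue;                                        // nothing changed
            device->SND_LibSetObjectGain(obj->GetId(), obj->GetGain());
            device->SND_LibSetObjectPitch(obj->GetId(), obj->GetPitch());
            obj->ClearModified();                                // change handled
        }
    }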

[Figure: an overview of the sound module]



Using the sound module

SND_LibApi

The SND_Scene talks to a sound library through an API. Each API must implement the following functions, declared virtual in SND_ILibApi.h (a minimal stub implementation is sketched after the list):

  • void SND_LibActivateScene()
    This function activates a scene that has been suspended. The reason for this is that a limited amount of resources has to be shared by all scenes. A scene that isn't used must give up its resources (and will be suspended) so that another scene may use all available resources.

  • void SND_LibSuspendScene()
    This is the function that suspends a scene that isn't needed. All resources will be given back.

  • void SND_LibSetListenerGain(float gain)
    Implement this function to set the gain of the listener.

  • void SND_LibSetListenerPosition(MT_Point3 position)
    Implement this function to set the position of the listener.

  • void SND_LibSetListenerVelocity(MT_Vector3 velocity)
    Implement this function to set the velocity of the listener.

  • void SND_LibSetListenerOrientation(MT_Matrix3x3 orientation)
    Implement this function to set the orientation of the listener.

  • int SND_LibGetObjectStatus(int id)
    Implement this function to retrieve the playstate of a sound.

  • void SND_LibPlayObject(int id, unsigned int buffer)
    Implement this function to play a sound belonging to an object.

  • void SND_LibStopObject(int id, unsigned int buffer)
    Implement this function to stop a sound belonging to an object.

  • void SND_LibStopAllObjects()
    Implement this function to stop all sounds.

  • void SND_LibPauseObject(int id)
    Implement this function to pause the sound belonging to an object.

  • void SND_LibSetObjectPitch(int id, MT_Scalar pitch)
    Implement this function to set the pitch of the sound.

  • void SND_LibSetObjectGain(int id, MT_Scalar gain)
    Implement this function to set the gain of the sound.

  • void SND_LibSetObjectDistance(int id, MT_Scalar distance)
    Implement this function to set the distance at which the Listener will experience gain.

  • void SND_LibSetObjectDopplerVelocity(int id, MT_Scalar dopplervelocity)
    Implement this function to set the value of the propagation speed relative to which the Source velocities are interpreted.

  • void SND_LibSetObjectDopplerFactor(int id, MT_Scalar dopplerfactor)
    Implement this function to set a scaling to exaggerate or deemphasize the Doppler (pitch) shift resulting from the calculation.

  • void SND_LibSetObjectPosition(int id, MT_Point3 position, MT_Point3 lisposition, MT_Scalar attenuation)
    Implement this function to set the position of a sound.

  • void SND_LibSetObjectVelocity(int id, MT_Vector3 velocity)
    Implement this function to set the velocity of a sound.

  • void SND_LibSetObjectOrientation(int id, MT_Matrix3x3 orientation)
    Implement this function to set the orientation of a sound.

  • void SND_LibSetObjectLoop(int id, bool loop)
    Implement this function to set the sound to looping or non-looping.
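
To give a feel for how little an implementation has to do, here is a minimal no-op stub in the spirit of the Dummy-Api (the interface class name SND_ILibApi is an assumption taken from the header name, and the real Dummy-Api may look different):

    // Illustrative stub: every call is a no-op, so the game engine can run
    // without any sound library present.
    class SND_StubApi : public SND_ILibApi
    {
    public:
        void SND_LibActivateScene() {}
        void SND_LibSuspendScene() {}
        void SND_LibSetListenerGain(float /*gain*/) {}
        int  SND_LibGetObjectStatus(int /*id*/) { return 0; }   // assume 0 means 'stopped'
        void SND_LibPlayObject(int /*id*/, unsigned int /*buffer*/) {}
        void SND_LibStopObject(int /*id*/, unsigned int /*buffer*/) {}
        void SND_LibStopAllObjects() {}
        void SND_LibSetObjectGain(int /*id*/, MT_Scalar /*gain*/) {}
        void SND_LibSetObjectPitch(int /*id*/, MT_Scalar /*pitch*/) {}
        void SND_LibSetObjectLoop(int /*id*/, bool /*loop*/) {}
        // ...and so on for the remaining SND_Lib* functions listed above.
    };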

Python

Using Python, you can change all kinds of parameters of a sound in real time. A Python script may look something like this:
    import GameLogic

    # the controller this script is attached to
    cont = GameLogic.getCurrentController()
    # the sensors that triggered the controller (not used here)
    sens = cont.getSensors()
    # the sound actuator named "act" connected to this controller
    act = cont.getActuator("act")

    # raise the pitch of the sound a little every time the script runs
    pitch = act.getPitch()
    pitch = pitch + 0.1
    act.setPitch(pitch)


The following Python calls are implemented at the moment:

  • SetFilename
  • GetFilename
  • StartSound
  • PauseSound
  • StopSound
  • SetGain
  • GetGain
  • SetPitch
  • GetPitch
  • SetAttenuation
  • GetAttenuation
  • SetLooping
  • GetLooping
  • SetPosition
  • GetPosition
  • SetVelocity
  • GetVelocity
  • SetOrientation
  • GetOrientation



The OpenAL library



Back to the Ketsji unveiled index