3D Audio in XNA

http://shearer12345.github.io/xna3DAudio/

What is 3D audio anyway?

  • Audio (sound) which gives the listener cues about the 3D world, including:
    • location of audio sources
    • size of the space
    • kind of surfaces in the space (hard, soft)

Also known as:

  • positional audio
  • spatialized audio
  • surround sound

Some examples

Doppler Effect

(embedded audio clip demonstrating the Doppler effect)
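
For reference, the pitch change heard in this clip follows the classic Doppler formula. A toy sketch, assuming still air, a speed of sound of about 343 m/s, and velocities measured along the line between source and listener (the class and method names are only illustrative):

    using System;

    static class Doppler
    {
        const double SpeedOfSound = 343.0; // m/s in air at ~20 °C

        // sourceVelocity / listenerVelocity: components of each velocity along
        // the source-listener line, positive when moving towards the other.
        // Returns the frequency (Hz) the listener perceives.
        public static double ShiftedFrequency(double sourceFrequency,
                                              double sourceVelocity,
                                              double listenerVelocity)
        {
            return sourceFrequency * (SpeedOfSound + listenerVelocity)
                                   / (SpeedOfSound - sourceVelocity);
        }
    }

    // Example: a 440 Hz siren approaching a stationary listener at 30 m/s
    // is heard at roughly 440 * 343 / 313 ≈ 482 Hz.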

Virtual Barbershop and Haircut

Describing the location of sound

  • we usually describe sound locations from a first-person perspective (see the sketch after this list), and talk about the:
    • azimuth (horizontal angle)
    • elevation (vertical angle)
    • distance (how far)
    • velocity (how fast, and in which direction, the source is moving)
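
A minimal sketch of how azimuth, elevation, and distance might be computed from game positions using XNA's Vector3 type (the class and method names, and the listener-space conventions, are illustrative assumptions; velocity would simply be the emitter's velocity vector and is omitted):

    using System;
    using Microsoft.Xna.Framework;

    static class SoundLocation
    {
        // Describes an emitter position relative to a listener, using the
        // listener's forward and up vectors to build a listener-space basis.
        // Angles are in radians; distance is in world units.
        public static void Describe(Vector3 listenerPos, Vector3 listenerForward, Vector3 listenerUp,
                                    Vector3 emitterPos,
                                    out float azimuth, out float elevation, out float distance)
        {
            Vector3 forward = Vector3.Normalize(listenerForward);
            Vector3 up = Vector3.Normalize(listenerUp);
            Vector3 right = Vector3.Cross(forward, up);

            // Express the emitter in listener space.
            Vector3 offset = emitterPos - listenerPos;
            float x = Vector3.Dot(offset, right);    // + is to the listener's right
            float y = Vector3.Dot(offset, up);       // + is above the listener
            float z = Vector3.Dot(offset, forward);  // + is in front of the listener

            distance = offset.Length();
            azimuth = (float)Math.Atan2(x, z);                            // horizontal angle
            elevation = (float)Math.Asin(y / Math.Max(distance, 1e-6f));  // vertical angle
        }
    }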

How do we (humans) determine the direction/location of sound?

Our auditory systems use several cues for sound source localization (the first two are sketched after this list), including:

  • time differences between ears
    • sound from the right side reaches the right ear earlier than the left ear
    • the auditory system evaluates interaural time differences from:
      • phase delays at low frequencies
      • group delays at high frequencies
  • level differences between ears
  • spectral (frequency) information
  • correlation analysis
  • pattern matching

(http://en.wikipedia.org/wiki/Sound_localization)
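
As a rough illustration of the first two cues, a sketch of Woodworth's spherical-head approximation for the interaural time difference, with a deliberately crude level-difference curve (the head radius, speed of sound, and 20 dB maximum are illustrative assumptions; real level differences are strongly frequency-dependent):

    using System;

    static class BinauralCues
    {
        const double HeadRadius = 0.0875;  // metres, roughly an average adult head (assumption)
        const double SpeedOfSound = 343.0; // m/s in air at ~20 °C

        // Woodworth's spherical-head approximation: interaural time difference
        // in seconds for a source at the given azimuth (radians, 0 = straight ahead).
        public static double InterauralTimeDifference(double azimuth)
        {
            return (HeadRadius / SpeedOfSound) * (azimuth + Math.Sin(azimuth));
        }

        // Toy interaural level difference in dB: zero straight ahead, largest
        // when the source is directly to one side.
        public static double InterauralLevelDifference(double azimuth, double maxDb = 20.0)
        {
            return maxDb * Math.Abs(Math.Sin(azimuth));
        }
    }

A source directly to one side (azimuth = π/2) gives an ITD of about (0.0875 / 343) × (1.571 + 1) ≈ 0.66 ms, in line with the commonly quoted human maximum of roughly 0.6-0.7 ms.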

Head-related transfer function

  • the way sounds arrive at the ear (outer end of auditory canal) is non-trivial
  • it depends on the shape of the ears as well as the shape of the head and torso
  • these can be simulated to varying degrees

http://en.wikipedia.org/wiki/Head-related_transfer_function

Other acoustic effects

  • size of the space
    • reverberation delays
  • kind of surfaces in the space (hard, soft)
    • how fast reverb attenuates
  • sound diffracts around corners
  • sound bounces off objects
    • can lead to audio focusing (e.g. acoustic mirrors)
  • NOTE: this is all very general and simplistic; a rough reverb-time sketch follows this list
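
As a toy illustration of how the size of a space and its surfaces interact, a sketch of Sabine's reverberation-time formula (the room dimensions and absorption coefficients in the example are made-up illustrative values):

    static class RoomAcoustics
    {
        // Sabine's formula: RT60 ≈ 0.161 * V / A, where V is the room volume (m^3)
        // and A is the total absorption (surface area * absorption coefficient, m^2).
        // RT60 is the time, in seconds, for the reverb to decay by 60 dB.
        public static double ReverbTime60(double roomVolume, double surfaceArea, double absorptionCoefficient)
        {
            return 0.161 * roomVolume / (surfaceArea * absorptionCoefficient);
        }
    }

    // Example: a 10 x 8 x 3 m room (volume 240 m^3, surface area 268 m^2).
    // Hard surfaces (absorption ~0.02, e.g. concrete) ring far longer than
    // soft surfaces (absorption ~0.5, e.g. heavy curtains and carpet):
    //   ReverbTime60(240, 268, 0.02) ≈ 7.2 s
    //   ReverbTime60(240, 268, 0.5)  ≈ 0.29 s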

What (physical) equipment is needed?

  • headphones - speakers collocated with the ears, so they can provide some very powerful effects (try the previous examples with headphones)

  • multiple speakers - used together to simulate sound from arbitrary locations (see the panning sketch after this list). Common surround sound configurations include:
    • 2.1 (two medium/high frequency channels, which give location, and one subwoofer for low frequencies, which humans can't localize)
    • 4.1 (front left, front right, back left, back right, sub) - not very common
    • 5.1 (centre, front left, front right, back left, back right, sub)
    • 7.1 (centre, front left, front right, side left, side right, back left, back right, sub)
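
To see how just two speakers give a sense of location (the 2.1 case above), a minimal equal-power panning sketch; the particular pan law is an illustrative assumption, not something prescribed by any specific library:

    using System;

    static class StereoPan
    {
        // pan: -1 = fully left, 0 = centre, +1 = fully right.
        public static void EqualPowerGains(float pan, out float leftGain, out float rightGain)
        {
            double angle = (pan + 1.0) * Math.PI / 4.0; // map [-1, 1] to [0, pi/2]
            leftGain = (float)Math.Cos(angle);
            rightGain = (float)Math.Sin(angle);
        }
    }

Equal-power panning keeps the combined acoustic power roughly constant as the pan value changes, so the source appears to move smoothly between the speakers instead of dipping in loudness at the centre.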

What libraries exist (for real-time games)?

What can those libraries do?

  • only a small subset of auralization techniques (XNA's built-in path is sketched after this list):
    • time differences between ears
    • level differences between ears
    • Head-related transfer functions, but only if you implement one yourself
  • in general, nothing else
    • but you can manually set up reverb etc. to simulate spaces - though doing this properly, dynamically, and in real time is an open research problem
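
A minimal sketch of XNA 4.0's built-in 3D audio path (AudioListener, AudioEmitter, and SoundEffectInstance.Apply3D); the wrapper class and field names are illustrative, but the types and calls are XNA's own:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Audio;

    // Minimal positional-sound wrapper (illustrative; not part of XNA).
    class PositionalSound
    {
        readonly AudioListener listener = new AudioListener();
        readonly AudioEmitter emitter = new AudioEmitter();
        readonly SoundEffectInstance instance;

        public PositionalSound(SoundEffect sound)
        {
            instance = sound.CreateInstance();
            instance.IsLooped = true;
            instance.Apply3D(listener, emitter); // must be applied before Play()
            instance.Play();
        }

        // Call once per frame with the current camera and emitter state.
        public void Update(Vector3 cameraPosition, Vector3 cameraForward, Vector3 cameraUp,
                           Vector3 soundPosition, Vector3 soundVelocity)
        {
            listener.Position = cameraPosition;
            listener.Forward = cameraForward;
            listener.Up = cameraUp;

            emitter.Position = soundPosition;
            emitter.Velocity = soundVelocity; // a non-zero velocity produces a Doppler shift

            // Apply3D only uses the state at the time of the call,
            // so re-apply every frame as things move.
            instance.Apply3D(listener, emitter);
        }
    }

The listener typically tracks the camera; the static properties SoundEffect.DistanceScale and SoundEffect.DopplerScale tune how strongly distance attenuation and the Doppler effect are applied.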

Research

Hands on