Virtual Surround Software

Ariel Wascom

Aug 5, 2024, 2:56:04 AM
to braganperloo
Virtual surround is an audio system that attempts to create the perception that there are many more sources of sound than are actually present. In order to achieve this, it is necessary to devise some means of tricking the human auditory system into thinking that a sound is coming from somewhere that it is not. Most recent examples of such systems are designed to simulate the true (physical) surround sound experience using one, two or three loudspeakers. Such systems are popular among consumers who want to enjoy the experience of surround sound without the large number of speakers that are traditionally required to do so.[1]

A virtual surround system must provide a means for 2-dimensional imaging of sound, using some properties of the human auditory system. The way that the auditory system localises a sound source is a topic that is studied in the field of psychoacoustics. Thus, virtual surround systems use knowledge of psychoacoustics to "trick" the listener. There are several ways in which this has been attempted.


Some methods use knowledge of the head-related transfer function (HRTF). With an appropriate HRTF, the signals required at the eardrums for the listener to perceive sound from any direction can be calculated. These signals are then recreated at the eardrums using either headphones or a crosstalk cancellation method over loudspeakers.[2][3] The disadvantage of this approach is that it is very difficult to get these systems to work for more than one listener at a time.
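
To make the headphone case concrete, the processing essentially comes down to convolving a mono source with a pair of head-related impulse responses (HRIRs), one for each ear. The short Python sketch below uses made-up placeholder filters rather than measured HRTF data, and the sample rate and filter length are arbitrary choices:

import numpy as np
from scipy.signal import fftconvolve

fs = 48000                      # sample rate in Hz (arbitrary choice)
source = np.random.randn(fs)    # 1 second of noise standing in for a mono source

# Placeholder HRIRs: the near ear gets the sound directly, the far ear gets it
# about 0.6 ms later and quieter. Real systems load measured HRIRs for the
# desired direction from an HRTF dataset instead.
hrir_left = np.zeros(64)
hrir_left[0] = 1.0
hrir_right = np.zeros(64)
hrir_right[30] = 0.6            # 30 samples at 48 kHz is roughly 0.63 ms

left_ear = fftconvolve(source, hrir_left)
right_ear = fftconvolve(source, hrir_right)
binaural = np.stack([left_ear, right_ear], axis=1)   # two channels for headphones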


Some virtual surround systems work by directing a strong beam of sound to reflect off the walls of a room so that the listener hears the reflection at a higher level than the sound arriving directly from the loudspeaker. One example of this technology is the commercially available Digital Sound Projector by Cambridge Mechatronics (formerly 1 Ltd). It employs 40 micro drivers and 2 woofers, together with beam-steering processing to control the direction of the sound. The micro drivers' output is focused into groups of "beams" that reflect off the room's walls, while the center channel's sound is projected directly to the listening position. Another example is S-Logic, marketed by the German headphone manufacturer Ultrasone. This technology (which may also be considered a hybrid of the HRTF and reflection-based methods) uses decentralized transducer positioning to spread sound over the outer ear in an attempt to mimic sound heard over speakers.
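
To give a flavour of how a sound beam can be steered at all (this is a generic delay-and-sum sketch, not the manufacturer's actual processing), each driver in a line array is delayed slightly so that the wavefronts add up in one chosen direction and tend to cancel elsewhere. The driver spacing and sample rate below are assumed round numbers:

import numpy as np

c = 343.0              # speed of sound in m/s
spacing = 0.04         # assumed spacing between drivers, in metres
n_drivers = 40
steer_deg = 30.0       # aim the beam 30 degrees off-axis, e.g. toward a side wall
fs = 48000             # sample rate in Hz

# Delay each driver so that the wavefronts from all drivers arrive in phase
# in the steering direction and tend to cancel in other directions.
n = np.arange(n_drivers)
delays = n * spacing * np.sin(np.radians(steer_deg)) / c     # in seconds
delay_samples = np.round(delays * fs).astype(int)

print(delay_samples)   # per-driver delays, in samples, applied before playback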


For virtual surround to be effective, the room should be physically symmetrical about the perpendicular to the line between the speakers, and the absorbing characteristics of the left and right walls should match. An absorptive piece of furniture close to one speaker that is not matched on the other side will cause the sound field to shift toward the "live" side of the room; any such asymmetry degrades the resulting "sound stage".


When it comes to home theater, a lot of people think big -- a big picture and lots of sound coming from a widescreen TV and an array of speakers. But the typical home-theater setup, with its surround-sound speakers and subwoofer, won't work for every home. Some people don't have enough room for all of that equipment. Others don't want their living rooms cluttered with cables, or they don't want the hassle of adjusting the placement and height of lots of speakers.


That's where virtual surround sound comes in. It mimics the effect of a multi-speaker surround-sound system, but it uses fewer speakers and fewer cables. These systems come in two primary varieties -- 2.1 surround and digital sound projection. Most of the time, 2.1-surround systems use two speakers placed in front of the listener and a subwoofer placed somewhere else in the room. These recreate the effect of a 5.1 surround-sound system, which has five speakers and a subwoofer. Digital sound projectors, on the other hand, tend to use a single strip of small speakers to produce sound. Many digital sound projectors do not include a subwoofer.
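
As a rough sketch of the 2.1 idea, here is one way a 5.1 signal can be folded down to two front channels plus a subwoofer feed. The -3 dB gains are common textbook downmix values, not the coefficients of any particular product, and a real virtual-surround processor would also filter the surround channels so that they appear to come from behind the listener:

import numpy as np

def downmix_5_1_to_2_1(fl, fr, center, lfe, sl, sr, gain=0.7071):
    # fl, fr, center, lfe, sl, sr are NumPy arrays of equal length holding the
    # front-left, front-right, center, LFE and surround channels.
    left = fl + gain * center + gain * sl
    right = fr + gain * center + gain * sr
    sub = lfe                      # the subwoofer feed
    return left, right, sub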


Regardless of their exact setup, these systems work on the same basic principles. They use a number of techniques to modify sound waves so that they seem to come from more speakers than are really there. These techniques came from the study of psychoacoustics, or the manner in which people perceive sound. In this article, we'll explore the traits of human hearing that allow two speakers to sound like five, as well as what to keep in mind if you shop for a virtual surround-sound system.


Virtual surround-sound systems take advantage of the basic properties of speakers, sound waves and hearing. A speaker is essentially a device that changes electrical impulses into sound. It does this using a diaphragm -- a cone that rapidly moves back and forth, pushing against and pulling away from the air next to it. When the diaphragm moves outward, it creates a compression, or area of high pressure, in the air. When it moves back, it creates a rarefaction, or area of lower pressure. You can learn more about the details in How Speakers Work.


Compressions and rarefactions are the result of the movement of air particles. When the particles push against each other, they create an area of higher pressure. These particles also press against the molecules next to them. When the particles move apart, they create an area of lower pressure while pulling away from the neighboring particles. In this manner, the compressions and rarefactions travel through the air as a longitudinal wave.
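
In digital audio, this pressure wave is simply a list of sample values, which makes the connection easy to see in a few lines of Python (the sample rate, frequency and amplitude below are arbitrary choices):

import numpy as np

fs = 48000                                   # samples per second
t = np.arange(fs) / fs                       # one second of time stamps
tone = 0.5 * np.sin(2 * np.pi * 440 * t)     # a 440 Hz tone at half amplitude

# Positive sample values drive the diaphragm outward (compression), negative
# values pull it back (rarefaction). At roughly 343 m/s, each 440 Hz cycle
# stretches over about 343 / 440, or 0.78 m, of air as a longitudinal wave.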


When this wave of high- and low-pressure areas reaches your ear, several things happen that allow you to perceive it as sound. The wave reflects off of the pinna, or external cone, of your ear. This part of your ear is also known as the auricle. The sound also travels into your ear canal, where it physically moves your tympanic membrane, or eardrum. This sets off a chain reaction involving many tiny structures inside your ear. Eventually, the vibrations from the wave of pressure reach your cochlear nerve, which carries them to the brain as nerve impulses. Your brain interprets these impulses as sound. How Hearing Works has lots more information about your ear's internal structures and what it takes to perceive sound.


Your brain's interpretation process allows you to understand the sound's meaning. If the sound is a series of spoken words, you can put them together into an understandable sentence. If the sound is a song, you can interpret the words, experience the tone and rhythm, and decide whether you like what you hear. You can also remember whether you've heard the same song or similar songs before.


In addition to allowing you to interpret the sound, your brain also uses lots of aural cues to help you figure out where it came from. This isn't always something you think about or are even consciously aware of. But being able to locate the source of a sound is an important skill. This ability helps animals locate food, avoid predators and find others of their species. Being able to tell where a sound came from also helps you decide whether someone is following you and whether a knock outside is at your door or your neighbor's.


Most people have had the experience of sitting in a very quiet room, like a classroom during a test, and having the silence broken by an unexpected noise, like change falling from someone's pocket. Usually, people immediately turn their heads toward the source of the sound. Turning toward the sound seems almost instinctive -- in an instant, your brain determines the sound's location. This is often true even if you can only hear in one ear.


A person's ability to pinpoint a sound's location comes from the brain's analysis of the sound's attributes. One attribute has to do with the difference between the sound that your right ear hears and the sound that your left ear hears. Another has to do with the interactions between the sound waves and your head and body. Together, these are the aural cues that the brain uses to figure out where a sound came from.


Imagine that the coins in our quiet classroom example hit the floor somewhere to your right. Because the sound travels as physical waves through the air -- a process that takes time -- it reaches your right ear a fraction of a second before it reaches your left. In addition, the sound is a little quieter by the time it reaches your left ear. This reduction in volume is because of the natural dissipation of the sound wave and because your head absorbs and reflects a little bit of the sound. The difference in volume between your left and right ears is the interaural level difference (ILD). The delay is the interaural time difference (ITD).
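
You can put rough numbers on this with the classic Woodworth spherical-head approximation of the ITD. The head radius and speed of sound used below are typical textbook values, not measurements of any particular listener:

import math

def itd_seconds(azimuth_deg, head_radius=0.0875, c=343.0):
    # Woodworth approximation: ITD = (r / c) * (sin(theta) + theta),
    # with the azimuth measured from straight ahead.
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (math.sin(theta) + theta)

print(f"{itd_seconds(90) * 1000:.2f} ms")   # a source directly to the right: about 0.66 ms
# The matching ILD is frequency dependent: the head shadows short wavelengths
# (high frequencies) far more effectively than long ones, so the level
# difference grows with frequency.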


Time and level differences give your brain a clear idea of whether a sound came from your left or your right. However, these differences carry less information about whether the sound came from above you or below you. This is because changing the elevation of a sound affects the path it takes to reach your ear, but it doesn't affect the difference between what you hear in your left and right ears. In addition, it can be hard to figure out whether a sound is coming from in front of you or behind you if you're only relying on time and level differences. This is because, in some cases, these sounds can produce identical ILDs and ITDs. Even though the sounds are coming from a different location, the differences in what your ears hear are still the same. The ILDs and ITDs are identical in a cone-shaped area extending outward from your ear known as the cone of confusion.
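
A quick calculation shows the front-back ambiguity. Using an even simpler straight-path ITD model (ear spacing times the sine of the azimuth, divided by the speed of sound, with assumed round numbers), a source 40 degrees to the front-right and its mirror image at 140 degrees, 40 degrees to the back-right, produce exactly the same time difference:

import math

ear_spacing = 0.18    # approximate distance between the ears, in metres
c = 343.0             # speed of sound in m/s

for azimuth_deg in (40, 140):     # front-right source and its back-right mirror image
    itd = ear_spacing * math.sin(math.radians(azimuth_deg)) / c
    print(f"azimuth {azimuth_deg:3d} deg -> ITD {itd * 1e6:.0f} microseconds")

# Both directions print the same value (about 337 microseconds), which is why
# time differences alone cannot resolve front from back.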


ILDs and ITDs require hearing in both ears, but people who cannot hear in one ear can still often determine the source of a sound. This is because the brain can also use the way the sound reflects off of the head and the outer ear, cues that are available even at a single ear, to try to localize the sound's source.


When a sound wave reaches a person's body, it reflects off of the person's head and shoulders. It also reflects off of the curved surface of the person's outer ear. Each of these reflections makes subtle changes to the sound wave. The reflecting waves interfere with one another, causing parts of the wave to get bigger or smaller, changing the sound's volume or quality. These changes are known as head-related transfer functions (HRTFs). Unlike ILDs and ITDs, the sound's elevation, or the angle at which it hits your ears from above or below, affects how it reflects off of the surfaces of the body. The reflections are also different depending on whether the sound comes from in front of or behind your body.
