
Ambisonics

Former Ambisonics trademark

Ambisonics is a full-sphere surround sound format: in addition to the horizontal plane, it covers sound sources above and below the listener.[1]

Unlike some other multichannel surround formats, its transmission channels do not carry speaker signals. Instead, they contain a speaker-independent representation of a sound field called B-format, which is then decoded to the listener's speaker setup. This extra step allows the producer to think in terms of source directions rather than loudspeaker positions, and offers the listener a considerable degree of flexibility as to the layout and number of speakers used for playback.

Ambisonics was developed in the UK in the 1970s under the auspices of the British National Research Development Corporation.

Despite its solid technical foundation and many advantages, Ambisonics had not been a commercial success until recently,[when?] surviving only in niche applications and among recording enthusiasts.

With the widespread availability of powerful digital signal processing (as opposed to the expensive and error-prone analog circuitry that had to be used during its early years) and the successful market introduction of home-theatre surround sound systems since the 1990s, interest in Ambisonics among recording engineers, sound designers, composers, media companies, broadcasters and researchers has returned and continues to grow.

In particular, it has proved to be an effective way to present spatial audio in virtual reality applications (e.g. YouTube 360 Video), since the B-format scene can be rotated to match the user's head orientation and then decoded as binaural stereo.

Introduction

Ambisonics can be understood as a three-dimensional extension of M/S (mid/side) stereo, adding additional difference channels for height and depth. The resulting signal set is called B-format. Its component channels are labelled W for the sound pressure (the M in M/S), X for the front-minus-back sound pressure gradient, Y for left-minus-right (the S in M/S), and Z for up-minus-down.[note 1]

The W signal corresponds to an omnidirectional microphone, whereas X, Y and Z are the components that would be picked up by figure-of-eight capsules oriented along the three spatial axes.

Panning a source

A simple Ambisonic panner (or encoder) takes a source signal S and two parameters, the horizontal angle θ and the elevation angle φ. It positions the source at the desired angle by distributing the signal over the Ambisonic components with different gains:

W = S · 1/√2
X = S · cos θ · cos φ
Y = S · sin θ · cos φ
Z = S · sin φ

Being omnidirectional, the W channel always gets the same constant input signal, regardless of the angles. So that it has more-or-less the same average energy as the other channels, W is attenuated by about 3 dB (precisely, divided by the square root of two).[2] The terms for X, Y and Z actually produce the polar patterns of figure-of-eight microphones (see illustration on the right, second row). We take their value at θ and φ, and multiply the result with the input signal. The result is that the input ends up in all components exactly as loud as the corresponding microphone would have picked it up.
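These panning gains can be sketched as a small Python function (the function name is illustrative; the block assumes the traditional B-format convention with the 3 dB attenuation on W):

```python
import math

def ambi_encode(signal, azimuth, elevation):
    """Pan a mono sample to first-order B-format (W, X, Y, Z).

    azimuth and elevation are in radians; W carries the conventional
    1/sqrt(2) attenuation so that all channels have comparable energy.
    """
    w = signal / math.sqrt(2.0)
    x = signal * math.cos(azimuth) * math.cos(elevation)
    y = signal * math.sin(azimuth) * math.cos(elevation)
    z = signal * math.sin(elevation)
    return w, x, y, z
```

A source panned straight ahead (both angles zero) ends up entirely in W and X; raising the elevation moves energy into Z.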

Virtual microphones

Morphing between different virtual microphone patterns.

The B-format components can be combined to derive virtual microphones with any first-order polar pattern (omnidirectional, cardioid, hypercardioid, figure-of-eight, or anything in between) pointing in any direction. Several such microphones with different parameters can be derived at the same time, to create coincident stereo pairs (such as a Blumlein pair) or surround arrays.

A horizontal virtual microphone at horizontal angle Θ with pattern 0 ≤ p ≤ 1 is given by

M(Θ, p) = p·√2·W + (1 − p)(cos Θ · X + sin Θ · Y).

This virtual mic is free-field normalised, which means it has a constant gain of one for on-axis sounds. The illustration on the left shows some examples created with this formula.
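As a sketch, the virtual-microphone formula translates directly into code (the function name is illustrative; B-format inputs are assumed to carry the conventional 1/√2 attenuation on W):

```python
import math

def virtual_mic(w, x, y, azimuth, p):
    """Horizontal virtual microphone derived from B-format.

    p selects the polar pattern: 1.0 omni, 0.5 cardioid,
    0.0 figure-of-eight. Free-field normalised (unity on-axis gain).
    """
    return (p * math.sqrt(2.0) * w
            + (1.0 - p) * (math.cos(azimuth) * x + math.sin(azimuth) * y))
```

Pointing a cardioid (p = 0.5) straight at an encoded source recovers it at unity gain, which is precisely what free-field normalisation means.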

Virtual microphones can be manipulated in post-production: desired sounds can be picked out, unwanted ones suppressed, and the balance between direct and reverberant sound can be fine-tuned during mixing.

Decoding

Naive single-band in-phase decoder for a square loudspeaker layout.

A basic Ambisonic decoder is very similar to a set of virtual microphones. For perfectly regular layouts, a simplified decoder can be generated by pointing a virtual cardioid microphone in the direction of each speaker. Here is a square:

LF = (2W + X + Y) / (2√2)
RF = (2W + X − Y) / (2√2)
LB = (2W − X + Y) / (2√2)
RB = (2W − X − Y) / (2√2)

The signs of the X and Y components are the important part, the rest are gain factors. The Z component is discarded, because it is not possible to reproduce height cues with just four loudspeakers in one plane.
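A minimal sketch of such a naive square decoder, derived by aiming a virtual cardioid at each speaker (function and variable names are illustrative):

```python
import math

def decode_square(w, x, y):
    """Naive single-band decode of horizontal B-format to four speakers
    at 45, 135, 225 and 315 degrees, each fed by a virtual cardioid
    aimed at the speaker. Z is discarded."""
    g = 1.0 / (2.0 * math.sqrt(2.0))
    lf = g * (2.0 * w + x + y)   # left front
    rf = g * (2.0 * w + x - y)   # right front
    lb = g * (2.0 * w - x + y)   # left back
    rb = g * (2.0 * w - x - y)   # right back
    return lf, rf, lb, rb
```

A source encoded exactly at the left-front speaker comes out of that speaker at unity gain and is silent in the diagonally opposite one, since the cardioid there has its null pointing at the source.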

In practice, a real Ambisonic decoder requires a number of psycho-acoustic optimisations to work properly.[3]

Frequency-dependent decoding can also be used to produce binaural stereo; this is particularly relevant in Virtual Reality applications.

Higher-order Ambisonics

Visual representation of the Ambisonic B-format components up to third order. Dark portions represent regions where the polarity is inverted. Note how the first two rows correspond to omnidirectional and figure-of-eight microphone polar patterns.

The spatial resolution of first-order Ambisonics as described above is quite low. In practice, that translates to slightly blurry sources, but also to a comparably small usable listening area or sweet spot. The resolution can be increased and the sweet spot enlarged by adding groups of more selective directional components to the B-format. These no longer correspond to conventional microphone polar patterns, but rather look like clover leaves. The resulting signal set is then called Second-, Third-, or collectively, Higher-order Ambisonics.

For a given order ℓ, full-sphere systems require (ℓ + 1)² signal components, and 2ℓ + 1 components are needed for horizontal-only reproduction.
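The channel counts follow directly from the order, as this trivial sketch shows:

```python
def num_components(order, horizontal_only=False):
    """Number of Ambisonic signal components for a given order:
    (order + 1)**2 for full-sphere, 2*order + 1 for horizontal-only."""
    if horizontal_only:
        return 2 * order + 1
    return (order + 1) ** 2
```

First order thus needs 4 full-sphere channels (W, X, Y, Z), third order 16, and horizontal-only third order just 7.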

Historically there have been several different format conventions for higher-order Ambisonics; for details see Ambisonic data exchange formats.

Comparison to other surround formats

Ambisonics differs from other surround formats in a number of aspects:

On the downside, Ambisonics is:

Theoretical foundation

Soundfield analysis (encoding)

The B-format signals constitute a truncated spherical harmonic decomposition of the sound field. They correspond to the sound pressure W and the three components of the pressure gradient X, Y and Z (not to be confused with the related particle velocity) at a point in space. Together, these approximate the sound field on a sphere around the microphone; formally, the first-order truncation of the multipole expansion. W (the mono signal) is the zeroth-order information, corresponding to a constant function on the sphere, while X, Y and Z are the first-order terms (the dipoles or figures-of-eight). This first-order truncation is only an approximation of the overall sound field.

Higher orders correspond to further terms of the multipole expansion of a function on the sphere in terms of spherical harmonics. In practice, higher orders require more speakers for playback, but they increase the spatial resolution and enlarge the area where the sound field is reproduced perfectly (up to an upper boundary frequency).

The radius r of this area for Ambisonic order ℓ and frequency f is given by

r ≈ ℓ·c / (2πf) ,[4]

where c denotes the speed of sound.

This area becomes smaller than a human head above 600 Hz for first-order material, or above 1800 Hz for third-order material. Accurate reproduction over a head-sized volume up to 20 kHz would require an order of 32 and more than 1000 loudspeakers.
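These figures can be checked against the radius formula with a short sketch (343 m/s is assumed for the speed of sound; the function name is illustrative):

```python
import math

def sweet_spot_radius(order, frequency, c=343.0):
    """Radius (in metres) of the region of accurate soundfield
    reconstruction: r = order * c / (2 * pi * frequency)."""
    return order * c / (2.0 * math.pi * frequency)
```

At 600 Hz, first order gives roughly 9 cm, about the radius of a human head; third order reaches the same radius at three times that frequency.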

At those frequencies and listening positions where perfect soundfield reconstruction is no longer possible, Ambisonic reproduction has to focus on delivering correct directional cues, to ensure good localisation even in the presence of reconstruction errors.

Psychoacoustics

The human auditory system has very acute localisation in the horizontal plane (source separations as fine as 2° in some experiments). Two dominant cues, for different frequency ranges, can be distinguished:

Low-frequency localisation

At low frequencies, where the wavelength is large compared to the human head, incoming sound diffracts around it, so that there is virtually no acoustic shadow and hence no level difference between the ears. In this range, the only information available is the phase relationship between the two ear signals, called the interaural time difference, or ITD. Evaluating this time difference allows for precise localisation within a cone of confusion: the angle of incidence is unambiguous, but the same ITD describes sounds from the front and from the back. As long as a sound is not totally unfamiliar to the subject, the confusion can usually be resolved by perceiving front-back timbre variations caused by the outer ears (or pinnae).

High-frequency localisation

As the wavelength approaches twice the size of the head, phase relationships become ambiguous, since it is no longer clear whether the phase difference between the ears corresponds to one, two, or even more periods as the frequency goes up. Fortunately, the head casts a significant acoustic shadow in this range, which leads to a slight level difference between the ears. This is called the interaural level difference, or ILD (the same cone of confusion applies). Together, these two mechanisms provide localisation over the entire hearing range.

Reproduction of ITD and ILD in Ambisonics

Gerzon showed that the quality of localisation cues in the reproduced sound field corresponds to two objective metrics: the length of the particle velocity vector for the ITD, and the length of the energy vector for the ILD. Gerzon and Barton (1992) define a decoder for horizontal surround sound as Ambisonic if the decoded angles of the velocity and energy vectors agree and are substantially constant with frequency; if, at low frequencies (below about 400 Hz), the magnitude of the velocity vector is near unity for all reproduced azimuths; and if, at mid and high frequencies (between about 700 Hz and 4 kHz), the magnitude of the energy vector is substantially maximised across as large a part of the 360° sound stage as possible.[5]

In practice, satisfactory results are achieved at moderate orders even for very large listening areas.[6][7]

Monaural HRTF cues

Humans are also able to derive information about sound source location in 3D-space, taking into account height. Much of this ability is due to the shape of the head (especially the pinna) producing a variable frequency response depending on the angle of the source. The response can be measured by placing a microphone in a person's ear canal, then playing back sounds from various directions. The recorded head-related transfer function (HRTF) can then be used for rendering Ambisonics to headphones, mimicking the effect of the head. HRTFs differ from person to person due to head shape variations, but a generic one can produce satisfactory results.[8]

Soundfield synthesis (decoding)

In principle, the loudspeaker signals are derived by using a linear combination of the Ambisonic component signals, where each signal is dependent on the actual position of the speaker in relation to the center of an imaginary sphere the surface of which passes through all available speakers. In practice, slightly irregular distances of the speakers may be compensated with delay.

True Ambisonics decoding however requires spatial equalisation of the signals to account for the differences in the high- and low-frequency sound localisation mechanisms in human hearing.[9] A further refinement accounts for the distance of the listener from the loudspeakers (near-field compensation).[10]

A variety of more modern decoding methods are also in use.

Compatibility with existing distribution channels

Ambisonics decoders are not currently being marketed to end users in any significant way, and no native Ambisonic recordings are commercially available. Hence, content that has been produced in Ambisonics must be made available to consumers in stereo or discrete multichannel formats.

Stereo

Ambisonics content can be folded down to stereo automatically, without requiring a dedicated downmix. The most straightforward approach is to sample the B-format with a virtual stereo microphone pair. The result is equivalent to a coincident stereo recording. Imaging will depend on the microphone geometry, but rear sources will usually be reproduced more softly and diffusely. Vertical information (from the Z channel) is omitted.
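A sketch of such a fold-down using a coincident virtual cardioid pair (the spread and pattern parameters are illustrative choices, not part of any standard; W is assumed to carry the conventional 1/√2 attenuation):

```python
import math

def fold_to_stereo(w, x, y, spread_deg=90.0, p=0.5):
    """Fold horizontal B-format to stereo by sampling it with two
    virtual microphones at +/- spread/2; Z is simply dropped."""
    half = math.radians(spread_deg / 2.0)

    def mic(az):
        return (p * math.sqrt(2.0) * w
                + (1.0 - p) * (math.cos(az) * x + math.sin(az) * y))

    # Positive azimuth is to the left (Y is left-minus-right).
    return mic(half), mic(-half)  # (left, right)
```

A centre-front source lands equally in both channels; a source panned to the left comes out louder in the left channel, as with any coincident pair.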

Alternatively, the B-format can be matrix-encoded into UHJ format, which is suitable for direct playback on stereo systems. As before, the vertical information will be discarded, but in addition to left-right reproduction, UHJ tries to retain some of the horizontal surround information by translating sources in the back into out-of-phase signals. This gives the listener some sense of rear localisation.

Two-channel UHJ can also be decoded back into horizontal Ambisonics (with some loss of accuracy), if an Ambisonic playback system is available. Lossless UHJ up to four channels (including height information) exists but has never seen wide use. In all UHJ schemes, the first two channels are conventional left and right speaker feeds.

Multichannel formats

Likewise, it is possible to pre-decode Ambisonics material to arbitrary speaker layouts, such as Quad, 5.1, 7.1, Auro 11.1, or even 22.2, again without manual intervention. The LFE channel is either omitted, or a special mix is created manually. Pre-decoding to 5.1 media has been known as G-Format[11] during the early days of DVD audio, although the term is not in common use anymore.

The obvious advantage of pre-decoding is that any surround listener can experience Ambisonics; no special hardware is required beyond that found in a common home theatre system. The main disadvantage is that the flexibility of rendering a single, standard Ambisonics signal to any target speaker array is lost: the signal assumes a specific "standard" layout, and anyone listening with a different array may experience degraded localisation accuracy.

Target layouts from 5.1 upwards usually surpass the spatial resolution of first-order Ambisonics, at least in the frontal quadrant. For optimal resolution, to avoid excessive crosstalk, and to steer around irregularities of the target layout, pre-decodings for such targets should be derived from source material in higher-order Ambisonics.[12]

Production workflow

Ambisonic content can be created in two basic ways: by recording a sound with a suitable first- or higher-order microphone, or by taking separate monophonic sources and panning them to the desired positions. Content can also be manipulated while it is in B-format.

Ambisonic microphones

Native B-format arrays

The array designed and made by Dr Jonathan Halliday of Nimbus Records

Since the components of first-order Ambisonics correspond to physical microphone pickup patterns, it is entirely practical to record B-format directly, with three coincident microphones: an omnidirectional capsule, one forward-facing figure-8 capsule, and one left-facing figure-8 capsule, yielding the W, X and Y components.[13][14] This is referred to as a native or Nimbus/Halliday microphone array, after its designer Dr Jonathan Halliday at Nimbus Records, where it is used to record their extensive and continuing series of Ambisonic releases. An integrated native B-format microphone, the C700S,[15] has been manufactured and sold by Josephson Engineering since 1990.

The primary difficulty inherent in this approach is that high-frequency localisation and clarity rely on the diaphragms approaching true coincidence. By stacking the capsules vertically, perfect coincidence for horizontal sources is obtained. However, sound from above or below will theoretically suffer from subtle comb filtering effects in the highest frequencies. In most instances this is not a limitation, as sound sources far from the horizontal plane are typically from room reverberation. In addition, stacked figure-8 microphone elements have a deep null in the direction of their stacking axis, such that the primary transducer in those directions is the central omnidirectional microphone. In practice this can produce less localisation error than either of the alternatives (tetrahedral arrays with processing, or a fourth microphone for the Z axis).[citation needed]

Native arrays are most commonly used for horizontal-only surround, because of increasing positional errors and shading effects when adding a fourth microphone.

The tetrahedral microphone

Since it is impossible to build a perfectly coincident microphone array, the next-best approach is to minimize and distribute the positional error as uniformly as possible. This can be achieved by arranging four cardioid or sub-cardioid capsules in a tetrahedron and equalising for uniform diffuse-field response.[16] The capsule signals are then converted to B-format with a matrix operation.
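The matrix operation itself is a set of sums and differences of the four capsule signals; here is a sketch (capsule naming follows the usual left-front-up / right-front-down / left-back-down / right-back-up arrangement, and the 0.5 gain is an illustrative normalisation; a real converter also needs the diffuse-field equalisation mentioned above):

```python
def a_to_b(lfu, rfd, lbd, rbu):
    """Convert tetrahedral A-format capsule signals to B-format
    (W, X, Y, Z) with a simple sum/difference matrix."""
    w = 0.5 * (lfu + rfd + lbd + rbu)  # pressure (sum of all capsules)
    x = 0.5 * (lfu + rfd - lbd - rbu)  # front minus back
    y = 0.5 * (lfu - rfd + lbd - rbu)  # left minus right
    z = 0.5 * (lfu - rfd - lbd + rbu)  # up minus down
    return w, x, y, z
```

A fully diffuse, equal-level signal on all four capsules produces pure W with no directional components, as expected.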

Outside Ambisonics, tetrahedral microphones have become popular with location recording engineers working in stereo or 5.1 for their flexibility in post-production; here, the B-format is only used as an intermediate to derive virtual microphones.

Higher-order microphones

Above first-order, it is no longer possible to obtain Ambisonic components directly with single microphone capsules. Instead, higher-order difference signals are derived from several spatially distributed (usually omnidirectional) capsules using very sophisticated digital signal processing.[17]

The em32 Eigenmike,[18] a 32-channel spherical array, and the ZYLIA ZM-1[19] are commercially available Ambisonic microphone arrays.

A recent paper by Peter Craven et al.[20] (subsequently patented) describes the use of bi-directional capsules for higher order microphones to reduce the extremity of the equalisation involved. No microphones have yet been made using this idea.

Ambisonic panning

The most straightforward way to produce Ambisonic mixes of arbitrarily high order is to take monophonic sources and position them with an Ambisonic encoder.

A full-sphere encoder usually has two parameters, azimuth (or horizon) and elevation angle. The encoder will distribute the source signal to the Ambisonic components such that, when decoded, the source will appear at the desired location. More sophisticated panners will additionally provide a radius parameter that will take care of distance-dependent attenuation and bass boost due to near-field effect.

Hardware panning units and mixers for first-order Ambisonics have been available since the 1980s[21][22][23] and have been used commercially. Today, panning plugins and other related software tools are available for all major digital audio workstations, often as free software. However, due to arbitrary bus width restrictions, few professional digital audio workstations (DAW) support orders higher than second. Notable exceptions are REAPER, Pyramix, ProTools, Nuendo and Ardour.

Ambisonic manipulation

First order B-format can be manipulated in various ways to change the contents of an auditory scene. Well known manipulations include "rotation" and "dominance" (moving sources towards or away from a particular direction).[5][24]
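Rotation about the vertical axis, for example, amounts to a 2-D rotation of the X and Y components, while W and Z are unaffected (a sketch; the function name is illustrative):

```python
import math

def rotate_scene(w, x, y, z, angle):
    """Rotate a first-order B-format scene by `angle` radians about
    the vertical axis; a source at azimuth a ends up at a + angle."""
    xr = math.cos(angle) * x - math.sin(angle) * y
    yr = math.sin(angle) * x + math.cos(angle) * y
    return w, xr, yr, z
```

This is the operation used in VR playback to counter-rotate the scene against the listener's head orientation before binaural decoding.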

Additionally, linear time-invariant signal processing such as equalisation can be applied to B-format without disrupting sound directions, as long as it is applied to all component channels equally.

More recent developments in higher order Ambisonics enable a wide range of manipulations including rotation, reflection, movement, 3D reverb, upmixing from legacy formats such as 5.1 or first order, visualisation and directionally-dependent masking and equalisation.

Data exchange

Transmitting Ambisonic B-format between devices and to end-users requires a standardized exchange format. While traditional first-order B-format is well-defined and universally understood, there are conflicting conventions for higher-order Ambisonics, differing both in channel order and weighting, which might need to be supported for some time. Traditionally, the most widespread has been the Furse-Malham higher-order format in the .amb container, based on Microsoft's WAVE-EX file format.[25] It scales up to third order and has a file size limitation of 4 GB.

New implementations and productions might want to consider the AmbiX[26] proposal, which adopts the .caf file format and does away with the 4GB limit. It scales to arbitrarily high orders and is based on SN3D encoding. SN3D encoding has been adopted by Google as the basis for its YouTube 360 format.[27]
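For first order, converting between the two conventions is a matter of rescaling and reordering; a sketch (the √2 factor undoes the traditional Furse-Malham attenuation of W to obtain SN3D weighting, and the ACN channel order is W, Y, Z, X; the function name is illustrative):

```python
import math

def fuma_to_ambix_first_order(w, x, y, z):
    """Convert first-order Furse-Malham B-format (channel order
    W, X, Y, Z; W attenuated by 1/sqrt(2)) to AmbiX
    (ACN channel order W, Y, Z, X; SN3D weighting)."""
    return (w * math.sqrt(2.0), y, z, x)
```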

Compressed distribution

To effectively distribute Ambisonic data to non-professionals, lossy compression is desirable to keep the data size acceptable. However, simple multi-mono compression is not sufficient, as lossy compression tends to destroy phase information and thus degrade localization, in the form of spatial reduction, blur, and phantom sources. Reduction of redundancy among channels is desired, not only to improve compression, but also to reduce the risk of discernible phase errors.[28] (It is also possible to use post-processing to hide the artifacts.)[29]

As with mid-side joint stereo encoding, a static matrixing scheme (as in Opus) is usable for first-order Ambisonics, but not optimal in the case of multiple sources. A number of schemes such as DirAC use an approach similar to parametric stereo, where a downmixed signal is encoded, the principal direction recorded, and some description of ambience added. MPEG-H 3D Audio, drawing on some work from MPEG Surround, extends the concept to handle multiple sources. MPEG-H uses principal component analysis to determine the main sources and then encodes a multi-mono signal corresponding to the principal directions. These parametric methods provide good quality, as long as sound directions are smoothed carefully across frames.[28] PCA/SVD is applicable to first-order as well as higher-order Ambisonics input.[30]

Current development

Open Source

Since 2018 a free and open source implementation exists in the sound codec Opus. Two channel encoding modes are provided: one that simply stores channels individually, and another that weights the channels through a fixed, invertible matrix to reduce redundancy.[31] A listening-test of Opus ambisonics was published in 2020, as calibration for AMBIQUAL, an objective metric for compressed ambisonics by Google. Opus third-order ambisonics at 256 kbps has similar localization accuracy compared to Opus first-order ambisonics at 128 kbps.[32]: Fig. 12 

Corporate interest

Since its adoption by Google and other manufacturers as the audio format of choice for virtual reality, Ambisonics has seen a surge of interest.[33][34][35]

In 2018, Sennheiser released its VR microphone,[36] and Zoom released an Ambisonics Field Recorder.[37] Both are implementations of the tetrahedral microphone design which produces first order Ambisonics.

A number of companies are currently conducting research in Ambisonics:

Dolby Laboratories have expressed "interest" in Ambisonics by acquiring (and liquidating) Barcelona-based Ambisonics specialist imm sound prior to launching Dolby Atmos,[43] which, although its precise workings are undisclosed, does implement decoupling between source direction and actual loudspeaker positions. Atmos takes a fundamentally different approach in that it does not attempt to transmit a sound field; it transmits discrete premixes or stems (i.e., raw streams of sound data) along with metadata about what location and direction they should appear to be coming from. The stems are then decoded, mixed, and rendered in real time using whatever loudspeakers are available at the playback location.

Use in gaming

Higher-order Ambisonics has found a niche market in video games developed by Codemasters. Their first game to use an Ambisonic audio engine was Colin McRae: DiRT, however, this only used Ambisonics on the PlayStation 3 platform.[44] Their game Race Driver: GRID extended the use of Ambisonics to the Xbox 360 platform,[45] and Colin McRae: DiRT 2 uses Ambisonics on all platforms including the PC.[46]

The recent games from Codemasters, F1 2010, Dirt 3,[47] F1 2011[48] and Dirt: Showdown,[49] use fourth-order Ambisonics on faster PCs,[50] rendered by Blue Ripple Sound's Rapture3D OpenAL driver and pre-mixed Ambisonic audio produced using Bruce Wiggins' WigWare Ambisonic Plug-ins.[51]

OpenAL Soft, a free and open source implementation of the OpenAL specification, also uses Ambisonics to render 3D audio.[52] OpenAL Soft can often be used as a drop-in replacement for other OpenAL implementations, enabling several games that use the OpenAL API to benefit from rendering audio with Ambisonics.

For many games that do not make use of the OpenAL API natively, the use of a wrapper or a chain of wrappers can help to make these games indirectly use the OpenAL API. Ultimately, this leads to the sound being rendered in Ambisonics if a capable OpenAL driver such as OpenAL Soft is being used.[53]

The Unreal Engine supports soundfield Ambisonics rendering since version 4.25.[54] The Unity engine supports working with Ambisonics audio clips since version 2017.1.[55]

Patents and trademarks

Most of the patents covering Ambisonic developments have now expired (including those covering the Soundfield microphone) and, as a result, the basic technology is available for anyone to implement.

The "pool" of patents comprising Ambisonics technology was originally assembled by the UK Government's National Research & Development Corporation (NRDC), which existed until the late 1970s to develop and promote British inventions and license them to commercial manufacturers – ideally to a single licensee. The system was ultimately licensed to Nimbus Records (now owned by Wyastone Estate Ltd).

The "interlocking circles" Ambisonic logo (UK trademarks UK00001113276 and UK00001113277), and the text marks "AMBISONIC" and "A M B I S O N" (UK trademarks UK00001500177 and UK00001112259), formerly owned by Wyastone Estate Ltd., have expired as of 2010.

See also

Notes

  1. ^ The traditional B-format notation is used in this introductory paragraph, since it is assumed that the reader may have come across it already. For higher-order Ambisonics, use of the ACN notation is recommended.

References

  1. ^ Michael A. Gerzon, Periphony: With-Height Sound Reproduction. Journal of the Audio Engineering Society, 1973, 21(1):2–10.
  2. ^ Gerzon, M.A. (February 1980). Practical Periphony. 65th Audio Engineering Society Convention. London: Audio Engineering Society. p. 7. Preprint 1571. In order to make B-format signals carry more-or-less equal average energy, X,Y,Z have a gain of 2 in their directions of peak sensitivity.
  3. ^ Eric Benjamin, Richard Lee, and Aaron Heller, Is My Decoder Ambisonic?, 125th AES Convention, San Francisco 2008
  4. ^ Darren B Ward and Thushara D Abhayapala, Reproduction of a Plane-Wave Sound Field Using an Array of Loudspeakers Archived 8 October 2006 at the Wayback Machine, IEEE Transactions on Speech and Audio Processing Vol.9 No.6, Sept 2001
  5. ^ a b Michael A Gerzon, Geoffrey J Barton, "Ambisonic Decoders for HDTV", 92nd AES Convention, Vienna 1992. http://www.aes.org/e-lib/browse.cfm?elib=6788
  6. ^ Malham, DG (1992). "Experience with Large Area 3-D Ambisonic Sound Systems" (PDF). Proceedings of the Institute of Acoustics. 14 (5): 209–215. Archived from the original (PDF) on 22 July 2011. Retrieved 24 January 2007.
  7. ^ Jörn Nettingsmeier and David Dohrmann, Preliminary studies on large-scale higher-order Ambisonic sound reinforcement systems, Ambisonics Symposium 2011, Lexington (KY) 2011
  8. ^ Armstrong, Cal; Thresh, Lewis; Murphy, Damian; Kearney, Gavin (23 October 2018). "A Perceptual Evaluation of Individual and Non-Individual HRTFs: A Case Study of the SADIE II Database". Applied Sciences. 8 (11): 2029. doi:10.3390/app8112029.
  9. ^ Eric Benjamin, Richard Lee, and Aaron Heller: Localization in Horizontal-Only Ambisonic Systems, 121st AES Convention, San Francisco 2006
  10. ^ Jérôme Daniel, Spatial Sound Encoding Including Near Field Effect: Introducing Distance Coding Filters and a Viable, New Ambisonic Format, 23rd AES Conference, Copenhagen 2003
  11. ^ Richard Elen, Ambisonics for the New Millennium, September 1998.
  12. ^ Bruce Wiggins, The Generation of Panning Laws for Irregular Speaker Arrays Using Heuristic Methods Archived 17 May 2016 at the Portuguese Web Archive. 31st AES Conference, London 2007
  13. ^ E. M. Benjamin and T. Chen, "The Native B-Format Microphone", AES 119th Convention, New York, 2005, Preprint no. 6621. http://www.aes.org/e-lib/browse.cfm?elib=13348
  14. ^ E. M. Benjamin and T. Chen, "The Native B-Format Microphone: Part II", AES 120th Convention, Paris, 2006, Preprint no. 6640. http://www.aes.org/e-lib/browse.cfm?elib=13444
  15. ^ C700 Variable Pattern Microphones, Josephson Engineering
  16. ^ Michael A. Gerzon, The Design of Precisely Coincident Microphone Arrays for Stereo and Surround Sound, 50th AES Convention, London 1975, http://www.aes.org/e-lib/browse.cfm?elib=2466
  17. ^ Peter Plessas, Rigid Sphere Microphone Arrays for Spatial Recording and Holography, Diploma thesis in Electrical Engineering - Audio Engineering, Graz 2009
  18. ^ "Products | mhacoustics.com". mhacoustics.com. Retrieved 7 April 2018.
  19. ^ "ZYLIA - 3D Audio Recording & Post-processing Solutions". Zylia Inc. Retrieved 19 September 2023.
  20. ^ P G Craven, M J Law, and C Travis, Microphone arrays using tangential velocity sensors Archived 30 June 2009 at the Wayback Machine, Ambisonics Symposium, Graz 2009
  21. ^ Michael A Gerzon and Geoffrey J Barton, Ambisonic Surround-Sound Mixing for Multitrack Studios, AES Preprint C1009, 2nd International Conference: The Art and Technology of Recording May 1984. http://www.aes.org/e-lib/browse.cfm?elib=11654
  22. ^ Richard Elen, Ambisonic mixing – an introduction, Studio Sound, September 1983
  23. ^ Nigel Branwell, Ambisonic Surround-Sound Technology for Recording and Broadcast, Recording Engineer/Producer, December 1983
  24. ^ Dave G. Malham, Spatial Heading Mechanisms and Sound Reproduction 1998, retrieved 2014-01-24
  25. ^ Richard Dobson The AMB Ambisonic File Format Archived 22 April 2014 at the Wayback Machine
  26. ^ Christian Nachbar, Franz Zotter, Etienne Deleflie, and Alois Sontacchi: AmbiX - A Suggested Ambisonics Format Ambisonics Symposium 2011, Lexington (KY) 2011
  27. ^ YouTube Help, Use spatial audio in 360-degree and VR videos
  28. ^ a b Mahé, Pierre; Ragot, Stéphane; Marchand, Sylvain (2 September 2019). First-Order Ambisonic Coding with PCA Matrixing and Quaternion-Based Interpolation. 22nd International Conference on Digital Audio Effects (DAFx-19), Birmingham, UK. p. 284.
  29. ^ Mahé, Pierre; Ragot, Stéphane; Marchand, Sylvain; Daniel, Jérôme (January 2021). Ambisonic Coding with Spatial Image Correction. European Signal Processing Conference (EUSIPCO) 2020.
  30. ^ Zamani, Sina; Nanjundaswamy, Tejaswi; Rose, Kenneth (October 2017). "Frequency domain singular value decomposition for efficient spatial audio coding". 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). pp. 126–130. arXiv:1705.03877. doi:10.1109/WASPAA.2017.8170008. ISBN 978-1-5386-1632-1. S2CID 1036250.
  31. ^ Valin, Jean-Marc. "Opus 1.3 Released". Opus documentation. Retrieved 7 September 2020.
  32. ^ Narbutt, Miroslaw; Skoglund, Jan; Allen, Andrew; Chinen, Michael; Barry, Dan; Hines, Andrew (3 May 2020). "AMBIQUAL: Towards a Quality Metric for Headphone Rendered Compressed Ambisonic Spatial Audio". Applied Sciences. 10 (9): 3188. doi:10.3390/app10093188. hdl:10197/11947.
  33. ^ Google Specifications and tools for 360º video and spatial audio, retrieved May 2016
  34. ^ Upload 360-degree videos, retrieved May 2016
  35. ^ Oculus Developer Center: Supported Features/Ambisonics
  36. ^ "Sennheiser AMBEO VR Mic"
  37. ^ "Ambisonics Field Recorder Zoom H3-VR"
  38. ^ Chris Baume, Anthony Churnside, Upping the Auntie: A Broadcaster's Take on Ambisonics, BBC R&D Publications, 2012
  39. ^ Darius Satongar, Chris Dunn, Yiu Lam, and Francis Li Localisation Performance of Higher-Order Ambisonics for Off-Centre Listening, BBC R&D Publications, 2013
  40. ^ Paul Power, Chris Dunn, W. Davies, and J. Hirst, Localisation of Elevated Sources in Higher-order Ambisonics, BBC R&D Publications, 2013
  41. ^ Johann-Markus Batke and Florian Keiler, Using VBAP-derived Panning Functions for 3D Ambisonics Decoding 2nd International Symposium on Ambisonics and Spherical Acoustics, Paris 2010
  42. ^ Florian Keiler, Sven Kordon, Johannes Boehm, Holger Kropp, and Johann-Markus Batke, Data structure for Higher Order Ambisonics audio data, European Patent Application EP 2450880 A1, 2012
  43. ^ "Dolby Laboratories acquires rival imm sound". The Hollywood Reporter. 23 July 2012.
  44. ^ Deleflie, Etienne (30 August 2007). "Interview with Simon Goodwin of Codemasters on the PS3 game DiRT and Ambisonics". Building Ambisonia.com. Australia: Etienne Deleflie. Archived from the original on 23 July 2011. Retrieved 7 August 2010.
  45. ^ Deleflie, Etienne (24 June 2008). "Codemasters ups Ambisonics again on Race Driver GRID …". Building Ambisonia.com. Australia: Etienne Deleflie. Archived from the original on 23 July 2011. Retrieved 7 August 2010.
  46. ^ Firshman, Ben (3 March 2010). "Interview: Simon N Goodwin, Codemasters". The Boar. Coventry, United Kingdom: The University of Warwick. p. 18. Core of Volume 32, Issue 11. Retrieved 7 August 2010.
  47. ^ "DiRT3". Gaming News. Blue Ripple Sound. 23 May 2011. Retrieved 21 November 2013.
  48. ^ "F1 2011". Gaming News. Blue Ripple Sound. 23 September 2011. Archived from the original on 19 December 2013. Retrieved 21 November 2013.
  49. ^ "DiRT Showdown". Gaming News. Blue Ripple Sound. 18 June 2012. Archived from the original on 14 December 2017. Retrieved 21 November 2013.
  50. ^ "3D Audio for Gaming". Blue Ripple Sound. Archived from the original on 13 December 2013. Retrieved 21 November 2013.
  51. ^ "Improved Spatial Audio from Ambisonic Surround Sound Software - A REF Impact Case Study". Higher Education Funding Council for England (HEFCE). Retrieved 18 February 2016.
  52. ^ "openal-soft/ambisonics.txt at master · kcat/openal-soft · GitHub". GitHub. Retrieved 15 June 2021.
  53. ^ "List of PC games that use DirectSound3D - Google Docs". I Drink Lava. Retrieved 26 June 2021.
  54. ^ "Unreal Engine 4.25 Release Notes | Unreal Engine Documentation". Epic Games, Inc. Retrieved 27 May 2022.
  55. ^ "What's new in Unity 2017.1 - Unity". Unity Technologies. Archived from the original on 24 March 2022. Retrieved 27 May 2022.

External links