Demonstrations
This page presents a collection of demonstrations available on our web site or via our YouTube channel.
Audio Acoustique @ YouTube
AudioAcoustique on YouTube
Our recent work on gestural control of voice synthesis and digital augmentation of the grand organ
Concert excerpts from the Cantor Digitalis Chorus
Concert excerpts from Orgue et Réalité Augmentée (organ and augmented reality)
Examples of binaural sound
… more to come …
Selected examples
Cantor Digitalis Chorus
First Prize winner of the 2015 Margaret Guthman Musical Instrument Competition was the Cantor
Digitalis Chorus. See videos of the performances here:
Semi-finals
Jump to the Cantor Digitalis Chorus performance in the full Semi-Finals competition video
Finals
Jump to the Cantor Digitalis Chorus performance in the full Finals competition video
BlenderVR : Virtual Reality Framework
BlenderVR is an adaptation of the open-source software Blender to support CAVE/VideoWall, head-mounted display (HMD), and external rendering modality engines.
Auralizations
Calibrated auralization comparison
This video was originally presented in association with the following article:
B. N. Postma and B. F. G. Katz, "Creation and Calibration Method of Acoustical Models for Historic Virtual Reality Auralizations," Virtual Reality Journal (accepted 2015).
Also cited in:
B. N. Postma, A. Tallon, and B. F. Katz, "Calibrated auralization simulation of the abbey of Saint-Germain-des-Prés for historical study," in International Conference on Auditorium Acoustics, (Paris), Institute of Acoustics, Nov. 2015.
This work was funded in part by the ECHO project (ANR-13-CULT-0004, https://echo-projet.limsi.fr).
Voice Directivity
A visualization of the changes in directivity of the human singing voice while singing. This
experiment was carried out at IRCAM, using a semicircular array of 24 microphones.
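A rough sketch of how such a multi-microphone recording can be reduced to a directivity pattern is given below: per-microphone energy is averaged over a sung phrase and normalized to the frontal (on-axis) channel. The 24-channel file name, the evenly spaced semicircular layout, and the numpy/soundfile usage are illustrative assumptions, not the processing chain used in the study.

import numpy as np
import soundfile as sf  # assumed available; any multichannel WAV reader works

# Hypothetical 24-channel recording from the semicircular microphone array,
# one channel per microphone (channel 0 = on-axis, in front of the singer).
signal, fs = sf.read("singer_array_24ch.wav")        # shape: (samples, 24)

# RMS level per microphone over the whole sung phrase.
rms = np.sqrt(np.mean(signal ** 2, axis=0))

# Directivity in dB, normalized to the frontal microphone.
directivity_db = 20 * np.log10(rms / rms[0])

# Microphone angles, assumed evenly spaced from 0 to 180 degrees.
angles_deg = np.linspace(0, 180, len(rms))
for angle, level in zip(angles_deg, directivity_db):
    print(f"{angle:6.1f} deg  {level:+5.1f} dB")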
This video was originally presented in association with the following conference article:
B. Katz and C. d'Alessandro, "Directivity measurements of the singing voice," in 19th Intl. Cong.
on Acoustics, (Madrid), pp. 1–6, 2007.
http://www.sea-acustica.es/WEB_ICA_07/fchrs/papers/mus-06-004.pdf
Additional information can also be found at:
http://rs2007.limsi.fr/index.php/PS:Page_14
Thanks to Robert Expert (http://www.robertexpert.net/) for singing for us!
Binaural Demonstrations
3D audio remix: "Je ne sais pas", by LFDV
This is a research demonstration of a 3D audio remix of the song "Je ne sais pas", by LFDV,
produced by Tefa, Sony Music.
This 3D binaural audio remix was created using the original multitrack stems.
For a quality spatial rendering, it is necessary to listen to this 3D binaural audio remix over
headphones.
3D Audio Re-mix demonstration of « Je Ne Sais Pas » by LFDV
3D Audio supervisor: Brian FG Katz, CNRS
3D Audio engineer: Marc Rébillat, CNRS
Thanks to: Sony Music
Producer: Tefa
Sound engineer: Frédéric Nlandu
(Original video with stereo audio: http://youtu.be/YEn1N8-L4K0.)
"Station 508"
This radio fiction was designed and created by students from the École Louis-Lumière to highlight the
possibilities of 3D audio for storytelling. We hope that this example inspires sound artists to take
advantage of this new technology to create more immersive and engaging stories, in this new era where
many people (blind, visually impaired, and sighted) are equipped with mobile devices and headphones
for listening to music or making phone calls.
Project web site: Station 508
Simple example of Binaural synthesis
This video presents a simple binaural rendering compared to a traditional stereo rendering for a
source moving around the head. Visual rendering is included to indicate the position of the current
source. The sound moves around the head in azimuth, then in elevation. Perception of location is
further improved with the addition of a spatialized room effect.
This demonstration was realized using binaural rendering algorithms developed in collaboration with
DMS and was presented at the 3D Media – Professional Conference, Liège, 2012.
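As a minimal sketch of the underlying principle (not the algorithms developed with DMS), binaural synthesis convolves a mono source with the left- and right-ear head-related impulse responses (HRIRs) measured for the desired direction; a moving source is obtained by switching, or better interpolating, HRIRs over time. The HRIR data layout below is an assumption for illustration.

import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, hrir_left, hrir_right):
    """Render one source at a fixed direction: convolve with the HRIR pair."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)          # (samples, 2) stereo

# hrirs: hypothetical list of (left_ir, right_ir) pairs measured every few
# degrees of azimuth around the head.
def render_circling_source(mono, hrirs, fs, block_s=0.05):
    """Very crude moving-source rendering: step around the azimuth ring,
    switching to the nearest measured HRIR block by block. A real renderer
    would overlap-add the convolution tails and crossfade between HRIRs."""
    block = int(block_s * fs)
    out = []
    for i in range(0, len(mono), block):
        az_index = (i // block) % len(hrirs)
        left_ir, right_ir = hrirs[az_index]
        out.append(binaural_render(mono[i:i + block], left_ir, right_ir))
    return np.concatenate(out, axis=0)

The spatialized room effect mentioned above would additionally convolve each source with a binaural room impulse response rather than the anechoic HRIR alone.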
Effect of HRTF individualization
This video presents the effect of using different non-individual HRTFs in a binaural rendering. Using
both a single and multiple static sound sources, the HRTF set is changed.
This demonstration was realized in collaboration with Arkamys, in the context of the "Listening with
your own ears" initiative for HRTF individualization and was presented at the 3D Media – Professional
Conference, Liège, 2012.
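To illustrate what "changing the HRTF set" means in practice, the sketch below renders the same static source through several listeners' HRIR pairs for one fixed direction, so the results can be compared back to back. The dictionary of non-individual HRIR sets is a hypothetical stand-in for whatever measured database is actually used.

import numpy as np
from scipy.signal import fftconvolve

def render_static_source(mono, hrir_pair):
    """Binaural rendering of one static source with a given listener's HRIR pair."""
    left_ir, right_ir = hrir_pair
    return np.stack([fftconvolve(mono, left_ir),
                     fftconvolve(mono, right_ir)], axis=-1)

# hrir_sets: hypothetical dict mapping a listener name to the (left, right)
# HRIR pair measured on that person (or a dummy head) for one fixed direction,
# e.g. 30 degrees to the left. Listening to the outputs back to back shows how
# strongly localization and timbre depend on whose HRTF is used.
def compare_hrtf_sets(mono, hrir_sets):
    return {name: render_static_source(mono, pair)
            for name, pair in hrir_sets.items()}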
Navigation aid for the visually impaired
Video demonstration of the concept of the ANR-NAVIG project.
(Note that to perceive the 3D sounds of this video you need stereo headphones instead of speakers)
NAVIG is a multidisciplinary and innovative project aiming to increase the autonomy of visually
impaired people in two of their most problematic daily activities: navigation and object localization.
This video presents the different components of the system (artificial vision, 3D sounds rendered by
binaural synthesis, pedestrian GIS, vocal interface, …) and the prototype being developed.
Partners : http://www.irit.fr/ http://cerco.ups-tlse.fr/ http://www.limsi.fr/
http://www.spikenet-technology.com/ http://www.navocap.com/ http://www.ija-toulouse.cict.fr/
http://www.grandtoulouse.org/
Scientific Sonification
Video demonstrations of scientific sonifications as part of the CoRSAIRe project.
CoRSAIRe / BioInfo: VR and Multimodality for Protein Docking
CoRSAIRe / MecaFlu: VR and Multimodality for CFD
The goal of the CoRSAIRe project is to develop new ways of interacting with large or complex digital
worlds. The project aims to significantly enhance currently existing interfaces by introducing
multiple sensorimotor channels, so that the user will be able to see, hear and touch the data itself (or
objects derived from the data), thus completely redefining conventional interaction mechanisms.
Keywords: Virtual Reality, Multimodal supervision, Haptics, 3D audio, CFD, Bioinformatics,
Ergonomics.
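As a minimal illustration of the general idea (parameter-mapping sonification, not the actual CoRSAIRe rendering pipeline), the sketch below maps a scalar data series, such as a pressure trace along a streamline, to pitch and renders it as a sequence of short tones. All names and parameters are illustrative assumptions.

import numpy as np

def sonify(series, fs=44100, note_s=0.1, f_lo=220.0, f_hi=880.0):
    """Parameter-mapping sonification: map each data value to a pitch between
    f_lo and f_hi (logarithmically) and render it as a short sine tone."""
    series = np.asarray(series, dtype=float)
    norm = (series - series.min()) / (np.ptp(series) + 1e-12)   # scale to 0..1
    freqs = f_lo * (f_hi / f_lo) ** norm                        # log pitch map
    t = np.arange(int(note_s * fs)) / fs
    env = np.hanning(len(t))                                    # avoid clicks
    return np.concatenate([env * np.sin(2 * np.pi * f * t) for f in freqs])

# Example: listen to a simulated, noisy oscillating quantity.
audio = sonify(np.sin(np.linspace(0, 10, 200)) + 0.1 * np.random.randn(200))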
3D Audio for Telepresence
Experimental protocol video for a study comparing binaural and Ambisonic streaming for telepresence,
as part of the SACARI project.
(Note that to perceive the 3D sounds of this video you need stereo headphones instead of speakers)
Associated article: B. Katz, A. Tarault, P. Bourdot, and J.-M. Vézien, “The use of 3d-audio in a multimodal teleoperation platform for remote driving/supervision,” in AES 30th Intl. Conf. on Intelligent
Audio Environments, (Saariselkä), pp. 1–9, 2007.
From:
http://groupeaa.limsi.fr/ - Groupe Audio Acoustique
Permanent link:
http://groupeaa.limsi.fr/demos:start
Last update: 2015/09/02 10:48