Tynax ~ Patent Library

Patent for Sale:

System, Method and Apparatus for Co-locating Visual Images and Associated Sound

The invention turns the screen itself into the audio source, tying each sound to the on-screen location of its image, an approach called audiovisual colocation.


This is a prototype system for colocating audio sources with their associated visual objects in screen-based media and virtual reality. The prototype produces a new form of multichannel audiovisual display in which sound emanates from the specific screen or enclosure areas occupied by the moving and virtual images, so that audio and visuals are colocated. The invention adds a new perceptual and experiential layer to the century-old technology of synchronized sound by supplying its spatial complement: sound can now be in place with its image, in addition to being in time with it.

In contrast to surround-sound arrays, which envelop listeners in a diffuse sound field, this system draws attention to areas of screen-based imagery, attaching sounds to their visual sources within the display just as they are attached in natural perception. In VR or AR contexts, the system consists of wall panels that emit sound in colocation with objects and events in the virtual environment, relative to the user's position in real space.

The system colocates audio and visuals by attaching a 2D array of audio exciters to the back of a visual display (a projection screen, an OLED video screen, or wall panels), connected to signal distribution hardware and software. The display or enclosure itself becomes the sound emitter. In most audiovisual systems, audio is spatially dislocated from its visual sources, e.g., emitted by detached speakers or headphones. Because the screen or surrounding surface is itself the sound source in this invention, audio is not merely tied to the general screen and surface area: through signal processing, the audio associated with each visual object emanates from the vibrating surface area colocated with its visual cue. The result is a more immersive experience of interactive media, since it emulates the way we experience objects and events in everyday perception.
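The routing of a sound to the surface area of its visual object can be illustrated with a simple amplitude-panning sketch. The patent text does not specify the signal-processing algorithm; the bilinear weighting below (and the function and parameter names) are assumptions made for illustration only, showing how a mono signal might be distributed across a rows x cols exciter grid based on the object's normalized on-screen position.

```python
import numpy as np

def exciter_gains(x, y, rows, cols):
    """Bilinear amplitude weights for a sound at normalized screen
    position (x, y) in [0, 1], over a rows x cols exciter grid.
    Illustrative sketch only; not the patented algorithm."""
    # Map the normalized position onto fractional grid coordinates.
    gx = x * (cols - 1)
    gy = y * (rows - 1)
    c0, r0 = int(np.floor(gx)), int(np.floor(gy))
    c1, r1 = min(c0 + 1, cols - 1), min(r0 + 1, rows - 1)
    fx, fy = gx - c0, gy - r0

    # Spread the signal over the four nearest exciters; the
    # weights always sum to 1, preserving overall level.
    gains = np.zeros((rows, cols))
    gains[r0, c0] += (1 - fx) * (1 - fy)
    gains[r0, c1] += fx * (1 - fy)
    gains[r1, c0] += (1 - fx) * fy
    gains[r1, c1] += fx * fy
    return gains

def render(mono, x, y, rows, cols):
    """Scale a mono signal into one channel per exciter."""
    g = exciter_gains(x, y, rows, cols)          # (rows, cols)
    return g[..., np.newaxis] * mono             # (rows, cols, samples)
```

In a real system these gains would feed a multichannel DAC and amplifier chain driving the exciter array, and the (x, y) position would be updated per frame as the visual object moves.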

Colocation in media such as video, VR, and AR produces more lifelike and immersive experiences. We are currently developing new prototypes for large exhibition displays, simulation-based training, video conferencing, and control rooms, to create more effective telepresence tools for users and more engaging experiences for audiences.

Primary Application of the Technology

Videoconferencing, immersive displays, simulation-based training

The Problem Solved by the Technology

The problems solved depend on the application area. In museum exhibitions, the main problems are nondirectional sound and sound bleeding into neighboring spaces. In video conferencing and VR arcades, the problem is poor media immersion.

How the Technology Solves the Problem

Because sound emanates only from the relevant surface areas, overall sound levels can be kept quieter and more specific to the location of users or visuals. In virtual or augmented reality, headphones can be dispensed with entirely.

Competitive Advantage

This invention competes with speakers and headphones. Most audiovisual systems spatially separate sound from its visual source: images appear on the video display while sounds come from speakers or headphones. By tying sounds directly to images in the display, this invention creates stronger immersion and presence effects.

The seller may consider selling these patents individually.