“MetaMix is a cross between a musical composition, a digital audio player, an interactive experience, a software tool, and a work of conceptual art. Feed MetaMix your favorite audio track and listen as familiar music is transformed into a new listening experience. MetaMix superimposes new musical structures onto existing music by turning special mathematical integer sequences into new musical forms. These musical forms are used to “remix” the audio track you choose.”
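The quote doesn't say which integer sequences MetaMix uses or how it maps them onto the audio, but the general idea of a sequence driving a reordering of the track can be sketched in a few lines. The Fibonacci sequence and the segment length below are purely illustrative assumptions, not MetaMix's actual method:

```python
def remix(samples, segment_len, sequence):
    """Reorder equal-length segments of `samples` according to `sequence`.

    Each entry in `sequence` is taken modulo the number of segments,
    so any integer sequence can drive the remix.
    """
    segments = [samples[i:i + segment_len]
                for i in range(0, len(samples), segment_len)]
    remixed = []
    for n in sequence:
        remixed.extend(segments[n % len(segments)])
    return remixed

# Example: drive the remix with the start of the Fibonacci sequence.
fib = [1, 1, 2, 3, 5, 8, 13, 21]
audio = list(range(16))   # stand-in for PCM samples
result = remix(audio, 4, fib)
```

The same function works unchanged with any other integer sequence, which is presumably the kind of substitution the "mathematical integer sequences" in the description allow.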
Amazing, Powerful Open-Source Audio Software
“SPEAR is an application for audio analysis, editing and synthesis. The analysis procedure (which is based on the traditional McAulay-Quatieri technique) attempts to represent a sound with many individual sinusoidal tracks (partials), each corresponding to a single sinusoidal wave with time varying frequency and amplitude.”
Press here to enter the web site: http://www.klingbeil.com/spear/
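The sinusoidal model the quote describes can be sketched directly: each partial is a sine wave whose frequency and amplitude vary over time, and summing the partials resynthesizes the sound. This is a simplified sketch, not SPEAR's actual code; the breakpoint format and linear interpolation are assumptions:

```python
import math

def interp(breakpoints, t):
    """Linear interpolation over a list of (time, value) breakpoints."""
    (t0, v0) = breakpoints[0]
    for (t1, v1) in breakpoints[1:]:
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        t0, v0 = t1, v1
    return v0

def synthesize(partials, sample_rate, duration):
    """Additive resynthesis: sum sinusoidal partials, each with
    time-varying frequency and amplitude, accumulating phase so
    frequency glides stay continuous."""
    n = int(sample_rate * duration)
    out = [0.0] * n
    for partial in partials:
        phase = 0.0
        for i in range(n):
            t = i / sample_rate
            freq = interp(partial["freq"], t)
            amp = interp(partial["amp"], t)
            phase += 2 * math.pi * freq / sample_rate
            out[i] += amp * math.sin(phase)
    return out

# One partial gliding 440 -> 880 Hz while fading out over one second.
partials = [{"freq": [(0.0, 440.0), (1.0, 880.0)],
             "amp":  [(0.0, 1.0), (1.0, 0.0)]}]
signal = synthesize(partials, 8000, 1.0)
```

A real analysis/resynthesis tool tracks hundreds of such partials extracted from a recording; the data structure here just shows the shape of the model.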
TAPESTREA: Techniques And Paradigms for Expressive Synthesis, Transformation,
and Rendering of Environmental Audio (also known as taps)
If you are willing to spend the time and have a background in programming, this is a great open-source, cross-platform audio language.
SuperCollider is an environment and programming language for real-time audio synthesis and algorithmic composition. It provides an interpreted object-oriented language which functions as a network client to a state-of-the-art, real-time sound synthesis server.
Press here to listen to or download SC140, an album of 22 pieces programmed entirely in SuperCollider by artists from around the world, each piece created with just 140 characters of code.
KYMA: A SUPERCOMPUTER FOR SOUND
From the album liner notes written by D.H. VanLenten:
“This recording contains samples of synthesized speech – speech artificially constructed from the basic building blocks of the English language. A machine which produces synthesized speech is called, fittingly, a talking machine. There are many possible kinds of speech synthesizers or talking machines. Instead of building and testing a variety of them, scientists at Bell Telephone Laboratories simulate their behavior with a high-speed, general purpose computer. The computer is instructed (programmed) to accept in sequence on punched cards the names of the speech sounds which make up an English sentence. It then processes this information, in accordance with the linguistic rules governing the English language, and produces an output analogous to the output of the talking machine it is programmed to simulate. The talking machine simulated by the computer in this recording would normally be operated by continuously feeding it a set of nine control signals. The signals correspond to voice pitch, voice loudness, lip opening and other speech variables. When every instant of sound is specified, and every variable accounted for, such a machine produces human-sounding speech.
Setting up the computer to simulate this talking machine requires two sets of instructions or, more precisely, a two-part computer program. One part of the computer program performs the actual sound making function – it imitates the “talking” of a talking machine. The second part consists of rules for combining individual speech sounds into connected speech, and for producing the nine control signals that activate the talking machine. Scientists at Bell Telephone Laboratories have developed a computer program that permits them to feed the names of speech sounds into the computer on punched cards. They also have devised a phonetic code using the letters of the alphabet. At present, it is made up of 22 consonant and 12 vowel sounds:
CONSONANTS: P – B – T – D – K – G – M – N – NG (as in sing) – F – V – S – Z – SH (as in she) – ZH (as in azure) – H – W – R – L – Y – TH (as in thin) – DH (as in then)
VOWELS: EE (as in bee) – I (as in ill) – AY (as in rate) – E (as in end) – AE (as in add) – AH (as in ah) – AW (as in jaw) – O (as in go) – OO (as in foot) – UU (as in food) – UH (as in up) – ER (as in her)
Each speech sound is specified on a separate punched card. When a sequence of cards is fed into the computer, it “operates” on the information – following the rules set up in the second part of its program – to produce the nine control signals that activate the talking machine program. For example, if the sequence of cards, H – EE – S – AW – DH – UH – K – AE – T, is fed into the computer, the machine will say “He saw the cat,” in flat monotones. Proper inflection and phrasing are achieved by specifying on each card the changes in pitch and timing natural to human speech.
By specifying the pitch of the sounds, it also is possible to make the computer sing. In two of the samples recorded, the computer first sings a familiar tune and then, singing the same song, is accompanied by music played by another computer. The “speech” of the simulated talking machine comes out of the computer as tiny magnetized spots on half-inch magnetic tape. The tape is fed to another machine which converts the spots to a tape suitable for playing on an ordinary tape recorder.
The first eight and very last samples of synthesized speech on this recording are part of a research program aimed, principally, at formulating a minimum set of rules for making plausible English speech. The ninth and tenth selections were produced by analyzing a person’s speech and re-constructing it synthetically on a computer. The objective of this program is to duplicate the sounds and transitions made by a human speaker, including his accent and dialect.
Knowledge developed through such research programs may be useful in devising new techniques for transmitting speech more efficiently over communications systems. In the near future, for example, a person may be able to type on a keyboard and cause a typing machine thousands of miles away to speak for him. There is also the possibility that talking machines may be built for people who are unable to speak.”
Link To MP3
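The card-to-control-signal flow the liner notes describe, phoneme cards in, one frame of nine control signals out per sound, can be sketched as a toy lookup. Only the overall shape follows the notes: they name just voice pitch, voice loudness, and lip opening among the nine signals, so the remaining parameter names and all numeric values below are invented placeholders, not Bell Labs' actual rules:

```python
# The liner notes name only the first three of the nine control signals;
# "param4" through "param9" and every value here are placeholders.
NINE_CONTROLS = ["pitch", "loudness", "lip_opening",
                 "param4", "param5", "param6",
                 "param7", "param8", "param9"]

# The twelve vowel codes from the phonetic table quoted above.
VOWELS = {"EE", "I", "AY", "E", "AE", "AH", "AW", "O", "OO", "UU", "UH", "ER"}

def card_to_frame(phoneme):
    """Rule part of the two-part program: map one phoneme card to a
    nine-signal control frame. Vowels get a nonzero pitch (voiced);
    everything else stands in for the real linguistic rules."""
    voiced = phoneme in VOWELS
    frame = {"pitch": 120.0 if voiced else 0.0,
             "loudness": 1.0 if voiced else 0.5,
             "lip_opening": 0.8 if voiced else 0.2}
    frame.update({name: 0.0 for name in NINE_CONTROLS[3:]})
    return frame

# The example card sequence from the notes: "He saw the cat".
cards = ["H", "EE", "S", "AW", "DH", "UH", "K", "AE", "T"]
frames = [card_to_frame(c) for c in cards]
```

In the real system these frames would feed the second program, the simulated talking machine itself, which is what actually produces the sound; adding per-card pitch and timing deltas to each frame is where the notes' "proper inflection and phrasing" would come in.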