The programme illustrates the power of frequency analysis of signals by looking at human speech synthesis.
|Module code and title:
|T283, Introductory electronics
|First transmission date:
|Restrictions on use:
|M. F. Ashby; Mark Huckvale; Chris Pinches
|BBC Open University
|Fourier analysis; Frequency analysis; Sine waves; Synthesis
|Close-up shots of human vocal cords and of a female singer. Chris Pinches introduces the programme. Waveforms of Chris Pinches' voice and those of a singer are displayed on a TV screen. Commentary by Pinches compares the two. Chris Pinches demonstrates a waveform synthesiser by adding several sine waves together to form a square wave. Pinches goes on to use a spectrum analyser to determine the components of a square wave. The results are shown on the analyser screen. Pinches points out that the spectrum analyser doesn't give phase information. He demonstrates, using a colour TV set, the importance of phase for some signals such as colour video. He also explains that phase information is not important for speech signals and can be ignored. Shots of human vocal cords and of the signal they produce are shown on the frequency analyser. Mike Ashby, with the aid of a model of a human head and neck, explains how sounds are produced. He points out that the system acts as a filter and that the vocal cavities are able to filter the sound from the vocal cords because of the phenomenon of resonance. Shots of Chris Pinches using a hose pipe and funnel as a trumpet; another example of resonance. Chris Pinches next looks at electronic filters. Several Bode diagrams are used to show some of the characteristics of electronic filters. He points out that electronic filters can be used to simulate the human vocal cavities. Mike Ashby examines spectrum diagrams of the vowels 'aa', 'ee' and 'oo'. He explains that to produce these sounds, a filter with a whole set of resonance peaks is required. Chris Pinches then demonstrates an electronic gadget called 'chatterbox', a two-peak filter which can simulate some human sounds. Using a series of spectrum diagrams of unvoiced sounds, Mike Ashby explains the importance of these sounds in human speech. He points out that in order to synthesise human speech electronically, the unvoiced sounds have to be generated as well.
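The square-wave demonstration described above — building a square wave by summing sine waves — can be sketched numerically. This is a minimal illustration of the Fourier-series idea shown in the programme, not a reconstruction of the synthesiser used on screen; the function name and sample values are illustrative only.

```python
import math

def square_wave_partial(t, n_harmonics):
    """Approximate a square wave by summing its odd sine harmonics:
    f(t) = (4/pi) * sum over k of sin((2k+1) t) / (2k+1).
    More harmonics give a closer approximation to the ideal square wave."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(n_harmonics)
    )

# With many harmonics the sum approaches +1 on (0, pi) and -1 on (pi, 2*pi),
# which is what the spectrum analyser in the programme decomposes back into
# its sine-wave components.
samples = [square_wave_partial(n * 0.01, 50) for n in range(1, 314)]
```

Each added harmonic sharpens the edges of the wave, which is why the spectrum analyser shows a square wave as a whole series of frequency components rather than a single line.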
Chris Pinches looks at the spectrum of a possible source of unvoiced sounds, that of white noise. He explains how this is filtered to produce the desired sounds. Recording of synthetic speech over shots of a tape recorder. Mark Huckvale, at University College London, demonstrates the computer system in use there to create synthetic human speech. Shots of the computer, spectrum analyser, and a 3-D model showing frequency against time for the word 'noise'. Mike Ashby examines a speech spectrogram of the word 'goldfish'. He points out the components which would have to be assimilated by a computer controlling the voice synthesiser. Finally, Chris Pinches demonstrates why signal phase changes are not important in speech. He demonstrates this on a synthesiser and then summarises the programme.
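The filtering step described above — shaping white noise with resonant filters, as in the two-peak 'chatterbox' device — can be sketched with a simple two-pole digital resonator. This is an assumed minimal model, not the circuit used in the programme; the centre frequencies, bandwidths, and sample rate below are illustrative values only.

```python
import math
import random

def resonator(signal, centre_hz, bandwidth_hz, sample_rate=8000.0):
    """Two-pole digital resonator: a minimal stand-in for one vocal-tract
    resonance. The pole angle sets the peak frequency and the pole radius
    sets the bandwidth. Difference equation:
        y[n] = x[n] + a1*y[n-1] + a2*y[n-2]
    """
    r = math.exp(-math.pi * bandwidth_hz / sample_rate)
    theta = 2 * math.pi * centre_hz / sample_rate
    a1, a2 = 2 * r * math.cos(theta), -r * r
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = x + a1 * y1 + a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

random.seed(0)
# White noise: the flat-spectrum source for unvoiced sounds
noise = [random.uniform(-1.0, 1.0) for _ in range(8000)]
# Two cascaded resonances, loosely imitating a two-peak filter
shaped = resonator(resonator(noise, 700, 100), 1200, 100)
```

Cascading one resonator per formant peak is the same idea the programme describes: the flat white-noise spectrum goes in, and a spectrum with peaks at the chosen resonant frequencies comes out.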
|Master spool number:
|Available to public: