Impact of emotions on fundamental speech signal frequency
2012 · Open Access · OA: W2588702514
The paper deals with the recognition of speech produced in a particular emotional state and examines the impact of a person's emotional state on the fundamental frequency of the speech signal. The vocal cords create audio signals that carry information encoded in human language; this process is human speech. From a speech signal, several speaker attributes can be determined, such as sex, age, speech disorders (stuttering or cluttering), and emotional state. Only about 10% of a speaker's emotional state or state of mind is expressed through speech. For that reason, the selection and computation of suitable parameters is an important part of any system designed to determine emotions from speech signals; these parameters should be as relevant as possible to the speaker's emotional state. The fundamental frequency of the signal is one such speech parameter. We present a method for extracting the fundamental frequency of the speech signal by means of center clipping and its use in a system that classifies the speaker's emotional state.
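To illustrate the technique the abstract names, the following is a minimal sketch of fundamental-frequency (F0) estimation using center clipping followed by autocorrelation. This is not the authors' implementation; the function names, the clipping ratio, and the F0 search range are illustrative assumptions.

```python
import numpy as np

def center_clip(signal, clip_ratio=0.3):
    # Zero out samples whose magnitude is below a threshold (a fraction of
    # the peak amplitude) and shift the remainder toward zero. This flattens
    # formant structure and sharpens the periodicity peak in the
    # autocorrelation. The 0.3 ratio is an illustrative choice.
    threshold = clip_ratio * np.max(np.abs(signal))
    clipped = np.zeros_like(signal)
    above = signal > threshold
    below = signal < -threshold
    clipped[above] = signal[above] - threshold
    clipped[below] = signal[below] + threshold
    return clipped

def estimate_f0(signal, fs, fmin=60.0, fmax=400.0):
    # Autocorrelate the center-clipped frame and pick the strongest peak
    # within the plausible pitch-period range [fs/fmax, fs/fmin].
    clipped = center_clip(signal)
    corr = np.correlate(clipped, clipped, mode="full")
    corr = corr[len(corr) // 2:]        # keep non-negative lags only
    lag_min = int(fs / fmax)            # shortest plausible period (samples)
    lag_max = int(fs / fmin)            # longest plausible period (samples)
    peak_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return fs / peak_lag
```

For example, applying `estimate_f0` to a 200 Hz sine sampled at 16 kHz recovers a value close to 200 Hz; on real speech, this per-frame estimate would typically be smoothed across frames before being used as a feature for emotion classification.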