Emotions (anger, happiness, sadness, etc.) are inseparable components of natural human speech. Consequently, synthetic speech cannot reach the level of natural speech without the ability to synthesize emotions. We follow a data-driven approach to add emotions to computer speech. Our approach is based on speech data collected for each of the target emotions (anger, sadness, happiness, and frustration). The collected data is segmented into smaller speech units, which are later concatenated to produce the desired emotional synthetic output. Adding emotions increases the naturalness and variability of synthetic speech and brings it closer to the level of natural speech. The wide range of applications based on human-machine interaction, the need for more listenable systems for people with disabilities, and recent developments in the movie industry employing virtual actors are among the motivating factors for this project.
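The following is a minimal sketch of the segment-and-concatenate idea described above, not the paper's actual implementation. It assumes the emotional corpus has already been segmented into units stored as NumPy waveform arrays keyed by unit label and emotion; the inventory contents, the `synthesize` function, and the linear crossfade at unit boundaries are all illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 16000  # assumed sampling rate of the recorded corpus

# Hypothetical unit inventory: in practice each entry would be a segment
# cut from the recordings collected for that target emotion.
unit_inventory = {
    ("hello", "anger"): np.random.randn(SAMPLE_RATE // 2),
    ("world", "anger"): np.random.randn(SAMPLE_RATE // 2),
}

def synthesize(unit_sequence, emotion, crossfade_ms=10):
    """Concatenate pre-recorded emotional units, smoothing each join
    with a short linear crossfade (an illustrative choice, not the
    paper's method)."""
    fade = int(SAMPLE_RATE * crossfade_ms / 1000)
    out = np.zeros(0)
    for name in unit_sequence:
        unit = unit_inventory[(name, emotion)]
        if out.size >= fade and unit.size >= fade:
            # Blend the tail of the output with the head of the next unit.
            ramp = np.linspace(0.0, 1.0, fade)
            out[-fade:] = out[-fade:] * (1.0 - ramp) + unit[:fade] * ramp
            out = np.concatenate([out, unit[fade:]])
        else:
            out = np.concatenate([out, unit])
    return out

# Usage: produce an "angry" rendering by drawing units from the anger corpus.
angry_utterance = synthesize(["hello", "world"], "anger")
```

Because the units are drawn from a corpus recorded in the target emotional state, the emotion is carried by the recordings themselves; the synthesizer only needs to select and join units for the requested emotion.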