Publications

The SAVEE database was recorded as part of an investigation into audio-visual emotion classification, and the following articles have been published from this work:

•  S. Haq and P.J.B. Jackson, "Multimodal Emotion Recognition", in W. Wang (ed.), Machine Audition: Principles, Algorithms and Systems, IGI Global Press, ISBN 978-1615209194, chapter 17, pp. 398-423, 2010.
•  S. Haq and P.J.B. Jackson, "Speaker-Dependent Audio-Visual Emotion Recognition", in Proc. Int'l Conf. on Auditory-Visual Speech Processing, pp. 53-58, 2009.
•  S. Haq, P.J.B. Jackson and J.D. Edge, "Audio-Visual Feature Selection and Reduction for Emotion Classification", in Proc. Int'l Conf. on Auditory-Visual Speech Processing, pp. 185-190, 2008.


Last update: 2 April 2015
Authors: Philip Jackson and Sanaul Haq