ConversationPiece II: Displaced and Rehacked

Thomas Wennekers, Mathew Emmett, Susan L. Denham

Abstract


Conversations are amazing! Although we usually find the experience enjoyable and even relaxing, when one considers the difficulty of generating signals that convey an intended message while simultaneously trying to understand the messages of another, the pleasures of conversation may seem rather surprising. We manage to communicate with each other without knowing quite what will happen next. We quickly manufacture precisely timed sounds and gestures on the fly, which we exchange with each other without clashing—even managing to slip in some imitations as we go along! Yet usually meaning is all we really notice. In the ConversationPiece project, we aim to transform conversations into musical sounds using neuro-inspired technology to expose the amazing world of sounds people create when talking with others. Sounds from a microphone are separated into different frequency bands by a computer-simulated “ear” (more precisely, a simulated basilar membrane) and analyzed for tone onsets using a lateral-inhibition network, similar to some cortical neural networks. The detected events are used to generate musical notes played on a synthesizer, either instantaneously or with a delay. The first option allows two speakers to exchange timed sound events with a speech-like structure, but without conveying (much) meaning. Delayed feedback further allows self-exploration of one’s own speech. We discuss the current setup (ConversationPiece version II), insights from first experiments, and options for future applications.
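The processing chain described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' implementation: all function names and parameters (`bandpass_bank`, `detect_onsets`, the filter order, the inhibition weight of 0.5, the detection threshold) are assumptions. A bank of bandpass filters stands in for the simulated basilar membrane, and a simple subtraction of neighbouring bands' energy stands in for the lateral-inhibition network used to detect tone onsets.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bandpass_bank(signal, fs, centers, q=4.0):
    """Crude basilar-membrane stand-in: split `signal` into bands
    around each centre frequency in `centers` (Hz)."""
    bands = []
    for fc in centers:
        low, high = fc / (1 + 1 / (2 * q)), fc * (1 + 1 / (2 * q))
        sos = butter(2, [low, high], btype="band", fs=fs, output="sos")
        bands.append(sosfilt(sos, signal))
    return np.array(bands)          # shape: (n_bands, n_samples)

def detect_onsets(signal, fs, centers, frame=0.01, threshold=3.0):
    """Return onset times (s): frames where a band's energy jump
    stands out after lateral inhibition by neighbouring bands."""
    bands = bandpass_bank(signal, fs, centers)
    hop = int(frame * fs)
    n_frames = bands.shape[1] // hop
    # per-band, per-frame RMS energy envelope
    env = bands[:, : n_frames * hop].reshape(len(centers), n_frames, hop)
    env = np.sqrt((env ** 2).mean(axis=2))
    # lateral inhibition: each band is suppressed by its neighbours
    # (circular neighbourhood, for simplicity of the sketch)
    inhibited = env - 0.5 * (np.roll(env, 1, axis=0) + np.roll(env, -1, axis=0))
    inhibited = np.clip(inhibited, 0.0, None)
    # onset = sharp positive change in the inhibited envelope
    diff = np.diff(inhibited, axis=1, prepend=0.0)
    score = diff.max(axis=0)
    onsets = np.where(score > threshold * (score.mean() + 1e-9))[0]
    if onsets.size == 0:
        return np.array([])
    # keep only the first frame of each contiguous run of detections
    keep = onsets[np.insert(np.diff(onsets) > 1, 0, True)]
    return keep * frame

# A 440 Hz tone switched on at 0.2 s should trigger one onset event,
# which would then be mapped to a synthesizer note.
fs = 16000
t = np.arange(int(0.5 * fs)) / fs
sig = np.where(t >= 0.2, np.sin(2 * np.pi * 440 * t), 0.0)
times = detect_onsets(sig, fs, centers=[220, 440, 880])
```

In the real system each detected event would be forwarded to a synthesizer immediately or after a delay; the inhibition step illustrates why the network responds to energy that stands out across frequency bands rather than to overall loudness.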

Keywords: conversation; dialogue; performance; sonification; sound analysis




Copyright (c) 2018 Thomas Wennekers, Mathew Emmett, Susan L. Denham