Saturday, June 01, 2019

New paper: Structural complexity in visual narratives

2019 so far has been a flurry of published papers for me, and here's yet another. My paper "Structural complexity in visual narratives: Theory, brains, and cross-cultural diversity" is now published in the book collection Narrative Complexity and Media: Experiential and Cognitive Interfaces. The book is an extensive resource (468 pages!) with many chapters on the cognitive study of narrative. Mine is one of several that discuss visual narratives, along with complementary chapters by Joe Magliano and James Cutting. So, the book is highly recommended!

In this paper, I tackle the issue of "narrative complexity" in three ways. First, I describe how sequences of images are built in terms of their underlying structure. This complexity comes from the narrative structure, and from how various schematic principles combine to create patterns with "complexity" in their architecture similar to that found in the syntactic structure of sentences.

The second level of complexity comes in how these narrative patterns manifest in different types of comics from around the world. We coded the properties of various comics to see how comics from Europe, the United States, and Asia might differ in their narrative patterns. We found that they indeed vary, with comics from Asia (Japan, Korea, Hong Kong) using more complex sequencing patterns than those from Europe or the United States. This is important because such diversity is systematic, implying that these patterns are encoded in the minds of their authors and readers.

The third level of complexity comes in how visual narratives like comics are processed. Many theories posit that we understand comics by simply linking meanings between panels. This implies a fairly uniform process guided only by updating meaning from image to image. However, neurocognitive research implies that the brain actually uses several interacting mechanisms in the processing of narrative image sequences, balancing both meaning and a narrative structure of the type described in the previous sections.

Altogether, this paper outlines a balance between theoretical, cross-cultural, and neurocognitive research that identifies complexity at multiple levels.

The paper is available in the book itself, and a downloadable preprint version can be found here or on my downloadable papers page.

Cohn, Neil. 2019. Structural complexity in visual narratives: Theory, brains, and cross-cultural diversity. In Grishakova, Marina and Maria Poulaki (Eds.). Narrative Complexity and Media: Experiential and Cognitive Interfaces (pp. 174-199). Lincoln: University of Nebraska Press.

New paper: The neurophysiology of event processing in language and visual events

In yet another of my recent publications, here is a book chapter that has been awaiting publication for many years. My paper with my dear departed friend, Martin Paczynski, "The neurophysiology of event processing in language and visual events," is now finally published in the Oxford Handbook of Event Structure.

Our chapter gives an overview of research on the understanding of events from the perspective of cognitive neuroscience, particularly research using EEG. We actually wanted the paper to be titled "Events electrified," but the book's editors wanted less punchy titles. Our focus is on the N400 and P600 ERP effects as they manifest both in language about events and in the perception of visual events themselves.

The paper can be downloaded here or at my downloadable papers page.

First paragraph:

"Events are a fundamental part of human experience. All actions that we undertake, discuss, and view are embedded within the understanding of events and their structure. With the increasing complexity of neuroimaging over the past several decades, we have been able for the first time to examine how this tacit knowledge is processed and stored in people’s minds and brains. Among the techniques used to study the brain, electroencephalography (EEG) offers one of the few ways in which we can directly study information processed by the brain. Unlike functional imaging, whether PET or fMRI, which rely on metabolic consequences of neural activity, the EEG signal is generated by post-synaptic potentials in pyramidal cells which make up approximately 80% of neurons within the cerebral cortex. As such, EEG offers a temporal resolution measured in milliseconds, rather than seconds, making it well suited for exploring the rapid nature of language processing. Though there are numerous ways in which the EEG signal can be analyzed, in the current chapter we will focus our attention on the most common measure: event-related potentials (ERPs), the portion of the EEG signal time-locked to an event of interest, such as a word, image, or the start of a video clip."

Cohn, Neil and Martin Paczynski. 2019. The neurophysiology of event processing in language and visual events. In Truswell, Robert (Ed.). The Oxford Handbook of Event Structure (pp. 624-637). Oxford: Oxford University Press.

New paper: Visual narratives and the mind

My latest paper, "Visual narratives and the mind: Comprehension, cognition, and learning" is published in the collection Psychology of Learning and Motivation. This paper integrates a few threads of research that I've been working on lately.

The first section presents the cognitive processes that go into understanding a sequence of images, integrating two of the most recent psychological models on the issue. These include my own neurocognitive model of sequential image understanding that integrates both semantic and narrative structures, and an approach from some of my colleagues emphasizing aspects of scene perception and event cognition.

The second section then asks: given these cognitive processes related to visual narrative understanding, to what extent are they specialized for visual narratives specifically? Or are they general mechanisms that also apply to other aspects of cognition, like language? I argue for two levels here: the more specialized processing mostly has to do with the modalities themselves, since how you engage written text might differ from how you engage pictures. However, the "back end" processes, such as how you compute meaning and order information into sequences, are likely more shared across domains.

Finally, I examine the relation between these cognitive processes and how children learn to understand a sequence of images. A wide literature suggests that children only begin to understand the sequential aspects of visual narratives between ages 4 and 6. So, I discuss the stages in children's development of understanding sequential images, and link these to the cognitive processes discussed in the first section.

You can find a direct preprint pdf version of the paper here, as well as on my downloadable papers page. Here's the abstract:

The way we understand a narrative sequence of images may seem effortless, given the prevalence of comics and picture stories across contemporary society. Yet, visual narrative comprehension involves greater complexity than is often acknowledged, as suggested by an emerging field of psychological research. This work has contributed to a growing understanding of how visual narratives are processed, how such mechanisms overlap with those of other expressive modalities like language, and how such comprehension involves a developmental trajectory that requires exposure to visual narrative systems. Altogether, such work reinforces visual narratives as a basic human expressive capacity carrying great potential for exploring fundamental questions about the mind.

Cohn, Neil. 2019. Visual narratives and the mind: Comprehension, cognition, and learning. In Federmeier, Kara D. and Diane M. Beck (Eds.). Psychology of Learning and Motivation: Knowledge and Vision, Vol. 70 (pp. 97-128). London: Academic Press.