Thursday, September 13, 2018

New paper: Visual and linguistic narrative comprehension in autism spectrum disorders

My new paper with my collaborator, Emily Coderre, is finally out in Brain and Language. Our paper, "Visual and linguistic narrative comprehension in autism spectrum disorders: Neural evidence for modality-independent impairments," examines the neurocognition of how meaning is processed in verbal and visual narratives by individuals with autism and neurotypical controls.

We designed this study because there are many reports that individuals with autism do better with visual than verbal information. In the brain literature, we also see reduced brainwave responses indicative of semantic processing when these individuals process language. So, we asked here: are these observations about semantic processing due to differences between visual and verbal information, or to differences in processing meaning across a sequence?

Thus, we presented both individuals with autism and neurotypical controls with either verbal or visual narratives (i.e., comics, or comics "translated" into text) and then introduced anomalous words/images at the end of each sequence to see how incongruous information would be processed in both types of stimuli.

We found that individuals with autism had reduced semantic processing (the N400 brainwave response) to the incongruities in both the verbal and visual narratives. This implies that the deficit is not in processing a particular modality, but in a more general type of information processing.

The full paper is available at my Downloadable Papers page, or at this link (pdf).

Abstract

Individuals with autism spectrum disorders (ASD) have notable language difficulties, including with understanding narratives. However, most narrative comprehension studies have used written or spoken narratives, making it unclear whether narrative difficulties stem from language impairments or more global impairments in the kinds of general cognitive processes (such as understanding meaning and structural sequencing) that are involved in narrative comprehension. Using event-related potentials (ERPs), we directly compared semantic comprehension of linguistic narratives (short sentences) and visual narratives (comic panels) in adults with ASD and typically-developing (TD) adults. Compared to the TD group, the ASD group showed reduced N400 effects for both linguistic and visual narratives, suggesting comprehension impairments for both types of narratives and thereby implicating a more domain-general impairment. Based on these results, we propose that individuals with ASD use a more bottom-up style of processing during narrative comprehension.


Coderre, Emily L., Neil Cohn, Sally K. Slipher, Mariya Chernenok, Kerry Ledoux, and Barry Gordon. 2018. "Visual and linguistic narrative comprehension in autism spectrum disorders: Neural evidence for modality-independent impairments." Brain and Language 186:44-59.

Thursday, August 02, 2018

New paper: Visual Language Theory and the scientific study of comics

My latest paper is a chapter in the exciting new book collection, Empirical Comics Research: Digital, Multimodal, and Cognitive Methods, edited by Alexander Dunst, Jochen Laubrock, and Janina Wildfeuer. The book is a collection of empirical studies about comics, summarizing many of the works presented at the Empirical Studies of Comics conference at Bremen University in 2017.

It's fairly gratifying to see a collection like this combining various scholars' work using empirical methods to analyze comics. I've been doing this kind of work for almost two decades at this point, and most of it has been without many other people doing such research, and certainly without them coming together in a collaborative way. So, a publication like this is a good marker for what is hopefully an emerging field.

My own contribution to the collection is the last chapter, "Visual Language Theory and the scientific study of comics." I provide an overview of my visual language research across the areas of the visual vocabulary of images, narrative structure, and page layout.

I also give some advice on how to go about such research and stress the necessity of an interdisciplinary perspective balancing theory, experimentation, and corpus analysis. The emphasis here is that all three of these techniques are necessary to make progress, and that using any one technique alone is limiting.

You can find a preprint version of my chapter here, though I recommend checking out the whole book:

Empirical Comics Research: Digital, Multimodal, and Cognitive Methods

Abstract of my chapter:

The past decades have seen the rapid growth of empirical and experimental research on comics and visual narratives. In seeking to understand the cognition of how comics communicate, Visual Language Theory (VLT) argues that the structure of (sequential) images is analogous to that of verbal language, and that these visual languages are structured and processed in similar ways to other linguistic forms. While these visual languages appear prominently in comics of the world, all aspects of graphic and drawn information fall under this broad paradigm, including diverse contexts like emoji, Australian aboriginal sand drawings, instruction manuals, and cave paintings. In addition, VLT’s methods draw from that of the cognitive and language sciences. Specifically, theoretical modeling has been balanced with corpus analysis and psychological experimentation using both behavioral and neurocognitive measures. This paper will provide an overview of the assumptions and basic structures of visual language, grounded in the growing corpus and experimental literature. It will cover the nature of visual lexical items, the narrative grammar of sequential images, and the compositional structure of page layouts. Throughout, VLT emphasizes that these components operate as parallel yet interfacing structures, which manifest in varying ‘visual languages’ of the world that temper a comprehender’s fluency for such structures. Altogether, this review will highlight the effectiveness of VLT as a model for the scientific study of how graphic information communicates.


Cohn, Neil. 2018. "Visual Language Theory and the scientific study of comics." In Dunst, Alexander, Jochen Laubrock, and Janina Wildfeuer (Eds.), Empirical Comics Research: Digital, Multimodal, and Cognitive Methods (pp. 305-328). London: Routledge.

Sunday, July 08, 2018

New paper: Listening beyond seeing

Our new paper has just been published in Brain and Language, titled "Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative." My collaborator Mirella Manfredi carried out this study, which builds on her previous work looking at different types of words (Pow! vs. Hit!) substituted into visual narrative sequences.

Here, Mirella showed visual narratives where the climactic event either matched or mismatched auditory sounds or words. So, as in the figure to the right, a panel showing Snoopy spitting would be accompanied by the sound of spitting or the word "spitting." Or, we played incongruous sounds, like the sound of something getting hit, or the word "hitting."

We measured participants' brainwave responses (ERPs) to these panels/sounds. These stimuli elicited an "N400 response," which occurs during the processing of meaning in any modality (words, sounds, images, video, etc.). Though the overall semantic processing response (N400) was similar for both stimulus types, the incongruous sounds evoked a slightly different response across the scalp than the incongruous words. This suggests that, despite the overall process of computing meaning being similar, these stimuli may be processed in different parts of the brain.
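As an aside for readers less familiar with ERP measures: an "N400 effect" is typically quantified as the difference in average brainwave amplitude between incongruent and congruent trials in a time window around 300-500 ms after the stimulus appears. The sketch below illustrates that calculation in Python with made-up numbers; the sampling rate, time window, trial counts, and amplitudes are illustrative assumptions, not the analysis pipeline from our paper.

```python
import numpy as np

# Hypothetical example: the N400 effect as the difference in mean ERP amplitude
# (incongruent minus congruent) in a ~300-500 ms window at a single electrode.
# All numbers here are illustrative assumptions, not values from the paper.

srate = 500                                   # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / srate)       # epoch from -200 ms to 800 ms

rng = np.random.default_rng(0)
congruent = rng.normal(0.0, 1.0, size=(40, times.size))     # trials x samples, microvolts
incongruent = rng.normal(-1.0, 1.0, size=(40, times.size))  # more negative ~ larger N400

# Average across trials to get each condition's ERP waveform.
erp_cong = congruent.mean(axis=0)
erp_incong = incongruent.mean(axis=0)

# Mean amplitude within a typical N400 time window (300-500 ms post-onset).
window = (times >= 0.3) & (times <= 0.5)
n400_effect = erp_incong[window].mean() - erp_cong[window].mean()

print(f"N400 effect (incongruent - congruent): {n400_effect:.2f} microvolts")
```

In practice, real analyses compare such amplitude differences statistically across participants and electrode sites, which is how scalp-distribution differences like the ones described above are detected.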

In addition, these patterned responses very much resembled those typically seen for words or sounds in isolation, and did not resemble those that often appear for images. This suggests that, even though the multimodal image-sound/word relationship determined whether stimuli were congruent or incongruent, the semantic processing of the images did not seem to factor into the responses (or was equally subtracted out across stimulus types).

So, overall, this implies that semantic processing across different modalities elicits a similar response (the N400), but may rely on different neural areas.

You can find the paper here (pdf) or along with my other downloadable papers.

Abstract
Every day we integrate meaningful information coming from different sensory modalities, and previous work has debated whether conceptual knowledge is represented in modality-specific neural stores specialized for specific types of information, and/or in an amodal, shared system. In the current study, we investigated semantic processing through a cross-modal paradigm which asked whether auditory semantic processing could be modulated by the constraints of context built up across a meaningful visual narrative sequence. We recorded event-related brain potentials (ERPs) to auditory words and sounds associated to events in visual narratives—i.e., seeing images of someone spitting while hearing either a word (Spitting!) or a sound (the sound of spitting)—which were either semantically congruent or incongruent with the climactic visual event. Our results showed that both incongruent sounds and words evoked an N400 effect, however, the distribution of the N400 effect to words (centro-parietal) differed from that of sounds (frontal). In addition, words had an earlier latency N400 than sounds. Despite these differences, a sustained late frontal negativity followed the N400s and did not differ between modalities. These results support the idea that semantic memory balances a distributed cortical network accessible from multiple modalities, yet also engages amodal processing insensitive to specific modalities.

Full reference:

Manfredi, Mirella, Neil Cohn, Mariana De Araújo Andreoli, and Paulo Sergio Boggio. 2018. "Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative." Brain and Language 185:1-8. doi: https://doi.org/10.1016/j.bandl.2018.06.008.

Sunday, April 22, 2018

New paper: Combinatorial morphology in visual languages

I'm very pleased to announce that my newest paper, "Combinatorial morphology in visual languages" has now been published in a book collection edited by Geert Booij, The Construction of Words: Advances in Construction Morphology. The overall collection looks excellent and is a great resource for work in linguistics on morphology across domains.

My own contribution makes a first attempt to formalize the structure of combinatorial visual morphology—how visual signs like motion lines or hearts combine with their "stems" to create a larger additive meaning.

This paper also introduces a new concept for these types of signs. Since various visual morphemes are affixes—like the "upfixes" that float above faces (right)—it raises the question: what are these affixes attaching to? In verbal languages, affixes attach to "word" units. But visual representations don't have words, so this paper discusses what type of structure would be required to fill that theoretical gap, and formalizes this within the parallel architecture model of language.

You can download a pre-print of the chapter here (pdf) or on my downloadable papers page.

Abstract

Just as structured mappings between phonology and meaning make up the lexicons of spoken languages, structured mappings between graphics and meaning comprise lexical items in visual languages. Such representations may also involve combinatorial meanings that arise from affixing, substituting, or reduplicating bound and self-standing visual morphemes. For example, hearts may float above a head or substitute for eyes to show a person in love, or gears may spin above a head to convey that they are thinking. Here, we explore the ways that such combinatorial morphology operates in visual languages by focusing on the balance of intrinsic and distributional construction of meaning, the variation in semantic reference and productivity, and the empirical work investigating their cross-cultural variation, processing, and acquisition. Altogether, this work draws these parallels between the visual and verbal domains that can hopefully inspire future work on visual languages within the linguistic sciences.


Cohn, Neil. 2018. "Combinatorial morphology in visual languages." In Booij, Geert (Ed.), The Construction of Words: Advances in Construction Morphology (pp. 175-199). London: Springer.

Tuesday, April 10, 2018

Workshop: How We Make and Understand Drawings

A few weeks back I had the pleasure of doing a workshop with Gabriel Greenberg (UCLA) about the understanding of drawings and visual narratives at the University of Connecticut. The workshop was hosted by Harry van der Hulst from the Linguistics Department, and we explored the connections between graphic systems and the structure of language. UConn has now been nice enough to put our talks online for everyone, and I've posted them below.

On Day 1, Gabriel first talked about his theory of pictorial semantics. Then, I presented my theory about the structure of the "visual lexicon(s)" of drawing systems, and then about how children learn to draw. This covered what it means for people to say "I can't draw," which was the topic of my papers on the structure of drawing.



On Day 2, we covered the understanding of sequential images. Here our views diverged, with Gabriel taking more of a "discourse approach", while I presented my theory of Visual Narrative Grammar and several of the studies supporting it. I finished by presenting my "grand theory of everything" about a multimodal model of language and communication. Unfortunately, the mic ran out of batteries on the second day and we didn't know it, so the sound is very soft. But, if you crank up the volume and listen carefully, you should be able to hear it (hopefully).

Thursday, February 15, 2018

New Paper: In defense of a “grammar” in the visual language of comics

I'm excited to announce that my new paper, "In defense of a 'grammar' in the visual language of comics" is now published in the Journal of Pragmatics. This paper provides an overview of my theory of narrative grammar, and rigorously compares it against other approaches to sequential image understanding.

Since my proposal that a "narrative grammar" operates to guide meaningful information in (visual) narratives, there have been several critiques of, and misunderstandings about, how it works. Some approaches have also been proposed as counterpoints. I feel all of this is healthy in the course of developing a theory and (hopefully) a broader discipline.

In this paper I address some of these concerns. I detail how my model of Visual Narrative Grammar operates and I review the empirical evidence supporting it. I then compare it in depth to the specifics and assumptions found in other models. Altogether I think it makes for a good review of the literature on sequential image understanding, and outlines what we should expect out of a scientific approach to visual narrative.

The paper is available on my Downloadable Papers page, or direct through this link (pdf).

Abstract:

Visual Language Theory (VLT) argues that the structure of drawn images is guided by similar cognitive principles as language, foremost a “narrative grammar” that guides the ways in which sequences of images convey meaning. Recent works have critiqued this linguistic orientation, such as Bateman and Wildfeuer's (2014) arguments that a grammar for sequential images is unnecessary. They assert that the notion of a grammar governing sequential images is problematic, and that the same information can be captured in a “discourse” based approach that dynamically updates meaningful information across juxtaposed images. This paper reviews these assertions, addresses their critiques about a grammar of sequential images, and then details the shortcomings of their own claims. Such discussion is directly grounded in the empirical evidence about how people comprehend sequences of images. In doing so, it reviews the assumptions and basic principles of the narrative grammar of the visual language used in comics, and it aims to demonstrate the empirical standards to which theories of comics' structure should adhere.


Full reference:

Cohn, Neil. 2018. "In defense of a 'grammar' in the visual language of comics." Journal of Pragmatics 127:1-19.

Tuesday, January 23, 2018

My friend, Martin Paczynski

It was with much surprise and a heavy heart that I learned last week that my friend and colleague Dr. Martin Paczynski suddenly passed away. Martin and I met in 2006 when I entered graduate school at Tufts University, and he was the first graduate student working with our mentor Gina Kuperberg (I was her second). He quickly grew to be a close collaborator, a mentoring senior student, my first choice for brainstorming, and my best friend throughout graduate school. Here, I'll honor his place in the sciences and my work.

It's always a nice benefit when your closest colleagues are smarter than you, and that means Martin's influence on me and my research is everywhere. He essentially trained me in using EEG, and helped me formulate and analyze countless studies. Though he started the program a year before me, we graduated together, which I think made it all the more special.

Though he initially studied computer science and worked in that field, Martin's graduate work at Tufts focused on the neurocognition of linguistic semantics, and he was knowledgeable in many more fields besides. His early work focused on aspects of animacy and event roles. He later turned to issues of inference like aspectual coercion—where we construe an additional meaning about time that isn't in a sentence, such as the sense of repeated action in sentences like "For several minutes, the cat pounced on the toy." His experiments were elegant and brilliant.

Our collaborative work on my visual language research started with my first brain study, for which Martin was the second author. After graduate school we co-authored our work on semantic roles in event building, which united our research interests. This continued until just recently, as my most recent paper again had Martin as my co-author, directly following our earlier work, almost six years after we left graduate school together. And it wasn't just me: he is a co-author on many, many people's work from our lab, which speaks to both his generosity and insightfulness.

Authorship wasn't his only presence in my work. If you've ever seen me give a talk that mentions film, you'll see him starring in the video clips I created as examples (him walking barefoot down our grad office hallway... a frequent sight). If you look at page 85 of my book, there's Martin, shaking hands with another friend:


After graduation, Martin's interests moved away from psycholinguistics and more towards research on mindfulness, stress, and other clinical and applied aspects of neurocognition. For many years he talked about one day studying architecture and design using EEG, but hadn't implemented those ideas just yet. There seemed to be no topic that he couldn't excel at when he applied himself.

He was warm, kind, creative, funny, brilliant, and intellectually generous. I like to especially remember him with a mischievous grin, foreshadowing a comment which would inevitably be both hilarious and astute.

The sciences have lost a spark of insight in Dr. Martin Paczynski, and the world has lost a progressive and compassionate soul. I've lost that and more. Safe travels my friend.