Thursday, September 13, 2018

New paper: Visual and linguistic narrative comprehension in autism spectrum disorders

My new paper with my collaborator, Emily Coderre, is finally out in Brain and Language. Our paper, "Visual and linguistic narrative comprehension in autism spectrum disorders: Neural evidence for modality-independent impairments," examines the neurocognition of how meaning is processed in verbal and visual narratives for individuals with autism and neurotypical controls.

We designed this study because there are many reports that individuals with autism do better with visual than verbal information. In the brain literature, we also see reduced brainwave responses indicative of semantic processing during language comprehension in these individuals. So, we asked: are these observations about semantic processing due to differences between visual and verbal information, or to differences in processing meaning across a sequence?

Thus, we presented both individuals with autism and neurotypical controls with either verbal or visual narratives (i.e., comics, or comics "translated" into text) and then introduced anomalous words/images at their ends to see how incongruous information would be processed in both types of stimuli.

We found that individuals with autism had reduced semantic processing (the N400 brainwave response) to the incongruities in both the verbal and visual narratives. This implies that the deficit is not in processing a particular modality, but in a more general type of information processing.

The full paper is available at my Downloadable Papers page, or at this link (pdf).

Abstract

Individuals with autism spectrum disorders (ASD) have notable language difficulties, including with understanding narratives. However, most narrative comprehension studies have used written or spoken narratives, making it unclear whether narrative difficulties stem from language impairments or more global impairments in the kinds of general cognitive processes (such as understanding meaning and structural sequencing) that are involved in narrative comprehension. Using event-related potentials (ERPs), we directly compared semantic comprehension of linguistic narratives (short sentences) and visual narratives (comic panels) in adults with ASD and typically-developing (TD) adults. Compared to the TD group, the ASD group showed reduced N400 effects for both linguistic and visual narratives, suggesting comprehension impairments for both types of narratives and thereby implicating a more domain-general impairment. Based on these results, we propose that individuals with ASD use a more bottom-up style of processing during narrative comprehension.


Coderre, Emily L., Neil Cohn, Sally K. Slipher, Mariya Chernenok, Kerry Ledoux, and Barry Gordon. 2018. "Visual and linguistic narrative comprehension in autism spectrum disorders: Neural evidence for modality-independent impairments." Brain and Language 186:44-59.

Thursday, August 02, 2018

New paper: Visual Language Theory and the scientific study of comics

My latest paper is a chapter in the exciting new book collection, Empirical Comics Research: Digital, Multimodal, and Cognitive Methods, edited by Alexander Dunst, Jochen Laubrock, and Janina Wildfeuer. The book is a collection of empirical studies about comics, summarizing many of the works presented at the Empirical Studies of Comics conference at Bremen University in 2017.

It's fairly gratifying to see a collection like this combining various scholars' work using empirical methods to analyze comics. I've been doing this kind of work for almost two decades at this point, and most of it has been without many other people doing such research, and certainly not coming together in a collaborative way. So, a publication like this is a good marker for what is hopefully an emerging field.

My own contribution to the collection is the last chapter, "Visual Language Theory and the scientific study of comics." I provide an overview of my visual language research spanning the visual vocabulary of images, narrative structure, and page layout.

I also give some advice for how to go about such research and argue for the necessity of an interdisciplinary perspective balancing theory, experimentation, and corpus analysis. The emphasis here is that all three of these techniques are necessary to make progress, and using any one technique alone is limiting.

You can find a preprint version of my chapter here, though I recommend checking out the whole book:

Empirical Comics Research: Digital, Multimodal, and Cognitive Methods

Abstract of my chapter:

The past decades have seen the rapid growth of empirical and experimental research on comics and visual narratives. In seeking to understand the cognition of how comics communicate, Visual Language Theory (VLT) argues that the structure of (sequential) images is analogous to that of verbal language, and that these visual languages are structured and processed in similar ways to other linguistic forms. While these visual languages appear prominently in comics of the world, all aspects of graphic and drawn information fall under this broad paradigm, including diverse contexts like emoji, Australian aboriginal sand drawings, instruction manuals, and cave paintings. In addition, VLT’s methods draw from that of the cognitive and language sciences. Specifically, theoretical modeling has been balanced with corpus analysis and psychological experimentation using both behavioral and neurocognitive measures. This paper will provide an overview of the assumptions and basic structures of visual language, grounded in the growing corpus and experimental literature. It will cover the nature of visual lexical items, the narrative grammar of sequential images, and the compositional structure of page layouts. Throughout, VLT emphasizes that these components operate as parallel yet interfacing structures, which manifest in varying ‘visual languages’ of the world that temper a comprehender’s fluency for such structures. Altogether, this review will highlight the effectiveness of VLT as a model for the scientific study of how graphic information communicates.


Cohn, Neil. 2018. Visual Language Theory and the scientific study of comics. In Wildfeuer, Janina, Alexander Dunst, and Jochen Laubrock (Eds.), Empirical Comics Research: Digital, Multimodal, and Cognitive Methods (pp. 305-328). London: Routledge.

Sunday, July 08, 2018

New paper: Listening beyond seeing

Our new paper has just been published in Brain and Language, titled "Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative." My collaborator Mirella Manfredi carried out this study, which builds on her previous work looking at different types of words (Pow! vs. Hit!) substituted into visual narrative sequences.

Here, Mirella showed visual narratives where the climactic event either matched or mismatched auditory sounds or words. So, as in the figure to the right, a panel showing Snoopy spitting would be accompanied by the sound of spitting or the word "spitting." Or, we played incongruous sounds, like the sound of something getting hit, or the word "hitting."

We measured participants' brainwave responses (ERPs) to these panels/sounds. These stimuli elicited an "N400 response," which occurs during the processing of meaning in any modality (words, sounds, images, video, etc.). We found that although the overall semantic processing response (N400) was similar for both stimulus types, the incongruous sounds evoked a slightly different response across the scalp than the incongruous words. This suggests that, despite the overall process of computing meaning being similar, these stimuli may be processed in different parts of the brain.

In addition, these patterned responses closely resembled what is typical when words or sounds are shown in isolation, and did not resemble what typically appears for images. This suggests that, despite the multimodal image-sound/word interaction determining whether stimuli were congruent or incongruent, the semantic processing of the images did not seem to factor into the responses (or was equally subtracted out across stimulus types).

So, overall, this implies that semantic processing across different modalities uses a similar response (N400), but may differ in the neural areas involved.

You can find the paper here (pdf) or along with my other downloadable papers.

Abstract
Every day we integrate meaningful information coming from different sensory modalities, and previous work has debated whether conceptual knowledge is represented in modality-specific neural stores specialized for specific types of information, and/or in an amodal, shared system. In the current study, we investigated semantic processing through a cross-modal paradigm which asked whether auditory semantic processing could be modulated by the constraints of context built up across a meaningful visual narrative sequence. We recorded event-related brain potentials (ERPs) to auditory words and sounds associated to events in visual narratives—i.e., seeing images of someone spitting while hearing either a word (Spitting!) or a sound (the sound of spitting)—which were either semantically congruent or incongruent with the climactic visual event. Our results showed that both incongruent sounds and words evoked an N400 effect, however, the distribution of the N400 effect to words (centro-parietal) differed from that of sounds (frontal). In addition, words had an earlier latency N400 than sounds. Despite these differences, a sustained late frontal negativity followed the N400s and did not differ between modalities. These results support the idea that semantic memory balances a distributed cortical network accessible from multiple modalities, yet also engages amodal processing insensitive to specific modalities.

Full reference:

Manfredi, Mirella, Neil Cohn, Mariana De Araújo Andreoli, and Paulo Sergio Boggio. 2018. "Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative." Brain and Language 185:1-8. doi: https://doi.org/10.1016/j.bandl.2018.06.008.

Sunday, April 22, 2018

New paper: Combinatorial morphology in visual languages

I'm very pleased to announce that my newest paper, "Combinatorial morphology in visual languages" has now been published in a book collection edited by Geert Booij, The Construction of Words: Advances in Construction Morphology. The overall collection looks excellent and is a great resource for work in linguistics on morphology across domains.

My own contribution makes a first attempt to formalize the structure of combinatorial visual morphology—how visual signs like motion lines or hearts combine with their "stems" to create a larger additive meaning.

This paper also introduces a new concept for these types of signs. Since various visual morphemes are affixes—like the "upfixes" that float above faces (right)—this raises the question: what are these affixes attaching to? In verbal languages, affixes attach to "word" units. But visual representations don't have words, so this paper discusses what type of structure would be required to fill that theoretical gap, and formalizes this within the parallel architecture model of language.

You can download a pre-print of the chapter here (pdf) or on my downloadable papers page.

Abstract

Just as structured mappings between phonology and meaning make up the lexicons of spoken languages, structured mappings between graphics and meaning comprise lexical items in visual languages. Such representations may also involve combinatorial meanings that arise from affixing, substituting, or reduplicating bound and self-standing visual morphemes. For example, hearts may float above a head or substitute for eyes to show a person in love, or gears may spin above a head to convey that they are thinking. Here, we explore the ways that such combinatorial morphology operates in visual languages by focusing on the balance of intrinsic and distributional construction of meaning, the variation in semantic reference and productivity, and the empirical work investigating their cross-cultural variation, processing, and acquisition. Altogether, this work draws these parallels between the visual and verbal domains that can hopefully inspire future work on visual languages within the linguistic sciences.


Cohn, Neil. 2018. Combinatorial morphology in visual languages. In Booij, Geert (Ed.), The Construction of Words: Advances in Construction Morphology (pp. 175-199). London: Springer.

Tuesday, April 10, 2018

Workshop: How We Make and Understand Drawings

A few weeks back I had the pleasure of doing a workshop with Gabriel Greenberg (UCLA) about the understanding of drawings and visual narratives at the University of Connecticut. The workshop was hosted by Harry van der Hulst from the Linguistics Department, and we explored the connections between graphic systems and the structure of language. UConn has now been nice enough to put our talks online for everyone, and I've posted them below.

On Day 1, Gabriel first talked about his theory of pictorial semantics. Then, I presented my theory about the structure of the "visual lexicon(s)" of drawing systems, and then about how children learn to draw. This covered what it means for people to say "I can't draw," as was the topic of my papers on the structure of drawing.



On Day 2, we covered the understanding of sequential images. Here our views diverged, with Gabriel taking more of a "discourse approach", while I presented my theory of Visual Narrative Grammar and several of the studies supporting it. I finished by presenting my "grand theory of everything" about a multimodal model of language and communication. Unfortunately, the mic ran out of batteries on the second day and we didn't know it, so the sound is very soft. But, if you crank up the volume and listen carefully, you should be able to hear it (hopefully).

Thursday, February 15, 2018

New Paper: In defense of a “grammar” in the visual language of comics

I'm excited to announce that my new paper, "In defense of a 'grammar' in the visual language of comics" is now published in the Journal of Pragmatics. This paper provides an overview of my theory of narrative grammar, and rigorously compares it against other approaches to sequential image understanding.

Since my proposal that a "narrative grammar" operates to guide meaningful information in (visual) narratives, there have been several critiques and misunderstandings about how it works. Some approaches have also been proposed as a counterpoint. I feel all of this is healthy in the course of developing a theory and (hopefully) a broader discipline.

In this paper I address some of these concerns. I detail how my model of Visual Narrative Grammar operates and I review the empirical evidence supporting it. I then compare it in depth to the specifics and assumptions found in other models. Altogether I think it makes for a good review of the literature on sequential image understanding, and outlines what we should expect out of a scientific approach to visual narrative.

The paper is available on my Downloadable Papers page, or direct through this link (pdf).

Abstract:

Visual Language Theory (VLT) argues that the structure of drawn images is guided by similar cognitive principles as language, foremost a “narrative grammar” that guides the ways in which sequences of images convey meaning. Recent works have critiqued this linguistic orientation, such as Bateman and Wildfeuer's (2014) arguments that a grammar for sequential images is unnecessary. They assert that the notion of a grammar governing sequential images is problematic, and that the same information can be captured in a “discourse” based approach that dynamically updates meaningful information across juxtaposed images. This paper reviews these assertions, addresses their critiques about a grammar of sequential images, and then details the shortcomings of their own claims. Such discussion is directly grounded in the empirical evidence about how people comprehend sequences of images. In doing so, it reviews the assumptions and basic principles of the narrative grammar of the visual language used in comics, and it aims to demonstrate the empirical standards by which theories of comics' structure should adhere to.


Full reference:

Cohn, Neil. 2018. In defense of a "grammar" in the visual language of comics. Journal of Pragmatics. 127: 1-19

Tuesday, January 23, 2018

My friend, Martin Paczynski

It was with much surprise and a heavy heart that I learned last week that my friend and colleague Dr. Martin Paczynski suddenly passed away. Martin and I met in 2006 when I entered graduate school at Tufts University, and he was the first graduate student working with our mentor Gina Kuperberg (I was her second). He quickly grew to be a close collaborator, a mentoring senior student, my first choice for brainstorming, and my best friend throughout graduate school. Here, I'll honor his place in the sciences and my work.

It's always a nice benefit when your closest colleagues are smarter than you, and that meant Martin's influence on me and my research is everywhere. He essentially trained me in using EEG, and helped me formulate and analyze countless studies. Though he started the program a year before me, we graduated together, which I think made it all the more special.

Though he initially studied computer science and worked in that field, Martin's graduate work at Tufts focused on the neurocognition of linguistic semantics, though he was knowledgeable in many more fields. His early work focused on aspects of animacy and event roles. He later turned to issues of inference like aspectual coercion—where we construe an additional meaning about time that isn't in a sentence, such as the sense of repeated action in sentences like "For several minutes, the cat pounced on the toy." His experiments were elegant and brilliant.

Our collaborative work on my visual language research started with my first brain study, for which Martin was the second author. After graduate school we co-authored our work on semantic roles of event building, which united our research interests. This continued until just recently, as my most recent paper again had Martin as my co-author, directly following our earlier work, almost 6 years after we left graduate school together. And it wasn't just me: he is a co-author on many, many people's work from our lab, which speaks to both his generosity and insightfulness.

Authorship wasn't his only presence in my work. If you've ever seen me give a talk that mentions film, you'll see him starring in the video clips I created as examples (him walking barefoot down our grad office hallway... a frequent sight). If you look at page 85 of my book, there's Martin, shaking hands with another friend:


After graduation, Martin's interests moved away from psycholinguistics, more towards research on mindfulness, stress, and other clinical and applied aspects of neurocognition. For many years he talked about one day studying architecture and design using EEG, but hadn't implemented those ideas just yet. There seemed to be no topic that he couldn't excel at when he applied himself.

He was warm, kind, creative, funny, brilliant, and intellectually generous. I like to especially remember him with a mischievous grin, foreshadowing a comment which would inevitably be both hilarious and astute.

The sciences have lost a spark of insight in Dr. Martin Paczynski, and the world has lost a progressive and compassionate soul. I've lost that and more. Safe travels my friend.

Monday, December 18, 2017

2017: My publications in review

Last year I summarized all the papers I published in 2016, and I thought it worked out so well I might as well keep it going. This year wasn't quite the flurry of books and papers that last year was (due largely to setting up a new EEG lab and submitting multiple grants), but we had several significant papers come out, balancing both brainwave studies and corpus analyses.

So, here are the papers that I published in 2017...

Drawing the Line Between Constituent Structure and Coherence Relations in Visual Narratives (pdf) - This project with my former assistant Patrick Bender looked at people's intuitions for how to "segment" visual narratives into different subsections. Contrary to work on events and discourse, we found that breaks in categories of my model of narrative grammar were better predictors of segmentation than just changes in meaning between images (like spatial or character changes).

When a hit sounds like a kiss (pdf) - This project with Mirella Manfredi and Marta Kutas examined how the brain processes words that replace panels, like Pow! or Hit! replacing a climactic event. We found that the context of the sequence modulated the semantic processing of the words, and that descriptive words (Hit!) generated brain responses consistent with lower probability words than onomatopoeia (Pow!).

What's your neural function, narrative conjunction? (online article, pdf) - I consider this to be one of my coolest and most interesting studies to date. With Marta Kutas, I examined the brain response to a narrative pattern called Environmental-Conjunction. We found that it elicits two types of brain responses consistent with grammatical processing in language. Other work has shown that Environmental-Conjunction appears more in Japanese manga than Western comics, and indeed we found that readership of manga modulated this brain response. So: the brain uses grammatical processing for narrative patterns, and people familiar with this pattern process it in ways that are different from people who are less familiar with it. In other words, the way you process the sequences in comics depends on which ones you read.

A picture is worth more words over time (pdf) - This project with co-authors Ryan Taylor and Kaitlin Pederson is the companion to last year's paper by Kaitlin on how page layouts have changed in superhero comics over the past 80 years. Here, we look at how text-image interactions and storytelling methods have changed from the 1940s to 2010s in American superhero comics. Here's also a link to Ryan presenting this work at Comic-Con International a few years ago.

Path salience in motion events from verbal and visual languages (pdf) - In this corpus study we examined how paths are depicted in 35 different comics from 6 different countries around the world. We found that the patterns of paths differed along dimensions similar to what is found in distinctions of those authors' spoken languages, hinting at possible connections between a visual language that one draws and the spoken language one speaks or writes.

Not so secret agents (pdf) - This paper with Marta Kutas looked at the brain processes for certain postures of characters in events. We found that preparatory postures (like reaching back to throw a ball or to punch) differed from those that did not hint at such subsequent events.

Not a bad collection, if I do say so myself. I'm already excited about the new work set to come out next year, so stay tuned. All these papers and more are available online here.


Saturday, September 23, 2017

New paper: Not so secret agents

I'm excited to announce a new paper, "Not so secret agents: Event-related potentials to semantic roles in visual event comprehension," in the journal Brain and Cognition. This paper was done during my time in the lab of my co-author, Marta Kutas, and collaborating with my friend from grad school, co-author Martin Paczynski.

This paper is a follow-up to a study Martin and I did previously, which found that agents-to-be, the doers of actions, elicit more predictions about subsequent events than patients-to-be, the receivers of actions. For example, an agent-to-be would be a person reaching back their arm to punch (like in this image from the classic How to Draw Comics the Marvel Way), which conveys more information about that upcoming event than the patient-to-be (who is about to be punched).

In this follow-up, we measured participants' brainwaves to see whether this type of "agent advantage" appears when comparing agents in preparatory postures against patients, and against agents from whom we took away the preparatory postures. So, instead of reaching back to punch, the agent's arm would instead be hanging next to their body, not indicating an upcoming punch. We indeed found that preparatory postures appear to be more costly to process prior to an action, and appear to have a downstream influence on processing the subsequent action.

The paper is available here or on my downloadable papers page or this direct pdf link, and is summarized concisely in Experiment 1 of this poster, which has subsequently made for some keen pillows on my couch (right).

Abstract:

Research across domains has suggested that agents, the doers of actions, have a processing advantage over patients, the receivers of actions. We hypothesized that agents as “event builders” for discrete actions (e.g., throwing a ball, punching) build on cues embedded in their preparatory postures (e.g., reaching back an arm to throw or punch) that lead to (predictable) culminating actions, and that these cues afford frontloading of event structure processing. To test this hypothesis, we compared event-related brain potentials (ERPs) to averbal comic panels depicting preparatory agents (ex. reaching back an arm to punch) that cued specific actions with those to non-preparatory agents (ex. arm to the side) and patients that did not cue any specific actions. We also compared subsequent completed action panels (ex. agent punching patient) across conditions, where we expected an inverse pattern of ERPs indexing the differential costs of processing completed actions as a function of preparatory cues. Preparatory agents evoked a greater frontal positivity (600–900 ms) relative to non-preparatory agents and patients, while subsequent completed actions panels following non-preparatory agents elicited a smaller frontal positivity (600–900 ms). These results suggest that preparatory (vs. non-) postures may differentially impact the processing of agents and subsequent actions in real time.


Full reference:

Cohn, Neil, Martin Paczynski, and Marta Kutas. 2017. Not so secret agents: Event-related potentials to semantic roles in visual event comprehension. Brain and Cognition. 119: 1-9.

Thursday, June 15, 2017

New paper: A picture is worth more words over time

I'm excited to announce we have another new paper, "A picture is worth more words over time: Multimodality and narrative structure across eight decades of American superhero comics," now out in the journal Multimodal Communication. This paper examines the changes in text-image relations and storytelling in American superhero comics from the 1940s through the 2010s.

This was a project first undertaken by students in my 2014 Cognition of Comics class, which became expanded into a larger study. My co-authors, Ryan Taylor and Kaitlin Pederson, coded 40 comics across 8 decades (over 9,000 panels), complementing Kaitlin's study of page layout across time in superhero comics.


We examined three aspects of structure: multimodality (text-image relationships and their balance of meaning and narrative), the framing of information in panels (image above), and the linear changes in meaning that occur between panels.

Overall, we found evidence that American superhero comics have shifted toward relying less on text and more on the visual narrative sequencing to carry the weight of the storytelling. This has accompanied changes in the framing of information in panels to use fewer elements (as in the example figure), and to use fewer spatial location changes with more time changes across panels.

In addition, this trend is not new, but has been steadily occurring over the past forty years. That means it cannot just be attributed to the influence of manga since the 1980s (and indeed, as we discuss, our results suggest the influence of manga may be more complicated than people suspect).

You can download the paper here (pdf), or along with all my other downloadable papers. You can also see Ryan presenting this study at Comic-Con International in our panel in 2015:




Abstract:

The visual narratives of comics involve complex multimodal interactions between written language and the visual language of images, where one or the other may guide the meaning and/or narrative structure. We investigated this interaction in a corpus analysis across eight decades of American superhero comics (1940–2010s). No change across publication date was found for multimodal interactions that weighted meaning towards text or across both text and images, where narrative structures were present across images. However, we found an increase over time of narrative sequences with meaning weighted to the visuals, and an increase of sequences without text at all. These changes coincided with an overall reduction in the number of words per panel, a shift towards panel framing with single characters and close-ups rather than whole scenes, and an increase in shifts between temporal states between panels. These findings suggest that storytelling has shifted towards investing more information in the images, along with an increasing complexity and maturity of the visual narrative structures. This has shifted American comics from being textual stories with illustrations to being visual narratives that use text.

Reference:

Cohn, Neil, Ryan Taylor, and Kaitlin Pederson. 2017. A picture is worth more words over time: Multimodality and narrative structure across eight decades of American superhero comics. Multimodal Communication. 6(1): 19-37.

Wednesday, May 24, 2017

New paper: What's your neural function, narrative conjunction?

I'm excited to announce that my new paper "What's your neural function, narrative conjunction: Grammar, meaning, and fluency in sequential image processing" is now out in the open access journal Cognitive Research: Principles and Implications. This study was co-authored by Marta Kutas, who was my advisor while I was a postdoctoral fellow at UC San Diego.

Simple take home message: The way you process the sequences in comics depends on which ones you read.

The more detailed version... This study is maybe the coolest brain study I've done. Here, we examine a particular pattern in the narrative grammar used in comics: Environmental-Conjunction. This is when you have characters in different panels at the same narrative state, but you infer that they belong to the same spatial environment.

Most approaches to comprehending sequential images focus on just the comprehension of meaning (like in "panel transitions"). However, my theory says that this pattern involves both the construction of meaning and the processing of the narrative pattern itself. The pattern is governed by a narrative grammar that is independent of meaning.

When analyzing people's brain responses to conjunction patterns, we found support for two processes. We found one brainwave associated with an "updating" of a mental model (the P600), and another associated with grammatical processing (an anterior negativity). Crucially, this grammatical processor was insensitive to manipulations of meaning, indicating that it was only processing the conjunction pattern. So, you don't just process meaning, but also the narrative pattern.

But, that's only the first part...

In other analyses, we've shown that Japanese manga use more Environmental-Conjunction than American or European comics. So, we used a statistical analysis to test whether participants' background reading habits influenced their brain processing of conjunction. And... it did!

Specifically, we found that participants who more frequently read manga "while growing up" tended to rely more on grammatical processing, while infrequent manga readers relied more on updating. In other words, since frequent manga readers were exposed to the conjunction pattern more in their reading habits, their brains used a more automatic, grammatical process to comprehend it. Note: this result is especially cool because our comics stimuli were not manga; they were manipulated Peanuts strips that used a pattern frequent in manga.

This result contradicts the idea that comics are uniformly understood by all people, or even the idea that their processing uses a single cognitive process (like "closure"). Rather, comics are understood based on people's fluency with the patterns found in specific visual languages across the world.

You can read the paper online here, download the pdf here, or check out the poster summary.

Abstract:

Visual narratives sometimes depict successive images with different characters in the same physical space; corpus analysis has revealed that this occurs more often in Japanese manga than American comics. We used event related brain potentials to determine whether comprehension of “visual narrative conjunctions” invokes not only incremental mental updating as traditionally assumed, but also, as we propose, “grammatical” combinatoric processing. We thus crossed (non)/conjunction sequences with character (in)/congruity. Conjunctions elicited a larger anterior negativity (300-500ms) than non-conjunctions, regardless of congruity, implicating “grammatical” processes. Conjunction and incongruity both elicited larger P600s (500-700ms), indexing updating. Both conjunction effects were modulated by participants’ frequency of reading manga while growing up. Greater anterior negativity in frequent manga readers suggests more reliance on combinatoric processing; larger P600 effects in infrequent manga readers suggest more resources devoted to mental updating. As in language comprehension, it seems that processing conjunctions in visual narratives is not just mental updating but also partly grammatical, conditioned by comic readers’ experience with specific visual narrative structures.

Reference:

Cohn, Neil and Marta Kutas. 2017. "What’s your neural function, visual narrative conjunction? Grammar, meaning, and fluency in sequential image processing." Cognitive Research: Principles and Implications. 2(27): 1-13

Sunday, April 16, 2017

Tourist traps in comics land*: Unpublished comics research

In a series of Twitter posts, I recently reflected on the pitfalls of various comics research that hasn't been published. Since I think it contains some valuable lessons, I'm going to repeat and expand on them here...

Though I've written the most about psychological studies of how people understand comics, other people have been doing these types of studies before me. What's interesting is that many of these studies were not published because they found null results. There are a few trends in this work...

Space = Time

The topic I've heard about the most is the testing of McCloud's idea that panel size relates to the duration of conceived time, and that longer vs. shorter gutters relate to longer vs. shorter spans of "time" between panels. I critiqued the theory behind this idea that "space = time" back in this paper, but I've heard of several scholars who have tested this with experiments. Usually these studies involved presenting participants with different-sized panels/gutters and then having participants rate their perceived durations.

In almost all of these studies, no one found any support for the idea that "physical space = conceived time." I can only think of one study that did find something supporting it, and that was only for a subset of the stimuli, and thus warranted further testing (which hasn't been done yet).

Because these studies found null results, they weren't deemed noteworthy enough to warrant publication. And since none got published, other labs didn't know about them, so they tried it too, with the same null results. I think it's a good case for the importance of publishing null results: they serve both to disprove hypotheses and to inform others not to try to grab at the same smoke.

Eye-tracking

The other type of study on comics that usually doesn't get published is eye-tracking. I know of at least half a dozen unpublished eye-tracking studies looking at people reading comic pages. The main reason these studies aren't published is that they're often exploratory, with no real hypotheses to be tested. Most comics eye-tracking studies just examine what people look at, which doesn't really tell you much if you don't manipulate anything. This can be useful for telling you basic facts about what people look at (types of information, how long, etc.), but without a specific manipulation, it is less informative and has lots of confounds.

An example: Let's say you run an eye-tracking study of a particular superhero comic and find that people spend more time fixating on text than on the images (which is a frequent finding). Now the questions arise: Is it because of the specific comic you chose? Is it because your comic had a particular uncontrolled multimodal interaction that weights meaning more to the text? Is it because your participants lacked visual language fluency, and so they relied more on text than images? Is it because you chose a superhero comic, but your participants read more manga? Without more controls, it's hard to know anything substantial.

Good science means testing a hypothesis, which means having a theory that can be tested by manipulating something. Without a testable theory you don't have any real hypothesis from which to create a manipulation, which results in an eye-tracking study about comics that isn't publishable. Eye-tracking is an informative tool, but the real "meat" of the research needs to be in the thing that is being manipulated.

I'll note that this is the same as when people use (or advise using) fMRI or EEG to study the processing of (visual) narratives in the brain. I've seen several studies of "narrative" or "visual narrative" where they simply measure the brain activity to non-manipulated materials and then claim that "these are the brain areas involved in comics/visual narrative/narrative!"

In fact, such research is wholly uninformative, because nothing specific is being tested, and such research betrays an ignorance of just how complex these structures actually are. It would be inconceivable for any serious scholar of language to simply have someone passively read sentences and then claim to "know how they work" by measuring fMRI or eye-tracking responses to them. Why then the presumption of simplicity for visual narratives?

Final remarks

One feature of unpublished research on comics is that it is often undertaken by very good researchers who had little knowledge base about what goes on in comics and/or the background literature of that field. It is basically "scientific tourism." While it is of course great that people are interested enough in the visual language of comics to invest the time and effort to run experiments, it's also a recipe for diminishing returns. Without background knowledge or intuition, it's hard to know why your experiment might not be worth running.

Nevertheless, I also agree that it would be useful to know what types of unpublished studies people have done. Publishing such results would be informative about what hasn't been found, and would prevent future researchers from chasing topics they maybe shouldn't.

So, let me conclude with an "open call"...

If you've done a study on comics that hasn't been published (or know someone who has!): Please contact me. At the least, I'll feature a summary (or link) to your study on this blog, and if I accrue enough of them, perhaps I can curate a journal or article for reporting such results.


*Thanks to Emiel van Miltenburg for the post title!