Monday, February 01, 2016

New Paper: The pieces fit

Magical things happen at conferences sometimes. Back at the Cognitive Neuroscience Society conference in 2014, I ran into my graduate school friend, Carl Hagmann, who mentioned he was doing interesting work on rapid visual processing, where people are asked to detect certain objects within an image sequence that changes at very fast rates (each image appearing for as little as 13 milliseconds). He noticed that I was working with image sequences too and thought we should try this rapid pace with visual narratives (similar to this old paper I blogged about).

Lo and behold, it actually happened, and now our paper is published in the journal Acta Psychologica!

Our paper examines how quickly people process visual narrative sequences by showing participants images from comics for either 1 second or half a second each. In some sequences, we flipped the order in which images appeared. In general, we found that "switches" spanning greater distances were recognized with better accuracy, and those sequences were rated as less comprehensible. Also, switches between groupings of panels were recognized better than those within groups, providing further evidence that visual narratives group information into constituents.

This was quite the fun project to work on, and it marks a milestone: It's the first "visual language" paper I've had published where I'm not the first author! Very happy about that, and there will be several more like it coming soon...

You can find the paper via direct link here (pdf) or on my downloadable papers page.


Abstract:

Recent research has shown that comprehension of visual narrative relies on the ordering and timing of sequential images. Here we tested if rapidly presented 6-image long visual sequences could be understood as coherent narratives. Half of the sequences were correctly ordered and half had two of the four internal panels switched. Participants reported whether the sequence was correctly ordered and rated its coherence. Accuracy in detecting a switch increased when panels were presented for 1 s rather than 0.5 s. Doubling the duration of the first panel did not affect results. When two switched panels were further apart, order was discriminated more accurately and coherence ratings were low, revealing that a strong local adjacency effect influenced order and coherence judgments. Switched panels at constituent boundaries or within constituents were most disruptive to order discrimination, indicating that the preservation of constituent structure is critical to visual narrative grammar.


Hagmann, Carl Erick, and Neil Cohn. 2016. "The pieces fit: Constituent structure and global coherence of visual narrative in RSVP." Acta Psychologica 164:157-164. doi: 10.1016/j.actapsy.2016.01.011.

Thursday, January 28, 2016

New Book: The Visual Narrative Reader

I'm very excited to announce that today is the release date for my new book, The Visual Narrative Reader! What makes this one so fun is that I didn't write the whole thing—it features chapters from many luminaries of studying visual narratives. Here's how it came about...

Shortly after the release of my last book, The Visual Language of Comics, I was biking home from work and started reflecting on the important papers that had influenced me along my journey of doing this research.

I thought about David Wilkins's great paper on Australian sand narratives that fully challenged my conceptions of drawings, which I read from a third-generation photocopy right after college. I thought about Charles Forceville's great work on visual metaphor in comics, and Jun Nakazawa's psychology experiments looking at how kids (and adults) in Japan comprehend comics. Or Brent Wilson's 40 years of research looking at how kids across the world draw visual narratives. Or the interesting dissertations that looked at the relations between McCloud's panel transitions and linguistic discourse theories.

All of this work greatly influenced my theories. And yet, many of the people in Comics Studies or other fields looking at visual narratives had no idea that most of this work existed. In many cases, these papers were incredibly hard to find! (One of the dissertations I had to print from microfiche; another paper couldn't be obtained through interlibrary loan.)

So, I decided that someone ought to compile this work together so that it would be readable by a larger audience, and I decided that that "someone" should be me! The result is the new book that just became available.

I feel very proud to have been able to combine these great works into one volume that can hopefully enrich people's knowledge of visual narratives and the varied research that has gone into their cross-disciplinary study over the years.

You can find more information about the book on my website here, along with praise from scholars and creators of comics alike. I hope you like it as much as I do!

Here's the table of contents:

Preface

1. Interdisciplinary approaches to visual narrative, Neil Cohn

Section 1: Theoretical approaches to sequential images
2. Linguistically-oriented comics research in Germany, John Bateman and Janina Wildfeuer
3. No content without form: Graphic style as the primary entrance to a story, Pascal Lefèvre
4. Conceptual Metaphor Theory, Blending Theory, and other Cognitivist perspectives on comics, Charles Forceville
5. Relatedness: Aspects of textual connectivity in comics, Mario Saraceni
6. A little cohesion between friends; Or, we're just exploring our textuality, Eric Stainbrook

Section 2: Psychology and development of visual narrative
7. Manga literacy and manga comprehension in Japanese Children, Jun Nakazawa
8. What happened and what happened next: Kids’ visual narratives across cultures, Brent Wilson

Section 3: Visual narratives across cultures
9. The Walbiri sand story, Nancy Munn
10. Alternative representations of space: Arrernte Narratives in Sand, David Wilkins
11. Sequential text-image pairing among the Classic Maya, Søren Wichmann and Jesper Nielsen
12. Linguistic relativity and conceptual permeability in visual narratives: New distinctions in the relationship between language(s) and thought, Neil Cohn

Further Reading
Index

Wednesday, December 30, 2015

New paper: The vocabulary of manga

I'm happy to announce that my new article with co-author Sean Ehly, "The vocabulary of manga: Visual morphology in dialects of Japanese Visual Language" is now published in the Journal of Pragmatics!

This paper is especially exciting because my co-author is a former student who began this research as part of a class project. It now joins previous publications stemming from projects in that class, with more on the way!

Sean wanted to investigate the "morphology" of the Japanese Visual Language used in manga—graphic elements like bloody noses for lust or a giant sweat drop for anxiety. I had discussed some of these in my book, but Sean recognized that there were many that I missed. He listed over 70 of these elements related to emotion alone! In fact, as a resource to other researchers and fans, we've now compiled this "visual vocabulary" into a list:

Morphology in Japanese Visual Language

We don't consider it exhaustive, so if you think of others that should be added, please let us know!**

We then used this list to investigate how these elements are used in 20 different manga—10 shojo and 10 shonen—which amounted to over 5,000 panels coded across these books. Overall, we show that most of these "visual morphemes" appear in both types of books, though certain morphemes are more prevalent in one type or another. We take this as the first empirical evidence that there may be distinct "dialects" within a broader Japanese Visual Language, at least for this one dimension of structure.
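
For readers curious about what this kind of corpus comparison looks like in practice, here's a minimal sketch (not our actual analysis code) that assumes a hypothetical spreadsheet of panel codes with one row per morpheme occurrence. It simply tallies how often each morpheme appears per genre and reports the proportion of coded panels containing it:

```python
# Hypothetical sketch: compare morpheme frequencies between shojo and shonen corpora.
# Assumes a CSV named "manga_panel_codes.csv" with columns: book, genre, panel_id, morpheme
# (one row per morpheme occurrence; a row with a blank morpheme marks a panel with none).
import csv
from collections import Counter, defaultdict

panels_per_genre = defaultdict(set)      # genre -> set of (book, panel_id) coded
morpheme_counts = defaultdict(Counter)   # morpheme -> counts per genre

with open("manga_panel_codes.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        genre = row["genre"]
        panels_per_genre[genre].add((row["book"], row["panel_id"]))
        if row["morpheme"]:
            morpheme_counts[row["morpheme"]][genre] += 1

# Proportion of panels in each genre that contain each morpheme.
for morpheme, counts in sorted(morpheme_counts.items()):
    proportions = {g: counts[g] / len(panels) for g, panels in panels_per_genre.items()}
    print(morpheme, {g: f"{p:.1%}" for g, p in proportions.items()})
```

From proportions like these, one could then ask whether particular morphemes appear reliably more often in one genre than the other.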

The paper is available along with all others at my Downloadable Papers page, and directly as a pdf. Here's the full abstract:

Abstract
The visual representations of non-iconic elements in comics of the world often take diverse and interesting forms, such as how characters in Japanese manga get bloody noses when lustful or have bubbles grow out their noses when they sleep. We argue that these graphic schemas belong to a larger "visual vocabulary" of a "Japanese Visual Language" used in the visual narratives from Japan. Our study first described and categorized 73 conventionalized graphic schemas in Japanese manga, and we then used our classification system to seek preliminary evidence for differences in visual morphology between the genres of shonen manga (boys’ comics) and shojo manga (girls’ comics) through a corpus analysis of 20 books. Our results find that most of these graphic schemas recur in both genres of manga, and thereby provide support for the idea that there is a larger Japanese Visual Language that pervades across genres. However, we found different proportions of usage for particular schemas within each genre, which implies that each genre constitutes their own "dialect" within this broader system.

Cohn, Neil and Sean Ehly. 2016. The vocabulary of manga: Visual morphology in dialects of Japanese Visual Language. Journal of Pragmatics. 92: 17-29.



** Longtime followers of this site may remember that we attempted a similar listing for morphology across different visual languages based on a discussion on my now defunct forum over 10 years ago. Perhaps I'll have to create additional pages for other visual languages as well, now that we have ongoing corpus research underway...

Tuesday, November 24, 2015

New Paper: A multimodal parallel architecture

I'm excited to share that I now have a new article in the latest issue of Cognition: "A multimodal parallel architecture: A cognitive framework for multimodal interactions." This paper presents my overall model of language, and then uses it to explore different aspects of multimodal communication.

The key distinctions in this paper are about multimodal relations that must balance grammar in multiple domains. Many models of multimodal relations describe the various meaningful (i.e., semantic) interactions between modalities. This paper extends beyond these relationships to talk about how the dominance of meaning in one modality or another must negotiate grammatical structure in one or multiple modalities.

This paper has had a long journey... I first had many of these ideas way back in 2003, and they were part of an early draft on my website about multimodality called "Interactions and Interfaces." In 2010, I started reconsidering how to integrate the theory into the context of my mentor Ray Jackendoff's model of language—the parallel architecture. The component parts were always the same, but articulating them in this way allowed for a better grounding in a model of cognition, and for further elaboration of how these distinctions about multimodality fit within a broader architecture. I then tinkered with the manuscript on and off for another 5 years...

So, 12 years later, this paper is finally out! It pretty much lays out how I conceive of language and different modalities of language (verbal, signed, visual), not to mention their relationships. I suppose that makes it a pretty significant paper for me.

The paper can be found on my Downloadable Papers page, and a direct link (pdf) is here.

Abstract:
Human communication is naturally multimodal, and substantial focus has examined the semantic correspondences in speech–gesture and text–image relationships. However, visual narratives, like those in comics, provide an interesting challenge to multimodal communication because the words and/or images can guide the overall meaning, and both modalities can appear in complicated "grammatical" sequences: sentences use a syntactic structure and sequential images use a narrative structure. These dual structures create complexity beyond those typically addressed by theories of multimodality where only a single form uses combinatorial structure, and also poses challenges for models of the linguistic system that focus on single modalities. This paper outlines a broad theoretical framework for multimodal interactions by expanding on Jackendoff’s (2002) parallel architecture for language. Multimodal interactions are characterized in terms of their component cognitive structures: whether a particular modality (verbal, bodily, visual) is present, whether it uses a grammatical structure (syntax, narrative), and whether it "dominates" the semantics of the overall expression. Altogether, this approach integrates multimodal interactions into an existing framework of language and cognition, and characterizes interactions between varying complexity in the verbal, bodily, and graphic domains. The resulting theoretical model presents an expanded consideration of the boundaries of the "linguistic" system and its involvement in multimodal interactions, with a framework that can benefit research on corpus analyses, experimentation, and the educational benefits of multimodality.


Cohn, Neil. 2015. A multimodal parallel architecture: A cognitive framework for multimodal interactions. Cognition 146: 304-323

Monday, November 09, 2015

How to analyze comics with narrative grammar

Over the past several years, I've presented a lot of evidence that panel-to-panel "transitions" cannot account for how we understand sequences of images in visual narratives like comics. Rather, I've argued that narrative sequential images use a "narrative grammar" that assigns roles to panels, and then groups panels into hierarchic relationships.

Though there are many reasons panel transitions don't work to explain how we understand sequential images, one reason the theory may be attractive is that it is intuitive. A person can easily look at a sequence and assign transitions between panels, and it "feels" right because that matches one's conscious experience of reading a comic (though it is not very cognitively accurate).

In contrast, my theory of narrative grammar is fairly complex, and much harder to intuit. Though, I think this is somewhat as it should be—there are a lot of complicated things going on in sequential images that people don't realize! However, this complexity means that people might have a hard time implementing the theory in practice.

SO... to help rectify this issue I've now written a "tutorial" that aims to explain the process people should follow when analyzing a visual narrative sequence and attempting to implement this theory of narrative grammar.

You can download a pdf of the tutorial here, and it can also be found on my Downloadable Papers page and my Resources page.

The short summary is that one cannot simply look at a sequence and assign labels to things. There are a series of procedures and diagnostics to use, and there is an order of operations that is optimal for arriving at an analysis. This is the same as with most any linguistic theory, which usually requires instruction or dedicated learning to implement.

This tutorial is aimed at researchers (or anyone curious) who wish to implement this theory in practice and/or learn more about the underlying logic for how it works. It is also aimed at teachers who might wish to teach this theory in their classrooms, but may not yet feel confident doing so.**

As you'll find, the tutorial only partially covers the basic principles of the theory itself. For those you should reference my papers and my book, The Visual Language of Comics. The tutorial can thus supplement these works for a full understanding and implementation of the theory.


**On this point, note: anyone who wants to learn how to do this, especially with the intent of putting it into practice in research or instruction, should feel free to contact me for more guidance and resources.

Monday, November 02, 2015

Dispelling emoji myths

In my recent BBC article and my blog posts about emoji, I have tried to explain how emoji are not an emerging language, but that they do serve important functions that resemble other limited communicative systems.

Having now poked around online quite a bit looking at what people say about emoji, I'm particularly struck by the repetition of a few myths. Since these misunderstandings creep up all over the place, I wanted to address them here...

1. Emoji are not like hieroglyphics

First off, many people have compared emoji to Egyptian hieroglyphics, saying either that they work exactly the same way or that emoji are a "modern hieroglyphics."

This is simply not true, mostly because hieroglyphics were a full-blown writing system where each sign had a mapping to sound. Hieroglyphics are not "symbol systems" made up of pictures. To me, this seems like the kind of misperception that people who are only used to an alphabet have about other writing systems: "if each sign isn't a sound like a letter, it must be just about meanings!"

There were actually several ways that hieroglyphic signs operated as a writing system. Some signs did indeed mean what they represented. For example, the sign for "owl" looked like an owl, and was pronounced "m":


However, the use of "rebus" signs meant that those signs could also be used without that iconic meaning, only for their sound value (i.e., that owl sign would be used for many words using the sound "m," but not for its meaning of "owl").

From there, both of these types of signs could be combined into compound signs. For example, this combination takes the rebus of the owl (using just the sound "m") and the sign for ear (using its meaning, but not its pronunciation) for the word "to hear":


This type of compound used signs both for their meaning value and for their sound value. There are no compounds made up of two signs that just contribute to meaning—they always have some sound-based sign present. Hieroglyphics also sometimes use fairly abstract representations, and purely sound-based signs which vary based on the number of consonants they represent.

In sum, unlike the purely imagistic meanings found in emoji, hieroglyphics are a fully functioning writing system that is intrinsically tied to the Egyptian language. This is also totally different from emoji in context, because imagistic emoji accompany a separate writing system (for English speakers, the alphabet). In the case of hieroglyphics, they are the writing system.

I'll note also that these same things apply to Chinese characters. Though they work a little differently than hieroglyphics, the same basic principles apply: they are a writing system tied to the sounds of a language, not a series of pictures that only have imagistic meaning.


2. There is no such thing as a universal language

I have seen many people proclaim that one of the exciting things about emoji is their quality of transcending spoken languages to be a "universal language." This is also hogwash, for many reasons. No language is universal, whether verbal, signed, or visual. Here are several reasons why images (including emoji) are not, and cannot be, universal:

How they convey meaning

Just because images may be iconic—they look like what they represent—does not mean that they are culturally universal. Even simple things like the way people dress do not translate across cultures, not to mention variation in facial expressions or, even more dramatically, fully conventionalized meanings like giant sweat drops to convey anxiety. Note that, since they were originally created in Japan, many emoji are already culturally specific in ways that do not translate well outside Japan.

This is not to mention the limitations of emoji that I discussed in my BBC article, such as the fact that they rely on a non-producible vocabulary that does not allow the easy creation of new signs, and that their sequencing maintains the simple structure characteristic of impoverished grammars. In other words, they are highly limited in what they can express, even as a graphic system.

Cultural exposure

We also know that images are not universal because a host of studies have shown that people who do not have cultural exposure to images often have difficulty understanding the meanings of images. Such deficits were investigated widely in the 1970s and 1980s under the umbrella of "visual literacy." Here's how I summarized one such study, by Fussell & Haaland (1978), examining individuals from Nepal:

As described in the paper, the individuals tested "had significant deficits understanding many aspects of single images, even images of faces with simple emotions in cartoony styles (happy - 33%, sad - 60%). They had even more difficulty with images related to actions (only 3% understood an image trying to convey information about drinking boiled water). Some respondents had radically different interpretations of images. For example, a fairly simple cartoony image of a pregnant woman was interpreted as a woman by 75%, but 11% thought it was a man, and others responded that it was a cow, rabbit, bird, or other varied interpretations. They also had difficulty understanding when images cut off parts of individuals by the framing of a panel (such as images of only hands holding different ingredients)."

Such findings are not rare. For example, Research into Illustration by Evelyn Goldsmith summarizes several studies along these lines. Bottom line: Understanding drawings requires exposure to a graphic system, just like learning a language.

There is not just one visual language

Most discussion of the universality of images focuses on how they are comprehended. But, this overlooks the fact that someone also had to create those images, and images vary widely in their patterns across the world.

That is, as I argue in my book, there is not just one visual language; rather, there are many visual languages in the world too. There's a reason why the "style" of American superhero comics differs from Japanese manga or French bande dessinée or instruction manuals, etc. Drawing "styles" reflect the patterns of graphic information stored in the minds of those who create them. These patterns vary across the world, both within and between cultures.

This happens because people are different and brains are not perfect. There will always be variation and change within what is perceived to be a coherent system. This is in part because any given language is actually a socio-cultural label applied to the system(s) used by individual people. There is no "English" floating out in the ether to which we all link up. Rather, "English" is created by the similarities of patterning between the languages spoken by many people.

Indeed, though many who speak "English" can communicate in mutually intelligible ways, there are hundreds of sub-varieties of "English" with variations that range from subtle (slight accents) to dramatic (changing vocabulary and grammar), across geographic, cultural, and generational dimensions.

Similarly, even if there were to be a universal language—be it spoken or visual—sub-varieties would emerge based on who is using the system and how they do it. Just because images are typically iconic does not mean that they are transparent and outside of cognitive/cultural patterns.

Emoji in part exemplify this facade that a language is external to the patterns in people's minds, since the vocabulary is provided by tech companies and does not directly emerge from people's creations. Someone (the Unicode Consortium) decides which emoji can be used, and then makes them available. This is the opposite of how actual languages work, as manifestations of similarities between cognitive structures across speakers.

In sum, drawings are not universal because drawings differ based on the cultural "visual languages" that result from people using different patterns across the world.

Tuesday, October 27, 2015

New Paper: Narrative Conjunction's Junction Function

I'm excited to announce that my new paper, "Narrative Conjunction's Junction Function" is now out in the Journal of Pragmatics! This is the first major theoretical paper I've had in a long time, and it goes into extensive detail about several aspects of my theory of how narrative image sequences are comprehended, Visual Narrative Grammar.

The main topic of this paper is "conjunction" which is when multiple panels are grouped together and play the same role in a sequence. I argue that this narrative pattern is mapped to meaning in several different ways. In addition to these arguments, the paper provides a fairly extensive treatment of the basics of my narrative theory along with the underlying logic it is guided by (i.e., diagnostic tests).

You can find the paper here (pdf) or along with my other downloadable papers. Here's the full abstract:

Abstract

While simple visual narratives may depict characters engaged in events across sequential images, additional complexity appears when modulating the framing of that information within an image or film shot. For example, when two images each show a character at the same narrative state, a viewer infers that they belong to a broader spatial environment. This paper argues that these framings involve a type of “conjunction,” whereby a constituent conjoins images sharing a common narrative role in a sequence. Situated within the parallel architecture of Visual Narrative Grammar, which posits a division between narrative structure and semantics, this narrative conjunction schema interfaces with semantics in a variety of ways. Conjunction can thus map to the inference of a spatial environment or an individual character, the repetition of parts of actions, or disparate elements of semantic associative networks. Altogether, this approach provides a theoretical architecture that allows for numerous levels of abstraction and complexity across several phenomena in visual narratives.


Cohn, Neil. 2015. "Narrative conjunction’s junction function: The interface of narrative grammar and semantics in sequential images." Journal of Pragmatics 88:105-132. doi: 10.1016/j.pragma.2015.09.001.

Tuesday, October 13, 2015

Emoji and visual languages

I'm excited that my recent article on the BBC website about emoji has gotten such a good response. So, I figured I'd write an addendum here on my blog to expand on things I couldn't get a chance to write in the article. I of course had a lot to say in that article, and it was inevitable that not everything could be included.

The overall question I was addressing was, "are emoji a visual language?" or "could emoji become a visual language?" My answer to both of these is "no."

Here's a quick breakdown of why, which I say in the article:

1. Emoji have a limited vocabulary set that is made of whole-unit pieces, and that vocabulary has no internal structure (i.e., you can't adjust the mouth of the faces while keeping other parts constant, or change the heads on bodies, or change the position of arms)

2. Emoji force these stripped-down units into unit-unit sequences, which just isn't how drawings work to communicate. (More on this below)

3. Emoji use a limited grammatical system, mostly using the "agent before act" heuristic found across impoverished communication systems.

All of these things prevent emoji from communicating like actual languages. They also prevent emoji from communicating like actual drawings that are not mediated by a technological interface.

There are two addendums I'd like to offer here.

First, these limitations are not just constrained to emoji. They are limitations of every so-called "pictogram language," systems usually created to be "universal" across spoken languages. Here, the biggest problem is in believing that graphic information works the way that writing does: putting individual units, each of which has a "single meaning," into a unit-unit sequence.

However, drawings don't work this way to communicate. There are certainly ways to put images in sequence, such as what is found in the visual language of comics. The nature of this sequencing has been my primary topic of research for about 15 years. When images are put into sequence, they have characteristics unlike anything found in these "writing-imitative" pictogram sequences.

For example, actual visual language grammars typically depict events across the image sequence. This requires the repetition of the same information in one image as in the other, only slightly modified to show a change in state. Consider this emoji sequence:


This can either be seven different monkeys, or it can be one monkey at seven different points in time (and recognizing this difference requires at least some cultural learning). Visual language grammars allow for both options. Note, though, that such a grammar doesn't parcel out the monkey as separate from the actions. It does not read "monkey, cover eyes" and then "monkey, cover mouth," etc., where the non-action monkey just gives object information and the subsequent one just gives action information. Rather, both object and event information are contained in the same unit.

So, what I'm saying is that the natural inclination for grammars in the visual form is not like the grammars that operate in the verbal or written form. They just don't work the same, and pushing graphics to operate this way will never succeed, because it goes against the way in which our brains have been built to deal with graphic information.

Again: No system that strips down graphics into isolated meanings and puts them in a sequence will ever communicate on par with actual languages. Nor will it actually communicate the way that actual visual languages do...

And this is my second point: There are already visual languages in the world that operate as natural languages that don't have the limitations of emoji.

As I describe in my book, The Visual Language of Comics, the structure of drawing is naturally built like other linguistic systems. It becomes a "full" visual language when a drawing system is shared across individuals (not just a single person's style) and has 1) a large visual vocabulary that can create new and unique forms, and 2) vocabulary items that can be put into sequences with an underlying hierarchic structure.

This structure often becomes the most complex and robust in the visual languages used in comics, but we find complex visual languages in other places too. For example, in my book I devote a whole chapter to the sand drawings of Australian Aboriginals, which constitute a full visual language far outside the context of comics (and one used in real-time interactive communicative exchanges). But, whether a drawing system becomes a full visual language or not, the basis for those parts is similar to other linguistic systems that are spoken or signed.

The point here is this: emoji are not a visual language, and can never be one because of the intrinsic limitations on the way that they are built. Drawings don't work like writing, and they never will.

However, the counter point is this: we already have visual languages out in the world—we just haven't been using them in ways that "feel" like language.

... yet.

Monday, September 28, 2015

New paper: Getting a cue before getting a clue

It seems the last few months on this blog have been all about inference generation... and I'm happy to say this post continues the trend! I'm excited to announce that I have a new paper out in the journal Neuropsychologia entitled "Getting a cue before getting a clue: Event-related potentials to inference in visual narrative comprehension."

This paper examines the brain response to the generation of inference in a particular narrative construction in comics. As far as I know, it's the first neuroscience paper to examine inference specifically in visual narratives. Specifically, our analysis focused on comparing sequences like these:


The top sequence (a) is from an actual Peanuts strip. What is key here is that you never see the main event of the sequence: Linus retrieving the ball. In my narrative structure, this "climactic" state would be called a "Peak." Instead, the ambiguous image of Charlie watching hides this event, and that panel is more characteristic of a "Prolongation," which extends the narrative further without much action.

Contrast this with (b), which has a structure that also appears in several Peanuts strips. Here, the third panel also does not show the main event (the same event as in "a"), but the exclamation mark at least implies that some event is happening. In my narrative structure, this cue is enough to tell you that this panel is the climax, despite not showing you what the climax is.

We were curious, then, whether the brain distinguishes between these types of sequences, which both should require inference (indeed, the same inference) but differ in their narrative structure (spoiler: it does!). You can read a full pdf of the paper here. Here's the full abstract and reference:

Abstract:

Inference has long been emphasized in the comprehension of verbal and visual narratives. Here, we measured event-related brain potentials to visual sequences designed to elicit inferential processing. In Impoverished sequences, an expressionless “onlooker” watches an undepicted event (e.g., person throws a ball for a dog, then watches the dog chase it) just prior to a surprising finale (e.g., someone else returns the ball), which should lead to an inference (i.e., the different person retrieved the ball). Implied sequences alter this narrative structure by adding visual cues to the critical panel such as a surprised facial expression to the onlooker implying they saw an unexpected, albeit undepicted, event. In contrast, Expected sequences show a predictable, but then confounded, event (i.e., dog retrieves ball, then different person returns it), and Explicit sequences depict the unexpected event (i.e., different person retrieves then returns ball). At the critical penultimate panel, sequences representing depicted events (Explicit, Expected) elicited a larger posterior positivity (P600) than the relatively passive events of an onlooker (Impoverished, Implied), though Implied sequences were slightly more positive than Impoverished sequences. At the subsequent and final panel, a posterior positivity (P600) was greater to images in Impoverished sequences than those in Explicit and Implied sequences, which did not differ. In addition, both sequence types requiring inference (Implied, Impoverished) elicited a larger frontal negativity than those explicitly depicting events (Expected, Explicit). These results show that neural processing differs for visual narratives omitting events versus those depicting events, and that the presence of subtle visual cues can modulate such effects presumably by altering narrative structure.


Cohn, Neil, and Marta Kutas. 2015. Getting a cue before getting a clue: Event-related potentials to inference in visual narrative comprehension. Neuropsychologia 77:267-278. doi: 10.1016/j.neuropsychologia.2015.08.026.

Tuesday, September 01, 2015

Inference generating comic panels

Since my last post discussed my new paper on action stars, I thought it would be worth doing a refresher on these types of panels in the visual language of comics. "Action stars" are a type of panel that replaces a primary action of a sequence with a star-shaped flash, which on its own usually represents an impact. In the case of action stars, this representation is blown up so large that it encompasses the whole panel, as in the third panel here:


Interestingly, the "star shaped flash" of action stars does not necessarily just convey an impact—my study has shown that seems to generalize to lots of events even without an impact. One reason might be because the "star shaped flash" representation is also the way to typically represent the "carrier" of sound effects. Sound effects, like "Pow!" do typically—but not always—accompany action stars. So, this representation is technically polysemous between impacts and loud sounds—the same physical representation can have multiple meanings—and in the case of action stars it is a little ambiguous.

The key thing I want to focus on here though is that action stars replace the primary actions of the sequence, and thus cause those events to be inferred. In the example above, you infer that Snoopy is run over by the boys playing football, though you don't see it. This doesn't happen "in between the images," but happens at the action star itself, though you don't know what that event is until the next panel.

I discuss these types of "replacing panels" ("suppletive" in linguistic parlance) quite a bit in my book, The Visual Language of Comics, where I pointed out that not all images can work in this way. The "fight cloud" in (b), for example, does work effectively to replace a panel—here meaning specifically a fight, not a general action like action stars. But other elements cannot play this "replacing" role: using a heart to replace a whole panel doesn't work as well (c), even when it's used in a context where it could be possible (d):


So, not all elements can replace actions in panels. Recently, though, I stumbled on another one that can, in the comic Rickety Stitch and the Gelatinous Goo, where an imp uses magic to transform itself to look like a gnome:


Again, a full panel here does not depict the action, but replaces the event, leaving it to be inferred. In this case, the "poof cloud" provides a particularly useful covering for avoiding the representation of the physical transformation (which might be a pain to draw). Instead, this transformation is left to the audience's imagination.

In many cases, the onomatopoeia is not needed for these replacement panels, and I've found examples both with and without text. Similar replacement seems to occur without the visual language of the images (clouds, stars), and with the onomatopoeia alone, as in this Calvin and Hobbes strip:


So, onomatopoeia and these replacing panels can go together or appear separately. All in all, though, we seem to have an overarching technique for "replacing actions" with visual and/or verbal information, which prompts an inference about the missing information. In the case of the visual information, it seems we have at least three systematic usages: action stars, fight clouds, and poof clouds. Perhaps there are more?


Wednesday, August 26, 2015

New paper: Action starring narrative and events

Waaaay back in 2008 I first posted about a phenomenon in comics that I called an "action star", such as the third panel in this sequence:


I argued that these panels force a reader to make an inference about the missing information (in this case Snoopy getting hit by football players), and that these images also play a narrative role in the sequence—they are narrative climaxes. Because this inference concerns information omitted within the panel itself, it is different from the type of "closure" proposed by McCloud to take place between the panels. Rather, you need to get to the last panel to figure out what happened in the one prior, not what happens between panels 3 and 4.

So, to test this back 7 years ago, I ran a few experiments...

At long last, those studies are now published in my new paper, "Action starring narratives and events," in the Journal of Cognitive Psychology. Though McCloud positioned inference as one of the most important parts of sequential image understanding over 20 years ago, and this has been stressed in nearly all theories of comics, this is one of the first papers to explore inference with actual experiments. I know of a few more papers that will be following too, both by me and others. Exciting!

You can find the paper along with all of my other downloadable papers, or you can check it out directly here (pdf).

Here's the full abstract:

Studies of discourse have long placed focus on the inference generated by information that is not overtly expressed, and theories of visual narrative comprehension similarly focused on the inference generated between juxtaposed panels. Within the visual language of comics, star-shaped “flashes” commonly signify impacts, but can be enlarged to the size of a whole panel that can omit all other representational information. These “action star” panels depict a narrative culmination (a “Peak”), but have content which readers must infer, thereby posing a challenge to theories of inference generation in visual narratives that focus only on the semantic changes between juxtaposed images. This paper shows that action stars demand more inference than depicted events, and that they are more coherent in narrative sequences than scrambled sequences (Experiment 1). In addition, action stars play a felicitous narrative role in the sequence (Experiment 2). Together, these results suggest that visual narratives use conventionalized depictions that demand the generation of inferences while retaining narrative coherence of a visual sequence.

Cohn, Neil, and Eva Wittenberg. 2015. Action starring narratives and events: Structure and inference in visual narrative comprehension. Journal of Cognitive Psychology.

Wednesday, August 19, 2015

Comicology conference in Japan

For anyone who might happen to be in Japan at the end of next month, I'll be speaking at Kyoto Seika University's upcoming conference, Comicology: Probing Practical Scholarship from September 25-27th. The conference will be hosted by the Kyoto International Manga Museum, and there's an impressive lineup of speakers, so it should be a great time.

You can find more information online here (link in Japanese... looks like their English site hasn't been updated with it yet), though you can email for information here.

Here's the official poster (right click on it to check out a larger version):


I'll actually be doing a few speaking/workshops while I'm in Japan, both in Tokyo and Kyoto. Most are by invitation only, but you can email me if you're interested in learning more. My talk as part of the Comicology conference will be on Saturday the 26th.

I'm very excited to meet many of the other speakers, and it will especially be nice to see Natsume Fusanosuke again, given the great time I spent with him the last time I spoke in Japan.



(Interesting tidbit: yes, ニール•コーン is the standard way to write my name in katakana, though when I was living in Japan I started using my chosen kanji of 公安寝留. If you read kanji, it might help to know there's a little Buddhist joke in it, a remnant of my undergrad studies. I did that in part because my last name is how you spell "corn" in Japanese. I still use my hanko stamp with the kanji, and I used to have it on my business card up until just this year).