Tuesday, October 27, 2015

New Paper: Narrative Conjunction's Junction Function

I'm excited to announce that my new paper, "Narrative Conjunction's Junction Function," is now out in the Journal of Pragmatics! This is the first major theoretical paper I've had in a long time, and it goes into extensive detail about several aspects of my theory of how narrative image sequences are comprehended, Visual Narrative Grammar.

The main topic of this paper is "conjunction," which occurs when multiple panels are grouped together and play the same role in a sequence. I argue that this narrative pattern is mapped to meaning in several different ways. In addition to these arguments, the paper provides a fairly extensive treatment of the basics of my narrative theory, along with the underlying logic that guides it (i.e., diagnostic tests).

You can find the paper here (pdf), along with my other downloadable papers. Here's the full abstract:


While simple visual narratives may depict characters engaged in events across sequential images, additional complexity appears when modulating the framing of that information within an image or film shot. For example, when two images each show a character at the same narrative state, a viewer infers that they belong to a broader spatial environment. This paper argues that these framings involve a type of “conjunction,” whereby a constituent conjoins images sharing a common narrative role in a sequence. Situated within the parallel architecture of Visual Narrative Grammar, which posits a division between narrative structure and semantics, this narrative conjunction schema interfaces with semantics in a variety of ways. Conjunction can thus map to the inference of a spatial environment or an individual character, the repetition or parts of actions, or disparate elements of semantic associative networks. Altogether, this approach provides a theoretical architecture that allows for numerous levels of abstraction and complexity across several phenomena in visual narratives.

Cohn, Neil. 2015. "Narrative conjunction’s junction function: The interface of narrative grammar and semantics in sequential images." Journal of Pragmatics 88:105-132. doi: 10.1016/j.pragma.2015.09.001.

Tuesday, October 13, 2015

Emoji and visual languages

I'm excited that my recent article on the BBC website about emoji has gotten such a good response. So, I figured I'd write an addendum here on my blog to expand on things I didn't get a chance to say in the article. I of course had a lot to say, and it was inevitable that not everything could be included.

The overall question I was addressing was, "are emoji a visual language?" or "could emoji become a visual language?" My answer to both of these is "no."

Here's a quick breakdown of why, which I say in the article:

1. Emoji have a limited vocabulary set that is made of whole-unit pieces, and that vocabulary has no internal structure (i.e., you can't adjust the mouth of the faces while keeping other parts constant, or change the heads on bodies, or change the position of arms)

2. Emoji force these stripped-down units into unit-unit sequences, which just isn't how drawings work to communicate. (More on this below)

3. Emoji use a limited grammatical system, mostly using the "agent before act" heuristic found across impoverished communication systems.

All of these things limit emoji from being able to communicate like actual languages. Plus, these also limit emoji from communicating like actual drawings that are not mediated by a technological interface.

There are two addenda I'd like to offer here.

First, these limitations are not confined to emoji. They are limitations of every so-called "pictogram language," which are usually created to be "universal" across spoken languages. Here, the biggest problem is in believing that graphic information works the way that writing does: putting individual units, each of which has a "single meaning," into a unit-unit sequence.

However, drawings don't work this way to communicate. There are certainly ways to put images in sequence, such as what is found in the visual language of comics. The nature of this sequencing has been my primary topic of research for about 15 years. When images are put into sequence, they have characteristics unlike anything found in these "writing-imitative" pictogram sequences.

For example, actual visual language grammars typically depict events across the image sequence. This requires repeating the same information from one image to the next, slightly modified to show a change in state. Consider this emoji sequence:

This can either be seven different monkeys, or it can be one monkey at seven different points in time (and recognizing this difference requires at least some cultural learning). Visual language grammars allow for both options. Note, though, that the grammar doesn't parcel out the monkey as separate from its actions. It does not read "monkey, cover eyes" and then "monkey, cover mouth," etc., where the non-action monkey gives only object information and the subsequent one gives only action information. Rather, both object and event information are contained in the same unit.

So, what I'm saying is that the natural inclination for grammars in the visual modality is not like the grammars that operate in the verbal or written modalities. They simply don't work the same way, and pushing graphics into that mold will never succeed, because it goes against the way our brains have been built to deal with graphic information.

Again: No system that strips down graphics into isolated meanings and puts them in a sequence will ever communicate on par with actual languages. Nor will it communicate the way that actual visual languages do...

And this is my second point: There are already visual languages in the world that operate as natural languages and don't have the limitations of emoji.

As I describe in my book, The Visual Language of Comics, the structure of drawing is naturally built like other linguistic systems. It becomes a "full" visual language when a drawing system is shared across individuals (not a single person's style) and has 1) a large visual vocabulary that can create new and unique forms, and 2) the ability to put those vocabulary items into sequences with underlying hierarchic structure.

This structure often becomes the most complex and robust in the visual languages used in comics, but we find complex visual languages in other places too. For example, in my book I devote a whole chapter to the sand drawings of Australian Aboriginals, which constitute a full visual language far outside the context of comics (and one used in real-time interactive communicative exchanges). But, whether a drawing system becomes a full visual language or not, the basis for those parts is similar to other linguistic systems that are spoken or signed.

The point here is this: emoji are not a visual language, and can never be one because of the intrinsic limitations on the way that they are built. Drawings don't work like writing, and they never will.

However, the counterpoint is this: we already have visual languages out in the world—we just haven't been using them in ways that "feel" like language.

... yet.