Sunday, April 22, 2018

New paper: Combinatorial morphology in visual languages

I'm very pleased to announce that my newest paper, "Combinatorial morphology in visual languages," has now been published in a book collection edited by Geert Booij, The Construction of Words: Advances in Construction Morphology. The overall collection looks excellent and is a great resource for linguistic work on morphology across domains.

My own contribution makes a first attempt to formalize the structure of combinatorial visual morphology—how visual signs like motion lines or hearts combine with their "stems" to create a larger additive meaning.

This paper also introduces a new concept for these types of signs. Since various visual morphemes are affixes—like the "upfixes" that float above faces—it raises the question: what are these affixes attaching to? In verbal languages, affixes attach to "word" units. But visual representations don't have words, so this paper discusses what type of structure would be required to fill that theoretical gap, and formalizes it within the parallel architecture model of language.

You can download a pre-print of the chapter here (pdf) or on my downloadable papers page.

Abstract

Just as structured mappings between phonology and meaning make up the lexicons of spoken languages, structured mappings between graphics and meaning comprise lexical items in visual languages. Such representations may also involve combinatorial meanings that arise from affixing, substituting, or reduplicating bound and self-standing visual morphemes. For example, hearts may float above a head or substitute for eyes to show a person in love, or gears may spin above a head to convey that a person is thinking. Here, we explore the ways that such combinatorial morphology operates in visual languages by focusing on the balance of intrinsic and distributional construction of meaning, the variation in semantic reference and productivity, and the empirical work investigating their cross-cultural variation, processing, and acquisition. Altogether, this work draws parallels between the visual and verbal domains that can hopefully inspire future work on visual languages within the linguistic sciences.


Cohn, Neil. 2018. Combinatorial morphology in visual languages. In Booij, Geert (Ed.), The Construction of Words: Advances in Construction Morphology (pp. 175-199). London: Springer.

Tuesday, April 10, 2018

Workshop: How We Make and Understand Drawings

A few weeks back I had the pleasure of doing a workshop with Gabriel Greenberg (UCLA) about the understanding of drawings and visual narratives at the University of Connecticut. The workshop was hosted by Harry van der Hulst from the Linguistics Department, and we explored the connections between graphic systems and the structure of language. UConn has now been nice enough to put our talks online for everyone, and I've posted them below.

On Day 1, Gabriel first talked about his theory of pictorial semantics. I then presented my theory of the structure of the "visual lexicon(s)" of drawing systems, followed by how children learn to draw. This covered what it means when people say "I can't draw," which was the topic of my papers on the structure of drawing.

On Day 2, we covered the understanding of sequential images. Here our views diverged, with Gabriel taking more of a "discourse approach," while I presented my theory of Visual Narrative Grammar and several of the studies supporting it. I finished by presenting my "grand theory of everything": a multimodal model of language and communication. Unfortunately, the mic ran out of batteries on the second day and we didn't know it, so the sound is very soft. But if you crank up the volume and listen carefully, you should (hopefully) be able to hear it.

Thursday, February 15, 2018

New Paper: In defense of a “grammar” in the visual language of comics

I'm excited to announce that my new paper, "In defense of a 'grammar' in the visual language of comics" is now published in the Journal of Pragmatics. This paper provides an overview of my theory of narrative grammar, and rigorously compares it against other approaches to sequential image understanding.

Since my proposal that a "narrative grammar" operates to guide meaningful information in (visual) narratives, there have been several critiques and misunderstandings of how it works. Other approaches have also been proposed as counterpoints. I feel all of this is healthy for the development of a theory and (hopefully) a broader discipline.

In this paper I address some of these concerns. I detail how my model of Visual Narrative Grammar operates and I review the empirical evidence supporting it. I then compare it in depth to the specifics and assumptions found in other models. Altogether I think it makes for a good review of the literature on sequential image understanding, and outlines what we should expect out of a scientific approach to visual narrative.

The paper is available on my Downloadable Papers page, or direct through this link (pdf).

Abstract:

Visual Language Theory (VLT) argues that the structure of drawn images is guided by cognitive principles similar to those of language, foremost a “narrative grammar” that guides the ways in which sequences of images convey meaning. Recent works have critiqued this linguistic orientation, such as Bateman and Wildfeuer's (2014) arguments that a grammar for sequential images is unnecessary. They assert that the notion of a grammar governing sequential images is problematic, and that the same information can be captured in a “discourse”-based approach that dynamically updates meaningful information across juxtaposed images. This paper reviews these assertions, addresses their critiques of a grammar of sequential images, and then details the shortcomings of their own claims. Such discussion is directly grounded in the empirical evidence about how people comprehend sequences of images. In doing so, it reviews the assumptions and basic principles of the narrative grammar of the visual language used in comics, and it aims to demonstrate the empirical standards to which theories of comics' structure should adhere.


Full reference:

Cohn, Neil. 2018. In defense of a "grammar" in the visual language of comics. Journal of Pragmatics 127: 1-19.

Tuesday, January 23, 2018

My friend, Martin Paczynski

It was with much surprise and a heavy heart that I learned last week that my friend and colleague Dr. Martin Paczynski had suddenly passed away. Martin and I met in 2006 when I entered graduate school at Tufts University, and he was the first graduate student working with our mentor Gina Kuperberg (I was her second). He quickly grew to be a close collaborator, a mentoring senior student, my first choice for brainstorming, and my best friend throughout graduate school. Here, I'll honor his place in the sciences and in my work.

It's always a nice benefit when your closest colleagues are smarter than you, and that means Martin's influence on me and my research is everywhere. He essentially trained me in using EEG, and he helped me formulate and analyze countless studies. Though he started the program a year before me, we graduated together, which I think made it all the more special.

Though he initially studied computer science and worked in that field, Martin's graduate work at Tufts focused on the neurocognition of linguistic semantics, though he was knowledgeable in many more fields. His early work focused on aspects of animacy and event roles. He later turned to issues of inference like aspectual coercion—where we construe an additional meaning about time that isn't in a sentence, such as the sense of repeated action in sentences like "For several minutes, the cat pounced on the toy." His experiments were elegant and brilliant.

Our collaborative work on my visual language research started with my first brain study, for which Martin was the second author. After graduate school we co-authored our work on semantic roles of event building, which united our research interests. This continued until just recently, as my most recent paper again had Martin as my co-author, directly following our earlier work, almost 6 years after we left graduate school together. And it wasn't just me: he is a co-author on many, many people's work from our lab, which speaks to both his generosity and his insightfulness.

Authorship wasn't his only presence in my work. If you've ever seen me give a talk that mentions film, you've seen him starring in the video clips I created as examples (him walking barefoot down our grad office hallway... a frequent sight). If you look at page 85 of my book, there's Martin, shaking hands with another friend:


After graduation, Martin's interests moved away from psycholinguistics and toward research on mindfulness, stress, and other clinical and applied aspects of neurocognition. For many years he talked about one day studying architecture and design using EEG, but he hadn't yet implemented those ideas. There seemed to be no topic he couldn't excel at when he applied himself.

He was warm, kind, creative, funny, brilliant, and intellectually generous. I like to especially remember him with a mischievous grin, foreshadowing a comment which would inevitably be both hilarious and astute.

The sciences have lost a spark of insight in Dr. Martin Paczynski, and the world has lost a progressive and compassionate soul. I've lost that and more. Safe travels, my friend.