As we’ve read, there is a pretty well-established screen grammar for the basic elements of camera work.  Make the camera look down on someone and you diminish them; make the camera look up at someone and you make them seem larger than life; and so on.

But how much further can we take this?  To use a musical analogy, this is like saying that chord X sounds happy and chord Y sounds sad.  But how do we approach the analysis of an entire symphony?  Properties of user experience emerge out of complicated systems (like TV shows or symphonies) that can’t necessarily be explained by the individual objects in the system (the camera movements or the particular chords).  What made me think of this was my boyfriend’s reaction to a TV show he was watching this afternoon (discussed with me over gchat):

so now… is this man Moriarty, or Mycroft?
YES I WAS RIGHT!  EXCELLENT!  BRILLIANT I LOVE IT HAHAHAH EXCELLENT

This show clearly made him feel particularly crafty and intelligent, something he obviously enjoyed.  This is not a user experience property that one element, like a camera panning to the left, can produce.  So… how do we get there?  How do we begin to tackle an analysis of the emergent properties of a system?  Or is this simply too complicated to reduce?

Can we take this argument a step further?  Can we say that interaction experiences are so hard to articulate in terms of atoms (we have no equivalent of “screen grammar” yet) because nearly every part of the user experience is an emergent property arising out of the interaction between the formal properties of the object and the user?  In other words, are the “atoms” of interaction design a step further removed from user experiences than they are in other, less interactive media?
