
This is the skeleton of my outline of what I want to talk about in my paper:

Human-food interaction requires third wave/experience design

Framework [Celebratory]

  • Positive experiences

Projects that want to design for experience

Food Media/CoDine

  • Goal with experiences
  • What they did wrong

Telematic Dinner Party

  • What they did better
  • What they still did wrong

Inform future experience design for human-food interaction

Food Journey (Capstone Process)

  • Focus on the experience people have
  • Low-fidelity prototypes/simulation to get the positive experience right before building a high-fidelity prototype

So I just finally pieced together what I want to do and am currently pulling quotes from different papers. The basic idea comes from Don Norman’s Emotional Design.

When machines display emotions, they provide a rich and satisfying interaction with people, even though most of the richness and satisfaction, most of the interpretation and understanding, comes from within the head of the person, not from the artificial system.

I basically want to argue that emotional intelligence is important for the future development of computers and robots. I will contrast R2-D2 and C-3PO with Siri and Cortana (Apple and Windows Phone) and show the difference between interactions with systems that are capable of emotional intelligence and systems that only interpret commands.

For example, the other day Jeff Gadzala was showing off Cortana and trying to get it to make a reference to the video game. Unfortunately, Cortana took him literally (“Cortana, can you tell me about Master Chief?”) and gave him a wiki answer! Had his phone been able to recognize the emotional register (casual, joking), it could have offered a joke or two instead.
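To make that contrast concrete, here is a minimal, purely hypothetical sketch (nothing like Cortana’s real pipeline): it assumes the speech front end could hand the assistant a tone estimate, say from prosody or context, alongside the transcribed query, and lets that estimate pick between a literal lookup and a playful reply.

```cpp
// Hypothetical sketch only: an assistant that branches on an assumed "tone"
// signal instead of interpreting every query as a literal command.
#include <iostream>
#include <string>

enum class Tone { Serious, Playful };  // assumed output of some upstream classifier

std::string respond(const std::string& query, Tone tone) {
    if (tone == Tone::Playful) {
        // Emotionally aware branch: play along with the joke.
        return "I was named after her, you know. Want some Halo trivia instead?";
    }
    // Command-interpreting branch: what actually happened, a literal lookup.
    return "Here is the encyclopedia entry for: " + query;
}

int main() {
    std::cout << respond("Tell me about Master Chief", Tone::Serious) << "\n";  // wiki answer
    std::cout << respond("Tell me about Master Chief", Tone::Playful) << "\n";  // banter
    return 0;
}
```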

I am probably going to dissect each example based on the readings (Sutcliffe; McCarthy and Wright; Folkman; Bardzell and Bardzell) and show why emotional intelligence is important.

My question is: does this seem reasonable and narrowed down enough? Are there any seminal papers I am missing? Other thoughts and concerns?

The discussion we had on Tuesday reminded me this morning of a quote from Nelson and Stolterman in The Design Way:

“We are lame gods in the service of prosthetic gods.”

The word “prosthetic” was, I think, carefully chosen. According to the dictionary, a prosthesis is, “A device, either external or implanted, that substitutes for or supplements a missing or defective part of the body.” It’s an approximation, at best, of an organic limb or organ.

We closed class by establishing that Kieślowski used formalistic techniques to approximate the inarticulate felt experience of longing, and that this formalistic approximation was analogous to what we do as designers.

In the same way Kieślowski at best could only approximate that inarticulate felt experience, we can only approximate how people will react to and use our designs. Because of our education and experience we can make a pretty damn good guess, but a guess is the best we can hope for.

Technology is a means by which we create prosthetics for our bodies and minds. We can remember things better, communicate over greater distances, and access information more readily than ever before in human history. But in the same way a prosthetic arm can’t communicate a sense of touch, our technology can only increase our abilities so much.

The best we can hope for is an approximation: there are a million to-do list mobile apps, but I still manage to forget to post on this blog; I can FaceTime with Hillary in Philadelphia, but it can never compare to sitting across a dinner table from her; I can look up Nelson Mandela’s birthday on Wikipedia in an instant, but the same article could also describe Mr. Mandela as the spawn of Cthulhu. I think this relates heavily to several of Dennis’ posts from earlier in the semester regarding the danger/necessity of normative thinking in design practice.

We build prosthetics, supplements, substitutes, extensions…but nothing more. But my question is: Why not? Why can’t we do better than that? Is it a human shortcoming? Is our technology not “advanced” enough?

The philosophical version of that question could be this: If we could easily manipulate the very fabric of our reality, would we then be able to design the ‘perfect’ prosthesis? What do you think?

Regardless of the mental exercise that a lot of these readings present, I can’t help but jump to: how is this helping me become a better designer? How is this pushing me to think of different design paradigms? (This has no real conclusion… you have been warned.)

So one thing that definitely stood out was the idea of context: how a narrative builds not only on “cultural categories, norms, and conceptual schemes,” but on being part of that context, and how the lack of that context can alter meaning-making. I see this as related to our previous reading on how, through the narrative, the author taps into the sensorium and tries to trigger that reaction empathically (this person feels fear, so I feel fear).

“In these cases, it seems to me that once one excerpts these quotations from their narrative contexts, the danger that has been building up in the story disappears, and primarily only the anomaly remains in a way which, my theory predicts, is apt to cause laughter.” (p 252, Horror and Humor)

 

Apply this to, say, interaction design for mobile devices, where the narrative is not continuous. When we design for the user journey, we lean more heavily on the aesthetic codings we embed in the interaction, since each point of interaction might be brief. Alternatively, we also have an opportunity to build that narrative in broader terms, where the journey is everyday activity.

What other paradoxes are we designing through aesthetic codings and context building in digital experiences?

I was going to argue about why certain games and movies do not stand the test of time, and more specifically why older games will die faster than older movies.

I want to be clear: I am excluding games like Tetris and old Mario, games with simple mechanics. I want to talk about more complex game mechanics and how complex interactions have gotten better over time, which as a result makes the mechanics of previous games feel sluggish and problematic. I am claiming that the better, more advanced versions of certain interactions are slowly killing off the older games, the same way better visual-effects-driven movies are sort of ruining the older ones. If you do not have nostalgia associated with an older game, you will have a hard time playing it, or even understanding why it was such a good game when it came out.

Example: re-watch Star Wars: The Phantom Menace. Do not think about the movie; look especially at the visual effects. Back when it came out, it was the most spectacular thing you had seen; now you can easily see the pixels! I am not saying this ruins movies like the old-school Clash of the Titans. I am saying visual effects are killing themselves.

Similarly, try going back and playing Half-Life, or even the first Assassin’s Creed. The interactions specifically feel problematic. For example, in newer games, if your character reaches an edge, they do not fall off; they step near the edge and back up unless you force them to fall. In older games like Prince of Persia and even Zelda, if you reached a corner and stepped a little farther than you should have, you would fall, and for long-distance jumps you had to get the exact steps. In today’s games the mechanics sort of compromise and complete the task anyway. So I propose that if you go and play an older game, these tiny differences will add up and make it a frustrating experience.
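As a rough illustration of that “edge assist” difference, here is a hedged sketch; the names, numbers, and clamping rule are mine for illustration, not any particular engine’s.

```cpp
// Sketch of the two behaviours described above: literal movement (older games)
// versus movement clamped at the ledge unless the player commits (newer games).
#include <iostream>

struct Character {
    double x = 0.0;                 // position along a platform
    bool   committedToDrop = false; // player explicitly chose to go over the edge
};

const double LEDGE_X = 10.0;        // where the platform ends (illustrative value)

// Older behaviour: input maps directly to movement; one step too far and you fall.
double stepLiteral(Character& c, double dx) {
    c.x += dx;
    return c.x;
}

// Newer behaviour: movement is clamped at the ledge unless the player commits.
double stepWithEdgeAssist(Character& c, double dx) {
    double next = c.x + dx;
    if (next > LEDGE_X && !c.committedToDrop) {
        next = LEDGE_X;  // the character "steps near the edge and backs up"
    }
    c.x = next;
    return c.x;
}

int main() {
    Character oldHero, newHero;
    oldHero.x = newHero.x = 9.5;
    std::cout << "literal step:     x = " << stepLiteral(oldHero, 1.0) << "  (fell off)\n";
    std::cout << "edge-assist step: x = " << stepWithEdgeAssist(newHero, 1.0) << "  (held at ledge)\n";
    return 0;
}
```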

I need to do more research.

 

“We do not experience any movie only through our eyes. We see and feel films with our entire bodily being, informed by the full history and carnal knowledge of our acculturated sensorium.” “Even if the body is often forgotten or not consciously experienced by spectators while watching a film, it nevertheless represents the irreducible condition of the possibility of sensory and aesthetic experience.” (p. 116, “Cinema as Skin and Touch”)

The quote above really resonated with me as it relates to my capstone. Through my pre-writing I became aware of this idea of the “sensorium,” and of a book called Sensorium: Embodied Experience, Technology, and Contemporary Art, whose main argument is that “embodied experience through the senses (and their necessary and unnecessary mediations) is how we think” (p. 5).

In particular, these two quotes:

“The making of subjects through psychological discourses and pharmacology, I argue, is part of what Foucault so provocatively termed “technologies of the self.” Our bodies do not allow us to “escape” from technological mediation – they are themselves mediating apparatuses, without which there can be no knowledge of the world.” (p. 2)

“Our current yearnings for materiality, for thingness, for the concrete stuff of the physical world are here located in the body’s desiring negotiations with the virtual and the mediated – ever more intimately naturalized as the sensory technological envelope in which we live.” (p. 4)

and

“In conjunction with the visuality historians have charted as characteristic of the modern, we should begin to reckon the auditory, the olfactory, and the tactile as similarly crucial sites of embodied knowledge. The resulting set of experiences can be called a sensorium: the subject’s way of coordinating all the body’s perceptual and proprioceptive signals as well as the changing sensory envelope of the self.” (p 8)

(Arning, B., Farver, J., Hasegawa, Y., Jacobson, M., and Jones, C. Sensorium: Embodied Experience, Technology, and Contemporary Art. MIT Press, 2006.)

I think this is extremely relevant to our roles as designers, in particular when it comes to creating digital experiences. I feel we have been neglecting the rest of our bodies when designing digital experiences. We are designing for people who move around in the world, who think, feel, see, hear, and smell, not just see. So how can we leverage the ideas presented in the “Cinema as Skin and Touch” article, even if the interactions we craft are not tangible in some way?

Alternatively, with the hype that is surrounding the Internet of Things, and ubiquitous computing, there is definitely room for embodied interactions.

What is more interesting to me is this idea of thinking through our senses: how we make sense of the world, or of an experience, how we can index that “carnal knowledge,” and, through it, communicate with others… How much richer could our interactions be if we leveraged other senses? Where is the balance between overwhelming the senses and crafting an out-of-the-ordinary sensory experience?

I think this also lends itself very well to third wave approaches (not just first wave approaches of processing information from different senses), in the sense that by designing contingent designs for contingent individuals, we cannot ignore the physical world. So how do contextual experiences vary from individual to individual? Can we facilitate physical states through our experience and interaction design?

What do y’all think?

So here is a list of some Dark Patterns I’ve encountered across my capstone travels. Feel free to critique and comment, and see if you can find the Persuasive Pattern I’ve snuck in amongst the lot (they’re the good guys, at least from my research so far!).

 

[Screenshots: LinkedIn social proof; next.co.uk trammel net pattern; NYCC Twitter; persuasive trammel net; Network Solutions; Apple dark patterns; web wizards]

I don’t know if it is just me. Maybe it’s because one of the first things we were introduced to when we started this program was good design from an industrial design perspective (like the tissue box and door handles from IDP), but I’ve always thought that designers have to think not only about screens but also about the everyday things around us, and in some cases about the combination of the two. When thinking about something to design, I tend to like to think of both. What I realized when looking at job descriptions for UX designers and interaction designers is that most companies think of us as purely screen-oriented designers (though there is the matter of different-sized screens). This isn’t a surprise to me, but I’m starting to be conscious of how narrow a view some people have of us: that UX and interaction design deal solely with screens.

I don’t think I like that. I like working on things that go on screens, but I don’t want to think that that is ALL of our work. That makes it seem like screens are the only answer when there is a lot more possibility and potential. This really struck home during a talk at Interactions arguing that UX designers should work more closely with industrial designers: we don’t want everything to just be screens, because screens aren’t what stay in people’s hearts. Objects are the things people tend to treasure.

Anyway, since we were talking about critical design recently and thinking about designing for the world we want, it made me think of this awareness that has awoken in me. While I doubt I’ll be doing critical design when I go into the workplace (it will probably be lots of screens), learning about critical design has really started making me aware of how the world is now and how, as a designer, I might like the world to be. I think that once this way of thinking becomes more ingrained in me, it will affect the choices I make and the directions I take my designs, and maybe one day I will work on a project that moves the world toward “my” ideal.

 


“This studio revolves around the exploration of (tangible and actuated) interactive products and systems by means of physical sketching and prototyping. It is a hands-on studio where cardboard modeling techniques are combined with Arduino controlled sensors and actuators (the advanced cardboard modeling platform) to explore the notion of ‘the aesthetics of the third way’. The ‘aesthetics of the third way’ recognizes different approaches to ‘dematerialization’ (the process of the physical becoming digital, e.g., LPs and CDs become digital files and lose the physical media) and tries to balance the qualities of both the physical and the digital in a new manner.”

http://www.tei-conf.org/14/studios.php#s1

This studio session was conducted by Dr. ir. Joep (J.W.) Frens, Assistant Professor at Technische Universiteit Eindhoven, at the 8th International Conference on Tangible, Embedded and Embodied Interaction.

Joep is actively researching the ‘dematerializing’ of everyday objects and how digital interfaces and electronics can be combined to create products or experiences that aesthetically read as one piece: using the “power of programming” and analog materials to create artifacts that show no signs of separation. This process and its goal are what Joep refers to as the ‘third way’, not to be confused with the ‘third wave’ of Human-Computer Interaction/Design.

Examples of cardboard modeling:

[Photos: examples of cardboard models]

For more information on cardboard modeling, you can go here: http://cardboardmodeling.com/
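To give a flavor of the “Arduino controlled sensors and actuators” half of the platform, here is a speculative sketch in the spirit of the studio’s design brief mentioned below (a new experience for remembering to take medicine): imagine a cardboard pillbox with a photoresistor under the lid and a small vibration motor in the base. Every pin number, threshold, and timing here is invented for illustration.

```cpp
// Speculative Arduino-style sketch: a cardboard pillbox that physically nags you
// if the lid hasn't been opened for too long. All values are made up.
const int LIGHT_SENSOR_PIN = A0;   // photoresistor under the lid: lid open = bright
const int MOTOR_PIN        = 9;    // small vibration motor glued into the cardboard base
const unsigned long REMIND_AFTER_MS = 8UL * 60UL * 60UL * 1000UL;  // roughly 8 hours

unsigned long lastOpened = 0;

void setup() {
    pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
    // Treat a bright reading as "the lid was opened and the medicine was taken".
    if (analogRead(LIGHT_SENSOR_PIN) > 600) {
        lastOpened = millis();
    }
    // If too much time has passed since the lid was last opened, pulse the motor.
    if (millis() - lastOpened > REMIND_AFTER_MS) {
        digitalWrite(MOTOR_PIN, HIGH);
        delay(300);
        digitalWrite(MOTOR_PIN, LOW);
        delay(700);
    }
}
```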

After a few presentations, I found myself thinking about Google’s Nexus 5 phone. Though the studio session focused on ordinary products and scenarios (the design challenge was to create a new experience for reminding people to take their medicine), I couldn’t help but think about how I love the Nexus’s hardware but hate the Android experience.

The two do not seem like one piece. The Nexus feels great in the hand: smooth, sexy, simple. But Android and its customization are, for me, confusing and unpolished, and don’t feel integrated ‘with’ the phone.

This ‘integrated with’ approach, a perspective on aesthetics, is one reason I find products like the iPhone so successful. The majority of the software, though I find some things in iOS 7 problematic, feels a part of the phone: the physical and the digital at times are one.

Though Joep was discussing a method called the ‘third way’ (which I at first thought referred to the third wave; we had a discussion about that and he is going to change the title), the third wave in HCI, especially ubiquitous computing, does share the goal of creating computing that is integrated, everywhere, yet not obtrusive and sometimes not obvious.

http://www.forbes.com/sites/gordonkelly/2014/02/10/how-google-used-motorola-to-smack-down-samsung-twice/

This is a very interesting read on how Google is maintaining its dominance in the mobile market by stopping Samsung from developing its own OS. Samsung has been building on top of Android, hiding a lot of essential apps in favor of its own, and putting its UI, TouchWiz, at the front of the experience.

Samsung’s goal has been to use Android as a platform to build its own services and UI while slowly dropping Android in favor of its own OS, Tizen.

“Samsung…began building its own Android rival – Tizen – which, thanks to its TouchWiz interface, looks identical to the casual observer. The long term strategy was clear: switch over to Tizen and take the majority of the handset market with it. Google had to act.”

Should we consider copying part of an OS’s functionality or aesthetic as counterfeit?

Counterfeit: “made in imitation of something else with intent to deceive”

http://www.merriam-webster.com/dictionary/counterfeit