Tuesday, April 26, 2011

Paper Reading #27 - TeslaTouch

Comments:
Comment 1
Comment 2

References:
Title: TeslaTouch: Electrovibration for Touch Surfaces
Authors: Olivier Bau, Ivan Poupyrev, Ali Israr, Chris Harrison
Venue: UIST '10, Oct. 3-6 2010.

Summary:
In this paper, the authors describe a method of generating tactile feedback on touch screens using only electricity. This technology, which requires no moving parts, can generate very realistic touch sensations when the user moves their hand across the touch screen.

The authors first describe how the technology works and what they used to test it. TeslaTouch works by applying a periodic electrical signal to a transparent electrode beneath a thin insulating coating on the surface. As the finger moves across the surface, the induced attractive force is perceived as friction by the user. To test the technology, the authors placed the coating on a 3M multitouch table, shown above.
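As a rough intuition for the mechanism (my own back-of-the-envelope model, not a formula taken from the paper), the fingertip and the electrode behave roughly like a parallel-plate capacitor separated by the insulating layer, so the attraction toward the surface grows with the square of the applied voltage:

```latex
% Rough parallel-plate approximation (illustrative notation, not from the paper):
%   F        attractive force pulling the fingertip toward the surface
%   \epsilon permittivity of the insulating layer
%   A        contact area of the fingertip
%   V        applied voltage,   d   insulator thickness
F \approx \frac{\epsilon A V^{2}}{2 d^{2}}
```

Modulating the voltage over time modulates this force, which the sliding finger perceives as changing friction.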

They then test how people react to the tactile feedback the technology generates. They found that they could produce a variety of sensations in users simply by slightly changing the frequency of the electrical signal. They also found that users can perceive this feedback at levels comparable to conventional vibration technology.

They then describe why this technology is better than current haptic vibration interfaces. Some of the reasons they cite are that it is silent, it can generate uniform tactile feedback over the entire surface, and it has no mechanical motion. However, traditional haptic vibration can generate stronger sensations than this system.

Discussion:
I am really excited by the prospect of this technology. I have thought for years that touch screens need better tactile feedback to truly work as well as traditional inputs, especially for tasks like typing. I really want to try this system myself, because it reportedly generates very convincing touch sensations.

I have two main concerns with this system, however. For multitouch surfaces, I am curious whether TeslaTouch can generate two different tactile sensations at once. If it can't, it's not a huge problem, but it could decrease the realism of the screen. Second, in the pictures above, you can see that the image appears slightly blurry. If the insulating layer makes the image less sharp, then they will need to find a better coating for this technology to be practical.


Paper Reading #26 - Critical Gameplay

Comments:
Comment 1
Comment 2

References:
Title: Critical Gameplay: Software Studies in Computer Gameplay
Author: Lindsay D. Grace
Venue: CHI 2010, April 10-15, 2010

Summary:
In this paper, the author describes tried-and-true game mechanics and then shows games that defy them. The objective of this research is both to highlight the design conventions that are common today and to show which game mechanics remain unexplored in the industry.

The first mechanic is friend-or-foe identification. Grace says that most games allow the player to quickly identify enemies by appearance. To counter this, Grace shows the game at right, called Black and White. In this game, enemies and friends are the same color, and the player must identify them by behavior rather than appearance.

Grace then continues by describing games built on mechanics like collection, violence, and rushing through the game, and then shows counterexamples that instead reward frugality, nonviolence, and calm observation.

Discussion:
I thought this paper was cool because it took all of the games I am used to playing and turned them around. Some of the mechanics, such as avoiding item collection, sounded like they would be really interesting to play.

Interestingly enough, I recently saw a game in which you have to judge characters by behavior rather than appearance, and it got really great reviews. So I am interested in seeing whether some of these other mechanics might be used in games to make them even more original.

(Image courtesy of: this paper)

Wednesday, April 20, 2011

Book Reading #8 - Living with Complexity

References:
Title: Living with Complexity
Author: Donald A. Norman
Editor: Julie Norman, 2011

Summary:
In this book, Donald Norman argues in defense of complexity in our daily lives -- at least when that complexity is actually necessary.

He begins by discussing the double standard we have regarding complexity. With many older devices, like musical instruments, we accept the complexity required to operate them -- especially if we are exposed to them when young. An example he uses is violins versus keyboards. Both have caused repetitive stress injuries to the people who use them, but people haven't sued violin makers over it, while they have sued keyboard makers.

Then, he discusses the psychological influences behind our perceptions of complexity. He mentions that often an increase in outer simplicity leads to an increase in inner complexity. Furthermore, even though we often talk about how we like simplicity, devices with more features, and thus more complexity, actually sell better. He also talks about how even simple devices can become complex when there is a lack of standardization. For example, locks are simple devices, but since which direction locks and which unlocks isn't standardized, remembering which way to turn a given lock is actually quite hard.

Finally, he briefly discusses how social influences affect complexity. He shows how observing someone use an unfamiliar device can help you learn it correctly. However, if that person has a poor idea of how the device works, their influence can be harmful.

Discussion:
This book was interesting because I find it odd that Norman is, in a way, stepping back from his early stances on design. In his early books, he seems to decry any complexity in devices, but in the last two books we have read, he seems far more accepting of small amounts of complexity in device designs.

I was also intrigued by the double standard he illustrates in how people describe devices. I can't talk to anyone without hearing how today's devices are too complicated, yet when device makers offer more minimalist designs, they don't sell. I actually look for maximum functionality when I buy things, and then just accept the cost of the learning curve.

(Image courtesy of: Hussman Funds)

Microblogs #10 - Living with Complexity

References:
Title: Living with Complexity
Author: Donald A. Norman
Editor: Julie Norman, 2011

Chapter #1:
Summary:
In this chapter, Norman presents his definition of complexity and explains how we sometimes actually like it. He talks about how, for most devices, people get frustrated by even a small amount of difficulty. However, for devices like musical instruments, which are extremely complicated, people actually enjoy spending time learning.

Discussion:
I thought his point about the complexity of musical instruments was actually quite interesting. I never really thought about how I treat the complexity of musical instruments differently from that of other devices.


Chapter #2:
Summary:
In this chapter, Norman discusses how simplicity is more often something we perceive than is actually there. He talks about how simple outer interfaces mean more complex innards, as well as how simplicity never sells.

Discussion:
This chapter was interesting because I thought his comment on how complex things sell better was funny yet true. I always try to get hardware that has more features, but at the same time I am willing to accept a learning curve. I am sure others do the same while simultaneously asking for simplicity -- which, as Norman says, may not be impossible, but is certainly difficult.


Chapter #3:
Summary:
In this chapter, Norman describes how quickly simple objects can become complex. From doorknobs to books, a lack of standardization and poor organization can turn simplicity into complexity. However, these problems can be fixed with better organization and proper design.

Discussion:
This chapter was interesting because of the toilet paper example. When he started it, the first thing I thought of was having two rolls with one held in reserve, but I liked hearing why having two open rolls is a bad idea. I think this example was the best way to illustrate his point.


Chapter #4:
Summary:
In this chapter, Norman describes what he calls social signifiers -- affordances created through the influence of others. From following other people's behavior to the influence of culture, how we use and perceive an object and its state can be heavily affected by how others do.

Discussion:
The main thing that interested me in this chapter was his example of a computer program that showed wear and tear on heavily used objects. I thought this was a great, natural way of displaying this information that I would never have thought of on my own.

Book Reading #7 - Why We Make Mistakes

References:
Title: Why We Make Mistakes
Author: Joseph T. Hallinan
Editor: Donna Sinisgalli, 2009

Summary:
In this book, Joseph Hallinan describes the psychological reasons behind why people make mistakes.

In each of the thirteen main chapters, Hallinan describes a flaw in our ways of thinking that causes us to make mistakes in our daily lives. Some of these include skimming, believing we are above average, wearing rose-colored glasses, and multitasking. In each, he uses many different examples to illustrate not only how we make mistakes, but also how the people and businesses around us exploit these mistakes for their own gain.

Finally, in the conclusion, he describes some small changes we can make in our thought processes to correct many of these errors. The main one he offers is to "think small" -- that is, to pay attention to small details, since that is where most of these mistakes, and the manipulations that exploit them, lie.

Discussion:
I really liked this book. Not only was each chapter entertaining to read, but I also learned a significant amount about how my mind works. Each chapter raised a lot of interesting questions that gave me something to think about during bus rides.

I didn't think the solutions he provides in the final chapter would be very useful, however. Many of the issues he describes in the book seem almost hard-wired into our psyches, so it is unlikely we could overcome them with any amount of self-training. Then again, thinking that way might itself be a mistake.

(Image courtesy of: Humanology)

Paper Reading #25 - Email Overload

Comments:
Comment 1
Comment 2

References:
Title: Agent-Assisted Task Management that Reduces Email Overload
Authors: Andrew Faulring, Brad Myers, Ken Mohnkern, Bradley Schmerl, Aaron Steinfeld, John Zimmerman, Asim Smailagic, Jeffery Hansen, and Daniel Siewiorek
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a new mail system that uses AI to organize e-mails into a set of tasks. They then show that this very different approach produces positive results.

They begin by describing the intricacies of the task system. When e-mails arrive in the inbox, an AI assistant parses them and tries to figure out which task category each message belongs to. It then either files the message under a category or places it in an area where the user can decide.
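To make this triage step concrete, here is a minimal sketch of how such a step might work; the category names, keyword scoring, and confidence threshold are my own stand-ins for the paper's learned classifier, not the authors' implementation.

```python
# Hypothetical sketch of AI-assisted e-mail triage (not the authors' code).
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for filing a message without asking the user

@dataclass
class Email:
    subject: str
    body: str

def classify_email(email: Email) -> dict:
    """Stand-in for the learned classifier: return a normalized score per task category."""
    text = (email.subject + " " + email.body).lower()
    keywords = {
        "schedule meeting": ["meeting", "schedule", "calendar"],
        "answer question": ["?", "question", "how do"],
        "file document": ["attached", "report", "document"],
    }
    scores = {cat: float(sum(text.count(k) for k in kws)) for cat, kws in keywords.items()}
    total = sum(scores.values()) or 1.0
    return {cat: s / total for cat, s in scores.items()}

def triage(email: Email) -> str:
    """File the e-mail under a task category, or defer to the user when unsure."""
    scores = classify_email(email)
    best_category, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score >= CONFIDENCE_THRESHOLD:
        return best_category            # confident: place directly into a task category
    return "needs user decision"        # low confidence: put it where the user can choose

print(triage(Email("Quarterly report", "The report is attached for your files.")))
```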

In addition, the e-mail client provides a scheduling interface, which also includes an AI assistant. This assistant looks through the e-mail tasks, assigns what it believes to be a reasonable amount of time to each, and prioritizes the user's future schedule. The user can then choose which tasks to work on.

They then show the effect of this system on productivity. People using the system with both the AI task assistant and the e-mail assistant tend to complete more meaningful tasks than those who do not. With only the e-mail assistant, users complete more tasks overall, but those tasks tend to be less important.

Discussion:
I was actually quite excited about this research. Having a small AI assist me with my tasks seems like something straight out of science fiction. Additionally, even at this stage it seems to work well, so I hope they can bring this to market soon.

One concern I have with the software is that the authors do not describe how configuration will work. I am curious whether a final design would allow editable categories, or whether, because of how the AI works, there will only be preset task categories.

(Image courtesy of: Download Software)

Paper Reading #24 - Finding Your Way

Comments:
Comment 1
Comment 2

References:
Title: Finding Your Way in a Multi-dimensional Semantic Space with Luminoso
Authors: Robert Speer, Catherine Havasi, Nichole Treadway, and Henry Lieberman
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a program called Luminoso that provides an interface for analyzing sets of text and visualizing the relationships within them.

The Luminoso system displays the relationships between text sets in N dimensions. These dimensions are created by first counting the occurrences of words in each document and analyzing the meanings of the most frequent words. The system then groups words with similar meanings and assigns a dimension to each group.
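For a rough idea of how dimensions like these can be derived, here is a minimal sketch under my own assumptions, using a plain truncated SVD over word counts; the real Luminoso pipeline also blends in background knowledge and is considerably more involved.

```python
# Illustrative sketch: deriving a small "semantic space" from word counts with a
# truncated SVD. The example documents and the choice of k are invented.
import numpy as np
from collections import Counter

documents = [
    "the battery life is great",
    "the battery drains too fast",
    "the screen is bright and great",
]

# Document-by-word count matrix.
vocab = sorted({w for doc in documents for w in doc.split()})
counts = np.array(
    [[Counter(doc.split())[w] for w in vocab] for doc in documents], dtype=float
)

# Keep the top-k singular vectors as the semantic dimensions.
k = 2
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
doc_coords = U[:, :k] * S[:k]   # each document's coordinates in the semantic space
word_axes = Vt[:k]              # each dimension expressed as a weighted bundle of words

for dim, axis in enumerate(word_axes):
    top_words = [vocab[i] for i in np.argsort(-np.abs(axis))[:3]]
    print(f"dimension {dim}: {top_words}")
print(doc_coords)
```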

To examine the data, the program provides the interface shown above. Lines connect the words related to the current selection, and colors indicate how "hot" a particular relation is, from white at the highest down to red. The user navigates by selecting a particular point and then rotating into the semantic dimension they want.

Discussion:
While I think it's important to be able to quickly navigate through text to find what you need in situations like survey analysis, I don't think this is the best navigation method. The concept of n-dimensionality is confusing to start with, and when you add the abstractness of the data being sorted through those dimensions, I would feel completely lost.

Also, this paper mentions a lot of sorting and data modification methods without defining them, so in many cases I was unable to understand the backbone behind how the system worked. I think they probably would have done better to lengthen the paper and add some short definitions for each term.


Sunday, April 17, 2011

Paper Reading #23 - Automatic Warning Cues

Comments:
Comment 1
Comment 2

References:
Title: Evaluating Automatic Warning Cues for Visual Search in Vascular Images
Authors: Boris van Schooten, Betsy van Dijk, Anton Nijholt, and Johan Reiber
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors perform a study of automatic warning systems for MRA images. They find that warning systems that warn more often, even at the cost of false positives, work better than systems that warn less often and systems with no warnings at all.

The authors created a warning system for viewing images of the vascular system. To test it, they created a series of test images containing vessel problems for users and the system to find, classified by difficulty and type of problem. They then had users attempt to find these problems using a system that warned more often, a system that warned less often, a system with no warnings, and a "perfect" system that made no errors.

They found, first of all, that the perfect system did best. After that, the system with more warnings did next best, even though previous studies have shown the opposite to be true. The less-warning and no-warning systems placed second-to-last and last, respectively.

Discussion:
I think this research is of great importance, since proper detection of blood vessel problems could prevent heart attacks and other vascular issues. Furthermore, I am sure these warning systems have other applications that are just as useful.

One issue I have with the paper, however, is that I do not think their study was large enough. The results from the small group they tested were nearly even between false positives and false negatives; they should have done more research to see which one does better.

(Image courtesy of: Imaging Group)

Tuesday, April 12, 2011

Special Reading #4 - Media Equation

Comments:
Comment 1
Comment 2

References:
Paper 1:
Title: Machines and Mindlessness: Social Responses to Computers
Authors: Clifford Nass and Youngme Moon
Venue: Journal of Social Issues, Vol. 56-1, 2000

Paper 2:
Title: Computers are Social Actors
Authors: Clifford Nass, Jonathan Steuer, and Ellen R. Tauber
Venue: CHI '94, April 1994

Paper 3:
Title: Can Computer Personalities Be Human Personalities?
Authors: Clifford Nass, Youngme Moon, BJ Fogg, Byron Reeves, and Chris Dryer
Venue: CHI '95, May 7-11 1995


Summary:
In these papers, the authors show that people unconsciously treat computers as people -- in some respects, anyway. When it came to factors like gender, race, and aggressiveness, their tests indicated that humans did indeed apply these qualities to machines, even though participants said doing so was ridiculous.

They used many methods for the tests, but the most common was the three-computer setup shown at right. One computer feeds information to the user, the user then takes a test on the second computer, and the third evaluates their scores. The experiments varied between male and female voices as well as aggressive and less aggressive language.



Discussion:
I wasn't very surprised by the results of these papers. I have referred to computers in a human way before, especially when I get frustrated. Also, when you place human voices on the machines, you make them seem more humanlike, so it's not surprising that gender and racial biases get applied.

However, the fact that these results are so clear means we must be careful about what sound clips we use when designing programs with a voice component, as well as how we present information to the user through text. Incorrect usage could make our program seem rude and cause people to dislike it.

(Image courtesy of: Paper #2)

Paper Reading #22 - POMDP Approach

Comments:
Comment 1
Comment 2

References:
Title: A POMDP Approach to P300-Based Brain-Computer Interfaces
Authors: Jaeyoung Park, Kee-Eung Kim, and Sungho Jo
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a method of minimizing the number of attempts a brain-computer interface (BCI) needs to identify the user's intended input. They did this by creating a program that displayed a 2x2 or 2x3 matrix of characters to the user and then flashed candidate letters at them (at left).

They used a partially observable Markov decision process (POMDP) to guide which flashes to present based on what the user appeared to want. This approach required a lot of training before the system could be used on human subjects. However, when they tested it on humans, they found that its accuracy was much higher than the standard algorithm's by 30 flashes, and it maintained that accuracy all the way to the maximum of 60.
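For context, a POMDP maintains a belief -- a probability distribution over the hidden state, here the letter the user intends -- and updates it after every flash and EEG observation. In standard POMDP notation (mine, not copied from the paper):

```latex
% Belief update after performing action a (a flash) and observing o (the EEG response).
% T is the state-transition model, O the observation model, \eta a normalizing constant.
b'(s') = \eta \, O(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s)
```

The policy then chooses the next flash expected to be most informative about the intended letter, which is how the system can reach high accuracy with fewer flashes than a fixed flashing schedule.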

Discussion:
This paper was interesting because we get to see some more bleeding-edge research in BCI. The paper this time was significantly more readable than the previous one, but I still got lost in the probabilities.

One thing that bothers me again is that I don't really see the big benefits of this at this stage. Of course, EEG-based programs are still in their infancy, but I don't see how flashing letters can eventually become navigating a pointer or typing with my mind. Maybe in a few years I will be able to see the connection.

(Image courtesy of: this paper)

Microblogs #9 - Media Equation

Part 1:
Summary:
In this paper, the authors show that people have a tendency to mindlessly apply human characteristics to computers, even if they consciously believe the idea is ridiculous. In a series of tests, they show that people profiled computers based on gender, ethnicity, and loyalty, just by slightly changing the stimuli the computer gave. People also displayed social behaviors toward the computers.

Discussion:
To be honest, I didn't find this very surprising. I always refer to my computers like they are people, especially if they act up. Now that I write that down here, that is kind of weird to say, but it's true nonetheless.


Part 2:
Summary:
In this paper, the authors try to see whether human social cues are applied to computers. They conduct five different tests: politeness, self and other, voice self and other, gender, and programmer versus computer. They found that people do in fact apply politeness and gender roles to a computer based on its voice.

Discussion:
To me, this paper is basically a repeat of the previous paper. However, it is interesting to see the experiment's validity confirmed again, even though I think it is quite obvious that people place human qualities on computers.


Part 3:
Summary:
In this paper, the authors demonstrate that personality can be given to a computer without any special artificial intelligence. They showed that changing the way computers gave information not only made them seem more dominant or submissive, it also made people like or dislike them more based on their own personalities.

Discussion:
Again, no surprises here. I have applied personalities to people I have never met while reading a book, so finding out that people apply personalities to a machine isn't incredibly exciting.

Thursday, April 7, 2011

Paper Reading #21 - Automatically Identifying Targets

Comments:
Comment 1
Comment 2

References:
Title: Automatically Identifying Targets Users Interact with During Real World Tasks
Authors: Amy Hurst, Scott E. Hudson, and Jennifer Mankoff
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a method of gathering user click data in an accurate, device-agnostic way. They do this by using a hybrid method of kernel-based tracking combined with image identification.

Their user data gatherer, called CRUMBS, works on two levels. At the lower level, a series of data gatherers each report what they think the user clicked on. For example, some of the low-level gatherers include Microsoft's Accessibility API, an image difference checker (as shown above), and a template checker. At the higher level, a machine learning component then combines the data from the low-level gatherers to make a final decision about what the user clicked on.
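As a rough illustration of the two-level idea, here is a sketch with invented gatherers and weights; it is not the CRUMBS implementation, which uses a trained machine learning model rather than a fixed weighted vote at the top level.

```python
# Hypothetical sketch of a two-level target identifier (not the CRUMBS code).
from collections import defaultdict

# Each low-level gatherer reports a (candidate_target, confidence) pair for a click.
def accessibility_api(click):  return ("OK button", 0.6)      # stand-in result
def image_differencer(click):  return ("OK button", 0.7)      # stand-in result
def template_matcher(click):   return ("toolbar icon", 0.4)   # stand-in result

GATHERERS = [accessibility_api, image_differencer, template_matcher]
# Per-gatherer reliability weights (assumed values standing in for a learned model).
WEIGHTS = {accessibility_api: 1.0, image_differencer: 0.8, template_matcher: 0.5}

def identify_target(click):
    """High-level decision: a weighted vote over the low-level proposals."""
    votes = defaultdict(float)
    for gatherer in GATHERERS:
        target, confidence = gatherer(click)
        votes[target] += WEIGHTS[gatherer] * confidence
    return max(votes, key=votes.get)

print(identify_target(click=(120, 45)))  # -> "OK button"
```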

With their method, they report a 92% correct click identification rate, which they mention is higher than using the accessibility API alone. Furthermore, they mention that if they captured a larger portion of the screen on each click (they currently grab only a 300x300-pixel region), they could identify an even larger portion of clicks correctly.

Discussion:
I think that CRUMBS could be a very useful tool when testing how users make use of your program. If the data collected from other sources is as bad as the paper says, then it is very difficult to gather real usage information for programs, and this could help.

One thing I am curious about, though, is whether this information gatherer must be turned on and off manually or whether it does so automatically. Otherwise, it might gather click information where it isn't needed, such as in a video game.

(Image courtesy of: this paper)

Tuesday, April 5, 2011

Book Reading #6 - Things That Make us Smart

References:
Title: Things That Make Us Smart
Author: Donald Norman
Editor: Bill Patrick, 1993

Summary:
In this book, Norman talks about how technology can assist or hinder us in our daily lives based on how well it is designed. He begins by discussing how most design today is technology-centered, which is why so much technology is confusing to use. If designs were human-centered instead, they would fit much better into our lives.

He then describes two states of mind, experiential and reflective, and explains how good designs place us in the right state for a task while bad designs do not. He goes on to discuss the importance of using the correct representations when displaying data and the correct tools when working on a job; doing so keeps you in the right mindset for the task.

Discussion:
I feel that, for learning how to design, this book is not as important as the last two we have read, since it talks less about the design of products and instead looks at their effects on the user. However, as an examination of the effects of technology on us, it is quite successful. I am definitely going to think more about how I use objects in relation to the tasks I am trying to perform.

(Image courtesy of: PBS)

Paper Reading #20 - Data-Centric Physiology

Comments:
Comment 1
Comment 2

References:
Title: Addressing the Problems of Data-Centric Physiology-Affect Relations Modeling
Authors: Roberto Legaspi, Ken-ichi Fukui, Koichi Moriyama, Satoshi Kurihara, Masayuki Numao, and Merlin Suarez
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a new method of analyzing the emotions -- or affect -- of people. They describe some of the problems with current emotion modeling approaches, such as the time it takes to analyze the data, and explain how they believe modeling can be improved by changing how the data is analyzed.

They argue that analyzing the data continuously along its entire spectrum for a user will produce better results than the current method of analyzing emotions discretely. To demonstrate this, they analyzed the emotion changes of two subjects using the sensors shown in the pictures above while playing music that affected the subjects emotionally.

Then, they describe in detail the algorithms behind their continuous analysis, and show that it is as fast as discrete analysis and should provide better results in certain situations.

Discussion:
To be perfectly honest, this paper was so difficult to read that I'm not exactly sure that I got the correct analysis out of it. It took me half of the paper to figure out what they were trying to do with the emotion readings, and I am still not exactly sure what the point was.

Additionally, I am curious what the benefits are behind sensing emotions of users, especially if it requires the elaborate equipment shown in the picture. I have seen some cool little games that used the user's emotions to modify the game, but no other real applications.

(Image courtesy of: this paper)

Sunday, April 3, 2011

Paper Reading #19 - Personalized News

Comments:
Comment 1
Comment 2

References:
Title: Personalized News Recommendation Based on Click Behavior
Authors: Jiahui Liu, Peter Dolan, Elin Ronby Pedersen
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a new method of recommending news articles to read on the Google News service. They do this by analyzing both the articles users choose to click on, as well as current news trends.

They begin with an in-depth analysis of the influences on what news readers are interested in. They find that readers are influenced not only by their own personal tastes, shaped by factors like age, gender, and occupation, but also by current news stories. Furthermore, the news stories that people are interested in correlate strongly with location.

From this data, they design two probabilistic models of a user's likelihood of clicking a particular link. One model is based on the user's individual clicks, which should correspond to their personal interests, and the other is based on news stories clicked on by people nearby, to capture current breaking news in the area. They then find that this new method produced a noticeable increase in clicks on Google News recommendations.
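One plausible way to blend the two signals (a simplification of my own, not the paper's exact formulation) is to score each topic by a weighted mix of the user's personal click history and the current local click trend:

```python
# Hypothetical sketch of blending personal interest with a local news trend.
ALPHA = 0.6  # assumed weight on personal interest versus the local trend

# Fraction of this user's past clicks per topic (invented numbers).
personal_interest = {"technology": 0.5, "sports": 0.1, "politics": 0.4}
# Fraction of recent clicks per topic among nearby readers (invented numbers).
local_trend = {"technology": 0.2, "sports": 0.3, "politics": 0.5}

def recommendation_score(topic):
    """Blend the two distributions into one predicted click-likelihood score."""
    return (ALPHA * personal_interest.get(topic, 0.0)
            + (1 - ALPHA) * local_trend.get(topic, 0.0))

ranked = sorted(personal_interest, key=recommendation_score, reverse=True)
print(ranked)  # topics ordered by predicted interest for this user, right now
```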

Discussion:
I thought this article was interesting mainly because it gave a sort of window into how they use user information at Google. I was impressed by how creative they are in using not only personal clicks but also those of the collective to get a larger picture of the data.

As far as the product goes, I think it's a positive move, since I definitely don't have a lot of time to spend searching through articles for what I'm looking for. I also wonder if they use a similar algorithm to this in Google Reader, since that is my main news hub.

(Image courtesy of: Google News)

Book Reading #5 - Coming of Age

References:
Title: Coming of Age in Samoa
Author: Margaret Mead
Editor: M.L. Loeb, 1928

Summary:
In this book, Mead discusses the culture of Samoa as she found it while living there conducting an ethnography. She then uses this knowledge to explore why adolescent girls in our society have so much trouble.

She begins by describing the overall culture of Samoa: what women can and cannot do, the social structure, and which qualities Samoans admire or dislike. She then goes on to recount the daily lives of a few of the girls and the conflicts they face.

After that, she offers some theories about why, in many cases, the lives of Samoan girls involve less conflict than those of girls in our society. She believes it is because of the lack of conflicting views in their society compared with ours. She also believes another reason is that children in their civilization are exposed to death and sex in childhood, which makes them better equipped to deal with these things later in life.

Discussion:
While I cannot say that I liked this book, I think that as far as teaching us the basics of ethnography goes, it is a success. I really didn't grasp how detailed we needed to be until I read the appendices, where she includes her research notes and surveys. When I saw those, I finally understood how thorough an ethnography needs to be.

Additionally, I think that the points she makes about society in this book are quite valid. The many conflicting viewpoints lead to quite a lot of unnecessary conflict between people. However, I don't think we would ever be able to excise this from our society.

(Image courtesy of: Election Guide)

Ethnography Results - Week 8

This week, we returned to Schotzi's one last time on Saturday night to watch The Conglomerate and Strawberry Jam. Both acts are very reminiscent of the '70s and '80s, with electronic keyboards and classical instruments.

We arrived at 10:00, and The Conglomerate had already started their act. There were fewer than twenty people there at the time, and it appeared that most of them were involved with the band or were working at the bar.

By the end of their act, the crowd had swelled to around thirty people. At this point I noticed that another big concert was being held next door (you can see the float in the top right of the picture). This may have had something to do with the sluggish turnout.

Then, at 11:30, Strawberry Jam took the stage. By the time they started there were significantly more people, somewhere in the range of 50-60. I think this large jump in half an hour was due to the concert next door ending. There was also more interest than in the previous band, with more people clustered at the front, as you can see in the photo.

This shot, taken near the end of the concert, shows how intent the crowd was on the performance. By the end there were nearly 100 people in attendance, and a large majority were heavily focused on the band, with a lot of dancing going on.

I think that out of all the bands we watched for this project, these two were my favorites. Both had an original sound and felt quite developed as artists. If given the opportunity, I would definitely go to another of their performances.

As far as the culture is concerned, Strawberry Jam held the crowd's focus more than any other band we have seen over the course of this project. I believe this is because they have been on tour longer than any of the other groups, so people came for the show rather than for the bar. Thus, I think that for our project we need to design a program that helps artists increase their presence and traction with people who go to concerts.

Microblogs #8 - Things that Make us Smart

References:
Title: Things That Make Us Smart
Author: Donald Norman
Editor: Bill Patrick, 1993

Chapter #1:
Summary:
In this chapter, Norman talks about how technology is both helping and hindering us. He then talks about how he thinks this could be changed if we shifted from technology-centered design to human-centered design. Finally, he discusses two different thinking modes for people, and how technology can help and hinder them.

Discussion:
I don't think this chapter was very exciting, but it does a good job of setting the stage for the chapters to come. I did find the descriptions of the modes of the mind interesting, though, and I am curious whether he will describe more of them later.


Chapter #2:
Summary:
In this chapter, Norman goes into more detail about the experiential and reflective states of the mind, and how technology lures us into these states. He also talks about the three levels of learning: accretion, tuning, and restructuring, and how many new-age education methods help very little with any of these learning types.

Discussion:
I thought this chapter was interesting mostly because of the discussion of optimal flow. I have noticed before that I go into states of complete focus in situations ranging from programming to video games. After reading about it, I wish more were known about getting into and out of this state, since I would love to go into a Zen trance on command.


Chapter #3:
Summary:
In this chapter, Norman discusses the power of representation -- that is, how tasks can be made easier or more difficult by changing how we look at them. From different types of numerals to different color usages in graphs, the way we display information can be almost as important as the information itself, since it can affect whether we look at it in an experiential or reflective mindset.

Discussion:
This chapter was interesting because, as he went through each example, it quickly became obvious how important the right representation of information is. My personal favorite was the tic-tac-toe example, since it showed a way of representing the board on a computer that I had never thought about before.


Chapter #4:
Summary:
In this chapter, Norman describes how the object we choose for assistance can help or hinder our task. From newspaper versus television to digital versus analog watches, the right choice of medium or device can make all the difference in how enjoyable we find the experience.

Discussion:
This chapter was good because we finally got our feet wet with some design concepts. The discussion of how successful computer interfaces provide the best representation for the task was the most helpful part.