Comments:
Comment 1
Comment 2
References:
Title: TeslaTouch: Electrovibration for Touch Surfaces
Authors: Olivier Bau, Ivan Poupyrev, Ali Israr, Chris Harrison
Venue: UIST '10, Oct. 3-6, 2010
Summary:
In this paper, the authors describe a method of generating tactile feedback on touch screens using only electricity. This technology, which requires no moving parts, can generate very realistic touch sensations when the user moves their hand across the touch screen.
The authors first describe how the technology works and what they used to test it. TeslaTouch works by sending periodic electrical signals through an electrode beneath an insulating coating on the surface. As the finger moves across the surface, an electrostatic attractive force is generated that feels like friction to the user. To test the technology, the authors placed the coating on a 3M multitouch table, shown above.
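As a rough intuition for the effect (this model is not from the paper): the finger and the electrode form something like a parallel-plate capacitor, so the attraction can be sketched with the standard parallel-plate formula. The area, gap, and voltage below are illustrative guesses, and the model ignores skin impedance, so real forces are far smaller:

```python
# Toy parallel-plate model of the electrostatic attraction behind
# electrovibration.  All numbers are illustrative, not from the paper.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def attraction_force(voltage_v, area_m2, gap_m):
    """F = eps0 * A * V^2 / (2 * d^2), the parallel-plate attraction."""
    return EPS0 * area_m2 * voltage_v ** 2 / (2 * gap_m ** 2)

# ~1 cm^2 fingertip pad, 1 um insulating coating, 100 V drive signal.
force = attraction_force(100.0, 1e-4, 1e-6)
```

Because the force scales with the voltage squared, a sinusoidal drive at frequency f produces a force ripple at 2f, which helps explain why the perceived texture changes with the signal frequency.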
They then tested how people react to the tactile feedback the technology generates. They found that they could produce a variety of sensations in users by slightly changing the frequency of the electrical signals. They also found that users can perceive this feedback at intensity levels comparable to conventional vibration technology.
They then describe why this technology is better than current haptic vibration interfaces. Some of the reasons they cite are that it is silent, it can generate uniform feedback over the entire surface, and it has no mechanical motion. However, traditional haptic vibration can generate stronger sensations than this system.
Discussion:
I am really excited by the prospect of this technology. I have thought for years that touch screens need better touch feedback to truly work as well as traditional inputs, especially when doing similar tasks like keyboarding. I really want to try this system myself, because it apparently generates very real touch experiences.
I have two main concerns with this system, however. First, for multitouch surfaces, I am curious whether TeslaTouch can generate two different tactile sensations at once. If it can't, it's not a big deal, but it could decrease the realistic feel of the screen. Second, in the pictures above, you can see that the image appears to be slightly blurry. If the insulating layer causes the image to be less sharp, then in order for this technology to be practical they will need to find a better coating.
Tuesday, April 26, 2011
Paper Reading #26 - Critical Gameplay
Comments:
Comment 1
Comment 2
References:
Title: Critical Gameplay: Software Studies in Computer Gameplay
Author: Lindsay D. Grace
Venue: CHI 2010, April 10-15, 2010
Summary:
In this paper, the author describes tried-and-true game mechanics, and then shows games that defy these mechanics. The objective of this research is both to show the game design aspects that are common today and to show which game mechanics remain unexplored in the industry.
The first mechanic is friend-or-foe identification. She says that most games allow the player to quickly identify enemies by appearance. To counter this, she shows the game at right, called "Black and White." In this game, enemies and friends are the same color, and the player must identify them by behavior instead of appearance.
She then continues by describing games that use mechanics like collection, violence, and rushing through the game and then shows counterexamples that instead demonstrate frugality, nonviolence, and calm observation.
Discussion:
I thought this paper was cool because it took all of the games that I am used to playing and turned them around. Some of the mechanics, for example trying to avoid item collection, sounded like they would be really interesting to play.
Interestingly enough, I recently saw a game in which you have to observe behavior rather than appearance, and it got really great reviews. So, I am interested in seeing whether some of these other mechanics might be used in games to make them even more original.
(Image courtesy of: this paper)
Wednesday, April 20, 2011
Book Reading #8 - Living with Complexity
References:
Title: Living with Complexity
Author: Donald A. Norman
Editor: Julie Norman, 2011
Summary:
In this book, Donald Norman attempts to support complexity in our daily lives -- at least, when complexity is actually necessary.
He begins by discussing the double standard we have with complexity. In many older devices like musical instruments, we accept the complexity required to operate them -- especially if we are exposed to them when young. An example he uses is violins versus computer keyboards. Both have caused repetitive stress injuries, but people haven't sued violin makers over them, while people have sued keyboard makers.
Then, he discusses the psychological influences behind our perceptions of complexity. He mentions that often, an increase in outer simplicity leads to an increase in inner complexity. Furthermore, even though we often talk about how we like simplicity, devices with more features, and thus more complexity, actually sell better. He also talks about how even simple devices can become complex when there is a lack of standardization. For example, locks are simple devices, but since which direction is locked and unlocked isn't standardized, remembering which way to turn the lock is actually quite hard.
Finally, he briefly discusses how social influences can affect complexity. He shows how observing how someone uses an unknown device can help you learn it correctly. However, if they have a poor idea of how the device works, their influence can be harmful.
Discussion:
This book was interesting because I find it odd that Norman is, in a way, stepping back from his early stances on design. In his early books, he seems to decry any complexity in devices, but in the last two books of his that we read, he seems far more accepting of small amounts of complexity in device designs.
I was also intrigued by the double standard he illustrated behind how people describe devices. I can't talk to anyone without hearing how today's devices are too complicated, but when device makers show more minimalist designs, they don't sell. I actually like maximum functionality when I buy things, and then just accept the cost of the learning curve.
(Image courtesy of: Hussman Funds)
Microblogs #10 - Living with Complexity
References:
Title: Living with Complexity
Author: Donald A. Norman
Editor: Julie Norman, 2011
Chapter #1:
Summary:
In this chapter, Norman gives his definition of complexity and describes how we sometimes actually like it. He talks about how, with most devices, people get frustrated by even a small amount of difficulty. However, for devices like musical instruments, which are extremely complicated, people actually enjoy spending time learning.
Discussion:
I thought his point about the complexity of musical instruments was actually quite interesting. I never really thought about how I treat the complexity of musical instruments differently from that of other devices.
Chapter #2:
Summary:
In this chapter, Norman discusses how simplicity is more often something we perceive than something that is actually there. He talks about how simple outer interfaces mean more complex innards, as well as how simplicity doesn't sell.
Discussion:
This chapter was interesting because I thought his comment on how complex things sell better was funny yet true. I always try to get hardware that has more features, but at the same time I am willing to accept a learning curve. I am sure others do so while simultaneously asking for simplicity -- which, while not impossible, as Norman says, is certainly difficult.
Chapter #3:
Summary:
In this chapter, Norman describes how quickly simple objects can become complex. From doorknobs to books, a lack of standardization and difficult organization can turn simplicity into complexity. However, these problems can be fixed with better organization and correct design.
Discussion:
This chapter was interesting because of the toilet paper example. When he started it, the first thing I thought of was having two rolls with one restricted, but I liked hearing why having two open rolls is a bad idea. I think this example was the best way to illustrate his point.
Chapter #4:
Summary:
In this chapter, Norman describes what he calls social signifiers -- or affordances created through the influences of others. From following other peoples' behaviors to the influences of culture, how we use and perceive an object and its state can be heavily affected by how others do.
Discussion:
The main thing that interested me in this chapter was his example of a computer program that rendered wear and tear on heavily used objects. I thought this was a great, natural way of displaying this information that I would never have thought of until I heard it.
Book Reading #7 - Why We Make Mistakes
References:
Title: Why We Make Mistakes
Author: Joseph T. Hallinan
Editor: Donna Sinisgalli, 2009
Summary:
In this book, Joseph Hallinan describes the psychological reasons behind why people make mistakes.
In each of the main thirteen chapters, Hallinan describes an error in our ways of thinking that causes us to make mistakes in our daily lives. Some of these include skimming, believing we are above average, wearing rose-colored glasses, and multitasking. In each, he uses many different examples to illustrate not only how we make mistakes, but how the people and businesses around us exploit these mistakes for their own gain.
Finally, in the conclusion, he describes some small changes we can make in our thought processes to fix many of these errors. The main one he provides is to "think small" -- that is, to pay attention to small details, since that is where most of these manipulations lie.
Discussion:
I really liked this book. Not only was each chapter entertaining to read, there was a significant amount I learned about how my mind works. Each chapter provided a lot of interesting questions that gave me something to think about during bus rides.
I really didn't think the solutions he provides in the final chapter would be very useful, however. Many of the issues he describes seem almost hard-wired into our psyches, so it is unlikely we could overcome them with any amount of self-training. Then again, thinking that way might be a mistake.
(Image courtesy of: Humanology)
Paper Reading #25 - Email Overload
Comments:
Comment 1
Comment 2
References:
Title: Agent-Assisted Task Management that Reduces Email Overload
Authors: Andrew Faulring, Brad Myers, Ken Mohnkern, Bradley Schmerl, Aaron Steinfeld, John Zimmerman, Asim Smailagic, Jeffery Hansen, and Daniel Siewiorek
Venue: IUI 2010, Feb. 7-10, 2010
Summary:
In this paper, the authors describe a new mail system that uses AI to divide e-mails into a selection of tasks. They then show that this very different method provides positive results.
They begin by describing the intricacies of the task system. When e-mails enter the inbox, an AI assistant parses them and tries to determine which task category each one belongs to. It then either files the e-mail under a category or places it in an area where the user can choose one.
In addition, the e-mail client also provides a scheduling interface, which also includes an AI assistant. The AI assistant looks through the e-mail tasks and assigns what it believes to be a good amount of time for each, and prioritizes the user's future schedule. The user can then choose what tasks they are working on.
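The paper's scheduling algorithm isn't reproduced here, but the basic idea -- ordering e-mail tasks so the most important work per unit of estimated time comes first -- can be sketched with a simple greedy sort. The field names and scores below are made up for illustration:

```python
# Hypothetical sketch of AI-assisted task prioritization: sort e-mail tasks
# by importance per estimated minute, highest ratio first.
def schedule(tasks):
    return sorted(tasks, key=lambda t: t["importance"] / t["minutes"], reverse=True)

tasks = [
    {"name": "reply to advisor", "importance": 9, "minutes": 10},
    {"name": "clean out mailing-list mail", "importance": 2, "minutes": 30},
    {"name": "review paper draft", "importance": 8, "minutes": 60},
]
plan = schedule(tasks)  # "reply to advisor" ends up first
```

A real assistant would estimate importance and duration from the e-mail text itself; this sketch only shows the prioritization step.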
They then show the effect of this system on productivity. People using the system with both the AI task assistant and the e-mail assistant tend to get more meaningful tasks done than those who do not. With only the e-mail assistant, users complete more tasks overall, but fewer of the important ones.
Discussion:
I was actually quite excited about this research. The idea of having a small AI assisting me with my tasks seems like a really cool, sci-fi idea. Additionally, even at this stage, it seems to be working well, so I hope they can actually bring this to market soon.
One concern I have with the software is that they do not describe how configuration will work. I am curious whether a final design will have editable categories, or whether, because of how the AI works, there will only be preset task categories.
(Image courtesy of: Download Software)
Paper Reading #24 - Finding Your Way
Comments:
Comment 1
Comment 2
References:
Title: Finding Your Way in a Multi-dimensional Semantic Space with Luminoso
Authors: Robert Speer, Catherine Havasi, Nichole Treadway, and Henry Lieberman
Venue: IUI 2010, Feb. 7-10 2010
Summary:
In this paper, the authors describe a program called Luminoso that provides an interface for parsing text input and displaying it.
The Luminoso system displays the relations between text sets in N dimensions. These dimensions are created by first counting the occurrences of words in each document and analyzing the meanings of the most frequent ones. The system then groups words with similar meanings and assigns a dimension to each group.
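The paper's exact construction isn't spelled out here, but the first step -- judging word relatedness from occurrence patterns -- can be sketched with simple co-occurrence vectors and cosine similarity. The toy documents below are my own, not from the paper:

```python
import math
from collections import Counter

# Toy corpus standing in for the text sets Luminoso analyzes.
docs = [
    "the battery life is great",
    "battery drains fast",
    "screen is bright",
    "bright clear screen",
]

def vector(word):
    """Co-occurrence counts of `word` with every other word, per document."""
    v = Counter()
    for doc in docs:
        tokens = doc.split()
        if word in tokens:
            v.update(t for t in tokens if t != word)
    return v

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = math.sqrt(sum(x * x for x in a.values()))
    norm_b = math.sqrt(sum(x * x for x in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Here "battery" co-occurs with "drains" more than with "screen", so a system like Luminoso would place those words nearer each other in the semantic space.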
To examine the data, the program provides the interface shown above. The lines connect the words related to the current selection. The colors indicate how "hot" a particular relation is, from white at the highest down to red. The user navigates by selecting a particular point and then rotating into the semantic dimension they want.
Discussion:
While I think it's important to be able to quickly navigate through text to find what you need in situations like surveys, I don't think this is the best navigation method. The concept of n-dimensionality is confusing to start with, but when you add in the abstractness of the data being sorted through these dimensions, I would feel completely lost.
Also, this paper mentions a lot of sorting and data modification methods without defining them, so in many cases I was unable to understand the backbone behind how the system worked. I think they probably would have done better to lengthen the paper and add some short definitions for each term.
Sunday, April 17, 2011
Paper Reading #23 - Automatic Warning Cues
Comments:
Comment 1
Comment 2
References:
Title: Evaluating Automatic Warning Cues for Visual Search in Vascular Images
Authors: Boris van Schooten, Betsy van Dijk, Anton Nijholt, and Johan Reiber
Venue: IUI 2010, Feb. 7-10 2010
Summary:
In this paper, the authors perform a study of automatic warning systems for MRA machines. They find that warning systems that warn more often but produce false positives work better than both systems that warn less often and systems that give no warnings at all.
The authors created a warning system for viewing images of the vascular system. To test it, they created a series of test vessels for users and the system to examine, with the vessels classified by difficulty and type of problem. They then had users attempt to find the errors using a system that warned more, a system that warned less, a system with no warnings, and a "perfect" system that generated no errors.
They found, first of all, that the perfect system did best. After that, the warning system with more warnings did next best, even though previous studies have shown the opposite to be true. Following those, the less-warning and no-warning systems placed second-to-last and last, respectively.
Discussion:
I think that this research is of great importance, since proper detection of blood vessels could prevent heart attacks and other vascular issues. Furthermore, I am sure these warning systems have other applications that are just as useful.
One issue I have with the paper, however, is that I do not think their study was big enough. The results from the small group they tested are nearly even between the false-positive and false-negative conditions; they should have done more research to see which one does better.
(Image courtesy of: Imaging Group)
Tuesday, April 12, 2011
Special Reading #4 - Media Equation
Comments:
Comment 1
Comment 2
References:
Paper 1:
Title: Machines and Mindlessness: Social Responses to Computers
Authors: Clifford Nass and Youngme Moon
Venue: Journal of Social Issues, Vol. 56-1, 2000
Paper 2:
Title: Computers are Social Actors
Authors: Clifford Nass, Jonathan Steuer, and Ellen R. Tauber
Venue: CHI '94, April 1994
Paper 3:
Title: Can Computer Personalities Be Human Personalities?
Authors: Clifford Nass, Youngme Moon, BJ Fogg, Byron Reeves, and Chris Dryer
Venue: CHI '95, May 7-11 1995
Summary:
In these papers, the authors show that people unconsciously think of computers as people -- in some respects, anyway. When it came to factors like gender, race, and aggressiveness, their tests indicated that humans did indeed apply these qualities to machines, even though the participants themselves said doing so was ridiculous.
They used many methods for the tests, but the most common was the three-computer setup shown at right. One computer feeds information to the user, the user then takes a test on the second computer, and the third evaluates their scores. Different combinations of male and female voices, as well as more and less aggressive language, were used across conditions.
Discussion:
I wasn't very surprised by the results of these papers. I have referred to computers in a human way before, especially when I get frustrated. Also, when you place human voices on the machines, you make them seem more humanlike, so it's not surprising that gender and racial biases get applied.
However, the fact that these results are so evident means that we must be careful about what sound clips we use when we design programs with a voice component, as well as how we present information to the user through text. Incorrect usage could make our program seem rude and cause people to dislike it.
(Image courtesy of: Paper #2)
Paper Reading #22 - POMDP Approach
Comments:
Comment 1
Comment 2
References:
Title: A POMDP Approach to P300-Based Brain-Computer Interfaces
Authors: Jaeyoung Park, Kee-Eung Kim, and Sungho Jo
Venue: IUI 2010, Feb. 7-10 2010
Summary:
In this paper, the authors describe a method of minimizing the number of attempts a brain-computer interface (BCI) needs in order to identify what the user wants. They did this by creating a program that displayed a 2x2 or 2x3 matrix of letters to the user and then flashed prospective letters at them (at left).
They modeled the problem as a partially observable Markov decision process (POMDP) in order to guide which flashes were presented toward what the user wanted. The model required a lot of offline training before it could be used on human subjects. However, when they tested it on humans, they found that its accuracy at 30 flashes was much higher than that of the standard algorithm, and that it maintained this accuracy all the way to the maximum of 60.
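The intuition behind the approach can be illustrated with a simple Bayesian belief update over candidate letters. This sketch is not the authors' actual model (their POMDP also chooses which item to flash next to maximize expected information), and all probabilities in it are made-up numbers.

```python
# Simplified sketch of the belief update behind a P300 speller.
# The true target elicits a P300 response with higher probability than
# non-targets; flashing items and observing the (noisy) EEG classifier
# output lets us sharpen a belief distribution over candidate letters.
# The probabilities below are illustrative, not from the paper.

P_RESPONSE_TARGET = 0.7      # P(classifier fires | flashed item is the target)
P_RESPONSE_NONTARGET = 0.2   # P(classifier fires | flashed item is not the target)

def update_belief(belief, flashed, response):
    """Bayes update of P(target = letter) after flashing one item."""
    new_belief = {}
    for letter, prior in belief.items():
        if letter == flashed:
            likelihood = P_RESPONSE_TARGET if response else 1 - P_RESPONSE_TARGET
        else:
            likelihood = P_RESPONSE_NONTARGET if response else 1 - P_RESPONSE_NONTARGET
        new_belief[letter] = prior * likelihood
    total = sum(new_belief.values())
    return {letter: p / total for letter, p in new_belief.items()}

# Uniform belief over a 2x2 matrix of letters.
belief = {letter: 0.25 for letter in "ABCD"}
# The classifier fires when "A" is flashed, and stays quiet for "B".
belief = update_belief(belief, flashed="A", response=True)
belief = update_belief(belief, flashed="B", response=False)
print(max(belief, key=belief.get))  # -> A
```

Each observation only nudges the distribution, which is why the paper measures accuracy as a function of the number of flashes: more flashes, sharper belief.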
Discussion:
This paper was interesting because we get to see some more bleeding-edge research in BCI. The paper this time was significantly more readable than the previous one, but I still got lost in the probabilities.
One thing that bothers me again is I don't really see the big benefits of this at this stage. Of course, EEG-based programs are still in their infancy, but I don't see how flashing letters can later become navigating the pointer or typing with my mind. Maybe in a few years I will be able to see the connection.
(Image courtesy of: this paper)
Microblogs #9 - Media Equation
Part 1:
Summary:
In this paper, the authors show that people have a tendency to mindlessly apply human characteristics to computers, even if they consciously believe the idea to be ridiculous. They show in a series of tests that people profiled computers by gender, ethnicity, and loyalty after only slight changes in the stimulus the computer gave. Additionally, people displayed social behaviors toward the computers as well.
Discussion:
To be honest, I didn't find this very surprising. I always refer to my computers like they are people, especially if they act up. Now that I write that down here, that is kind of weird to say, but it's true nonetheless.
Part 2:
Summary:
In this paper, the authors try to see if human social cues will be applied to computers. They attempt five different tests: politeness, self and other, voice self and other, gender, and programmer v. computer. They found that people do in fact apply politeness and gender roles to the computer based upon its voice.
Discussion:
To me, this paper is basically a repeat of the previous paper. However, it is interesting to see a repeat of the validity of the experiment, even though I think it is quite obvious that people place human qualities on computers.
Part 3:
Summary:
In this paper, the authors demonstrate that personality can be given to a computer without any special artificial intelligence. They showed that changing the way computers gave information not only made them seem more dominant or submissive, it also made people like or dislike them more based on their own personalities.
Discussion:
Again, no surprises here. I have applied personalities to people I have never met while reading a book, so finding out that people apply them to a machine isn't incredibly exciting.
Thursday, April 7, 2011
Paper Reading #21 - Automatically Identifying Targets
Comments:
Comment 1
Comment 2
References:
Title: Automatically Identifying Targets Users Interact with During Real World Tasks
Authors: Amy Hurst, Scott E. Hudson, and Jennifer Mankoff
Venue: IUI 2010, Feb. 7-10 2010
Summary:
In this paper, the authors describe a method of gathering user click data in an accurate, device-agnostic way. They do this with a hybrid of kernel-based tracking and image identification.
Their user data gatherer, called CRUMBS, works on two levels. At the lower level, a series of different data gatherers each reports what it thinks the user clicked on. For example, the low-level gatherers include Microsoft's Accessibility API, an image difference checker (shown above), and a template checker. Then, at the high level, a machine learning component combines the reports from the low-level gatherers to make a final decision about what the user clicked on.
With their method, they reported a 92% correct click identification rate, which they note is higher than using the accessibility API alone. Furthermore, they mention that if they captured a larger portion of the screen on each click (they currently grab only a 300x300 pixel region), they could identify an even larger share of clicks correctly.
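The two-level structure can be sketched roughly as follows. The paper's high-level combiner is a trained machine-learning model, so the simple confidence-weighted vote below only stands in for it, and all gatherer names, targets, and weights are hypothetical.

```python
# Two-level target identification, loosely in the spirit of CRUMBS:
# several low-level "gatherers" each propose what the user clicked on,
# and a high-level combiner fuses their reports. The real system uses
# a trained machine-learning model; a confidence-weighted vote stands
# in for it here, and all names and weights are made up for illustration.

from collections import defaultdict

def combine_reports(reports):
    """Each report is (gatherer_name, proposed_target, confidence).
    Returns the target with the highest total weighted confidence."""
    scores = defaultdict(float)
    for _gatherer, target, confidence in reports:
        scores[target] += confidence
    return max(scores, key=scores.get)

reports = [
    ("accessibility_api", "OK button", 0.9),
    ("image_diff",        "OK button", 0.6),
    ("template_matcher",  "Cancel button", 0.5),
]
print(combine_reports(reports))  # -> OK button
```

The appeal of the two-level design is that any single gatherer (like the accessibility API) can be wrong or unavailable, but the fused decision degrades gracefully.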
Discussion:
I think that CRUMBS could be a very useful tool when testing how users make use of your program. If data collection from other sources is as unreliable as the paper says, then it is very difficult to gather real usage information for programs, and this could help.
One thing I am curious about, though, is whether this information gatherer must be turned on and off manually or whether it does so automatically. Otherwise, it might gather usage information in contexts where it isn't needed, such as in a video game.
(Image courtesy of: this paper)
Tuesday, April 5, 2011
Book Reading #6 - Things That Make us Smart
References:
Title: Things that Make us Smart
Author: Donald Norman
Editor: Bill Patrick, 1993
Summary:
In this book, Norman talks about how technology can assist or hinder us in our daily lives based on how well it is designed. He begins by discussing how most design today is technology-centered, which is why it's so confusing to use. Instead, if the designs were human-centered, they would fit much better into our lives.
He then discusses two states of the mind, experiential and reflective, and discusses how good designs place us in the right state for a task, while bad designs do not. He then discusses the importance of using the correct designs when displaying data as well as using the correct tools when working on a job. Doing so will keep you in the correct mindset for the task.
Discussion:
I feel like for learning how to design, this book is not as important as the last two we have read, as it talks less about the design of products and instead looks at their effects on the user. However, as an examination of the effects of technology on us it is quite successful. I am definitely going to think more about how I use objects in relation to the tasks I am trying to perform from now on.
(Image courtesy of: PBS)
Paper Reading #20 - Data-Centric Physiology
Comments:
Comment 1
Comment 2
References:
Title: Addressing the Problems of Data-Centric Physiology-Affect Relations Modeling
Authors: Roberto Legaspi, Ken-ichi Fukui, Koichi Moriyama, Satoshi Kurihara, Masayuki Numao, and Merlin Suarez
Venue: IUI 2010, Feb. 7-10 2010
Summary:
In this paper, the authors describe a new method of analyzing the emotions -- or affect -- of people. They describe some of the problems with current affect-modeling approaches, such as the time analysis takes, and argue that these can be improved by changing how the data is analyzed.
They contend that modeling a user's affect continuously, along its entire spectrum, will produce better results than the current method of classifying emotions into discrete categories. To demonstrate this, they analyzed the emotional changes of two subjects using the sensors shown in the pictures above while playing music that affected the subjects emotionally.
Then, they describe in detail the algorithms behind their continuous analysis, and show that it is as fast as discrete analysis and should provide better results in certain situations.
Discussion:
To be perfectly honest, this paper was so difficult to read that I'm not exactly sure that I got the correct analysis out of it. It took me half of the paper to figure out what they were trying to do with the emotion readings, and I am still not exactly sure what the point was.
Additionally, I am curious what the benefits are behind sensing emotions of users, especially if it requires the elaborate equipment shown in the picture. I have seen some cool little games that used the user's emotions to modify the game, but no other real applications.
(Image courtesy of: this paper)
Sunday, April 3, 2011
Paper Reading #19 - Personalized News
Comments:
Comment 1
Comment 2
References:
Title: Personalized News Recommendation Based on Click Behavior
Authors: Jiahui Liu, Peter Dolan, Elin Ronby Pedersen
Venue: IUI 2010, Feb. 7-10 2010
Summary:
In this paper, the authors describe a new method of recommending news articles to read on the Google News service. They do this by analyzing both the articles users choose to click on, as well as current news trends.
They begin with an in-depth analysis of what influences news readers' interests. They find that readers are influenced not only by their own personal tastes, shaped by age, gender, and occupation, but also by current news stories. Furthermore, the news stories that people are interested in correlate strongly with location.
From this data, they design two probabilistic models that show the user's likelihood of clicking a particular link. One model is based upon their individual clicks, which should correspond to their personal interests, and the other is based on news stories clicked upon by people who are nearby, to get an idea of current breaking news in the area. They then find that this new method caused a noticeable increase in clicks on Google News recommendations.
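The combination idea can be sketched as follows. The paper derives both terms with Bayesian models fitted to click logs, so the fixed blending weight and all of the numbers below are hypothetical, purely to illustrate how a personal-interest score and a local-trend score might be merged.

```python
# Sketch of a hybrid click-probability score in the spirit of the paper:
# blend a model of the individual's own interests with a model of what
# is currently being clicked by nearby users. The weight and all numbers
# here are made up, purely to show the combination.

def hybrid_score(personal_interest, local_trend, weight=0.5):
    """Blend P(click | user's own history) with P(click | local trend)."""
    return weight * personal_interest + (1 - weight) * local_trend

articles = {
    # article: (P from personal click history, P from nearby users' clicks)
    "local election results": (0.2, 0.9),
    "new phone review":       (0.9, 0.3),
    "celebrity gossip":       (0.1, 0.4),
}

ranked = sorted(articles,
                key=lambda a: hybrid_score(*articles[a]),
                reverse=True)
print(ranked[0])  # -> new phone review
```

Even this toy version shows the trade-off the authors tune: a heavily trending local story can outrank a personally relevant one, or vice versa, depending on the blend.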
Discussion:
I thought this article was interesting mainly because it gave a sort of window into how they use user information at Google. I was impressed by how creative they are in using not only personal clicks but also those of the collective to get a larger picture of the data.
As far as the product goes, I think it's a positive move, since I definitely don't have a lot of time to spend searching through articles for what I'm looking for. I also wonder if they use a similar algorithm to this in Google Reader, since that is my main news hub.
(Image courtesy of: Google News)
Book Reading #5 - Coming of Age
References:
Title: Coming of Age in Samoa
Author: Margaret Mead
Venue: Edited by M.L. Loeb; 1928
Summary:
In this book, Mead discusses the culture of Samoa as she observed it while living there as an ethnographer. She then uses this knowledge to explore why girls in our own society have so much trouble.
She begins by describing the overall culture of Samoa. She describes what women can and cannot do, the social structure, and what qualities they like and dislike. She then goes on to describe the daily lives of a few of the girls, and then describes their conflicts.
After that, she theorizes about why, in many cases, the lives of Samoan girls involve less conflict than those of girls in our society. She believes it is because their society lacks the conflicting views found in ours. She also believes another reason is that children in their civilization are exposed to death and sex in childhood, which makes them better equipped to deal with these things later in life.
Discussion:
While I cannot say that I liked this book, I think that as far as teaching us the basics of ethnography goes, the book is a success. I really didn't grasp how detailed we needed to be until I read the appendices, where she includes her research notes and polls. When I saw those, I finally understood how thorough an ethnography needs to be.
Additionally, I think that the points she makes about society in this book are quite valid. The many conflicting viewpoints lead to quite a lot of unnecessary conflict between people. However, I don't think we would ever be able to excise this from our society.
(Image courtesy of: Election Guide)
Ethnography Results - Week 8
This week, we return to Schotzi's one last time on Saturday night to watch The Conglomerate and Strawberry Jam. Both artists are very reminiscent of the 70s and 80s, with electronic keyboards and classical instruments.
We arrived at 10:00, and The Conglomerate had already started their act. There were fewer than twenty people at this time, and it appeared that most of them were involved with the band or were working at the bar.
By the end of their act, the crowd had swelled to around thirty people. At this point I noticed that another big concert was being held next door (you can see the float in the top right of the picture). This may have had something to do with the sluggish turnout.
Then, at 11:30, Strawberry Jam came up. By the time they came on there were significantly more people, somewhere in the range of 50-60. I think this large jump in people in half an hour was due to the ending of the concert next door. There was also more interest than in the previous band, with more people clustered at the front, as you can see in the photo.
This shot, taken near the end of the concert, shows how intent the crowd was on the performance. By the end of the concert there were nearly 100 people in attendance, and a large majority were heavily focused on the band, with a lot of dancing going on.
I think that out of all the bands we watched for this project, this group was my favorite. Both of them had an original sound and felt quite developed as artists. If given the opportunity, I would definitely go to another performance of these two.
As far as the culture is concerned, Strawberry Jam had by far the most focus from the crowd out of any band that we have seen over the course of this project. I believe this is because they have been on tour the longest of any of the groups we have seen, and thus people came for the show instead of the bar. Thus, I think that for our project, we need to design a program that can help artists increase their presence and traction with people who go to concerts.
Microblogs #8 - Things that Make us Smart
References:
Title: Things that Make us Smart
Author: Donald Norman
Editor: Bill Patrick, 1993
Chapter #1:
Summary:
In this chapter, Norman talks about how technology is both helping and hindering us. He then talks about how he thinks this could be changed if we shifted from technology-centered design to human-centered design. Finally, he discusses two different thinking modes for people, and how technology can help and hinder them.
Discussion:
I don't think that this chapter was very exciting, but it does a good job of setting the stage for the chapters to come. I did find the descriptions of modes of the mind interesting though, and I am curious if he is going to describe more of them later.
Chapter #2:
Summary:
In this chapter, Norman goes into more detail about the experiential and reflective states of the mind, and how technology lures us into these states. He also talks about the three levels of learning: accretion, tuning, and restructuring, and how many new-age education methods help very little with any of these learning types.
Discussion:
I thought this chapter was interesting mostly because of the discussion of optimal flow. I have noticed before that I go into states of complete focus in situations from programming to video games. I wish after reading about it that people knew more about getting into and out of this state, since I would love to go into a Zen trance on command.
Chapter #3:
Summary:
In this chapter, Norman discusses the power of representation -- that is, how tasks can be made easier or more difficult by changing how we look at them. From different types of numerals to different color usages in graphs, the way we display information can be almost as important as the information itself, since it can affect whether we look at it in an experiential or reflective mindset.
Discussion:
This chapter was interesting because of how obvious it quickly became as he went through each example that ordering information right is important. My personal favorite was the tic-tac-toe example, since it showed a method of representing the board on a computer that I had never thought about before.
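If I remember the example correctly, it rests on the 3x3 magic square: label the cells with the numbers 1 through 9 so that every row, column, and diagonal sums to 15, and "three in a row" becomes "three claimed numbers summing to 15". A minimal sketch of that representation, assuming this is the example in question:

```python
# Tic-tac-toe re-represented via the 3x3 magic square (assuming this is
# the isomorphism Norman discusses): label the cells so that every
# winning line is exactly a triple of numbers summing to 15. Checking
# for a win then needs no board geometry at all.

from itertools import combinations

MAGIC = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

def has_won(cells):
    """cells: the set of magic-square numbers a player has claimed."""
    return any(sum(triple) == 15 for triple in combinations(cells, 3))

# Top row of the board -> cells 2, 7, 6 -> sums to 15 -> a win.
print(has_won({2, 7, 6}))   # -> True
# Cells 2, 7, 9 don't form a line (sum 18).
print(has_won({2, 7, 9}))   # -> False
```

The same game becomes a purely numerical problem, which is exactly Norman's point: the choice of representation decides how hard the task feels.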
Chapter #4:
Summary:
In this chapter, Norman describes how what object we choose for assistance can help or hinder our task. From newspaper versus television to digital versus analog, the correct choice of entertainment or watch can make all the difference with how enjoyable we find the experience.
Discussion:
This chapter was good because we finally got our feet wet with some design concepts. The discussion about how computer interfaces that succeed give the best representation for the task was the most helpful part.
Title: Things that Make us Smart
Author: Donald Norman
Editor:Bill Patrick, 1993
Chapter #1:
Summary:
In this chapter, Norman talks about how technology is both helping and hindering us. He then talks about how he thinks this could be changed if we shifted from technology-centered design to human-centered design. Finally, he discusses two different thinking modes for people, and how technology can help and hinder them.
Discussion:
I don't think that this chapter was very exciting, but it does a good job of setting the stage for the chapters to come. I did find the descriptions of modes of the mind interesting though, and I am curious if he is going to describe more of them later.
Chapter #2:
Summary:
In this chapter, Norman goes into more detail about the experiential and reflective states of the mind, and how technology lures us into these states. He also talks about the three levels of learning: accretion, tuning, and restructuring, and how many new-age education methods help very little with any of these learning types.
Discussion:
I thought this chapter was interesting mostly because of the discussion of optimal flow. I have noticed before that I go into states of complete focus in situations from programming to video games. I wish after reading about it that people knew more about getting into and out of this state, since I would love to go into a Zen trance on command.
Chapter #3:
Summary:
In this chapter, Norman discusses the power of representation -- that is, how tasks can be made easier or more difficult by changing how we look at them. From different types of numerals to different color usages in graphs, the way we display information can be almost as important as the information itself, since it can affect whether we look at it in an experiential or reflective mindset.
Discussion:
This chapter was interesting because of how quickly it became obvious, as he went through each example, that representing information correctly is important. My personal favorite was the tic-tac-toe example, since it showed a method of representing the board on a computer that I had never thought about before.
Chapter #4:
Summary:
In this chapter, Norman describes how the objects we choose for assistance can help or hinder our tasks. From newspaper versus television to digital versus analog watches, the right choice of entertainment or timepiece can make all the difference in how enjoyable we find the experience.
Discussion:
This chapter was good because we finally got our feet wet with some design concepts. The discussion about how computer interfaces that succeed give the best representation for the task was the most helpful part.
Tuesday, March 29, 2011
Book Reading #4 - Emotional Design
References:
Title: Emotional Design
Author: Donald Norman
Editor: Jo Ann Miller, New York, 2004.
Summary:
In this book, Norman describes how the designs of products can positively or negatively impact our perceptions before we even use them. He describes these influences as three different levels -- visceral, behavioral, and reflective.
The visceral level is influenced by the look and feel of the product. This level occurs before a user begins to use the product, and can put them in a positive mindset which can make the product "work" better.
The behavioral level is influenced by how well the product works. This level occurs as the user makes use of the product, and is slightly affected by the performance at the lower level. This level has been well described in Norman's other books.
The reflective level is influenced by memories of the product. This level occurs long after use of the device; a positive influence here can bring the user back to the product again if it works correctly.
For each of these levels, Norman goes into significant detail about what influences it, as well as giving general design tips.
Discussion:
In general, like his last book, I liked this book because of how much I learned. However, after reading his last book, he seems like less of a sage in this one, mostly because he often reuses tips he has already given.
However, because the focus of this class is on design, I think that this along with the other design books by Norman are the most important books in this class. Following these design concepts well can help us make good products far down the road.
(Image courtesy of: )
Microblogs #7 - Why We Make Mistakes
References:
Title: Why We Make Mistakes
Author: Joseph T. Hallinan
Editor: Donna Sinisgalli, 2009
Chapter #0:
Summary:
In this chapter, Hallinan describes a few examples of common mistakes, why they can sometimes be helpful, and also some ways we can fix them. He then goes into an overview of the topics of the book in brief.
Discussion:
This chapter was interesting because of all of the experiments he talked about related to mistakes. My definite favorite was being able to remember better if you studied in the same situation as you are remembering. I want to try this for a test at some point.
Chapter #1:
Summary:
In this chapter, Hallinan describes how people don't see as well as they think they do. From missing the switching of people in a movie scene to misjudging the sizes of tables, he describes many examples of our lack of sight. He also mentions that this is one kind of mistake we cannot correct.
Discussion:
I really enjoyed this chapter because the author presented many examples that I enjoyed reading about and subjecting myself to. In addition, he shared some scary facts about cancer examinations and TSA screenings.
Chapter #2:
Summary:
In this chapter, Hallinan discusses errors in our memory. He talks about forgetting passwords, faces, and names. He then talks about how our minds remember based on meaning, not base observations, and how by applying meaning to objects in your life, you can help your own memory.
Discussion:
As with the last chapter, the most memorable part was the finish, in which he describes the unreliability of eyewitness testimony. I figured it was quite reliable, as I am sure most people do, so learning that it is not was shocking.
Chapter #3:
Summary:
In this chapter, Hallinan describes how we rely on our first instinct in many situations when we shouldn't. From the influence of faces to the influences of colors, he shows that we allow our initial impression to take control.
Discussion:
The most interesting part of this chapter for me is again at the end, when he talks about switching test answers. I remember when I was studying for the SAT that I was told not to switch my answer, and it still seems right to me now. I don't know if given the chance I could change my answer on a test even if I wanted to.
Chapter #4:
Summary:
In this chapter, Hallinan describes how our perceptions are biased, often without us knowing. He talks about how we like to make ourselves look better in hindsight -- from grades, to the statements we make -- and how we deny it later. He then describes the negative influence this causes in the real world.
Discussion:
The part of this chapter that was the most eye-opening was the section discussing the effects of industry warnings. I found it interesting that people will take greater advantage of you when you have been warned about them than when you have not.
Chapter #5:
Summary:
In this chapter, Hallinan discusses our ability to multitask -- or more accurately how we do not have an ability to multitask. He talks about how the constant distractions of multiple tasks actually impede our ability to work. He then describes how this can be a fatal problem in real-time situations such as planes or cars.
Discussion:
This chapter had a lot of scary examples that truly hammer in the point of how bad distractions can be. When I first saw the Microsoft Sync system, I thought it was a really cool idea, but now, after seeing all of the statistics on accidents caused by such systems, I think that maybe they should hold off on it.
Chapter #6:
Summary:
In this chapter, Hallinan discusses the psychological influences of framing -- that is, influences surrounding a decision that affect it, consciously or otherwise. For example, low sale prices drive sales even when other prices actually rise, and French music increases French wine sales.
Discussion:
This chapter was interesting because you get to see more of the tactics of economics. So far, many of our books have pointed out the interesting psychology behind making money -- and every time I've seen it so far, it's been frighteningly effective. I am never going to think about buying things in the same way again.
Chapter #7:
Summary:
In this chapter, Hallinan describes the human tendency to skim -- not just with words, but also with information. He discusses the problems with proofreading as well as how much we rely on context to determine the truth of a situation.
Discussion:
This chapter was interesting to me because of the section on how experts tend to skip over the details. For programmers like us, this is probably a liability, since we might make small mistakes and it will take us forever to finally discover the real problem. This is also why it's a good idea to have someone else look over your code.
Chapter #8:
Summary:
In this chapter, Hallinan discusses the human trait of reorganizing information. From distorting locations in maps to distorting the truth in stories, people change the information that they give all of the time. Even more interestingly, they sometimes do it unconsciously.
Discussion:
This chapter was interesting because Stanley Milgram got mentioned again. However, for once, he wasn't mentioned solely for the obedience to authority experiment, although it was mentioned. I have to say that the experiment he mentioned here seemed a lot less interesting than the authority experiment, though.
Chapter #9:
Summary:
In this chapter, Hallinan describes the differences in psychology between the sexes. From asking for directions to confidence in stock trading, men and women have differing viewpoints on the world.
Discussion:
By far the most interesting part of this chapter was the section on how overconfident men tend to be. I think this is probably because we are raised to show ourselves as confident even when we are not. The section on the lack of women in CS was also interesting to read, mostly because I can see how true it is by looking at our department.
Chapter #10:
Summary:
In this chapter, Hallinan shows us the power of overconfidence in shaping our mistakes. From horse races to credit cards, people everywhere are taking advantage of our tendencies to think we are better than we are.
Discussion:
This chapter was interesting because of how many of these mistakes I have made. Both the gym example and the confidence quiz example made a fool out of me. Interestingly enough though, I think that overconfidence is actually necessary; if we didn't have it, we would probably be frozen whenever we had to make a choice.
Chapter #11:
Summary:
In this chapter, Hallinan describes the errors that come because we would rather not learn new things. From nail-guns to simple puzzles, we would rather not read the instructions -- and even if we do, we probably won't do it a second time.
Discussion:
This chapter was interesting because we got to hear from our best friend Donald Norman again. Also, I liked the thinking-outside-the-box problem, even though I didn't figure it out.
This probably has some lessons for computer scientists, too. For example, people probably won't use all of the new features on your product because they like doing it their way -- even if the new feature makes that method shorter.
Chapter #12:
Summary:
In this chapter, Hallinan discusses the problems we have when we do not use constraints. Without constraints and affordances, people are significantly more likely to make mistakes. Additionally, since these mistakes are often blamed on the lowest person, we also miss their root causes.
Discussion:
This chapter was interesting because Donald Norman appeared again in this book, but not by name. Instead, his concepts of affordances and constraints showed up. I guess that shows how useful they are.
Chapter #13:
Summary:
In this chapter, Hallinan describes the effects of our future perspectives. He shows us that while we think we know what we will like later, often we are wrong. In fact, many of the people we expect to be least happy are actually the happiest, for example.
Discussion:
I thought this chapter was interesting because I never knew there was such a flow of immigration into and emigration out of California. I have relatives there and visit regularly and would NEVER want to live there. I think that I probably have incorrect perceptions about living in other areas, though.
Conclusion:
Summary:
In this chapter, Hallinan sums up the book by giving us some fixes we can apply to our own lives to keep from repeating the errors he has shown. His biggest tip is just to think small -- that is, look at the small details behind your actions and you will be able to see why you make mistakes.
Discussion:
I liked this chapter because we finally get a list of tips we can use to keep from making all of these errors ourselves. Hallinan does a good job of iterating through all of the chapters and giving small tips that can help us. I don't know if I will be able to learn any or all of them, but it's something to strive for.
Paper Reading #18 - News Browsing
Comments:
Comment 1
Comment 2
References:
Title: Aspect-level News Browsing: Understanding News Events from Multiple Viewpoints
Authors: Souneil Park, SangJeong Lee, and Junhwa Song
Venue: IUI 2010, Feb. 7-10 2010
Summary:
In this paper, the authors describe a method of alleviating media bias in the news by providing varied versions of stories on the same subject. They call this method aspect-level news browsing.
Their system involves partitioning articles about a given topic into different quadrants depending on each article's subject matter. They do this by analyzing articles in two ways. First, they dissect each article and examine its first paragraph, where journalists tend to cluster the main information. Second, they analyze articles from near the beginning and end of an event, since journalists tend to report on more diverse aspects of an issue as time passes after the event. They then evaluate their system by comparing its results against those of other algorithms.
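To make the partitioning idea concrete, here is a minimal sketch of bucketing articles into quadrants by two binary signals. The specific features (whether the lead paragraph focuses on the main actor, and whether the article appeared early or late in the event's life) are hypothetical stand-ins for illustration, not the authors' actual classifier.

```python
# Toy quadrant partitioning: two binary features yield four buckets.

def quadrant(article, midpoint):
    """Assign an article to one of four (focus, timing) quadrants."""
    focus = "main" if article["main_actor_in_lead"] else "peripheral"
    timing = "early" if article["published_day"] <= midpoint else "late"
    return (focus, timing)

articles = [
    {"main_actor_in_lead": True,  "published_day": 1},
    {"main_actor_in_lead": False, "published_day": 6},
]
buckets = {}
for a in articles:
    buckets.setdefault(quadrant(a, midpoint=3), []).append(a)
print(sorted(buckets))  # two of the four possible quadrants are occupied
```

A real system would derive these signals from text analysis; the bucketing step itself stays this simple.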
Discussion:
I think that this system is a great idea. I personally know family members who will not talk to each other because they are so politically polarized from each other. Hopefully with a system like this they could learn to analyze the issues further.
In reality though, I think that a system like this won't help because most people won't spend the time reading multiple articles. Most people will just read one article and move on, which completely ruins the point of the program.
(Image courtesy of: Frogtown blog)
Paper Reading # 17 - Personalized Reading Support
Comments:
Comment 1
Comment 2
References:
Title: Personalized Reading Support for Second-Language Web Documents by Collective Intelligence
Authors: Yo Ehara, Nobuyuki Shimizu, Takashi Ninomiya, and Hiroshi Nakagawa
Venue: IUI 2010, Feb. 7-10 2010
Summary:
In this paper, the authors describe a new method of providing definitions for ESL readers through information gathering. Many people for whom English is a second language use programs called glossers when reading; these programs provide definitions for unfamiliar words. Most glossers automatically show definitions for some words, choosing the words that appear less frequently in the language.
With their program, they instead choose words based on which words each individual user clicks on for definitions. They use this information to calculate a person's difficulty index; that is, they determine which words a user is likely to know and only gloss words above that difficulty. They discuss many algorithms they could have used, and then show that only one was suitable because it is the only one that works online. Finally, they show that the online algorithm is just as efficient as local algorithms.
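The threshold idea can be sketched roughly as follows. This is a hypothetical simplification, not the paper's actual collective-intelligence algorithm: word "difficulty" here is a made-up per-word score, and the threshold is estimated from a single user's click history rather than from many users.

```python
# Minimal sketch of threshold-based glossing (illustrative only).

def estimate_threshold(click_history, word_difficulty):
    """Estimate a reader's difficulty threshold as the lowest
    difficulty among words they clicked for definitions."""
    clicked = [word_difficulty[w] for w in click_history if w in word_difficulty]
    return min(clicked) if clicked else float("inf")

def words_to_gloss(text_words, word_difficulty, threshold):
    """Automatically gloss every word at or above the threshold."""
    return [w for w in text_words
            if word_difficulty.get(w, 0) >= threshold]

word_difficulty = {"the": 1, "dog": 1, "ubiquitous": 8, "ameliorate": 9}
threshold = estimate_threshold(["ubiquitous"], word_difficulty)
print(words_to_gloss(["the", "ameliorate", "dog"], word_difficulty, threshold))
# -> ['ameliorate']
```

A reader who never clicks anything gets an infinite threshold and sees no glosses, which matches the intuition that the system adapts per user.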
Discussion:
I thought this article was interesting because I had never heard of a glosser before, and now that I have, I feel like I could use one sometimes, especially when reading papers like this one. The paper was very technical, which I feel is a positive, but I had a significant amount of trouble following the algorithmic analysis in the latter part of the paper. Overall, however, I feel that their method would work better than previous methods and that they should make a final product out of this.
(Image courtesy of: this paper)
Paper Reading #16 - UIMarks
Comments:
Comment 1
Comment 2
References:
Title: UIMarks: Quick Graphical Interaction with Specific Targets
Authors: Olivier Chapuis and Nicolas Roussel
Venue: UIST 2010, Oct 3-6, 2010
Summary:
In this paper, the authors describe a system called UIMarks for integrating target-aware pointing techniques with normal pointing techniques. The system allows for the programming of hot spots like the one pictured on the right. The user can activate these nodes by switching into a special pointing mode and moving a bubble cursor towards one. Then, the system performs the action specified by the node. The hot spots have symbols on them to indicate the actions they perform; in this case, the hot spot will single-click on the icon and return to the previous mouse position.
They then perform a study to determine the usability of the pointing system. They found that for most complex clicking tasks, UIMarks is faster than the traditional pointing method. However, if it is only used for mouse movement and not clicking, the system is slower than the traditional method. They then describe some future studies they would like to perform with the system.
Discussion:
I think that this is a reasonably good pointing system, but I'm not sure if many people would use it. I believe that having to program the system to provide the marks makes it a little too difficult and time-consuming for most. However, for power users, the system would be a boon. Being able to quickly move and click icons would be very useful; for example, the example above shows possibilities for Photoshop.
(Image courtesy of: the UIMarks paper)
Book Reading #3 - Obedience to Authority
References:
Title: Obedience to Authority
Author: Stanley Milgram
Editor: 1975
Summary:
In this book, Milgram details his famous experiment on the effects of authority, from its inception to the many different iterations of the experiment that were performed.
Milgram begins by discussing how he conceived of the experiment and how the candidates were recruited. He then briefly describes the results that people predicted when he surveyed them beforehand.
Then, he begins describing the various versions of his experiment. He mentions many different iterations, including the learner in a separate room, conflicting orders, female teachers, and even one where the authority is not in the room. After each set of experiments, he gives stories of individual subjects, which further illustrate the quandaries the subjects were in.
At the end of the book, he describes in deep detail his theories on why we are so susceptible to authority, from both a biological and mechanical perspective. He then closes the book by refuting some of the most basic arguments against his theories.
Discussion:
I really liked this book because of how it changed my perceptions of the experiment. In the other book we read as well as the IRB tests, they lambasted Milgram and his experiment. However, even though the experiment definitely caused people some problems, it also taught people valuable lessons about themselves.
Furthermore, because of the detailed manner in which he experimented, I feel like Milgram gave us valuable knowledge about the nature of authority. In fact, in a few parts of the book where he mentioned experiments he was unable to do, I was saddened because I was curious what theories he could have gleaned had he been able to do them.
(Image courtesy of: All About Psychology)
Ethnography Results - Week 7
The only local band playing this weekend cancelled, so we were unable to attend a concert.
Tuesday, March 22, 2011
Ethnography Results - Week 6
I was out of town both weekends of the break and was thus unable to attend a local concert.
Paper Reading #15 - Jogging over a Distance
Comments:
Comment 1
Comment 2
References:
Title: Jogging over a Distance between Europe and Australia
Authors: Florian Mueller, Frank Vetere, Martin Gibbs, Darren Edge, Stefan Agamanolis, and Jennifer Sheridan.
Venue: UIST 2010, Oct 3-6, 2010
Summary:
In this paper, the authors describe a framework for distributed social exercising called Jogging over a Distance. The system allows two users to jog together regardless of their locations, hopefully allowing them to exercise more effectively.
The system works by having a headset connected to a heart monitor, a mobile phone, and a small computer, which makes it similar to the Nike plus system pictured at left. After setting a heart rate, the user can converse with their workout partner over the course of the exercise session using the headset.
The social aspect comes in because, instead of displaying results after the workout, real-time heart rate information is conveyed to the user and their partner through the direction of the sound during their conversation. If the user is not working as hard as their partner, the partner sounds like they are ahead, and vice versa. This links the desire to talk to the desire to work out, which strengthens the jogger's resolve.
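The paper does not give the mapping function, but a minimal sketch of how the effort difference might drive the perceived sound position could look like the following; the function name, the `spread` scaling constant, and the target-rate normalization are my own assumptions, not values from the paper:

```python
def pan_for_heart_rate(user_hr: float, partner_hr: float,
                       target_hr: float, spread: float = 30.0) -> float:
    """Map an effort difference to a front/back audio position.

    Returns a value in [-1.0, 1.0]: positive means the partner's
    voice is rendered ahead of the user (the partner is working
    harder), negative means behind. `spread` (in bpm) controls how
    quickly the voice moves and is an assumed constant.
    """
    # Effort is measured relative to each jogger's own target rate,
    # so two joggers of different fitness can still be "even".
    user_effort = user_hr - target_hr
    partner_effort = partner_hr - target_hr
    diff = (partner_effort - user_effort) / spread
    # Clamp so the voice never moves past fully-ahead/fully-behind.
    return max(-1.0, min(1.0, diff))
```

Normalizing by a personal target rate rather than comparing raw heart rates matches the paper's idea that each jogger sets their own desired intensity before the run.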
The authors conducted a usage study on the framework and found that most users liked the system and thought it helped their workouts. Users reported working harder because they wanted to be able to hear the conversation better.
Discussion:
This paper is interesting because this is a product I could see myself using. In order to stick to a workout plan, I need to have another person working with me. A system like this could allow me to expand the pool of potential partners beyond the local area.
Additionally, I think the system has more applications than just exercise. A more refined version of this system could probably allow people to "race" each other on foot without being in the same location, using GPS or other methods. Additionally, a further social aspect could be added by allowing people to randomly connect to other joggers if they don't have someone available to partner with. My other concern with the current implementation is all of the equipment involved. They need to make a more compact prototype to determine how feasible this design would be as a real product.
(Image courtesy of: Apple Gazette).
Monday, March 21, 2011
Microblogs #6 - Obedience to Authority
References:
Title: Obedience to Authority
Author: Stanley Milgram
Editor:
Chapter #1:
Summary:
In this chapter, Milgram describes the events of WWII that inspired the experiment, as well as some of the difficulties of finding an answer to this question. He then briefly describes the surprising findings of the experiment.
Discussion:
This chapter was a solid beginning to the book. Milgram clearly provides his reasoning behind beginning this now controversial experiment, and so far I can agree with his principles.
Chapter #2:
Summary:
In this chapter, Milgram describes the experimental setup from invitation to debriefing and details how the participants were dealt with in each. He also describes briefly some of the iterations they went through in the development of the experiment.
Discussion:
This chapter was interesting because we got to see the scientific basis behind the experiment. I enjoyed seeing some of the photos of the equipment and people behind the act. Also, I find it interesting that they had to scale up the experiment to show less amazing results, instead of trying to pad it to get more amazing results.
Chapter #3:
Summary:
In this chapter, Milgram describes a survey he performed, in which he asked psychiatrists, students, and other people to predict how they and others would perform. The respondents believed that very few people would go to the maximum level, and that they themselves definitely would not.
Discussion:
This chapter is interesting mostly because I already know the results. If I didn't already know the results, I would probably be caught off-guard just as much as the people who took this poll. However, knowing the outcome, the irony and foreshadowing here is quite interesting to watch.
Chapter #4:
Summary:
In this chapter, Milgram details the influence of proximity to the victim on obedience. He finds that the closer the subject was to the victim, the more the subject identified with him and the less likely the subject was to obey. He also gives a few reasons why this could be.
Discussion:
This chapter was interesting because I didn't know from any of the other books that they tried varying how much the subject and victim interacted. I was equally surprised by the fact that, even when the subjects had to physically force the victim to take shocks, more than a quarter of subjects went to maximum power.
Chapter #5:
Summary:
In this chapter, Milgram describes the reactions of many of the subjects of the experiments. While he doesn't show any anecdotes from the least proximity experiment, he shows a large gamut of responses from different people.
Discussion:
This chapter was interesting because, like in Skinner's Box, we can see how the differing values of each subject not only affect how they respond during the experiment, but also how they are affected afterwards. I was actually not surprised at all when the military man followed through with barely any pushing.
Chapter #6:
Summary:
In this chapter, Milgram describes a few other versions of the experiment, attempted to discern how much effect differing social factors had on obedience. Some of the changes they tried were female subjects, a non-local scientist, and having the subjects choose the voltage. Many of them had no effect on the results, but a few changed them drastically.
Discussion:
This chapter was interesting because we got to see many different theories as to the powers of authority. My favorite part was seeing the passive-aggressive behavior displayed by the subjects when the scientist character was not in the room.
Chapter #7:
Summary:
In this chapter, Milgram describes more examples of reactions given by participants, in this case participants in the special situations shown in chapter 6. Most of the focus is on the female subjects in this set.
Discussion:
This chapter was interesting because we get to see more human reactions to this unreal situation. I don't have a favorite out of this batch, but that's because there are quite a few that I thought were quite intriguing responses.
Chapter #8:
Summary:
In this chapter, Milgram describes further modifications to the experiment. In this grouping, they tested different combinations of scientist and ordinary man as victim and authority. They found that the ordinary man could not wield authority as the scientist did, but the scientist could easily become the victim.
Discussion:
This was interesting because of how clear-cut the responses were when the authority became the victim and when the victim asked for the shocks. In both cases, every subject stopped shocking. I figured that if the victim wanted to continue, there would be a decent chance that someone would go even just one step further up the board, but apparently not.
Chapter #9:
Summary:
In this chapter, Milgram describes two more modifications to his experiment, involving the influence of others on our willingness to obey. In these modifications, he tests the effects of letting someone else do the dirty work and of having others disobey first.
Discussion:
Even though it seemed obvious, it was interesting to see that people were more likely to quit when someone else did first. Additionally, finding out that having someone else perform the bad act decreased the disobedience rate was intriguing as well.
Chapter #10:
Summary:
In this chapter, Milgram describes his theories of why people are obedient. He frames obedience using a cybernetic model as well as an evolutionary one, and then uses these to define a state of mind he calls the agentic state.
Discussion:
This chapter was interesting because we finally get to see the theories he creates to explain the data. I have to admit that the descriptions of both models bored me a little, but I am curious to see how he ties them together in the next chapter.
Chapter #11:
Summary:
In this chapter, Milgram describes the agentic state from its inception to how we enter and exit it. He describes the factors that teach us to obey when we are young, what the agentic state feels like to us, and the factors that keep us from exiting it.
Discussion:
This chapter was interesting because as he was listing off the factors, I was able to think of situations that I had been involved with that I had felt that way in. I was also interested to learn of why we have so much trouble disobeying.
Chapter #12:
Summary:
In this chapter, Milgram begins to theorize how disobedience begins through the process of strain. He describes some of the methods we use to reduce the strain and how it may be a positive thing.
Discussion:
This chapter was interesting because he actually turned around the perspective. Suddenly, he begins to describe disobedience as a positive, and then describes obedience in many situations as being negative.
Chapter #13:
Summary:
In this chapter, Milgram describes the most common hypothesis offered to counter his theory, aggression by the participants, and gives his own rebuttal. He explains why most people think aggression is the true cause, and why his account fits the data better.
Discussion:
This chapter was interesting because we get to see how he counters what seems to be the obvious explanation for the behavior in his experiments. I liked how he gave examples not only from his experiments but also others to show how incorrect that hypothesis is.
Chapter #14:
Summary:
In this chapter, Milgram continues to defend his findings by rebutting claims that his method was flawed. He shows that he did select a diverse population, that subjects did believe they were administering real shocks, and that laboratory results can be extrapolated to real life.
Discussion:
I liked this chapter mostly because I was impressed by Milgram's arguments. I thought of a few of these arguments, and was pleased enough by his rebuttals to believe that the experiment was accurate.
Thursday, March 17, 2011
Book Reading #2 - Opening Skinner's Box
Comments:
Comment 1
Comment 2
References:
Title: Opening Skinner's Box
Author: Lauren Slater
Editor: Angela von der Lippe, 2004
Summary:
In this book, Lauren Slater describes ten different psychology experiments that broke the mold and became controversial for one reason or another.
In each chapter, Slater carefully describes the story behind one side of the issue, usually the scientist's, describing their past up until they began their experiment. Then, she describes counterarguments that refute the previous experiments. Occasionally these reversals are scientifically based, but more often they are personal stories of people whose actions refute the theory.
For example, for B.F. Skinner, she described his past and experiments, including his cages like the one at left. She also described some of the rumors and legends about him that have propagated over the years. Then, she changed sides and showed the softer side of him from the perspective of his family members, and tried to refute the rumors.
Discussion:
I have mixed feelings about this book. On one hand, it was a very interesting read and set up the debates on these issues very well. By the end of each chapter, I felt that I could argue both for and against the issue at hand in equal measure. Additionally, the narrative style made most of the stories riveting from start to finish.
However, I didn't like a lot of the chapter material. Many of the stories were so depressing that I could barely keep reading. I understand that it is unavoidable in some cases because of the subject matter, but I feel like in some cases it was unnecessary. In summary, while I enjoyed reading the book the first time, I'm probably not going to read it again.
(Image courtesy of: blog.games.com)
Saturday, March 5, 2011
Ethnography Results - Week 5
This week, Joe, Shena, and I went to downtown Bryan on Friday night to the Lonely Hunter release party. Additionally, since it was First Friday, we walked around and examined some of the music events going on in the area.
First, we looked around and examined the music going on in the area for First Friday. Two music events were happening outside the concert. There was a TAMU drum group performing, which unfortunately ended soon after we arrived. However, next to a wine bar in the area was a small jazzy group whose name we didn't get. They had an audience of about 50 people who were much older than the crowds at all of the concerts we had attended so far; I would guess that the average age was about 35-40 years old.
Then, we made our way to Stafford Main Street, where the release party was being held. Stafford Main Street is a renovated theater turned into a bar which hosts live music weekly. The bar is kind of hard to find, with only a small sign next to the door that I almost walked straight by.
We walked in during the end of Bobby Pearson's performance. They were an acoustic rock band of some sort, but we were unable to get a good idea of their style before they finished their set. There were about 65 people in the bar, but most of them were at the back and didn't seem interested in the performance.
Next, at 9:50, Mike Mains and the Branches came on. They had a more traditional rock feel, and also did some really bizarre gestures during the performance. The crowd didn't grow much if at all during the performance, but they seemed much more focused on the act than earlier.
At 10:40, Gaitlin Elms came on, and they were a softer rock band compared to the others we had heard so far. The crowd had swelled to about 80 at this point, and a great majority were at the front with the band.
The final band we watched, Lonely Hunter, came on at 11:30. They had the most developed sound of the groups, and were the ones putting out their new CD. At this point, the bar was packed at the front, with some room at the back.
We probably won't be going back to this bar for our studies. As you can probably see in the shots, the bar has bad ventilation, and because smoking indoors is legal in Bryan, it was extremely smoky.
One conclusion I have come to after this week is that release parties seem to bring in more bands, as well as bigger bands, than normal nights do. Half of the bands were selling CDs and shirts, and most of them were from out of town, as opposed to the local bands we have seen lately.
First, we looked around and examined the music going on in the area for First Friday. Two music events were going on outside the concert. There was a TAMU drum group going on, which ended soon after we arrived, unfortunately. However, next to a wine bar in the area was a small jazzy group whose name we didn't get. They had a group of about 50 people who were much older than all of the concerts we had been at so far. I would guess that the average age was about 35-40 years old.
Then, we made our way to Stafford Main Street, where the release party was being held. Stafford Main Street is a renovated theater turned into a bar which hosts live music weekly. The bar is kind of hard to find, with only a small sign next to the door that I almost walked straight by.
We walked during the end of Bobby Pearson's performance. They were an acoustic rock band of some sort, but we were unable to get a good idea of their style before they finished their set. There were about 65 people in the bar, but most of them were at the back and didn't seem interested in the performance.
Next, at 9:50, Mike Mains and the Branches came on. They had a more traditional rock feel, and also did some really bizarre gestures during the performance. The crowd didn't grow much if at all during the performance, but they seemed much more focused on the act than earlier.
At 10:40, Gaitlin Elms came on, and they were a softer rock band compared to the others we had heard so far. The crowd had swelled to about 80 at this point, and a great majority were at the front with the band.
The final band we watched, Lonely Hunter, came on at 11:30. They had the most developed sound of the groups, and were the ones putting out their new CD. At this point, the bar was packed at the front, with some room at the back.
We probably won't be going back to this bar for our studies. As you can probably see in the shots, the bar has poor ventilation, and since smoking indoors is legal in Bryan, it was extremely smoky.
One conclusion I have come to after this week is that release parties seem to bring in more bands, as well as bigger bands, than normal nights. Half of the bands were selling CDs and shirts, and most of them were from out of town, unlike the local bands we have seen lately.
Friday, March 4, 2011
Paper Reading #14 - Sensing Foot Gestures
Comments:
Comment 1
Comment 2
References:
Title: Sensing Foot Gestures from the Pocket
Authors: Jeremy Scott, David Dearman, Koji Yatani, and Khai N. Truong
Venue: UIST 2010, Oct 3-6, 2010
Summary:
In this paper, the authors describe a method of input for mobile phones using the user's foot. To develop the concept, they performed a study to find which gestures have the highest accuracy and are most comfortable for the user, using the apparatus shown to the right. They discovered that heel rotation, toe flexing, and double-tapping the foot were the most accurate and comfortable.
Then, they created an iPhone application to recognize these gestures and tested its accuracy from different positions on the body. They found that a holster on the side and a side pocket are the most accurate positions, after a short machine-learning training period. They then describe the limitations of the current system, including differentiating running from the double-tap gesture and keeping the gestures accurate as the phone shifts around in the user's pocket.
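To get a feel for what this kind of recognition involves, here is a toy sketch of my own (not the authors' code) that classifies an accelerometer window into the three gestures the study favored. The features and thresholds are invented for illustration; the real system used a trained classifier rather than hand-set cutoffs.

```python
# Toy foot-gesture classifier sketch. A "window" is a short burst of
# (x, y, z) accelerometer samples, as a phone in a pocket might record.

def classify_gesture(window):
    """Return a guess among the three gestures the study found most usable."""
    # Sample magnitudes; a resting phone reads roughly 1 g.
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in window]
    peak = max(mags)
    # Count sharp local peaks above 2 g: a double-tap produces two impacts.
    sharp_peaks = sum(
        1 for i in range(1, len(mags) - 1)
        if mags[i] > mags[i - 1] and mags[i] > mags[i + 1] and mags[i] > 2.0
    )
    if sharp_peaks >= 2:
        return "double-tap"     # two distinct impacts in the window
    if peak > 1.5:
        return "toe-flex"       # a single moderate spike
    return "heel-rotation"      # slow, low-magnitude motion
```

A real recognizer would also have to reject walking and running, which is exactly the limitation the authors note: running produces repeated impacts that a naive peak count like this one would confuse with double-taps.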
Discussion:
This paper, while scientifically strong and original in concept, didn't interest me all that much. The uses for foot gestures don't seem readily apparent to me; additionally, I don't think it will be easy to distinguish inputs the user means to perform from innocuous activities like walking. I feel that other alternative input methods they listed early in the paper, such as speech input or rear buttons, would probably work better than this one.
Wednesday, March 2, 2011
Paper Reading #13 - Multitoe
Comments:
Comment 1
Comment 2
References:
Title: Multitoe: High-Precision Interaction with Back-Projected Floors Based on High-Resolution Multi-Touch Input
Authors: Thomas Augsten, Konstantin Kaefer, René Meusel, Caroline Fetzer, Dorian Kanitz, Thomas Stoff, Torsten Becker, Christian Holz, and Patrick Baudisch
Venue: UIST 2010, Oct 3-6, 2010
Summary:
In this paper, the authors describe their method of making a multitouch floor, show some applications made for the floor, and briefly preview the next stage of their project. They begin by listing a series of problems they faced during the design process, then enumerate the studies they performed to find solutions. Some of the problems they dealt with were control size, where to place the user's inputs, and which inputs users liked.
Then, in the second half, they detail the components that make up the floor and the applications developed for it. The floor consists of a projector, some IR LEDs, and an IR camera, covered by layers of glass, acrylic, and a projection screen. The floor uses diffuse illumination to capture the outline of a user's shoes (a), and uses the IR camera to measure the pressure exerted by the foot (c). By observing the shoe outline and the pressure, the system can identify users by their shoe pattern and subdivide the foot into pressure zones that can be watched for input. They demonstrated a fish tank game and a foot-controlled version of Unreal Tournament 2004 running on these processes.
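The pressure-zone idea above can be sketched in a few lines. This is my own illustration, not the Multitoe pipeline: I assume a simple three-way heel/arch/toe split of a foot's pressure image and report which zone carries the most pressure, which is the kind of signal a zone-based input scheme could watch.

```python
# Toy pressure-zone sketch. pressure_rows is a 2D list of pressure values
# covering one foot's contact area, with rows ordered toe-first along the foot.

def dominant_zone(pressure_rows):
    """Split the foot image into thirds and return the zone with the most pressure."""
    n = len(pressure_rows)
    zones = {
        "toe": pressure_rows[: n // 3],
        "arch": pressure_rows[n // 3 : 2 * n // 3],
        "heel": pressure_rows[2 * n // 3 :],
    }
    # Total pressure per zone; the heaviest zone is treated as the active input.
    totals = {name: sum(sum(row) for row in rows) for name, rows in zones.items()}
    return max(totals, key=totals.get)
```

For example, a user rocking back onto their heel would shift the total toward the "heel" zone. The real system works on camera-derived pressure images and per-user shoe outlines rather than a fixed thirds split.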
Discussion:
I am thoroughly impressed by this paper. The idea they have shown here is one I had never thought of before, but it is one I would like to play around with. Additionally, I am impressed with all the thought they put into devising this concept. They gathered a large amount of user input and used it to augment their design in the way Norman's books described, and they seem to have created a solid product. Also, I can't wait to see what they do with the concept they showed at the very end of the paper, although it seems to take up a significant amount of space.
Thursday, February 24, 2011
Ethnography Results - Week 4
This week, I went to Schotzi's again on Thursday night to watch two bands: Shane Smith and The Secret of Boris. The website said the show would start at 9:00, but when I arrived at 9:30 they were still sound testing. In fact, they continued sound testing until about 10:10, at which point some of the band members actually did karaoke inside. Despite the large genre difference between the artists, they seemed to have a real rapport with one another.
As far as attendance went, by 10:00 there were only 20 people in the bar, not including the artists. By 11:00, there were still only somewhere between 30 and 35. At about 11:30, however, that number nearly doubled. I believe this increase is due more to the time than to the bands, because most of these people didn't go outside or upstairs to watch them.
By the time I left, slightly past midnight, nearly 100 people were in the bar as a whole, but only about 20 were outside with Shane Smith and 5 were upstairs with the rock band. I am curious whether this is due more to the day or to the bands.
Next week, I am going to delay heading to the bar until 10:30 or 11, since it appears that the artists rarely get started until then.