Showing posts with label Special Reading.

Tuesday, April 12, 2011

Special Reading #4 - Media Equation

Comments:
Comment 1
Comment 2

References:
Paper 1:
Title: Machines and Mindlessness: Social Responses to Computers
Authors: Clifford Nass and Youngme Moon
Venue: Journal of Social Issues, Vol. 56, No. 1, 2000

Paper 2:
Title: Computers are Social Actors
Authors: Clifford Nass, Jonathan Steuer, and Ellen R. Tauber
Venue: CHI '94, April 1994

Paper 3:
Title: Can Computer Personalities Be Human Personalities?
Authors: Clifford Nass, Youngme Moon, BJ Fogg, Byron Reeves, and Chris Dryer
Venue: CHI '95, May 7-11 1995


Summary:
In these papers, the authors show that people unconsciously treat computers as people -- in some respects, anyway. When it came to factors like gender, race, and aggressiveness, their tests indicated that humans indeed applied these qualities to machines, even though participants themselves said doing so would be ridiculous.

They used many methods in their tests, but the most common was the three-computer setup shown at right. One computer presents information to the user, the user then takes a test on the second computer, and the third evaluates their score. The experimenters varied the use of male and female voices as well as aggressive and less aggressive language.



Discussion:
I wasn't very surprised by the results of these papers. I have referred to computers in a human way before, especially when frustrated. Also, when you give machines human voices, you make them seem more humanlike, so it's not surprising that gender and racial biases get applied.

However, the fact that these results are so evident means that we must be careful about which sound clips we use when we design programs with a voice component, as well as how we present information to the user through text. Incorrect usage could make our program seem rude and cause people to dislike it.

(Image courtesy of: Paper #2)

Wednesday, February 2, 2011

Special Reading #3 - Contextual Gaps

Comments:
Comment 1
Comment 2

References:
Title: Contextual Gaps: Privacy Issues on Facebook
Authors: Gordon Hull, Heather Richter Lipford, and Celine Latulipe
Venue:

Summary:
In this paper, the authors describe the changing nature of privacy in online social networking, using Facebook as their example. They open by describing changing social norms online in blogging and webcam culture, using cases such as Jennicam and Washingtonienne to illustrate the problems these have caused. The authors then map the same issues onto Facebook's third-party applications and News Feed. Many applications silently pull user data from Facebook, and many people are completely unaware that the applications are sharing data at all. The authors propose that users be better informed of the applications' data mining through a page that displays the shared information alongside pictures of their friends. Turning to the News Feed, they recount the huge backlash that followed its introduction. Users apply a social context to their actions online just as they do in real life, and a large, sudden change like the feed disrupts their concept of that context. Once users' contexts adapted, the complaints died out. The authors close by describing some slight changes that might improve Facebook's News Feed.

(Image courtesy of Technorati.)

Wednesday, January 19, 2011

Special Reading #2 - On Computers

Comments:
Comment 1
Comment 2

References:
Title: On Plants
Author: Aristotle
Venue: Edited by Jonathan Barnes; 1994

Summary:
In this paper, Aristotle argues that plants have souls in the same way that animals do. He does this by describing the great variety in plants, as there is in animals, as well as how plants grow and change during their lives like animals. He then discusses how plants are affected by the presence or absence of the elements, similar to animals. Finally, he describes how changes in location can change plants just as they change animals.



Discussion:
I found his argument interesting because it was made during the early days of science. Since people then did not have as much understanding of the processes of organic beings, they described them in terms of the elements. Despite this, I believe his argument was reasonably sound for its time. Since plants show the myriad variations that animals do, and since they respond to many stimuli as animals do, it is reasonable to assume that they would have souls like animals -- assuming you think souls exist, of course. Interestingly enough, since computers respond to outside input and require outside elements to work, a case could be made for computers having souls as well. However, after reading the Chinese Room paper, I am inclined to disagree.

(image courtesy of the Colorado Carnivorous Plants Society)

Special Reading #1 - The Chinese Room

Comments:
Comment 1
Comment 2

References:
Title: Minds, Brains, and Programs
Author: John Searle
Venue: Behavioral and Brain Sciences, 1980

Summary:
In this paper, John Searle argues that the traditional method of writing programs can never produce a true AI that understands the way humans do. He does this through a simple thought experiment, in which an individual is given a book containing a set of instructions for what to do when someone slips Chinese characters through a hole in the wall. The instructions would look something like this image: (citation below)

The person inside the room reads the instructions, looks at the slips of paper dropped through the hole, writes the prescribed characters on sheets of paper, and slides them through another hole.

Now, a Chinese speaker outside the room, assuming the program in the book was written well enough, might think that this room understands Chinese quite well. However, the person inside has no understanding of what the Chinese characters mean and, no matter how many times the task is performed, will never know what they mean. This means that this construct, and by extension a program following the same principles, cannot truly be intelligent. Searle then posits that the only way to make a true AI would be to emulate the processes of the human mind, which we still do not understand.
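The room's procedure is purely syntactic, which a tiny sketch makes concrete. The specific rules below are made up for illustration -- any mapping would do, which is exactly the point: nothing in the lookup depends on what the symbols mean.

```python
# A hypothetical page of the room's rule book: incoming character
# strings mapped to outgoing ones. The person in the room (or this
# function) only matches shapes and copies responses.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",      # "How are you?" -> "I'm fine, thanks"
    "你叫什么名字": "我叫小明",    # "What's your name?" -> "My name is Xiaoming"
}

def room(slip: str) -> str:
    """Look up the incoming slip and copy out the listed response.
    No step here involves the meaning of the characters."""
    return RULE_BOOK.get(slip, "对不起")  # fallback slip: "sorry"
```

To the outside observer the replies look sensible, yet the function understands nothing -- it would behave identically if every string were replaced by arbitrary squiggles.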

Image courtesy of: http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_chinese_room.php

Discussion:
I find this argument interesting because it destroyed my preconceptions of what AI could be. Especially in movies today, AIs are shown with levels of intelligence rivaling humans', and this paper convinced me that those portrayals are false. The argument that finally got to me was this: even though a computer can run a great simulation of an explosion, you do not expect to get hit by shrapnel -- so why would a simulation of intelligence produce a true intelligence? In fact, I can't think of a fault in the argument that does not discredit human intelligence enough to make it equivalent to machine intelligence.

Possible future work coming out of this paper could include research into real brains to identify the quality that AIs are missing, or research into AI programming methods that get around this limit.