
Tuesday, April 26, 2011

Paper Reading #27 - TeslaTouch

Comments:
Comment 1
Comment 2

References:
Title: TeslaTouch: Electrovibration for Touch Surfaces
Authors: Olivier Bau, Ivan Poupyrev, Ali Israr, Chris Harrison
Venue: UIST 2010, Oct 3-6, 2010

Summary:
In this paper, the authors describe a method of generating tactile feedback on touch screens using only electricity. This technology, which requires no moving parts, can generate very realistic touch sensations when the user moves their hand across the touch screen.

The authors first describe how the technology works and what they used to test it. TeslaTouch works by applying a periodic electrical signal to an electrode layer beneath a thin insulating coating on the surface. As the finger moves across the surface, an electrostatic attractive force is generated that the user perceives as friction. To test the technology, the authors applied the coating to a 3M multitouch table, shown above.
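
As a rough illustration of the underlying physics, here is a minimal Python sketch of the parallel-plate capacitor model commonly used to describe electrovibration. All parameter values are my own illustrative assumptions, not figures from the paper; note that because the force scales with the square of the voltage, the felt modulation occurs at twice the drive frequency.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_force(v_peak, freq_hz, t, area_m2=1e-4,
                        gap_m=5e-5, eps_r=3.0):
    """Instantaneous finger-surface attraction for a sinusoidal drive
    voltage, using a simple parallel-plate model. All constants here
    (contact area, effective gap, permittivity) are illustrative guesses."""
    v = v_peak * math.sin(2 * math.pi * freq_hz * t)
    return EPS0 * eps_r * area_m2 * v * v / (2 * gap_m ** 2)

# The friction the sliding finger feels is roughly mu * (load + attraction):
mu, finger_load = 0.5, 0.5  # assumed friction coefficient and press force (N)
for t in (0.0, 0.00125, 0.0025):  # samples across one 200 Hz drive cycle
    f_e = electrostatic_force(v_peak=100.0, freq_hz=200.0, t=t)
    print(f"t={t:.5f} s  attraction={f_e * 1000:.2f} mN  "
          f"friction~{mu * (finger_load + f_e):.4f} N")
```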

They then test how people react to the tactile feedback the technology generates. They found that they could produce a variety of sensations in users by slightly changing the frequency of the electrical signal. They also found that users can perceive this feedback at intensities comparable to conventional vibration feedback.

They then describe why this technology improves on current haptic vibration interfaces: it is silent, it can generate uniform tactile feedback over the entire surface, and it has no mechanical motion. However, traditional haptic vibration can generate stronger sensations than this system.

Discussion:
I am really excited by the prospect of this technology. I have thought for years that touch screens need better tactile feedback to truly work as well as traditional inputs, especially for tasks like typing. I really want to try this system myself, because it reportedly generates very realistic touch experiences.

I have two main concerns with this system, however. First, for multitouch surfaces, I am curious whether TeslaTouch can generate two different tactile sensations at once. If it can't, it's not a deal-breaker, but it could reduce how realistic the screen feels. Second, in the pictures above, the image appears slightly blurry. If the insulating layer makes the image less sharp, they will need to find a better coating before this technology is practical.

Paper Reading #26 - Critical Gameplay

Comments:
Comment 1
Comment 2

References:
Title: Critical Gameplay: Software Studies in Computer Gameplay
Author: Lindsay D. Grace
Venue: CHI 2010, April 10-15, 2010

Summary:
In this paper, the author describes tried-and-true game mechanics and then presents games that defy them. The objective of this research is both to expose the design assumptions that are common today and to show which game mechanics remain unexplored by the industry.

The first mechanic is friend-or-foe identification. Grace notes that most games let the player quickly identify enemies by appearance. To counter this, the paper presents the game at right, called Black and White, in which enemies and friends are the same color and the player must identify them by behavior rather than appearance.

Grace then describes games built on mechanics like collection, violence, and rushing through the game, and shows counterexamples that instead reward frugality, nonviolence, and calm observation.

Discussion:
I thought this paper was cool because it took all of the games that I am used to playing and turned them around. Some of the mechanics, for example trying to avoid item collection, sounded like they would be really interesting to play.

Interestingly enough, I have recently seen a game in which you have to judge characters by behavior rather than appearance, and it got really great reviews. So, I am interested in seeing whether some of these other mechanics might be used in games to make them even more original.

(Image courtesy of: this paper)

Wednesday, April 20, 2011

Paper Reading #25 - Email Overload

Comments:
Comment 1
Comment 2

References:
Title: Agent-Assisted Task Management that Reduces Email Overload
Authors: Andrew Faulring, Brad Myers, Ken Mohnkern, Bradley Schmerl, Aaron Steinfeld, John Zimmerman, Asim Smailagic, Jeffery Hansen, and Daniel Siewiorek
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a new mail system that uses AI to organize incoming e-mails into tasks. They then show that this rather different approach produces positive results.

They begin by describing the intricacies of the task system. When e-mails arrive in the inbox, an AI assistant parses each message and predicts which task classification it should fall under. Then, it either places the message into a classification directly or places it in an area where the user can choose.
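
The paper leaves the exact decision rule to the classifier, so here is a minimal sketch of that routing step, assuming a confidence threshold and a toy keyword-based classifier; the category names and threshold are my inventions, not the system's.

```python
# Toy keyword model standing in for the system's trained classifier.
TASK_KEYWORDS = {
    "schedule-meeting": ["meeting", "calendar", "schedule"],
    "review-document": ["attached", "draft", "review"],
    "reply-required": ["question", "asap", "please respond"],
}
CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff for filing automatically

def classify(text):
    """Score each task category by the fraction of its keywords present."""
    text = text.lower()
    scores = {cat: sum(kw in text for kw in kws) / len(kws)
              for cat, kws in TASK_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

def route(text):
    category, confidence = classify(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"filed under '{category}'"               # confident: file it
    return f"held for user review (guess: {category})"   # unsure: let user pick

print(route("Please review the attached draft before our meeting."))
print(route("Lunch?"))
```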

In addition, the e-mail client provides a scheduling interface, which also includes an AI assistant. This assistant looks through the e-mail tasks, estimates a reasonable amount of time for each, and builds a prioritized schedule for the user, who can then choose which tasks to work on.

They then show the system's effect on productivity. People using both the AI task assistant and the e-mail assistant tend to complete more meaningful tasks than those who do not. With only the e-mail assistant, users complete more tasks overall, but fewer important ones.

Discussion:
I was actually quite excited about this research. The idea of having a small AI assisting me with my tasks seems like a really cool, sci-fi idea. Additionally, even at this stage, it seems to be working well, so I hope they can actually bring this to market soon.

One concern I have with the software is that the authors do not describe how configuration will work. I am curious whether a final design would have editable categories, or whether the way the AI works means there will only be preset task categories.

(Image courtesy of: Download Software)

Paper Reading #24 - Finding Your Way

Comments:
Comment 1
Comment 2

References:
Title: Finding Your Way in a Multi-dimensional Semantic Space with Luminoso
Authors: Robert Speer, Catherine Havasi, Nichole Treadway, and Henry Lieberman
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a program called Luminoso that provides an interface for parsing sets of text documents and visualizing the relationships within them.

The Luminoso system displays the relations between text sets in N dimensions. These dimensions are created by counting the occurrences of words in each document, analyzing the meanings of the most frequent words, grouping words with similar meanings, and assigning a dimension to each group.
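
In spirit this is close to building a term-document matrix and factoring it. Here is a bare-bones NumPy sketch under that assumption; Luminoso's real matrix is far richer (it blends in common-sense knowledge), so treat this only as an analogue of the dimension-building step.

```python
import numpy as np

# Toy documents; word counts stand in for Luminoso's blended matrix.
docs = ["the battery life is great",
        "battery died fast and support was rude",
        "support was helpful and shipping was fast"]
vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

# SVD factors the matrix; each right-singular vector groups words that
# co-occur, and each such grouping acts as one semantic dimension.
u, s, vt = np.linalg.svd(counts, full_matrices=False)
for i in range(2):  # show the two strongest dimensions
    top = np.argsort(-np.abs(vt[i]))[:4]
    print(f"dimension {i}:", [vocab[j] for j in top])
```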

To examine the data, the program provides the interface shown above. Lines connect the words related to the current selection, and colors show how "hot" a particular relation is, from white at the top of the scale down to red. The user navigates by selecting a particular point and then rotating into the semantic dimension they want.

Discussion:
While I think it's important to be able to quickly navigate through text to find what you need in situations like surveys, I don't think this is the best navigation method. The concept of n-dimensionality is confusing to start with, and once you add the abstractness of the data being sorted along those dimensions, I would feel completely lost.

Also, this paper mentions a lot of sorting and data modification methods without defining them, so in many cases I was unable to understand the backbone behind how the system worked. I think they probably would have done better to lengthen the paper and add some short definitions for each term.

Sunday, April 17, 2011

Paper Reading #23 - Automatic Warning Cues

Comments:
Comment 1
Comment 2

References:
Title: Evaluating Automatic Warning Cues for Visual Search in Vascular Images
Authors: Boris van Schooten, Betsy van Dijk, Anton Nijholt, and Johan Reiber
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors study automatic warning systems for MRA (magnetic resonance angiography) images. They find that systems that warn more often, even at the cost of false positives, work better than systems that warn less often or not at all.

The authors created a warning system for viewing images of the vascular system. To test it, they created a series of test images containing vessel abnormalities, classified by difficulty and type of problem, for users and the system to find. They then had users hunt for these abnormalities using a system that warned often, a system that warned less often, a system with no warnings, and a "perfect" system that generated no errors.

They found, first of all, that the perfect system did best. After that, the system with more warnings did next best, even though previous studies had suggested the opposite. The less-warning and no-warning systems placed second-to-last and last, respectively.

Discussion:
I think that this research is of great importance, since proper detection of blood vessels could prevent heart attacks and other vascular issues. Furthermore, I am sure these warning systems have other applications that are just as useful.

One issue I have with the paper, however, is that the study was not big enough. The results from the small group they tested are nearly even between the false-positive-prone and false-negative-prone conditions; they should have gathered more data to see which one actually does better.

(Image courtesy of: Imaging Group)

Tuesday, April 12, 2011

Paper Reading #22 - POMDP Approach

Comments:
Comment 1
Comment 2

References:
Title: A POMDP Approach to P300-Based Brain-Computer Interfaces
Authors: Jaeyoung Park, Kee-Eung Kim, and Sungho Jo
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a method of minimizing the number of attempts a brain-computer interface (BCI) needs to identify what the user wants. They did this by creating a program that displayed a 2x2 or 2x3 matrix to the user and flashed prospective letters (at left).

They modeled the problem as a POMDP (partially observable Markov decision process), which maintains a belief about the user's intent and chooses which flashes to show next. The model required a lot of offline training before it could be used on human subjects. However, when they tested it on humans, they found that accuracy was much higher by 30 flashes than with the standard algorithm, and it maintained that accuracy all the way to the maximum of 60.
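
The belief-tracking half of that idea fits in a few lines. The sketch below assumes made-up detector accuracies and naive round-robin flashing; the paper's POMDP additionally plans which item to flash next, which is where its gains come from.

```python
import random
import numpy as np

letters = ["A", "B", "C", "D", "E", "F"]  # a 2x3 matrix of choices
P_HIT = 0.65    # assumed P(detector fires | flashed item is the target)
P_FALSE = 0.20  # assumed P(detector fires | flashed item is not the target)

belief = np.full(len(letters), 1.0 / len(letters))
target = 2  # the user is silently attending to "C"

for flash in range(60):
    i = flash % len(letters)  # naive round-robin, unlike the POMDP's policy
    fired = random.random() < (P_HIT if i == target else P_FALSE)
    # Bayes update: likelihood of the observation under each hypothesis.
    like = np.where(np.arange(len(letters)) == i,
                    P_HIT if fired else 1 - P_HIT,
                    P_FALSE if fired else 1 - P_FALSE)
    belief *= like
    belief /= belief.sum()
    if belief.max() > 0.99:  # confident enough to stop flashing early
        break

print(f"guess = {letters[int(belief.argmax())]} after {flash + 1} flashes")
```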

Discussion:
This paper was interesting because we get to see some more bleeding-edge research in BCI. The paper this time was significantly more readable than the previous one, but I still got lost in the probabilities.

One thing that bothers me again is that I don't really see the big benefits at this stage. Of course, EEG-based programs are still in their infancy, but I don't see how flashing letters can later become moving a pointer or typing with my mind. Maybe in a few years I will be able to see the connection.

(Image courtesy of: this paper)

Thursday, April 7, 2011

Paper Reading #21 - Automatically Identifying Targets

Comments:
Comment 1
Comment 2

References:
Title: Automatically Identifying Targets Users Interact with During Real World Tasks
Authors: Amy Hurst, Scott E. Hudson, and Jennifer Mankoff
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a method of gathering user click data in an accurate, device-agnostic way. They do this by using a hybrid method of kernel-based tracking combined with image identification.

Their user data gatherer, called CRUMBS, works on two levels. At the lower level, a series of data gatherers each report what they think the user clicked on; examples include Microsoft's Accessibility API, an image difference checker (shown above), and a template checker. At the higher level, a machine learning layer combines the reports from the low-level gatherers into a final decision about what the user clicked on.
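
A hypothetical sketch of that two-level combination, with invented per-gatherer weights standing in for the trained machine-learning model:

```python
# Assumed reliability weights; the real system learns its combination rule.
WEIGHTS = {"accessibility_api": 0.5, "image_diff": 0.3, "template_match": 0.2}

def combine(candidates):
    """candidates: list of (gatherer_name, proposed_target, confidence).
    Sum weighted confidence per proposed target and keep the best."""
    scores = {}
    for gatherer, target, confidence in candidates:
        scores[target] = scores.get(target, 0.0) + WEIGHTS[gatherer] * confidence
    return max(scores, key=scores.get)

reports = [("accessibility_api", "OK button", 0.9),
           ("image_diff", "OK button", 0.6),
           ("template_match", "toolbar icon", 0.8)]
print(combine(reports))  # two weaker agreeing votes beat one strong outlier
```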

With their method, they report a 92% correct click identification rate, which they note is higher than using the accessibility API alone. Furthermore, they mention that if they captured a larger portion of the screen on each click (they currently grab only a 300x300-pixel area), they could identify an even larger share of clicks correctly.

Discussion:
I think that CRUMBS could be a very useful tool for testing how users make use of your program. If data collection from other sources is as unreliable as the paper claims, then gathering real usage information for programs is very difficult, and this could help.

One thing I am curious about though is if this information gatherer must be turned on and off manually or if it does so automatically. Otherwise, it might gather usage information in a clicking space where it isn't needed, such as in a video game.

(Image courtesy of: this paper)

Tuesday, April 5, 2011

Paper Reading #20 - Data-Centric Physiology

Comments:
Comment 1
Comment 2

References:
Title: Addressing the Problems of Data-Centric Physiology-Affect Relations Modeling
Authors: Roberto Legaspi, Ken-ichi Fukui, Koichi Moriyama, Satoshi Kurihara, Masayuki Numao, and Merlin Suarez
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a new method of analyzing people's emotions, or affect. They describe some of the problems with current affect modeling approaches, such as how long analysis takes, and how they believe these can be improved by changing how the data is analyzed.

They argue that analyzing a user's physiological data continuously over its entire span will produce better results than the current method of analyzing emotions at discrete points. To demonstrate, they analyzed the emotion changes of two subjects using the sensors shown in the pictures above while playing music that affected the subjects emotionally.

Then, they describe in detail the algorithms behind their continuous analysis, and show that it is as fast as discrete analysis and should provide better results in certain situations.

Discussion:
To be perfectly honest, this paper was so difficult to read that I'm not exactly sure that I got the correct analysis out of it. It took me half of the paper to figure out what they were trying to do with the emotion readings, and I am still not exactly sure what the point was.

Additionally, I am curious what the benefits are behind sensing emotions of users, especially if it requires the elaborate equipment shown in the picture. I have seen some cool little games that used the user's emotions to modify the game, but no other real applications.

(Image courtesy of: this paper)

Sunday, April 3, 2011

Paper Reading #19 - Personalized News

Comments:
Comment 1
Comment 2

References:
Title: Personalized News Recommendation Based on Click Behavior
Authors: Jiahui Liu, Peter Dolan, Elin Ronby Pedersen
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a new method of recommending news articles to read on the Google News service. They do this by analyzing both the articles users choose to click on, as well as current news trends.

They begin with an in-depth analysis of what influences the news readers are interested in. They find that readers are influenced not only by personal tastes tied to age, gender, and occupation, but also by current news stories. Furthermore, the news stories people are interested in correlate strongly with location.

From this data, they design two probabilistic models that estimate a user's likelihood of clicking a particular link. One model is based on the user's own clicks, which should correspond to personal interests; the other is based on news stories clicked by nearby users, to capture breaking news in the area. They find that this new method caused a noticeable increase in clicks on Google News recommendations.
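
A toy blend of those two signals might look like the sketch below; the distributions and mixing weight are invented, and the paper's actual probabilistic formulation is more involved.

```python
# Long-term personal interests (from the user's clicks) and what nearby
# readers are clicking right now. All numbers are illustrative.
personal = {"tech": 0.6, "sports": 0.3, "politics": 0.1}
local_trend = {"tech": 0.2, "sports": 0.1, "politics": 0.7}

def click_likelihood(category, trend_weight=0.4):
    """Mix long-term personal interest with the current local trend."""
    return ((1 - trend_weight) * personal.get(category, 0.0)
            + trend_weight * local_trend.get(category, 0.0))

for cat in personal:
    print(cat, round(click_likelihood(cat), 3))
# A breaking local story (politics) scores far above pure personal taste.
```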

Discussion:
I thought this article was interesting mainly because it gave a sort of window into how they use user information at Google. I was impressed by how creative they are in using not only personal clicks but also those of the collective to get a larger picture of the data.

As far as the product goes, I think it's a positive move, since I definitely don't have a lot of time to spend searching through articles for what I'm looking for. I also wonder if they use a similar algorithm to this in Google Reader, since that is my main news hub.

(Image courtesy of: Google News)

Tuesday, March 29, 2011

Paper Reading #18 - News Browsing

Comments:
Comment 1
Comment 2

References:
Title: Aspect-level News Browsing: Understanding News Events from Multiple Viewpoints
Authors: Souneil Park, SangJeong Lee, and Junhwa Song
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a method of alleviating media bias in the news by providing varied versions of stories on the same subject. They call this method aspect-level news browsing.

Their system partitions articles about a given topic into different quadrants depending on the article's subject matter, analyzing articles in two ways. First, they dissect each article and examine its first paragraph, where journalists tend to cluster the main information. Second, they compare articles from near the beginning and end of an event, since journalists tend to report on more diverse parts of an issue as time passes. They then evaluate how good their system is by checking its results against other algorithms.

Discussion:
I think that this system is a great idea. I personally know family members who will not talk to each other because they are so politically polarized. Hopefully, with a system like this, they could learn to analyze the issues further.

In reality, though, I think a system like this won't help, because most people won't spend the time reading multiple articles. Most people will just read one article and move on, which defeats the point of the program.

(Image courtesy of: Frogtown blog)

Paper Reading #17 - Personalized Reading Support

Comments:
Comment 1
Comment 2

References:
Title: Personalized Reading Support for Second-Language Web Documents by Collective Intelligence
Authors: Yo Ehara, Nobuyuki Shimizu, Takashi Ninomiya, and Hiroshi Nakagawa
Venue: IUI 2010, Feb. 7-10 2010

Summary:
In this paper, the authors describe a new method of providing definitions for ESL readers through information gathering. Many people for whom English is a second language use programs called glossers when reading; these programs provide definitions for unfamiliar words. Most glossers automatically show definitions for some words, choosing the words that appear least frequently in the language.

Their program instead chooses words based on which words each individual user clicks on for definitions. They use this information to calculate a per-user difficulty index; that is, they estimate how difficult a word the user is likely to know and gloss only words above that difficulty. They discuss several candidate algorithms, show that only one was suitable because it is the only one that works online, and finally show that the online algorithm is just as efficient as its offline alternatives.
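
Here is a sketch of the difficulty-index idea using an item-response-style logistic model; the specific model, numbers, and update rule are my assumptions, not the authors' algorithm.

```python
import math

def p_knows(ability, difficulty):
    """Estimated chance the reader knows a word of a given difficulty."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def words_to_gloss(text, difficulty_of, ability, threshold=0.7):
    """Gloss any word the reader probably doesn't know well enough."""
    return [w for w in text.split()
            if p_knows(ability, difficulty_of.get(w, 0.0)) < threshold]

difficulty = {"ameliorate": 3.0, "cat": -2.0, "glosser": 2.0}  # invented
ability = 1.5                                                  # invented
print(words_to_gloss("the cat may ameliorate", difficulty, ability))

# When the user clicks an unglossed word anyway, nudge the estimate down;
# the paper's online learning update is more principled than this.
ability -= 0.3
```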

Discussion:
I thought this article was interesting because I had never heard of a glosser before, and now that I have I feel like I could use one sometimes, especially when reading papers like this. The paper was very technical, which I feel is a positive, but I had a significant amount of trouble following the algorithmic analysis during the latter part of the paper. Overall however, I feel like their method would work better than previous methods and that they should make a final product out of this.

(Image courtesy of: this paper)

Paper Reading #16 - UIMarks

Comments:
Comment 1
Comment 2

References:
Title: UIMarks: Quick Graphical Interaction with Specific Targets
Authors: Olivier Chapuis and Nicolas Roussel
Venue: UIST 2010, Oct 3-6, 2010

Summary:
In this paper, the authors describe a system called UIMarks for integrating target-aware pointing techniques with normal pointing. The system lets the user program hot spots like the one pictured at right, then activate them by switching into a special pointing mode and moving a bubble cursor toward them. The system then performs the action specified by the hot spot. Each hot spot carries a symbol indicating its action; in this case, the hot spot will single-click the icon and return the cursor to its previous position.

They then perform a study to determine the usability of the pointing system. They found that for most complex clicking tasks, UIMarks is faster than the traditional pointing method. However, if it is only used for mouse movement and not clicking, the system is slower than the traditional method. They then describe some future studies they would like to perform with the system.


Discussion:
I think that this is a reasonably good pointing system, but I'm not sure many people would use it. Having to program the marks makes the system a little too difficult and time-consuming for most. For power users, however, the system would be a boon: being able to quickly move between and click icons would be very useful, as the Photoshop example above shows.

(Image courtesy of: the UIMarks paper)

Tuesday, March 22, 2011

Paper Reading #15 - Jogging over a Distance

Comments:
Comment 1
Comment 2

References:
Title: Jogging over a Distance between Europe and Australia
Authors: Florian Mueller, Frank Vetere, Martin Gibbs, Darren Edge, Stefan Agamanolis, and Jennifer Sheridan.
Venue: UIST 2010, Oct 3-6, 2010

Summary:
In this paper, the authors describe a framework for distributed social exercising called Jogging over a Distance. The system allows two users to jog together regardless of their locations, hopefully helping them exercise better.

The system consists of a headset connected to a heart-rate monitor, a mobile phone, and a small computer, making it similar to the Nike+ system pictured at left. After setting a target heart rate, the user can converse with their workout partner over the course of the exercise session using the headset.

The social aspect comes in because, instead of displaying results after the workout, real-time heart-rate information is conveyed to each jogger through the apparent direction of the partner's voice during conversation. If the user is not working as hard as their partner, the partner sounds as if they are running ahead, and vice versa. This links the desire to talk with the desire to work out, strengthening the jogger's resolve.
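
A minimal sketch of that audio mapping, with made-up numbers: each runner's effort is their heart rate as a fraction of their own target, and the partner's voice is placed ahead or behind in proportion to the effort difference.

```python
def effort(heart_rate, target_rate):
    """Normalized effort, so partners with different targets compare fairly."""
    return heart_rate / target_rate

def partner_voice_offset(my_hr, my_target, partner_hr, partner_target,
                         gain=10.0):
    """Positive = partner sounds ahead of me, negative = behind.
    The gain converting effort difference to 'meters' is an assumption."""
    return gain * (effort(partner_hr, partner_target)
                   - effort(my_hr, my_target))

# I'm loafing (140 of a 160 target) while my partner hits their 150 target:
print(partner_voice_offset(140, 160, 150, 150))  # ~1.25 "meters" ahead
```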

The authors conducted a usage study on the framework, and found that most users liked the system, and thought it helped in their workouts. They found that they worked harder because they wanted to be able to hear the conversation better.

Discussion:
This paper is interesting because this is a product I could see myself using. In order to stick to a workout plan, I need to have another person working with me. A system like this could allow me to expand the pool out beyond the local area.

Additionally, I think the system has more applications than just exercise. A more refined version of this system could probably allow people to "race" each other on foot without being in the same location, using GPS or other methods. Additionally, a further social aspect could be added by allowing people to randomly connect to other joggers if they don't have someone available to partner with. My other concern with the current implementation is all of the equipment involved. They need to make a more compact prototype to determine how feasible this design would be as a real product.

(Image courtesy of: Apple Gazette).

Friday, March 4, 2011

Paper Reading #14 - Sensing Foot Gestures

Comments:
Comment 1
Comment 2

References:
Title: Sensing Foot Gestures from the Pocket
Authors: Jeremy Scott, David Dearman, Koji Yatani, and Khai N. Truong
Venue: UIST 2010, Oct 3-6, 2010

Summary:
In this paper, the authors describe an input method for mobile phones that uses the user's foot. To develop the concept, they performed a study with the apparatus shown at right to find out which gestures are most accurate and most comfortable for the user. They discovered that heel rotation, flexing at the toe, and double-tapping the foot were the most accurate and comfortable.

Then, they created an iPhone application to recognize these gestures and tested its accuracy with the phone at different locations on the body. They found that a holster on the side and a side pocket were the most accurate, after a short machine-learning period. They then describe the limitations of the current system, including differentiating running from the double-tap and keeping the gestures accurate when the phone moves around in the user's pocket.

Discussion:
This paper, while scientifically strong and built on an original concept, didn't interest me all that much. The uses for foot gestures don't seem readily apparent to me; additionally, I don't think it will be easy to distinguish inputs the user means to perform from innocuous activities like walking. I feel that the alternative input methods they list early in the paper, such as speech input or rear buttons, would probably work better than this one.

Wednesday, March 2, 2011

Paper Reading #13 - Multitoe

Comments:
Comment 1
Comment 2

References:
Title: Multitoe: High-Precision Interaction with Back-Projected Floors Based on High-Resolution Multi-Touch Input
Authors: Thomas Augsten, Konstantin Kaefer, Rene Meusel, Caroline Fetzer, Dorian Kanitz, Thomas Stoff, Torsten Becker, Christian Holz, and Patrick Baudisch
Venue: UIST 2010, Oct 3-6, 2010

Summary:
In this paper, the authors describe their method of making a multitouch floor, show some applications built for it, and briefly preview the next stage of the project. They begin by listing problems they encountered during the design process and the studies they performed to solve them; among these problems were control size, where to place the user's inputs, and which inputs users liked.

Then, in the second half, they detail the components that make up the floor and the applications developed for it. The floor consists of a projector, IR LEDs, and an IR camera beneath layers of glass, acrylic, and a projection screen. The floor uses diffuse illumination to capture the outline of a user's shoes (a) and uses the IR camera to measure the pressure exerted by the foot (c). By observing shoe outlines and pressure, the system can identify users by their shoe patterns and subdivide the foot into pressure zones that can be watched for input. They showed a fish tank game and a foot-controlled version of Unreal Tournament 2004 running on these processes.

Discussion:
I am thoroughly impressed by this paper. The idea they present is one I had never thought of before, but it is one I would like to play around with. I am also impressed with all of the thought they put into devising the concept: they gathered large amounts of user input and used it to augment their design in the way Norman's books describe, and they seem to have created a solid product. I can't wait to see what they do with the concept shown at the very end of the paper, although it seems to take up a significant amount of space.

Thursday, February 24, 2011

Paper Reading #12 - Cosaliency

Comments:
Comment 1
Comment 2

References:
Title: Cosaliency: Where People Look When Comparing Images
Authors: David E. Jacobs, Dan B. Goldman, and Eli Shechtman
Venue: UIST 2010, Oct 3-6, 2010

Summary:
In this paper, the authors discuss a computerized method of locating the changes between a pair of images. Such an algorithm would be useful in a situation they call "image triage," in which a photographer must decide which images to keep and which to delete on a camera, usually to free space for more photos. To guide the design, they introduce a concept called cosaliency: like image saliency, which seeks the important part of a single image, but defined across a pair of pictures.

They began by running a study on Amazon Mechanical Turk in which workers compared a series of image pairs and selected small crops they thought contained the biggest changes. From this data, they fit an equation to model the behavior and applied it in their cropping software. Finally, they generated images cropped by this algorithm and again used Mechanical Turk to gauge how much people liked them; in general, users preferred them to crops from traditional methods.
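
As a crude stand-in for their learned model, the sketch below scores candidate crops of an aligned image pair by raw pixel difference and returns the crop where the images change most; their actual cosaliency equation is fit to the Mechanical Turk data rather than assumed.

```python
import numpy as np

def best_crop(img_a, img_b, crop=4, stride=2):
    """Slide a crop-sized window over an aligned pair and return the
    top-left corner of the window with the largest squared difference."""
    assert img_a.shape == img_b.shape
    diff = (img_a.astype(float) - img_b.astype(float)) ** 2
    best, best_score = (0, 0), -1.0
    for y in range(0, diff.shape[0] - crop + 1, stride):
        for x in range(0, diff.shape[1] - crop + 1, stride):
            score = diff[y:y + crop, x:x + crop].sum()
            if score > best_score:
                best, best_score = (y, x), score
    return best

a = np.zeros((12, 12))
b = a.copy()
b[6:9, 6:9] = 1.0       # the only change between the two "photos"
print(best_crop(a, b))  # lands on the changed region: (6, 6)
```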

Discussion:
I liked this article because it used hard numerical data to back up its points. Even though it used a lot of jargon that I was unfamiliar with, the equations they showed were all solid and it provided plenty of images to demonstrate the process. For this reason, I am confident in the usefulness of this paper.

From a content perspective, I am not a huge photographer, but I can see how useful this would be to people who do like to take photos. This work, and the many others they cite while developing their new algorithm, will surely help make newer digital cameras even easier to use.

Wednesday, February 23, 2011

Paper Reading #11 - Web Automation

Comments:
Comment 1
Comment 2

References:
Title: A Conversational Interface to Web Automation
Authors: Tessa Lau, Julian Cerruti, Guillermo Manzato, Mateo Bengualid, Jeffrey P. Bigham, and Jeffrey Nichols
Venue: UIST 2010, Oct 3-6, 2010

Summary:
In this paper, the authors describe a server-based web script service driven by short written commands. The interface, called CoCo (short for CoScripter Concierge, not Conan O'Brien), can take short commands from Twitter, e-mail, or SMS and then find and run a web script to automate the requested task on its own.

It does this in three steps. First, it parses the plain language passed in for the parameters using a simple engine made by the authors. Then, if the command has been used before, CoCo uses the same script as before and runs the task with the parameters given. However, if CoCo has not run the task before, it tries two things to find a script. First, it mines the web history of the user to find a task and attempts to use those scripts to solve the problem. Failing that, it searches the CoScript database for a matching script and uses that instead. Finally, CoCo sends a response back to the user indicating the success or failure of the task.
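
A hypothetical skeleton of that dispatch order (parameter parsing is omitted, and the stubs below are placeholders of my own, not CoCo's real APIs):

```python
known_scripts = {}  # command -> script that worked for it before

def find_in_history(command):
    """Stub: mine the user's web history for a likely script."""
    return None

def search_repository(command):
    """Stub: query the shared script database."""
    return f"repository script for: {command}"

def handle(command):
    script = known_scripts.get(command)        # 1. reuse a known script
    if script is None:
        script = find_in_history(command)      # 2. try the user's history
    if script is None:
        script = search_repository(command)    # 3. fall back to the repo
    if script is None:
        return "Sorry, I couldn't find a way to do that."
    known_scripts[command] = script            # remember it for next time
    return f"Done: ran '{script}'"             # 4. report back to the user

print(handle("check my flight status"))
print(handle("check my flight status"))  # second request reuses the script
```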

After walking through the run process and giving some usage examples, the authors present statistics showing that the success-or-failure text CoCo returns is useful to the user: it worked as well as pictures for indicating that a job completed correctly.

Discussion:
This paper was interesting for reasons similar to Watson on Jeopardy: strong recognition of natural-language commands is important for making computers easier for people who aren't very computer literate. Interfaces like this could be quite useful for other programs as well, for example the playlist maker from a few papers ago.

However, one thing I definitely find unnerving is the data mining segment of this application. I can see why it would be useful, but data mining of any sort is a risky proposition, especially when the data is sent to an external server. A company selling this software commercially could use that data for unsavory purposes.

(Picture courtesy of: popwatch.ew.com)

Sunday, February 20, 2011

Paper Reading #10 - Designing Adaptive Feedback

Comments:
Comment 1
Comment 2

References:
Title: Designing Adaptive Feedback for Improving Data Entry Accuracy
Authors: Kuang Chen, Joseph M. Hellerstein, and Tapan S. Parikh
Venue: UIST 2010, Oct 3-6, 2010

Summary:
In this paper, the authors describe a new method for data entry that uses prediction to minimize error rates. The method, called USHER, works by building a probabilistic model of the likelihood of potential entries. It then offers defaults, highlights likely options, and gives warnings. With this system, the authors hope to decrease entry error rates while avoiding the expense of double entry.
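
A toy version of those probability-driven aids, with invented past records standing in for USHER's learned model:

```python
from collections import Counter

# Invented historical form records.
records = [{"region": "north", "clinic": "A"},
           {"region": "north", "clinic": "A"},
           {"region": "north", "clinic": "B"},
           {"region": "south", "clinic": "C"}]

def field_distribution(field, given):
    """Estimate P(field | already-entered values) from past records."""
    matches = [r[field] for r in records
               if all(r.get(k) == v for k, v in given.items())]
    total = len(matches)
    return {v: c / total for v, c in Counter(matches).items()} if total else {}

dist = field_distribution("clinic", {"region": "north"})
print("default:", max(dist, key=dist.get), dist)  # offer the most likely value
if dist.get("C", 0.0) < 0.05:  # assumed threshold for an improbability warning
    print("warning: clinic 'C' is unusual for region=north")
```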

They then test the effectiveness of the system by building a Java front end for USHER and having data entry clerks in Africa enter a series of medical records into a database. These records were premade for the experiment and checked for validity afterwards. They discovered that while defaults made the process quicker, they did not decrease error rates; the highlighting system, however, made a significant difference.

Discussion:
There are a lot of reasons why I like this paper. First off, they did a good job of using hard scientific data to back up their claims. Many of the papers so far did not do this. Second, they used some concepts from cognitive psychology that are similar to the ones we learned from The Design of Everyday Things. Finally, they showed practical applications for the technology that could be used later.

As far as data entry is concerned, I definitely support this method. I have done data entry in the past and it's very boring; I often let my mind wander while doing the entries. In the medical world, where data entry is common, this is a very bad thing. Hopefully, with aids like these in place, fewer errors will be made.

(Image courtesy of: anecdote.com)

Tuesday, February 15, 2011

Paper Reading #9 - Creating Collections

Comments:
Comment 1
Comment 2

References:
Title: Creating Collections with Automatic Suggestions and Example-Based Refinement
Authors: Adrian Secord, Holger Winnemoeller, Wilmot Li, Mira Dontcheva
Venue: UIST 2010, Oct 3-6, 2010

Summary:
In this paper, the authors describe a new hybrid method of creating playlists that merges automatic and manual creation; they then present two programs built on this method for managing media, SongSelect and PhotoSelect. They begin by discussing automatic playlist tools such as iTunes' Genius and noting that many users feel such tools don't give them what they want. They also observe that many users select music that merely suffices rather than hunting for the best candidate, even when they have concrete goals, a behavior called satisficing.

Their two applications therefore use a two-step process: the user enters keywords, which automatically generate a set of songs or pictures meeting the criteria, and then selects the best media from that set for the list. The user can also request further suggestions based on their selections to customize the collection further.
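
A toy version of that two-step flow, with invented song data and an invented tag-overlap similarity rule in place of the paper's real suggestion engine:

```python
songs = [{"title": "Sunrise", "tags": {"calm", "acoustic"}},
         {"title": "Voltage", "tags": {"energetic", "electronic"}},
         {"title": "Driftwood", "tags": {"calm", "electronic"}},
         {"title": "Campfire", "tags": {"calm", "acoustic", "folk"}}]

def initial_candidates(keywords):
    """Step 1: keywords automatically generate a candidate set."""
    return [s for s in songs if keywords & s["tags"]]

def more_like(selected, pool):
    """Step 2: rank the pool by tag overlap with the songs the user kept."""
    liked = set().union(*(s["tags"] for s in selected))
    rest = [s for s in pool if s not in selected]
    return sorted(rest, key=lambda s: -len(s["tags"] & liked))

candidates = initial_candidates({"calm"})
kept = [candidates[0]]  # the user keeps "Sunrise" from the candidates
print([s["title"] for s in more_like(kept, songs)])  # Campfire ranks first
```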

Discussion:
I thought that this paper was interesting because it's trying to solve an issue that I run into often. As they mention in the article, automatic playlist makers often give unsatisfying results, and doing it manually is a giant chore. I actually hope that this idea takes off.

However, and this is really picky of me to mention, I didn't like the use of a portmanteau inside of a professional paper. I feel like it looks unprofessional, and they could have still gotten their point across without it.

(Image courtesy of: megaleecher.net)

Friday, February 11, 2011

Paper Reading #8 - Exploring Mobile Technologies

Comments:
Comment 1
Comment 2

References:
Title: Exploring Mobile Technologies for the Urban Homeless
Author: Christopher A. Le Dantec
Venue: CHI EA 2010, April 10-15, 2010

Summary:
In this paper, Le Dantec describes a study he performed to examine the effects of technology on the homeless. He executed two studies: one on the homeless directly, and another on the nonprofit agencies that assist them. From the first study, he determined that technology for the homeless doesn't just need to be cheaper; it needs to be designed specifically for them. From the second, he discovered that nonprofit organizations have trouble keeping stable workforces because of their volunteer nature, and that solutions must be tailored to each organization's environment. Finally, he describes a smartphone-based system he is building, called the Community Resource Map, that should help the homeless find shelters, soup kitchens, and other services that can help them in their daily lives.

Discussion:
I'm not really sure how to feel about this paper. On one hand, it is important to bring technology to all people, regardless of status; on the other, his smartphone-based method currently seems impractical, given that most homeless people cannot afford such devices. This isn't to say the design won't be feasible in another ten years, but I feel there is probably a better answer that could be implemented sooner.

Additionally, I am curious about the methodology of Le Dantec's studies, as they are never really mentioned at all outside of their results. I would assume this is because of limited space, but it would have lent credibility to the paper if we could have seen how these conclusions were found.

(Image courtesy of The New Digital Divide)