Advisor: Profs. Aaron Cass (Computer Science) and Younghwan Song (Economics)
Examining the Effect of Technology on the Earning Power of Disabled People
This article presents the first study of the effect of computer use on the earning power of disabled people. Disabled people face struggles and discrimination in the workplace, which produces a wage gap between them and their non-disabled counterparts. To examine this wage gap, I exploit the Bureau of Labor Statistics' Current Population Survey and its special supplement on Internet and computer use, administered every two years from 2011 to 2015. This paper focuses on the wage gap produced by computer use and disability in the workplace, as well as on the effect computer use has on the income, or earning power, of disabled workers. The nature of the data in this study allows for a broader perspective on this subject than has been taken in previous research. Earnings regressions were used to estimate the effect of computer use on disabled workers' wages. The significant diversity across America and its disabled population is captured and incorporated by a number of different variables in the model. The findings in this paper show that some disabled workers who use computers earn more than disabled workers who do not. However, the lack of robust findings supports the belief that, although the results were statistically significant, there are no strong economic or rational arguments to explain this correlation.
Advisor: Prof. Matt Anderson
Program Satisfaction Based on the Perception of Bugs as Features
Bugs are an ever-present problem facing software developers. What if you could mitigate some of the user dissatisfaction associated with a bug by telling your users that the bug is actually a feature of the program? In this paper, I describe an experiment in which I tested whether telling a user that a bug is a feature affects their satisfaction with the program. I had participants draw three UML diagrams using a modified version of ArgoUML and then report their satisfaction with using the program. The results of this experiment suggest that you should not tell users about the bugs in your program.
Advisor: Prof. Nick Webb
Data Mining & Machine Learning Approach to Predicting the March Madness Tournament
This project seeks a statistics-based model for predicting games in the NCAA Basketball Tournament. The hypothesis is that a hybrid model will provide higher classification accuracy than simply picking the higher seed.
Kwuan H. Chee
Advisor: Prof. Aaron Cass
Twitter Cashtags and Sentiment Analysis in Predicting Stock Price Movements
Despite the Efficient Market Hypothesis, stock market prediction is an area of interest for many researchers and investors. Many observed variables impact the movement of stock prices, such as financial reports and economic policy. However, the expected behavior of the market may not always match the actual outcome, so some variables initially go unaccounted for. We believe one of these variables is sentiment. More specifically, investor sentiment toward a stock or a company will affect their trading decisions, thus affecting the stock price. By gathering large amounts of data on individuals' moods, we can aggregate this data and interpret it as public sentiment. There are several possible sources of public sentiment information; however, given the expressive nature of social media participants and the growing amounts of data available via social media, we can use it as a source from which to extract public sentiment. One social medium on which users are especially expressive is Twitter. Although much work has been done on predicting stock price movements from general public sentiment data on Twitter, we would like to investigate this topic further. We will investigate whether including a cashtag attribute, which indicates whether a Tweet contains a cashtag, in our model leads to better prediction of the daily DJIA change. A cashtag is essentially a stock ticker symbol and is used in a very similar way to a hashtag. We will build two models, one with the cashtag attribute and one without, and compare them. This will indicate whether Tweets with cashtags are beneficial to the overall performance of stock price movement prediction.
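As a minimal illustration of how the binary cashtag attribute could be derived from a Tweet's text (the regular expression and function name here are our own sketch, not part of the study):

```python
import re

# A cashtag is a '$' followed by a ticker symbol, e.g. $AAPL or $BRK.A.
# This pattern is an illustrative approximation of Twitter's convention.
CASHTAG = re.compile(r"\$[A-Za-z]{1,6}(?:\.[A-Za-z]{1,2})?\b")

def has_cashtag(tweet):
    """Return True if the tweet text contains at least one cashtag."""
    return bool(CASHTAG.search(tweet))
```

Requiring a letter after the `$` keeps dollar amounts such as "$20" from being counted as cashtags.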
Frank Chiarulli Jr.
Advisor: Prof. John Rieffel
Controlling 3D Printers with Artificial Neural Networks
This project aims to determine whether artificial neural networks (ANNs) can perform well enough to supplant the need for linear instructions on a 3D printer.
Advisor: Prof. Valerie Barr
The Effects of Multisensory Notifications on User Reactivity
In the past decade, notifications have become an integral aspect of mobile applications. Mobile notifications present users with a variety of information, such as current events, a new message from a friend, a social media comment, an upcoming scheduled event or reminder, and much more. Depending on the notification's importance, the mobile user's current context, or the fashion in which the notification is displayed, users can react immediately, slowly, or not at all. Typically, mobile application developers are limited by the capabilities of the mobile device when notifying their users (the device's sound output, text banners, etc.). Developers of traditional alarm clock applications are even more limited, as they rely strictly on the mobile device's sound output to wake sleeping users with their notifications.
Advisor: Prof. Valerie Barr
Contextual Captions: Improved Captioning for the Hearing Impaired
This paper outlines a brief history of closed captioning and addresses problems with the current system. It proposes a solution to these raised problems, especially concerning hearing impaired audiences. These solutions were then tested in a user study to determine their significance.
John L. Enquist
Advisor: Prof. Matthew Anderson
The Effects of Strategy-Based Video Games on Word Memorization
In a society that has become increasingly geared toward playing video games as a pastime, it is important that we fully explore the effects that playing these games can have on our bodies and minds. In my experiment I divided a group of 40 subjects equally into an experimental and a control group. Each group was tested on its ability to memorize a list of terms over several trials through a memory game called Word Order, taking a break between rounds to simulate a study break. The experimental group's study break was to play a video game, while the control group simply walked around and stretched. Through this experiment I was able to track the effects of each study break on the groups' success with Word Order.
Advisor: Prof. John Rieffel
Evolving “Skate Guts”
Biological inspiration for the project: Prof. Theodosiou in the Biology department is doing research on skates and their guts, which have an unusual structure. Biologists have inferences as to why skates developed this way, based on anatomical restrictions and dietary needs, but through traditional methods there is no way to know the actual parameters evolution is "optimizing" for.
Advisor: Prof. Nick Webb
Mimicking Human Gestures During Conversation
Social robots are designed to interact with people. Due to physical limitations, they sometimes require human assistance to meet their goals. In this experiment SARAH, a social robot, invites people in her environment to ask her a set of questions. For SARAH to succeed she must be able to engage someone in a short conversation and have them assist with a task. A dynamic social robot that takes advantage of nonverbal cues such as gestures may be more successful in interacting with a person than a static robot that does not. SARAH has been programmed with a set of gestures made to mimic common gestures people use in conversation. This experiment investigates whether the use of these gestures influences people's perception of SARAH's intelligence and naturalness. Participants are also invited to describe SARAH in their own words.
Advisor: Profs. Kristina Striegnitz and Nick Webb
Does Robot Eye-gaze Help Humans Identify Objects?
The success of a social robot relies on its interactions with humans. To enhance human-robot interaction, I looked to useful forms of communication in human-human interaction. Humans communicate through verbal and non-verbal cues; in particular, my study focuses on eye gaze and the effects of its application on human-robot interaction. Specifically, I wanted to know whether having a robot exhibit eye gaze in a human-robot interaction would lead to humans feeling more comfortable and engaged, and whether this eye gaze would be a useful aid in distinguishing between similar objects. My experiment involved a robot instructing a human to identify a target object. In the experimental conditions the robot would move its eyes toward the target object while describing the target through speech. Participants did not identify objects faster when the robot moved its eyes toward the target, nor did they report feeling more engaged. However, participants did report higher levels of comfort when the robot exhibited eye gaze, and they also felt the robot was more natural. Thus, in the hope of creating successful social robots, it is important to implement eye gaze cues so that humans feel more comfortable during the human-robot interaction. Having humans feel more comfortable might then encourage them to interact with the robot longer, and to interact with the robot, or any other social robot, more often.
Advisor: Prof. Matthew Anderson
Detecting Confusion Using Eye-Tracking and Machine Learning Techniques
Eye tracking is widely applied in studying how people interact with software interfaces. Researchers use eye trackers to find confusion during users' interactions with an interface, searching for flaws in the user interface design so that users are less confused by the time development ends. This project explores the possibility of using machine learning methods to find patterns of confusion in eye movements and mouse activity using real-time interaction data. An experiment was designed to collect eye tracking data from participants. The positions of gaze, fixation, and cursor are used to generate feature data. Two versions of feature data are generated: the Euclidean distances of gaze, fixation, and cursor position, and the standard deviations of gaze, fixation, and cursor position in five-second windows. The models built from the two feature sets are then compared. Each feature set was split into 60% training data and 40% validation data. The models produce insignificant results on both test sets. A K-Nearest Neighbor model classifies the first feature set with the highest classification rate of 60% on both class instances, with a kappa statistic of 0.14. The KStar model best classifies the second feature set with 53.5117% classification accuracy on both class instances, with a kappa statistic of 0.09. Individual categories of feature data were evaluated with logistic regression to find correlations with confusion. The cursor feature data in both feature sets are found to be strongly correlated with confusion.
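The second feature variant, per-window standard deviations, can be sketched as follows; the sample layout (time, gaze x, gaze y, cursor x, cursor y) and the function name are illustrative assumptions, not the project's actual pipeline:

```python
from statistics import pstdev

def window_features(samples, window=5.0):
    """Group (t, gaze_x, gaze_y, cursor_x, cursor_y) samples into
    fixed-width time windows and emit the standard deviation of each
    channel per window, yielding one feature vector per window."""
    feats, bucket, t0 = [], [], None
    for s in samples:
        if t0 is None:
            t0 = s[0]
        if s[0] - t0 >= window:          # window full: emit its stds
            feats.append(tuple(pstdev(col) for col in zip(*bucket)))
            bucket, t0 = [], s[0]
        bucket.append(s[1:])             # keep only the position channels
    if bucket:                           # flush the final partial window
        feats.append(tuple(pstdev(col) for col in zip(*bucket)))
    return feats
```

A window where the cursor barely moves produces near-zero cursor-channel values, which is the kind of signal the classifiers above consume.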
Advisor: Prof. Aaron G. Cass
Navigation in a Dynamic Environment
The A* algorithm has been proven to provide optimal solutions in a static environment. We want to introduce the A* algorithm to a dynamic environment containing moving objects that are unknown to the path planner, the agent. Since the A* algorithm requires perfect knowledge of the environment, we cannot directly apply A* in this situation. Instead, we enlarge the agent's knowledge of the map while it moves toward the goal state by detecting the moving objects, so that we can use A* to plan a new optimal path from the current state to the goal state when necessary. Here "optimal" means the path is the shortest that does not collide with any object. Our algorithm accomplishes two subgoals: collision prediction and path re-planning. Collision prediction determines whether re-planning is necessary.
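The detect-and-re-plan loop described above can be sketched in a grid world; this is a minimal illustration with static (rather than moving) hidden obstacles and assumed names, not the project's actual implementation:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 means blocked."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan distance: an admissible heuristic on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

def navigate(true_grid, start, goal):
    """Follow an A* path over the agent's known map; when the next step
    is blocked in the true map (collision predicted), record the obstacle
    in the known map and re-plan from the current state."""
    rows, cols = len(true_grid), len(true_grid[0])
    known = [[0] * cols for _ in range(rows)]  # agent starts knowing no obstacles
    pos, route = start, [start]
    path = astar(known, start, goal)
    while pos != goal and path:
        nxt = path[path.index(pos) + 1]
        if true_grid[nxt[0]][nxt[1]] == 1:     # sensor detects an obstacle ahead
            known[nxt[0]][nxt[1]] = 1
            path = astar(known, pos, goal)     # re-plan only when necessary
            continue
        pos = nxt
        route.append(pos)
    return route if pos == goal else None
```

Re-planning is triggered only when the predicted next step would collide, matching the two subgoals of collision prediction and path re-planning.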
John R. Peterson
Advisor: Prof. John Rieffel
Effective ANN Topologies for use as Genotypes for Evolutionary Design and Invention
There is promise in the field of Evolutionary Design for systems that evolve not only what to manufacture but also how to manufacture it. EvoFab is a system that uses Genetic Algorithms to evolve Artificial Neural Networks (ANNs) which control a modified 3D printer, with the goal of automating some level of invention. ANNs are an obvious choice for such a system, as they are canonically evolvable encodings and have been successfully used as evolved control systems in Evolutionary Robotics. However, little is known about how the structural characteristics of an ANN affect the shapes that can be produced when that ANN controls a system like a 3D printer. We consider the relationship between certain structural characteristics of an ANN and the ability of that ANN to produce complex geometric shapes by controlling a 3D printer. We develop an understanding of shape complexity for 2D shapes in a simulated 3D printer, use Genetic Algorithms to optimize ANNs with fixed structures to produce complex outputs, and assess the relationship between ANN topologies and the system's success in producing complex outputs under evolutionary optimization.
Advisor: Profs. Kristina Striegnitz (Computer Science) and Fernando Orellana (Visual Arts)
Flip Animated Road Warning Sign
Studies of the visual effects of road signs have shown that dynamic imagery, and even more so animation, improves road signs by increasing a viewer's attentiveness. However, currently existing animated signs, usually digital billboards, have drawbacks, including the distraction caused by bright LED lights, large energy consumption, and high cost. To overcome these problems, this project proposes the Flip Animated Road Sign, a new road sign without a digital display that uses an automatically running mechanical flip book to produce animated imagery. The project proposes that animated visuals produced through flip animation can lead to perceived movement and prepare the observer for action. The research is operationalized within the context of warning sign icons and shows how animated iconography can affect human behavioral response. Experiments measuring attention, quickness of response, and perceived movement and risk of animated warning sign icons, using driving simulations and surveys, showed no statistical difference between the data for flip animated warning signs and the data for regular warning signs. However, neither the flip animation nor the designed images made the meaning of the sign too confusing to understand.
Cameron J. Smith
Advisor: Prof. Kristina Striegnitz
A Visual Interface for Product Exploration on Online Shopping Websites
Most online shopping websites provide a customer interface that simply lists all products available to the user. The user can then scroll up and down to explore different options. Everyone who has shopped online is familiar with this interface type. However, list interfaces have drawbacks. There is no obvious relationship between any two products displayed next to each other, so during exploration users cannot use the position of a product in the list to draw information about that product. Is there a different type of user interface that is more helpful for shopping? This paper describes the implementation and evaluation of a novel two-dimensional visual shopping interface that arranges products on the display by similarity. The interface consists of an undirected graph in which each node represents a different product. Products placed closer to one another can be assumed to be more similar, and those farther apart less similar. We propose that this new interface will be better for product exploration because the user can intuit information about a product from where it is located on the screen and which products are next to it. We performed a user study to evaluate the claim that our interface is better. The results of this user study were inconclusive; however, we received valuable feedback on the usability problems of our interface.
Nicolas Suarez-Canton Trueba
Advisor: Prof. Chris Fernandes
Apple’s 3D Touch Technology and its Impact on User Experience
Approximately two years ago, Apple Inc. introduced the first of its lineup of devices to include 3D Touch, the iPhone 6S. This touchscreen technology responds to the amount of pressure users apply to the screen, so developers can use it to add contextual functions to the software currently on screen. This study presents three different experiments that test 3D Touch as: (a) an input device for accurately setting a parameter, (b) a less disruptive way to interact with notifications, and (c) an alternative error recovery mechanism. Although only experiment (a) was fully implemented and tested, there does not seem to be an improvement in user experience. This experiment was a task in which users were asked to input a rating using 3D Touch or a traditional slider. Users performed better in terms of number of attempts and time taken on the slider interface; however, they appeared substantially more engaged when interacting with the 3D Touch interface. Experiments (b) and (c) remain future work, and the engagement results suggest that 3D Touch could be better suited to implementation within a video game.
Advisor: Profs. Chris Fernandes (Computer Science) and Steven Sargent (History)
Multi-Agent Simulation of The Battle of Ankara 1402
In 1402, just north of the city of Ankara, Turkey, a battle between the Ottoman Empire and Tamerlane's empire decided the fate of Europe and Asia. Although historians largely agree on the general course of the battle, the details are still open to dispute. Several factors may have contributed to the Ottoman defeat, such as the overwhelming size of Tamerlane's army, poisoned water, the tactical formations of the military units, and betrayal by the Tartar cavalry in the Ottoman left wing. The approach is divided into two stages: the simulation stage, which provides data for analyzing the complex interactions of autonomous agents, and the analysis stage, which uses data mining to examine the battle outcomes. The simulation is built on a finite state machine that evaluates each agent's current situation and then chooses the most appropriate action. To achieve historical accuracy, the simulation takes into account the topography of the battlefield, line-of-sight issues, period-specific combat tactics, and the armor and weapons used by the various military units at that time. The analysis stage uses WEKA's AttributeSelection classifier to evaluate the strength of association between the battle outcome and the various factors that historians consider crucial to the outcome.
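The finite-state-machine core of such an agent can be sketched as follows; the states, thresholds, and class names here are illustrative assumptions, not the simulation's actual design:

```python
from enum import Enum, auto

class State(Enum):
    ADVANCE = auto()
    ATTACK = auto()
    RETREAT = auto()

class Soldier:
    """One autonomous agent: each tick it evaluates its situation
    (morale, distance to the nearest enemy) and transitions state."""
    def __init__(self, morale=1.0, attack_range=2.0):
        self.state = State.ADVANCE
        self.morale = morale
        self.attack_range = attack_range

    def update(self, dist_to_enemy):
        # Transition rules; the threshold values are made up for illustration.
        if self.morale < 0.2:                    # broken units flee
            self.state = State.RETREAT
        elif dist_to_enemy <= self.attack_range:  # enemy in reach: engage
            self.state = State.ATTACK
        else:                                     # otherwise close the distance
            self.state = State.ADVANCE
        return self.state
```

Running thousands of such agents per tick, with terrain and line-of-sight feeding into each `update`, produces the battle-outcome data that the analysis stage mines.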
Advisor: Profs. Valerie Barr (Computer Science) and Hans-Friedrich Mueller (Classics)
Accurately Simulating the Battle of Thermopylae to Analyze “What-If” Scenarios
The Battle of Thermopylae (480 BCE) was a last-ditch effort to stall the Persian army as it marched south toward Athens. Led by Leonidas and his personal guard of 300 Spartans, a citizen army of Greeks was able to delay a Persian army of over 100,000 soldiers at the town of Thermopylae for several days. Although the Greeks were ultimately defeated at Thermopylae, the battle bought enough time for the Greek states to regroup and plan a counterattack, eventually defeating the invading Persians. This battle was crucial not only for the preservation of Greek independence but also for the preservation of the first democratic society. But how, against all odds, did the small Greek force withstand the Persians for as long as they did? What might have happened had key factors and choices been slightly different? By modeling the battle with historical accuracy, we can simulate various counterfactual scenarios to determine which factors were crucial to the Greeks' incredible achievement.
Advisor: Prof. Nicholas Webb
Attracting Human Attention Using Robotic Facial Expressions and Gestures
Robots will soon interact with humans in settings outside the lab. Since their bodies will likely not be as developed as their programming, they will not have the complex limbs needed to perform simple tasks. Thus they will need to seek human assistance by asking for help appropriately. But how will these robots know how to act? This research focuses on the specific nonverbal behaviors a robot could use to attract someone's attention and convince them to interact with it. In particular, the robot will need the correct facial expressions and gestures to convince people to help it.