2018 Computer Science Senior Design Projects



Some links below may require MS PowerPoint 2007 or Adobe Reader.

Hein H. Aung

Advisor: Prof. Matthew Anderson

Specializing Generative Adversarial Networks to Render Terrain Textures

I studied neural networks for texture generation. In particular, I researched and created Generative Adversarial Networks (GANs) for rendering new images of different types of ground terrain textures (such as soil, grass plains, drought plains, etc.). A GAN consists of two neural networks: a generator and a discriminator. The generator creates a batch of images starting from random noise, while the discriminator tries to distinguish real images from fake ones, taking as input a set of real images and the set of fake images produced by the generator. There has been ample research on GANs that produce synthetic 2D images or 3D graphical models. The GAN I created, given a set of input images, dynamically renders a set of variations of the original terrain textures. For example, if the training input is a set of green grass images, the output will include withered grass plains, fresh grass plains, and so on. I sent out a survey to the campus to evaluate these images, and my final results indicate that people can identify the type of the generated textures.
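For illustration, here is a minimal sketch of the generator/discriminator training loop described above, written in PyTorch (the report does not name a framework, so the library and all sizes here are assumptions):

    import torch
    import torch.nn as nn

    # Hypothetical sizes: 64-dim noise vectors, 32x32 grayscale texture patches.
    NOISE_DIM, IMG_PIXELS = 64, 32 * 32

    generator = nn.Sequential(
        nn.Linear(NOISE_DIM, 256), nn.ReLU(),
        nn.Linear(256, IMG_PIXELS), nn.Tanh(),       # outputs a fake patch
    )
    discriminator = nn.Sequential(
        nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),             # P(input is a real image)
    )

    bce = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def train_step(real_batch):
        n = real_batch.size(0)
        ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

        # Discriminator: push real images toward 1, generated images toward 0.
        fake = generator(torch.randn(n, NOISE_DIM)).detach()
        d_loss = bce(discriminator(real_batch), ones) + bce(discriminator(fake), zeros)
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # Generator: try to make the discriminator call its fakes real.
        fake = generator(torch.randn(n, NOISE_DIM))
        g_loss = bce(discriminator(fake), ones)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()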

Poster Link    Report Link    Presentation Link   


Rory Bennett

Advisor: Prof. Kristina Striegnitz

Different Modes of Semantic Representation in Image Retrieval

Image retrieval systems are supposed to retrieve only images that are relevant to a given query. Therefore, they need methods for representing the meaning of both the query word and the image, so that these meanings can be compared. Distributional semantic models typically use semantic vectors to represent words' meanings, based on the extent to which words appear near other words in text. By comparing these semantic vectors, we can compare words' meanings and thus find words that are similar or relevant to each other. In this study, I extend this idea to implement an improved image retrieval system: I build semantic vectors for both words in text and captioned images, and compare these vectors to find, for each query term, the image whose meaning is most relevant to the query's meaning. I consider taking information from captioned images and inserting it into text to build "multi-modal" semantic vectors that combine information across modes of meaning, from both textual and visual data; I also consider filtering which images are candidates for vector comparison, based on whether their captions contain words similar to the query term. Results show that, overall, inserting perceptual information into the text actually causes the image retrieval system to retrieve less relevant images.
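As a sketch of the vector comparison at the core of this approach (the vectors and captions below are toy values invented for illustration; real vectors come from a distributional model trained on a large corpus):

    import numpy as np

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # Toy word vectors; real ones come from co-occurrence counts in text.
    word_vecs = {
        "dog":   np.array([0.9, 0.1, 0.3]),
        "puppy": np.array([0.8, 0.2, 0.35]),
        "car":   np.array([0.1, 0.9, 0.2]),
    }

    def image_vector(caption):
        """Represent a captioned image by the mean of its caption's word vectors."""
        vecs = [word_vecs[w] for w in caption if w in word_vecs]
        return np.mean(vecs, axis=0)

    def retrieve(query, captioned_images):
        """Return the image whose vector is most similar to the query word's."""
        q = word_vecs[query]
        return max(captioned_images, key=lambda cap: cosine(q, image_vector(cap)))

    print(retrieve("dog", [["puppy"], ["car"]]))   # -> ['puppy']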

Poster Link    Report Link    Presentation Link   


James M. Boggs

Advisor: Prof. John Rieffel

Exploiting the Dynamics of Tensegrities Via Morphological Computation

Tensegrity robots are a class of soft robots composed of rigid struts connected by springs under tension in such a way as to maintain a flexible resting form. These robots have dynamic and complex physical properties which make it impossible to predict how an actuation will affect the system. We are attempting to utilize, rather than mitigate, this physical dynamism through morphological computation, which treats the physical system as implicitly performing computations. Specifically, we will implement a simple spiking neuron on each strut and treat the system as a whole as a spiking neural network, with communication between neurons carried implicitly by the vibrational activity of the struts, looking for emergent behavior. Ultimately, we hope that the physical system can be used to produce a robust central pattern generator that generates an effective forward gait for the robot.
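A minimal sketch of the kind of spiking neuron that could run on each strut, with vibration amplitude as its input (all parameters and names here are illustrative, not taken from the project):

    # A leaky integrate-and-fire neuron: potential decays over time,
    # accumulates input, and emits a spike when it crosses a threshold.
    class SpikingNeuron:
        def __init__(self, threshold=1.0, decay=0.9):
            self.potential = 0.0
            self.threshold = threshold
            self.decay = decay

        def step(self, vibration_input):
            """Accumulate (vibration) input; fire and reset when over threshold."""
            self.potential = self.potential * self.decay + vibration_input
            if self.potential >= self.threshold:
                self.potential = 0.0
                return 1   # spike -> actuate the strut
            return 0

    neuron = SpikingNeuron()
    spikes = [neuron.step(x) for x in [0.4, 0.5, 0.3, 0.2, 0.9]]
    print(spikes)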

Poster Link    Report Link    Presentation Link   


John E. Driscoll

Advisor: Prof. Nicholas Webb

Localizing Webpages for Francophone Audiences with Machine Translation

Localization is the process of translating the text and adapting the visual content of a software product for a specific target region. Given the global nature of the economy, organizations can appeal to non-English-speaking consumers by offering localized versions of their software or web applications. Currently, the localization process calls for time-consuming and costly translations of text strings by language service providers (LSPs), with each additional locale incurring a new cost for manual translation. The high cost in both time and money makes localizing software intimidating for organizations. This project explores automating the translation aspect of the localization process using machine translation (MT) services. A Python script was developed to scrape HTML data and translate strings to French using Google Translate. The automatically generated translations were scored by human evaluators and by the automated metric BLEU. Participants were surprised at the quality of the translations, which preserved meaning and were understandable despite some grammatical errors. This research suggests that the translation aspect of localizing software can be at least partially automated and then edited for quality and fluency.
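A sketch of the scraping and evaluation steps (the MT call itself is omitted; BeautifulSoup and NLTK's BLEU implementation are assumptions, since the report only specifies Python and Google Translate):

    from bs4 import BeautifulSoup
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    # Pull the translatable strings out of a page (toy HTML for illustration).
    html = "<html><body><p>Welcome to our store</p><p>Contact us</p></body></html>"
    strings = [p.get_text() for p in BeautifulSoup(html, "html.parser").find_all("p")]

    # The MT call (the project used Google Translate) is omitted here. Assume
    # `candidate` is the machine output and `reference` a human translation.
    reference = ["bienvenue dans notre magasin".split()]
    candidate = "bienvenue à notre magasin".split()

    score = sentence_bleu(reference, candidate,
                          smoothing_function=SmoothingFunction().method1)
    print(f"BLEU: {score:.3f}")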

Poster Link    Report Link    Presentation Link   


Yuan Gao

Advisor: Profs. Christopher Fernandes (CS) and Steve Rice (BIO)

Classifying Physiological States of Polytrichum Moss Based on Digital Images Using Machine Learning

Mosses are widespread ground-layer vegetation in boreal forests. They play important roles in productivity, soil hydroclimate regulation, and nutrient cycling in the ecosystem. Polytrichum mosses are desiccation-tolerant and have two physiological states: a hydrated state and a desiccated state. The physiological features and growth rates of mosses differ between states. Monitoring the physiological states of Polytrichum moss using near-surface remote sensing will be helpful in predicting the growth of mosses and assessing vegetation condition in boreal forests. The goal of this project is to classify the physiological states of mosses based on digital images of moss canopies. In this project, we took images of moss canopies in the field and used the OpenCV library to extract attributes from the images that quantify the color and structure of the mosses. We then compiled a dataset of the extracted attributes and used the Weka machine learning library to find suitable machine learning algorithms for classification. The results showed that the kNN classification algorithm had the best performance among the tested algorithms. The trained kNN model was used to predict images in the mixed state at multiple scales. The predictions were mapped back onto the original images for comparison with manual classifications of canopy images. On average, 66.4% of the area was predicted correctly, with a median of 74.1%. Overall, this model provides a reasonable prediction of the physiological state of moss in images.
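A rough sketch of this pipeline, with scikit-learn's kNN standing in for the Weka classifier actually used (the features and file names below are invented for illustration):

    import cv2
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def color_features(path):
        """Mean and standard deviation of each HSV channel: six attributes."""
        hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
        return np.concatenate([hsv.mean(axis=(0, 1)), hsv.std(axis=(0, 1))])

    # Hypothetical file names; labels: 0 = hydrated, 1 = desiccated.
    paths = ["hydrated_01.jpg", "hydrated_02.jpg",
             "desiccated_01.jpg", "desiccated_02.jpg"]
    labels = [0, 0, 1, 1]

    X = np.array([color_features(p) for p in paths])
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
    print(knn.predict([color_features("unknown_patch.jpg")]))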

Poster Link    Report Link    Presentation Link   


Akshay S. Kashyap

Advisor: Prof. John Rieffel

Transfer Learning for Faster Discovery of Tensegrity Robot Gaits

Tensegrity robots are a class of soft robots gaining attention due to their ease of assembly, ability to preserve structural integrity under physical deformation, and ability to move with dynamic gaits. Previous work has shown how optimization techniques can be used to find the optimal gait for moving as efficiently and quickly as possible in a given environment. The problem with these techniques, however, is that they suffer from the "cold start" problem: they need to re-learn the optimal gait whenever the robot is deployed in new, unseen conditions. Our research explores how Transfer Learning can be used to leverage knowledge from a previous gait discovery experience, called the Source task, in a new environment, called the Target task, in order to learn these gaits faster. We use Bayesian Optimization as our base optimization technique and a transfer learning framework that models the distance between the Source and Target tasks. This approach shows an improvement in both the performance of the learned gait in the Target task and the number of optimization trials it takes to learn this optimal gait.
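A simplified sketch of the idea using scikit-optimize's Gaussian-process optimizer: warm-starting the Target-task search with Source-task evaluations is a much simpler transfer mechanism than the distance-modeling framework described above, but it shows the shape of the loop (the objective and gait parameters are invented):

    from skopt import gp_minimize

    # Stand-in objective: skopt minimizes, so return negative displacement.
    # The real objective runs the robot (or a simulation) with the gait
    # encoded by `params` and measures how far it travels.
    def gait_cost(params):
        freq, amp = params
        return -(amp * max(0.0, 2.0 - abs(2.0 - freq)))

    # Evaluations carried over from the Source task seed the Target-task search.
    source_points = [[1.0, 0.5], [2.5, 0.8], [3.0, 0.3]]
    source_values = [gait_cost(p) for p in source_points]

    result = gp_minimize(gait_cost, dimensions=[(0.5, 4.0), (0.1, 1.0)],
                         x0=source_points, y0=source_values, n_calls=20)
    print(result.x, -result.fun)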

Poster Link    Report Link    Presentation Link   


Emily Kern

Advisor: Prof. Kristina Striegnitz

Neural Network Architectures for Image Captioning

Image captioning is a popular field of research in the realm of neural networks, as it combines two neural network problems into a single challenge: object recognition in images and sentence generation. These subproblems can be handled by a convolutional neural network (CNN) and a recurrent or long short-term memory neural network (RNN/LSTM), respectively. Given extensive datasets of raw images and crowd-sourced captions courtesy of Flickr8k and Flickr30k, we set up a CNN and an RNN/LSTM in an encoder-decoder architecture. The CNN takes the raw image and extracts higher-level features. The RNN/LSTM receives the output of the CNN and, based on the training captions in the dataset, adjusts its weights so that when a new input image is presented, the neural network can generate an accurate caption describing the image. Training the neural network on both datasets yielded syntactically correct caption predictions, but the captions were not descriptive of their respective images. This could be a result of word bias accrued during the training process, causing the weights in the neural network to favor certain words.
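One common way to wire up such an encoder-decoder in Keras is the "merge" architecture sketched below, where precomputed CNN features and an LSTM encoding of the partial caption jointly predict the next word (the sizes are illustrative and the report's exact architecture may differ):

    from tensorflow.keras import layers, Model

    VOCAB, MAXLEN, FEAT = 5000, 20, 2048        # illustrative sizes

    # Encoder side: precomputed CNN image features, projected down.
    img_in = layers.Input(shape=(FEAT,))
    img_emb = layers.Dense(256, activation="relu")(img_in)

    # Decoder side: the partial caption so far, summarized by an LSTM.
    cap_in = layers.Input(shape=(MAXLEN,))
    cap_emb = layers.Embedding(VOCAB, 256, mask_zero=True)(cap_in)
    cap_state = layers.LSTM(256)(cap_emb)

    # Merge both views and predict the next word of the caption.
    merged = layers.add([img_emb, cap_state])
    next_word = layers.Dense(VOCAB, activation="softmax")(merged)

    model = Model(inputs=[img_in, cap_in], outputs=next_word)
    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")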

Poster Link    Report Link    Presentation Link   


Benjamin E. Kopchains

Advisor: Prof. Christopher Fernandes

The Efficacy of Augmented Reality in the Classroom

My senior year capstone project consisted of an iOS application that uses augmented reality (AR) to teach students the process of photosynthesis. An emerging technology made popular recently through applications such as Nintendo's mobile game Pokemon Go and IKEA's IKEA Place, augmented reality uses a device's camera to overlay 3-dimensional graphics and images onto the device's screen, drawing a composite image. Building on previous research that positively correlates student engagement with student performance, I used the application I built to assist students in learning while making use of this new and interactive technology. In doing so, I measured student learning through pre- and post-tests and determined the usefulness of augmented reality in the classroom compared to more conventional, less interactive learning methods.

Poster Link    Report Link    Presentation Link   


Dae Kwang Lee

Advisor: Prof. Matthew Anderson

Blockchain Consensus: Secure and Fast Transactions

In Bitcoin, transactions are stored in timestamped objects called blocks. Ideally, the public ledger forms a singly linked chain of blocks, a single blockchain, and each user maintains a consistent copy of it. When the blockchain has multiple branches, or forks, the longest branch is the only valid chain according to the current protocol, the Nakamoto Consensus. GHOST is an alternative protocol that instead selects the branch with the most blocks appended to it. GHOST processes transactions much faster than the Nakamoto Consensus but produces more forks. Neither protocol protects the system when selfish miners collude to keep mining on a shorter branch and make it the longest (or heaviest) in order to revert valid blocks. What we need is a new policy that is as fast as GHOST but as secure as the Nakamoto Consensus.
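The difference between the two fork-choice rules can be made concrete with a toy block tree (a sketch for illustration, not Bitcoin's actual implementation):

    from collections import defaultdict

    children = defaultdict(list)              # parent hash -> list of children

    def add_block(parent, block):
        children[parent].append(block)

    def longest_chain(block):
        """Nakamoto Consensus: follow the child with the longest chain below it."""
        if not children[block]:
            return [block]
        return [block] + max((longest_chain(c) for c in children[block]), key=len)

    def subtree_size(block):
        return 1 + sum(subtree_size(c) for c in children[block])

    def ghost(block):
        """GHOST: at each fork, descend into the heaviest subtree."""
        path = [block]
        while children[path[-1]]:
            path.append(max(children[path[-1]], key=subtree_size))
        return path

    # A fork: one long thin branch vs. a shorter branch with more total blocks.
    for parent, block in [("G", "A"), ("A", "B"), ("B", "C"),
                          ("G", "X"), ("X", "Y1"), ("X", "Y2"), ("X", "Y3")]:
        add_block(parent, block)
    print(longest_chain("G"))   # ['G', 'A', 'B', 'C']
    print(ghost("G"))           # ['G', 'X', 'Y1'] -- the rules disagree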

Poster Link    Report Link    Presentation Link   


Shinell Manwaring

Advisor: Prof. Matthew Anderson

Combining VR and Gamification to Improve Academic Performance

Virtual reality (VR) and gamification are growing trends that have been shown to increase college students' motivation, engagement, and educational performance. This report outlines the creation of a module, a self-contained unit on an introductory computer science topic, that combines gamification and VR. The module practices the programming concept of lists and presents it in a gamified VR application. The purpose of this module is to test whether it can improve the academic performance of introductory computer science students. The experiment described in this paper details the test results from two groups of participants: those who used the module and those who watched a video about lists. To ensure that my findings were due entirely to VR's interactive component, both groups completed a four-question test, with questions of varying difficulty, before and after finishing their activity. While participants used the module or watched the video, I noted feedback and recorded observational notes describing each participant's session, e.g., how long they used the module, how many levels they completed, whether they were engaged and attentive, and whether they were confused by the dialogue or the task at hand. Based on the data collected, more participants in the video group were able to correctly answer all of the questions in the final testing section. Therefore, I can only conclude that the module was less beneficial than the video at improving participants' academic performance. However, observational data indicates that participants found the VR module more engaging and enjoyable than the video.

Poster Link    Report Link    Presentation Link   


William M. Martin

Advisor: Prof. Aaron Cass

Authentication Using Graphical Password: Effects of Increased Security on Usability

Graphical User Authentication (GUA) is a method that uses images as passwords instead of alphanumeric characters. We propose PassDecoy, a shoulder-surfing-resilient GUA system that previous research has shown to have a higher degree of security than PassPoints (Chaisson et al. [3]). However, previous work has exposed a password problem, where increased security often comes at the expense of decreased usability. We conducted a user study to evaluate the usability of PassDecoy and its ability to be both secure and usable. The study yielded mixed results, with many measures showing insufficient evidence of an effect on usability. However, there was some observed decrease in usability, specifically in login time and in users' perception of their ability to memorize passwords. These are key measures of password usability, and future work is required before PassDecoy can be implemented further.

Poster Link    Report Link    Presentation Link   


Leslie Tucker Massad

Advisor: Prof. Nicholas Webb

Stimulating Engagement Through Negative Emotional Sentiment on Twitter

While criticizing Donald Trump's choice of words on Twitter has become part of modern American culture, it raises the question of how his purposefully negative posts affect his engagement with other public users. Through his personal account (@realDonaldTrump), the current president uses Twitter as a medium for voicing his unfiltered opinion in posts of 280 or fewer characters. Unlike most celebrity figures who maintain active Twitter accounts, Trump sends tweets that are often constructed from insulting, argumentative, or otherwise negative language. Using emotional sentiment analysis to determine the overall emotional sentiment of each publication in a large set of Donald Trump's Twitter posts, this research primarily investigates Trump's pattern of using negative emotional sentiment on Twitter and shows the advantageous outcomes that tweeting with negative emotional sentiment contributes to his overall user engagement on the platform. In addition, this research briefly investigates the emotional sentiment habits of other popular Twitter users to determine how their public engagement is affected when they post with negative emotional sentiment.
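The report does not name a sentiment tool; as an illustration, NLTK's VADER analyzer (designed for social media text) labels tweets like this (the example tweets are invented):

    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    # Requires nltk.download("vader_lexicon") once beforehand.
    analyzer = SentimentIntensityAnalyzer()
    for tweet in ["What a wonderful rally tonight!", "Total disaster. Sad!"]:
        compound = analyzer.polarity_scores(tweet)["compound"]   # -1 .. +1
        label = "negative" if compound < 0 else "non-negative"
        print(f"{label:12s} {compound:+.2f}  {tweet}")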

Poster Link    Report Link    Presentation Link   


James A. Murphy IV

Advisor: Profs. Nicholas Webb (CS) and Mehmet Fuat Sener (ECO)

Quantifying the Impact of Video Quality and Hardware/Software Stabilization on Facial Detection and Recognition in a Mobile Robot System

Without a stable image, new developments in computer vision (CV) are less applicable in practice. While facial recognition systems work well with stationary cameras, a drop in performance has been observed in mobile systems (Jung et al. 2004); this issue has been noticed during experiments at Union College as well. In response, this study empirically explored the benefit of hardware and software video stabilization for facial detection and recognition. The four solutions were: 1. no stabilization (baseline), 2. camera-stabilizing hardware, 3. video stabilization software, and 4. hardware and software combined. These solutions were tested on two cameras, a Samsung S7 (13 megapixels) and a Logitech USB camera (3 megapixels), to also provide insight into the effect of image quality on CV applications. After quantifying the results, the hardware stabilization solution proved to create complications due to its sensitivity. The software solution caused a slight 1.6% drop in detection while providing a 6.0% increase in correct recognition rate. Camera quality was the most impactful factor for detection and recognition, with the 13-megapixel camera far outperforming the 3-megapixel camera by roughly 20% throughout the experiments. The USB camera also proved ineffective beyond nine feet.
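For reference, a minimal OpenCV face-detection loop of the kind such experiments build on (the report does not specify the detector, so the Haar cascade and the file name here are assumptions):

    import cv2

    # Haar-cascade face detector that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture("test_run.mp4")    # hypothetical recorded trial
    detections = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        detections += len(faces)
    cap.release()
    print(f"{detections} face detections across the clip")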

Poster Link    Report Link    Presentation Link   


An Nguyen

Advisor: Profs. Christopher Fernandes (CS), Nicholas Webb (CS), and Harlan Holt (ECO)

Housing Price Prediction

This paper explores how house prices in five different counties are affected by housing characteristics, both internal, such as the number of bathrooms and bedrooms, and external, such as public schools' scores or the walkability score of the neighborhood. Using data on sold houses listed on Zillow, Trulia, and Redfin, three prominent housing websites, this paper utilizes both the hedonic pricing model (linear regression) and various machine learning algorithms, such as Random Forest (RF) and Support Vector Regression (SVR), to predict house prices. The models' prediction scores, as well as the ratio of overestimated to underestimated houses, are compared against Zillow's price estimation scores and ratio. Results show that SVR gives a better price prediction score than Zillow's baseline on the same dataset for Hunt County (TX), and RF gives prediction scores close or equal to the baseline on three other counties. Moreover, this paper's models reduce the ratio of overestimated to underestimated houses from Zillow's 3:2 to 1:1. This paper also identifies the four most important attributes in housing price prediction across the counties: assessment, comparable houses' sold prices, listed price, and number of bathrooms.
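A sketch of the model comparison using scikit-learn (an assumed library; the data below is synthetic, standing in for the scraped Zillow/Trulia/Redfin features):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Synthetic stand-in data: [bathrooms, bedrooms, school score, walk score].
    rng = np.random.default_rng(0)
    X = rng.random((200, 4))
    y = 100_000 + 400_000 * X @ np.array([0.4, 0.3, 0.2, 0.1]) \
        + rng.normal(0, 10_000, 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, model in [
        ("RandomForest", RandomForestRegressor(random_state=0)),
        ("SVR", make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))),
    ]:
        model.fit(X_tr, y_tr)
        print(name, round(model.score(X_te, y_te), 3))   # R^2 on held-out houses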

Poster Link    Report Link    Presentation Link   


Aidan R. Pieper

Advisor: Prof. Matthew Anderson

Iterated Local Search Algorithms for Bike Route Generation

Planning routes for recreational cyclists is challenging because they prefer longer, more scenic routes rather than the shortest one. This problem can be modeled as an instance of the Arc Orienteering Problem (AOP), a known NP-hard optimization problem. Because no algorithms are known to solve this optimization problem efficiently, we solve the AOP using heuristic algorithms, which trade accuracy for speed. We implement and evaluate two different Iterated Local Search (ILS) heuristic algorithms using an open-source routing engine called GraphHopper and the OpenStreetMap data set. We propose ILS variants which, our experimental results show, can produce better routes at the cost of time.
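The general shape of an ILS algorithm, independent of the routing details (a sketch; the project's local-search and perturbation operators act on routes in the OpenStreetMap road graph, abstracted away here):

    import random

    def iterated_local_search(initial, local_search, perturb, score, iters=100):
        """Generic ILS: improve, perturb, re-improve, keep the better solution."""
        best = local_search(initial)
        for _ in range(iters):
            candidate = local_search(perturb(best))
            if score(candidate) > score(best):     # the AOP maximizes route score
                best = candidate
        return best

    # Toy stand-in problem: pick at most 3 items to maximize total value.
    values = [4, 1, 7, 2, 9, 3, 8, 5]
    def score(x):
        return sum(v for v, used in zip(values, x) if used) if sum(x) <= 3 else -1
    def perturb(x):
        y = list(x)
        y[random.randrange(len(y))] ^= 1          # flip one item in or out
        return y
    def local_search(x):
        return x   # a real implementation would apply improving local moves

    print(score(iterated_local_search([0] * 8, local_search, perturb, score)))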

Poster Link    Report Link    Presentation Link   


Elizabeth Ricci

Advisor: Prof. John Rieffel

A Better Way to Construct Tensegrities: Planar Embeddings Inform Tensegrity Assembly

Although seemingly simple, tensegrity structures are complex in nature, which makes them both ideal for use in robotics and difficult to construct. We work to develop a protocol for constructing tensegrities more easily. We consider attaching a tensegrity's springs to the appropriate locations on some planar arrangement of attached struts. Once all of the elements of the structure are connected, we release the struts and allow the tensegrity to find its equilibrium position. This allows for more rapid tensegrity construction. We develop a black box that, given some tensegrity, returns a flat-pack: the information needed to perform this physical construction.

Poster Link    Report Link    Presentation Link   


Rex Rubin

Advisor: Prof. Aaron Cass

Creating a Document Summarizer for Novices

I am looking to see whether adding a glossary to a document summarizer will cause more coherent sentences to be extracted into the final summary. Getting into a field of research is daunting, with research papers presenting a lot of difficult information; I am looking to extract the sentences that are easiest to read for those new to the field. To do this, I will edit an existing Python program to include a glossary of words related to the original document. I currently have a working version of the summarizer and the documents that will be run through it, but testing the effectiveness of the augmentation has not yet begun.
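A minimal sketch of a frequency-based extractive summarizer with a glossary boost, illustrating the augmentation idea (this is not the program being edited, whose scoring may differ):

    import re
    from collections import Counter

    def summarize(text, glossary=(), n=2):
        """Rank sentences by average word frequency, boosting glossary terms."""
        sentences = re.split(r"(?<=[.!?])\s+", text)
        freq = Counter(re.findall(r"[a-z']+", text.lower()))
        boost = max(freq.values())
        for term in glossary:          # glossary words count extra
            freq[term.lower()] += boost

        def score(sentence):
            tokens = re.findall(r"[a-z']+", sentence.lower())
            return sum(freq[t] for t in tokens) / (len(tokens) or 1)

        return sorted(sentences, key=score, reverse=True)[:n]

    doc = ("Neural networks learn layered features. They are trained with "
           "backpropagation. Training requires large datasets.")
    print(summarize(doc, glossary=["backpropagation"], n=1))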

Poster Link    Report Link    Presentation Link   


Sharifa Sahai

Advisor: Profs. John Rieffel (CS) and Scott Kirkton (BIO)

Evaluation of Grasshopper Inspired Spring Actuation Model for Tensegrity Robot Locomotion

Tensegrities are popular structures for soft robots due to their robust properties, but they are also difficult to move in meaningful ways. Looking at movement methods in grasshoppers, which are able to move many times their body length in short intervals, may lead to discovering more effective movement patterns for tensegrity structures. Much of the grasshopper's effective locomotion is due to the spring-like structures in its hind legs, which store and release the energy needed for movement. Tensegrities also have spring structures which can be contracted to produce movement. Spring stiffness varies in grasshoppers between species and stages of development. By Hooke's law, F = kx, increasing the spring stiffness should increase force production linearly. We explore the effects of changing spring stiffness on distance traveled by a tensegrity robot simulated in the Open Dynamics Engine. Six of the twenty-four springs within the tensegrity robot were chosen to be actuated. The stiffnesses of these six springs were changed either uniformly or individually to determine whether novel tensegrity movement would be produced. Spring stiffness values were optimized using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Unlike grasshoppers, whose jump performance increases with greater spring stiffness, the resulting displacement values of the tensegrity did not follow a linear trend with changes in spring stiffness, nor did they converge, as Hooke's law would suggest, to the greatest possible value. This suggests that altering the spring stiffnesses in tensegrities could lead to more diverse patterns of locomotion which may not follow a linear trend with increasing spring stiffness.
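For illustration, the optimization loop using the cma Python package (an assumption; the report does not name an implementation), with a stand-in objective in place of the simulator:

    import cma   # pip install cma

    # Stand-in objective: CMA-ES minimizes, so return negative displacement.
    # The real objective simulates the tensegrity with the six actuated
    # springs set to these stiffnesses and measures distance traveled.
    def neg_displacement(stiffnesses):
        return -sum(k * (10.0 - k) for k in stiffnesses)

    es = cma.CMAEvolutionStrategy(6 * [5.0], 1.0)   # start: all springs at 5.0
    while not es.stop():
        candidates = es.ask()                       # sample one generation
        es.tell(candidates, [neg_displacement(c) for c in candidates])
    print(es.result.xbest)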

Poster Link    Report Link    Presentation Link   


Varun Shah

Advisor: Profs. Kristina Striegnitz and Nicholas Webb

Clickbait Detection using Natural Language Processing and Machine Learning

Clickbait refers to social media posts designed to entice the clicking of an accompanying link in order to increase online readership. Clickbait detection is important for preserving quality and legitimacy in social media and news publishing. In this paper, I present a model for clickbait classification using Natural Language Processing and Machine Learning. The data used was obtained from the "Clickbait Challenge 2017" [4] website. I utilized Python's Natural Language Toolkit [5] and the Stanford CoreNLP [8] part-of-speech tagger to develop and implement features that help the model classify clickbait. Results showed that once the features are added to the model, RandomForest achieves the highest classification accuracy of 88.2051% with an ROC-AUC of 0.928 for clickbait instances.
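A sketch of POS-based feature extraction feeding a random forest, with invented headlines and a deliberately tiny feature set (the report's actual features are richer):

    import nltk
    from sklearn.ensemble import RandomForestClassifier

    # Requires the NLTK tokenizer and tagger data (nltk.download(...)).
    def features(headline):
        tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(headline))]
        return [len(tags),
                tags.count("PRP") + tags.count("PRP$"),   # personal pronouns
                tags.count("CD"),                          # cardinal numbers
                int(headline.rstrip().endswith("?"))]

    headlines = ["You Won't Believe These 9 Tricks",
                 "Fed raises interest rates by a quarter point"]
    labels = [1, 0]                                        # 1 = clickbait

    clf = RandomForestClassifier(random_state=0)
    clf.fit([features(h) for h in headlines], labels)
    print(clf.predict([features("17 Photos That Will Make You Cry")]))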

Poster Link    Report Link    Presentation Link   


Kim Wellington

Advisor: Profs. Kristina Striegnitz (CS) and Stephen J. Schmidt (ECO)

Agent-Based Modeling to Analyze the Effect of the 2009 Government Stimulus Package on the Labor Market

The unemployment rate can more than double during a recession, and to combat the negative effects of a recession the government stimulates the economy using various combinations of three primary stimulus methods. These methods -- tax cuts, government-funded projects, and increasing the duration of unemployment insurance -- are not equally effective. In this project I used an agent-based model to analyze the effectiveness of the different components of the stimulus package in improving the labor market. To reflect the U.S. market and the 2008 recession, I adjusted an agent-based model of a simple labor market built in the NetLogo multi-agent programmable modeling environment. Using this modified model, I ran experiments on individual aspects of the stimulus methods and combinations thereof. The results showed that decreasing the tax rate can decrease unemployment, as it makes work more attractive to workers and makes it easier to match workers and employers. Similarly, increasing government funding of projects increases job vacancies and thus decreases unemployment, because government-funded projects increase demand and create employment opportunities. On the other hand, increasing the duration of unemployment insurance has a detrimental effect on labor market recovery, as it decreases workers' willingness to accept employment. Based on these results, we can also conclude that agent-based modeling is an effective method for stimulus package analysis. During recessions, such analysis can help maximize the positive effect of government stimulus by balancing the various components of the package.

Poster Link    Report Link    Presentation Link   



Computer Science Senior Design Projects from previous years:
2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003


All rights reserved - Union College, Schenectady, NY 12308
Last edit 24Nov2017
Address questions or comments to freyd@union.edu