T. Gregory Bandy
Abstract: Peter Wegner has presented Interaction Machines as a more powerful class of computation than Turing Machines. Whereas Turing Machines are defined in terms of algorithms, Interaction Machines are defined in terms of interactions. Interaction admits algorithms as a legitimate class of computation, but it is claimed to constitute a more powerful class that cannot be achieved by algorithms alone. This fundamental thesis has engendered a vigorous and entertaining debate in the academic computer science community over the proper meaning of computability, the role of interaction, and how computational power should be assessed. Extensions to Turing Machines that better describe interactive behavior have been presented in recent years, but it is not clear that these extensions are more powerful than Turing Machines. Indeed, the proponents of Interaction Machines have not yet supplied the formal proofs and formal descriptions needed to win the established computer science community to their viewpoint.
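To make the claimed distinction concrete, here is a minimal Python sketch, not from Wegner and with all names invented, contrasting a Turing-style computation, which maps a finite input to an output and halts, with an interactive computation, whose inputs arrive during the computation and can depend on its earlier behavior:

```python
from itertools import count

def algorithmic(x):
    """A Turing-style computation: a finite input is mapped to an output
    by a terminating procedure; no input arrives after it starts."""
    return x * x

def interactive(environment):
    """An interaction-style computation: an unbounded dialogue in which
    each output may depend on the entire history of inputs, including
    inputs the environment chose after observing earlier steps."""
    history = []
    for step in count():
        stimulus = environment(step, history)  # input arrives mid-computation
        history.append(stimulus)
        yield sum(history)                     # response depends on the history

# The environment can react to the ongoing run -- the feature a one-shot
# input tape does not model.
responses = interactive(lambda step, history: step + len(history))
print([next(responses) for _ in range(5)])   # [0, 2, 6, 12, 20]
```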
Robert Gallup
Abstract: The high cost of software development and the failure of developers to follow good coding practices have contributed to the drive for software components. With the demand for truly encapsulated code delivered on much shorter time frames, component software offers potential that was never fully realized with OO programming. Just as OO programming added a layer of abstraction to existing practices, component-based programming adds its own abstraction on top of OO practices. In fact, the internal code of software components is often, but not necessarily, object-oriented. This talk will investigate the drive toward software components, their relationship to OO issues, and some of the problems posed by software components.
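As a rough illustration of the component idea, assuming nothing beyond standard Python and with all names hypothetical, the sketch below separates a component's published contract from its hidden implementation; clients depend only on the contract, so the internals could be object-oriented or not:

```python
from typing import Protocol

class SpellChecker(Protocol):
    """The component's contract: clients depend only on this interface,
    never on the implementation behind it."""
    def check(self, word: str) -> bool: ...

class SimpleSpellChecker:
    """One interchangeable implementation; internally object-oriented here,
    but it could equally wrap procedural or third-party code."""
    def __init__(self, dictionary: set):
        self._dictionary = dictionary   # encapsulated state

    def check(self, word: str) -> bool:
        return word.lower() in self._dictionary

def proofread(text: str, checker: SpellChecker) -> list:
    """Client code written against the contract: the component can be
    swapped out without touching this function."""
    return [w for w in text.split() if not checker.check(w)]

print(proofread("teh quick brown fox", SimpleSpellChecker({"quick", "brown", "fox"})))
# ['teh']
```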
Adam Graham
Abstract: The Personal Software Process (PSP) provides software engineers with a framework for consistently producing quality software. This talk will focus on the methods PSP uses to improve defect removal, shorten overall development times, and improve the accuracy of size and time estimates. Both personal and group data will be analyzed.
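PSP's PROBE estimation method projects new estimates from a linear regression over an engineer's own historical data. The following is a minimal sketch of that idea with made-up numbers; it is not actual PSP data or tooling:

```python
def linear_regression(xs, ys):
    """Ordinary least-squares fit y = b0 + b1*x, the projection step
    PSP's PROBE method applies to one's own historical data."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          / sum((x - mean_x) ** 2 for x in xs))
    b0 = mean_y - b1 * mean_x
    return b0, b1

# Hypothetical personal history: (estimated LOC, actual hours) per project.
estimated_loc = [120, 250, 90, 400, 310]
actual_hours  = [10, 22, 8, 35, 26]

b0, b1 = linear_regression(estimated_loc, actual_hours)
new_estimate_loc = 200
print(f"Projected effort: {b0 + b1 * new_estimate_loc:.1f} hours")
```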
Michael Landi
Abstract: Object-Oriented Programming (OOP) has provided enormous improvements in our ability to write complex applications. However, as the complexity of applications continues to increase, we are realizing that OOP has limitations. We are now seeing many requirements that do not decompose neatly into behaviors centered on a single conceptual entity; such requirements are known as crosscutting concerns. Aspect-Oriented Programming (AOP) is a Post-Object Programming technology that enables the modularization and composition of crosscutting concerns. AOP introduces aspects: modular units that each encapsulate a crosscutting concern. This talk will report on the origin of AOP, what it is, how it relates to and differs from OOP, how it works, and the contributions it makes to the field of software development.
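Python has no AspectJ-style weaver, but a decorator gives a rough approximation of what an aspect does: in the hedged sketch below, with invented names, the timing concern is written once and applied declaratively rather than being scattered through every method it crosscuts:

```python
import functools
import time

def timed(func):
    """A timing 'aspect': a crosscutting concern factored into one
    module instead of duplicated inside every method it touches."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.6f}s")
        return result
    return wrapper

class Account:
    def __init__(self, balance):
        self.balance = balance

    @timed                      # the concern is woven in, not hand-coded here
    def deposit(self, amount):
        self.balance += amount

    @timed
    def withdraw(self, amount):
        self.balance -= amount

acct = Account(100)
acct.deposit(50)
acct.withdraw(30)
```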
Jennifer Martin
Abstract: As a division of metaphysics (the study of first principles or the essence of things), ontology may not seem obviously related to engineering. In computing, however, an ontology is a standardized vocabulary for transferring or sharing data, and ontological engineering is now having a large impact on software design and artificial intelligence. This seminar will discuss the basis, types, and future of ontological engineering.
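As a toy illustration, assuming nothing beyond plain Python and using an invented vocabulary, an ontology can be viewed as a set of subject-predicate-object statements over which simple inferences are drawn:

```python
# A toy ontology as subject-predicate-object triples: a shared vocabulary
# that lets two programs agree on what their data means.
triples = {
    ("Dog",    "is_a",     "Mammal"),
    ("Mammal", "is_a",     "Animal"),
    ("Dog",    "has_part", "Tail"),
}

def is_a(entity, category, facts):
    """Follow 'is_a' links transitively -- a basic ontological inference."""
    parents = {o for s, p, o in facts if s == entity and p == "is_a"}
    return category in parents or any(is_a(p_, category, facts) for p_ in parents)

print(is_a("Dog", "Animal", triples))   # True: Dog -> Mammal -> Animal
```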
Donald Riggs
Abstract: Molecular computing, in its most basic form, is the use of molecules to perform computations. At present, two divergent techniques are under development. The first employs molecules as bistable switches that can be connected into a grid, forming a Field Programmable Gate Array (FPGA); an FPGA can be configured to implement logic gates, allowing computations to be performed. The second technique, most effectively employed in the solution of NP-hard problems, exploits DNA's Watson-Crick complementarity and massive parallelism to find brute-force solutions to selected problems. This talk will introduce molecular computing but concentrate on DNA computing.
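The DNA approach can be caricatured in software. Adleman's 1994 experiment solved a small Hamiltonian-path instance by letting strands ligate into random paths in massive parallel and then chemically filtering out non-solutions; the sketch below simulates that brute-force recipe on an invented graph, as an analogy rather than a model of the chemistry:

```python
import random

# Invented directed graph; nodes and edges stand in for DNA strands.
# Random ligation is simulated by random walks, and the lab's extraction
# steps become filters over the resulting 'soup' of candidates.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 1), (1, 4), (3, 4)}
nodes = range(5)
start, end = 0, 4

def random_path(length):
    """Simulate random strand ligation: a random walk along the edges."""
    path = [start]
    while len(path) < length:
        choices = [b for (a, b) in edges if a == path[-1]]
        if not choices:
            return None
        path.append(random.choice(choices))
    return path

# 'Massive parallelism': generate a huge soup of candidate strands, then
# keep only those that end correctly and visit every node exactly once.
soup = (random_path(len(nodes)) for _ in range(100_000))
solutions = {tuple(p) for p in soup
             if p and p[-1] == end and len(set(p)) == len(nodes)}
print(solutions)   # e.g. {(0, 1, 2, 3, 4), (0, 2, 3, 1, 4)}
```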
Greg Wheeler
Abstract: The Semantic Web is an extension of the current Web in which information is given well-defined meaning, enabling computers and people to work in better cooperation. Currently, the raw information available on the Web is not in a machine-usable form: a person is still needed to discern the meaning of the information and its relevance to one's needs. The Semantic Web addresses this problem in two ways. First, it will enable communities to expose their data so that a program doesn't have to strip the formatting, pictures, and ads from a web page to guess at the relevant bits of information. Second, it will allow people to write (or generate) files that explain, to a machine, the relationships between different sets of data. For example, one will be able to declare a 'semantic link' stating that a database's 'zip-code' column and a form's 'zip' field actually mean the same thing. This will allow machines to follow links and facilitate the integration of data from many different sources.
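In practice such links are stated in RDF/OWL (for example with owl:equivalentProperty). The toy Python sketch below reuses the abstract's own zip-code example to show the idea, with invented field names and data:

```python
# A toy 'semantic link': a machine-readable statement that two differently
# named fields mean the same thing, in the spirit of owl:equivalentProperty.
equivalences = {("customers.zip-code", "order_form.zip")}

def canonical(field, links):
    """Resolve a field name to one canonical representative."""
    for a, b in links:
        if field in (a, b):
            return a
    return field

database_row = {"customers.zip-code": "13244"}
form_data    = {"order_form.zip": "13244", "order_form.name": "A. Lovelace"}

# With the link declared, a program can merge the two sources without a
# human explaining that 'zip-code' and 'zip' are the same attribute.
merged = {}
for record in (database_row, form_data):
    for field, value in record.items():
        merged[canonical(field, equivalences)] = value
print(merged)   # {'customers.zip-code': '13244', 'order_form.name': 'A. Lovelace'}
```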