What Every Computer Scientist Should Know About Machine Learning: The Dream, the Difficulty, and a Direction

September 10, 2001 - 11:15am to 12:15pm
Location: 3-125 EE/CS Building
We begin by outlining a dream: Context-Adaptive Computing. Today's computer applications are designed for one or a few mythical "average" users; performance tradeoffs are resolved in ways that do the least damage to the greatest number of users, and an application's behavior after months or years of use is identical to its initial out-of-the-box behavior. This is a pity. In any particular deployment context, only a relatively small fraction of the allowable variations are exercised over the lifetime of an application. Economic considerations preclude the designer from hand-tailoring applications to individual users. But perhaps this tailoring can be automated. The process of an application optimizing its own preferences and internal parameters to fit its deployment context is what we mean by context-adaptive computing. Machine learning, which is concerned with the automatic selection of a good concept from a space of hypotheses using training data as a guide, would seem to be an appropriate engine to realize context-adaptive computing as well as many other dream systems of the future.

Unfortunately, there is a difficulty. Current machine learning is largely based upon statistics. Training data are viewed as evidence for and against the various elements of the space of concepts. The complexity of a learning problem is measured as the smallest number of randomly selected training examples needed for the best learning algorithm to be confident that its answer is a good one, and this complexity grows directly with the expressiveness of the concept space (a standard form of the bound is sketched below). Many everyday human concepts, including those underlying context-adaptive computing, require a space so expressive that an inordinate number of training examples is needed.

In this talk we will give an overview of Explanation-Based Learning (EBL), one direction that may improve on the example complexity of purely empirical learning (a minimal sketch of the generalization step follows the abstract). EBL integrates prior domain knowledge with the traditional set of training examples. The domain knowledge is typically represented in a first-order language, following the syntax of a conventional logic and employing a conventional sound inferencer, but the semantics of the axioms is quite different: the axiom set can be rife with contradictions and approximations. Interestingly, even with such approximations, simplifications, and other flaws, domain knowledge can greatly reduce the complexity of learning. Several implemented systems will be described, along with others currently under design.
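To make the statistical difficulty concrete, here is the standard PAC sample-complexity bound for a finite hypothesis space H (a textbook result included for illustration; the abstract does not state it explicitly). For a learner that outputs a hypothesis consistent with the training data to have true error at most \epsilon with probability at least 1 - \delta, it suffices to see

\[
    m \;\ge\; \frac{1}{\epsilon}\Bigl(\ln\lvert H\rvert + \ln\frac{1}{\delta}\Bigr)
\]

randomly drawn training examples. The \ln\lvert H\rvert term ties the number of examples directly to the expressiveness of the concept space; prior knowledge that rules out most of H shrinks that term, which is precisely the leverage EBL aims to provide.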
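As an illustration of the EBL generalization step, below is a minimal Explanation-Based Generalization (EBG) sketch in Python, in the spirit of the classic EBG formulation. It is a toy reconstruction under stated assumptions, not code from the speaker's systems: the domain theory is a list of Horn-clause rules, the training example is a list of ground "operational" facts, and all names (ebg_prove, ancestor, parent) are illustrative. The key move is that fact unifications bind only the specific proof, while rule unifications are mirrored on a general copy; the operational leaves of the general proof become the body of the learned rule.

    # A minimal Explanation-Based Generalization (EBG) sketch. Terms are
    # tuples such as ('parent', '?x', '?y'); strings starting with '?'
    # are variables. An illustrative toy, not the speaker's code.

    def is_var(t):
        return isinstance(t, str) and t.startswith('?')

    def unify(a, b, s):
        """Extend substitution s so a and b match, or return None.
        (No occurs check -- this is only a sketch.)"""
        if a == b:
            return s
        if is_var(a):
            return unify(s[a], b, s) if a in s else {**s, a: b}
        if is_var(b):
            return unify(a, s[b], s) if b in s else {**s, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                s = unify(x, y, s)
                if s is None:
                    return None
            return s
        return None

    def subst(t, s):
        """Apply substitution s to term t, following binding chains."""
        while is_var(t) and t in s:
            t = s[t]
        return tuple(subst(x, s) for x in t) if isinstance(t, tuple) else t

    def rename(rule, n):
        """Copy a rule with fresh variables so repeated uses don't collide."""
        def r(t):
            if is_var(t):
                return t + '_%d' % n
            return tuple(r(x) for x in t) if isinstance(t, tuple) else t
        head, body = rule
        return r(head), [r(b) for b in body]

    def ebg_prove(goals, s_spec, s_gen, leaves, counter):
        """Prove the conjunction `goals` under the specific substitution
        s_spec while mirroring every *rule* unification under the general
        substitution s_gen. Facts bind only the specific side; that
        asymmetry is what generalizes the proof."""
        if not goals:
            return s_spec, s_gen
        first, rest = goals[0], goals[1:]
        spec = subst(first, s_spec)
        mark = len(leaves)
        for fact in FACTS:                 # operational leaf of the proof
            s1 = unify(spec, fact, s_spec)
            if s1 is not None:
                leaves.append(first)       # resolved generally at the end
                out = ebg_prove(rest, s1, s_gen, leaves, counter)
                if out is not None:
                    return out
                del leaves[mark:]          # undo on backtracking
        for rule in RULES:                 # chain through the domain theory
            counter[0] += 1
            head, body = rename(rule, counter[0])
            s1 = unify(head, spec, s_spec)
            if s1 is None:
                continue
            s2 = unify(head, subst(first, s_gen), s_gen)
            if s2 is None:
                continue
            out = ebg_prove(body + rest, s1, s2, leaves, counter)
            if out is not None:
                return out
            del leaves[mark:]
        return None

    # Toy domain theory: an ancestor is a parent, or a parent of an ancestor.
    RULES = [
        (('ancestor', '?x', '?z'), [('parent', '?x', '?z')]),
        (('ancestor', '?x', '?z'), [('parent', '?x', '?y'),
                                    ('ancestor', '?y', '?z')]),
    ]
    # One concrete training example, described by operational ground facts.
    FACTS = [('parent', 'tom', 'bob'), ('parent', 'bob', 'ann')]

    goal = ('ancestor', '?A', '?B')        # the general target concept
    leaves = []
    result = ebg_prove([goal], {'?A': 'tom', '?B': 'ann'}, {}, leaves, [0])
    if result is not None:
        _, s_gen = result
        print(subst(goal, s_gen), ':-', [subst(l, s_gen) for l in leaves])
        # -> ('ancestor', '?A', '?B') :-
        #    [('parent', '?A', '?y_4'), ('parent', '?y_4', '?B')]

Note that a single training example yields a general macro-rule: the two chained rule applications in the explanation are composed into one operational rule, which is the sense in which domain knowledge can substitute for large numbers of training examples.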