We have seen that we can do a lot with DCGs, some of it quite surprising. In particular, the feature passing mechanism of DCGs makes it possible to give a surprisingly straightforward account of long distance dependencies, especially when more complex feature values (and in particular, difference lists) are used. So: just how good are DCGs? Let's start by thinking about their good points:
It seems fair to say that DCGs are a lot neater than ATNs. Certainly the form in which we have been writing them, with simple context free rules regulated by feature agreement, is very straightforward. Of course, as you should remember from last semester, in DCGs we are free to add arbitrary pieces of code to our rules --- and if we do this, we are working with a full-powered programming language. But it is clear from our work that we don't have to do this. Simple feature agreement can perform a lot of useful linguistic work for us.
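To make the point concrete, here is a minimal sketch of the kind of DCG meant above: plain context free rules whose only extra machinery is a shared number feature enforcing subject-verb agreement. The lexical items and category names are illustrative, not taken from the notes.

```prolog
% Context free rules regulated by a single agreement feature (Num).
s --> np(Num), vp(Num).
np(Num) --> det(Num), n(Num).
vp(Num) --> v(Num).

% Lexicon: each word carries its number value.
det(singular) --> [a].
det(_)        --> [the].
n(singular)   --> [witch].
n(plural)     --> [witches].
v(singular)   --> [vanishes].
v(plural)     --> [vanish].
```

With this grammar, `phrase(s, [the, witches, vanish])` succeeds, while `phrase(s, [a, witches, vanish])` fails: Prolog's unification of the shared `Num` variable does all the agreement checking, with no hand-written code in the rules.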
It is also fair to say that DCGs are a lot more declarative than ATNs. Certainly the rules we wrote above have a nice, clear declarative interpretation: they simply license certain configurations of features and categories. We don't have to think in terms of how this information is used, or in terms of movement.
But DCGs have certain disadvantages:
For a start, they lock us into one way of handling grammars. DCGs carry out recognition and parsing using top-down depth-first search --- for that's the way the Prolog search strategy works. This won't always be the best way to work (sometimes a bottom-up, or left-corner, strategy might be better). But if we decide to use DCGs, we're forced to accept the top-down approach. As computational linguists, it really is our business to know a lot about grammars and how to compute with them --- we shouldn't accept the off-the-shelf solution offered by DCGs just because it's there.
Our DCGs make it very clear that ``grammars + features'' is potentially a very powerful strategy. But our use of the idea has been highly Prolog specific. What exactly is a feature? How should features be combined? Is the idea of gap threading really fundamental as a way of handling long-distance dependencies, or is it just a neat Prolog implementation idea? These are important questions, and we should be looking for answers to them.
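As a reminder of just how Prolog specific the gap threading idea is, here is a small sketch of it: the gap is threaded through every rule as a pair of extra arguments (in effect, a difference list of gaps), and an empty NP rule discharges it. Again, the lexical items and the `gap(np)`/`nogap` representation are illustrative, not taken from the notes.

```prolog
% Gap threading: GapIn/GapOut are passed through every rule.
s(GapIn, GapOut) --> np(GapIn, Gap1), vp(Gap1, GapOut).
vp(GapIn, GapOut) --> v, np(GapIn, GapOut).

% A topicalised sentence: the fronted NP introduces a gap
% that must be discharged somewhere inside the s.
topic --> np(nogap, nogap), s(gap(np), nogap).

% An ordinary NP leaves the gap status unchanged;
% an empty NP consumes the pending gap.
np(Gap, Gap) --> [mary].
np(gap(np), nogap) --> [].

v --> [likes].
```

Here `phrase(topic, [mary, mary, likes])` succeeds, analysing the string as ``Mary, Mary likes _'': the `gap(np)` value is threaded from the `s` down through the `vp` until the empty NP rule discharges it. It works beautifully in Prolog --- but nothing in the notation tells us whether this is a deep idea about long-distance dependencies or just a clever encoding.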
Finally, it should be remarked that DCGs can quickly become clumsy to write. DCGs with six features are OK; DCGs with 30 features are painful. It would be nice if we had a more friendly notation for working with features.
This discussion pretty much tells us what we should do next. For the remainder of the course, we will examine the two components (grammars and features) in isolation, and only right at the end will we put them back together. Most of our time will be spent looking at context free grammars and how to work with them computationally. Once we have a good understanding of what is involved in recognizing/parsing context free languages, we will take a closer look at features. Finally, we'll bring them together again. This will result in a system that has all the advantages of DCGs, and none of the disadvantages.