There is a very obvious practical problem with FSAs: they are not modular, and can be very hard to maintain.
To see what this means, suppose we have a large FSA network containing a lot of information about English syntax. As we have seen, FSAs are essentially just directed graphs, so a large FSA is just going to be a large directed graph. Suppose we ask the question: where is all the information about noun phrases located in the graph? The honest answer is: it's probably spread out all over the place. Usually there will be no single place we can point to and say: that's what a noun phrase (or a verb phrase, or an adjectival phrase, or a prepositional phrase, ...) is.
Note that this is not just a theoretical problem: it's a practical problem too. Suppose we want to add more linguistic information to the FSA --- say, more information about noun phrases. Doing so will be hard work: we will have to carefully examine the entire network for all the possible places where the new information might be relevant. The bigger the network, the harder it becomes to ensure that we have made all the changes that are needed.
It would be much nicer if we really could pin down where all the information was --- and there is a simple way to do this. Instead of working with one big network, work with a whole collection of subnetworks, one subnetwork for every category (such as sentence, noun phrase, verb phrase, ...) of interest. And now for the first important idea:
Making an X transition in one subnetwork can be done by traversing the X subnetwork. That is, the network transitions can call on other subnetworks.
Let's consider a concrete example. Here's our very first RTN:
RTN1
s
--
           NP        VP
   -> 0 ------> 1 ------> 2 ->

np
---
           Det       N
   -> 0 ------> 1 ------> 2 ->

vp
---
           V         NP
   -> 0 ------> 1 ------> 2 ->
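It will help to have a concrete representation of the networks themselves. Here is one way RTN1 might be written down in Prolog; be warned that the predicate names subnetwork/1, initial/2, final/2, and arc/4 (subnetwork, source state, target state, arc label) are just the conventions of this sketch, not anything official.

subnetwork(s).
subnetwork(np).
subnetwork(vp).

initial(s,0).
initial(np,0).
initial(vp,0).

final(s,2).
final(np,2).
final(vp,2).

arc(s,0,1,np).
arc(s,1,2,vp).
arc(np,0,1,det).
arc(np,1,2,n).
arc(vp,0,1,v).
arc(vp,1,2,np).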
We also have the following lexicon, which I've written in our now familiar Prolog notation:
word(n,wizard).
word(n,broomstick).
word(np,'Harry').
word(np,'Voldemort').
word(det,a).
word(det,the).
word(v,flies).
word(v,curses).
RTN1 consists of three subnetworks, namely the s, np, and vp subnetworks. And a transition can now be an instruction to go and traverse the appropriate subnetwork.
Let's consider an example: the sentence Harry flies the broomstick. Informally (we'll be much more precise about this example later), this is why RTN1 recognizes this sentence. We start in state 0, the initial state of the s subnetwork (after all, we want to show that the string is a sentence). The word Harry is then accepted (the lexicon tells us that it is an np), and this takes us to state 1 of subnetwork s.
But now what? We have to make a vp transition to get to state 2 of the s subnetwork. So we jump to the vp subnetwork, and start trying to traverse that. The vp subnetwork immediately lets us accept flies, as the lexicon tells us that this is a v, and this takes us to state 1 of subnetwork vp.
But now what? We have to make an np transition to get to state 2 of the vp subnetwork. So we jump to the np subnetwork, and start trying to traverse that. The np subnetwork lets us accept the broomstick, so we have worked through the entire input string. We then jump back to state 2 of the vp subnetwork --- a final state, so we have successfully recognized a vp --- and then jump back to state 2 of the s subnetwork. Again, this is a final state, so we have successfully recognized the sentence.
As I said, this is a very informal description of what is going on, and we need to be a lot more precise. (After all, we are doing an awful lot of jumping around, so we need to know exactly how this is controlled.)
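Still, as a first approximation, here is a sketch of how the control could be handled in Prolog, using the subnetwork/1, initial/2, final/2, and arc/4 facts assumed above. The key point is that every jump to a subnetwork is simply a recursive call, and Prolog's call stack remembers where to jump back to. (This is only a sketch of the idea, not the official recognizer.)

% recognize(Net,String): String is recognized by subnetwork Net.
recognize(Net,String) :-
    traverse(Net,String,[]).

% traverse(Net,In,Out): a prefix of In is recognized by Net, leaving Out.
traverse(Net,In,Out) :-
    initial(Net,State),
    path(Net,State,In,Out).

% In a final state we may stop.
path(Net,State,String,String) :-
    final(Net,State).

% A lexical transition consumes one word of the right category.
path(Net,From,[Word|Rest],Out) :-
    arc(Net,From,To,Cat),
    word(Cat,Word),
    path(Net,To,Rest,Out).

% A subnetwork transition calls the named subnetwork recursively.
path(Net,From,In,Out) :-
    arc(Net,From,To,Sub),
    subnetwork(Sub),
    traverse(Sub,In,Rest),
    path(Net,To,Rest,Out).

With these clauses, the query recognize(s,['Harry',flies,the,broomstick]) succeeds, tracing exactly the jumps described above: the calls to traverse/3 are the jumps into subnetworks, and returning from a call is the jump back.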
But forget that for now and simply look at RTN1. It should be clear that it is far more modular than any FSA we have seen. If we ask where the information about noun phrases is located, there is a clear answer: it's in the np subnetwork. And if we need to add more information about NPs, this is now easy: we just modify the np subnetwork. We don't have to do anything else: all the other subnetworks can now access the new information simply by making a jump.
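For instance (a purely hypothetical extension, just to make the point), suppose we wanted to allow optional adjectives between the determiner and the noun. In the encoding above we would touch nothing but the np subnetwork and the lexicon, say by adding a looping arc and a couple of entries:

arc(np,1,1,adj).
word(adj,evil).
word(adj,young).

The s and vp subnetworks are untouched, yet they pick up the extension automatically the next time they call np.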
So, from a practical perspective, it is clear that the idea of subnetworks with the ability to call one another is a useful one: it helps us organize our linguistic knowledge better. But once we have come this far, there is a further step that can be taken, and this will take us to genuinely new territory:
Allow subnetworks to call one another recursively.
Let's consider an example. RTN2 is an extension of RTN1:
RTN2
s
--
           NP        VP
   -> 0 ------> 1 ------> 2 ->

np
---
           Det       N
   -> 0 ------> 1 ------> 2 ->
                         |   ^
                      wh |   | VP
                         v   |
                           3

vp
---
           V         NP
   -> 0 ------> 1 ------> 2 ->
For our lexicon we take the lexicon of RTN1 together with:
word(wh,who).
word(wh,which).
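In the encoding sketched earlier, RTN2 differs from RTN1 by just two extra arcs in the np subnetwork (again using my assumed arc/4 notation): the wh transition from state 2 to state 3, and the VP transition from state 3 back to state 2.

arc(np,2,3,wh).
arc(np,3,2,vp).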
Note that this network allows recursive subnetwork calls. If we are in subnetwork vp trying to make an np transition, we do so by jumping to subnetwork np. But if, once there, we need to make the VP transition from state 3 to state 2, we must jump to the vp subnetwork again. And from there we may jump to the np subnetwork again, and then to the vp subnetwork again, and so on ...
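Under the sketch given earlier, this bookkeeping comes for free from Prolog's call stack. For example, assuming the RTN2 arcs above, the query

?- recognize(np,[the,wizard,who,curses,'Voldemort']).

succeeds: the VP transition from state 3 triggers a fresh call to the vp subnetwork, whose own np transition then consumes Voldemort, and each completed call returns us to the state we jumped from.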
Now, this kind of recursion is linguistically natural: it's what allows us to generate noun phrases like the wizard who curses the wizard who curses the wizard who curses Voldemort. But obviously if we are going to do this sort of thing, we need more than ever to know how to control the jumping around it involves. So let's consider how to do this.