Monday, November 26, 2007

Response to the new Uriagereka article

#Full Interpretation and vacuous quantifiers: if Case can be vacuous, which Juan stated earlier but with which I nonetheless strongly, strongly disagree, why not operators that lack a variable?

# Aren't there cases where we find an operator after a variable in the speech stream? Sure, the LCA puts quantifiers first in the English speech stream under subject quantification, but not under object quantification. So when the LCA linearizes something that gives us "I think I hate every chef", the quantifier's scope (one of its two variables, the other being the restrictor) comes before the quantifier in the speech stream. As I understood things, the parser-difficulty argument mentioned in the article referred to a parser working over the speech stream, the PF, not the LF, which could look radically different and in which the quantifier would be covertly raised. Even though covert operations violate the Uniformity condition...

#What is meant by a parser -- did Baader and Frazier do a mock-up using a computer? Even if so, because we admittedly (in this paper) know so little about brains, that mock-up may be a very poor analogy. More to the point, if that's not the case and we're talking about a psycholinguistics experiment, isn't this an example of the liberal borrowing of Computer Science buzzwords (parser, memory, computation) whose correctness we're trying to examine and adjudicate?

#Why can't we define the element of 'last unit in a derivation'? In LaTeX, for example, if a reference comes before the point where the antecedent label was created, we simply compile twice in order to sort out the correct sequential numbering schema. Who's to say the mysterious brain doesn't do the same, engaging in one derivational cycle to get its bearings, then recompiling to set the proper relations? But, oops, did I just use the sort of computational analogies to the mind that I've been railing against? If so, I mean to say the two are simply related without supposing identical functionality--the brain might not 'recompile', but it can do whatever it does in language processing twice, as nebulous as that may seem.
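
As a concrete illustration of the LaTeX analogy just mentioned (this is standard LaTeX behavior, nothing from the article itself): a forward reference is unresolved on the first pass, because `\ref` consults the `.aux` file written by the *previous* compile, and only a second compile sees the label.

```latex
\documentclass{article}
\begin{document}
% On the first compile, \ref{sec:later} prints "??" -- the label
% hasn't been written to the .aux file yet. A second compile reads
% the updated .aux file and resolves the reference correctly.
As argued in Section~\ref{sec:later}, the point stands.
\section{Later material}\label{sec:later}
\end{document}
```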

#In discussing Probe/Goal relations, what happens with verb-final languages (final in the speech stream, even under an LCA linearizer)? Surely the V-head whose phi-features need amending is encountered after the Noun Phrase Goal? This has to have been discussed somewhere by now, and if not, then I'm sad for linguistics.

#Who really knows what proto-languages looked like? If we think it's something like "Me Tarzan, you Jane", aren't we watching too many movies? If we're thinking along those lines otherwise, is it because we're calling to mind infant speech? If the latter, do we explain the typology by invoking processing constraints on a young brain, and do we then hypothesize those same constraints in early humans? My beef: short of reanimating a caveman, we'll never know, as far as I can tell. Besides, the assumption of limited processing mechanisms is far from certain: just because it took us a long while to reach the industrial revolution doesn't mean early people were simply slower. It was more likely a consequence of living in hunter-gatherer societies that lacked the food base to support a diversified class of thinkers and scientists. Look at the native peoples of Australia or the Americas. They aren't stupid and their brains aren't different; the geography and climate were simply stacked against them. Why should linguistic beings ages ago have been otherwise? That is, if their brains were different, according to some metric of biological anthropology, then how do we know they had language at all, 'simpler' or not?

#Maybe I'm missing something, but isn't an MLCA language still context-sensitive? The MLCA parser still engages in derivations just like the LCA parser does; the only divergence between the two is precedence order in linearization, where in one case we'd 'speak backwards', as Chris put it in an earlier class. If all this is correct, then I'm missing the strict and exclusive association between 'operational memory' and the LCA. Unless, of course, we mean that because an MLCA linearizer puts operators last and variables first (but maybe not in verb-final languages), derivations (like Agree) are prevented from taking place. But if vacuous operators are okay, verb-final languages keep us from making a statement on how to enact Agree, and quantifiers can still come after variables in an LCA-linearized speech stream (i.e. object quantification), then this exclusive derivational-LCA relationship is tenuous, as the LCA stream does nothing uniquely special.
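
To make the 'speak backwards' point concrete, here is a toy sketch, purely my own illustration: the real LCA maps asymmetric c-command over whole trees to precedence, but for a simple right-branching tree that reduces to left-to-right leaf order, and a mirror-image (MLCA-style) linearizer just flips the order at every branch.

```python
def linearize(tree, mirror=False):
    """Return the terminal string of a nested-tuple binary tree.

    Plain mode approximates LCA order for a right-branching tree;
    mirror=True flips each branch, giving the MLCA-style reversal.
    """
    if isinstance(tree, str):          # terminal node: a single word
        return [tree]
    left, right = tree
    if mirror:
        left, right = right, left      # 'speak backwards' at each branch
    return linearize(left, mirror) + linearize(right, mirror)

# "I hate every chef" as a crude binary-branching tree
tree = ("I", ("hate", ("every", "chef")))
print(" ".join(linearize(tree)))               # LCA-style order
print(" ".join(linearize(tree, mirror=True)))  # MLCA-style mirror order
```

Note that nothing about the mirrored order blocks the derivation itself; only the pronunciation order changes, which is the point of the question above.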

#Unifying these last two points: might the MLCA have allowed for context-sensitive processing, given that the LCA stream only inconclusively supports parsing ease from operator-variable arrangement? Might there have been no proto-language that's relatively 'simple'? If so, have we lost the motivation for keeping the LCA over the MLCA?

5 comments:

Tim Hawes said...

Honestly, I have a hard time thoughtfully responding to this. I find myself either agreeing with what is said, accepting the line of argument, disagreeing on non-critical details or in non-critical ways, or having concerns beyond the scope of this class. In any case, while I might find those points interesting, I don't think bringing them up here is going to advance our discussion.

There was one part I was confused about; maybe I'm missing a technical detail. Prior to the discussion of multiple spell-out, (22) shows a tree that, presumably, the LCA cannot handle without the introduction of Linearization Induction in Kayne's story. What I don't see is how this actually saves anything in this tree. Isn't the relationship between 2 and 6 the same as the relationship between 3 and 4, causing a conflict in ordering by the LCA? Put another way, 2 and 3 symmetrically c-command each other, so they shouldn't be linearizable as they are in (22). Though I don't know how important this is either, since there are obviously solutions to this (if it is a problem), including MSO.

I also second Sarah's question about verb-final languages; it seems like something that must have been addressed at some point. And perhaps more out of curiosity than anything else, I'd also like to know where the claims about what proto-languages did and did not have come from.

sarah a. goodman said...

Okay, so it's beyond reply time, but I've been at work or at Georgetown all day, so for the late-comers: As might be clear from my post, I didn't have a hard time responding to this article, and I mostly look at it incredulously, but that could be because, without further evidence, I doubt that we know what early language looked like. I mean, the only way I might agree with what is hypothesized in the article is with some evidence that tongue capacity was dextrous and brain cavities were much reduced (in a particular way) so as to lack a developed area wherever it is that Wernicke's and Broca's aphasias are located. But is there such evidence? I'd like to read HCF 2002, which the article cites as providing some. Did we read that already? I don't remember the proto-language discussion.

Plus, I was repeating the article's argument to a coworker with a PhD in Computer Science (from CMU) and an MA in linguistics (from Michigan), and she seemed to think the parsing-ease argument for favoring the LCA over the MLCA was bogus. She brought up that there are in fact right-corner parsers, and they do just fine. Could this be addressed in class or in a further reading?

I suppose this just touches on the point that I would like more details on what the 'parser' referred to in the article is and the numbers on its performance.

bdillon said...

I agree with Tim that it wouldn't really do a whole lot for advancing the discussion if I were to make a laundry list of what I did or didn't like or understand.

What I did want to touch on here was the parsing-ease argument for LCA over MLCA. If it is the case that operators are always associated with variables, then the idea that it would convey a parsing advantage to linearize them first is straightforward (this is assuming this dependency mirrors the kinds of dependencies often looked at in psycho, which may or may not be warranted). Encountering elements that allow you to predictively build dependencies leads to more accuracy in parsing than the reverse situation: operators just give you a lot more information about what to expect than variables, and so are much richer cues to navigating the structure efficiently. Presumably, encountering an operator after a variable would require a backwards search through memory or whatever, of the sort that more and more studies are showing is subject to interference and related issues in completion. Predictively building dependencies ahead of time, however, has been shown in a number of cases to make things go much more smoothly.
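
The contrast described above can be sketched in miniature. This is my own toy illustration, not anything from the psycholinguistics literature: a stream where the operator comes first lets the parser open a prediction that the upcoming variable cheaply confirms, whereas a variable encountered first has to sit in memory until a backward search resolves it.

```python
def parse(tokens):
    """Link operators ("OP") to variables ("VAR") in a flat stream.

    Returns (forward, backward): how many dependencies were completed
    by confirming a prediction vs. by a backward search through memory.
    """
    open_ops, pending_vars = [], []
    forward, backward = 0, 0
    for tok in tokens:
        if tok == "OP":
            if pending_vars:
                pending_vars.pop()     # operator after its variable:
                backward += 1          # costly backward memory search
            else:
                open_ops.append(tok)   # predict an upcoming variable
        elif tok == "VAR":
            if open_ops:
                open_ops.pop()         # prediction confirmed cheaply
                forward += 1
            else:
                pending_vars.append(tok)  # hold in memory, unresolved
    return forward, backward

print(parse(["OP", "VAR"]))   # operator-first: (1, 0), predictive link
print(parse(["VAR", "OP"]))   # variable-first: (0, 1), backward search
```

Whether interference-prone backward searches of this kind actually exert linearization pressure is, of course, exactly the empirical question at issue; the sketch only shows the bookkeeping asymmetry.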

That said, whether or not the parsing pressures that prefer to have richer cues (operators, here) before poorer ones (variables, here) map straightforwardly to an argument about a universal linearization maxim in the grammar is unclear. Assuming, though, that the LCA as formulated is what we have, and is distinct from its mirror image MLCA, then it doesn't seem entirely far-fetched to imagine that these kinds of parsing pressures could have contributed to this decision. It's still quite fuzzy around the edges, but seems a promising enough place to start speculating.

RE: Right corner parsers... While, yes, you could do it that way, how would one be able to make a right corner parser act like what is actually employed by humans? Do right corner parsers "do just fine" in incremental interpretation (which is my parser's forte)?

sarah a. goodman said...

While the operator-variable structuring argument might seem appealing, how is it executed in practice?

I'm no psycholinguist, so I might need an answer to: How good are people at understanding "I hate every chef"? That seems fairly clear to me, and the operator comes after the variable. But, as I've said, just because I have a brain doesn't mean I know how it works.

In short: is there enough of a difference experimentally to show that operator first, variable second really exerts an evolutionary pressure?

Plus, that particular ordering doesn't seem to universally shake out across languages: verb-final languages, object quantification, etc., which makes me wonder whether those sorts of constraints are actually constraints at all, given their spotty appearance.

Tim Hunter said...

My main reaction to this chapter goes back to some familiar ground ...

A recurring theme is that the MLCA initially seems like a more simple/natural linearisation procedure than the LCA, because this way "items are pronounced roughly as they are engaged in syntactic dependency", i.e. in the order in which they are introduced into the derivation. The LCA's parsing advantage "trumps" this simplicity/naturalness, but the MLCA is claimed to be the null hypothesis. This obviously relies on the familiar idea that derivational time can be "taken seriously" as some kind of time. As far as I can tell, this idea has the status of a hypothesis, and does not follow by logical necessity.

Adopting this hypothesis about the reality of derivational time makes the prediction that we should linearise according to the MLCA, which is false. What I don't understand is why this doesn't prompt some consideration of the possibility that the hypothesis, having made a false prediction, could be incorrect. I'm not saying it should constitute indisputable evidence that the hypothesis must be incorrect; of course, it may be correct and the parsing advantages of the LCA may outweigh the correctly-predicted preference for the more simple/natural MLCA. But I don't really understand how, methodologically, we're treating the idea/hypothesis/assumption/hunch/whatever that derivational time can be taken seriously, such that these sorts of facts don't appear to be evidence (however weak) against it.

Perhaps a sharper way of asking the question is: What conceivable piece of evidence could we ever discover, empirically, that would lead us to reject the hypothesis that derivational time can be "taken seriously"? The way this chapter goes about things, I don't really understand what this evidence would be.

One response to the above that I can imagine is to say that the derivational-time-is-real (DTIR) hypothesis didn't really make a completely erroneous prediction: it predicted the MLCA, which could be tweaked very slightly to produce the (correct) LCA once we took into account some very reasonable parsing arguments, whereas without the DTIR hypothesis we wouldn't have even been in the ballpark of the MLCA and the LCA. Fair enough, but I think there are independent arguments leading to the MLCA/LCA ballpark, i.e. the ballpark where asymmetric c-command corresponds to linear order. Such arguments could come from the fact (noted by folks like Robert Chametzky and Bob Frank) that the c-command relation alone can be used to encode everything there is to know about a particular tree structure, just as linear order encodes everything there is to know about a particular string, so it makes sense that if these are the primitive relations of trees and strings, respectively, they should be the ones used to tie the two kinds of structures together.