Monday, November 26, 2007

Response to the new Uriagereka article

#Full Interpretation and vacuous quantifiers: if Case can be vacuous (as Juan claimed earlier, though I strongly, strongly disagree), why not operators that lack a variable?
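
To make 'an operator that lacks a variable' concrete, here is a textbook-style illustration (my example, not the article's). Vacuous quantification is an operator with nothing to bind:

$\forall x\,[\,\mathrm{left}(\mathrm{Mary})\,]$ -- the quantifier binds no occurrence of $x$

or, in natural language, something like "*Who did John see Mary?", where the wh-operator has no gap/variable to bind. Full Interpretation is standardly taken to rule this sort of thing out.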

# Aren't there cases where we find an operator after a variable in the speech stream? Sure, the LCA puts the quantifier first in the English speech stream under subject quantification, but not under object quantification. So when the LCA linearizes something that gives us "I think I hate every chef", the quantifier's scope, which (along with the restrictor) is one of its two variables, comes before the quantifier in the speech stream. As I understood things, the parser said in the article to be having difficulties was one parsing the speech stream, the PF, not the LF, which could look radically different and in which the quantifier would have been covertly raised. Even though covert operations violate the Uniformity Condition...
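
To make the worry concrete, here is a rough sketch of the two representations, on the standard Quantifier Raising analysis (my rendering, not necessarily the article's):

PF (speech stream): I think I hate every chef -- the scope material precedes the quantifier.
LF (after covert QR): $[\,\text{every chef}\,]_x\ [\,\text{I think I hate } x\,]$, roughly $\forall x\,[\,\mathrm{chef}(x) \rightarrow \mathrm{think}(\mathrm{I}, \mathrm{hate}(\mathrm{I}, x))\,]$ -- the operator precedes its variable.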

#What is meant by a parser -- did Baader and Frazier do a mock-up using a computer? Even if that's the case, because we admittedly (in this paper) know so little about brains, that mock-up may be a very poor analogy. More to the point, if that's not the case, and we're talking about a psycholinguistics experiment, isn't this an example of the liberal borrowing of Computer Science buzzwords (parser, memory, computation) whose correctness we're trying to examine and adjudicate?

#Why can't we define an element like 'last unit in a derivation'? In LaTeX, for example, if a reference comes before the point where the antecedent label was created, we simply compile twice in order to sort out the correct sequential numbering scheme. Who's to say the mysterious brain doesn't do the same, engaging in one derivational cycle to get its bearings on length, then recompiling to set the proper relations? But, oops, did I just use the sort of computational analogies to the mind that I've been railing against? If so, I mean to say the two are simply related without supposing identical functionality--the brain might not 'recompile', but it can do whatever it does in language processing twice, as nebulous as that may seem.
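
For concreteness, a minimal LaTeX illustration of the two-pass point (the section title and label name here are made up):

\documentclass{article}
\begin{document}
As argued in section \ref{sec:probes}, ... % the reference precedes its label
\section{Probes and Goals}\label{sec:probes}
\end{document}

On the first compile the reference prints as "??" and the label is merely recorded in the .aux file; on the second compile that .aux file is read back in and \ref resolves to the right number.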

#In discussing Probe/Goal relations, what happens with verb-final languages (final in the speech stream, even under an LCA linearizer)? Surely the V-head whose phi-features need amending is encountered after the Noun Phrase Goal? This has to have been discussed somewhere by now, and if not, then I'm sad for linguistics.
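
Schematically, the worry is just this (my notation, not the article's): in a head-final clause the speech stream runs

$\ldots\ \mathrm{NP}_{\mathrm{Goal}[\varphi]}\ \ldots\ \mathrm{V/T}_{\mathrm{Probe}[u\varphi]}$

i.e. the probe carrying unvalued phi-features is pronounced only after the goal that is supposed to value them.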

#Who really knows what proto-languages looked like? If we think it's something like "Me Tarzan, you Jane", aren't we watching too many movies? If we're thinking along those lines for some other reason, is it because we're calling to mind infant speech? If the latter, do we explain the typology by invoking processing constraints on a young brain, and do we then hypothesize those same constraints for early humans? My beef: short of reanimating a caveman, we'll never know, as far as I can tell. Besides, the assumption of limited processing mechanisms is far from certain: just because it took us a long while to get to the industrial revolution doesn't mean early people were simply slower. It was more likely a consequence of living in hunter-gatherer societies that lacked the food base to support a diversified class of thinkers and scientists. Look at the native peoples of Australia or the Americas. They aren't stupid and their brains aren't different; the geography and climate were simply stacked against them. Why should linguistic beings ages ago have been otherwise? That is, if their brains were different, according to some metric of biological anthropology, then how do we know they had language at all, 'simpler' or not?

#Maybe I'm missing something, but isn't an MLCA language still context-sensitive? The MLCA parser still engages in derivations just like the LCA parser does; the only divergence between the two is precedence order in linearization, where in one case we'd 'speak backwards', as Chris put it in an earlier class. If all this is correct, then I'm missing the strict and exclusive association between 'operational memory' and the LCA. Unless, of course, we mean that because an MLCA linearizer puts operators last and variables first (though maybe not in verb-final languages), derivations like Agree are prevented from taking place. But if vacuous operators are okay, if verb-final languages keep us from saying cleanly how Agree is enacted, and if quantifiers can still come after variables in an LCA-linearized speech stream (i.e. object quantification), then this exclusive derivational-LCA relationship is tenuous: the LCA stream does nothing uniquely special.
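
To illustrate what I mean by the divergence being only precedence order, a toy example of mine, assuming the MLCA is just the mirror-image linearization of the same hierarchy ('speaking backwards'):

LCA linearization of $[\,\mathrm{I}\ [\,\mathrm{hate}\ [\,\mathrm{every}\ \mathrm{chef}\,]\,]\,]$: "I hate every chef"
MLCA linearization of the same structure: "chef every hate I"

Same derivation, same hierarchy, reversed precedence; nothing about context-sensitivity seems to change.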

#Unifying these last two points: might the MLCA have allowed for context-sensitive processing, given that the LCA stream only inconclusively supports a parsing-ease story about operator-variable arrangement? And might there have been no proto-language that was relatively 'simple'? If so, have we lost a motivation for keeping the LCA over the MLCA?