Verbal language isn't strictly necessary, as it's often possible to express simple concepts with nonverbal cues, so gesturing might be considered a less precise subset of sign language. It probably even existed before vocal language in H. sapiens sapiens, and exists in other hominids.
Nonverbal cues go all the way down to canines, etc.
C. familiaris probably co-evolved more than C. lupus. Primitive stuff.
Give it another 15ky, and "dogs" will be having conversations with "humans". I also think one or more bird species will evolve human conversation ability.
There's something funny in linguists seeing complex grammar behind language instead of custom.
I would be interested in a study that compares breaches of adjective word order in English vs. noun-verb number disagreement...
Paywall.
In any case, the short answer is "No!" There is a LOT written about language, and I find it difficult to believe that almost ANY idea presented here is really new.
For example, have these guys run their ideas past Schank's "conceptual dependency" theory?
The article presents the fact that we appear to treat non-constituents (e.g. “in the middle of the”) as “units” to mean that language is more like “snapping Legos together” than “building trees.”
But linguists have proposed the possibility that we store “fragments” to facilitate reuse—essentially trees with holes, or equivalently, functions that take in tree arguments and produce tree results. “In the middle of the” could take in a noun-shaped tree as an argument and produce a prepositional phrase-shaped tree as a result, for instance. Furthermore, this accounts for the way we store idioms that are not just contiguous “Lego block” sequences of words (like “a ____ and a half” or “the more ___, the more ____”). See e.g. work on “fragment grammars.”
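To make the “tree with a hole” picture concrete, here’s a toy sketch in Python (my own illustration of that reading, not the paper’s formalism or any actual fragment-grammar implementation):

    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class Tree:
        label: str                        # e.g. "PP", "NP", "N"
        children: List[Union["Tree", str]]

    # The stored fragment "in the middle of the ___": a function that takes a
    # noun-shaped tree and returns a prepositional-phrase-shaped tree.
    def in_the_middle_of_the(noun: Tree) -> Tree:
        return Tree("PP", [
            "in",
            Tree("NP", ["the", "middle",
                        Tree("PP", ["of", Tree("NP", ["the", noun])])]),
        ])

    # Reuse: snap a noun into the hole to get "in the middle of the night".
    print(in_the_middle_of_the(Tree("N", ["night"])))

The point being that a reusable non-constituent chunk and a hierarchical analysis aren’t mutually exclusive.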
Can’t access the actual Nature Human Behavior article so perhaps it discusses the connections.
There's no reason to assume that a human word begins and ends with a space. Compound words exist. The existence of both "aforementioned" and "previously spoken of" isn't based on some deep neurological construct of compound words.
Sorry, I'm not following. What do spaces have to do with this? Grammar is dependent on concepts like lexemes (sort of like words), but there aren't any spaces between lexemes in spoken language.
Probably slight confusion over the description - my first thought with the "in the middle of" example was that English has compound nouns, so the existence of spaces doesn't necessarily work as a delimiter.
What it seems to be getting at instead is that language works more like Mad Libs than previously thought, just on a smaller scale. Which to me isn't that surprising - it seems extremely close to "set phrases", and is explicitly how we learn language in a structured way when not immersed in it.
I also suspect most people don't even know about tree-style sentence mapping. I've mentioned it a handful of times at work when languages come up and even after describing it no one knew what I was talking about. I only remember it being covered in one class in middle school.
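For anyone who hasn't seen either picture, here's a toy sketch of the two in Python (my own illustration, not anything from the paper):

    # Tree-style sentence mapping: "the dog chased the cat" as nested constituents.
    tree = ("S",
            ("NP", ("Det", "the"), ("N", "dog")),
            ("VP", ("V", "chased"),
                   ("NP", ("Det", "the"), ("N", "cat"))))

    # Mad-Libs-style: a stored multiword frame with slots, filled left to right.
    frame = ["the", None, "chased", "the", None]    # None marks a slot

    def fill(frame, words):
        words = iter(words)
        return [slot if slot is not None else next(words) for slot in frame]

    print(fill(frame, ["dog", "cat"]))   # ['the', 'dog', 'chased', 'the', 'cat']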
Unless you’re referring to the academic paper, I’m not getting a paywall.
I read the article (but not the paper), and it doesn’t sound like a no. But I also don’t find the claim that surprising, given that in other languages word order matters a lot less.
In languages where word order matters a lot less, the grammar is still there---it just relies more on things like case markers and agreement markers (i.e. morphology).
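A toy sketch of what that looks like, using a simplified Latin-ish mini-lexicon (the real morphology is of course messier):

    # Roles are read off the case endings, not the positions, so both word
    # orders come out meaning "the dog bites the man".
    LEXICON = {
        "canis":   ("dog", "NOM"),    # nominative marks the subject
        "hominem": ("man", "ACC"),    # accusative marks the object
        "mordet":  ("bite", "VERB"),
    }

    def parse(sentence):
        roles = {}
        for word in sentence.split():
            lemma, tag = LEXICON[word]
            role = {"NOM": "subject", "ACC": "object", "VERB": "verb"}[tag]
            roles[role] = lemma
        return roles

    print(parse("canis hominem mordet"))   # {'subject': 'dog', 'object': 'man', 'verb': 'bite'}
    print(parse("hominem canis mordet"))   # same roles, different order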
The paper is basically saying “we have evidence that language comprehension can operate independently of structural hierarchy” [1] (or at least that’s my read of it).
However, I imagine linguists have a more precise definition than most of us, so instead of speculating, I’ve decided to read the paper.
Something they explain early on is the concept of multiword chunks (an example of this is an idiom), which tend to communicate meaning without any meaningful grammatical structure, and they say this:
> “… multiword chunks challenge the traditional separation between lexicon and grammar associated with generativist accounts … However, some types of multiword chunks may likewise challenge the constructionist account.”
I’m an amateur language nerd with a piecemeal understanding of linguistics, but I’m no linguist, so I don’t know what half of this means. It really sounds like they have a very specific definition here, one that neither of us is talking about and that possibly hasn’t been well communicated in the article.
That said, I’m out of my depth here, and I have a feeling most people replying to this article probably are too if they’re going off the title and the article that linked to the paper. But I would be interested to hear the opinion of a linguist or someone more familiar with this field and its experimental methods.
----------
[1] With the hypothesis testing typically done in science, you can’t really accept an alternative hypothesis, only reject a null one given your evidence, so you get wording like “may” or “might” or “evidence supporting x, y, z”, and noncommittal titles like this one. In the social sciences, and non-natural sciences generally, I feel this is even more the case given the difficulty of designing definitive experiments without crossing some ethical boundary. In natural science you can put two elements together, control the variables, and see what happens; in social science that’s really hard.
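A minimal illustration of that point with made-up data (assuming scipy; nothing to do with the paper):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.normal(loc=0.0, scale=1.0, size=50)
    group_b = rng.normal(loc=0.3, scale=1.0, size=50)

    t_stat, p_value = stats.ttest_ind(group_a, group_b)

    if p_value < 0.05:
        print(f"p = {p_value:.3f}: reject the null (the groups likely differ)")
    else:
        # Failing to reject is NOT evidence that the groups are the same --
        # hence all the "may"/"might"/"evidence supporting" phrasing.
        print(f"p = {p_value:.3f}: fail to reject the null")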
>multiword chunks challenge the traditional separation between lexicon and grammar associated with generativist accounts
This is just silly (the paper, not your comment). Do these folks really think they're the first people to think of associating meanings with multi-word units? Every conceivable idea about what the primes of linguistic meaning might be has been explored in the existing literature. You might be able to find new evidence supporting one of these ideas over another, but you are not going to come up with a fundamentally new idea in that domain.
As another commenter has pointed out, many of the sequences of words they identify correspond rather obviously to chunks of structure with gaps in certain argument positions. No-one would be surprised to find that 'trees with gaps to be filled in' are the sort of thing that might be involved in online processing of language.
On top of that, the authors seem to think that any evidence for the importance of linear sequencing is somehow evidence against the existence of hierarchical structure. But rather obviously, sentences have both a linear sequence of words and a hierarchical structure. No-one has ever suggested that only the latter is relevant to how a sentence is processed. Any linguist could give you examples of grammatical processes governed primarily by linear sequence rather than structure (e.g. various forms of contraction and cliticization).
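A toy sketch of that last point (my own simplified example, not the authors'; the real contraction rule has more conditions): the same sentence has both a flat word sequence and a tree, and a cliticization-style rule can be stated purely over the former.

    tokens = ["the", "dog", "that", "I", "saw", "is", "barking"]   # linear order

    tree = ("S",                                                   # hierarchy
            ("NP", "the", "dog", ("RC", "that", "I", "saw")),
            ("VP", "is", "barking"))

    def contract_is(tokens):
        """Cliticize 'is' onto whatever word linearly precedes it."""
        out = []
        for tok in tokens:
            if tok == "is" and out:
                out[-1] += "'s"     # attaches to its linear neighbour ("saw"),
            else:                   # even though that word sits inside the
                out.append(tok)     # relative clause in the tree
        return out

    print(" ".join(contract_is(tokens)))   # "the dog that I saw's barking"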
Not sure why you bring up Schank's conceptual dependency theory. That was back in the late 60s, and I don't think anybody has worked in that theory for many decades.
> In any case, the short answer is "No!".
If the question you're answering is the one posed by the Scitechdaily headline, "Have We Been Wrong About Language for 70 Years?", you might want to work a bit on resistance to clickbait headlines.
The strongest claim that the paper in question makes, at least in the abstract (since the Nature article is paywalled), is "This poses a challenge for accounts of linguistic representation, including generative and constructionist approaches." That's certainly plausible.
Conceptual dependency focuses more on semantics than grammar, so isn't really a competing theory to this one. Both theories do challenge how language is represented, but in different ways that don't really overlap that much.
It's also not as if conceptual dependency is some sort of last word on the subject when it comes to natural language in humans - after all, it was developed for computational language representation, and in that respect LLMs have made it essentially obsolete for that purpose.
Meanwhile, the way LLMs do what they do isn't well understood, so we're back to needing work like the OP to try to understand it better, in both humans and machines.
What did I previously think?