As a reader of the article kindly submitted here who speaks and reads multiple human languages from more than one language family, I'm not convinced that this article passes the linguist's or polyglot's smell test. The sample size reported in the article is wholly inadequate for showing that there is a real effect here. That the underlying study was reported in a rather minor journal is a further indication that there isn't any real-world significance behind the statistical significance claimed by the researchers. For good background reading on why you shouldn't look merely to statistical significance claims when evaluating research on human subjects like this, see the articles by Uri Simonsohn[1] and his statistically astute co-authors. They have identified phenomena called "researcher degrees of freedom" (there are a lot of researcher degrees of freedom here) and "p-hacking" (a news release that mentions "more accurately than by random chance" is almost always a sign of p-hacking), which allow many weak, underpowered studies of human behavior to be published in journals whose editorial staffs lack strong statistical skills.
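To make "researcher degrees of freedom" concrete, here is a minimal simulation, a sketch assuming Python with numpy and scipy (all numbers are illustrative, not taken from the study): when a researcher can choose among several outcome measures and report whichever gives the best p-value, the false-positive rate climbs well above the nominal 5% even when no effect exists at all.

```python
# Sketch: simulate null data (no real effect), test several arbitrary
# outcome measures, and report only the best p-value. All numbers are
# illustrative assumptions, not taken from the study under discussion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000  # simulated studies
n_subjects = 20         # small sample, as criticized above
n_outcomes = 5          # analysis choices available to the researcher

false_positives = 0
for _ in range(n_experiments):
    # Two groups drawn from the SAME distribution: any "effect" is noise.
    best_p = min(
        stats.ttest_ind(rng.normal(size=n_subjects),
                        rng.normal(size=n_subjects)).pvalue
        for _ in range(n_outcomes)
    )
    if best_p < 0.05:
        false_positives += 1

# With 5 independent outcomes to choose from, the false-positive rate
# comes out near 1 - 0.95**5 ≈ 23%, not the nominal 5%.
print(false_positives / n_experiments)
```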
> That the underlying study was reported in a rather minor journal
Cognition is one of the top journals in cognitive science. Definitely not a minor journal for this kind of work, and plenty of their reviewers are statistically competent. So my prior would not be that the stats here are bogus.
I share your skepticism of their result about more synesthetic people being more able to guess the meanings of foreign words. The effect size there looks tiny, the motivation is a bunch of hand-waving, and I wouldn't be surprised if it doesn't replicate.
On the other hand, I'm inclined to believe their result that people can guess the meanings of certain foreign words with above-chance accuracy (about 60% for big vs. small). It's just a version of the well-replicated bouba/kiki effect[1], plus the idea that this can influence words in a language, which doesn't seem far-fetched at all.
What I'm most concerned about is that people might be picking up on similarities between words beyond what the authors identify as "cognates". The sample languages are Albanian, Dutch, Gujarati, Indonesian, Korean, Mandarin, Romanian, Tamil, Turkish, and Yoruba. Four of those are Indo-European and might have obscure etymological relations that show up as a few overlapping letters.
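As a footnote on the 60% figure: whether 60% correct is distinguishable from the 50% chance level depends entirely on how many trials were run. A quick check, as a sketch (the trial counts below are hypothetical, not taken from the paper):

```python
# Whether 60% correct beats the 50% chance level depends on trial count.
# The trial counts here are hypothetical, not taken from the study.
from scipy.stats import binomtest

for n in (20, 100, 1000):
    k = round(0.6 * n)  # 60% correct answers out of n trials
    result = binomtest(k, n, p=0.5, alternative="greater")
    print(f"n={n:4d}  k={k:3d}  p={result.pvalue:.4f}")
```

At 20 trials, 60% correct is indistinguishable from guessing; at 1000 it is overwhelming. The headline accuracy means little without the trial count.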
The article makes some claims about childhood language learning:
> But some researchers argue that synesthesia, which appears in 4% of the general population, is actually an exaggerated manifestation of associations we all make from an early age—an ability most of us lose over time, and one that may help explain why children are so good at picking up other languages.
I clicked through to the linked paper, and I could find no evidence that synesthesia provided any specific, significant advantages during childhood language acquisition.
Now, other research supports that children do have real advantages when it comes to learning accents, and that they may have lesser advantages when learning grammar. But beyond that, the evidence becomes controversial. And adults who are socially and professionally immersed for, say, 5 years will often acquire a very solid command of a new language, especially if they're also voracious readers.
In general, English-language media often overrates the ease with which children learn languages[1], and it's insufficiently critical about scientific theories which claim to explain this advantage.
[1] Trying to raise bilingual children, for example, can be surprisingly challenging unless you live in a bilingual society.
> no evidence that synesthesia provided any specific, significant advantages during childhood language acquisition
because that was not claimed. The underlying capability of forming connections, i.e. an exaggerated manifestation of associations and an ability most of us lose over time, is not itself synesthesia.
Synesthesia research is just so not there. My brother was taking a grad neuroscience class last year. The students got to pick two extra topics for the instructors to cover, and one was synesthesia. He said they could find about 20 studies worth presenting in terms of the neuroscience taught in the class. The paper they selected was an fMRI study with n=5 subjects, and the controls were dubious at best. It's just such a small field, and synesthesia is only self-reported. No one has any idea what it means or how to measure it to determine what it really is. Until the field can measure something, it's going to languish on the border with pseudoneuroscience.
> Now, other research supports that children do have real advantages when it comes to learning accents
Childhood seems to be the only time when people can acquire certain new phonemes, in fact. Japanese speakers cannot properly learn to distinguish English /l/ and /r/ in later life. Other phonological distinctions are also difficult to learn in later life: tones, for instance.
> Japanese speakers cannot properly learn to distinguish English /l/ and /r/ in later life
Not sure I agree with the absoluteness of "cannot" here, as I've successfully taught a few Japanese speakers (and a Korean!) to properly enunciate the /l/-based sounds of English. It's simply training your brain to have two new pieces of muscle memory for tongue movement. Learning the Japanese /r/ sound is the same process, although we have it a bit easier than they do. :)
The fact that they try to approximate the English /l/ sounds using their roof-of-mouth, tongue-flicking /r/ sound is a failure of how English is taught in their schools. Every Japanese person I've spoken to said their English courses in school had no focus on pronunciation, only on reading and writing, and none of their teachers were native English speakers.
As far as tones go, I've been living in Taiwan for about six months and I'm no closer to being able to hear them in everyday speech. I can pronounce them decently though, according to a friend here.
> taught a few Japanese speakers (and a Korean!) to properly enunciate the /l/-based sounds of English
You can teach people the physical motions of producing the sound, but that's not the same as acquiring the phoneme. They will have to consciously produce the /l/ and /r/ sounds and will still not be able to distinguish them when listening.
Again, the "cannot" here is probably imprecise. In Japanese, r and l exist as a single sound, making the differentiation difficult for the average Japanese speaker in other languages. The original research by a Japanese linguist into this phenomenon in the 70s suggested it was impossible because of the way phoneme and allophone acquisition happens in childhood. But this claim has been qualified by subsequent research suggesting the difficulty is context dependent, as well as by less certainty about phoneme vs. allophone distinctions.
Interesting. I'm not a linguist, but when my brain switches into Japanese mode, I can pronounce their R sound in speech without consciously thinking about it. Same thing happens with the /r/ sound in Mandarin (as in "rén"). Hearing them doesn't require any additional effort, though my brain does recognize that they are different from the English equivalents.
Anecdotally, I find this to be untrue. My wife, a native Japanese speaker who did not learn English until age 16, is able to distinguish between the two. I learned Japanese in my twenties and am able to pronounce the r/l phoneme. The only oddity I've noticed is that on rare occasions I've unintentionally made the r/l slip in English. Do you have a source for this?
The main problem with the nativist hypothesis is that it violates Occam's razor. Is there a simpler explanation than being hardwired with information, which would be a fairly complicated thing to explain given what we know of the brain today? Yes, there is: languages are related, and sound symbolism could be passed from language to language; or there's the perfectly reasonable explanation offered by Christine Cuskley at the end of the article. In the world we live in, some sets of sounds are associated with small animals, others with big animals.
Nativist hypotheses suffer from a lack of imagination in looking for alternative explanations. I cannot explain it? Must be innate! Or: children do it? Must be innate!
Cognitive research points to a much more promising explanation: we are really good at learning, and we can learn abstract concepts pretty quickly, even as very young children. See Stanislas Dehaene's work for instance.
Abstracting the concepts of big and small from our environment and associating sounds with them is much more plausible than passing information about abstract concepts through genes via natural selection on random genetic mutations.
That’s the danger of Occam’s razor and of simple answers. “Simplicity” is often a hiding place for our biases. We reach the conclusion we want to reach and then call it simple. --Plaza Garabaldi
I'm not saying you're wrong, I'm saying that I don't actually agree with you that brains being hardwired with information is that hard to explain.
You're right. Let's say it's a failure to consider the full universe of possible explanations before selecting the simplest, by restricting yourself to the one you already like.
An architecture for encoding sentence meaning in left mid-superior temporal cortex

Human brains flexibly combine the meanings of words to compose structured thoughts. For example, by combining the meanings of "bite," "dog," and "man," we can think about a dog biting a man, or a man biting a dog. Here, in two functional magnetic resonance imaging (fMRI) experiments using multivoxel pattern analysis (MVPA), we identify a region of left mid-superior temporal cortex (lmSTC) that flexibly encodes "who did what to whom" in visually presented sentences. We find that lmSTC represents the current values of abstract semantic variables ("Who did it?" and "To whom was it done?") in distinct subregions. Experiment 1 first identifies a broad region of lmSTC whose activity patterns (i) facilitate decoding of structure-dependent sentence meaning ("Who did what to whom?") and (ii) predict affect-related amygdala responses that depend on this information (e.g., "the baby kicked the grandfather" vs. "the grandfather kicked the baby"). Experiment 2 then identifies distinct, but neighboring, subregions of lmSTC whose activity patterns carry information about the identity of the current "agent" ("Who did it?") and the current "patient" ("To whom was it done?"). These neighboring subregions lie along the upper bank of the superior temporal sulcus and the lateral bank of the superior temporal gyrus, respectively. At a high level, these regions may function like topographically defined data registers, encoding the fluctuating values of abstract semantic variables. This functional architecture, which in key respects resembles that of a classical computer, may play a critical role in enabling humans to flexibly generate complex thoughts.
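For readers unfamiliar with MVPA: the core move is to train a classifier on the spatial pattern of voxel activity and call a region informative if cross-validated decoding beats chance. Here's a minimal sketch of that general idea on simulated data, assuming scikit-learn (this is not the authors' pipeline):

```python
# Sketch of the generic MVPA idea: decode a binary sentence property
# (e.g. which noun is the agent) from voxel patterns with a
# cross-validated linear classifier. All data here is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
labels = rng.integers(0, 2, size=n_trials)  # agent identity per trial

# A weak label-dependent spatial pattern buried in much larger noise.
pattern = rng.normal(size=n_voxels)
voxels = np.outer(labels - 0.5, pattern) + rng.normal(size=(n_trials, n_voxels)) * 2

# Cross-validated accuracy above 50% = the "region" carries information.
scores = cross_val_score(LogisticRegression(max_iter=1000), voxels, labels, cv=5)
print(scores.mean())
```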
I would have thought all living creatures with ears would assign the same meaning to loud/quiet or long/short sounds (i.e. loud sounds are more important, as they signify a big predator or a large object running/falling nearby; quiet sounds just confirm that two twigs collided, etc.). With this common basis for sound, I don't think it is too surprising that our use of vocalisation relates in some way, at the very basic level, across languages.
Half the languages in the list are branches off the Indo-European tree. I have a hard time believing this is a result of "language hardware" in the brain as opposed to a much simpler explanation like the shared heritage of the words.
They don't seem to have controlled for native language; e.g., Dutch has a lot of similarities with English...
The reason I thought ca-chook meant small is that it's fairly difficult to say. When saying small vs. big, there are a few incentives (physical, contextual, etc.) for one to be easier to say than the other.
e.g. We have a big problem. We have a small problem.
I thought küçük ("coo-chook") sounded small because it reminded me of baby cooing.
I'm not really surprised by this result. Some words are imitative.
Words that are more imitative are probably evolutionarily selected to remain in human languages because their speakers find them easier to recall. To me that seems like a perfectly good explanation for the phenomenon, one that has little to do with "hardwiring into our brains".
What does that have to do with anything? Statistical significance depends on effect size and sample size together: even a small sample with a large effect can be statistically significant, while a very small effect at a large sample size might not be.
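A quick simulation of that point, as a sketch assuming Python with numpy and scipy (the effect sizes and sample sizes are made up):

```python
# Sketch: significance depends on effect size and sample size together.
# Effect sizes (in standard deviations) and sample sizes are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(effect, n, trials=2000, alpha=0.05):
    """Fraction of simulated two-sample t-tests reaching p < alpha."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, size=n)
        b = rng.normal(effect, 1.0, size=n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / trials

print(power(effect=1.5, n=15))   # large effect, small sample: almost always significant
print(power(effect=0.1, n=100))  # tiny effect, larger sample: usually not
```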
Could you point to a good article or tutorial on statistical significance, sample size, confidence intervals etc? I'd like to know how to analyze experimental results myself.
But in this case, to be able to generalize you would need speakers of many languages; otherwise it quickly becomes "people who know English can guess the meaning of adjectives".
Moreover, I fear the same would still be true, because the speakers would presumably already need to know English to participate in the study.