Something we've known for a while is that in normal, quick speech, words affect each other but are still understandable. So if someone asks you to pass her "hambag", you'll look for a handbag (a purse), not some sort of meat container. However, this only works with certain speech sounds. If someone talked about their "phone cushion", you would probably think they had some sort of pillow for their mobile, not that they meant a "foam cushion". The effect is directional: /n/ can be mispronounced as /m/ and still be understood, but /m/ cannot be mispronounced as /n/. The same holds for other sound pairs, e.g. /t/ mispronounced as /p/ and /d/ as /b/.
There's some debate as to whether this happens because our brains are more tolerant of certain mispronunciations, or because of the way words interact within a sentence. We therefore tested it using changes in the middle of a word, which cannot be influenced by the surrounding words.
We did this by presenting real and mispronounced words (like image or *inage) through headphones while participants responded to related words (e.g. PICTURE), unrelated words (e.g. HAMMER), and nonwords (e.g. ZOOBLE) that they saw on a TV screen. We measured reaction times, error rates, and event-related brain potentials (ERPs).
We found exactly the same effect as has previously been shown at the ends of words. So if this Christmas someone says "this sauce is runny", you probably won't think "of course it's rummy, that's why they call it rum sauce". But if they say "check the dimmer", you'll probably start thinking about your dinner, even though they mean the adjustable lighting control. It appears, then, that our brains are more tolerant of certain mispronunciations, no matter the context.
Aligning mispronounced words to meaning: Evidence from ERP and reaction time studies - Adam Charles Roberts, Allison Wetterlin, and Aditi Lahiri