Visual Word Recognition

Kathleen Rastle, in Neurobiology of Language, 2016

21.1 The Architecture of Visual Word Recognition

Although the earliest theories of visual word recognition claimed that words were recognized as wholes on the basis of their shapes (Cattell, 1886), there is a strong consensus among modern theories that words are recognized in a hierarchical manner on the basis of their constituents, as in the interactive-activation model (McClelland & Rumelhart, 1981; Rumelhart & McClelland, 1982) shown in Figure 21.1 and its subsequent variants (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; Grainger & Jacobs, 1996; Perry, Ziegler, & Zorzi, 2007).

Figure 21.1. The interactive-activation model of visual word recognition (McClelland & Rumelhart, 1981; Rumelhart & McClelland, 1982).

Information from the printed stimulus maps onto stored representations about the visual features that make up letters (e.g., horizontal bar), and information from this level of representation then maps onto stored representations of letters. Some theories assert that letter information goes on to activate higher-level sub-word representations at increasing levels of abstraction, including orthographic rimes (e.g., the -and in “band”; Taft, 1992), morphemes (Rastle, Davis, & New, 2004), and syllables (Carreiras & Perea, 2002), before activating stored representations of the spellings of known whole words in an orthographic lexicon. Representations in the orthographic lexicon can then activate information about their respective sounds and/or meanings. The major theories of visual word recognition posit that word recognition is achieved when a unique representation in the orthographic lexicon reaches a critical level of activation (Coltheart et al., 2001; Grainger & Jacobs, 1996; Perry et al., 2007).
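
To make this hierarchical, interactive flow of activation concrete, here is a minimal sketch of an interactive-activation-style update cycle. It is illustrative only, not the McClelland and Rumelhart implementation: the three-word lexicon, the position-specific letter coding, and all weights and thresholds are assumptions chosen for readability.

```python
# Minimal interactive-activation-style sketch (illustrative only; not the
# original McClelland & Rumelhart implementation). Position-specific letter
# units excite words containing that letter in that position and inhibit
# mismatching words; word units compete with one another and feed activation
# back to their own letters.

LEXICON = ["band", "bend", "land"]            # toy orthographic lexicon

def recognize(stimulus, cycles=30, threshold=0.7):
    letters = {}                              # (position, letter) -> activation
    words = {w: 0.0 for w in LEXICON}         # word-unit activations

    for cycle in range(1, cycles + 1):
        # 1. Bottom-up input: the stimulus drives position-specific letter units.
        for pos, ch in enumerate(stimulus):
            letters[(pos, ch)] = min(1.0, letters.get((pos, ch), 0.0) + 0.2)

        # 2. Letter-to-word excitation/inhibition plus within-level competition.
        for w in LEXICON:
            support = sum(letters.get((pos, ch), 0.0) for pos, ch in enumerate(w))
            mismatch = sum(act for (pos, ch), act in letters.items()
                           if pos < len(w) and w[pos] != ch)
            competition = sum(act for other, act in words.items() if other != w)
            net = 0.1 * support - 0.1 * mismatch - 0.05 * competition
            words[w] = min(1.0, max(0.0, words[w] + net))

        # 3. Top-down feedback: active word units support their own letters.
        for w, act in words.items():
            for pos, ch in enumerate(w):
                letters[(pos, ch)] = min(1.0, letters.get((pos, ch), 0.0) + 0.05 * act)

        # Recognition: a unique word unit has crossed the critical activation level.
        leaders = [w for w, act in words.items() if act >= threshold]
        if len(leaders) == 1:
            return leaders[0], cycle
    return None, cycles

print(recognize("band"))   # "band" crosses threshold first, e.g. ('band', 4)
```

The sketch keeps the three ingredients the text describes: bottom-up support from constituents, within-level competition, and top-down feedback, with recognition defined as a unique lexical unit reaching a criterion.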

In recent years, a different class of theory based on distributed-connectionist principles has made a substantial impact on our understanding of processes involved in mapping orthography to phonology (Plaut, McClelland, Seidenberg, & Patterson, 1996) and mapping orthography to meaning (Harm & Seidenberg, 2004). This chapter highlights some of the most important insights that these models have offered to our understanding of reading. However, although these models have been very effective in helping us to understand the acquisition of quasi-regular mappings (as in spelling-to-sound relationships in English), they have been less successful in describing performance in the most frequently used visual word recognition tasks. They offer no coherent account of the most elementary of these tasks—deciding whether a letter string is a known word (i.e., visual lexical decision). Therefore, this chapter assumes a theoretical perspective based on the interactive-activation model and its subsequent variants but directs the reader to further discussion of this issue in relation to distributed-connectionist models (Coltheart, 2004; Rastle & Coltheart, 2006).

URL: //www.sciencedirect.com/science/article/pii/B9780124077942000213

Word Recognition

J. Zevin, in Encyclopedia of Neuroscience, 2009

Mapping from Spelling to Sound in Visual Word Recognition

Visual word recognition depends in large part on being able to determine the pronunciation of a word from its written form. One factor that influences how easily this can be done is the regularity of the mapping from spelling to sound. Consider a word such as DOLL. Arriving at the correct pronunciation benefits from experience with words such as DOT and GOLF, in which the O is pronounced in the same way. On the other hand, DOLL is very similar to words such as ROLL, TOLL, and KNOLL, in which the letter O is assigned a different pronunciation. The fact that similar written forms map onto disparate phonological forms makes mapping difficult, and in fact words that contain such inconsistent mappings between spelling and sound are more difficult to read than words that contain entirely consistent mappings.
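
One way to see this inconsistency is as a one-to-many mapping from a word body to pronunciations. The sketch below groups a small, assumed toy lexicon by body; the word list and the pseudo-phonetic codes are illustrative, not data from this chapter.

```python
# Toy illustration of spelling-to-sound (in)consistency: a word "body" (the
# vowel plus following consonants) is inconsistent when words sharing it map
# onto different pronunciations. Lexicon and phonetic codes are invented.
from collections import defaultdict

# word -> (body, pronunciation of the body)
LEXICON = {
    "doll":  ("oll", "/ɒl/"),
    "golf":  ("olf", "/ɒlf/"),
    "roll":  ("oll", "/oʊl/"),
    "toll":  ("oll", "/oʊl/"),
    "knoll": ("oll", "/oʊl/"),
}

by_body = defaultdict(set)
for word, (body, pron) in LEXICON.items():
    by_body[body].add(pron)

for body, prons in sorted(by_body.items()):
    status = "consistent" if len(prons) == 1 else "inconsistent"
    print(f"-{body}: {sorted(prons)} -> {status}")
# The body -oll maps onto two pronunciations, so DOLL has an inconsistent body.
```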

Interestingly, regularity in spelling-to-sound mappings varies greatly among languages. Some, such as Korean and Serbo-Croatian, employ perfectly regular mappings from spelling to sound, such that each sound in the language is represented by a single character. Chinese characters, at the opposite extreme, contain only highly probabilistic information about pronunciation. English – the language in which by far the most research has been conducted – represents something of an intermediate case. This has consequences for how visual word recognition is accomplished in these languages and even for how reading disorders manifest. In English, it is common for dyslexic children to have trouble with ‘decoding’ (i.e., being able to read novel pseudo-words), whereas in Italian (a highly regular writing system) the main deficit in dyslexia is slow reading speed.

URL: //www.sciencedirect.com/science/article/pii/B9780080450469018817

Language and Lexical Processing☆

Randi C. Martin, ... Hoang Vu, in Reference Module in Neuroscience and Biobehavioral Psychology, 2017

Word Recognition

In visual word recognition, a whole word may be viewed at once (provided that it is short enough), and recognition is achieved when the characteristics of the stimulus match the orthography (i.e., spelling) of an entry in the mental lexicon. Speech perception, in contrast, unfolds over time as the listener perceives successive portions of the word. Upon hearing the first syllable of a spoken word such as the “un” in “understand,” several words may be consistent with the input (e.g., “under,” “until,” and “untie”). As further portions are perceived, the pool (or “cohort”) of candidate words is narrowed down until only one word remains.
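
This incremental narrowing can be illustrated with a simple prefix filter over a toy lexicon. Letters stand in for phonemes and the word list is invented; the sketch only shows the logic of cohort reduction and the uniqueness point.

```python
# Toy cohort-style narrowing: as successive portions of the spoken word
# arrive, only the lexical candidates consistent with the input so far
# survive. Letters stand in for phonemes purely for illustration.

LEXICON = ["under", "understand", "until", "untie", "uncle", "open"]

def cohort(input_so_far, lexicon=LEXICON):
    """Return the candidates still consistent with the perceived input."""
    return [w for w in lexicon if w.startswith(input_so_far)]

word = "understand"
for i in range(1, len(word) + 1):
    remaining = cohort(word[:i])
    print(f"{word[:i]!r:14} -> {remaining}")
    if len(remaining) == 1:      # uniqueness point: only one candidate left
        break
```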

Despite these differences in the temporal course of processing, there are many commonalities in spoken and written word recognition. In both cases, the goal is to go from the perceptual information to the lexical form in order to access semantic and syntactic information about the word. In visual word recognition, a letter level intervenes between visual processing and lexical access. In auditory word perception, it is often assumed that a phoneme level intervenes between the acoustic input and lexical access. Phonemes are assumed to be the basic sound units of speech perception (and production). In English there are approximately 40 different phonemes, corresponding to the consonant and vowel sounds. The phonemes of other languages overlap those of English to a large degree, although some languages may lack some of the phonemes in English or may contain phonemes that do not exist in English. For example, Japanese does not distinguish between the “l” and “r” phonemes, and some African languages include clicking sounds as phonemes.

There is general agreement that spoken and written word recognition involve access to the same semantic and syntactic representations. There has been some disagreement, though, about whether there are separate lexical representations for spoken and written words. Some researchers have argued that written words have to be transformed into a sound representation in order to access semantic and syntactic information about the word. If so, then only a phonological representation (e.g., one that indicates the sequence of constituent phonemes and the stress pattern) is needed for each word. However, considerable neuropsychological evidence suggests that there are separate phonological and orthographic representations for words, and that access to word meaning can proceed for written words without conversion to a phonological form. Recent neuroimaging evidence shows that during visual word recognition, certain brain regions are selectively activated in grapheme-to-phoneme conversion and others selectively activated in direct lexical access without such conversion. Nonetheless, it is the case that for healthy individuals the phonological representation of a written word appears to be computed automatically (through an implicit “sounding out” or “letter–sound” conversion process) when a written word is perceived. This derived phonological information can influence the time course of lexical access, making word recognition slower for words that have an unusual letter–sound correspondence, particularly if these words appear infrequently in print (e.g., “yacht”). Despite this slowing, the correct word is typically accessed, indicating that readers cannot be relying solely on letter–sound correspondences in accessing the meaning of written words.

URL: //www.sciencedirect.com/science/article/pii/B9780128093245030789

Selective Attention, Processing Load, and Semantics

Cyma Van Petten, in Cognitive Electrophysiology of Attention, 2014

Lexical Manipulations

Studies of visual word recognition show several ERP components that differentiate orthographic from nonorthographic stimuli and occur within 200 ms of stimulus onset, prior to the onset of the N400. These include a left-lateralized negativity peaking between 140 and 180 ms that is larger for letter strings than for many types of visual stimuli (variably called the visual N1, N170, N180); intracranially recorded ERPs suggest that this scalp potential is likely to receive some contribution from a posterior fusiform region considered to be the “visual word form area” (Appelbaum, Liotti, Perez, Fox, & Woldorff, 2009; Nobre, Allison, & McCarthy, 1994; Schendan, Ganis, & Kutas, 1998; see Barber & Kutas, 2007 for review). A negative peak at about 250 ms has proven sensitive to some varieties of orthographic priming and is also dissociable from the N400 (Grainger & Holcomb, 2009). Finally, a somewhat later negative peak varies in latency (from roughly 280–340 ms) with word length and the frequency of a word’s occurrence in natural language use (King & Kutas, 1998; Osterhout, Bersick, & McKinnon, 1997). These earlier components reflect the perceptual processes that transform visual input into more abstract orthographic representations and that are sensitive to the familiarity of orthographic patterns.

In contrast to the components described above, the N400 has been argued to index a more purely conceptual stage of analysis in which the retrieved meaning of an item is integrated with prior context (Hagoort, Baggio, & Willems, 2009). However, closer consideration of the data indicates that the N400 continues to be influenced by processes that precede the analysis of the conceptual/semantic content retrieved from long-term memory. Recall that although N400s elicited by visual, auditory, verbal, and nonverbal stimuli are similarly responsive to prior conceptual context, these potentials have subtly different scalp distributions in healthy adults, and can be differentially affected by developmental language disorders (Duncan et al., 2009; Plante et al., 2000; see Figure 19.2). Although this component of the ERP can be called multimodal, it is not amodal, but instead reflects the physical nature of the input (see Van Petten & Luka, 2006 for review). Moreover, numerous studies have shown orderly variation in the amplitude of the N400 elicited by various types of meaningless stimuli. Larger N400s are elicited by unpronounceable letter strings than by false-font stimuli that are similar in visual complexity to alphabetic stimuli (Appelbaum et al., 2009; Bentin, Mouchetant-Rostaing, Giard, Echallier, & Pernier, 1999). In turn, pronounceable pseudowords elicit larger N400s than strings of consonants or alphanumeric symbols (Bentin et al., 1999; Rugg & Nagy, 1987). Finally, both real words and pseudowords with more orthographic neighbors (real words that can be formed by changing one letter) elicit larger N400s than words and pseudowords with fewer neighbors (Holcomb, Grainger, & O’Rourke, 2002; Laszlo & Federmeier, 2011; Müller, Duñabeitia, & Carreiras, 2010). All three groups of authors attribute this latter effect to greater global activation in a lexico-semantic network when a letter string from a dense neighborhood is encountered, because of partial activation of numerous words that are near matches to the actual input. The orthographic neighborhood effect is consistent with the letter-string-vs.-false-font and pseudoword-vs.-consonant-string results in suggesting a general principle: as a visual stimulus becomes more wordlike—more similar to more items in one’s vocabulary and thus more likely to be potentially meaningful—it elicits a larger N400. One report shows that the influence of orthographic neighborhood size on N400 amplitude is like the word frequency effect—attenuated or eliminated when words are placed in supportive semantic context (Molinaro, Conrad, Barber, & Carreiras, 2010, but see also Laszlo & Federmeier, 2009).
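
Orthographic neighborhood size in this sense (often termed Coltheart's N) counts the real words produced by substituting exactly one letter of the stimulus while holding length constant. A minimal sketch, with an assumed miniature lexicon:

```python
# Count orthographic neighbors: real words obtained by changing exactly one
# letter of the stimulus (same length, other positions unchanged). The
# stimulus itself may be a word or a pseudoword. Lexicon is illustrative.
import string

LEXICON = {"band", "bend", "bond", "land", "hand", "sand", "bane", "bank"}

def orthographic_neighbors(stimulus, lexicon=LEXICON):
    neighbors = set()
    for pos in range(len(stimulus)):
        for letter in string.ascii_lowercase:
            if letter == stimulus[pos]:
                continue
            candidate = stimulus[:pos] + letter + stimulus[pos + 1:]
            if candidate in lexicon:
                neighbors.add(candidate)
    return neighbors

print(orthographic_neighbors("band"))       # {'bend', 'bond', 'land', ...}
print(len(orthographic_neighbors("band")))  # neighborhood density N
```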

Some investigators (see for instance, Lau, Phillips, & Poeppel, 2008) have argued that the neural processes reflected in the scalp-recorded N400 should be categorized according to a dichotomy proposed by psycholinguists some decades ago: either prelexical, referring to processes that yield identification of a word in order to access information stored with that letter-string (meaning, pronunciation, possible syntactic roles), or postlexical, referring to processes that act on the retrieved information (semantic and/or syntactic integration with prior context, inferences, predictions about upcoming words, etc.). The results briefly reviewed above do not comfortably fit within this dichotomy given that N400 amplitude is influenced both by the effort expended in assessing stimuli that ultimately prove to have no stored meaning (e.g., consonant strings) and by the nature of what is retrieved when a stimulus does prove to be meaningful (e.g., the concreteness effect). Interactions between factors typically assigned to one or the other side of this division, such as those between semantic context and orthographic neighborhood density or between semantic context and word frequency, are particularly problematic for the proposed dichotomy.

The long temporal duration of most N400 effects (several hundred milliseconds) and apparent generation within a large region of cerebral cortex (a substantial portion of the left temporal lobe with some contribution from the right temporal lobe; Halgren et al., 2002; Van Petten & Luka, 2006) allows for the possibility that “the N400” is divisible into subcomponents and subfunctions occurring in different latency ranges and different cortical areas. The attention and processing-load studies reviewed below have largely considered the N400 as a single entity, but further work may aid in identifying subcomponents.

URL: //www.sciencedirect.com/science/article/pii/B9780123984517000191

Adult and Second Language Learning

Denise H. Wu, Talat Bulut, in Psychology of Learning and Motivation, 2020

3.3 Theoretical accounts for the regularity and consistency effects of word recognition

In order to examine whether regularity and consistency have an impact on visual word recognition, a vast body of behavioral research has employed the naming paradigm, wherein participants were presented with a printed word and naming latency was measured from the onset of presentation. These studies have generally found that readers’ naming latencies are influenced by the regularity and/or consistency of the spelling-to-sound correspondences in a given word (Coltheart & Rastle, 1994; Cortese & Simpson, 2000; Jared, 1997, 2002; Jared, McRae, & Seidenberg, 1990). For instance, in a series of naming experiments, Jared (1997, 2002) revealed a strong consistency effect and a weak regularity effect in the pronunciation of English words. Specifically, the naming speed of consistent words (e.g., silk) was faster than that of inconsistent words (e.g., pint), regardless of frequency. Moreover, although irregular words were associated with longer naming latencies than regular ones, this regularity effect was much stronger when irregular words (e.g., frost) had a low summed frequency of friends (e.g., cost, whose word body has an identical pronunciation to the experimental word) and a high summed frequency of enemies (e.g., most, whose word body has a different pronunciation from the experimental word). On the other hand, the regularity effect among inconsistent words was weak when there was a high summed frequency of friends and a low summed frequency of enemies. Based on these findings, it was argued that statistical relationships between spelling patterns and pronunciations, rather than grapheme–phoneme correspondence (GPC) rules, guide the reading process (Jared, 2002).
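
Consistency in these studies is typically quantified over the word body: friends share the body and its pronunciation, enemies share the body but not the pronunciation, and the frequencies of each set are summed. The sketch below computes these sums for a toy lexicon; all frequencies and phonetic codes are invented for illustration.

```python
# Frequency-weighted consistency of a word body: sum the corpus frequencies
# of "friends" (same body, same body pronunciation) and "enemies" (same body,
# conflicting pronunciation). Frequencies and phonetic codes are invented.

# word -> (body, body pronunciation, frequency per million)
LEXICON = {
    "frost": ("ost", "/ɒst/", 20),
    "cost":  ("ost", "/ɒst/", 150),
    "lost":  ("ost", "/ɒst/", 180),
    "most":  ("ost", "/oʊst/", 900),
    "post":  ("ost", "/oʊst/", 300),
    "host":  ("ost", "/oʊst/", 60),
}

def friends_and_enemies(target, lexicon=LEXICON):
    body, pron, _ = lexicon[target]
    friends = enemies = 0
    for word, (b, p, freq) in lexicon.items():
        if word == target or b != body:
            continue
        if p == pron:
            friends += freq      # shares body and its pronunciation
        else:
            enemies += freq      # shares body, conflicting pronunciation
    return friends, enemies

friends, enemies = friends_and_enemies("frost")
print(f"frost: summed friend frequency={friends}, enemy frequency={enemies}")
# With these invented counts: friends=330, enemies=1260, i.e. a word whose
# enemies greatly outweigh its friends, the case associated with slow naming.
```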

The findings of these and many other studies with naming and lexical decision tasks have been used to pit two leading computational accounts of word reading against each other: the dual-route models (Coltheart, Curtis, Atkins, & Haller, 1993; Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001) versus the connectionist models (Plaut, McClelland, Seidenberg, & Patterson, 1996). According to the dual-route models, there are lexical and sublexical routes in word recognition. The sublexical route applies GPC rules and yields successful naming of regular words (e.g., mint) and pseudowords (e.g., fint), but fails with irregular words (e.g., pint). The lexical route, on the other hand, draws on lexical knowledge of known words and therefore names both regular and irregular words correctly, but fails with pseudowords. According to such models, naming an irregular word takes longer than naming a regular one because the lexical and sublexical routes deliver conflicting information.
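
The division of labor between the two routes can be sketched as follows. This is a toy illustration of the idea, not the DRC model's actual rule set or lexicon; the GPC table, the word list, and the pseudo-phonetic codes are assumptions.

```python
# Minimal dual-route sketch: the lexical route looks the whole word up; the
# sublexical route assembles a pronunciation from GPC rules. Regular words
# succeed on both routes, irregular words only on the lexical route, and
# pseudowords only on the sublexical route.

GPC_RULES = {"m": "m", "p": "p", "f": "f", "t": "t", "n": "n", "i": "ɪ"}
LEXICON = {"mint": "mɪnt", "pint": "paɪnt"}   # 'pint' violates the i -> ɪ rule

def sublexical_route(letter_string):
    """Assemble a pronunciation grapheme by grapheme; works for pseudowords too."""
    return "".join(GPC_RULES.get(ch, "?") for ch in letter_string)

def lexical_route(letter_string):
    """Whole-word lookup; fails for letter strings that are not known words."""
    return LEXICON.get(letter_string)

for stimulus in ["mint", "pint", "fint"]:
    lex, sub = lexical_route(stimulus), sublexical_route(stimulus)
    conflict = lex is not None and lex != sub
    print(f"{stimulus}: lexical={lex}, sublexical={sub}, conflict={conflict}")
# 'pint' yields conflicting outputs on the two routes, the kind of competition
# these models use to explain longer naming latencies for irregular words.
```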

Although connectionist models of reading also predict the consistency and regularity effects, they do not postulate explicit GPC rules linking graphemes and phonemes in alphabetic languages. Instead, this theoretical approach emphasizes patterns of activation and connection among “nodes” in the network that encode the orthographic and phonological units of a given language. Spelling-to-sound correspondence is represented as different weightings on the connections between these units. Although recent evidence has suggested a continuous impact of consistency (as proposed by the connectionist accounts) rather than of dichotomous regularity (as suggested by the dual-route models) on naming patterns, thereby favoring the connectionist approach to reading, there are also counter-arguments and counter-findings that implicate GPC rules in visual word recognition (Coltheart et al., 2001). Neuropsychological findings from aphasic patients even suggest the need for a “third” route in the reading model (e.g., Wu, Martin, & Markus, 2002). Notwithstanding the debate concerning the rule-based versus weight-based nature of the consistency or regularity that links graphemes to phonemes in word recognition, this line of research has clearly shown that readers exploit the regularities and cues available in written forms to map the input accurately onto the phonological representations of words.

URL: //www.sciencedirect.com/science/article/pii/S0079742120300013

Spoken Word Recognition

David B. Pisoni, Conor T. McLennan, in Neurobiology of Language, 2016

20.3.3 Theoretical Accounts of SWR

Early theories of SWR were based on models and research findings in visual word recognition. Three basic families of models have been proposed to account for the mapping of speech waveforms onto lexical representations. One approach, represented by the Autonomous Search Model developed by Forster (1976, 1989), is based on the assumption that words are accessed through a frequency-ordered search process, with high-frequency words searched before low-frequency words. Search theories are no longer considered viable models of SWR and are not discussed further in this chapter.

The second family of models assumes that words are recognized through processes of activation and competition. Early pure activation models like Morton’s Logogen Theory assumed that words are recognized on the basis of sensory evidence in the input signal (Morton, 1969). Passive sensing devices called logogens were associated with individual words in the lexicon. These word detectors collected information from the input; once a logogen reached its threshold, it became activated and the corresponding word was recognized. To account for frequency effects, common high-frequency words had lower thresholds than rare low-frequency words. There were a number of problems with the Logogen model. It failed to specify precisely the perceptual units used to map acoustic-phonetic input onto logogens, or how different sources of linguistic information are combined to alter the activation levels of individual logogens. Finally, the Logogen model was also unable to account for lexical neighborhood effects and for lexical competition among phonetically similar words, because the logogens for individual words are activated independently and receive no input from other phonetically similar words in memory.
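
The core mechanism of the Logogen model, independent threshold-triggered word detectors whose thresholds are lowered by frequency, can be sketched as follows; the evidence function, frequencies, and parameter values are illustrative assumptions rather than Morton's specification.

```python
# Toy logogen-style word detectors: each detector independently accumulates
# evidence from the input and fires when its threshold is crossed; frequent
# words get lower thresholds, so they fire on less evidence.

LEXICON = {"boat": 900, "goat": 40, "coat": 120}   # word -> frequency (invented)

def threshold(frequency, base=4.0):
    """Higher-frequency words have lower recognition thresholds."""
    return base - min(2.0, frequency / 500.0)

def logogen_recognize(input_string, lexicon=LEXICON, max_steps=10):
    for t in range(1, max_steps + 1):          # input unfolds over time
        revealed = input_string[:t]
        for word in lexicon:
            # Each logogen independently tallies evidence from the input
            # revealed so far (matching minus mismatching letters, a toy
            # stand-in for acoustic-phonetic evidence).
            matches = sum(a == b for a, b in zip(word, revealed))
            evidence = matches - (len(revealed) - matches)
            if evidence >= threshold(lexicon[word]):
                return word, t                 # first logogen to fire wins
    return None, max_steps

print(logogen_recognize("boat"))   # -> ('boat', 3): high frequency, fires early
print(logogen_recognize("goat"))   # -> ('goat', 4): low frequency needs more evidence
```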

The third family of models combined assumptions from both search and activation models. One example of a hybrid model of SWR is Klatt’s Lexical Access From Spectra (LAFS) model (Klatt, 1979), which relies extensively on real-speech input in the form of power spectra that change over time, unlike other models of SWR that rely on preprocessed coded speech signals as input. Klatt argued that earlier models failed to acknowledge the important role of fine phonetic detail because they uniformly assumed the existence of an intermediate abstract level of representation that eliminated potentially useful acoustic information from the speech signal (Klatt, 1986). Based on a detailed analysis of the design architecture of the HARPY speech recognition system (Lowerre & Reddy, 1980), Klatt suggested that intermediate representations may not be optimal for human or machine SWR because they are always potentially error-prone, especially in noise (Klatt, 1977). Instead, Klatt suggested that spoken words could be recognized directly from an analysis of the input power spectrum using a large network of diphones combined with a “backward beam search” technique like the one originally incorporated in HARPY that eliminated weak lexical candidates from further processing (Klatt, 1979). LAFS is the only model of SWR that attempted to deal with fine phonetic variation in speech, which in recent years has come to occupy the attention of many speech and hearing scientists as well as computer engineers who are interested in designing psychologically plausible models of SWR that are robust under challenging conditions (Moore, 2005, 2007b).
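
To give a flavor of how weak lexical candidates can be pruned as acoustic evidence accumulates, here is a generic beam-search sketch over precomputed per-frame diphone scores. It is not Klatt's LAFS or the HARPY system; the lexicon, its diphone spellings, the frame scores, and the beam width are all invented for illustration.

```python
# Generic beam search over per-frame diphone scores, loosely in the spirit of
# pruning weak lexical candidates as evidence accumulates. All values invented.

# Each word is represented as a sequence of diphone labels.
LEXICON = {
    "cat": ["#k", "ka", "at", "t#"],
    "cap": ["#k", "ka", "ap", "p#"],
    "mat": ["#m", "ma", "at", "t#"],
}

# Hypothetical log-probability score of each diphone at each time frame.
FRAME_SCORES = [
    {"#k": -0.2, "#m": -2.5},
    {"ka": -0.3, "ma": -2.0},
    {"at": -0.4, "ap": -1.8},
    {"t#": -0.2, "p#": -1.9},
]

def beam_search(frame_scores, lexicon, beam_width=2):
    # Each hypothesis: (accumulated score, word, number of diphones consumed).
    hypotheses = [(0.0, word, 0) for word in lexicon]
    for frame in frame_scores:
        extended = []
        for score, word, consumed in hypotheses:
            diphones = lexicon[word]
            if consumed < len(diphones):
                step = frame.get(diphones[consumed], -5.0)  # unseen: heavy penalty
                extended.append((score + step, word, consumed + 1))
        # Prune: keep only the best few candidates (the "beam").
        hypotheses = sorted(extended, reverse=True)[:beam_width]
    complete = [(s, w) for s, w, c in hypotheses if c == len(lexicon[w])]
    return max(complete) if complete else None

print(beam_search(FRAME_SCORES, LEXICON))   # 'cat' survives the beam with the best score
```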

URL: //www.sciencedirect.com/science/article/pii/B9780124077942000201

Savant Skills, Special Skills, and Intelligence Vary Widely in Autism

Lynn Waterhouse, in Rethinking Autism, 2013

Possible Brain Bases for Automatic Word Reading in Advance of Comprehension

Borowsky, Esopenko, Cummine, and Sarty (2007) proposed that early word decoding in typically developing children involves activity in temporal lobe regions that support object identification and visual word recognition. Samson, Mottron, Soulières, and Zeffiro (2012) and Scherf, Luna, Minshew, and Behrmann (2010) provided evidence to suggest that hyperlexia—early word decoding without comprehension—in autism might be the result of atypically displaced face and object processing. Scherf et al. (2010) found that individuals with autism activated object recognition regions of the brain when engaged in a face-processing task. The researchers argued that this displaced processing could result from impairment of the fusiform gyrus or impairment in the connectivity of the fusiform gyrus. Samson et al. (2012) proposed that higher activity for words in the fusiform gyrus and medial parietal cortex in autism, combined with lower activity in many reading regions and a pattern of occipital and temporal word processing, created an unusual autonomy of word processing. The researchers argued that this atypical autonomy was the basis for hyperlexia in autism.

URL: //www.sciencedirect.com/science/article/pii/B978012415961700006X

Is Multiple Access an Artifact of Backward Priming?1

Michael K. Tanenhaus, ... Mark Seidenberg, in Lexical Ambiguity Resolution, 1988

8 Coda: Modularity and Lexical Ambiguity

Much of the impact of the lexical ambiguity studies of the late 1970s and early 1980s was due to the fact that multiple access was counterintuitive, especially given the top-down flavor of the interactive models that were then preferred. The ambiguity results, in conjunction with studies of context effects in visual word recognition (e.g., [Stanovich and West, 1983], for a review), seemed to provide strong evidence against hypothesis testing style models of reading and language processing. In the light of recent empirical and theoretical developments, multiple access seems less counterintuitive and its theoretical implications for the modularity debate less clear.

Multiple sense activation now appears to be one of a class of multiple activation phenomena in lexical processing. Multiple pronunciations of non-homophonic homographs, such as wind, are activated in visual word recognition. Phonological and orthographic representations of words are activated in both auditory and visual word recognition. There is also some evidence, from gating studies as well as cross-modal studies, that semantic information related to competing lexical candidates is activated prior to auditory word recognition (see [Marslen-Wilson, 1987] for a recent review).

Both autonomous and interactive models now embrace the notion of multiple access and the general principle of bottom-up priority. The parallel activation of multiple sources of information is characteristic of connectionist interactive models. Indeed, multiple access is a feature of several implemented connectionist models of lexical ambiguity [Cottrell, this volume; Kawamoto, this volume]. The debate between modular and interactive models comes down to whether or not top-down feedback occurs between message-level contexts and early lexical processing [Dell, 1985]. Autonomous models do not allow for such feedback, whereas interactive models make use of it to enable contextual information to constrain lower-level processes. Autonomous models predict that conceptual expectations based on context should not be able to influence initial lexical access, whereas interactive models predict that such influence may or may not occur, depending on the strength of the context. At this point the lexical ambiguity literature can be used—and has been used—to support the claims of both interactive and noninteractive models. Whether there are contexts that block access to contextually inappropriate senses via feedback, and whether the type of feedback that obtains violates modularity assumptions, are issues that remain to be decided.

URL: //www.sciencedirect.com/science/article/pii/B9780080510132500161

Traces of Print Along the Visual Pathway

Tatjana A. Nazir, in Reading as a Perceptual Process, 2000

Final remarks

Only literates can come up with the correct pronunciation of a word that is orally spelled to them letter by letter. Yet a literate person who decodes print in this letter-by-letter fashion will be termed dyslexic (cf. pure alexia). Between knowing how to decode printed graphemes and the ability to ‘see words’ lies a gap that has been neglected in current theories of visual word recognition. The indifference towards these visuo-perceptual aspects of reading stems in part from the view that early visual stages are common to the perception of all visual stimuli and are therefore not specific to reading. The present chapter proposes that this may not be true.

The significant role of visual processing stages in reading becomes evident when we consider what would be left of our reading skills if the pattern memories that we acquire throughout our reading experience were erased and set to zero. To acquire some intuition about this, one needs to experience again what it is to learn a new script: transform your text from the Roman script into another alphabetic script (available on the computer) that is unfamiliar to you (e.g., [Cognitive psychology] could become [Xoγυιτιϖε πσψχηολογψ]). With a bit of effort it will take you a few hours until you can easily tell that [α] is [a], [β] is [b], etc. Note that neither the language nor the grapheme-to-phoneme conversion rules are affected by this manipulation. Once you are comfortable with the new alphabet, you can try to read whole words. In doing so, you will realize that despite your remaining knowledge of how to read, when deprived of familiarity with the visual symbols you will find yourself elaborating letter–sound correspondences just like beginning readers do. An even more impressive experience comes with writing. Given that everything but the shape of the symbols remains equal in this experiment, once you have learned the new alphabet you will be able to write almost effortlessly. However, when you try to read the text that you wrote, you will manifest symptoms of pure alexia: it will be hard to recognize the words that you just wrote. Obviously, nobody really doubts that pattern memories do matter to reading. The point advanced here is simply that these memories may be found ‘dispersed along the visual pathway’, being less abstract than is usually thought.
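
The substitution exercise described above is easy to set up mechanically. The sketch below re-renders Roman text in Greek-lookalike symbols; the partial character mapping is a hypothetical choice made only so that the visual shapes change while the spelling and the grapheme-to-phoneme rules stay intact.

```python
# Substitute unfamiliar visual symbols for familiar Roman letters so that the
# language and the grapheme-to-phoneme rules are untouched but the learned
# visual patterns are not. The partial Roman -> Greek-lookalike mapping below
# is only an illustrative choice.

SUBSTITUTION = str.maketrans({
    "a": "α", "b": "β", "c": "ξ", "d": "δ", "e": "ε", "g": "γ",
    "h": "η", "i": "ι", "l": "λ", "n": "ν", "o": "ο", "p": "π",
    "s": "σ", "t": "τ", "v": "ϖ", "y": "ψ",
})

def disguise(text):
    """Re-render text in unfamiliar symbols; the spelling stays the same."""
    return text.lower().translate(SUBSTITUTION)

print(disguise("Cognitive psychology"))   # -> 'ξογνιτιϖε πσψξηολογψ'
```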

URL: //www.sciencedirect.com/science/article/pii/B9780080436425500036

When WORDS with Higher-frequency Neighbours Become Words with No Higher-frequency Neighbour (Or How to Undress the Neighbourhood Frequency Effect)

Daniel Zagar, Stéphanie Mathey, in Reading as a Perceptual Process, 2000

Discussion

The lexical decision task (LDT) data replicated those of Grainger et al. (1989) and confirmed the psychological reality of the neighbourhood frequency effect (NFE): when words were written in upper case, that is, in a typography for which they had a higher-frequency neighbour (HFN), reaction times were 36 ms longer than for the same words written in lower case with the accent. The same comparison in the stable condition indicated that this difference was not a typography effect: when the typography was changed while the words still had no neighbour, reaction times were almost identical (+4 ms). The interaction between typography and stability confirmed that the presence of a HFN inhibited the visual word recognition process. Moreover, because the same words were used in both conditions, this inhibition cannot be attributed to uncontrolled properties of the items themselves. The simulation study produced a similar interaction: the number of cycles needed to reach threshold was, of course, completely insensitive to the case change in the stable condition, whereas it was greater for ‘upper-case’ words in the unstable condition. These data provide strong support for the existence of an inhibitory mechanism.

Experiment 2

Several HFNs

Simulation investigations with artificial lexica showed that the competition mechanism in the interactive-activation (IA) model still produced inhibition when N was increased from 1 to 4. The inhibition effect was also observed when N was increased from 4 to 12, provided there was a large difference in frequency between the stimulus word and its neighbours. Such inhibition should also be observed in LDT experiments when the number of HFNs is increased. However, this effect has never been observed (Grainger et al., 1989). On the contrary, Grainger and colleagues reported facilitatory effects (Grainger and Jacobs, 1996). Nonetheless, it could be assumed that the inhibitory effect of a large higher-frequency neighbourhood was masked by confounding variables, as the NFE itself had been at times. Experiment 2 was designed to examine whether a large higher-frequency neighbourhood produces an inhibitory effect in the LDT. The same lower–upper-case design as in Experiment 1 was used. In the experimental (unstable) condition the number of HFNs changed when the word was written in upper rather than lower case, whereas in the stable condition the number of HFNs was constant in both cases. Two conditions of neighbourhood change were tested: in the first, the number of HFNs increased on average from 1.13 to 3.07; in the second, from 3.07 to 5.93.

URL: //www.sciencedirect.com/science/article/pii/B9780080436425500048
