Fuzzy Logic Tools for Lexical Acquisition

Claude St-Jacques, University of Ottawa, [email protected]
Caroline Barrière, National Research Council, [email protected]
Henri Prade, IRIT, Université Paul Sabatier, [email protected]

Abstract

Our aim is to build a computational tool that helps children to use an electronic dictionary as a useful resource for the acquisition of their first language. The approach is based on a cognitive theory about grounding lexical acquisition and on the use of fuzzy logic algorithms for grouping similar features together and partitioning different ones in the lexical space of a dictionary.

Keywords: Categorical perception, fuzzy clustering, fuzzy similarity, lexical acquisition.

1 Introduction

The use of a dictionary provides opportunities for the acquisition and refinement of a first language. For adults, a few searches are sometimes necessary to obtain a clear idea of the meaning of an unknown word; for a child, that process can be more complex and more disorienting. The main reason is that a child's early lexical knowledge does not yet provide the rudimentary word senses and references needed to rely easily on dictionary searches. Although children's dictionaries use simple words in their definitions, some of those words might be unknown. The list of unknown words (cross-references) can be small for a monosemous word (single sense), but may be quite large for highly polysemous words (multiple senses). Facing a long search process, the lexical acquisition task might be abandoned by a child unable to contrast the meanings of a polysemous word [1]. In both cases, even if it is possible to follow all cross-references, or if the meaning of the cross-references is already known, there remains the problem of constructing the meaning of the

new word from the meaning of all the other words, and above all of understanding the relevance of each word in that construction. We will look at both monosemous and polysemous words in this research, focusing on the process of understanding and meaning acquisition. Our working hypothesis to address these cross-reference and meaning construction difficulties in lexical acquisition emerges from a cognitive theory about mechanisms of grounding language acquisition and cognition with categorical perception (CP) [5]. These mechanisms of CP are essentially described through two cognitive processes: one extracting distance in a similarity space and the other making categorizations. CP facilitates lexical knowledge acquisition when it starts from purely linguistic resources, such as the dictionary, by grounding word senses on already known categories obtained by that process. For example, the linguistic meaning of "zebra" should be highlighted by CP processes if they show the learner that "zebra" = "horse" + "stripe"¹. In our view, the simulation of that strategy within a lexical acquisition tool making use of an online dictionary can help a child overcome his deficit of background knowledge by being guided through the meaning construction process, grounding new words in already acquired words. Even though CP has previously been simulated with neural networks [5], in the present paper we explore how to simulate these processes with fuzzy logic tools. Harnad [10] already noted that cognitive simulation resides more in the successful computability of a model than in the type of devices used in symbolic or connectionist simulation. Moreover, the connection of analog sensory projections with symbols in neural nets has been criticized for its lack of semantics [15] and analogy [4]. By using a dictionary of a living language as data, we overcome the ontological problem raised by Searle: "In what does cognition consist?" [15]. Words in the context of a dictionary are the inputs and outputs of the CP process used to simulate children's grounded lexical acquisition.

¹ See [5], p. 5.

2 Formal representation of a lexicon

Let us start with the data of the electronic dictionary, "The American Heritage First Dictionary" (AHFD)², and call it information system I. We define such a system by the quadruple I = ⟨E, Q, L, Φ⟩, where E is the set of dictionary entries (Table 1), Q = {Q1, ..., Qh} is the set of children's queries to the dictionary, L is the set of all lexical features for entries in E, and Φ is a semantic relation between the words and the entry, of the form Φ: l R e ⇔ lexical feature l is semantically related to entry e.

² Copyright obtained from Houghton Mifflin.

Table 1: Example of Dictionary Entries

Entry | Description
Horse | A horse is a large animal with long legs. Horses live on farms. People like to ride horses.
Stripe | A stripe is a line of color. Zebras are covered with stripes. Len has a shirt with stripes on it.
Zebra | A zebra is an animal. It looks like a small horse. Zebras have black and white stripes.
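As a concrete (and purely illustrative) data structure, the quadruple I can be sketched in Python as follows; the sample entries are those of Table 1, while the class layout itself is our assumption, not the original implementation:

```python
from dataclasses import dataclass, field

@dataclass
class InformationSystem:
    """The quadruple I = <E, Q, L, Phi> as plain Python structures."""
    entries: dict[str, str]                    # E: entry name -> description
    queries: list[set[str]] = field(default_factory=list)  # Q: children's queries
    features: set[str] = field(default_factory=set)        # L: lexical features
    # Phi is left implicit: a feature l is related to an entry e whenever
    # l occurs among the significant words of e's description.

I = InformationSystem(entries={
    "Horse": "A horse is a large animal with long legs. "
             "Horses live on farms. People like to ride horses.",
    "Stripe": "A stripe is a line of color. Zebras are covered with stripes. "
              "Len has a shirt with stripes on it.",
    "Zebra": "A zebra is an animal. It looks like a small horse. "
             "Zebras have black and white stripes.",
})
```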

Note that not all words present in the definitions are considered as features³. As shown in bold in Table 1, only the most significant words are used for L. In Natural Language Processing (NLP), word significance is often determined by contrasting open-set words (nouns, verbs, adjectives, adverbs), which are content words, with closed-set words (prepositions, conjunctions, etc.), which are function words. The set of content words generates L, which is further reduced by frequency of occurrence, based on the hypothesis that frequency is directly proportional to generality. Further reduction is achieved by collapsing the varied forms of words to their base form. It is a common strategy in natural language processing to apply a stop-word filter and word stemming to the corpus [14].

³ Given that words are both entries and features, we will identify entries with uppercase letters (Horse) and features with lowercase letters (horse).

We now give an initial partial interpretation of lexical data from an electronic dictionary. For a set of entries, as in Table 1, E = {e1, e2, ..., en}, which might be read as the entries-set horse + stripe + … + zebra, and a set of lexical features, L = {l1, l2, ..., ln},

which might be read as the descriptions-set horse + large + … + stripe(s), we define

Φ: Fik(lj) = {(li, µ_Fik(li)) | ∀ ek ∈ E}

as a mapping for a basic space representation of I = ⟨E, Q, L, Φ⟩ for the query Qh = {li}⁴, which might be read as the request about "zebra". Then µ_Fik(li) is the membership degree of li in the fuzzy set Fik, i.e. the weight of a lexical feature in a lexical entry ek.

⁴ In our program, a query can be made with any li that is a lexical feature of an entry. That means we give the user access to the thesaurus of significant words to formulate his query.

Consistent with the standard representation [14] in information retrieval (IR), we use the vector space model. We calculate the weight of a lexical feature in an entry's vector from the count of l's occurrences. For example, using the data from Table 1, we read the entry Horse, which we represent formally by e1, with its significant words horse, large, animal, with, long, legs, live, farms, people, like, ride, i.e., formally speaking, letting l1 = horse, l2 = large, and so on, with L = {l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11}. Then by

counting the frequency of each significant word, we obtain the degree of membership of each element of the fuzzy set Fi1(li) = µ_Fi1(li). Note that instead of using the exact count of occurrences for a term, we apply a multiplication factor based on which part of the description the lexical feature occurs in. The description of a lexical entry in the AHFD uses two levels of gradation: by doubling the weight of a word occurring in the first sentence, we emphasize the importance of the definition of an entry, generally expressed by "is a" or "means", compared to the lexical explanation or exemplification given afterward. Thus, for the word horse in the entry Horse we obtain F11(l1) = µ_F11(l1) = (2+1+1)/l1 = 4/l1, and more generally Fi1 = 4/l1 + 2/l2 + 2/l3 + 2/l4 + 2/l5 + 2/l6 + 1/l7 + 1/l8 + 1/l9 + 1/l10 + 1/l11.
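A minimal sketch of this two-level weighting, under simplifying assumptions (whitespace tokenization, a toy stop-word list and plural stripping instead of real stemming), is:

```python
# Sketch of the AHFD feature weighting: occurrences in the first
# (definitional) sentence count twice, later occurrences count once.
# The stop-word list and the naive stemmer are illustrative stand-ins.
STOP_WORDS = {"a", "an", "the", "is", "are", "of", "on", "to", "it", "and"}

def stem(word: str) -> str:
    # Naive plural stripping; a real system would use a proper stemmer.
    w = word.lower().strip(".,")
    return w[:-1] if w.endswith("s") and len(w) > 3 else w

def fuzzy_entry(description: str) -> dict[str, int]:
    weights: dict[str, int] = {}
    sentences = [s for s in description.split(".") if s.strip()]
    for rank, sentence in enumerate(sentences):
        factor = 2 if rank == 0 else 1   # double the definitional sentence
        for token in sentence.split():
            feature = stem(token)
            if feature and feature not in STOP_WORDS:
                weights[feature] = weights.get(feature, 0) + factor
    return weights

# With the I sketched above: fuzzy_entry(I.entries["Horse"])["horse"] == 4,
# i.e. (2 + 1 + 1), matching F11(l1) = 4/l1.
```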

3 Similitude among lexical features

In the CP model, a first cognitive process enables a learner to transpose non-symbolic material, like the pre-acquisitions of a first language, into a symbolic representation. We simulate this mental mechanism by discriminating a set of similar stimuli coming from lexical entries of the AHFD, as if it were grounded information from the previous learning of a child. Suppose that we represent this state of low-level mapping between labels as a fuzzy semantic relation Φ*: li R lj. We define Φ* as a special mapping of the semantic relation l R e between lexical features and their entry, i.e. li represents lexical features in the set A forming the vector of words A = {l1, l2, ..., ln} and lj represents lexical features in the set B forming the vector of words B = {l1, l2, ..., lm} that we substitute for the entry e. The substitution means that grounded information about a word entry e can be equivalently presented by the vector of words of its surrounding environment found in previous learning, here given as B. That strategy allows the comparison of a grounded word with itself within a new occurrence, as well as the extension of the comparison to other vectors of words. For this reason, A and B are two non-empty crisp subsets of the universe L, each one standing for one ek, and R is a fuzzy relation between the lexical features l of A and B. We define these fuzzy associations of terms generating li R lj in the universe L as the Cartesian product A×B → [0,1]. Since the work of Miyamoto [13], associations between terms have regularly been represented by a pseudo-thesaurus in fuzzy IR. Moreover, different definitions of a fuzzy association have been proposed by theoreticians: Miyamoto [13] uses fuzzy relations such as related terms, broader terms and narrower terms, and Chen & Horng [6] talk about positive and negative associations concerning a fuzzy generalization and a fuzzy specialization. However, let us recall that we are trying to reproduce the first cognitive process of CP by a computational simulation, that is to say, a classification of lexical features by similarity between stimuli. Tversky [17] already described similarity between features as a fundamental principle for classification. According to that framework, and to the first mechanism of CP, we attempt to identify resemblance among lexical features using a fuzzy similarity relationship. A study of this subject in fuzzy logic [3] investigated different measures of comparison between objects, looking for an analogy with Tversky's concept of similarity. It began with the idea that, by using fuzzy sets, measures of comparison between imprecise objects can be usefully carried out. Taking this into consideration, we use a special case of similarity measure introduced by Zadeh [18] as a consistency index, and also pointed out in overviews of similarity measures [3], [7]: Sim(A, B) = sup_l min(µA(l), µB(l)), where A and B are two non-empty fuzzy sets on a set L = {l1, l2, ..., ln}. This index is monotonic with respect to fuzzy set inclusion.
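As a minimal sketch (assuming fuzzy sets are represented as Python dicts from features to membership grades, a choice of ours rather than the paper's), the consistency index can be computed as:

```python
# Consistency index Sim(A, B) = sup_l min(mu_A(l), mu_B(l)).
# Features absent from either set have membership 0, so only the
# common support can contribute to the supremum.
def consistency(a: dict[str, float], b: dict[str, float]) -> float:
    common = set(a) & set(b)
    return max((min(a[l], b[l]) for l in common), default=0.0)

# E.g. with the normalized grades used in the example below:
# consistency({"horse": 0.5, "animal": 0.25},
#             {"horse": 0.12, "animal": 0.25})  # -> 0.25
```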

The axiom of independence, saying that the ordering of the similarity effect on any two components (e.g. li, lj vs. li', lj') is independent of a third component (e.g. lh or lh'), has been criticized on the basis of psychological experiments showing that it is incorrect to predict from the independence postulate that the similarity between two objects sharing a common feature is not affected by their similarity to another one [9]. Taking account of these observations, we think that our choice to stay at a more general level of measures of similitude is reasonable, since we focus our work on how similarity can help children ground their lexical acquisition.

Table 2 shows the membership grades obtained by frequency counts⁵ for the words in Table 1. The fuzzy sets A and B are normalized with respect to the highest value of the frequency counts.

⁵ Remember that a word in the definitional part of the entry's description counts for two occurrences.

Table 2: Fuzzy sets F_A,B

F_A,B | Features li (frequency count)
A_Horse | horse (4), large (2), animal (2), with (2), long (2), legs (2), live (1), farms (1), people (1), like (1), ride (1)
B_Zebra | zebra (3), animal (2), look (1), like (1), small (1), horse (1), have (2), black (1), white (1), stripes (1)

We are now ready to articulate the grounding process of an unknown word based on the fuzzy similarity relation R. We can simulate this process by automatically generating a pseudo-thesaurus from the corpus of the AHFD, with the formula R(li, lj) = sup_C min(µC(li), µC(lj)). Taking the lexical features "horse" and "animal", which appear in bold in both entries (A_Horse, B_Zebra) of Table 2, as li and lj, we can say⁶, for example, that A = .5/li + .25/lj and B = .12/li + .25/lj, and then R(li, lj) = sup_{A,B}(min[.5, .25], min[.12, .25]) = .25. This means that the similarity between "horse" and "animal" in this fragment of the AHFD corpus has a value of 0.25.

⁶ We suppose here that we normalize using 8 as the highest value of the frequency count.

This formal machinery enables a computer lexical acquisition tool to help children ground their search for an unknown lexeme by expanding their query (the word searched) with relevant terms from the pseudo-thesaurus. Instead of a long list of possible words to look at, as extracted from the definitions, a more targeted list of relevant information can be presented. Let us suppose a consultation for the word "zebra". To simulate the subsequent grounding of the child's knowledge, we extend the child's request with similar terms in

the pseudo-thesaurus computed with the AHFD. Using the sup–min composition, we obtain an expanded query E = Q ∘ R, i.e. E(l') = sup_l min(µQ(l), µR(l, l')). For example, the expansion of the query "zebra", Q = 1/l1, with the similar terms R(li, lj) = 1/zebra + .03/horse + .03/line + .2/stripe + .03/white + .06/black + .2/Len + .04/shirt coming from the AHFD pseudo-thesaurus, when we apply an α-cut of 0.025 (a minimal value limiting the terms of R(li, lj)), is given by⁷ E(lg) = sup{min(1,1)}/zebra + sup{min(1,.03)}/horse + sup{min(1,.03)}/line + sup{min(1,.2)}/stripe + sup{min(1,.03)}/white + sup{min(1,.06)}/black + sup{min(1,.2)}/Len + sup{min(1,.04)}/shirt, i.e. the expanded query is E(lg) = 1/zebra + .03/horse + .03/line + .2/stripe + .03/white + .06/black + .2/Len + .04/shirt⁸.

⁷ In this example, the result is equivalent to taking directly the similar terms of the pseudo-thesaurus and their values, because sup applies only when there is more than one term in the original query, and min always returns the value of the pseudo-thesaurus because we give the value 1 to the child's query. Suppose instead that we can ask a child for certainty values on a query with two terms, Q(lh) = .8/zebra + .2/horse, and that

R(li, lj) =
          zebra   horse   stripe
zebra   [  1      .03     .2   ]
horse   [  .03    1       .001 ]

Then the program will compute E = Q ∘ R as E(lg) = sup{min(.8,1), min(.2,.03)}/zebra + sup{min(.8,.03), min(.2,1)}/horse + sup{min(.8,.2), min(.2,.001)}/stripe = .8/zebra + .2/horse + .2/stripe.

⁸ Note that the terms "Len" and "shirt" are not bad results. They come from the entry "stripe" of the AHFD: "A stripe is a line of color. Zebras are covered with stripes. Len has a shirt with stripes on it."
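A sketch of the pseudo-thesaurus construction and of the sup–min query expansion follows; the dict-based representation and the helper names are our assumptions for illustration, not the original program:

```python
# Pseudo-thesaurus R(li, lj) = sup_C min(mu_C(li), mu_C(lj)) over all
# normalized entry fuzzy sets C, followed by the sup-min composition
# E(l') = sup_l min(mu_Q(l), R(l, l')) that expands a child's query.
def pseudo_thesaurus(entry_sets: list[dict[str, float]]) -> dict:
    r: dict[tuple[str, str], float] = {}
    for c in entry_sets:                    # each c: feature -> grade
        for li in c:
            for lj in c:
                r[(li, lj)] = max(r.get((li, lj), 0.0), min(c[li], c[lj]))
    return r

def expand_query(query: dict[str, float], r: dict,
                 alpha: float = 0.025) -> dict[str, float]:
    expanded: dict[str, float] = {}
    for (li, lj), grade in r.items():
        if li in query:
            value = min(query[li], grade)   # min of query grade and relation
            if value >= alpha:              # alpha-cut drops weak associations
                expanded[lj] = max(expanded.get(lj, 0.0), value)  # sup
    return expanded

# expand_query({"zebra": 1.0}, r) reproduces the expansion E(lg) above,
# given a pseudo-thesaurus r built from the normalized AHFD entries.
```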

4 Simulation of cognitive processes

According to [5], the first cognitive process involved in a CP mechanism is the extraction of the distance between natural language expressions in a similarity space. We simulate the process of grounding language acquisition with background knowledge by measuring the similarity between the expanded query E and the matrix F of the AHFD entries. So we obtain the possibilistic similarity Sim by using the sup–min composition: Sim(E, F) = Π_F(E) = E ∘ F, i.e.,

Sim(E, F) = sup_{E∩F} min(µ_{E,F}(lg), µ_{E,F}(lik)). For example, an expanded query with the word "letter" (using the α-cut α = 0.03 as a threshold on E) produces the data given in Table 3 when we limit the record of entries with an α-cut of 0.25.

Table 3: The Similarity to the Lexeme "letter"

Entry | Description (similarity)
Letter1 | A letter is one of the symbols people use to write words. The letters of our alphabet are A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, and Z. (0.43)
Letter2 | A letter is also a message you write on paper. Eli writes letters to all his friends. (0.43)
Consonant | A consonant is a kind of letter. B, C, D, F, G, H, J, K, L, M, N, P, Q, R, S, T, V, W, X, and Z are consonants. Consonants and vowels make the letters of the alphabet. (0.43)
Vowel | A vowel is a kind of letter. A, E, I, O, and U are vowels. Vowels and consonants make the letters of the alphabet. (0.43)
Write | To write means to make letters and words on a piece of paper with a pencil or a pen. Diane wrote her name in big letters across the page. (0.43)
Alphabet | An alphabet is the letters that people write with. Our alphabet is A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z. (0.29)
Mail | The mail is how we send letters and packages from one place to another. (0.29)
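Under the same dict-based representation as before, the retrieval step that produced Table 3 can be sketched as follows (the α-cut of 0.25 is the one reported above; the helper itself is illustrative):

```python
# Possibilistic similarity Sim(E, Fk) = sup-min between the expanded
# query E and each entry's fuzzy set Fk; entries scoring below the
# alpha-cut are filtered out and the rest are ranked by score.
def rank_entries(expanded: dict[str, float],
                 entry_sets: dict[str, dict[str, float]],
                 alpha: float = 0.25) -> list[tuple[str, float]]:
    ranking = []
    for name, f in entry_sets.items():
        common = set(expanded) & set(f)
        score = max((min(expanded[l], f[l]) for l in common), default=0.0)
        if score >= alpha:
            ranking.append((name, score))
    return sorted(ranking, key=lambda pair: pair[1], reverse=True)
```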

The value at the end of each description expresses the possibilistic similarity to the query. This strategy of finding the descriptions nearest to a grounded query in a similarity space seems sufficient to locate the most relevant data concerning a word like "zebra" in a dictionary. However, the distinction between polysemous words like letter1 and letter2 is more problematic, as we can see in Table 3. We note that the measurement of distance in a similarity space, cut out by the child's query, is not enough on its own to decide between similar lexemes of different categories. This brings us to the second process included in the CP mechanism according to [5], namely the categorization itself. In order to help a child distinguish between the multiple word senses of a term resulting from a query on the AHFD, we apply Bezdek's [2] method of cluster analysis to separate the vectors. As a result, the fuzzy partition of a semantic space can be defined by the objective function

Jm(U, v) = Σ(k=1..c) Σ(i=1..n) µki^m ωi d²ki(vk, xi).

We

should read xi as a vector of a lexical entry in the semantic space, vk as the prototypal vector of cluster k, d²ki as the squared distance error between these two vectors, ωi as the weight of an element i given by its membership degree µki to the cluster k, and m as a fuzzifier. This partitioning algorithm is called the fuzzy c-means algorithm [2], [11], to express that fuzzy sets are used for clustering the elements of a data set X = {x1, ..., xn} into clusters C = {c1, ..., cc} using the means of their membership values resulting from the fuzzy set U: C × X → [0,1], which is the matrix of membership degrees produced at each iteration, expressing the belongingness of each datum x to a cluster c. To apply this method, we use Höppner's free software for fuzzy clustering [12], and we refer to [11] for more theoretical details. In Table 4 we show a partition automatically produced by this software on a matrix resulting from a query with the word "letter". Looking at the first four results in each cluster, the fuzzy cluster analysis produced two clusters correctly incorporating the two word senses of "letter", except for "First" and "Spell". We indicate the membership degree to the cluster below the lexical entry⁹.

⁹ We applied an α-cut of 0.51 on the clustering of a matrix of 13 vectors coming from the query.
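In our experiments we used Höppner's tool library [12]; purely as an illustration, a compact NumPy sketch of the alternating updates that minimize Jm might look like this:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, w=None, iters=100, eps=1e-5, seed=0):
    """Minimize J_m(U, v) = sum_k sum_i u_ki^m * w_i * d_ki^2 by the
    standard alternating updates of memberships U and prototypes v."""
    n = x.shape[0]
    w = np.ones(n) if w is None else w
    rng = np.random.default_rng(seed)
    u = rng.random((c, n))
    u /= u.sum(axis=0)                       # each datum's memberships sum to 1
    for _ in range(iters):
        um = (u ** m) * w                    # weighted fuzzified memberships
        v = um @ x / um.sum(axis=1, keepdims=True)           # prototype update
        d2 = ((x[None, :, :] - v[:, None, :]) ** 2).sum(-1)  # squared distances
        d2 = np.fmax(d2, 1e-12)              # guard against division by zero
        u_new = 1.0 / (d2 ** (1.0 / (m - 1.0)))
        u_new /= u_new.sum(axis=0)           # membership update
        if np.abs(u_new - u).max() < eps:
            u = u_new
            break
        u = u_new
    return u, v
```

The local-minima sensitivity discussed in Section 5 shows up here as dependence on the random initialization of U.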

Table 4: Partition of a Semantic Space

Entry (membership) | Description

Cluster 1
Letter1 (.53) | A letter is one of the symbols people use to write words. The letters of our alphabet are A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, and Z.
Alphabet (.52) | An alphabet is the letters that people write with. Our alphabet is A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z.
Consonant (.52) | A consonant is a kind of letter. B, C, D, F, G, H, J, K, L, M, N, P, Q, R, S, T, V, W, X, and Z are consonants. Consonants and vowels make the letters of the alphabet.
Print (.51) | To print means to write with care so that none of the letters touch each other. George printed his name at the top of his paper.

Cluster 2
Letter2 (.51) | A letter is also a message you write on paper. Eli writes letters to all his friends.
First (.52) | First means before all the others. The letter A is the first letter in the alphabet.
Mail (.52) | The mail is how we send letters and packages from one place to another.
Spell (.52) | To spell is to put letters together to make words. Nan knows how to spell most of the words in this book.

Although the partition is acceptable, it contains mistakes and is highly unstable, as we can see from the membership degrees of the entries being barely above 0.5. Thus, an entry belongs only slightly more to one cluster than to the other.

5 Discussion

Even if some improvements can be made to our strategy of frequency counts to obtain a better spread of membership degrees, our simulation of the mechanisms of grounding language acquisition with CP works relatively well for monosemous terms. As we can see in Table 3, the memberships of the entries in the possibilistic similarity of the query vary by layers of degrees (0.43, 0.29). By undervaluing the counts of some grammatical categories of words, like adjectives or adverbs, we expect to obtain a better spread of values in a future experiment. Indeed, the extraction of dictionary entries by a query makes it possible for a child to look at this lexical information in a similarity space as background knowledge about the unknown word. On the other hand, it is less obvious how to identify the relevant information about individual word senses of polysemous words in the AHFD. Although the extraction of relevant information with fuzzy similarity still works well in this case, the partition of multiple senses into categories using cluster analysis is relatively unstable, as shown in Table 4. Even more problematic, the search for a prototype in fuzzy cluster analysis is often distorted by local minima. This frequently happens when a categorization is made on a sparse matrix. Although some approaches can be explored to address the sparse matrix problem, the structure of a dictionary, with a list of lexical entries containing very concise information often pertinent only to a small subgroup of entries, does not allow the vocabulary of similar segments to be dense enough to escape this kind of matrix. Especially in the case of a children's dictionary, the vocabulary for kindergarten-age children is in some cases so diffuse that it is difficult to find a prototypal differentiation of resemblances.

Table 5: Polysemy and diffuse vocabulary

Entry | Description
Land | Land is the part of the world that is not water. People live on land.
     | A land is a country. You can collect stamps from many different lands.
     | The land is the earth or ground that someone uses. The farmers planted potatoes on their land.
     | To land is to come down to the ground. Dale saw the airplane when it landed in a field.

In Table 5 we give an example, with the entry "land", of a polysemous term of the AHFD having a very diffuse vocabulary. We show in bold the significant words and in italics the only common word (other than land) among this vocabulary, "ground", which appears in only two senses. This makes the partition very difficult in a Euclidean space of vectors having almost only the polysemous term itself as a point of separation in a sparse matrix. However, it seems that a child should be able to learn the differentiation of senses for a polysemous term.

For this reason, in future work we wish to explore a sparse graph model [8] called "small worlds". In Figure 1, we present an extract of what the small world for the entry "Letter" would be. The nodes of the graph correspond to the features used here in the fuzzy model. The edges are based on the co-occurrences of words. Any polysemous entry is divided into two nodes (letter1 and letter2 in the figure) representing the two senses. Small worlds, as used by [8], are especially designed to perform word sense disambiguation based on the notion of cycles and paths in the graph. We can see in Figure 1 how different sets of words have attractive power for different senses. For example, the set {book, word, dictionary, alphabet} would highlight the first sense of "letter" (see Table 4), while the set {card, mail, message, package} would highlight the second sense. Other sets, such as {write, print, pencil}, seem to stand in between and are actually pertinent to both senses.

Figure 1 - Extract of Small World around "Letter"
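Since this is future work, the following is only a hypothetical sketch of the raw material for such a graph, a co-occurrence relation over significant words; splitting a polysemous entry into sense nodes (letter1, letter2) would be a separate, sense-driven step:

```python
# Hypothetical sketch: an undirected co-occurrence graph over significant
# words, built from the sets of features extracted per dictionary entry.
from itertools import combinations

def cooccurrence_graph(entry_features: dict[str, set[str]]) -> dict[str, set[str]]:
    graph: dict[str, set[str]] = {}
    for features in entry_features.values():
        for a, b in combinations(sorted(features), 2):  # words sharing an entry
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
    return graph
```

Assigning fuzzy weights to these edges, as proposed in the next paragraph, would amount to replacing the neighbor sets with dicts of membership grades.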

Although small worlds are quite interesting, they lack the important notion of degree of relevance. As future research, we will combine this idea with our grounding model and use a fuzzy logic approach to assign weights to the edges. We are therefore hopeful of finding a solution to the grounding of the individual word senses of an unknown polysemous word, and of incorporating our results within a lexical acquisition tool. One way to explore the combination of the semantic relations of proximity appearing in a net of small worlds with the grounding model could be to exploit a certain form of qualitative analysis of the "affect-related information" [16] available in the text of the dictionary. It has been shown in [16] that by using a specialized lexicon (a fuzzy thesaurus) and by computing with possibilistic operators the most suitable affect in a context, we can discover other hidden relationships.

Acknowledgements

This research has been partly supported by the Fonds québécois de la recherche sur la société et la culture # 87768.

References

[1] C. Barrière (1997). From a Children's First Dictionary to a Lexical Knowledge Base of Conceptual Graphs. Ph.D. Thesis, Simon Fraser University, Burnaby, B.C., Canada.
[2] J. C. Bezdek (1981). Pattern Recognition with Fuzzy Objective Function Algorithms. New York, NY: Plenum Press.
[3] B. Bouchon-Meunier, M. Rifqi, and S. Bothorel (1996). Towards general measures of comparison of objects. Fuzzy Sets and Systems, 84, 143-153.
[4] C. F. Boyle (2001). Transduction and degree of grounding. Psycoloquy, 12, #36.
[5] A. Cangelosi, A. Greco and S. Harnad (2002). Symbolic grounding and the symbolic theft hypothesis. In A. Cangelosi and D. Parisi (eds.), Simulating the Evolution of Language. London: Springer.
[6] S. M. Chen and Y. J. Horng (1999). Fuzzy query processing for document retrieval based on extended fuzzy concept. IEEE Trans. Systems, Man and Cybernetics, Part B: Cybernetics, 29 (1), 126-135.
[7] D. Dubois and H. Prade (1982). A unifying view of comparison indices in a fuzzy set-theoretic framework. In R. R. Yager (ed.), Recent Developments in Fuzzy Set and Possibility Theory. Pergamon Press, 3-13.
[8] B. Gaume, K. Duvignau, O. Gasquet and M.-D. Gineste (2002). Forms of meaning, meaning of forms. JETAI, 14 (1), 61-74.
[9] R. L. Goldstone, D. L. Medin and D. Gentner (1991). Nonindependence of features in similarity judgments. Cognitive Psychology, 23, 222-262.
[10] S. Harnad (1982). Neoconstructivism: a unifying theme for the cognitive sciences. In T. Simon and R. Scholes (eds.), Language, Mind and Brain. Hillsdale, NJ: Erlbaum, 1-11.
[11] F. Höppner, F. Klawonn, R. Kruse and T. Runkler (1999). Fuzzy Cluster Analysis. Chichester, England: John Wiley & Sons.
[12] F. Höppner (2000). Fuzzy Clustering Algorithms - A Tool Library: User's Manual.
[13] S. Miyamoto (1990). Fuzzy Sets in Information Retrieval and Cluster Analysis. Dordrecht, Netherlands: Kluwer Academic Publishers.
[14] G. Salton and M. J. McGill (1987). Introduction to Modern Information Retrieval. New York, NY: McGraw-Hill.
[15] J. R. Searle (2001). The failure of computationalism: II. Psycoloquy, 12, #62.
[16] P. Subasic and A. Huettner (2000). Calculus of fuzzy semantic typing for qualitative analysis of text. ACM KDD 2000, Workshop on Text Mining, Boston, August.
[17] A. Tversky (1977). Features of similarity. Psychological Review, 84, 327-352.
[18] L. A. Zadeh (1978). PRUF - a meaning representation language for natural languages. Int. J. of Man-Machine Studies, 10, 395-460.
