Non-Axiomatic Reasoning System (NARS) solving the Facebook AI Research bAbI tasks

Patrick Hammer

June 26, 2015

Abstract

In “Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks” [1], 20 task types for AI-complete question answering were proposed which, according to the authors, a general AI system has to be able to deal with. This paper applies Pei Wang’s Non-Axiomatic Reasoning System [5, 6, 7] to all of the proposed bAbI tasks.

Contents

1 Narsese syntax examples

2 Factoid QA
  2.1 Basic Factoid QA with Single Supporting Fact
    2.1.1 Example
    2.1.2 Answering process
  2.2 Factoid QA with Two Supporting Facts
    2.2.1 Example
    2.2.2 Answering process
    2.2.3 Acquiring background knowledge

3 Relations
  3.1 Two argument relations: Subject vs. object
    3.1.1 Example
  3.2 Three argument relations
    3.2.1 Example
  3.3 Answering process

4 Lists/Sets
  4.1 Example
  4.2 Answering process

5 Counting
  5.1 Example
  5.2 Answering process

6 Simple Negation
  6.1 Example
  6.2 Answering process

7 Indefinite knowledge
  7.1 Example
  7.2 Answering process
  7.3 Acquiring background knowledge

8 Coreference
  8.1 Basic coreference
    8.1.1 Example
  8.2 Compound coreference
    8.2.1 Example
  8.3 Answering process

9 Time manipulation
  9.1 Example
  9.2 Answering process

10 Basic inference
  10.1 Basic deduction
    10.1.1 Example
  10.2 Basic induction
    10.2.1 Example
  10.3 Answering process

11 Positional reasoning
  11.1 Example
  11.2 Answering process
  11.3 Acquiring background knowledge

12 Reasoning about size
  12.1 Example
  12.2 Answering process
  12.3 Acquiring background knowledge

13 Path finding
  13.1 Example
  13.2 Answering process

14 Reasoning about agents’ motivations
  14.1 Example
  14.2 Answering process

15 Conclusion

1 Narsese syntax examples

We will express all knowledge to reason about in Narsese, the formal language in which the Non-Axiomatic Logic (NAL), the cognitive logic NARS uses for reasoning, is defined. How NAL works is described in detail in [7]. Only the details needed for the bAbI tasks are described by examples here:

A is a special case of B: A → B
A is an individual, not a category on its own: {A}
A is a property: [A]
a1, ..., an are related through relation: (*, a1, ..., an) → relation
which can also be written as: ai → (/, relation, a1, ..., _, ..., an), with “_” marking the position of ai
Something unspecified which is a special case of B: #1 → B
If something is a bird it is also flying: ($1 → bird) ==> ($1 → [flying])

Such implications can also have time attached, like “if A happens then B happens at the same time”: A =|> B, and “B happens after A happens”: A =/> B. The time difference between A and B can also be measured, for example by “+10”, written as: (&/,A,+10) =/> B.

Additionally, each Narsese sentence will here either end with “.”, which represents a judgement, or with “?”, which represents a question. Furthermore, each sentence with “.” punctuation has a truth-value <f, c> attached, where f represents the frequency, defined as f = w+ / w with w+ the positive evidence and w the total evidence, and c represents the confidence, defined as c = w / (w + 1). If unspecified, the sentence has the truth-value <1.0, 0.9>.

That something is not true can in NARS be represented either as
J. <0.0,c>
or with negation as
(--,J). <1.0,c>
where J is a judgement. Additionally, every sentence can have a tense attached: “:|:” for “now”, “:/:” for “in the future”, and “:\:” for “in the past”. If none is specified it means “always”. Note that what currently is “now” will be “in the past” just seconds later; reasoning time in NARS is not independent of real time.
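A small worked example of these truth-value definitions (an illustration, not from the original tasks): suppose a statement has been supported by w+ = 3 positive out of w = 4 total pieces of evidence. Then

f = w+ / w = 3/4 = 0.75
c = w / (w + 1) = 4/5 = 0.8

so the statement would carry the truth-value <0.75, 0.8>. The more evidence is collected, the closer c gets to 1, without ever reaching it.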

2 Factoid QA

2.1 Basic Factoid QA with Single Supporting Fact

These are defined in [1] as basic question answering tasks where a single supporting fact leads to the answer.

2.1.1 Example

John is in the playground.
((*,{john},{playground}) --> is_in). :|:
Bob is in the office.
((*,{bob},{office}) --> is_in). :|:
Where is John?
((*,{john},{?where}) --> is_in)?
A:playground
Answer ((*,{john},{playground}) --> is_in). <1,0.47>

2.1.2 Answering process

In this example all the system has to do is identify
((*,{john},{playground}) --> is_in). <1,0.47>
which is a result of eternalization, as the answer to the question
((*,{john},{?where}) --> is_in)?
by variable unification, “?where := playground”.
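Where the confidence 0.47 comes from: the input was an event with the default truth-value <1.0, 0.9>, and answering the tenseless question requires eternalizing it, which discounts confidence. A minimal sketch, assuming the standard NAL eternalization rule with evidential horizon k = 1:

c_eternal = c / (c + k) = 0.9 / (0.9 + 1) ≈ 0.47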


2.2 Factoid QA with Two Supporting Facts

2.2.1 Example

Background knowledge: If something is picked, the object which is picked is where the person is.
((&|,((*,#Person,$Object) --> pick),((*,#Person,$Place) --> is_in)) =|> ((*,$Object,$Place) --> is_in)).
John is in the playground.
((*,{john},{playground}) --> is_in). :|:
Bob is in the office.
((*,{bob},{office}) --> is_in). :|:
John picked up the football.
((*,{john},{football}) --> pick). :|:
Bob went to the kitchen.
((*,{bob},{kitchen}) --> is_in). :|:
Where is the football?
((*,{football},{?where}) --> is_in)?
A:playground
Answer ((*,{football},{playground}) --> is_in). <1,0.28>
Where was Bob before the kitchen?
((&/,((*,{bob},{?Where}) --> is_in),?1) =/> ((*,{bob},{kitchen}) --> is_in))?
A:office
Answer ((&/,((*,{bob},{office}) --> is_in),+3) =/> ((*,{bob},{kitchen}) --> is_in)). <1,0.31>

2.2.2 Answering process

This one needs a form of temporal reasoning/perception, so that the system can see that
((*,{bob},{office}) --> is_in). :|:
was indeed happening before
((*,{bob},{kitchen}) --> is_in). :|:
happened. In this example this was achieved through temporal induction supported by one example, indicating that “usually after Bob is in the office, he is in the kitchen”. Then, like in the first example, all that is left is recognizing
((&/,((*,{bob},{office}) --> is_in),+3) =/> ((*,{bob},{kitchen}) --> is_in)).
as the answer to the question.
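A sketch of where the 0.31 can come from, assuming the standard NAL truth functions with k = 1 (an illustration of the magnitudes, not a trace of the actual derivation): temporal induction from the two events, each with truth-value <1, 0.9>, gives the evidential weight

w = f · c1 · c2 = 1 · 0.9 · 0.9 = 0.81, so c = w / (w + 1) = 0.81 / 1.81 ≈ 0.45

and eternalizing the resulting hypothesis discounts once more: 0.45 / (0.45 + 1) ≈ 0.31.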

2.2.3 Acquiring background knowledge

The background knowledge either needs to be given like above, or acquired from examples. In this case we had
((&|,((*,#Person,$Object) --> pick),((*,#Person,$Place) --> is_in)) =|> ((*,$Object,$Place) --> is_in)).
which can be learned by giving multiple examples like
((*,{tom},{ball}) --> pick). :|:
((*,{tom},{living_room}) --> is_in). :|:
((*,{ball},{living_room}) --> is_in). :|:
in which the system has to apply three inductions with variable introduction, as well as temporal induction.

3 Relations

3.1 Two argument relations: Subject vs. object

3.1.1 Example

The office is north of the bedroom.
((*,{office},{bedroom}) --> north-of).
The bedroom is north of the bathroom.
((*,{bedroom},{bathroom}) --> north-of).
What is north of the bedroom?
((*,{?What},{bedroom}) --> north-of)?
What is the bedroom north of?
((*,{bedroom},{?What}) --> north-of)?
A:office
Answer ((*,{office},{bedroom}) --> north-of). <1,0.9>
A:bathroom
Answer ((*,{bedroom},{bathroom}) --> north-of). <1,0.9>


3.2 Three argument relations

3.2.1 Example

Mary gave the cake to Fred.
((*,{mary},{cake},{fred}) --> gave).
Fred gave the cake to Bill.
((*,{fred},{cake},{bill}) --> gave).
Jeff was given the milk by Bill.
((*,{bill},{milk},{jeff}) --> gave).
Who gave the cake to Fred?
((*,{?Who},{cake},{fred}) --> gave)?
Who did Fred give the cake to?
((*,{fred},{cake},{?Who}) --> gave)?
What did Jeff receive?
((*,{?1},{?WhatReceived},{jeff}) --> gave)?
Who gave the milk?
((*,{?Who},{milk},{?1}) --> gave)?
A:Mary
Answer ((*,{mary},{cake},{fred}) --> gave). <1,0.9>
A:Bill
Answer ((*,{fred},{cake},{bill}) --> gave). <1,0.9>
A:milk
Answer ((*,{bill},{milk},{jeff}) --> gave). <1,0.9>
A:Bill
Answer ((*,{bill},{milk},{jeff}) --> gave). <1,0.9>

3.3 Answering process

In NARS arbitrary relations can be represented; this example again only demands variable unification / simple pattern matching.


4 Lists/Sets

4.1 Example

This type of example is about forming sets.
Daniel picks up the football.
({football} --> (/,hold,{daniel},_)). :|:
Daniel drops the newspaper.
({newspaper} --> (/,hold,{daniel},_)). :|: <0,0.9>
Daniel picks up the milk.
({milk} --> (/,hold,{daniel},_)). :|:
What is Daniel holding?
({?What} --> (/,hold,{daniel},_))?
({?What,?What2} --> (/,hold,{daniel},_))?
({?What,?What2,?What3} --> (/,hold,{daniel},_))?
A:milk
Answer ({milk} --> (/,hold,{daniel},_)). <1,0.47>
A:football,milk
Answer ({football,milk} --> (/,hold,{daniel},_)). <1,0.3>

4.2 Answering process

For such examples NARS needs to be able to combine sets into bigger sets:
({a1,...,an} --> M).
({b1,...,bn} --> M).
|- ({a1,...,an,b1,...,bn} --> M).
The rest once again is pattern matching / variable unification. However, there is one thing to note: if only one question, namely
(?What --> (/,hold,{daniel},_))?
had been given to the system, then
({milk} --> (/,hold,{daniel},_)). <1,0.47>
would probably have been considered the best answer, because it is the simplest one and has the highest truth-value, although it considers less information than
({football,milk} --> (/,hold,{daniel},_)). <1,0.3>
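This preference can be made concrete with the truth expectation NARS uses for ranking answers (see also Section 5.2), e = c · (f − 1/2) + 1/2; applied to the two candidate answers above (an illustration):

e(<1, 0.47>) = 0.47 · 0.5 + 0.5 ≈ 0.74
e(<1, 0.3>) = 0.3 · 0.5 + 0.5 = 0.65

so the smaller but better-supported set wins unless the questions, like the three above, explicitly ask for sets of larger size.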

5 Counting

These examples are about counting elements in a set.

5.1 Example

Input:
Daniel picked up the football.
({football} --> (/,hold,{daniel},_)). :|:
Daniel dropped the football.
({football} --> (/,hold,{daniel},_)). :|: <0,0.9>
Daniel got the milk.
({milk} --> (/,hold,{daniel},_)). :|:
Daniel took the apple.
({apple} --> (/,hold,{daniel},_)). :|:
What the count relation of elements in a set means:
(({$1} --> $rel) ==> ((*,1,$rel,(*,$1)) --> count)).
(({$1,$2} --> $rel) ==> (&&,(--,((*,1,$rel,(*,$1)) --> count)),((*,2,$rel,(*,$1,$2)) --> count))).
(({$1,$2,$3} --> $rel) ==> (&&,(--,((*,1,$rel,(*,$1)) --> count)),(--,((*,2,$rel,(*,$1,$2)) --> count)),((*,3,$rel,(*,$1,$2,$3)) --> count))).
How many objects is Daniel holding according to our counting definition?
((*,?HowMany,(/,hold,{daniel},_),?M) --> count)?
A:two. The system only establishes a count of at least one here, but under AIKR this is fine.
Answer ((*,1,(/,hold,{daniel},_),(*,football)) --> count). <1,0.45>

5.2 Answering process

Although the basic counting happens automatically by combining sets into bigger ones, the question answering itself is more complicated. What does counting mean if it is not even entirely sure whether the elements to count fulfill the given relation or not (insufficient knowledge)? What does counting mean if checking the data against the relation demands too much time and space, perhaps simply because of the amount of data? As pointed out by Pei Wang in [5, 6], an intelligent system has to be able to deal with insufficient knowledge and resources (AIKR). This means an AGI can answer such questions only as well as the current knowledge and resources allow. This is what happens above and in all other examples this paper demonstrates. Once answers are found, simple answers (low syntactic complexity) of high truth expectation (how true the judgement is) and originality (how much was considered) are preferred.

6 Simple Negation

6.1 Example

Sandra travelled to the office.
((*,{sandra},{office}) --> at). :|:
Fred is no longer in the office.
((*,{fred},{office}) --> at). :|: <0,0.9>
Is Fred in the office?
((*,{fred},{office}) --> at)?
A:no, Fred was not in the office
Answer ((*,{fred},{office}) --> at). :\: <0,0.9>
Is Sandra in the office?
((*,{sandra},{office}) --> at)?
A:yes, Sandra was in the office
Answer ((*,{sandra},{office}) --> at). :\: <1,0.9>

6.2 Answering process

Here the questions were yes/no questions, directly answerable by recognizing the corresponding judgements as answers to the questions, where negation was expressed as shown in the Narsese syntax examples section.

7 Indefinite knowledge

Reasoning examples about the unknown.

7.1 Example

Background knowledge: John can’t be in the classroom or playground and at the same time in the office.
(({john} --> (/,at,_,{classroom})) =|> (--,({john} --> (/,at,_,{office})))).
(({john} --> (/,at,_,{playground})) =|> (--,({john} --> (/,at,_,{office})))).
John is either in the classroom or the playground.
(||,({john} --> (/,at,_,{classroom})),({john} --> (/,at,_,{playground}))). :|:
(({john} --> (/,at,_,{classroom})) =|> (--,({john} --> (/,at,_,{playground})))).
(({john} --> (/,at,_,{playground})) =|> (--,({john} --> (/,at,_,{classroom})))).

Sandra is in the garden.
({sandra} --> (/,at,_,{garden})). :|:
Is John in the classroom?
({john} --> (/,at,_,{classroom}))?
Is John in the office?
({john} --> (/,at,_,{office}))?
A:maybe
Answer ({john} --> (/,at,_,{classroom})). :\: <0.68,0.93>
A:no
Answer ({john} --> (/,at,_,{office})). :\: <0,0.9>

7.2 Answering process

Since the truth-values of NAL distinguish between something being not true (low frequency) and something being not well known yet (low confidence), the above is possible. The first answer expresses that the system is fairly confident that it is undecided whether John is in the classroom, and the second that the system is confident that John is not in the office.
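In terms of the truth expectation e = c · (f − 1/2) + 1/2 this difference becomes visible (an illustration):

e(<0.68, 0.93>) = 0.93 · (0.68 − 0.5) + 0.5 ≈ 0.67, closer to “maybe” than to a clear yes or no
e(<0, 0.9>) = 0.9 · (0 − 0.5) + 0.5 = 0.05, a rather clear “no”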

7.3 Acquiring background knowledge

In this case, learning the related background knowledge is easy in principle: one could give the system examples like
({person} --> (/,at,_,{room_A})). :|:
(--,({person} --> (/,at,_,{room_B}))). :|:
Here the negative event is needed because there is a fundamental difference between events which merely were not observed and negative events: if something wasn’t observed, it can’t simply be assumed that it didn’t happen; the so-called “closed-world assumption” cannot generally be made by an AGI system. Also note that in a natural scenario, negative events are not part of the input at all. Negative events mostly come from disappointed anticipations, namely in cases where something was expected to happen but did not happen as expected.

8 Coreference

8.1 Basic coreference

8.1.1 Example

Daniel was in the kitchen.
((*,{daniel},{kitchen}) --> at). :\:
Then he went to the studio.
(--,((*,{daniel},{kitchen}) --> at)). :|:
((*,{daniel},{studio}) --> at). :|:
Sandra was in the office.
((*,{sandra},{office}) --> at). :\:
Where is Daniel?
((*,{daniel},{?where}) --> at)?
A:studio
Answer ((*,{daniel},{studio}) --> at). <1,0.47>

8.2 Compound coreference

8.2.1 Example

Daniel and Sandra journeyed to the office.
({sandra} --> (/,at,_,{office})). :|:
({daniel} --> (/,at,_,{office})). :|:
Then they went to the garden.
({sandra} --> (/,at,_,{office})). :|: <0,0.9>
({daniel} --> (/,at,_,{office})). :|: <0,0.9>
({daniel} --> (/,at,_,{garden})). :|:
({sandra} --> (/,at,_,{garden})). :|:
Sandra and John travelled to the kitchen.
({sandra,john} --> (/,at,_,{kitchen})). :|:
After that they moved to the hallway.
({sandra,john} --> (/,at,_,{hallway})). :|:
Where is Daniel?
({daniel} --> (/,at,_,{?Where}))?
A:garden
Answer ({daniel} --> (/,at,_,{garden})). <1,0.47>


8.3 Answering process

These examples implicitly assume that it is known by the AI system that something can’t be at two places at the same time. This knowledge has to be given to NARS or acquired by the system itself through observation. Alternatively, an implicit handling like above is possible: whenever a position event is input, a corresponding “the object is not at the last position anymore” event is input to the system as well. This representation is the easiest for the system to work with.

9 Time manipulation

Examples which demand temporal reasoning in order to be understood.

9.1 Example

In the afternoon Julie went to the park.
((*,{julie},{park}) --> go). :|:
Yesterday Julie was at school.
((*,{julie},{school}) --> go). :\:
Julie went to the cinema this evening.
((*,{julie},{cinema}) --> go). :/:
Where did Julie go after the park?
((&/,((*,{julie},{park}) --> go),?1) =/> ((*,{julie},{?where}) --> go))?
A:cinema
Answer ((&/,((*,{julie},{park}) --> go),+3) =/> ((*,{julie},{cinema}) --> go)). <1,0.31>

9.2 Answering process

This example demands temporal reasoning and variable unification, namely forming the temporal implication statement through temporal induction, and then recognizing the answer as the answer to the question. “Evening”, “afternoon” etc. could also have been represented as events of their own here, but this wasn’t necessary in this case.

10 Basic inference

10.1 Basic deduction

Deriving implicit knowledge from what is already known.


10.1.1 Example

Sheep are afraid of wolves.
((*,sheep,wolf) --> afraid-of).
Cats are afraid of dogs.
((*,cat,dog) --> afraid-of).
Mice are afraid of cats.
((*,mouse,cat) --> afraid-of).
Gertrude is a sheep.
({gertrude} --> sheep).
What is Gertrude afraid of?
((*,{gertrude},?what) --> afraid-of)?
A:wolf
Answer ((*,{gertrude},wolf) --> afraid-of). <1,0.73>

10.2 Basic induction

Generalizing using the induction principle and applying the resulting hypothesis to a new special case.

10.2.1 Example

Lily is a swan.
({lily} --> swan).
Lily is white.
({lily} --> [white]).
Greg is a swan.
({greg} --> swan).
What color is Greg?
({greg} --> [?WhatColor])?
A:white
Answer ({greg} --> [white]). <1,0.33>


10.3 Answering process

Deduction and induction are among NARS’s basic reasoning capabilities. In the former case, since Gertrude is a sheep and sheep are afraid of wolves, so is Gertrude; in the latter, Greg is probably white because he is a swan and a swan is known, namely Lily, which is white. Abduction tasks are also easy for NARS; for example, if it is known that Tim is white like a swan, the system can conclude that he might indeed be a swan.
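For reference, a sketch of the standard NAL truth functions behind these answers (assuming the evidential horizon k = 1; the exact confidences also depend on intermediate structural inference steps):

Deduction: f = f1 · f2, c = f1 · f2 · c1 · c2
Induction: f = f1, c = w / (w + 1) with w = f2 · c1 · c2

With all premises at the default <1, 0.9>, a single deduction step yields c = 0.81; the 0.73 ≈ 0.9³ above is consistent with one additional inference step contributing another factor of 0.9.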

11 Positional reasoning

Many may know the SHRDLU program written by Terry Winograd [3], which reasons about a 3D micro-domain consisting only of blocks, pyramids, etc., where the user is able to ask the system questions like whether there is a red block above the green block. One is also able to let the system perform certain actions, like building new user-defined structures. From an AGI perspective, the reasoning capabilities needed for this are limited to deduction, but it is at the same time a very representative example of positional reasoning.

11.1 Example

Background knowledge about the relation between top-of and right-of in 2-dimensional space:
If A is on top of B and C is to the right of B, then C is to the right of A:
((&&,((*,$A,#B) --> top-of),((*,$C,#B) --> right-of)) ==> ((*,$C,$A) --> right-of)).
If A is to the right of B and B is on top of C, then A is on top of C:
((&&,((*,$A,#B) --> right-of),((*,#B,$C) --> top-of)) ==> ((*,$A,$C) --> top-of)).

Input:
The triangle is to the right of the blue square.
((*,triangle,(&,[blue],square)) --> right-of).
The red square is on top of the blue square.
((*,(&,[red],square),(&,[blue],square)) --> top-of).
The red sphere is to the right of the blue square.
((*,(&,[red],sphere),(&,[blue],square)) --> right-of).
Is the red sphere to the right of the blue square?


((*,(&,[red],sphere),(&,[blue],square)) --> right-of)?
Is the red square to the left of the triangle?
((*,triangle,(&,[red],square)) --> right-of)?
A: yes
Answer ((*,(&,[red],sphere),(&,[blue],square)) --> right-of). <1.00,0.90>
A: yes
Answer ((*,triangle,(&,[red],square)) --> right-of). <1.00,0.15>

11.2 Answering process

Here only deduction is needed in order to answer the yes/no questions.

11.3 Acquiring background knowledge

The needed background knowledge can be acquired from examples like
((*,{apple},{block}) --> top-of).
((*,{car},{block}) --> right-of).
((*,{car},{apple}) --> right-of).
which give evidence for the relation between top-of and right-of which we used as background knowledge. However, as always, there are many more statements which are also supported by this evidence, and especially dealing with several of them demands that the system stay focused on this task. Apart from that, induction with variable introduction is here the key mechanism as well.
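Concretely, the generalization this evidence supports, and which induction with variable introduction should eventually produce, is exactly the first background-knowledge statement used above:

((&&,((*,$A,#B) --> top-of),((*,$C,#B) --> right-of)) ==> ((*,$C,$A) --> right-of)).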

12 Reasoning about size

12.1 Example

Background knowledge: Transitivity of size:
((&&,((*,$A,#B) --> fits-in),((*,#B,$C) --> fits-in)) ==> ((*,$A,$C) --> fits-in)).
Input:
The football fits in the suitcase.
((*,{football},{suitcase}) --> fits-in).
The suitcase fits in the cupboard.
((*,{suitcase},{cupboard}) --> fits-in).
The box of chocolates is smaller than the football.
((*,{chocolate-box},{football}) --> fits-in).

Will the box of chocolates fit in the suitcase?
((*,{chocolate-box},{suitcase}) --> fits-in)?
A: yes
Answer ((*,{chocolate-box},{suitcase}) --> fits-in). <1,0.73>

12.2 Answering process

Here only deduction is needed in order to answer the yes/no questions.
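Where the 0.73 plausibly comes from, assuming the deduction truth function sketched in Section 10.3 with all frequencies at 1: confidence multiplies along the inference chain, so with the transitivity rule and the two premises each at <1, 0.9>,

c = 0.9 · 0.9 · 0.9 ≈ 0.73.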

12.3 Acquiring background knowledge

Here we expect the system to find evidence for the transitivity of the fits-in relation from examples like
((*,human,car) --> fits-in).
((*,car,street) --> fits-in).
((*,street,city) --> fits-in).
by induction with variable introduction.
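The generalization this evidence gives rise to is exactly the transitivity statement used as background knowledge above:

((&&,((*,$A,#B) --> fits-in),((*,#B,$C) --> fits-in)) ==> ((*,$A,$C) --> fits-in)).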

13 Path finding

13.1 Example

Background knowledge: A path of length 2 is defined as:
((&&,((*,#1,#2) --> starttarget),((*,#1,#B,$C) --> positioned),((*,#B,#2,$C2) --> positioned)) ==> ((*,id,$C,id,$C2) --> path)).
((&&,((*,#1,#2) --> starttarget),((*,#1,#B,$C) --> positioned),((*,#2,#B,$C2) --> positioned)) ==> ((*,id,$C,neg,$C2) --> path)).
((&&,((*,#1,#2) --> starttarget),((*,#B,#1,$C) --> positioned),((*,#B,#2,$C2) --> positioned)) ==> ((*,neg,$C,id,$C2) --> path)).
((&&,((*,#1,#2) --> starttarget),((*,#B,#1,$C) --> positioned),((*,#2,#B,$C2) --> positioned)) ==> ((*,neg,$C,neg,$C2) --> path)).
Input:
The kitchen is north of the hallway.
((*,{kitchen},{hallway},south) --> positioned).
The den is east of the hallway.
((*,{den},{hallway},west) --> positioned).
How do you go from den to kitchen?
((*,{den},{kitchen}) --> starttarget).
(?what --> path)?
A:west,north
Answer ((*,id,west,neg,south) --> path). <1.00,0.35>

13.2 Answering process

This example demanded reasoning about a user-defined definition of a path. It could easily be extended to arbitrary path lengths if the related horn clauses are input in Narsese form, but such a representation is unnatural for an AGI system, and complex pathfinding is also difficult for humans. Usually pathfinding happens implicitly, by choosing the right way to go when the situation demands it (when one reaches the crossing), instead of always having an explicit representation of the entire path in mind. If explicit large-scale planning is needed, an algorithmic planner which NARS can call through an operator may be more effective, especially since in this domain uncertainty is not even considered.

14 Reasoning about agents’ motivations

14.1 Example

John is hungry.
({john} --> [hungry]). :|:
John goes to the kitchen.
({john} --> (/,go,_,{kitchen})). :|:
John eats the apple.
({john} --> (/,eat,_,{apple})). :|:
Daniel is hungry.
({daniel} --> [hungry]). :|:
Where does Daniel go?
({daniel} --> (/,go,_,{?Where}))?
A:kitchen
Answer ({daniel} --> (/,go,_,{kitchen})). <1,0.29>
Why did Daniel go to the kitchen?
(?Why =/> ({daniel} --> (/,go,_,{kitchen})))?
A:hungry was expected, but there isn’t enough context to make this unambiguous, so NARS in this case thinks it is because John went to the kitchen, which is also valid.
Answer ((&/,({john} --> (/,go,_,{kitchen})),+3) =/> ({daniel} --> (/,go,_,{kitchen}))). <1,0.29>


14.2 Answering process

This one involves temporal induction based on
({john} --> [hungry]). :|:
and
({john} --> (/,go,_,{kitchen})). :|:
with variable introduction, which together lead to:
((&/,($1 --> [hungry]),+k) =/> ($1 --> (/,go,_,{kitchen})))
This is a possible base hypothesis to answer the first question, because from
((&/,($1 --> [hungry]),+k) =/> ($1 --> (/,go,_,{kitchen})))
and
({daniel} --> [hungry]). :|:
one can derive
({daniel} --> (/,go,_,{kitchen})). :/:
The system’s answer to the why question is also valid, because there is not enough context to exclude that John going to the kitchen was also the reason why Daniel did it.

15 Conclusion

This paper showed how to represent the QA tasks proposed in [1] in Narsese so that they can be answered by NARS. However, since an example can often be expressed in more than one way in Narsese, the above treatment is not necessarily the only possibility. Promising work to let NARS reason directly on natural language data instead can be seen in [4], and further natural language training is described in [2]. We tried all these example types with the latest stable version of the system, OpenNARS v1.6.4. If the related background knowledge is explicitly given or acquired from examples by the system, then it is able to deal with all examples in the specific domain without further pretraining, due to the semantics NAL provides. However, it cannot be assumed that all of these tasks will always be optimally solved by NARS, due to the assumption of insufficient knowledge and resources such an attention-driven system has to make. Furthermore, these examples capture only a small part of the reasoning capabilities an AGI has to have. Essential aspects of general AI systems that these examples do not capture, or capture only loosely, include attention, uncertainty reasoning, introspective inference, and decision making.


References

[1] Weston, J., Bordes, A., Chopra, S., Mikolov, T., Rush, A. M.: Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. arXiv:1502.05698 [cs.AI], New York (2015)

[2] Kilic, O.: Intelligent Reasoning on Natural Language Data: A Non-Axiomatic Reasoning System Approach. Master Thesis, Department of Computer and Information Sciences, Temple University (2015)

[3] Winograd, T.: Procedures as a Representation for Data in a Computer Program for Understanding Natural Language. Cognitive Psychology Vol. 3, No. 1, Massachusetts Institute of Technology (1972)

[4] Wang, P.: Natural Language Processing by Reasoning and Learning. Proceedings of AGI-13, pages 160-169, Beijing (2013)

[5] Wang, P.: Non-Axiomatic Reasoning System: Exploring the Essence of Intelligence. Ph.D. thesis, Indiana University (1995)

[6] Wang, P.: Rigid Flexibility: The Logic of Intelligence. Springer, Dordrecht (2006)

[7] Wang, P.: Non-Axiomatic Logic: A Model of Intelligent Reasoning. World Scientific, Singapore (2013)

