
Sixth-Sense: Context Reasoning for Potential Objects Detection in Smart Sensor Rich Environment

Bin Guo, Satoru Satake, Michita Imai
Keio University

3-14-1 Hiyoshi, Kohokuku, Yokohama, 223-0061, Japan
{bingo, satake, michita}@ayu.ics.keio.ac.jp

Abstract

A new system named Sixth-Sense is proposed for obtaining physical world information in a smart space. Our view is object-centered, and sensors are attached to several objects in the space. Our focus is on using the smart, sensor-attached objects to obtain information from objects without sensors, i.e., the so-called potential objects detection problem. Sixth-Sense resolves this problem through the Sixth-Sense Skeleton and inference rules. A series of inference rules are presented and illustrated.

1. Introduction

Smart space, in which devices, agents, robots and everyday objects are all expected to be seamlessly integrated and to cooperate in support of human lives, will soon become part of our daily activities [1]. However, to achieve this goal, numerous challenges need to be tackled, one of which is information acquisition from the physical world. For example, in order to perform sophisticated communication, robots need to acquire sufficient information about the object that a human is paying attention to. This paper describes a new idea for acquiring information from physical world objects.

Two main problems exist when acquiring information from objects. One is how to obtain information as extensively as possible. Related studies include the smart space using ubiquitous sensing [2], the context reasoning system for human-robot interaction [3], and knowledge acquisition based on specific sensors [4]. The second problem is how to deploy sensors when detecting objects in a region of interest, which has also been widely studied. The quality of a given placement of sensors is evaluated in [5], and a framework to minimize the number of deployed sensors for a given grid is proposed in [6].

Our view of smart space is object-centered, just as in the MediaCup project mentioned in [7]. We attach sensors to everyday objects to realize smart objects, which help us to know more clearly and rapidly what is happening in the environment. However, owing to the usage limitations of sensors and to sensor faults, in some circumstances a smart object might be undetectable; meanwhile, there also exist objects with no sensors attached. How to detect the existence of such objects and obtain information from them, which we call the potential objects detection problem, thus becomes a challenge in our smart space research.

In the present paper, we propose a physical context reasoning system named Sixth-Sense that detects the existence of potential objects and obtains information from them. We attached wireless sensor nodes named MOTE to several selected everyday objects in our smart space. The sensory data obtained from each MOTE is analyzed by the IRR (Inference Rule Reasoning) module, which executes context reasoning based on the inference rules stored in the IRB (Inference Rule Base).

2. Potential Objects Detection

2.1. Problem Description

Figure 1 gives a sample of potential objects detection. There are three sensor-attached objects in the figure. Because Book-A is placed on Book-B, the ultrasonic signal sent by Book-B is blocked by Book-A, so Book-B cannot be detected by the network; how to detect Book-B is one type of potential objects detection problem.

Figure 1. Potential objects detection sample

2.2. Problem Definitions

Definition 1 (Potential Objects Detection): It can be simply defined by the function $Y = f(X)$. (1)


Where: X denotes the sensory data set from physical objects; Y denotes the set of detected potential objects; f denotes the reasoning function based on inference rules.

Definition 2 (4D Reference Space): The whole problem domain can be described as $\langle X, Y, Z, T \rangle$ (3D space plus time), called the 4D Reference Space.

Definition 3 (Reference Objects and Potential Objects): In our smart space, the detectable sensor-attached objects are called reference objects, while the undetectable objects are called potential objects. A reference object O is described as $O = \langle Nid, Sid, S, T \rangle$.

Where: Nid and Sid denote the unique ID of O and the ID of the sensor attached to O, respectively; S is a state set; T is the observation time.

Definition 4 (State Set): State set S of reference object O can be defined as $S = \langle L, G, M, D, V, A, P, R \rangle$.

Where: L denotes the location; G denotes the slope angle, the range of which is $[0, \pi/2]$ (shown in Figure 2); M denotes the motion state; D denotes the direction of O's motion; V denotes the light sense attribute; A denotes the audition attribute; P denotes the force analysis attribute; and R denotes the action zone of the force.
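Definitions 3 and 4 map naturally onto a small data structure. The following is a minimal Python sketch of a reference object and its state set; the class names, field names and concrete field types are illustrative assumptions, not part of the Sixth-Sense implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class StateSet:
    """State set S = <L, G, M, D, V, A, P, R> of a reference object (Definition 4)."""
    location: Tuple[float, float, float]        # L: position in the 3D reference space
    slope_angle: float                          # G: slope angle in [0, pi/2]
    motion: str                                 # M: motion state, e.g. "rest" or "motion"
    direction: Optional[Tuple[float, float, float]] = None  # D: direction of movement
    light: Optional[float] = None               # V: light sense attribute
    audition: Optional[float] = None            # A: audition attribute
    force: Optional[Tuple[float, float, float]] = None      # P: sensed force (gravity excluded)
    force_zone: Optional[Tuple[float, ...]] = None           # R: action zone of the force

@dataclass
class ReferenceObject:
    """Reference object O = <Nid, Sid, S, T> (Definition 3)."""
    nid: str         # unique object ID
    sid: str         # ID of the attached sensor node (e.g. a MOTE)
    state: StateSet  # current state set S
    time: float      # observation time T
```

With two such observations of the same nid, the State-Stable/State-Alter test of Definition 5 reduces to comparing their state fields.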

Figure 2. Slope angle range of reference objects

Definition 5 (State-Stable and State-Alter): We describe reference object O as $\langle Nid_1, Sid_1, S_1, T_1 \rangle$ at $T_1$, and $\langle Nid_1, Sid_1, S_2, T_2 \rangle$ at $T_2$. If $S_1$ and $S_2$ are the same, then the state of O can be described as State-Stable during this period; otherwise as State-Alter.

Definition 6 (Down and Up Relation): For reference objects A and B, if A receives some (does not receive any) upward pull force, the upward (downward) projection of A falls within the range of the projection of B, and there is no other reference object between them, then we consider that a relation exists between A and B, which we describe as $R(A, B) = \langle down(up), A, B, t \rangle$.
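Under one reading of this definition, and with objects simplified to axis-aligned footprints, the relation test can be sketched as follows; the footprint representation, helper names and the exact pairing of the two cases are assumptions for illustration only.

```python
from typing import Optional, Tuple

# Axis-aligned footprint on the X-Y plane: ((x_min, x_max), (y_min, y_max)).
Footprint = Tuple[Tuple[float, float], Tuple[float, float]]

def within(a: Footprint, b: Footprint) -> bool:
    """True if footprint a lies entirely inside footprint b (vertical projection test)."""
    (ax0, ax1), (ay0, ay1) = a
    (bx0, bx1), (by0, by1) = b
    return bx0 <= ax0 <= ax1 <= bx1 and by0 <= ay0 <= ay1 <= by1

def down_up_relation(a: Footprint, b: Footprint,
                     a_receives_upward_pull: bool,
                     other_object_between: bool) -> Optional[str]:
    """One reading of Definition 6: the label of R(A, B) = <down(up), A, B, t>.

    'down' -- A receives an upward pull and its projection nests in B's (A hangs below B).
    'up'   -- A receives no upward pull and its projection nests in B's (A rests above B).
    None   -- no relation (another reference object in between, or projections do not nest).
    """
    if other_object_between or not within(a, b):
        return None
    return "down" if a_receives_upward_pull else "up"
```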

2.3. Sixth-Sense Architecture

The whole architecture of Sixth-Sense is shown in Figure 3. The first two layers, the physical layer and the sensor deployment layer, correspond to the input X in (1), while the application layer represents the output Y; the context reasoning layer acts as the function f.

Physical Layer: This layer involves various everyday objects and sensors. Most of the objects are easily found in a typical home, such as books, tables, or toys.

Sensor Deployment Layer: As mentioned, not all objects can be attached with sensors and directly sensed at any time. Selecting proper objects to act as reference objects is this layer's work, and two aspects are considered. First, whether an everyday object should be selected as a reference object is determined by the specific application domain; for instance, if we are interested in assisting a human's work, then a notebook or a cup will have higher priority. Second, the detectability of an object is also important; for example, a bowl stored in a cupboard should not be selected as a reference object, because the radio waves it sends cannot be received by the sensor network. These two considerations make up the main sensor deployment strategy of Sixth-Sense. The selected reference objects form a hardware structure of the smart space, called the Sixth-Sense Skeleton. Figure 4 shows some examples of MOTE-attached reference objects.
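A minimal sketch of this two-step selection strategy is shown below; the priority table, the detectability callback and the threshold are placeholders standing in for whatever a concrete deployment would use.

```python
def select_reference_objects(objects, domain_priority, is_detectable, min_priority=1):
    """Pick candidate reference objects for the Sixth-Sense Skeleton (illustrative only).

    objects         : iterable of candidate everyday objects
    domain_priority : dict mapping an object to its priority in the target application domain
    is_detectable   : callable returning True if a sensor attached to the object could
                      actually be read by the network (e.g. not shut inside a cupboard)
    """
    selected = []
    for obj in objects:
        if domain_priority.get(obj, 0) >= min_priority and is_detectable(obj):
            selected.append(obj)
    # Higher-priority objects first, so they are instrumented before lower-priority ones.
    return sorted(selected, key=lambda o: domain_priority.get(o, 0), reverse=True)

# Example: in an "assist human work" scenario, notebooks and cups rank high,
# while a bowl inside a cupboard is rejected by the detectability test.
priority = {"notebook": 3, "cup": 2, "bowl": 2, "toy": 1}
print(select_reference_objects(priority.keys(), priority,
                               is_detectable=lambda o: o != "bowl"))
```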

Figure 3. Sixth-sense architecture

Figure 4. Sensor attached reference objects

Context Reasoning Layer: This layer accepts the input sensory data from the MOTE nodes, which is then used for context reasoning. Here, we give a detailed definition of context, which actually includes two levels of content: the first level is the state of the object itself (e.g., what is the object's motion state?); the second level consists of the possible relations between this object and others (e.g., is the object located on the table?). By analyzing the sensory data from the reference objects, the current contexts about the objects themselves and the relations between them can be understood. IRR, the core module of this layer, then reasons over these contexts based on the inference rules read from the IRB. The final reasoning result is sent to the next layer.
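The paper does not detail the IRR implementation; the sketch below shows one plausible shape of the reasoning step, with the IRB modeled as a list of condition/conclusion rules over a dictionary of facts. All names and the fact encoding are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A context is a simple dictionary of facts derived from MOTE sensory data,
# e.g. {"R(up, Book_B, Table)": True, "slope(Book_B)": 0.3} (fact names are illustrative).
Context = Dict[str, object]

@dataclass
class InferenceRule:
    """One IRB entry: if condition(context) holds, conclusion(context) yields new facts."""
    name: str
    condition: Callable[[Context], bool]
    conclusion: Callable[[Context], Context]

def irr_step(context: Context, rule_base: List[InferenceRule]) -> Context:
    """Single IRR pass: apply every rule whose condition is met and merge the derived facts."""
    derived: Context = {}
    for rule in rule_base:
        if rule.condition(context):
            derived.update(rule.conclusion(context))
    return derived
```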

Application Layer: This layer is formed by a variety of applications. As one of the basic problems in physical world information acquisition, potential objects detection has wide application areas, some of which are listed in Figure 4.

Within the whole architecture, our focus is mainly on the context reasoning layer. In the next section, we describe the inference rules in detail.

3. Inference Rules Building and Reasoning

3.1. Inference Rules


In Sixth-Sense, the inference rules mainly come from common sense and physics. According to the different knowledge they concern, we classify the inference rules into four classes (shown in Table 1). The first two kinds of rules are based on static context and are called 3D Static Rules, while the last two kinds are derived from dynamic context and are called 4D Dynamic Rules.

Table 1. Classification of inference rules

Type                  | Description
Location Relation     | Inference based on specific location context between reference objects.
Force Relation        | Inference by the force context.
Motion State Change   | Inference according to the motion state change context from the reference object.
Senders and Receivers | Several reference objects make up sender-receiver pairs deployed in a specific model; inference is based on the dynamic context change between them.

3.2. Building Inference Rules

In this section, we will describe the inference rules in detail according to the classification presented in Table 1.

(1) Location Relation

Inference Rule 3D-L1: A relation $R\langle up, A, B, t_i \rangle$ exists between reference objects A and B at time $t_i$; the slope angle of A is 0 or $\pi/2$; besides, there is no touch point between A and B; then we infer that there is an object C between them (illustrated by Figure 1, formalized by (2)).

$R\langle up, A, B, t_i \rangle \wedge (gets\_G(A) \in \{0, \pi/2\}) \wedge (gets\_lowL\_z(A) \ne gets\_topL\_z(B)) \rightarrow exist(C, between(A, B))$ (2)

Where: $gets\_lowL\_z(Obj)$ and $gets\_topL\_z(Obj)$ are used to get object Obj's lowest and highest projection points on the Z axis in the 3D space; $exist(Obj, Scope)$ indicates the correlative information of the newly inferred object Obj, in which Scope denotes the location range of Obj.

Inference Rule 3D-L2: A relation $R\langle up, A, B, t_i \rangle$ exists between reference objects A and B at time $t_i$; moreover, the slope angle of A is within $(0, \pi/2)$; then we infer that there is an object C between A and B (illustrated by the left of Figure 5, formalized by (3)).

$R\langle up, A, B, t_i \rangle \wedge (gets\_G(A) \in (0, \pi/2)) \rightarrow exist(C, between(A, B))$ (3)

Where: $gets\_G(Obj)$ is used to get the slope angle of object Obj.
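The two location rules (2) and (3) could be evaluated along the following lines; the function names are stand-ins for the paper's accessors, the geometry is reduced to scalar z-projections, and the tolerance eps is an assumption.

```python
import math
from typing import Optional

def rule_3d_l1(up_relation_holds: bool, slope_a: float,
               low_z_a: float, top_z_b: float, eps: float = 1e-3) -> Optional[str]:
    """Rule 3D-L1 (formula (2)): <up, A, B, t> holds, A lies flat or vertical
    (slope 0 or pi/2), and A's lowest z-projection differs from B's top z-projection
    (no touch point) -> infer an object C between A and B."""
    flat_or_vertical = min(abs(slope_a), abs(slope_a - math.pi / 2)) < eps
    no_touch = abs(low_z_a - top_z_b) > eps
    if up_relation_holds and flat_or_vertical and no_touch:
        return "object C between A and B"
    return None

def rule_3d_l2(up_relation_holds: bool, slope_a: float, eps: float = 1e-3) -> Optional[str]:
    """Rule 3D-L2 (formula (3)): <up, A, B, t> holds and A's slope angle lies strictly
    inside (0, pi/2) -> infer an object C between A and B."""
    if up_relation_holds and eps < slope_a < math.pi / 2 - eps:
        return "object C between A and B"
    return None

# Example: Book_B rests on Table but is tilted by roughly 0.3 rad, so 3D-L2 fires.
print(rule_3d_l2(up_relation_holds=True, slope_a=0.3))
```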

(2) Force Relation

Inference Rule 3D-F1: If reference object A is State-Stable and suffers a downward pressure in scope Sp (the action zone of the force), and there is no other reference object x in scope Sp that can form the relation $\langle up, x, A, t \rangle$ with A, then there will be an object B on A (illustrated by the right of Figure 5, formalized by (4)).

$State\_Stable(A) \wedge (direction(gets\_P(A)) == down) \wedge (\forall \langle up, x, A, T \rangle : gets\_L(x) \notin gets\_R(A)) \rightarrow exist(B, on(gets\_R(A)))$ (4)

Where: $gets\_P(Obj)$ is used to get the force that Obj suffers (gravity excluded); $gets\_L(Obj)$ is used to get the location of Obj; $gets\_R(Obj)$ is used to get the action zone of the force Obj suffers; $direction(F_i)$ is used to get the direction of force $F_i$.
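An illustrative translation of formula (4) follows; the force action zone is simplified to an axis-aligned rectangle and the already-known "up" objects are passed in as (x, y) locations, both of which are assumptions made for the sketch.

```python
from typing import Iterable, Optional, Tuple

def rule_3d_f1(a_is_state_stable: bool,
               downward_pressure_on_a: bool,
               zone_of_force: Tuple[Tuple[float, float], Tuple[float, float]],
               up_objects_on_a: Iterable[Tuple[float, float]]) -> Optional[str]:
    """Rule 3D-F1 (formula (4)): A is State-Stable, suffers a downward pressure in zone Sp,
    and no reference object x with relation <up, x, A, t> is located inside Sp
    -> infer an object B on A (within Sp).

    up_objects_on_a: (x, y) locations of reference objects already known to be on A.
    """
    if not (a_is_state_stable and downward_pressure_on_a):
        return None
    (x0, x1), (y0, y1) = zone_of_force
    for (x, y) in up_objects_on_a:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return None  # the pressure is already explained by a known reference object
    return "object B on A, within the force's action zone"
```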

Figure 5. Inference rule samples for 3D-L2, 3D-F1

(3) Motion State Change

Inference Rule 4D-M1: If there is a State-Alter of reference object A between time $T_1$ and $T_2$ (from rest to motion), then there will be an object B in the inverse direction of A's movement (illustrated by the left of Figure 6, formalized by (5)).

$(gets\_M(A, T_1) == rest) \wedge (gets\_M(A, T_2) == motion) \rightarrow exist(B, after(A, gets\_D(A, T_2)))$ (5)

Where: $gets\_M(Obj, T)$ is used to get the motion state of Obj at time T; $gets\_D(Obj, T)$ is used to get the direction of Obj's movement at T.
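A minimal sketch of formula (5), with the movement direction reduced to a 2-D vector; the string encoding of motion states is an assumption.

```python
from typing import Optional, Tuple

def rule_4d_m1(motion_t1: str, motion_t2: str,
               direction_t2: Tuple[float, float]) -> Optional[Tuple[float, float]]:
    """Rule 4D-M1 (formula (5)): A changes from rest (T1) to motion (T2)
    -> infer an object B in the inverse direction of A's movement.
    Returns the direction in which to look for B, or None if the rule does not fire."""
    if motion_t1 == "rest" and motion_t2 == "motion":
        dx, dy = direction_t2
        return (-dx, -dy)  # B lies opposite to the movement direction
    return None

# Example: Box starts moving along +X, so the colliding object is expected on the -X side.
print(rule_4d_m1("rest", "motion", (1.0, 0.0)))
```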

(4) Senders and Receivers

Inference Rule 4D-SR: For reference objects A and B, B is a State-Stable light source and A can sense the light from B. If there is a State-Alter of A's light sense attribute from $T_1$ to $T_2$ (from bright to dark, or from dark to bright), then there will be an object C between A and B (illustrated by the right of Figure 6, formalized by (6)).

$State\_Stable(B) \wedge (gets\_V(A, T_1) \ne gets\_V(A, T_2)) \rightarrow exist(C, between(A, B))$ (6)

Where: $gets\_V(Obj, T)$ is used to get the light sense attribute of object Obj at time T.
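A sketch of formula (6), with the light sense attribute reduced to a boolean brightness flag; the thresholding that would produce such a flag from raw sensor readings is left out and assumed.

```python
from typing import Optional

def rule_4d_sr(b_is_stable_light_source: bool,
               a_bright_t1: bool, a_bright_t2: bool) -> Optional[str]:
    """Rule 4D-SR (formula (6)): B is a State-Stable light source and A's light sense
    attribute differs between T1 and T2 (bright->dark or dark->bright)
    -> infer an object C between A and B (something blocks, or stops blocking, the light)."""
    if b_is_stable_light_source and a_bright_t1 != a_bright_t2:
        return "object C between A and B"
    return None
```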

Figure 6. Inference rule samples for 4D-M1, 4D-SR


4. Experiment and Result

We select two experiment situations, shown in Figure 7, which separately concern static and dynamic reasoning.

Figure 7. Two experiment situations

In the first situation, we attached an ultrasonic location sensor and a 2-axis acceleration sensor to Book_B, and two other location sensors to Table. Our target is to detect the existence of Book_A. Through the location sensors we can infer the location context that Book_B is on Table. Moreover, by using the acceleration sensor, we can estimate the slope angle of Book_B, since different slope angles yield different acceleration values. First, we measured the acceleration values when Book_B was deployed flat. Owing to noise, the measured results were bounded in two intervals: the X-axis in $[-1.1g, -0.7g]$ and the Y-axis in $[0.5g, 0.75g]$ (the available range is $\pm 2g$). Thus, if a measured X or Y value does not fall in these intervals, Book_B has a slope angle, which gives us the second context. With these two contexts, the conditions of inference rule 3D-L2 are met, and we can then detect the existence of the potential object Book_A and its location (between Table and Book_B). The result is shown in Figure 8.
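A small sketch of the interval test described above; the calibration intervals are the ones reported for this experiment and would of course differ for other sensors and mountings.

```python
def book_b_is_inclined(ax_g: float, ay_g: float) -> bool:
    """Return True if the 2-axis acceleration reading (in units of g) falls outside the
    calibration intervals measured while Book_B lay flat, i.e. if a slope angle is present.

    Calibration intervals from the experiment: X in [-1.1g, -0.7g], Y in [0.5g, 0.75g]."""
    x_flat = -1.1 <= ax_g <= -0.7
    y_flat = 0.5 <= ay_g <= 0.75
    return not (x_flat and y_flat)

# A tilted Book_B shifts the readings out of the flat-pose intervals, so 3D-L2 can fire.
print(book_b_is_inclined(-0.4, 0.9))  # True -> slope angle context detected
```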

Figure 8. Situation-1 result: the black squares make up the variation range of the acceleration values when Book-B is deployed flat, while the red circles indicate the values when Book-B is deployed at an incline.

In the second situation, we attached a location sensor to Box, and our purpose is to detect the existence of Toy Car. At first, Toy Car was moving towards Box. At some point it knocked into Box and stopped; meanwhile, Box moved a short distance because of the impulsive force and then stopped. From the location sensor we acquired the two locations of Box before and after the movement, which can be viewed as a displacement context. With this context we can use inference rule 4D-M1 to infer the existence of Toy Car and its probable location (in the inverse direction of Box's movement). The experiment result is shown in Figure 9.
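The displacement context used here can be obtained by thresholding consecutive location readings, as in the sketch below; the threshold value and units are assumptions.

```python
import math
from typing import Tuple

def displacement_context(loc_before: Tuple[float, float],
                         loc_after: Tuple[float, float],
                         threshold: float = 0.05) -> bool:
    """True if Box moved farther than `threshold` (metres, assumed) between two readings,
    i.e. the rest -> motion State-Alter that allows rule 4D-M1 to fire."""
    dx = loc_after[0] - loc_before[0]
    dy = loc_after[1] - loc_before[1]
    return math.hypot(dx, dy) > threshold
```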

Figure 9. Situation-2 result: the X and Y values change noticeably between time 12 and 14, which indicates the displacement context of Box.

5. Conclusion and Future Work

This paper proposed a potential objects detection system named Sixth-Sense, which can be used to obtain information from objects in the smart space. We attached MOTEs to several selected objects to form the Sixth-Sense Skeleton, by which we can obtain context from these objects. The context is then analyzed using the inference rules. However, currently we can only detect the existence and probable location of a potential object. Considering that the rich information from the reference objects has not been sufficiently used, in the future we will work to obtain more information about the derived object, and even to estimate what type of object it might be.

References

[1] H. Chen, S. Tolia, and C. Sayers, "Creating Context-Aware Software Agents", GSFC/JPL Workshop on Radical Agent Concepts, NASA GSFC, Greenbelt, Sep. 2001, pp. 26-28.

[2] I. A. Essa, "Ubiquitous sensing for smart and aware environments", IEEE Personal Communications, Oct. 2000.

[3] M. Imai, "Human-Robot communication based on sensor information", SICE Annual Conference, Okayama, 2005.

[4] C. Randell and H. Muller, "Context awareness by analysing accelerometer data", Fourth International Symposium on Wearable Computers, 2000, pp. 175-176.

[5] S. Meguerdichian, S. Slijepcevic, and V. Karayan, "Coverage problems in wireless ad-hoc sensor networks", INFOCOM 2001, Anchorage, USA, 2001, pp. 1380-1387.

[6] S. S. Dhillon, K. Chakrabarty, and S. S. Iyengar, "Sensor placement algorithms for grid coverage", Proc. International Conference on Information Fusion, 2002, pp. 1581-1587.

[7] M. Beigl, H. W. Gellersen, and A. Schmidt, "Mediacups: Experience with design and use of computer-augmented everyday objects", Computer Networks, 2001, pp. 401-409.
