Does Location Help Daily Activity Recognition?

Chao Chen1, Daqing Zhang1, Lin Sun1, Mossaab Hariz1, and Yang Yuan2

1 Institut TELECOM/TELECOM SudParis, CNRS SAMOVAR, France
{chao.chen,daqing.zhang,lin.sun,mossaab.hariz}@it-sudparis.eu
2 School of Computer, Northwestern Polytechnical University, China

Abstract. Daily activity recognition is essential for enabling smart elderly care services, and its accuracy strongly affects the quality of the elderly care system. Although much work has been done to recognize elderly people's activities of daily living (ADL), few systems have investigated whether location information can be exploited to improve ADL recognition accuracy. In this paper, we incorporate location information into the activity recognition algorithm and examine whether it improves recognition accuracy. We propose two ways to bring location information into the picture: one is to add location at the feature level; the other is to use it to filter out irrelevant sensor readings. Extensive experiments show that bringing location information into the activity recognition algorithm in either way improves the recognition rate by around 5% on average, compared to a system that neglects location information.

1 Introduction

Elderly people's activities of daily living (ADL) are important user context in a smart home for elderly people. These activities include eating, getting in and out of bed, using the toilet, bathing or showering, dressing, using the telephone, and preparing meals. When such activities are accurately recognized, proper services can be provided accordingly [8]. Thanks to the rapid development of ubiquitous sensors (RFID, accelerometers, pressure sensors, wearable sensors, etc.) and of data mining techniques, many daily activities can be recognized by analysing sensing data [9][7][10][12][4]. The AQUEDUC project [1] aims to provide services to elderly people who live independently in a smart home environment. One common service is the reminder service, which can be provided at different levels. For example, when a fall is detected, the relevant caregivers should be alerted as soon as possible; when an abnormal sleeping pattern is observed, the caregivers can be notified afterwards. The basic requirement underlying all reminder services is to recognize the different activities of the elderly. Daily activity recognition is the procedure of interpreting low-level raw captured data to infer high-level context. Much work has been done to recognize daily activities of elderly people; however, so far few papers have taken location information into consideration [2] (location in this paper refers to sub-areas in a smart home,

M. Donnelly et al. (Eds.): ICOST 2012, LNCS 7251, pp. 83–90, 2012. © Springer-Verlag Berlin Heidelberg 2012


Fig. 1. RFID reader in the wrist (left); Tags (middle); Receiver (right)

such as kitchen, living room, etc.; hereafter we use "sub-area" and "location" interchangeably). In this paper, we investigate whether users' location in the smart home can be used to improve accuracy, and to what extent. Broadly, two categories of algorithms have been proposed to recognize daily activities: rule-based and machine-learning based. Rule-based approaches encode expert knowledge in the form of "if-then" rules and usually do not require training samples to infer context (activities) [11]; these approaches fail when too many sensors and activities are involved [11]. Machine-learning based algorithms learn models from training samples after feature representation and extraction [2][11]. One main drawback is that they usually require labelled training samples [7][8], which are tedious to produce when the data set is large. Among all the previous work, little attention has been paid to bringing location information in [2]. In this paper, we study two ways of incorporating location information into daily activity recognition for elderly people. One way is to add location as an additional feature dimension; the other is to use it to filter out irrelevant sensor readings. Considering the likely sequential relationship among activities over a short time, we further filter out unreasonable recognition results and obtain even higher accuracy. We begin by presenting our activity recognition based smart system in Section 2, followed by the recognition approach elaborated in Section 3. We then introduce the scenarios and experimental results in Section 4. Finally, we present concluding remarks in Section 5.

2 An Activity Recognition Based Smart Elderly Care System

AQUEDUC [1] is an activity recognition based smart elderly care system covering 10 daily activities for elderly people, ranging from dressing to making meals. Table 1 lists the studied daily activities, including their names and respective IDs. These 10 activities can occur in 4 sub-areas of the smart home. Table 2 lists the possible ADLs associated with each sub-area. Note that certain activities can occur in different sub-areas; for example, elderly people can drink coffee both in the kitchen and in the living room. Elderly people can also use the same objects for different activities and move with them across sub-areas. For instance, making meals and eating may happen in the kitchen and the living room respectively, yet share the same dishes and bowls.


Table 1. Studied daily activities and their IDs

ID  Activity        ID  Activity
1   dressing        6   washing hands
2   toileting       7   brushing teeth
3   making calls    8   watching TV
4   drinking        9   sleeping
5   eating          10  making meals

Table 2. Activities and their occurring sub-areas

Sub-area (ID)       Daily activities
Kitchen (1)         making meals, drinking
Bedroom (2)         sleeping, dressing
Living-room (3)     watching TV, making calls, drinking, eating
Washing-room (4)    washing hands, brushing teeth, toileting
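The associations in Table 2 can be captured as a simple mapping; the per-activity sensor sets below are purely illustrative (hypothetical tag names, not the project's actual assignment) and anticipate the SensorSet union used later in Section 3.1:

```python
# Possible activities per sub-area (from Table 2); sub-area IDs follow Table 2.
SUBAREA_ACTIVITIES = {
    1: {"making meals", "drinking"},                           # Kitchen
    2: {"sleeping", "dressing"},                               # Bedroom
    3: {"watching TV", "making calls", "drinking", "eating"},  # Living-room
    4: {"washing hands", "brushing teeth", "toileting"},       # Washing-room
}

# Illustrative per-activity sensor (tag) sets -- hypothetical names.
ACTIVITY_SENSORS = {
    "making meals": {"bowl", "dish", "pan"},
    "drinking":     {"cup"},
    "eating":       {"bowl", "dish"},
}

def subarea_sensor_set(subarea_id):
    """SensorSet of a sub-area = union of the sensor sets of all
    activities that can take place in that sub-area."""
    sensors = set()
    for activity in SUBAREA_ACTIVITIES[subarea_id]:
        sensors |= ACTIVITY_SENSORS.get(activity, set())
    return sensors
```

Note that, as the paper observes, activity sensor sets may overlap (e.g., eating and making meals share dishes and bowls), so two sub-areas can also end up sharing sensors.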

To recognize these daily activities, we deploy two types of pervasive sensors in the smart home: simple switch sensors and RFID sensors. A switch sensor records the two states of a home appliance, such as the TV or the telephone. The RFID sensor consists of a reader and many tags. When elderly people wear the reader on the wrist and the tags are attached to different objects, the RFID sensor can detect which objects the elderly person has touched, in sequence. Fig. 1 shows the RFID reader, tag, and receiver respectively. In this project, we have deployed 35 tags attached to various objects (cup, bowl, toothbrush, etc.). When elders touch any tagged object, the sensing data is automatically transferred to a server wirelessly. The format of each RFID log entry is shown in Table 3; it records which object was touched at what time.

Table 3. Data format

Action ID  Time                          Tag ID
4          [2009-4-29 19:21:27.203823]   E00700001E0E7B0A
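A log entry in the Table 3 format can be parsed as follows (a minimal sketch; the field layout is assumed from the single example shown):

```python
from datetime import datetime

def parse_rfid_log(line):
    """Parse one RFID log entry in the Table 3 format, e.g.
    '4 [2009-4-29 19:21:27.203823] E00700001E0E7B0A'
    into (action_id, timestamp, tag_id)."""
    action_id, rest = line.split(" ", 1)
    timestamp_str, tag_id = rest.rsplit("] ", 1)
    # %m and %d accept non-zero-padded values, matching '2009-4-29'.
    timestamp = datetime.strptime(timestamp_str.lstrip("["),
                                  "%Y-%m-%d %H:%M:%S.%f")
    return int(action_id), timestamp, tag_id
```

Such parsed triples are what the feature-extraction step of Section 3.1 would aggregate into per-window interaction counts.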

3 Activity Recognition Based on SVM

3.1 Feature Extraction

Features are represented as f = (s_1, s_2, ..., s_m), where m is the total number of sensors deployed in the smart home. In this project, we have deployed 35 RFID sensors and 2 switch sensors, thus m = 37. For an RFID sensor, s_i records the number of times the elderly person touched the respective object within a given time window w; for a switch sensor, s_i is 1 (ON) or 0 (OFF). We obtain the first sample upon receiving the first data log, and each subsequent sample after a further α time (w = α). For each sample, Num_min denotes the minimal value

Fig. 2. Illustration of sub-areas, activities, and sensors (each sub-area is linked via hasActivity to the activities that can happen in it, and each activity via hasSensor to its related sensors)

across all dimensions, while Num_max denotes the maximal value. We then normalize the feature using the following equation:

s_i' = (s_i − Num_min) / (Num_max − Num_min), i = 1, 2, ..., m, if Num_max ≠ Num_min;
s_i' = 1 otherwise.    (1)

We have two ways to add the sub-area information to the samples. The direct way is to append it to the end of the normalized feature as an additional dimension, giving f' = (f, loc), where loc is the integer ID of the sub-area (see Table 2), varying from 1 to 4. The feature length for each sample is then one more than the number of sensors. The other way is to use the sub-area information to filter out irrelevant sensor readings. For instance, when the elderly person is in the kitchen, even if the switch sensor records the TV "ON" signal, we still set the value of the respective dimension of f to zero for that sample. This approach filters out irrelevant signals and performs feature extraction only for sensors in the sub-area where the activity takes place (see Eq. 2); afterwards, we normalize the extracted feature using Eq. 1. The feature dimension for each sample is then exactly the number of sensors.

s_i' = s_i if the sub-area hasSensor i;
s_i' = 0 otherwise.    (2)

As can be seen from Fig. 2, each sub-area has a SensorSet. A sub-area contains several activities: hasActivity links a sub-area to the activities that can happen in it, and hasSensor links an activity to its related sensors. The sensor set of a sub-area is the union of the sensor sets of all activities taking place in it:

SensorSet{Activity1} ∪ SensorSet{Activity2} ∪ ... ∪ SensorSet{Activityj}

SensorSet{Activityi} ∩ SensorSet{Activityj} may be non-empty, meaning that different activities can share the same sensors. The SensorSets of different sub-areas can also share sensors, as some objects can be moved by the elderly people. For details of methods for locating elderly people in a smart home, please refer to [5][6].

3.2 ADL Recognition Based on SVM
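As input to the classifier, the feature construction of Section 3.1 can be sketched as follows (a minimal sketch; the raw counts and sensor-index sets are illustrative):

```python
def normalize(f):
    """Eq. 1: min-max normalize a raw feature vector of interaction counts,
    falling back to all-ones when every dimension has the same value."""
    lo, hi = min(f), max(f)
    if hi == lo:
        return [1.0] * len(f)
    return [(s - lo) / (hi - lo) for s in f]

def location_filter(f, subarea_sensor_indices):
    """Eq. 2 (LOA-SVM2): zero out readings from sensors outside the current
    sub-area; subarea_sensor_indices holds the indices of its sensors."""
    return [s if i in subarea_sensor_indices else 0 for i, s in enumerate(f)]

def append_location(f, loc):
    """LOA-SVM1: append the sub-area ID (1..4) as one extra feature dimension."""
    return f + [loc]
```

For LOA-SVM2, filtering is applied first and the result is then normalized with Eq. 1, so the feature length stays equal to the number of sensors; for LOA-SVM1, the appended ID makes it one longer.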

We feed the samples obtained by the proposed approaches into an SVM classifier [3] and evaluate its performance. All samples are fed into the SVM with a linear kernel function, and the parameter is set as c_set = [−3, −1, 1, 3, 5, 7, 9, 11], c = 2^{c_set(k)}. We denote the first approach, which adds an additional dimension to the feature, as LOA-SVM1, and the second approach as LOA-SVM2. The baseline, which does not consider location information, is denoted SVM. We perform feature extraction on all samples from 5 persons and label each sample. We use the samples from one person as the test set and those from another person as the validation set, while the samples from the remaining three persons form the training set. The procedure is shown in Algorithm 1.

Algorithm 1. Procedure of evaluation
1: for i = 1 to 5 do
2:   test_person = i
3:   validate_person = mod(i + 1, 5)  // mod(a, b) gives the remainder of a divided by b
4:   train_person = the remaining persons
5: end for
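The rotation of Algorithm 1 can be sketched as follows (a sketch: the paper's mod(i + 1, 5) yields 0 for i = 4, which is not a valid person index, so we assume the intent is a wrap-around within 1..5):

```python
def evaluation_splits(n_persons=5):
    """Rotate test/validation/train assignments over persons 1..n_persons,
    as in Algorithm 1 (assuming the modulus wraps within 1..n_persons)."""
    splits = []
    for i in range(1, n_persons + 1):
        test = i
        validate = i % n_persons + 1  # the "next" person, wrapped around
        train = [p for p in range(1, n_persons + 1)
                 if p not in (test, validate)]
        splits.append((test, validate, train))
    return splits
```

Each split yields one test person, one validation person (for choosing c), and three training persons, so every person serves as the test set exactly once.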

For the i-th test, we first train models with different c values on the samples from train_person, then choose the c value that achieves the best accuracy on validate_person. Finally, we use the trained model with the selected best c value to evaluate performance on test_person. Overall accuracy (see Eq. 3) is used to evaluate recognition performance. Accuracy(i, j), i ∈ {1, 2, 3, 4, 5}, denotes the overall accuracy of the i-th experiment for activity j, j ∈ {1, 2, ..., 10}:

Accuracy(i, j) = NumOfCorrect(i, j) / TotalNum(i, j)    (3)

where NumOfCorrect(i, j) is the number of samples correctly recognized as activity j, and TotalNum(i, j) is the number of samples labelled as j. The average overall accuracy, defined in Eq. 4, evaluates performance across all 5 test sets; we do not differentiate the test users in the evaluation:

Accuracy(j) = (Σ_{i=1}^{5} NumOfCorrect(i, j)) / (Σ_{i=1}^{5} TotalNum(i, j))    (4)

A common characteristic of activities is that people seldom change activity within a short time, especially elderly people. This helps us further improve performance, since the approaches above assign a label to each sample and


Fig. 3. Illustration of the further filtering operation (windows of consecutive samples along the time axis)

do not consider the consistency of samples along the time dimension. For example, suppose the SVM recognizes 5 consecutive samples f_1, f_2, f_3, f_4, f_5 as r_1 = dressing, r_2 = dressing, r_3 = sleeping, r_4 = dressing, r_5 = dressing. While a person is dressing, the RFID reader on the wrist may also read the tags on the pillow and the bed, causing the third sample to be wrongly recognized as sleeping. Based on this characteristic of elderly people, we can simply reject the sleeping label for the third sample. We therefore fuse the recognition results at the decision level for every n consecutive samples. We set n = 5, i.e., we fuse every 5 samples, as shown in Fig. 3 (about 10–15 seconds in the real setting). LibSVM [3] assigns each sample a score representing the probability of belonging to each activity A_j. Assume we have m activities to recognize; the output for each sample i is R(f_i) = (p_i(A_1), p_i(A_2), ..., p_i(A_m)). The final result for every n samples is obtained by the following formula:

j* = argmax_j Σ_{i=1}^{n} p_i(A_j)    (5)
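The decision-level fusion of Eq. 5 — summing per-activity probabilities over n consecutive samples and taking the argmax — can be sketched as:

```python
def fuse_window(prob_rows):
    """Eq. 5: given n rows of per-activity probabilities (one row per sample),
    return the index j of the activity with the largest summed probability."""
    n_classes = len(prob_rows[0])
    sums = [sum(row[j] for row in prob_rows) for j in range(n_classes)]
    return max(range(n_classes), key=sums.__getitem__)

# Example window of 5 samples over 3 activities; the middle sample is an
# outlier (illustrative numbers, not experimental output).
window = [
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.1, 0.8, 0.1],  # single misrecognized sample
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
]
```

The summed votes outweigh the single outlier, which is exactly how the dressing/sleeping example above is corrected.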

4 Empirical Evaluation

In the project, we asked 5 elderly people to perform all 10 daily activities in the smart home. We also designed the following two scenarios to evaluate the effectiveness of the added sub-area information.

– Scenario 1: we sample the drinking activity in both sub-areas (kitchen and living room), as the activity can take place in either.
– Scenario 2: we turn on the TV while elderly people brush their teeth in the washing room, simulating the case where they forget to turn off the TV in the living room; similarly, we leave the telephone off the hook while they dress in the bedroom.

We first compare the results of SVM, LOA-SVM1, and LOA-SVM2 to verify whether location information improves accuracy. As can be seen in Fig. 4, at least one of the location-aware methods achieves higher accuracy for each studied daily activity, which clearly demonstrates the effectiveness of exploiting location information. SVM achieves accuracy of up to 90% for making calls, drinking, watching TV, toileting, and sleeping. However, its accuracy is low for brushing teeth, making meals, and eating. For brushing teeth


Fig. 4. Performance of the three methods: accuracy (left); confusion matrix of LOA-SVM1 (right)

Fig. 5. Accuracy comparison results (SVM vs. SVM+, LoA-SVM1 vs. LoA-SVM1+, LoA-SVM2 vs. LoA-SVM2+)

activity, the reason is that the TV is always ON, which degrades performance; for making meals and eating, the two activities share many common features. LOA-SVM1 performs much better than LOA-SVM2 at recognizing making meals and eating: with the additional location information in the feature dimension, LOA-SVM1 can tell them apart, since making meals takes place in the kitchen while eating takes place in the living room. LOA-SVM2, by contrast, uses location only to filter out irrelevant sensors, and these two activities have the same SensorSet. None of the three methods can reliably distinguish washing hands from brushing teeth; their accuracies are all below 90%. This is because, whether people wash their hands or brush their teeth, they use water from the faucet and dry their hands with a tissue, and both activities happen in the same sub-area, so exploiting location information cannot help here. The confusion matrix of LOA-SVM1 provides further evidence (highlighted by the red circle in the confusion matrix in Fig. 4). We also investigate the performance of the three approaches with further filtering over every n samples; the comparison is shown in Fig. 5, where "+" denotes the respective further-filtered approach. The figures show that the "+" approaches consistently improve recognition accuracy for all activities, demonstrating the effectiveness of further filtering. Comparing SVM to SVM+, the accuracy of dressing, brushing teeth, eating, and making meals is greatly improved. LOA-SVM1+ also greatly improves the accuracy of the brushing teeth activity (Activity 7), with a small gain for washing hands. LOA-SVM2+ dramatically improves the accuracy of washing hands, making meals, and eating, with a small gain for brushing teeth. To sum up, with further


filtering, we can improve the recognition accuracy both of activities that cannot be distinguished by simply adding location information to the feature dimension (such as washing hands and brushing teeth) and of activities that share the same SensorSet (such as eating and making meals).

5 Conclusion

In this paper, we studied whether users' location information can improve the accuracy of daily activity recognition for elderly people, and to what extent. Concretely, we proposed two ways to incorporate location information: one adds it directly as a feature dimension, while the other represents the feature using only data from the current sub-area, thereby filtering out "noisy" readings. We have shown that both combined approaches achieve up to 5% higher accuracy on average than the results obtained without location information.

References

1. Aqueduc, http://aqueduc.kelcode.com/
2. Brush, A.B., Krumm, J., Scott, J.: Activity recognition research: The good, the bad, and the future. In: Pervasive 2010 Workshop (2010)
3. Chang, C.-C., Lin, C.-J.: LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2, 27:1–27:27 (2011)
4. Chen, L., Nugent, C.D., Cook, D., Yu, Z.: Knowledge-driven activity recognition in intelligent environments. Pervasive and Mobile Computing 7, 285–286 (2011)
5. Helal, S., Winkler, B., Lee, C., Kaddoura, Y., Ran, L., Giraldo, C., Kuchibhotla, S., Mann, W.: Enabling location-aware pervasive computing applications for the elderly. In: PerCom 2003, pp. 531–536 (2003)
6. Kelly, D., McLoone, S., Dishongh, T.: Enabling affordable and efficiently deployed location based smart home systems. Technol. Health Care 17, 221–235 (2009)
7. Lim, J.-H., Jang, H., Jang, J., Park, S.-J.: Daily activity recognition system for the elderly using pressure sensors. In: EMBS 2008, pp. 5188–5191 (2008)
8. Pouke, M., Hickey, S., Kuroda, T., Noma, H.: Activity recognition of the elderly. In: Proceedings of the 4th ACM International Workshop on Context-Awareness for Self-Managing Systems, pp. 7:46–7:52 (2010)
9. Song, S.-K., Jang, J.-W., Park, S.: An efficient method for activity recognition of the elderly using tilt signals of tri-axial acceleration sensor. In: Helal, S., Mitra, S., Wong, J., Chang, C.K., Mokhtari, M. (eds.) ICOST 2008. LNCS, vol. 5120, pp. 99–104. Springer, Heidelberg (2008)
10. Tapia, E.M., Intille, S.S., Larson, K.: Activity recognition in the home using simple and ubiquitous sensors. In: Ferscha, A., Mattern, F. (eds.) PERVASIVE 2004. LNCS, vol. 3001, pp. 158–175. Springer, Heidelberg (2004)
11. van Kasteren, T.: Activity Recognition for Health Monitoring Elderly using Temporal Probabilistic Models. PhD thesis, Universiteit van Amsterdam (2011)
12. Zhang, S., Ang, M., Xiao, W., Tham, C.: Detection of activities for daily life surveillance: Eating and drinking. In: HealthCom 2008, pp. 171–176 (2008)
