Transfer Learning for Semi-Supervised Collaborative Recommendation

Weike Pan1, Qiang Yang2∗, Yuchao Duan1 and Zhong Ming1∗
[email protected], [email protected], [email protected], [email protected]

1 College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China

2 Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China


Introduction

Problem Definition

Semi-Supervised Collaborative Recommendation (SSCR)

Input:
- Labeled feedback (or explicit feedback) R = {(u, i, r_ui)}: each rating r_ui is a real-valued label, and the corresponding (user, item) pair (u, i) is a featureless instance.
- Unlabeled feedback (or implicit feedback) O = {(u, i)}: each (user, item) pair (u, i) is an unlabeled instance without supervised information.

Goal: predict the preference of each (user, item) pair in the test data R^te.


Challenges

- The heterogeneity challenge: how to integrate two different types of feedback (explicit and accurate preferences vs. implicit and uncertain preferences).
- The uncertainty challenge: how to identify likely-positive feedback from the unlabeled feedback, which carries high uncertainty w.r.t. users' true preferences.


Overview of Our Solution

We map the SSCR problem to the transfer learning paradigm and design an iterative algorithm, Self-Transfer Learning (sTL), containing two basic steps:

1. Knowledge flow from the unlabeled feedback to the labeled feedback: we integrate the identified likely-positive unlabeled feedback into the learning task of the labeled feedback.

2. Knowledge flow from the labeled feedback to the unlabeled feedback: we use the tentatively learned model to further identify likely-positive unlabeled feedback.


Advantages of Our Solution

- The unlabeled-to-labeled knowledge flow and the labeled-to-unlabeled knowledge flow address the heterogeneity challenge and the uncertainty challenge, respectively.
- The iterative algorithm is able to achieve sufficient knowledge transfer between labeled feedback and unlabeled feedback.


Notations (1/2)

Table: Some notations (part 1).

n                       user number
m                       item number
u                       user ID
i, i′                   item ID
r_ui                    observed rating of (u, i)
r̂_ui                    predicted rating of (u, i)
R = {(u, i, r_ui)}      labeled feedback (training)
O = {(u, i)}            unlabeled feedback (training)
R^te = {(u, i, r_ui)}   labeled feedback (test)
Ĩ_u = {i}               examined items by user u


Notations (2/2)

Table: Some notations (part 2).

µ ∈ ℝ                        global average rating value
b_u ∈ ℝ                      user bias
b_i ∈ ℝ                      item bias
d                            number of latent dimensions
U_u· ∈ ℝ^{1×d}               user-specific feature vector
U ∈ ℝ^{n×d}                  user-specific feature matrix
V_i·, W^(s)_i′· ∈ ℝ^{1×d}    item-specific feature vectors
V, W^(s) ∈ ℝ^{m×d}           item-specific feature matrices
T, L                         iteration numbers


Method

Prediction Rule of sTL

The predicted preference of user $u$ on item $i$ is

$$\hat{r}_{ui}^{(\ell)} = \mu + b_u + b_i + U_{u\cdot} V_{i\cdot}^T + \sum_{s=0}^{\ell} \bar{\tilde{U}}_{u\cdot}^{(s)} V_{i\cdot}^T, \qquad (1)$$

where $\bar{\tilde{U}}_{u\cdot}^{(s)} = \frac{1}{\sqrt{|\tilde{I}_u^{(s)}|}} \sum_{i' \in \tilde{I}_u^{(s)}} W_{i'\cdot}^{(s)}$, with $\tilde{I}_u^{(0)} = \tilde{I}_u$ and $\tilde{I}_u^{(s)} \subseteq \tilde{I}_u$.

Note that when $\ell = 0$, the above prediction rule is exactly the same as that of SVD++.
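To make Eq. (1) concrete, below is a minimal numpy sketch. The array layout and names are our own assumptions for illustration, not the authors' implementation: U is an n×d matrix, W is a list of the m×d matrices W^(0), ..., W^(ℓ), and I_tilde[s][u] holds the item list Ĩ_u^(s).

```python
import numpy as np

def predict(u, i, mu, b_u, b_i, U, V, W, I_tilde):
    """Eq. (1): mu + b_u + b_i + U_u V_i^T + sum_s U_bar^(s) V_i^T."""
    r_hat = mu + b_u[u] + b_i[i] + U[u].dot(V[i])
    for s in range(len(W)):
        items = I_tilde[s][u]
        if len(items) > 0:
            # U_bar^(s) = (1 / sqrt(|I~_u^(s)|)) * sum_{i' in I~_u^(s)} W^(s)_{i'}
            U_bar = W[s][items].sum(axis=0) / np.sqrt(len(items))
            r_hat += U_bar.dot(V[i])
    return r_hat
```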


Objective Function of sTL

The optimization problem is

$$\min_{\mathcal{I}^{(\ell)}, \Theta^{(\ell)}} \sum_{u=1}^{n} \sum_{i=1}^{m} y_{ui} \left[ \frac{1}{2} \left( r_{ui} - \hat{r}_{ui}^{(\ell)} \right)^2 + \mathrm{reg}(\Theta^{(\ell)}) \right], \qquad (2)$$

where $\mathcal{I}^{(\ell)} = \{\tilde{I}_u^{(s)}\}_{s=0}^{\ell}$ are the likely-to-prefer items to be identified and $\Theta^{(\ell)} = \{\mu, b_u, b_i, U_{u\cdot}, V_{i\cdot}, W_{i\cdot}^{(s)}\}_{s=0}^{\ell}$ are the model parameters to be learned. The regularization term

$$\mathrm{reg}(\Theta^{(\ell)}) = \frac{\lambda}{2} \|U_{u\cdot}\|^2 + \frac{\lambda}{2} \|V_{i\cdot}\|^2 + \frac{\lambda}{2} \|b_u\|^2 + \frac{\lambda}{2} \|b_i\|^2 + \frac{\lambda}{2} \sum_{s=0}^{\ell} \sum_{i' \in \tilde{I}_u^{(s)}} \|W_{i'\cdot}^{(s)}\|^2 + \frac{\lambda}{2} \sum_{s=1}^{\ell} \sum_{i' \in \tilde{I}_u^{(s)}} \|W_{i'\cdot}^{(s)} - W_{i'\cdot}^{(0)}\|^2$$

is used to avoid overfitting. In particular, the term $\sum_{s=1}^{\ell} \sum_{i' \in \tilde{I}_u^{(s)}} \|W_{i'\cdot}^{(s)} - W_{i'\cdot}^{(0)}\|^2$ constrains $W_{i'\cdot}^{(s)}$ to be similar to $W_{i'\cdot}^{(0)}$, which is helpful to avoid overfitting when $W_{i'\cdot}^{(s)}$ is associated with insufficient training data, i.e., when $|\tilde{I}_u^{(s)}|$ is small.
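For reference, here is a sketch of the per-record objective $g_{ui}$ from Eq. (2), under the same assumed array layout as the prediction sketch above; pointwise_loss and its arguments are hypothetical names, and it reuses the predict() sketch.

```python
import numpy as np

def pointwise_loss(u, i, r_ui, mu, b_u, b_i, U, V, W, I_tilde, lam):
    """g_ui = 0.5 * (r_ui - r_hat)^2 + reg(Theta), as in Eq. (2)."""
    err = r_ui - predict(u, i, mu, b_u, b_i, U, V, W, I_tilde)
    reg = 0.5 * lam * (U[u].dot(U[u]) + V[i].dot(V[i]) + b_u[u] ** 2 + b_i[i] ** 2)
    for s in range(len(W)):
        items = I_tilde[s][u]
        if len(items) > 0:
            reg += 0.5 * lam * (W[s][items] ** 2).sum()
            if s > 0:  # the similarity constraint starts at s = 1
                reg += 0.5 * lam * ((W[s][items] - W[0][items]) ** 2).sum()
    return 0.5 * err ** 2 + reg
```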


Learning the sTL (1/3)

For the first step of unlabeled-to-labeled knowledge flow, we adopt a gradient descent algorithm to learn the model parameters. We denote $g_{ui} = \frac{1}{2} (r_{ui} - \hat{r}_{ui}^{(\ell)})^2 + \mathrm{reg}(\Theta^{(\ell)})$ and have the gradient

$$\frac{\partial g_{ui}}{\partial \theta} = -(r_{ui} - \hat{r}_{ui}^{(\ell)}) \frac{\partial \hat{r}_{ui}^{(\ell)}}{\partial \theta} + \frac{\partial \mathrm{reg}(\Theta^{(\ell)})}{\partial \theta}, \qquad (3)$$

where $\theta$ can be $\mu$, $b_u$, $b_i$, $U_{u\cdot}$, $V_{i\cdot}$ and $W_{i'\cdot}^{(s)}$. The gradients thus include

$$\frac{\partial g_{ui}}{\partial \mu} = -e_{ui}, \qquad \frac{\partial g_{ui}}{\partial b_u} = -e_{ui} + \lambda b_u, \qquad \frac{\partial g_{ui}}{\partial b_i} = -e_{ui} + \lambda b_i,$$

$$\frac{\partial g_{ui}}{\partial U_{u\cdot}} = -e_{ui} V_{i\cdot} + \lambda U_{u\cdot}, \qquad \frac{\partial g_{ui}}{\partial V_{i\cdot}} = -e_{ui} \Big( U_{u\cdot} + \sum_{s=0}^{\ell} \bar{\tilde{U}}_{u\cdot}^{(s)} \Big) + \lambda V_{i\cdot},$$

$$\frac{\partial g_{ui}}{\partial W_{i'\cdot}^{(s)}} = -e_{ui} \frac{1}{\sqrt{|\tilde{I}_u^{(s)}|}} V_{i\cdot} + \lambda W_{i'\cdot}^{(s)} + \lambda \big( W_{i'\cdot}^{(s)} - W_{i'\cdot}^{(0)} \big),$$

with $i' \in \tilde{I}_u^{(s)}$, $s = 0, \ldots, \ell$. Note that $e_{ui} = r_{ui} - \hat{r}_{ui}^{(\ell)}$ denotes the difference between the true rating and the predicted rating.


Learning the sTL (2/3)

We then have the update rule for each model parameter,

$$\theta = \theta - \gamma \frac{\partial g_{ui}}{\partial \theta}, \qquad (4)$$

where $\theta$ again can be $\mu$, $b_u$, $b_i$, $U_{u\cdot}$, $V_{i\cdot}$ and $W_{i'\cdot}^{(s)}$, and $\gamma > 0$ is the step size or learning rate used when updating the model parameters.
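Putting Eqs. (3) and (4) together, one stochastic update step might look as follows. This is a sketch under the same hypothetical array layout as before, not the authors' code; it reuses the predict() sketch, and all gradients are computed before any parameter is touched.

```python
import numpy as np

def sgd_step(u, i, r_ui, mu, b_u, b_i, U, V, W, I_tilde, lam, gamma):
    """One SGD update (Eqs. (3)-(4)) for a labeled record (u, i, r_ui).

    Mutates b_u, b_i, U, V, W in place and returns the updated mu.
    """
    e = r_ui - predict(u, i, mu, b_u, b_i, U, V, W, I_tilde)  # e_ui

    # sum_s U_bar^(s), needed for the gradient w.r.t. V_i.
    U_bar_sum = np.zeros_like(U[u])
    for s in range(len(W)):
        items = I_tilde[s][u]
        if len(items) > 0:
            U_bar_sum += W[s][items].sum(axis=0) / np.sqrt(len(items))

    # Gradients from Eq. (3).
    g_mu = -e
    g_bu = -e + lam * b_u[u]
    g_bi = -e + lam * b_i[i]
    g_U = -e * V[i] + lam * U[u]
    g_V = -e * (U[u] + U_bar_sum) + lam * V[i]
    g_W = []
    for s in range(len(W)):
        items = I_tilde[s][u]
        g = -e * V[i] / np.sqrt(max(len(items), 1)) + lam * W[s][items]
        if s > 0:  # the (W^(s) - W^(0)) term vanishes for s = 0
            g = g + lam * (W[s][items] - W[0][items])
        g_W.append(g)

    # Eq. (4): theta <- theta - gamma * dg/dtheta.
    mu -= gamma * g_mu
    b_u[u] -= gamma * g_bu
    b_i[i] -= gamma * g_bi
    U[u] -= gamma * g_U
    V[i] -= gamma * g_V
    for s in range(len(W)):
        items = I_tilde[s][u]
        if len(items) > 0:
            W[s][items] -= gamma * g_W[s]
    return mu
```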


Learning the sTL (3/3)

For the second step of labeled-to-unlabeled knowledge flow, we use the latest learned model parameters and the accumulated identified items, i.e., $\Theta^{(s)}$ and $\mathcal{I}^{(s)}$, to construct $\tilde{I}_u^{(s+1)}$ for each user $u$:

- we estimate the preference of user $u$ on item $i$, i.e., $\hat{r}_{ui}^{(s)}$, for each $(u, i) \in O$ via the prediction rule in Eq. (1);
- we remove the (user, item) pair $(u, i)$ from $O$ and put the item $i$ in $\tilde{I}_u^{(s+1)}$ if $\hat{r}_{ui}^{(s)} > r_0$, where $r_0$ is a predefined threshold.

Note that with the newly identified item set $\tilde{I}_u^{(s+1)}$, we can integrate it into the learning task of labeled feedback again.
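A small sketch of this identification step, under the same assumptions as the earlier sketches; the function name and the dict-based return format are ours, and it reuses the hypothetical predict() sketch.

```python
def identify_likely_positive(O, r0, mu, b_u, b_i, U, V, W, I_tilde):
    """Move likely-positive pairs out of O; return {u: [items]} for I~_u^(s+1)."""
    new_items = {}
    for (u, i) in list(O):
        if predict(u, i, mu, b_u, b_i, U, V, W, I_tilde) > r0:
            O.remove((u, i))                      # remove the pair from O
            new_items.setdefault(u, []).append(i)  # put item i in I~_u^(s+1)
    return new_items
```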


Algorithm (1/2)

1: Input: Labeled and unlabeled feedback R, O; tradeoff parameter λ, threshold r0, latent dimension number d, and iteration numbers L, T.
2: Output: Learned model parameters Θ^(L) and identified likely-to-prefer items Ĩ_u^(s), s = 1, ..., L.
3: Initialization: Initialize the item set Ĩ_u^(0) = Ĩ_u for each user u.
4: for ℓ = 0, ..., L do
5:     (see the details on the next slide)
6: end for


Algorithm (2/2)

1: // Step 1: Unlabeled-to-labeled knowledge flow
2: Set the learning rate γ = 0.01 and initialize the model parameters Θ^(ℓ)
3: for t = 1, ..., T do
4:     for t2 = 1, ..., |R| do
5:         Randomly pick a rating record (u, i, r_ui) from R
6:         Calculate the gradients ∂g_ui/∂θ
7:         Update the model parameters θ
8:     end for
9:     Decrease the learning rate γ ← γ × 0.9
10: end for
11: // Step 2: Labeled-to-unlabeled knowledge flow
12: if ℓ < L then
13:     for u = 1, ..., n do
14:         Predict the preference r̂_ui′^(ℓ) for i′ ∈ Ĩ_u \ ∪_{s=1}^{ℓ} Ĩ_u^(s)
15:         Select the likely-to-prefer items i′ with r̂_ui′ > r0 and save them as Ĩ_u^(ℓ+1)
16:     end for
17: end if
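The two steps can be tied together in an outer loop as sketched below, reusing the hypothetical predict/sgd_step/identify_likely_positive sketches above; all names and the data layout are our assumptions. For brevity the biases are initialized to zero here; the statistics-based initialization used in the experiments appears on the "Initialization of Model Parameters" slide.

```python
import random
import numpy as np

def train_sTL(R, O, examined, n, m, d, lam, r0, L, T):
    """Outer-loop sketch of sTL (Algorithms 1/2 and 2/2).

    R: list of (u, i, r_ui); O: set of (u, i); examined[u]: examined items I~_u.
    """
    I_tilde = [[list(examined.get(u, [])) for u in range(n)]]  # I~_u^(0) = I~_u
    for ell in range(L + 1):
        # Step 1: unlabeled-to-labeled knowledge flow (re-train on R).
        mu = np.mean([r for (_, _, r) in R])
        b_u, b_i = np.zeros(n), np.zeros(m)
        U = (np.random.rand(n, d) - 0.5) * 0.01
        V = (np.random.rand(m, d) - 0.5) * 0.01
        W = [(np.random.rand(m, d) - 0.5) * 0.01 for _ in range(ell + 1)]
        gamma = 0.01
        for t in range(T):
            for (u, i, r_ui) in random.sample(R, len(R)):  # random order over R
                mu = sgd_step(u, i, r_ui, mu, b_u, b_i, U, V, W,
                              I_tilde, lam, gamma)
            gamma *= 0.9  # decrease the learning rate
        # Step 2: labeled-to-unlabeled knowledge flow.
        if ell < L:
            new_items = identify_likely_positive(O, r0, mu, b_u, b_i,
                                                 U, V, W, I_tilde)
            I_tilde.append([new_items.get(u, []) for u in range(n)])
    return mu, b_u, b_i, U, V, W, I_tilde
```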


Analysis

The whole algorithm iterates in L + 1 loops:

- When L = 0, the sTL algorithm reduces to a single step of unlabeled-to-labeled knowledge flow, which is the same as SVD++ using the whole unlabeled feedback without uncertainty reduction.
- When L = 0 and O = ∅, sTL further reduces to the basic matrix factorization (MF).

We illustrate the relationships among sTL, SVD++ and MF as follows:

$$\text{sTL} \;\xrightarrow{\;L=0\;}\; \text{SVD++} \;\xrightarrow{\;O=\emptyset\;}\; \text{MF}, \qquad (5)$$

from which we can see that our sTL is a quite generic algorithm.


Experiments

Datasets

Table: Statistics of one copy of labeled feedback R, unlabeled feedback O and test records R^te of ML10M, Flixter and ML20M used in the experiments.

                                ML10M       Flixter     ML20M
Labeled feedback                (u, i, r_ui), r_ui ∈ {0.5, 1, ..., 5}
Unlabeled feedback              (u, i)
Test feedback                   (u, i, r_ui), r_ui ∈ {0.5, 1, ..., 5}
User # (n)                      71567       147612      138493
Item # (m)                      10681       48794       26744
Labeled feedback # (|R|)        4000022     3278431     8000104
Unlabeled feedback # (|O|)      4000022     3278431     8000107
Test feedback # (|R^te|)        2000010     1639215     4000052


Baselines

- Item-based collaborative filtering (ICF)
- Matrix factorization (MF)
- SVD with unlabeled feedback (SVD++)
- Factorization machine (FM)


Initialization of Model Parameters

We use the statistics of the training data R to initialize the model parameters:

- For each entry of the matrices U, V and W^(s), i.e., U_uk, V_ik and W^(s)_i′k with k = 1, ..., d and s = 1, ..., ℓ, we use a small random value (r − 0.5) × 0.01, where r ∈ [0, 1) is a random number.
- For the biases of user u and item i, i.e., b_u and b_i, we use b_u = r̄_u − µ and b_i = r̄_i − µ, where r̄_u, r̄_i and µ are user u's average rating, item i's average rating and the global average rating, respectively.
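A sketch of this initialization, assuming R is a list of (u, i, r) triples; init_params is a hypothetical helper name.

```python
import numpy as np

def init_params(R, n, m, d):
    """Initialize mu, b_u, b_i, U, V from the training statistics of R."""
    mu = np.mean([r for (_, _, r) in R])          # global average rating
    sum_u, sum_i = np.zeros(n), np.zeros(m)
    cnt_u, cnt_i = np.zeros(n), np.zeros(m)
    for (u, i, r) in R:
        sum_u[u] += r; cnt_u[u] += 1
        sum_i[i] += r; cnt_i[i] += 1
    # b_u = r_bar_u - mu and b_i = r_bar_i - mu (zero for unseen users/items)
    b_u = np.where(cnt_u > 0, sum_u / np.maximum(cnt_u, 1) - mu, 0.0)
    b_i = np.where(cnt_i > 0, sum_i / np.maximum(cnt_i, 1) - mu, 0.0)
    # Small random values (r - 0.5) * 0.01 for the factor matrices.
    U = (np.random.rand(n, d) - 0.5) * 0.01
    V = (np.random.rand(m, d) - 0.5) * 0.01
    return mu, b_u, b_i, U, V
```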


Parameter Configurations

- For the number of latent dimensions d and the iteration number T, we set d = 20 and T = 100.
- For the tradeoff parameter λ, we search λ ∈ {0.001, 0.01, 0.1} using RMSE on the first copy of each data set (via sampling a holdout validation set with n records from the training data) and then fix it for the remaining two copies.
- For the threshold r0, we first set it close to the average rating of each data set, i.e., r0 = 3.5, and then study the impact of using smaller and bigger values.
- For the number of knowledge transfer steps L in our sTL, we first fix it as L = 2, and then study the performance with different values of L ∈ {0, 1, 2, 3, 4}.
- We set the number of neighbors to 50 in ICF.


Post-Processing

When we estimate the preference of user u on item i, i.e., $\hat{r}_{ui}$, the predicted rating may fall outside the rating range of the labeled feedback of the training data, i.e., [0.5, 5] for the data sets in our experiments. For a predicted preference larger than 5 or smaller than 0.5, we adopt the following commonly used post-processing before the final evaluation:

$$\hat{r}_{ui} = \begin{cases} 0.5, & \text{if } \hat{r}_{ui} < 0.5 \\ 5, & \text{if } \hat{r}_{ui} > 5 \end{cases} \qquad (6)$$
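Eq. (6) is just clipping to the rating range, e.g. (post_process is a hypothetical name):

```python
import numpy as np

def post_process(r_hat, lo=0.5, hi=5.0):
    """Clip a predicted rating into the training rating range, as in Eq. (6)."""
    return float(np.clip(r_hat, lo, hi))
```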


Evaluation Metrics

Mean Absolute Error (MAE):

$$\mathrm{MAE} = \sum_{(u,i,r_{ui}) \in R^{te}} |r_{ui} - \hat{r}_{ui}| \, / \, |R^{te}|$$

Root Mean Square Error (RMSE):

$$\mathrm{RMSE} = \sqrt{\sum_{(u,i,r_{ui}) \in R^{te}} (r_{ui} - \hat{r}_{ui})^2 \, / \, |R^{te}|}$$
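Both metrics in a few lines of numpy; mae_rmse and predict_fn are hypothetical names, and predict_fn(u, i) can be any rating predictor, e.g. the sTL sketch above followed by post_process().

```python
import numpy as np

def mae_rmse(test, predict_fn):
    """Compute MAE and RMSE over test records [(u, i, r_ui), ...]."""
    errs = np.array([r - predict_fn(u, i) for (u, i, r) in test])
    mae = np.abs(errs).mean()
    rmse = np.sqrt((errs ** 2).mean())
    return mae, rmse
```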


Results (1/2)

Table: The significantly best results are marked in bold font (the p-values are smaller than 0.01).

Data            Method   MAE              RMSE
ML10M (R)       ICF      0.6699±0.0003    0.8715±0.0004
ML10M (R)       MF       0.6385±0.0008    0.8323±0.0011
ML10M (R, O)    SVD++    0.6249±0.0006    0.8182±0.0009
ML10M (R, O)    FM       0.6276±0.0004    0.8181±0.0006
ML10M (R, O)    sTL      0.6209±0.0004    0.8103±0.0007
Flixter (R)     ICF      0.6687±0.0007    0.9061±0.0010
Flixter (R)     MF       0.6479±0.0007    0.8749±0.0010
Flixter (R, O)  SVD++    0.6400±0.0008    0.8683±0.0009
Flixter (R, O)  FM       0.6447±0.0007    0.8701±0.0008
Flixter (R, O)  sTL      0.6398±0.0006    0.8650±0.0008
ML20M (R)       ICF      0.6555±0.0002    0.8591±0.0004
ML20M (R)       MF       0.6226±0.0005    0.8153±0.0007
ML20M (R, O)    SVD++    0.6122±0.0004    0.8033±0.0006
ML20M (R, O)    FM       0.6120±0.0004    0.8036±0.0007
ML20M (R, O)    sTL      0.6064±0.0002    0.7969±0.0004


Results (2/2)

Observations:

- The proposed self-transfer learning (sTL) algorithm achieves better performance than all the other baselines in all cases. Such significant superiority in preference prediction clearly shows the advantage of the designed knowledge flow strategy in sTL, which fully leverages the uncertain unlabeled feedback in an iterative manner.
- The overall ordering w.r.t. preference prediction performance is ICF ≺ MF ≺ FM ≈ SVD++ ≺ sTL (from worst to best), as shown in the table.

Related Work

Collaborative Recommendation

Table: Summary of some related works on collaborative recommendation, including the supervised, unsupervised and semi-supervised settings for labeled feedback R, unlabeled feedback O, and heterogeneous feedback R and O, respectively.

Supervised (R)            ICF, etc.: memory-based methods
                          MF, etc.: model-based methods
Unsupervised (O)          iMF, etc.: with the pointwise assumption
                          BPR, etc.: with the pairwise assumption
Semi-Supervised (R, O)    SVD++, FM, etc.: for the heterogeneity challenge
                          sTL (proposed): for the heterogeneity & uncertainty challenges


Transfer Learning for Collaborative Recommendation

Most previous works on transfer learning for collaborative recommendation perform one-time knowledge transfer, i.e., the algorithm contains only a single step of unlabeled-to-labeled knowledge flow.

We generalize this commonly adopted one-time knowledge transfer approach and design a novel iterative knowledge transfer algorithm, i.e., self-transfer learning, aiming to address the heterogeneity and uncertainty challenges of the labeled and unlabeled feedback in one single framework.


Conclusion


- We study an important problem with both labeled feedback (explicit feedback) and unlabeled feedback (implicit feedback), i.e., semi-supervised collaborative recommendation (SSCR), in the transfer learning paradigm.
- We design a novel transfer learning algorithm, i.e., self-transfer learning (sTL), which is able to identify and integrate likely-positive unlabeled feedback into the learning task of labeled feedback in a principled and iterative manner.


Thank you!

We thank the editors and reviewers for their expert comments and constructive suggestions. Weike Pan, Yuchao Duan and Zhong Ming gratefully acknowledge the support of National Natural Science Foundation of China (NSFC) Nos. 61502307 and 61170077, Natural Science Foundation of Guangdong Province Nos. 2014A030310268 and 2016A030313038, and Natural Science Foundation of SZU No. 201436. Qiang Yang gratefully acknowledges the support of China National 973 project 2014CB340304, and Hong Kong CERG projects 16211214 and 16209715.
