A General Magnitude-Preserving Boosting Algorithm for Search Ranking

Chenguang Zhu*, Weizhu Chen, Zeyuan Allen Zhu*, Gang Wang, Dong Wang*, Zheng Chen
Microsoft Research Asia; *Tsinghua University
November 2009

Motivation

Traditional learning to rank algorithms:

- Pairwise: Ranking SVM, RankBoost, RankNet
- Deficiency: the loss ignores the magnitude of the rating difference; e.g., mis-ranking two documents with ratings 5 and 1 gets the same loss as mis-ranking two documents with ratings 5 and 4

Challenge

Challenges for incorporating magnitude information into pairwise learning to rank:

- More complex than classification: deals with the order of a document list
- More complex than regression: the prediction is carried out for each single document, not for document pairs

Our Contribution

- Basis: apply magnitude-preserving labels for pairwise learning
- Algorithm: leverage boosting for learning to rank: MPBoost
- Theory: prove the convergence property on ranking accuracy

Formalization for Learning to Rank

Input: query set $Q$ and, for each $q \in Q$, a document list $\{(x_{qi}, r_{qi})\}_{i=1}^{n_q}$

Learning:
- Generate pairwise learning instances: $S = \bigcup_q \{((x_{qi}, x_{qj}), y_{qij}) \mid r_{qi} \neq r_{qj}\}$
- Learn a ranking function $F(x)$

Prediction: for a retrieved document list $\{x_{qi}\}_{i=1}^{n_q}$, rank the documents from the largest $F(x_{qi})$ to the smallest one.
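To make the instance-generation step concrete, here is a minimal Python sketch; the function name generate_pairs and the query_docs layout are illustrative assumptions, not from the slides:

```python
from itertools import combinations

def generate_pairs(query_docs):
    """query_docs: dict mapping a query to its list of (x, r) tuples,
    where x is a feature vector and r its relevance rating.
    Returns S = union over q of ((x_qi, x_qj), y_qij) with r_qi != r_qj."""
    S = []
    for q, docs in query_docs.items():
        for (xi, ri), (xj, rj) in combinations(docs, 2):
            if ri != rj:                       # only pairs with distinct ratings
                S.append(((xi, xj), 1 if ri > rj else -1))
    return S
```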



Pairwise Approach

Human Ratings → Generate Pairwise Labels → Learning Preference

Traditional pairwise labels:

$$y_{ij} = \begin{cases} +1, & r_i \text{ is preferred to } r_j \\ -1, & r_j \text{ is preferred to } r_i \end{cases}$$

Magnitude Preserving Labels

Magnitude-preserving labels via a Directed Distance Function (DDF):

- Preserve the preference relationship: $\mathrm{sgn}(\mathrm{dist}(r_i, r_j)) = \mathrm{sgn}(r_i - r_j)$
- Preserve magnitude information: $|r_i - r_j| \ge |r_{i'} - r_{j'}| \Rightarrow |\mathrm{dist}(r_i, r_j)| \ge |\mathrm{dist}(r_{i'}, r_{j'})|$

Directed Distance Function

The form of the DDF can vary. We investigate three kinds, each with a scale parameter $\sigma$:

- Linear Directed Distance (LDD): $\mathrm{dist}(r_i, r_j) = \sigma (r_i - r_j)$
- Logarithmic Directed Distance (LOGDD): $\mathrm{dist}(r_i, r_j) = \mathrm{sgn}(r_i - r_j) \log(1 + \sigma |r_i - r_j|)$
- Logistic Directed Distance (LOGITDD): $\mathrm{dist}(r_i, r_j) = \mathrm{sgn}(r_i - r_j) \cdot \frac{1}{1 + e^{-\sigma |r_i - r_j|}}$
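A minimal Python sketch of the three DDFs; the parameter name sigma and the helper sign are assumptions for illustration:

```python
import math

def sign(v):
    return (v > 0) - (v < 0)

def ldd(ri, rj, sigma=1.0):
    """Linear Directed Distance."""
    return sigma * (ri - rj)

def logdd(ri, rj, sigma=1.0):
    """Logarithmic Directed Distance."""
    return sign(ri - rj) * math.log(1.0 + sigma * abs(ri - rj))

def logitdd(ri, rj, sigma=1.0):
    """Logistic Directed Distance (always strictly inside (-1, 1))."""
    return sign(ri - rj) / (1.0 + math.exp(-sigma * abs(ri - rj)))

# Both defining DDF properties hold, e.g. for LOGDD:
assert sign(logdd(5, 1)) == sign(5 - 1)        # preference preserved
assert abs(logdd(5, 1)) >= abs(logdd(5, 4))    # magnitude preserved
```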

Pairwise Approach

Human Ratings → Generate Pairwise Labels (now magnitude-preserving, via the DDF) → Learning Preference

We propose the Magnitude-Preserving Boosting algorithm (MPBoost).

MPBoost Algorithm

Loss $J$ for a ranking function $F$:

$$J(F) = \sum e^{-\mathrm{dist}(r_i, r_j)\,(F(x_i) - F(x_j))}$$

Optimization:
- GentleBoost (Friedman et al.): stage-wise gradient descent
- Requires $|\mathrm{dist}(r_i, r_j)| \le 1$ for all pairs with $r_i \neq r_j$

MPBoost Algorithm

Input: query set $Q$ and $\{(x_{qi}, r_{qi})\}_{i=1}^{n_q}$, for each $q \in Q$
Output: ranking function $F(x)$

1: Generate $S = \bigcup_q \{((x_{qi}, x_{qj}), \mathrm{dist}(r_{qi}, r_{qj})) \mid r_{qi} \neq r_{qj}\}$
2: Generate the index set $I = \{(i, j) \mid ((x_i, x_j), \mathrm{dist}(r_i, r_j)) \in S\}$
3: Initialize $w_{ij}^{(1)} = \frac{1}{|I|}$, for $(i, j) \in I$
4: for $t = 1 \ldots T$ do
5:   Fit the weak ranker $f_t$, such that $f_t = \arg\min_f J_{wse}(f)$
6:   Update: $w_{ij}^{(t+1)} = w_{ij}^{(t)}\, e^{-\mathrm{dist}(r_i, r_j)(f_t(x_i) - f_t(x_j))} / Z_t$, where $Z_t = \sum_{(i,j) \in I} w_{ij}^{(t)}\, e^{-\mathrm{dist}(r_i, r_j)(f_t(x_i) - f_t(x_j))}$
7: end for
8: Output the final ranking function $F(x) = \sum_{t=1}^{T} f_t(x)$

Convergence Property of MPBoost

Theorem 1. The normalized ranking loss (mis-ordering) of $F$ is bounded:

$$\sum_{\{(i,j) \in I \,\mid\, r_i > r_j\}} w_{ij}^{(1)}\, [[F(x_i) \le F(x_j)]] \;+\; \sum_{\{(i,j) \in I \,\mid\, r_i < r_j\}} w_{ij}^{(1)}\, [[F(x_i) \ge F(x_j)]] \;\le\; \prod_{t=1}^{T} Z_t$$

where $[[\pi]]$ is defined to be 1 if the condition $\pi$ holds and 0 otherwise.

Proof: omitted.
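The slide omits the proof; as a rough sketch (my reconstruction of the standard RankBoost-style argument, not taken from the source), unrolling the weight update in step 6 over $t = 1 \ldots T$ gives

```latex
w^{(T+1)}_{ij}
  = \frac{w^{(1)}_{ij}\; e^{-\mathrm{dist}(r_i, r_j)\,\left(F(x_i) - F(x_j)\right)}}
         {\prod_{t=1}^{T} Z_t}.
```

For a mis-ordered pair the exponent is non-negative, so the exponential upper-bounds the 0/1 indicator; summing over $I$ and using $\sum_{(i,j) \in I} w^{(T+1)}_{ij} = 1$ yields the stated bound.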

Datasets

- OHSUMED dataset (from LETOR 3.0)
- Web-1 dataset (from a Yandex competition)
- Web-2 dataset (from a commercial English search engine)
- 3+1+1 five-fold cross-validation (per fold: three parts for training, one for validation, one for testing)

Dataset    Queries   Documents   Rating
OHSUMED    106       16140       {0, 1, 2}
Web-1      9124      97290       [0, 4]
Web-2      467       50000       {0, 1, 2, 3, 4}

Experiment Methodology

Baselines:
- RankBoost, ListNet, AdaRank-NDCG
- MPBoost.BINARY

Proposed methods:
- MPBoost.LDD
- MPBoost.LOGDD
- MPBoost.LOGITDD

Experimental Results on OHSUMED

- MPBoost.* outperforms the pairwise baselines
- MPBoost.BINARY outperforms RankBoost

Experimental Results on Web-1

Experimental Results on Web-2

Discussion

- The magnitude-preserving versions of MPBoost outperform the baselines by 0.16% to 2.2% in average NDCG
- MPBoost.LOGDD and MPBoost.LOGITDD outperform MPBoost.LDD, because:
  - Linear directed distance can hardly guarantee that $|\mathrm{dist}(r_i, r_j)| \le 1$ for all pairs (see the quick check below)
  - Overfitting issues
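A quick numeric check (the values are assumed for illustration): with a fixed scale, LDD can exceed the $|\mathrm{dist}| \le 1$ requirement on widely separated ratings, while LOGITDD never can:

```python
import math

sigma = 0.5          # assumed scale parameter
ri, rj = 5, 1        # widely separated ratings

print(sigma * abs(ri - rj))                           # LDD:     2.0   (> 1, violates the bound)
print(1.0 / (1.0 + math.exp(-sigma * abs(ri - rj))))  # LOGITDD: ~0.88 (always < 1)
```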

Conclusion

- Magnitude-preserving labels can effectively improve ranking accuracy
- The Directed Distance Function can take various forms
- MPBoost inherits theoretical properties from RankBoost

Q&A Thanks!

Related Work

Qin et al.:
- Based on RankSVM
- Multiple Hyperplane Ranker (MHR)
- Complex when the number of ratings is large

Cortes et al.:
- Based on RankSVM
- Incorporates magnitude differences
- Limited due to the σ-admissibility of the cost function

MPBoost & RankBoost

Based On

Loss Function Weak Learner’s criteria

e

MPBoost

RankBoost

GentleBoost

AdaBoost

0

 w [dist (r , r )  ( f ( x )  f ( x ))]

2

ij

1

1

0

x0 , x1

i

j

i

T

Combination

 D( x , x )[[H ( x )  H ( x )]]

 dist ( ri , rj )( F ( xi )  F ( x j ))

F ( x)   f t ( x) t 1

j

 D( x , x )(h( x )  h( x )) 0

1

1

x0 , x1 T

H ( x)   t ht ( x) t 1

0
