LLR and other similarity metrics graph in The Universal Recommender with CCO slide


Marius Rabenarivo

May 4, 2017, 11:54:03 AM
to Pat Ferrel, actionml-user, us...@predictionio.incubator.apache.org
Hello,

Can you point me to some resource explaining how the graphic comparing
LLR with other similarity metrics was generated?

https://docs.google.com/presentation/d/1MzIGFsATNeAYnLfoR6797ofcLeFRKSX7KB8GAYNtNPY/edit#slide=id.g15e36a57f5_0_119

Regards,

Marius

Pat Ferrel

May 4, 2017, 12:03:30 PM
to Marius Rabenarivo, actionml-user, us...@predictionio.incubator.apache.org
That was generated using the old Mahout MapReduce recommenders, which had pluggable similarity metrics. I ran it on a very large e-commerce dataset from a real ecom site, covering 6 months of sales. We did cross-validation with an 80% training set and a 20% held-out probe/test set, where the test set was the most recent 20% of sales. We then measured MAP@k for several values of k. A decline in MAP@k as k increases means the ranking of items is correct, and the higher the MAP@k, the better the precision of the recommendations.
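
For reference, MAP@k can be computed roughly as in the following sketch. This is illustrative Python with hypothetical function names, not the Mahout evaluation code that was actually used:

    def average_precision_at_k(recommended, relevant, k):
        """AP@k for one user: average of precision@i over the ranks i <= k
        at which a recommended item appears in the held-out (relevant) set."""
        hits, score = 0, 0.0
        for i, item in enumerate(recommended[:k]):
            if item in relevant:
                hits += 1
                score += hits / (i + 1)  # precision at this rank
        return score / min(len(relevant), k) if relevant else 0.0

    def map_at_k(recs_by_user, test_by_user, k):
        """MAP@k: mean AP@k over all users who have held-out test items."""
        users = [u for u in test_by_user if test_by_user[u]]
        return sum(average_precision_at_k(recs_by_user.get(u, []),
                                          test_by_user[u], k)
                   for u in users) / len(users)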

Comparing cross-validation scores across different algorithms is highly suspect, so this used one identical algorithm throughout and varied only the similarity metric, though it's not an algorithm I'd use today.
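
As for the LLR metric itself: it scores a 2x2 co-occurrence contingency table with Dunning's log-likelihood ratio. Here is a minimal Python sketch that follows the usual count convention but is not the actual Mahout implementation:

    from math import log

    # Dunning's log-likelihood ratio for a 2x2 co-occurrence table:
    #   k11 = users who acted on both items A and B, k12 = item B only,
    #   k21 = item A only, k22 = neither.
    # Illustrative sketch, not the actual Mahout implementation.
    def _xlogx_sum(*counts):
        """Signed sum of k * log(k / N) over nonzero counts (N = grand total)."""
        total = sum(counts)
        return sum(k * log(k / total) for k in counts if k > 0)

    def llr(k11, k12, k21, k22):
        """Dunning's G^2 statistic: 2 * (S(cells) - S(row sums) - S(col sums)),
        where S is the signed k*log(k/N) sum above. Larger values mean the
        two items co-occur more anomalously than chance would predict."""
        rows = _xlogx_sum(k11 + k12, k21 + k22)
        cols = _xlogx_sum(k11 + k21, k12 + k22)
        cells = _xlogx_sum(k11, k12, k21, k22)
        return 2.0 * (cells - rows - cols)

In CCO-style recommenders these scores are typically used to keep only the most significant item-item co-occurrences, sparsifying the indicator matrix.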



Marius Rabenarivo

May 4, 2017, 12:07:36 PM
to Pat Ferrel, actionml-user, us...@predictionio.incubator.apache.org
Thank you

