to action...@googlegroups.com, us...@predictionio.incubator.apache.org
Hi,
With the Universal Recommender,
1. How can we validate the model after we train and deploy it?
2. How can we find an appropriate method of data mixing?
Thanks
--
Saarthak Chandra, Masters in Computer Science,
Cornell University.
Pat Ferrel
Sep 6, 2017, 9:39:19 AM
to Saarthak Chandra, action...@googlegroups.com, us...@predictionio.incubator.apache.org
We do cross-validation tests to see how well the model predicts actual behavior. As to the best data mix, cross-validation works with any engine tuning or data input. Typically this requires re-training between test runs, so make sure you use exactly the same training/test split.

If you want to examine the usefulness of different events, you can compare event type 1 alone against event type 1 + event type 2, and so on. This is made easier by inputting all events, then using a test trick in the UR to mask out any combination of events during the cross-validation. That uses the single existing model, so there is no need to re-train for this type of analysis. We have an unsupported script that does this, but I warn you that you are on your own using it.
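The two ideas above (a fixed, reproducible train/test split, and masking event types against one model instead of re-training) can be sketched roughly as below. This is not the UR's actual test script; the event tuples, the `mask` helper, and the 75/25 split are all illustrative assumptions.

```python
import random

# Hypothetical event log: (user, event_type, item) tuples.
events = [
    ("u1", "purchase", "i1"), ("u1", "view", "i2"),
    ("u2", "purchase", "i2"), ("u2", "view", "i3"),
    ("u3", "purchase", "i1"), ("u3", "view", "i1"),
    ("u4", "purchase", "i3"), ("u4", "view", "i2"),
]

# Fix the seed so every test run sees exactly the same
# training/test split -- required when comparing runs.
random.seed(42)
shuffled = events[:]
random.shuffle(shuffled)
split = int(len(shuffled) * 0.75)
train, test = shuffled[:split], shuffled[split:]

def mask(evts, keep_types):
    """Keep only the event types under test. The trained model
    is left untouched, so no re-training is needed per mix."""
    return [e for e in evts if e[1] in keep_types]

# Compare a "purchase only" input mix vs "purchase + view":
# score each masked set against the same deployed model.
for keep in [{"purchase"}, {"purchase", "view"}]:
    masked = mask(train, keep)
    print(sorted(keep), len(masked))
```

In a real run you would replace the `print` with a call that queries the deployed model and computes a ranking metric (e.g. precision@k) on the held-out `test` events, then compare the metric across event-type combinations.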