Testing a Taxonomy with a Video Recommendation Engine?


Claire

Mar 7, 2017, 11:00:40 AM
to Content Strategy

I'm designing and implementing a taxonomy for a video library (~3,000 assets and growing) for an SVOD mobile app. The taxonomy will be applied in our DAM system. Its purpose is to help us organize our assets internally, but also to optimize text search and a video recommendation engine that should greatly improve the app's UX for our users.

I have a draft of the taxonomy ready to go, but before spending months implementing it, I'd like to apply it to a small cluster of videos and test it with the recommendation engine. Is there a best practice for doing this? Should I gather a cluster of videos that reflects the ratio of genres across our entire library (for example, if we have 33% animal, 33% cooking, and 33% comedy videos, ensuring my test cluster reflects those same ratios)? And how large would the sample/test cluster have to be for an accurate test of the taxonomy?

I've been searching online and can't find any specific guidance. Thank you in advance for your help.
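[The genre-ratio idea described above is essentially stratified sampling. A minimal Python sketch, assuming a simple asset-ID-to-genre mapping; the function name, data shapes, and the proportional-allocation rule are illustrative, not from any particular DAM tool:]

```python
import random
from collections import defaultdict

def stratified_sample(assets, by_genre, sample_size, seed=42):
    """Draw a test cluster whose genre mix matches the full library's.

    assets: list of asset IDs
    by_genre: dict mapping asset ID -> genre label
    sample_size: desired size of the test cluster
    """
    rng = random.Random(seed)

    # Bucket assets by genre.
    buckets = defaultdict(list)
    for a in assets:
        buckets[by_genre[a]].append(a)

    total = len(assets)
    sample = []
    for genre, items in buckets.items():
        # Proportional allocation: each genre's share of the sample
        # mirrors its share of the library, with at least one asset.
        k = max(1, round(sample_size * len(items) / total))
        sample.extend(rng.sample(items, min(k, len(items))))
    return sample
```

[With 33% animal / 33% cooking / 33% comedy in the library, a 30-video sample would come out roughly 10/10/10.]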


John Tulinsky

Mar 7, 2017, 12:15:08 PM
to content...@googlegroups.com
This is a classic scenario for a card sort. A Google search will turn up plenty of reading on the technique, but the basic idea is to write your terms on slips of paper ('cards') and have some users organize them in a way that makes sense to them.

There are many variations (give users pre-made categories in which to place individual terms, have them work as a group to decide on categories, limit the number of categories, etc.), and there are online tools that let you work with larger or geographically dispersed groups of test subjects and provide more sophisticated data analysis. But the basic approach, with Post-it notes and a handful of users, is a good place to start and often gives you pretty good feedback. Personally, I also find that sort of exercise to be one of the most fun and enjoyable things I do in my job.
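[One common way to analyze results from several card-sort participants is a co-occurrence count of which terms got grouped together. A minimal Python sketch; the data shape is illustrative, not the output of any particular card-sorting tool:]

```python
from itertools import combinations
from collections import Counter

def cooccurrence(sorts):
    """Count how often each pair of terms was grouped together.

    sorts: one card sort per participant; each sort is a list of
    groups, and each group is a list of term strings.
    Returns a Counter keyed by (term_a, term_b) pairs.
    """
    pairs = Counter()
    for sort in sorts:
        for group in sort:
            # Sort within the group so each pair has a canonical key.
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs
```

[Pairs that nearly all participants group together are strong candidates for sharing a category in the taxonomy; pairs that split evenly suggest ambiguous terms worth revisiting.]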


Joe Pairman

Mar 7, 2017, 12:44:43 PM
to content...@googlegroups.com
Hi Claire,

It seems to me that you're looking to test two things: the taxonomy itself and the efficacy of the recommendation engine. At the very least, I think the latter should be evaluated separately from the former. The engine could use the same taxonomy yet perform very differently depending on whether it simply counts concepts in common or applies a more sophisticated breakdown using several facets.

Returning to evaluating the taxonomy itself, Heather Hedden outlines the pros and cons of various approaches (including card sorting, as John Tulinsky mentioned) in this deck: https://www.slideshare.net/HeatherHedden/testing-taxonomies
Slide 24, on testing for precision and recall, is particularly relevant, I think.
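[For reference, precision and recall for a single test search against a judged set of relevant videos reduce to two ratios. A minimal sketch, with the function name and data purely illustrative:]

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query.

    retrieved: IDs the tagged search/engine returned
    relevant: IDs a human judged relevant for that query
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0  # how much of what came back was right
    recall = hits / len(relevant) if relevant else 0.0       # how much of what was right came back
    return precision, recall
```

[Running this over a handful of representative queries, before and after applying the draft taxonomy to the test cluster, gives a concrete way to compare tagging schemes.]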

HTH,
Joe


