Hi Kenji, here is a prompt example:

<INSTRUCTIONS>
Analyze this webpage content and score each category from 0 to 100.
Consider: Adult, Gambling, Violent, Hateful, Deceptive, Spammy, Malware, Sentiment.
The value represents the percentage of the page's content that falls under that category.
Suggest any other relevant categories, e.g. "Sexual", "Drugs", etc.
Return example:
[["adult", 0-100], ...]
</INSTRUCTIONS>
<START OF CONTENT>
Hacker News new | past | comments | ask | show | jobs | submit login
1. 44 points by teleforce 1 hour ago | hide | 20 comments
2. Ask HN: What Are You Working On? (October 2024) 193 points by david927 14 hours ago | hide | 563 comments
3. Practical Introduction to BLE GATT Reverse Engineering: Hacking the Domyos EL500 (jcjc-dev.com) 32 points by greesil 6 hours ago | hide | discuss
4. 119 points by surprisetalk 4 hours ago | hide | 84 comments
5. NewPipe on Linux, Using Android_translation_layer (flathub.org) 255 points by FuturisticGoo 19 hours ago | hide | 70 comments
6. A Chopin waltz unearthed after nearly 200 years (nytimes.com) 365 points by perihelions 1 day ago | hide | 117 comments
7. 305 points by Anon84 1 day ago | hide | 118 comments
8. Hoard of coins from Norman Conquest is Britain'
<END OF CONTENT>
<RESPONSE FORMAT>
Please return ONLY valid JSON in the exact schema shown.
Do not include any other text or explanation.
The response must be parseable by JSON.parse().
</RESPONSE FORMAT>
And the response is:
[ML ERROR] NotSupportedError: The model attempted to output text in an untested language, and was prevented from doing so.
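
And here is roughly the harness around it, in case that helps with debugging. This is a minimal sketch, not a drop-in: promptModel() is a hypothetical stand-in for whatever model call you are using (I'm not assuming a specific Prompt API version), and the CategoryScore shape is just inferred from the [["adult", 0-100], ...] return example above.

// promptModel() is a hypothetical stand-in for the actual model call
// (e.g. a Prompt API session); declared here so the rest type-checks.
declare function promptModel(prompt: string): Promise<string>;

// Shape inferred from the prompt's return example: [["adult", 0-100], ...]
type CategoryScore = [category: string, score: number];

function buildPrompt(pageText: string): string {
  return [
    "<INSTRUCTIONS>",
    "Analyze this webpage content and score each category from 0 to 100.",
    "Consider: Adult, Gambling, Violent, Hateful, Deceptive, Spammy, Malware, Sentiment.",
    "The value represents the percentage of the page's content that falls under that category.",
    'Suggest any other relevant categories, e.g. "Sexual", "Drugs", etc.',
    "Return example:",
    '[["adult", 0-100], ...]',
    "</INSTRUCTIONS>",
    "<START OF CONTENT>",
    pageText,
    "<END OF CONTENT>",
    "<RESPONSE FORMAT>",
    "Please return ONLY valid JSON in the exact schema shown.",
    "Do not include any other text or explanation.",
    "The response must be parseable by JSON.parse().",
    "</RESPONSE FORMAT>",
  ].join("\n");
}

// Validate against the expected shape; models often wrap JSON in
// extra prose despite instructions, so JSON.parse alone isn't enough.
function parseScores(raw: string): CategoryScore[] {
  const parsed: unknown = JSON.parse(raw);
  if (!Array.isArray(parsed)) throw new Error("expected a JSON array");
  return parsed.map((entry): CategoryScore => {
    if (
      !Array.isArray(entry) ||
      typeof entry[0] !== "string" ||
      typeof entry[1] !== "number" ||
      entry[1] < 0 ||
      entry[1] > 100
    ) {
      throw new Error(`bad entry: ${JSON.stringify(entry)}`);
    }
    return [entry[0], entry[1]];
  });
}

async function scorePage(pageText: string): Promise<CategoryScore[] | null> {
  try {
    const raw = await promptModel(buildPrompt(pageText));
    return parseScores(raw);
  } catch (err) {
    // The on-device call can throw before you ever get JSON back;
    // the NotSupportedError above is exactly that case.
    console.error("scoring failed:", err);
    return null;
  }
}

The call site would be something like await scorePage(document.body.innerText); a null return tells you the failure was the model call or the parse, not your page extraction, which is where that NotSupportedError is surfacing.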