Hi,
When a comment posted to a YouTube video contains character patterns likely to trigger immediate removal by validation bots, the removal is not visible until the page is refreshed.
Because comments removed this way are unrecoverable, the time and effort spent writing them is lost unless the user saved a copy of the message to a file beforehand.
Is it possible to test a method like insert against potentially unacceptable comment data, so that sandboxing a comment directly through the API produces the same result as posting it in the browser?
I had a look at the help pages and filled in a few fields, but got a 400 error on execution, so a worked example of this kind of call would help greatly.
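For reference, here is a minimal Python sketch of the request body I believe `commentThreads.insert` expects (the video ID and comment text are placeholders, and I'm assuming the call is made with OAuth authorization rather than an API key, since inserts need user credentials). If the body shape I tried differs from this, that may be where my 400 comes from:

```python
import json

# Placeholder values; commentThreads.insert is called with part="snippet"
# and a body of this shape. The comment text sits inside a nested
# topLevelComment.snippet object, not directly under snippet.
VIDEO_ID = "VIDEO_ID_HERE"
COMMENT_TEXT = "test comment"

body = {
    "snippet": {
        "videoId": VIDEO_ID,
        "topLevelComment": {
            "snippet": {"textOriginal": COMMENT_TEXT}
        },
    }
}

# Printed here only to inspect the JSON that would be sent.
print(json.dumps(body, indent=2))
```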
Thanks.