The length of the label doesn't really matter in this discussion: you should not be putting a log message in a label at all. *Any* label which varies from request to request is a serious problem, because each unique value of that label will generate a new timeseries in Prometheus, and you'll get a cardinality explosion.
Internally, Prometheus maintains a mapping of
{bag of labels} => timeseries
Whether the label values themselves are short or long makes very little difference. It's the number of distinct values of that label which is important, because that defines the number of timeseries, and each timeseries costs RAM and chunk storage.
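To make that concrete, here's roughly what the scrape output could look like (the metric and label names are invented for illustration). Every distinct combination of label values is its own entry in that mapping, i.e. its own timeseries:

    # bounded label: a few dozen series at most
    app_log_events_total{category="auth"} 17
    app_log_events_total{category="db"} 4

    # unbounded label: one new series per event - a cardinality explosion
    app_log_messages_total{msg="failed login from 10.1.2.3 at 12:01:02"} 1
    app_log_messages_total{msg="failed login from 10.9.8.7 at 12:01:05"} 1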
If you have a limited set of log categories - say a few dozen values - then using that as a label is fine. The problem is a label whose value varies from event to event, e.g. one containing a timestamp, an IP address, or a free-form message. You will cause yourself great pain if you use such things as labels.
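For example, here's a minimal sketch of the bounded-label approach using the official Go client library, client_golang (the metric name and category values are just placeholders):

    package main

    import (
        "log"
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promauto"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // Fine: "category" has a small, fixed set of values, so the
    // number of timeseries stays bounded.
    var logEvents = promauto.NewCounterVec(
        prometheus.CounterOpts{
            Name: "app_log_events_total",
            Help: "Log events by category.",
        },
        []string{"category"},
    )

    func main() {
        logEvents.WithLabelValues("auth").Inc()
        logEvents.WithLabelValues("db").Inc()

        // Never do this: every distinct message becomes a new timeseries.
        // logEvents.WithLabelValues("failed login from 10.1.2.3 at 12:01:02").Inc()

        http.Handle("/metrics", promhttp.Handler())
        log.Fatal(http.ListenAndServe(":8080", nil))
    }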
But don't take my word for it - please read the Prometheus documentation on metric and label naming (https://prometheus.io/docs/practices/naming/):
"CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values."
I completely understand your desire to get specific log messages in alerts. If you need to do that, then as I said before, use Loki instead of Prometheus. Loki stores the entire log message, as well as labels. It has its own LogQL query language inspired by PromQL, and integrates with Grafana and alerting. It's what you need for handling logs, rather than metrics.
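As a sketch of what that looks like: Loki's ruler evaluates Prometheus-style rule files whose expressions are LogQL. The job name, match string and threshold below are all made up for illustration:

    groups:
      - name: app-log-alerts
        rules:
          - alert: RepeatedAuthFailures
            # LogQL: count matching log lines over the last 5 minutes
            expr: sum(count_over_time({job="myapp"} |= "authentication failure" [5m])) > 10
            for: 5m
            labels:
              severity: warning
            annotations:
              summary: More than 10 authentication failures in 5 minutes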
(If you still want to do this with Prometheus, it would be an interesting project to see whether you can get exemplars into an alert. But I suspect this would involve hacking on mtail, Alertmanager and even Prometheus itself - something only to be attempted by a serious Go coder.)