Hi guys,
I'm currently working with the Model Maker library for text classification with MobileBERT. It works fine with a seq_length of 512, but I'd like to train the model on a few emails from the Enron dataset, and most of them are longer than 512 tokens. Is there any way to increase seq_length beyond 512? When I tried 1024, for example, I got an exception, and I suspect the underlying Keras layer is limited to a sequence length of 512. Is that correct?
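
For reference, this is roughly my setup (the CSV path and column names are placeholders for my actual Enron data):

```python
from tflite_model_maker import model_spec, text_classifier
from tflite_model_maker.text_classifier import DataLoader

# MobileBERT spec; seq_len = 512 trains fine, 1024 raises the exception
spec = model_spec.get('mobilebert_classifier')
spec.seq_len = 512

# Placeholder CSV with one email per row and its label
train_data = DataLoader.from_csv(
    filename='enron_train.csv',
    text_column='text',
    label_column='label',
    model_spec=spec,
    is_training=True)

model = text_classifier.create(train_data, model_spec=spec, epochs=3)
```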
Or do I have to split the inputs (emails) into chunks of length 512 so that the whole email is used during training? Something like the sketch below is what I have in mind.
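
This is just a rough sketch: it counts whitespace tokens, while the real limit is MobileBERT's WordPiece tokens, so max_tokens would need some headroom in practice.

```python
def split_into_chunks(text, max_tokens=512, overlap=64):
    """Split a long email into overlapping chunks of at most max_tokens words.

    Whitespace tokens are only a rough proxy for WordPiece tokens,
    so the actual subword count per chunk will be somewhat higher.
    """
    words = text.split()
    step = max_tokens - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(' '.join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break
    return chunks

# Each chunk would keep the original email's label and become
# its own training example.
```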
Thank you very much and kind regards
Marvin