No, that won’t fix the problem. This isn’t an issue with the FIMO software. It’s an inherent limitation of using p-values as a measure of statistical significance.
You should refer to the paper on the multiple testing problem that I linked in my answer on the Q&A site for a detailed review of the problem. Briefly, a p-value is the probability of observing a score at least as good as the given score entirely by chance. Traditionally a p-value of 0.01 is used as the threshold for statistical significance, meaning the probability of an entirely chance match scoring at least as well is 1 in 100. However, that really only applies to a single test of a single motif. If you instead apply that same p-value test 100 times in a single experiment, you are very likely to hit at least one entirely random match scoring at least as well. It doesn't matter whether FIMO runs the tests all at once or you run them one at a time; you are still using the p-value threshold to identify the significant matches.
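To see how quickly chance matches accumulate, here is a small sketch of the arithmetic behind the point above. The per-test threshold of 0.01 comes from the discussion; the test counts are just illustrative numbers I picked, and the calculation assumes the tests are independent.

```python
# Probability of at least one purely chance match at a fixed p-value
# threshold, as the number of independent tests grows.
p = 0.01  # per-test p-value threshold, as in the discussion above

for n_tests in (1, 100, 10000):
    # P(at least one chance match) = 1 - P(no chance match in any test)
    p_any = 1 - (1 - p) ** n_tests
    print(f"{n_tests:>6} tests: P(at least one chance match) = {p_any:.4f}")
```

At 100 tests the probability of at least one chance hit is already about 0.63, and by 10,000 tests it is effectively 1, which is why a raw p-value threshold stops being meaningful in a large scan.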
The FIMO q-value corrects for the fact that the p-value test has been run at thousands of positions in your sequence data, but it does not correct for scanning with multiple motifs. You should use the FIMO q-value rather than the p-value as your measure of statistical significance, but you should probably also correct for the fact that you are scanning with hundreds of different motifs. The simplest approach is the Bonferroni correction I mentioned in the Q&A post: take your nominal q-value threshold, say 0.01, divide it by twice the number of motifs you scan with, and use the result as your threshold of statistical significance. You use twice the number of motifs because each motif is scanned in both the forward and reverse orientation.
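The correction described above can be sketched in a few lines. This assumes a nominal q-value threshold of 0.01 as in the example; the motif count of 300 is a hypothetical value standing in for however many motifs your scan actually uses.

```python
# Bonferroni-style correction for scanning with many motifs, as
# described above. The motif count here is a made-up example value.
nominal_q = 0.01   # nominal q-value threshold
n_motifs = 300     # hypothetical number of motifs in the scan

# Each motif is scanned on both the forward and reverse strand,
# so the effective number of tests is twice the motif count.
n_tests = 2 * n_motifs
corrected_threshold = nominal_q / n_tests
print(f"corrected significance threshold: {corrected_threshold:.2e}")
```

Matches with a FIMO q-value below the corrected threshold would then be called significant; everything above it would be discarded.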