But that still only brings the values up to ~10,000, not 600,000.
However, the quantiles are likely much more relevant: the medians look like they are actually in a reasonable range (10^2.5 is ~316), and the partial predictor table has values in the range 400 to 1300. That indicates the plot somehow doesn't show the full range, since log10(1300) is greater than 3, and the scale cuts off before that!
So the real issue seems to be the uncertainty, which gives a huge range between the lower and upper quantiles, and a large sd. If one has the mean m and sd s on the log scale, and the log-scale posterior is Normal, then the exponentiated mean is exp(m + s^2/2), so the large uncertainty on the log scale is what leads to the large posterior means.
So to troubleshoot it, I'd investigate what's causing the large uncertainty. E.g., how identifiable should one expect the model to be? What does the summary.fixed output look like? Is the spde field confounded with a global intercept, for example? Even though their sum should be identifiable, the individual components may not be, and I'm not sure how well the Tweedie distribution works in practice; it's not a simple model!
What happens if you use something simpler, like a negative binomial, or even a Poisson, to see how those simpler observation models behave in combination with your predictor model?
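As a generic illustration of why that comparison is informative (outside any particular fitting software), here is a sketch of the dispersion behaviour that separates the two simpler families: a Poisson implies variance roughly equal to the mean, while a negative binomial (simulated here as a Gamma-Poisson mixture) allows variance well above the mean. All numbers below are simulated, hypothetical data:

```python
import math
import random
import statistics

random.seed(1)

def sample_poisson(lam, n):
    # Poisson draws via Knuth's multiplication algorithm (fine for modest lam).
    out = []
    for _ in range(n):
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= threshold:
                break
            k += 1
        out.append(k)
    return out

def sample_negbin(mu, size_param, n):
    # Negative binomial as a Gamma-Poisson mixture:
    # lambda ~ Gamma(shape=size_param, scale=mu/size_param), then y ~ Poisson(lambda).
    return [sample_poisson(random.gammavariate(size_param, mu / size_param), 1)[0]
            for _ in range(n)]

pois = sample_poisson(5.0, 2000)
nb = sample_negbin(5.0, 1.0, 2000)

# Variance/mean ratio: ~1 for Poisson, mu/size above 1 for the neg. binomial.
print("Poisson var/mean:", statistics.variance(pois) / statistics.mean(pois))
print("NegBin  var/mean:", statistics.variance(nb) / statistics.mean(nb))
```

If the simpler families also produce huge uncertainty with your predictor model, the problem likely sits in the linear predictor (identifiability, confounding) rather than in the Tweedie likelihood itself.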