Hi Matt,
that's right – as the simplest approximation, you can boost the word counts for different attributes. E.g. each "deployment" in the Title is worth five "deployment"s in the Description, etc.
This is actually a common approach, even if hackish, and tends to work well given its simplicity. The "combined" model effectively adds just a handful of constants, since such weights typically don't depend on the query or the indexed document at all, and stay fixed across all queries and documents.
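To make the idea concrete, here's a minimal sketch in plain Python (no gensim). The field names and the `{"title": 5, "description": 1}` weights are made up for illustration; they're exactly the constants you'd want to tune:

```python
from collections import Counter
import math

# Hypothetical, untuned boost factors -- one constant per attribute.
FIELD_WEIGHTS = {"title": 5, "description": 1}

def weighted_bow(doc_fields):
    """Fold all attributes into one bag-of-words, boosting counts per field."""
    bow = Counter()
    for field, text in doc_fields.items():
        weight = FIELD_WEIGHTS.get(field, 1)
        for word in text.lower().split():
            bow[word] += weight
    return bow

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

doc = weighted_bow({"title": "deployment guide",
                    "description": "how to automate deployment"})
query = weighted_bow({"description": "deployment"})
print(cosine(doc, query))  # -> 0.75
```

One "deployment" in the title plus one in the description yields a boosted count of 6, so after the fold the downstream similarity code doesn't need to know fields ever existed.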
Of course, that leads to the question of "what weights to use". Do you have an annotated set from which to tune these extra parameters? Or do you plan to set them based on your intuition?
Other approaches include keeping the attribute vectors separate, and extending the query model instead. So instead of comparing two vectors (doc x query) with cosine similarity, you'd be comparing N query vectors against M document vectors (N x M pairs) with a more sophisticated measure. Depending on the quality of your training data and your appetite for uncharted "academic" algorithms, keeping the input signals "raw" and pushing the complexity (and explosion of model parameters) into evaluation can be quite the rabbit hole. That's the direction of modern attention algorithms and deep learning.
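As a toy sketch of that second direction: one vector per attribute on each side, an N x M matrix of per-field cosine similarities, and then some scorer that collapses the matrix into one number. Here I just take the max pairwise similarity; a trained model would learn the combination instead. All field names below are illustrative:

```python
from collections import Counter
import math

def bow(text):
    """Plain bag-of-words vector for one attribute's text."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Attributes kept separate -- no boosting, no folding.
doc_fields = {"title": bow("deployment guide"),
              "description": bow("automating deployment")}
query_fields = {"keywords": bow("deployment")}

# N x M matrix of raw per-field similarities (N query fields, M doc fields).
sim = {(q, d): cosine(qv, dv)
       for q, qv in query_fields.items()
       for d, dv in doc_fields.items()}

# Toy evaluation: best matching field pair. This is the part you'd
# replace with a learned scorer (or, further out, attention weights).
score = max(sim.values())
print(score)
```

The point is that all the modelling freedom (and all the extra parameters) now lives in how you combine the `sim` entries, not in the input vectors themselves.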
Hope that helps,
Radim