A lot of this is specific to the db engine that is being used. Different implementations have different capabilities.
To that point, most of the graph databases I've worked with don't do in-memory joins anywhere near as efficiently as the relational databases I've worked with (per Pieter's point, Neo4j being the main exception). The relational join operation isn't just a matter of having an index on the joined columns; there's a lot more involved, and the implementations are tied to how each engine handles on-disk persistence, caching, join strategies, etc.
Without knowing which graph database you are using, it is hard to express more than generalities. In general, graph database engines are not optimized for in-memory joins in the same way that relational database engines are. Therefore, a helpful optimization when bulk importing data is to extract it from the source in such a way that the obvious edges are pre-computed.
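To make that concrete, here is a minimal sketch of what I mean by pre-computing the edges on the source side. The table and column names (customers, orders), the PLACED relationship, and sqlite3 standing in for whatever the real source database is are all just placeholders:

    # Sketch: do the join once on the relational source and export a
    # ready-to-load edge list, instead of matching node sets in the graph later.
    # Table/column names are made up; sqlite3 stands in for the real source.
    import csv
    import sqlite3

    conn = sqlite3.connect("source.db")

    rows = conn.execute(
        """
        SELECT c.id AS customer_id, o.id AS order_id
        FROM customers c
        JOIN orders o ON o.customer_id = c.id
        """
    )

    with open("placed_edges.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["customer_id", "order_id", "rel_type"])
        for customer_id, order_id in rows:
            writer.writerow([customer_id, order_id, "PLACED"])

    conn.close()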
This assumes that the source is a relational database where the expense of such join operations is very small. If, however, the source were a document database with no join functionality, then forming the edges in the graph database may be the better option. (That isn't to say that document databases can't have join functionality, only that some don't.)
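For comparison, the other route is to form the edges inside the graph database after the nodes are loaded. A rough sketch, assuming Neo4j (since it came up) and the same made-up labels and keys:

    # Sketch: create edges inside the graph database itself, assuming Neo4j
    # and a document source where each order doc carries its customer_id.
    # Connection details, labels, and keys are placeholders.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    order_docs = [{"order_id": 1, "customer_id": 42}]  # would come from the document store

    with driver.session() as session:
        for doc in order_docs:
            # MERGE only creates the relationship if it doesn't already exist.
            session.run(
                "MATCH (c:Customer {id: $cid}) "
                "MATCH (o:Order {id: $oid}) "
                "MERGE (c)-[:PLACED]->(o)",
                cid=doc["customer_id"], oid=doc["order_id"],
            )

    driver.close()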
I think those are the safest general guidelines. Building a transactional pipeline may be a different story, and the cost of the join operation within the graph may well fall inside the target performance window. The particular mix of data engines in the persistence layer, the team's capabilities, and even access to other systems could all change my opinion on the matter.
I recently rebuilt a set of data-import CSVs using command-line join in a bash shell because: 1) I had changed the partitioning scheme, 2) I didn't have access to the source systems, and 3) setting up a local relational database in that environment was more of a hassle than using the command line. I would never recommend that approach as a first option, and in fact, when I had to do the same type of operation more recently, I already had Postgres in place, so I used it.
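If it helps picture that step, here is roughly the same operation written in Python rather than the shell; the file names and columns are made up:

    # Sketch: join two exported CSVs on a shared key, roughly what the
    # command-line join accomplished. File names and columns are made up.
    import csv

    # Load the smaller file into a lookup keyed on the join column.
    with open("customers.csv", newline="") as f:
        customers = {row["customer_id"]: row for row in csv.DictReader(f)}

    with open("orders.csv", newline="") as f_in, \
         open("orders_with_customers.csv", "w", newline="") as f_out:
        reader = csv.DictReader(f_in)
        fieldnames = reader.fieldnames + ["customer_name"]
        writer = csv.DictWriter(f_out, fieldnames=fieldnames)
        writer.writeheader()
        for row in reader:
            match = customers.get(row["customer_id"])
            if match:  # inner join: drop orders with no matching customer
                row["customer_name"] = match["name"]
                writer.writerow(row)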
Hope that helps.
-Josh