I'm trying to create an Alembic bulk insert statement that adds a row to a model which has a many-to-many relationship with another table, but I don't know if this is possible or, if so, what syntax to use.
In my current Alembic migration I do this in two (or more) separate steps:
1.) I add the rows to the table represented by the model.
2.) I add the rows to the mixer (association) table that links the model to its related model/table.
Like so:
g.session.bulk_insert_mappings(
    CvConfiguration,
    [
        {
            "controlled_vocabulary_type_id": controlled_vocabulary_type(
                "cv_vessel_type"
            ),
            "default_cv_id": cv_vessel_type("well_plate"),
        },
    ],
    return_defaults=True,
)
g.session.flush()
mix_cv_organization_cv_configuration_stmt = mix_cv_organization_cv_configuration.insert().values(
    [
        {
            "cv_organization_id": cv_organization("biologics_generation_group"),
            "cv_configuration_id": cv_configuration("cv_vessel_type", "well_plate"),
        },
    ],
)
g.session.execute(mix_cv_organization_cv_configuration_stmt)
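(For context, the models behind this look roughly like the sketch below; the exact columns and the declarative base import are assumptions, and controlled_vocabulary_type / cv_vessel_type / cv_organization / cv_configuration are just small helpers that look up the relevant IDs by name.)

# Rough (assumed) shape of the models involved; the real definitions live in the app.
from sqlalchemy import Column, ForeignKey, Integer, Table
from sqlalchemy.orm import relationship

from myapp.database import Base  # assumption: the project's declarative base

mix_cv_organization_cv_configuration = Table(
    "mix_cv_organization_cv_configuration",
    Base.metadata,
    Column("cv_organization_id", ForeignKey("cv_organization.id"), primary_key=True),
    Column("cv_configuration_id", ForeignKey("cv_configuration.id"), primary_key=True),
)

class CvConfiguration(Base):
    __tablename__ = "cv_configuration"

    id = Column(Integer, primary_key=True)
    controlled_vocabulary_type_id = Column(ForeignKey("controlled_vocabulary_type.id"))
    default_cv_id = Column(ForeignKey("cv.id"))

    # many-to-many to CvOrganization through the mixer table
    used_by_cv_organizations = relationship(
        "CvOrganization",
        secondary=mix_cv_organization_cv_configuration,
        backref="cv_configurations",
    )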
I'd really like to combine the relationship into the bulk_insert_mappings call if possible. So if the relationship on the SQLAlchemy model is called "used_by_cv_organizations", my insert would look something like this, with the related objects (or their foreign keys) given in a list:
g.session.bulk_insert_mappings(
    CvConfiguration,
    [
        {
            "controlled_vocabulary_type_id": controlled_vocabulary_type(
                "cv_vessel_type"
            ),
            "default_cv_id": cv_vessel_type("well_plate"),
            "used_by_cv_organizations": [
                cv_organization("biologics_generation_group")
            ],
        },
    ],
    return_defaults=True,
)
g.session.flush()
Is this possible? Does anyone know how to do it?
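For comparison, the non-bulk ORM pattern that does cascade through the relationship would look roughly like the sketch below (CvOrganization and Session.get are my assumptions about the models and SQLAlchemy version); I'm hoping for a bulk_insert_mappings-style equivalent rather than building full objects one at a time.

# Non-bulk sketch: appending to the relationship collection and adding the
# parent lets the unit of work fill in the mixer table rows automatically.
config = CvConfiguration(
    controlled_vocabulary_type_id=controlled_vocabulary_type("cv_vessel_type"),
    default_cv_id=cv_vessel_type("well_plate"),
)
config.used_by_cv_organizations.append(
    g.session.get(CvOrganization, cv_organization("biologics_generation_group"))
)
g.session.add(config)
g.session.flush()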