--
You received this message because you are subscribed to the Google Groups "Mojolicious" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mojolicious+unsubscribe@googlegroups.com.
To post to this group, send email to mojol...@googlegroups.com.
Visit this group at https://groups.google.com/group/mojolicious.
For more options, visit https://groups.google.com/d/optout.
> I spoke with you about this briefly on Twitter, but just figured I'd put it
> out here, too. What about multiple record inserts on a single call?
> Twitter thread.
Yes, it's possible, not sure about how useful it would actually be though.
my $i = $pg->db->dbh->prepare(
  'insert into b (first, last, birthday, age, phone) values (?, ?, ?, ?, ?)');
my $results = $pg->db->select('a', [qw/first last birthday age phone/],
  undef, {limit => $limit, offset => 1});
while ( my $next = $results->array ) {
  $i->execute(@$next);
}
For example, we can buffer Minion jobs via enqueue and insert them into minion_jobs all at once at the end of the request. That feature is also very valuable for high-load projects if you want to store some kind of logs in the database. So bulk inserts are definitely a good thing, and I use them a lot in my daily job.
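The buffer-then-flush idea above can be sketched without any database at all. This is a minimal, hypothetical RowBuffer class (not part of Mojo::Pg or Minion): rows are collected in memory and handed to a flush callback in one batch, e.g. at the end of a request.

```perl
use strict;
use warnings;

# Hypothetical buffered-insert helper: collect rows, flush them all at
# once via a callback (which in a real app would do the bulk insert).
package RowBuffer;

sub new {
    my ($class, %args) = @_;
    return bless {rows => [], flush_cb => $args{on_flush}}, $class;
}

sub add {
    my ($self, $row) = @_;
    push @{$self->{rows}}, $row;
    return $self;
}

sub flush {
    my $self = shift;
    my @rows = @{$self->{rows}};
    @{$self->{rows}} = ();
    $self->{flush_cb}->(\@rows) if @rows;
    return scalar @rows;
}

package main;

my @flushed;
my $buffer = RowBuffer->new(on_flush => sub { push @flushed, @{$_[0]} });
$buffer->add({task => 'resize'})->add({task => 'upload'});
my $count = $buffer->flush;    # both rows handed to the callback in one call
print "$count\n";              # 2
```

In a real application the on_flush callback would perform a single transaction-wrapped insert instead of pushing onto an array.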
It is of note here that each insert is a transaction. What this means is Postgres is doing some extra coordination to make sure the transaction is completed before returning. On every single write this takes some overhead. Instead of single row transactions, if we wrap all of our inserts in a transaction, we’ll see some nice performance gains. -- CitusData Blog
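Besides wrapping many single-row inserts in one transaction, Postgres also accepts a multi-row VALUES list, which amortizes per-statement overhead further. A minimal sketch of building such a statement by hand (the helper name build_multi_insert is hypothetical, not part of Mojo::Pg or SQL::Abstract):

```perl
use strict;
use warnings;

# Build one multi-row INSERT statement plus a flat list of bind values.
# Hypothetical helper for illustration only.
sub build_multi_insert {
    my ($table, $columns, $rows) = @_;

    # One "(?, ?, ...)" tuple per row, all in a single statement
    my $tuple = '(' . join(', ', ('?') x @$columns) . ')';
    my $sql   = sprintf 'insert into %s (%s) values %s',
        $table, join(', ', @$columns), join(', ', ($tuple) x @$rows);

    # Flatten each row's values in column order
    my @binds = map { @{$_}{@$columns} } @$rows;
    return ($sql, @binds);
}

my ($sql, @binds) = build_multi_insert(
    'b',
    [qw/first last/],
    [{first => 'Ada', last => 'Lovelace'}, {first => 'Alan', last => 'Turing'}],
);
# $sql is "insert into b (first, last) values (?, ?), (?, ?)"
# @binds is ('Ada', 'Lovelace', 'Alan', 'Turing')
```

Something like $pg->db->query($sql, @binds) would then issue the whole batch in one round trip; Postgres does cap the number of bind parameters per statement, so very large batches would still need chunking.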
my $results = $pg->db->select('a', [qw/first last birthday age phone/], undef, {limit => $limit, offset => 1});
$pg->db->bulk_insert('b', $results->hashes->to_array);
# $records is an array of hashes
sub bulk_insert {
  my ($self, $table, $records, $options) = @_;
  my ($stmt) = $self->pg->abstract->insert($table, $records->[0], $options);
  eval {
    my $tx = $self->pg->db->begin;
    my $i  = $self->pg->db->dbh->prepare($stmt);
    while ( my $next = shift @$records ) {
      $i->execute(@$next{sort keys %$next});
    }
    $tx->commit;
  };
  return $@ if $@;
}
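A subtle point in that bulk_insert: it binds values with a hash slice in sorted key order, and this only lines up with the placeholders because SQL::Abstract also builds the column list from sorted hash keys for hashref inserts. The idiom can be demonstrated with plain Perl, no database needed:

```perl
use strict;
use warnings;

# Sorted hash-slice idiom: columns and bind values derived from the
# same sorted key list, so they stay in matching order.
my %row = (first => 'Ada', last => 'Lovelace', age => 36);

my @columns = sort keys %row;    # ('age', 'first', 'last')
my @binds   = @row{@columns};    # values in that same order

print join(', ', @columns), "\n";    # age, first, last
print join(', ', @binds),   "\n";    # 36, Ada, Lovelace
```

If SQL::Abstract ever ordered columns differently, the execute calls would silently bind values to the wrong columns, so the two sides must come from the same ordering rule.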
As Sebastian said at the beginning, it's not really all that useful for Mojo::Pg.
sub bulk_upsert {
  my ($self, $table, $records, $options) = @_;
  my ($stmt) = $self->pg->abstract->insert($table, $records->[0],
    {on_conflict => [$table, $records->[0]], %$options});
  eval {
    my $tx = $self->pg->db->begin;
    my $i  = $self->pg->db->dbh->prepare($stmt);
    while ( my $next = shift @$records ) {
      $i->execute(@$next{sort keys %$next}, @$next{sort keys %$next});
    }
    $tx->commit;
  };
  return $@ if $@;
}
sub bulk_insert {
  my ($self, $table, $records, $options) = @_;
  my ($stmt) = $self->pg->abstract->insert($table, $records->[0], $options);
  eval {
    my $tx = $self->pg->db->begin;
    my $i  = $self->pg->db->dbh->prepare($stmt);
    while ( my $next = shift @$records ) {
      $i->execute(@$next{sort keys %$next});
    }
    $tx->commit;
  };
  return $@ if $@;
}
There's a lot wrong with this. You should hold on to $db objects; there's a pool of connections in Mojo::Pg. Manual prepare/execute is pointless, Mojo::Pg::Database has a transparent statement handle cache, so once you hold on to the $db object you get caching for free without having to do anything. Just pass the same data structure to ->insert or the same SQL to ->query and you're done.
my $db = $pg->db;
my $results = $db->select('a', [qw/first last birthday age phone/],
  undef, {limit => $limit, offset => 1});
my $tx = $db->begin;
while ( my $next = $results->hash ) {
  $db->insert('b', $next);
}
$tx->commit;
Thanks for the feedback! Is this a correct implementation of what you're saying?

my $db = $pg->db;
my $results = $db->select('a', [qw/first last birthday age phone/],
  undef, {limit => $limit, offset => 1});
my $tx = $db->begin;
while ( my $next = $results->hash ) {
  $db->insert('b', $next);
}
$tx->commit;
Almost. You do not want more than one active statement handle per connection. The while loop is pointless there anyway, because DBD::Pg does not support cursors, so all results will be sent to DBD::Pg as soon as you call $results->hash for the first time anyway. Better to use $results->hashes->each, which releases the statement handle and frees up the connection for inserts.
my $db = $pg->db;
my $tx = $db->begin;
$results->hashes->each(sub { $db->insert('b', $_) });
$tx->commit;