Sorcery activity logging


Hans

Sep 25, 2015, 8:27:05 AM9/25/15
to Ruby on Rails: Talk
I am now refactoring and optimizing my code and noticed that the calls to update last_activity_at vary between 50 and 100 ms.
I think that is too much for a simple update.

Have others had the same or a different experience with Sorcery?

My user table has 6000 rows and 36 fields (too many; it should be refactored anyhow).

Could the slow response times be due to this table, Sorcery, the server, or the rest of the application?

What is wrong?



Albert Ramstedt

Sep 25, 2015, 9:16:44 AM9/25/15
to rubyonra...@googlegroups.com
Hello Hans!

In order for anyone to help you (including people who might not be experienced with Sorcery), a lot more information is needed. But perhaps some debugging hints will be helpful:

Could you post the SQL output (and an EXPLAIN ANALYZE of the query) that the update triggers? Are there any callbacks (or database-level triggers, index updates) doing work during the update, or is it only the SQL itself that takes time?

An update to a table of 6000 rows should not take anywhere near that long just to set a date. But there might be other things happening when you do this.
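To separate "slow SQL" from "slow callbacks", one option is to time the update with and without Active Record callbacks and compare. This is a minimal sketch in plain Ruby; `timed` is a hypothetical helper, not part of Sorcery or Rails:

```ruby
require "benchmark"

# Hypothetical helper: run a block, print its wall-clock time in
# milliseconds (similar to the "(75.3ms)" figures in the Rails SQL log),
# and return the block's value.
def timed(label)
  result = nil
  ms = Benchmark.realtime { result = yield } * 1000
  puts format("%s (%.1fms)", label, ms)
  result
end
```

In a Rails console you could then compare, for example, `timed("with callbacks") { user.update_attribute(:last_activity_at, Time.now) }` against `timed("callbacks skipped") { user.update_column(:last_activity_at, Time.now) }` (`update_column` skips callbacks and validations). If the second call is fast and the first is slow, the time is going into callbacks rather than the UPDATE itself.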

Albert


--
You received this message because you are subscribed to the Google Groups "Ruby on Rails: Talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rubyonrails-ta...@googlegroups.com.
To post to this group, send email to rubyonra...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/rubyonrails-talk/1e57391f-fe96-47ec-ae56-ab918ae75a1f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Hans

Sep 25, 2015, 10:17:36 AM9/25/15
to Ruby on Rails: Talk
Thanks for your reply.
However, as Sorcery is a gem and the updates of last_activity_at and last_login_at are made from inside the gem, I cannot add the EXPLAIN command. Here are some relevant log outputs:

SQL (75.3ms)  UPDATE `users` SET `last_login_at` = '2015-09-25 11:22:12' WHERE `users`.`id` = 1
SQL (40.0ms)  UPDATE `users` SET `last_activity_at` = '2015-09-25 11:22:12' WHERE `users`.`id` = 1

to be compared with
(0.3ms)  UPDATE `users` SET `visits` = 1124, `updated_at` = '2015-09-25 11:22:12' WHERE `users`.`id` = 1
where I do about the same thing in my application

Matt Jones

Oct 1, 2015, 10:53:23 AM10/1/15
to Ruby on Rails: Talk


On Friday, 25 September 2015 10:17:36 UTC-4, Hans wrote:
> Thanks for your reply.
> However, as Sorcery is a gem and the updates of last_activity_at and last_login_at are made from inside the gem, I cannot add the EXPLAIN command. Here are some relevant log outputs:
>
> SQL (75.3ms)  UPDATE `users` SET `last_login_at` = '2015-09-25 11:22:12' WHERE `users`.`id` = 1
> SQL (40.0ms)  UPDATE `users` SET `last_activity_at` = '2015-09-25 11:22:12' WHERE `users`.`id` = 1
>
> to be compared with
> (0.3ms)  UPDATE `users` SET `visits` = 1124, `updated_at` = '2015-09-25 11:22:12' WHERE `users`.`id` = 1
> where I do about the same thing in my application


75ms and 40ms seem wildly out of line for updating a single row in a table with only 6000 rows. Can you provide more specifics on how the application server and database server are set up? In particular, how long does it take for a simple `ping` to get from the application server to the database?

--Matt Jones

Hans Marmolin

Oct 2, 2015, 5:55:18 AM10/2/15
to rubyonra...@googlegroups.com
Matt, thanks for your answer.

A ping gives this result:
64 bytes from 193.45.5.64: icmp_req=7 ttl=64 time=0.013 ms
64 bytes from 193.45.5.64: icmp_req=8 ttl=64 time=0.029 ms
64 bytes from 193.45.5.64: icmp_req=9 ttl=64 time=0.017 ms

The server runs Phusion Passenger with Nginx on our own dedicated machine at a web host that uses Apache + FastCGI + Ruby on Rails under Ubuntu Linux.
We are using Rails 3.2 and Ruby 1.9.

Albert Ramstedt

Oct 5, 2015, 8:09:42 AM10/5/15
to rubyonra...@googlegroups.com
Firstly, you can most certainly change the gem you use to output EXPLAIN information. The easy way, if you have the code locally, is just to run "bundle open <gemname>" and edit the gem's code to do this. Otherwise, fork it on GitHub, insert debug logging in your fork, and switch to using your fork instead of the released gem.
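A lighter-weight alternative to editing the gem, assuming Rails 3.2: subscribe to Active Record's `sql.active_record` notifications in an initializer and log any slow statements. This catches the gem's internal UPDATEs without forking it (the 20 ms threshold and file name here are arbitrary):

```ruby
# config/initializers/slow_sql_logger.rb -- a sketch, not production code.
# Every Active Record query fires a "sql.active_record" notification;
# the payload carries the raw SQL, and start/finish give the duration.
ActiveSupport::Notifications.subscribe("sql.active_record") do |_name, start, finish, _id, payload|
  ms = (finish - start) * 1000
  if ms > 20 && payload[:sql] =~ /\bUPDATE\b/
    Rails.logger.warn(format("SLOW SQL (%.1fms): %s", ms, payload[:sql]))
  end
end
```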

Secondly, it seems very unlikely that the gem is the problem. If an update really takes 70 ms with the SQL you posted, it should be easily reproducible in a local SQL session, where you can also run EXPLAIN ANALYZE and so on.

Do make sure the query is not run exactly the same way twice in a row, though, since the second run would most likely be served from a cache. I am not 100% sure how the database handles caching of updates (do you use Postgres?), but I think it could have an impact.

I would recommend googling for articles on using EXPLAIN and ANALYZE, since they are great tools to know when troubleshooting SQL queries.

I suspect the machine you run this on is rather slow, or the database is capped somehow. Or it could simply be that you have big indices on the table you are updating, which slows down inserts and updates.
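Concretely, the checks above might look like this in a database session. MySQL is assumed here from the backtick quoting in the posted log lines (note EXPLAIN ANALYZE is Postgres syntax; on MySQL, EXPLAIN on an UPDATE requires version 5.6.3 or later, so on older servers you can EXPLAIN the equivalent SELECT, which uses the same row lookup):

```sql
-- How is the row being located? (Should show a primary-key lookup.)
EXPLAIN SELECT * FROM users WHERE id = 1;

-- How many indexes does an UPDATE on this table have to maintain?
SHOW INDEX FROM users;
```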

Albert
