Uh? Maximum number of requests per session?


The Bitland Prince

Oct 28, 2010, 12:02:03 PM
to ravendb
Hello,

earlier this month I changed my code so that it no longer
"micro-manages" sessions, and switched to a common, per-request
session management.

Basically my ASP.NET code opens a common session for each request (in
Begin_Request) and closes it at end. Meanwhile, lots of things happen,
including loading of multiple web controls, localization of strings
and so on. Today I received for the first time ever this error:

"The maximum number of requests (30) allowed for this session has been
reached.
Raven limits the number of remote calls that a session is allowed to
make as an early warning system. Sessions are expected to be short
lived, and Raven provides facilities like Load(string[] keys) to load
multiple documents at once and batch saves.
You can increase the limit by setting
DocumentConvention.MaxNumberOfRequestsPerSession or
DocumentSession.MaxNumberOfRequestsPerSession, but it is advisable
that you look into reducing the number of remote calls first, since
that will speed up your application significantly and result in a
more responsive application."

Which is clear: it just suggests not making too many calls with the
same session. But my question then is: isn't it better to try to
micro-manage sessions so that they are VERY short lived?

Since I really don't know how many requests my code could ever make
with the same session, to be safe I should increase this value to
something very high (1000, perhaps?), even though most of them will be
load requests and only a very few are expected to be write requests.
What are the exact implications of raising the request limit for each
session?
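(For reference, the two knobs the error message names can be set like
this; a sketch only, since the exact member placement on the session
varied across early client builds:)

```csharp
// Raise the limit for every session created by this store
// (the DocumentConvention route named in the error message).
var store = new DocumentStore { Url = "http://localhost:8080" };
store.Initialize();
store.Conventions.MaxNumberOfRequestsPerSession = 100;

// Or raise it for a single, known-heavy session only.
using (var session = store.OpenSession())
{
    session.Advanced.MaxNumberOfRequestsPerSession = 100;
    // ... loads and queries as usual ...
}
```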

Thank you.

gdmk

Oct 28, 2010, 4:44:02 PM
to ravendb
You most probably don't need to make that many calls.
The limit is there to prevent SELECT N+1 type errors.
http://ayende.com/Blog/archive/2006/05/02/CombatingTheSelectN1ProblemInNHibernate.aspx

Refactor.


Ayende Rahien

Oct 29, 2010, 2:49:56 AM
to rav...@googlegroups.com
Basically, this is meant to stop you from doing SELECT N+1
30 is a VERY high number (I usually go with 3-5).
If you are hitting this limit, you need to figure out why. The most likely reason is that you are calling RavenDB in a loop.

This is an early warning system that is meant to let you know about potential performance problems.

The Bitland Prince

Oct 29, 2010, 7:36:10 AM
to ravendb
Hello,

thank you for your reply, but to me this is something to be expected.

What I am doing is opening a session in my App_BeginRequest event
(remember? You told me that was the best way to do that :-) and using
that session for all DB access during the request. Now, leaving aside
that I haven't implemented any caching yet, all the data for my page is
retrieved via that session, because it is shared across a single page
request and will be closed when the request ends.

Now consider this:

* page has to load itself and its settings (multiple DB accesses, though
those can be reduced by implementing a caching system);
* page has to load controls and their data (multiple DB accesses, and
there could be an unlimited number of controls on each page);
* each control will probably load one or more data objects to retrieve
its data;
* the localization resource provider will load localization strings using
the same session;
* additional processing might occur.

Now, the problem is that having more than 30 requests per session is NOT
the result of a bug or of race conditions (infinite loops and so on).
That is what will normally happen during the page processing lifecycle,
and it is totally expected. I'm not able to predict how many requests
will be needed for a single page request.

That is exactly why I was micro-managing sessions. That's what I would
do with SQL database connections, but I've been told to implement a
single, shared session per request, because that would be better. Now I
find myself a bit lost with this 30-requests-per-session limit which,
I'd like to make clear, I am hitting for entirely expected reasons.

What is the best way to implement this kind of system (which is
basically a CMS)?

Thank you all.

Chris Marisic

Oct 29, 2010, 8:45:06 AM
to ravendb
What they've been pointing out is that your page making 30+ requests
is bad. Even with 10 users you're looking at 300 requests; make it 100
users... and that many requests every few seconds. If you need that
many requests, your application's data structure is not laid out very
well, and you'll face extreme scalability issues past even a few dozen
users.

So you have a few options:

1. Ignore that your application is written very inefficiently, accept
that should you ever need to scale you will just need lots of hardware
(and an application that works correctly in a web farm), and simply
increase the max # of connections per session.

2. Address the underlying design issues that require 30+ connections
per request.


The Bitland Prince

Oct 29, 2010, 10:26:20 AM
to ravendb
Yes, I understand that, and I also understand what it means to have
many requests coming in at once. That rough number will go down once
optimization kicks in, but I still expect a high number of requests
because of the modular design of the application. You cannot optimize
things you don't know about at design time, and this application is
meant to let users develop their own modules and plug them in. My
question was not about that, though.

If there's a limit on the number of requests per session, that means
it is highly inefficient to use more than 30 requests in a session,
unless it's just some kind of debug warning meant to flag that high
number.

So my original question stands, because I *already* expected this and
was attempting to micro-manage sessions. While designing that way, I
was advised not to, because it would have been inefficient, and to use
instead a single session shared across the whole request lifetime.
Then I hit this limit, and if it's in place, there must be a reason
for it, mustn't there?

My question then is: given such a high number of requests per session,
should I go back to micro-managing my sessions? Given that the high
number is not an error but by design (and, while optimization will
come, I don't expect to reduce 30+ requests to 2-4), is sharing a
session across the Web request lifetime still the suggested approach?

Thanks.

Ayende Rahien

Oct 29, 2010, 10:35:14 AM
to rav...@googlegroups.com
Replies inline:

On Fri, Oct 29, 2010 at 1:36 PM, The Bitland Prince <guglielm...@gmail.com> wrote:
> Hello,
>
> thank you for your reply, but to me this is something to be expected.
>
> What I am doing is opening a session in my App_BeginRequest event
> (remember? You told me that was the best way to do that :-)

Yep, and that is still recommended
 
> and use
> that session for all DB access during the request. Now, leaving aside
> that I haven't implemented any caching yet, all the data for my page is
> retrieved via that session, because it is shared across a single page
> request and will be closed when the request ends.


Still a best practice.
 
> Now consider this:
>
> * page has to load itself and its settings (multiple DB accesses, though
> those can be reduced by implementing a caching system);
> * page has to load controls and their data (multiple DB accesses, and
> there could be an unlimited number of controls on each page);

Nope, here you are doing something wrong.
We have support for:

session.Load(string[] ids)

Which allows you to load multiple documents in a single remote call.

Actually, I would say that you want to store all state and controls for a page in the page document, not spread them around.
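(The difference being described — one remote call per document versus one
batched call — might be sketched like this; WebControl, Render, and the
id list are made-up names for illustration:)

```csharp
// SELECT N+1 style: one remote call per iteration. With 30 controls,
// this alone exhausts the default per-session limit.
foreach (var id in controlIds)
{
    var control = session.Load<WebControl>(id);
    Render(control);
}

// Batched: a single remote call fetches all the documents.
var controls = session.Load<WebControl>(controlIds.ToArray());
foreach (var control in controls)
    Render(control);
```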
 
> * each control will probably load one or more data objects to retrieve
> its data;

You can use the overload to load or use Include to avoid that.
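(Include, as suggested, could look roughly like this against a
hypothetical Page document that stores its control ids; the exact
Include syntax varied across client versions:)

```csharp
// One round trip loads the page AND pre-fetches every referenced control.
var page = session.Include<Page>(p => p.ControlIds)
                  .Load("pages/home");

foreach (var controlId in page.ControlIds)
{
    // Already in the session cache: no additional remote call is made.
    var control = session.Load<WebControl>(controlId);
    Render(control);
}
```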
 
> * the localization resource provider will load localization strings using
> the same session;

Localization stuff shouldn't be a separate document, it should be something like:

{ "Name": { "He": "אורן", "En": "Oren" } }

 
> * additional processing might occur.


What you are basically saying is that your page might make N queries.
You can enable that if you want, by setting MaxNumberOfRequests to int.MaxValue.
But the reason we have this limit is so that you stop and think about it, and optimize your data access.
 
> Now, the problem is that having more than 30 requests per session is NOT
> the result of a bug or of race conditions (infinite loops and so on).
> That is what will normally happen during the page processing lifecycle,
> and it is totally expected. I'm not able to predict how many requests
> will be needed for a single page request.

Actually, not really. At 30+ RavenDB requests per web request, you are looking at a LOT of remote calls.
Even if we assume a 10 ms response time from RavenDB, you have just ensured that every one of your page requests takes at LEAST 300 ms.
And once you have more than a few users, you may reach the point where you have literally thousands of requests hitting RavenDB.
It can usually handle that, but it isn't something that will be performant or scalable.


> That is exactly why I was micro-managing sessions. That's what I would
> do with SQL database connections, but I've been told to implement a
> single, shared session per request, because that would be better. Now I
> find myself a bit lost with this 30-requests-per-session limit which,
> I'd like to make clear, I am hitting for entirely expected reasons.

Again, you can disable this early warning system if you want to.

store.Conventions.MaxNumberOfRequestsPerSession = int.MaxValue;

But be aware that you still have to deal with the problems of N requests.
 
> What is the best way to implement this kind of system (which is
> basically a CMS)?


See my previous comments on how to optimize such calls. 

Ayende Rahien

Oct 29, 2010, 10:37:18 AM
to rav...@googlegroups.com
No, you should NOT micro-manage the session.
If you want to allow a high number of requests, just allow it.

In addition to that, you probably want to read this:

The Bitland Prince

Oct 29, 2010, 11:54:07 AM
to ravendb
There surely are very relevant hints here that I need to consider. As
I said, I'm not optimizing much at this stage, so an optimization
round will come later. Still, several of these hints are genuinely
interesting.

Anyway, I still think other parts of the code cannot be optimized that
much, as development is decoupled via MEF and componentization, though
some common data is shared. For example, consider a localization
resource provider which allows things like this in pages and controls:

<asp:Literal ID="Literal1" runat="server" Text="<%$ Resources:system, home%>" />

On a page you can easily have, say, 10 of them. If you consider 10
controls per page (not so unlikely) and 2 localization items per
control (which is probably an underestimate), you could easily have 30
requests to the database just to localize that page. Besides caching
(which will surely happen), what would be an easier way to reduce such
calls to a handful? What would be a good approach, in your opinion?

Of course, things are different when you KNOW that your team is the
only one developing code for your project: any kind of optimization is
possible in that case.

But anyway, thanks: there are hints here I should carefully evaluate.

Ayende Rahien

Oct 30, 2010, 8:58:48 AM
to rav...@googlegroups.com
In the scenario you present, it is pretty much guaranteed that you want to take another path toward loading the localization data, rather than putting each string in its own document.
Precisely because you want to avoid this sort of issue.

Would you mind if I use this as a basis for a post?

DanPlaskon

Oct 30, 2010, 9:26:09 AM
to ravendb
This is actually a pretty interesting use case for Raven... I'm pretty
sure this would be workable with a custom ResourceProviderFactory /
IResourceProvider combo. You're given either the global (class
context) or the local (location context) key upfront when you
construct the resource provider in the factory, so you could pull and
cache everything used for the subsequent GetObject requests later in
the pipeline.

I believe I've seen caching examples using SQL Server for this sort of
thing, so I assume the impl would be pretty similar.
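(One possible shape for the factory/provider pair described here —
everything beyond the ASP.NET provider base types, i.e. the document
class, the id scheme, and Global.DocumentStore, is an assumption for
illustration:)

```csharp
public class RavenResourceProviderFactory : ResourceProviderFactory
{
    public override IResourceProvider CreateGlobalResourceProvider(string classKey)
    {
        return new RavenResourceProvider("resources/" + classKey);
    }

    public override IResourceProvider CreateLocalResourceProvider(string virtualPath)
    {
        return new RavenResourceProvider("resources/" + virtualPath.Trim('/'));
    }
}

public class RavenResourceProvider : IResourceProvider
{
    // key -> (culture name -> value); fetched once per resource class.
    private readonly Dictionary<string, Dictionary<string, string>> entries;

    public RavenResourceProvider(string documentId)
    {
        // One remote call up front; GetObject is then answered from memory.
        using (var session = Global.DocumentStore.OpenSession())
        {
            var doc = session.Load<ResourceDocument>(documentId);
            entries = (doc != null)
                ? doc.Entries
                : new Dictionary<string, Dictionary<string, string>>();
        }
    }

    public object GetObject(string resourceKey, CultureInfo culture)
    {
        Dictionary<string, string> byCulture;
        if (!entries.TryGetValue(resourceKey, out byCulture))
            return null;

        var name = (culture ?? CultureInfo.InvariantCulture).Name;
        string value;
        return byCulture.TryGetValue(name, out value) ? value : null;
    }

    public IResourceReader ResourceReader
    {
        get { throw new NotSupportedException(); }
    }
}
```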



Ayende Rahien

Oct 30, 2010, 9:33:02 AM
to rav...@googlegroups.com
I wouldn't even bother, I would simply put all the resources into a single doc.

DanPlaskon

Oct 30, 2010, 9:37:46 AM
to ravendb
Right, but surely fetching that on each request wouldn't even be
needed... wouldn't you just cache it locally when constructing the
resource provider itself?

Overall, I have a great deal of interest in trying to put together a
complete stack of Providers for ASP.NET (the relevant ones, anyway);
sort of what the memcached guys are doing (details here:
http://memcached.enyim.com/post/1294551316/new-releases-asp-net).

I'd imagine the following are viable:

SessionProvider
OutputCacheProvider
ResourceProvider
MembershipProvider?

Might make for an interesting 'raven contrib' project?


Ayende Rahien

Oct 30, 2010, 9:40:47 AM
to rav...@googlegroups.com
Yes, you could do that, sure.

And yes, they would be an interesting addition to the contrib.

DanPlaskon

Oct 30, 2010, 9:48:01 AM
to ravendb
The only *tricky* part I can see with a ResourceProvider is figuring
out how to handle the fallback behavior for getting each resource,
e.g.:

exists for fr-CA culture? -> exists for fr culture? -> exists for
default culture?

One other thing I'm a little curious about: how would one go about
managing the session in some sort of 'contrib' project? Since you have
little context on how the session is managed in the end-user
application that makes use of it, do you do a Session.SaveChanges()
explicitly after each operation, or assume that the user controls that
at some higher level? I guess service location would also be needed to
get the session or the session factory, since you don't have control
over how those providers are invoked by ASP.NET itself and probably
can't count on DI to inject instances into the factory impls.



Ayende Rahien

Oct 30, 2010, 9:55:36 AM
to rav...@googlegroups.com
That really depends on the type of the API.
The MembershipProvider API, for example, forces you to do just that.

As for cultures, I would have:

/foo
/foo/fr-CA

Then just load them all and run a chain of responsibility between them.
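(That chain, sketched — the document ids and the StringsDocument type
are illustrative; a single batched Load keeps the whole fallback to one
remote call:)

```csharp
// fr-CA -> fr -> invariant: return the first document that knows the key.
public static string GetString(IDocumentSession session, string classKey,
                               CultureInfo culture, string resourceKey)
{
    var ids = new[]
    {
        "resources/" + classKey + "/" + culture.Name,                      // fr-CA
        "resources/" + classKey + "/" + culture.TwoLetterISOLanguageName,  // fr
        "resources/" + classKey                                            // default
    };

    // One remote call for all three candidate documents.
    var docs = session.Load<StringsDocument>(ids);

    foreach (var doc in docs)
    {
        string value;
        if (doc != null && doc.Strings.TryGetValue(resourceKey, out value))
            return value;
    }
    return null; // the caller decides how to treat a missing key
}
```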

The Bitland Prince

Oct 30, 2010, 10:40:48 AM
to ravendb
Hello, I will elaborate later about my code. Just as a summary, I
have already implemented:

MembershipProvider
RoleProvider
ResourceProvider and ResourceProviderFactory (with ResourceDataReader)
SitemapProvider

all of them based on RavenDB. Ayende's suggestion to put all
localization items into a single object should be evaluated, as
resource provider factories get cached by the framework itself, but
would it be wise to load all localization keys into memory just to
access a few of them? By the same logic, the Membership provider
should load all users' data at once...

By the way, having the resource provider fall back is a pretty simple
thing. The only tricky part is that keys for the invariant culture
must always be available, or ASP.NET page compilation won't succeed.




DanPlaskon

Oct 30, 2010, 11:32:42 AM
to ravendb
Wow, that's awesome; would you consider making the source available
for these at some point in the future? I would certainly be interested
in helping you firm up the implementations, etc.

In addition, I'd love to see a SessionProvider and OutputCacheProvider
as well; I think they'd be pretty trivial to set up. If you're
interested, hit me up offline and we can discuss further.




Ayende Rahien

Oct 30, 2010, 1:13:41 PM
to rav...@googlegroups.com
Users and resources are pretty different things.
You usually only have a single user in a particular request, but MANY resources.

DanPlaskon

Oct 30, 2010, 1:16:36 PM
to ravendb
Not to mention that resources are (more or less) immutable during the
lifecycle of the application, so fetching a single document containing
all resources would seem like a sane enough thing to do.


The Bitland Prince

Oct 31, 2010, 8:33:45 AM
to ravendb
Hello,

I will surely share the source once the code is stable. The whole
project will actually be open-sourced.

Right now, rather than trying to pull all resources out into a single
document, I'm going to implement some key-level caching, since the
.NET framework creates a singleton for the factory objects which
manage the individual resource classes. I think it's easier and more
effective to store keys in a lookup structure (like a dictionary) as
they are retrieved, and treat that as a cache for any later request
for the same key.
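(The lazy per-key cache described here could be as small as this;
ConcurrentDictionary shipped with .NET 4, which was current at the
time, and the fetch delegate stands in for the actual RavenDB lookup:)

```csharp
public class ResourceKeyCache
{
    private readonly ConcurrentDictionary<string, string> cache =
        new ConcurrentDictionary<string, string>();
    private readonly Func<string, string> fetch; // e.g. a RavenDB load by key

    public ResourceKeyCache(Func<string, string> fetch)
    {
        this.fetch = fetch;
    }

    public string Get(string key)
    {
        // First access pays the remote call; later accesses are in-memory.
        return cache.GetOrAdd(key, fetch);
    }
}
```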

That way, we only cache keys which have been accessed and might be
accessed again, and memory requirements increase progressively as more
keys need to be cached. It also allows us to completely ignore whole
classes of keys which might not matter during execution (while
localization data is usually made of small strings like hello, bye,
welcome and so on, I've seen projects requiring thousands of them...
if you want to support, say, 5 languages, that would surely add up).




Ayende Rahien

Oct 31, 2010, 8:42:42 AM
to rav...@googlegroups.com
How many keys will you have?
Let us say that you have 10,000 different resources.
Let us further say that each resource has 5 cultures and each value is 32 bytes long.

That still gives you only about 1.5 MB if you hold them all in memory (10,000 × 5 × 32 bytes = 1.6 million bytes).
But the process of writing the cache, paging things into memory, etc., is going to be much more expensive.

DanPlaskon

Oct 31, 2010, 12:41:36 PM
to ravendb
I wonder how the built-in framework ResourceManager handles this? I
suspect examination of the .NET source, or perhaps a little Reflector
investigation, could answer that.

The only scenario I can think of where caching the entire resource set
may be problematic is when a lot of blobs (e.g. image resources) are
involved; clearly on-demand fetching would be advantageous there. But
for a typical string-based resource table? I think Ayende has the
right idea here: memory is cheap compared to all the round-trips,
serialization, etc.


Ayende Rahien

Oct 31, 2010, 12:43:28 PM
to rav...@googlegroups.com
Dan,
Let us do the math again, okay?
Most of the images in resources tend to be in the < 5 KB range.
Let us say that we have 10,000 of them...
You are still holding less than 50 MB of memory.

Don't worry about it.

DanPlaskon

Oct 31, 2010, 12:46:23 PM
to ravendb
Ayende,

10-4; not worrying about it ;)


Bevermans

Dec 3, 2010, 11:03:50 AM
to ravendb
Just as a side note (so no help needed for my problem below):

I too am interested in a sort of RavenDB caching/resource provider.
Today I also ran into a maximum-number-of-requests error. Usually this
(very helpful) notification forces me to go back to the drawing board
and re-design my services, but today the error happened because I was
using a commercial TreeView which, in callback mode, seems to rebuild
its tree from the ground up for every node I expand. So as more and
more nodes get expanded, I eventually hit this error. I use paging by
default, so getting the data for an expanded node sometimes needs
several calls (e.g. if an expanded node contains 55 subnodes and my
page size is 25, I need to make 3 calls).

Evert Wiesenekker

Ayende Rahien

Dec 3, 2010, 3:58:47 PM
to ravendb
Bevermans,
Are you using a WPF app for this?

Bevermans

Dec 3, 2010, 5:22:52 PM
to ravendb
No it is a 'normal' ASP.NET application.

Ayende Rahien

Dec 3, 2010, 6:07:16 PM
to rav...@googlegroups.com
Then you should rethink your approach.
Your control might be behaving badly, and you will want to take control of that yourself.

Bevermans

Dec 7, 2010, 1:22:48 PM
to ravendb
Thanks, I totally agree with you now (a weekend helps), and again this
error proves to be very helpful...