Hello,
I've struggled with this in various forms over the seven years or so we've been running DSpace. High load on public servers can easily exhaust PostgreSQL connection slots. The easy answer is to increase the connection limits, but before that it's better to understand why the system load is increasing. Here are a few tips.
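Before changing anything, it's worth confirming how close you actually are to the limit. A quick way to check (a sketch, run in psql as a superuser; "dspace" below is just an example database name):

```sql
-- Shows open connections per database and the server-wide cap
SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;
SHOW max_connections;
```

If the DSpace database's count is regularly near max_connections, that confirms connection exhaustion rather than some other failure.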
The easiest thing is to enable DSpace's XML sitemaps. Search engines like Google really hammer the repository as they crawl, following all sorts of dynamic links in the Browse and Discovery sidebars. Instead, register your web property with Google Webmaster Tools and give them the path to your sitemap so they can reach each item directly without crawling haphazardly. Once you're sure Google is consuming your sitemap, you can block the dynamic pages in robots.txt. Here's the link on the wiki for DSpace 4:
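For illustration, a robots.txt along these lines blocks the dynamic pages once the sitemap is in place (the hostname and paths here are examples; the Disallow paths assume the XMLUI, so adjust them to your URL layout):

```
User-agent: *
# Point well-behaved crawlers at the sitemap instead of dynamic pages
Sitemap: http://repo.example.org/sitemap
# Dynamic browse/search pages that generate endless crawlable URLs
Disallow: /discover
Disallow: /search-filter
Disallow: /browse
```

Don't deploy the Disallow lines until you've verified in Webmaster Tools that the sitemap is being fetched, or you'll hurt your indexing.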
Second, look at your web server access logs. You might see many requests from bots like Bing, Yandex, Google, Slurp, etc., and notice they are all coming from different IP addresses—sometimes from five or ten concurrently! Another place you might see this is in the "Current Activity" tab in the DSpace Admin UI control panel. The problem is that each of these connections creates a new Tomcat session, which consumes precious memory, CPU, and other resources. You can enable a Crawler Session Manager Valve in your Tomcat config, which tells Tomcat to make all user agents matching a certain pattern share a single session. There are some notes from me in the comments here:
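As a sketch, the valve goes in Tomcat's server.xml; the user-agent regex below is just an example pattern, so tune it to whichever bots show up in your own logs:

```xml
<!-- In $CATALINA_HOME/conf/server.xml, inside the <Host> element.
     All requests whose User-Agent matches the regex share one session. -->
<Valve className="org.apache.catalina.valves.CrawlerSessionManagerValve"
       crawlerUserAgents=".*[bB]ot.*|.*Yandex.*|.*Slurp.*" />
```

After restarting Tomcat you should see the session count in the Current Activity tab drop noticeably during bot traffic.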
And finally, the last link is a discussion from a recent developers meeting about updating the DSpace defaults for PostgreSQL connections.
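If you do end up raising the limits, note that there are two knobs that have to agree: DSpace's pool size in dspace.cfg and PostgreSQL's own cap in postgresql.conf (the numbers below are illustrative, not recommendations):

```
# dspace.cfg — maximum connections DSpace's pool will open
db.maxconnections = 30

# postgresql.conf — must leave headroom above the sum of all pools
# plus any other clients, or you're back to exhausted slots
max_connections = 100
```

Remember PostgreSQL needs a restart (not just a reload) for max_connections to take effect.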
I hope that helps. Cheers,