public void execute(Tuple input, BasicOutputCollector collector) {
    try {
        con = connector.getConnection(sqlDBUrl, sqlUser, sqlPassword);
    } catch (ClassNotFoundException | SQLException e) {
        e.printStackTrace();
        return; // without a connection, prepareStatement below would throw an NPE
    }
    PreparedStatement pst = null;
    // parse the Status object out of the first tuple field
    Status s = (Status) input.getValue(0);
    try {
        pst = con.prepareStatement("INSERT INTO " + db + " (tweet) VALUES (?);");
        pst.setString(1, s.toString());
        // execute the SQL
        pst.executeUpdate();
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        if (pst != null) { try { pst.close(); } catch (SQLException e) { e.printStackTrace(); } }
        if (con != null) { try { con.close(); } catch (SQLException e) { e.printStackTrace(); } }
    }
}
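One thing worth noting about the bolt above: it opens a fresh JDBC connection on every tuple, and each close() needs its own guarded try/catch. On Java 7+, try-with-resources gives the same close-on-all-paths guarantee with far less boilerplate. Here is a minimal, self-contained sketch of that guarantee; FakeResource is a made-up stand-in for a real Connection/PreparedStatement, since no database is assumed:

```java
// Sketch: try-with-resources closes every resource even when the body throws,
// replacing the nested try/catch/finally close boilerplate in the bolt above.
// FakeResource is a hypothetical stand-in for a JDBC Connection/PreparedStatement.
public class TryWithResourcesDemo {
    static int closed = 0;

    static class FakeResource implements AutoCloseable {
        @Override
        public void close() {
            closed++;
        }
    }

    public static void main(String[] args) {
        try (FakeResource con = new FakeResource();
             FakeResource pst = new FakeResource()) {
            throw new RuntimeException("simulated SQLException");
        } catch (RuntimeException e) {
            // exception handled; both resources were still closed
        }
        System.out.println("closed=" + closed); // prints "closed=2"
    }
}
```

Separately, moving the getConnection call into the bolt's prepare() method and reusing one connection across tuples would avoid paying connection-setup cost for every tweet.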
I didn't even realize that was an option for the JVM. Thanks. I'll do that and be back if I have any problems or need any help.
On Thursday, January 10, 2013 1:58:04 PM UTC-5, Michael Rose wrote:
Have you added -XX:+HeapDumpOnOutOfMemoryError to your JVM options? Once it OOMs with that, you'll have a heap dump to work with and can run jhat over it to explore your object graph. Alternately, you can attach a profiler like YourKit to your running topology and watch your object allocations.

On Thursday, January 10, 2013 at 11:55 AM, Chris Maness wrote:
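For anyone finding this later, the flags Michael describes go on the java command line. The paths and class name below are illustrative, not from the original topology:

```shell
# Write the heap to a file when the JVM runs out of memory
java -Xmx4g \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/tmp/topology.hprof \
     -cp topology.jar com.example.MyTopology   # class name is hypothetical

# Explore the dump's object graph with jhat (serves a web UI, port 7000 by default)
jhat /tmp/topology.hprof
```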
Hello all, I have a very simple topology that takes in tweets from the 1% sample and stores them in a MySQL database. My problem is that when I let the application run overnight, it runs out of heap space. No matter how much heap I allocate using the -Xmx argument, it fills up after running for about 24 hours or longer (I've tried up to 16GB). I use the code for the Twitter sample spout that's included in the Storm sample code, and for the time being I'm running the topology from Eclipse until I get the bugs worked out. For the MySQL bolt I have the following code:
public void execute(Tuple input, BasicOutputCollector collector) {
    try {
        con = connector.getConnection(sqlDBUrl, sqlUser, sqlPassword);
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    } catch (SQLException e) {
        e.printStackTrace();
    }
    PreparedStatement pst = null;
    // parse out the Status object from the first tuple
    Status s = (Status) input.getValue(0);
    try {
        pst = con.prepareStatement("INSERT INTO " + db + " (tweet) VALUES (?);");
        pst.setString(1, s.toString());
        // execute the SQL
        pst.executeUpdate();
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        if (pst != null) { try { pst.close(); } catch (SQLException e) { e.printStackTrace(); } }
        if (con != null) { try { con.close(); } catch (SQLException e) { e.printStackTrace(); } }
    }
}
Same problem here-- how did you resolve this?
On Monday, January 14, 2013 7:41:39 AM UTC-8, Chris Maness wrote:

So after poking around with the Eclipse Memory Analyzer, I see that java.util.concurrent.LinkedBlockingQueue is the root of my problem. Why is this taking up so much room in memory? I'm guessing this is the queue that serves the tuples up to the next bolt.
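To illustrate why such a queue can eat the whole heap: a no-arg LinkedBlockingQueue is effectively unbounded (its capacity is Integer.MAX_VALUE), so if the producer side emits faster than the consumer drains, enqueued items simply accumulate. A small sketch of the difference between the unbounded default and a bounded queue:

```java
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of why an unbounded LinkedBlockingQueue can dominate the heap:
// the no-arg constructor uses capacity Integer.MAX_VALUE, so offer() never
// rejects and the queue accumulates whatever the consumer can't keep up with.
// A bounded queue makes the producer back off (or block on put()) instead.
public class QueueGrowthDemo {
    public static void main(String[] args) {
        LinkedBlockingQueue<String> unbounded = new LinkedBlockingQueue<>();
        System.out.println(unbounded.remainingCapacity()); // prints 2147483647

        LinkedBlockingQueue<String> bounded = new LinkedBlockingQueue<>(2);
        System.out.println(bounded.offer("a")); // true
        System.out.println(bounded.offer("b")); // true
        System.out.println(bounded.offer("c")); // false: queue full
    }
}
```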
--
You received this message because you are subscribed to a topic in the Google Groups "storm-user" group.