On Thu, Jun 7, 2012 at 8:01 PM, Fabricio <fabr...@hotmail.com> wrote:
> Hi.
>
> I have this problem:
>
> I have PostgreSQL 9.1.3 and last night it crashed.
>
> This was the first error after an autovacuum (the night before last):
>
> <2012-06-06 00:59:07 MDT  814 4fceffbb.32e >LOG:  autovacuum: found orphan
> temp table "(null)"."tmpmuestadistica" in database "dbRX"
> <2012-06-06 01:05:26 MDT  1854 4fc7d1eb.73e >LOG:  could not rename
> temporary statistics file "pg_stat_tmp/pgstat.tmp" to
> "pg_stat_tmp/pgstat.stat": No such file or directory
> <2012-06-06 01:05:28 MDT  1383 4fcf0136.567 >ERROR:  tuple concurrently
> updated
> <2012-06-06 01:05:28 MDT  1383 4fcf0136.567 >CONTEXT:  automatic vacuum of
> table "global.pg_catalog.pg_attrdef"
> <2012-06-06 01:06:09 MDT  1851 4fc7d1eb.73b >ERROR:  xlog flush request
> 4/E29EE490 is not satisfied --- flushed only to 3/13527A10
> <2012-06-06 01:06:09 MDT  1851 4fc7d1eb.73b >CONTEXT:  writing block 0 of
> relation base/311360/12244_vm
> <2012-06-06 01:06:10 MDT  1851 4fc7d1eb.73b >ERROR:  xlog flush request
> 4/E29EE490 is not satisfied --- flushed only to 3/13527A10
> <2012-06-06 01:06:10 MDT  1851 4fc7d1eb.73b >CONTEXT:  writing block 0 of
> relation base/311360/12244_vm
> <2012-06-06 01:06:10 MDT  1851 4fc7d1eb.73b >WARNING:  could not write
> block 0 of base/311360/12244_vm
> <2012-06-06 01:06:10 MDT  1851 4fc7d1eb.73b >DETAIL:  Multiple failures
> --- write error might be permanent.
>
>
> Last night the startup process was terminated by signal 6:
>
> <2012-06-07 01:36:44 MDT  2509 4fd05a0c.9cd >LOG:  startup process (PID
> 2525) was terminated by signal 6: Aborted
> <2012-06-07 01:36:44 MDT  2509 4fd05a0c.9cd >LOG:  aborting startup due to
> startup process failure
> <2012-06-07 01:37:37 MDT  2680 4fd05a41.a78 >LOG:  database system
> shutdown was interrupted; last known up at 2012-06-07 01:29:40 MDT
> <2012-06-07 01:37:37 MDT  2680 4fd05a41.a78 >LOG:  could not open file
> "pg_xlog/000000010000000300000013" (log file 3, segment 19): No such file or
> directory
> <2012-06-07 01:37:37 MDT  2680 4fd05a41.a78 >LOG:  invalid primary
> checkpoint record
>
> And the only option was pg_resetxlog.
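> Something like this, with the server already down (the data directory
> path here is just an example, mine is different):
>
>     pg_resetxlog /var/lib/pgsql/9.1/data
>
> (it clears pg_xlog and writes a fresh pg_control; the -f switch forces
> it to proceed if it cannot read valid data from pg_control)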
>
> After this, a lot of queries showed this error:
> <2012-06-07 09:24:22 MDT 1306 4fd0c7a6.51a >ERROR: missing chunk number 0
> for toast value 393330 in pg_toast_2619
> <2012-06-07 09:24:31 MDT 1306 4fd0c7a6.51a >ERROR: missing chunk number 0
> for toast value 393332 in pg_toast_2619
>
> I lost some databases.
>
> I re-created the cluster with initdb and then restored the databases
> that I could still back up (for the others I restored an old backup).
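>
> Roughly, the steps were (paths and file names here are illustrative,
> not exactly what I typed):
>
>     pg_dumpall > salvage.sql               # from the broken cluster, whatever would still dump
>     initdb -D /var/lib/pgsql/9.1/data_new  # brand-new cluster in a new directory
>     psql -f salvage.sql postgres           # reload the salvaged dump into it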
>
> There was no disk-space or permissions problem, and no filesystem or disk
> errors.
>
> Can you help me figure out what happened?
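
One consolation on the "missing chunk" errors: pg_toast_2619 is the
TOAST table of pg_statistic, so those broken rows were only planner
statistics, not your table data. On a salvaged cluster they can usually
be rebuilt with something along these lines, run as superuser in each
affected database:

    psql -d dbRX -c "DELETE FROM pg_statistic;"   # throw away the damaged stats rows
    psql -d dbRX -c "ANALYZE;"                    # recompute them from scratch
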
I'd say that everything still points to a filesystem error. Have you
tried unmounting it and running an offline check?
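
If the data directory lives on ext3/ext4, that would look roughly like
this (the device and mount point are just examples, substitute your
own):

    pg_ctl stop -D /var/lib/pgsql/9.1/data   # make sure the server is down first
    umount /mnt/pgdata                       # unmount the filesystem holding the data directory
    e2fsck -f /dev/sdb1                      # force a full offline check of it
    smartctl -a /dev/sdb                     # and look at the drive's SMART status too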