I issued the drop user command as above. I know this operation can take
some time to complete, but can anyone tell me how long? Is there a way to
check whether Oracle is really doing something or whether it hangs under
Solaris? Top doesn't show much activity on the CPU or the disks; is that
OK?
Thanks
Tom
The time needed depends on the number of objects to be dropped and,
above all, the number of extents used.
One easy way to check is to run the following repeatedly from a different
session:

select object_type, count(*)
from dba_objects
where owner = '<user_to_be_dropped>'
group by object_type
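Since it is above all the extents that determine the time, you could also
watch this count go down from the other session (same placeholder schema
as above):

select count(*)
from dba_extents
where owner = '<user_to_be_dropped>'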
Hth
Sybrand Bakker, Senior Oracle DBA
To reply remove -verwijderdit from my e-mail address
I ran that query half an hour ago and still have no response...
Is that normal?
It isn't. There must be something seriously wrong with your
configuration.
Without further clues, however, I can't help you any further. It could
really be anything.
Regards
What would you check first, as a senior Oracle DBA?
Database configuration, which means
- how much memory in the system
- settings of db_block_buffers
- shared_pool_size
- any other default init.ora setting that should always be changed
(there are too many to mention them all)
- number of disk drives used.
- contents of the alert_<sid>.log
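If it helps, the current values of the obvious candidates can be pulled
with a query along these lines:

select name, value
from v$parameter
where name in ('db_block_size', 'db_block_buffers', 'shared_pool_size',
               'log_buffer', 'sort_area_size')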
Hth
Then I would pick one segment, and drop that one segment.
Then I would repeat for another segment.
If individual segment drops are working correctly, then there's a good
chance that a particular segment is throwing a wobbly, and that might be
because it has a bazillion extents (select segment_name, count(*) from
dba_extents group by owner having owner='FRED'; to check). It could also be
that you're running into trouble performing the recursive SQL needed by
Oracle to work out what to drop, and maybe a bigger shared pool might help
that. It could also be half a dozen other things, I'm afraid.
I've been known to drop 300 tables individually, successfully, where a drop
user cascade failed to work.
Regards
HJR
"Tom Miskiewicz" <miski...@no.spam.yahoo.com> wrote in message
news:Gl_p9.4$Ih3....@news.ecrc.de...
> - how much memory in the system
2GB
> - settings of db_block_buffers
db_block_buffers = 75941
> - shared_pool_size
shared_pool_size = 207370444
> - any other default init.ora setting that should always be changed
> (there are too many to mention them all)
Where can I read about that, then?
> - number of disk drives used.
2
> - contents of the alert_<sid>.log
Fri Oct 11 11:43:11 2002
Errors in file
/export/home/oracle/admin/arstest1/bdump/arstest1_ora_4490.trc:
ORA-07445: exception encountered: core dump [kcfcbkq()+156] [SIGSEGV]
[Address not mapped to object]
[2972260692] [] []
As long as I'm dropping e.g. indexes one by one, it keeps going. Slow, but
working.
As soon as I select about 40, I can wait forever. Nothing happens. Help,
please!
> If individual segment drops are working correctly, then there's a good
> chance that a particular segment is throwing a wobbly, and that might be
> because it has a bazillion extents (select segment_name, count(*) from
> dba_extents group by owner having owner='FRED'; to check).
This doesn't work on my box either.
Your query gives me:
SQL> select segment_name, count(*) from dba_extents
2 group by owner having owner='ARADMIN';
select segment_name, count(*) from dba_extents
*
ERROR at line 1:
ORA-00979: not a GROUP BY expression
> that you're running into trouble performing the recursive SQL needed by
> Oracle to work out what to drop, and maybe a bigger shared pool might help
> that. It could also be half a dozen other things, I'm afraid.
> I've been known to drop 300 tables individually, successfully, where a drop
> user cascade failed to work.
Thanks
Tom
Peter
"Tom Miskiewicz" <miski...@no.spam.yahoo.com> schreef in bericht
news:mbfq9.13$CU4....@news.ecrc.de...
Depending on the block size of the database (parameter db_block_size,
usually 8k), your setting of db_block_buffers looks way too high (some
560 M) and you also have some 200M shared pool. You must be running
out of virtual memory, and this suspicion is underpinned by the core
dumps you are getting. The guideline of Oracle is to limit the cache
to one third of *physical* RAM, in your case some 600M.
You may also suffer from an insufficient log_buffer parameter, an
insufficient sort_area_size parameter (the default is 64k, which is
just, ahem, ridiculous), and incorrectly sized online redolog files
(in this particular case you need to check for 'checkpoint not
complete' and 'cannot allocate log, archival required' messages in
your alert log).
Are the affected objects in a locally managed tablespace or a
dictionary managed tablespace? In the latter case, you will be getting
so-called space transactions (which involve updating sys.uet$ and
sys.fet$) using the ST lock, of which there can be only one.
Is there any chance you can blow away the tablespace instead of the
user (after exporting, of course), i.e. using 'drop tablespace including
contents'? This won't look at the schema level; it will 'just do it'.
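As a sketch (the tablespace name below is only a placeholder, and you
should first make sure nothing outside the schema lives in those
tablespaces):

select distinct tablespace_name
from dba_segments
where owner = 'ARADMIN';

drop tablespace aradmin_data including contents;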
>Your query gives me:
>
>SQL> select segment_name, count(*) from dba_extents
> 2 group by owner having owner='ARADMIN';
>select segment_name, count(*) from dba_extents
> *
>ERROR at line 1:
>ORA-00979: not a GROUP BY expression
should be corrected to
select segment_name, count(*)
from dba_extents
where owner = 'ARADMIN'
group by segment_name
Hth
> your setting of db_block_buffers looks way too high (some
> 560 M) and you also have some 200M shared pool. You must be running
> out of virtual memory, and this suspicion is underpinned by the core
> dumps you are getting. The guideline of Oracle is to limit the cache
> to one third of *physical* RAM, in your case some 600M
> You may also suffer from an insufficient log_buffer parameter, an
> insufficient sort_area_size parameter (the default is 64k, which is
> just, ahem, ridiculous), and incorrectly sized online redolog files
> (in this particular case you need to check for 'checkpoint not
> complete' and 'cannot allocate log, archival required' messages in
> your alert log).
In this case, could you please recommend something that I can try:
db_block_buffers = ?
shared_pool_size = ?
sort_area_size = ?
incorrectly sized online redolog files = set what, where?
> Are the affected objects in a locally managed tablespace or a
> dictionary managed tablespace? In the latter case, you will be getting
> so-called space transactions (which involves updating sys.uet$ and
> sys.fet$) using the ST lock, of which there can be only one.
How do I check that?
Thanks in advance!
Tom
This is an internal (read: unhandled) error and as such should be reported to
Oracle support. Like Sybrand I'd suspect that you are running out of RAM,
but this should be logged as a support call.
From the docs
ORA-07445 exception encountered: core dump [string] [string] [string]
[string] [string] [string]
Cause: An operating system exception occurred which should result in the
creation of a core file. This is an internal error.
Action: Contact Oracle Customer Support.
If you want to try reducing the memory allocation, drop your
db_block_buffers parameter to (say) 25000, or one third of your current
value.
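That would be something along these lines in your init<sid>.ora (just an
example; the instance has to be restarted for it to take effect):

# roughly 200M of buffer cache with an 8k block size
db_block_buffers = 25000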
HTH
--
Niall Litchfield
Oracle DBA
Audit Commission UK
*****************************************
Please include version and platform
and SQL where applicable
It makes life easier and increases the
likelihood of a good answer
******************************************
I did that and restarted the server, but the problems persist.
Sorry. It should of course have read exactly as you say.
Regards
HJR
"Sybrand Bakker" <gooid...@sybrandb.demon.nl> wrote in message
news:qe0jqu0kda6ri08if...@4ax.com...
It's not really that difficult. You get the name of the segment from the
first query I gave you, and then you issue the command:
drop table fredtbl cascade constraints;
drop index blah;
drop cluster foo;
...or whatever other segment types you have listed.
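If there are a lot of them, you can let SQL*Plus generate the drop script
for you. A rough sketch for the tables (ARADMIN and the file name are only
examples, and you should review the spooled file before running it):

set pagesize 0 heading off feedback off verify off
spool drop_aradmin_tables.sql
select 'drop table ARADMIN.' || table_name || ' cascade constraints;'
from dba_tables
where owner = 'ARADMIN';
spool off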
HJR
BTW, I'm currently trying to drop the tablespace. It's 250MB,
but Enterprise Manager has been thinking for two hours already...
Can someone tell me what the heck is going on with this box?
Thanks
Tom
>On Sun, 13 Oct 2002 16:12:54 +0200, "Tom Miskiewicz"
><miski...@no.spam.yahoo.com> wrote:
>
>>Your query gives me:
>>
>>SQL> select segment_name, count(*) from dba_extents
>> 2 group by owner having owner='ARADMIN';
>>select segment_name, count(*) from dba_extents
>> *
>>ERROR at line 1:
>>ORA-00979: not a GROUP BY expression
>
>should be corrected to
>select segment_name, count(*)
>from dba_extents
>where owner = 'ARADMIN'
>group by segment_name
Why not:
select segment_name, extents
from dba_segments
where owner = 'ARADMIN';
>> Depending on the block size of the database (parameter db_block_size,
>> usually 8k)
>Yes, it's 8k
>
>> your setting of db_block_buffers looks way too high (some
>> 560 M) and you also have some 200M shared pool. You must be running
>> out of virtual memory, and this suspicion is underpinned by the core
>> dumps you are getting. The guideline of Oracle is to limit the cache
>> to one third of *physical* RAM, in your case some 600M
>> You may also suffer from an insufficient log_buffer parameter, an
>> insufficient sort_area_size parameter (the default is 64k, which is
>> just, ahem, ridiculous), and incorrectly sized online redolog files
>> (in this particular case you need to check for 'checkpoint not
>> complete' and 'cannot allocate log, archival required' messages in
>> your alert log).
>In this case, could you please recommend something that I can try:
>
>db_block_buffers = ?
I would go for 25000 or 12500
>shared_pool_size = ?
80 M should be sufficient
>sort_area_size = ?
1048576 (1M)
>incorrectly sized online redolog files = set what, where?
>
select * from v$log will show you the size.
You'll have to issue 'alter database add logfile group' and 'alter
database drop logfile group' statements to resolve that, with
interspersed 'alter system switch logfile' statements; you can't change
the size of an existing log file online.
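A sketch of the sequence (group numbers, file names and sizes below are
only examples):

select group#, bytes, status from v$log;

alter database add logfile group 4
  ('/u01/oradata/arstest1/redo04.log') size 50m;
alter database add logfile group 5
  ('/u01/oradata/arstest1/redo05.log') size 50m;

-- switch (and checkpoint) until none of the old groups is CURRENT or ACTIVE
alter system switch logfile;
alter system checkpoint;

alter database drop logfile group 1;
-- then remove the old file at OS level and repeat for the other old groups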
>
>> Are the affected objects in a locally managed tablespace or a
>> dictionary managed tablespace? In the latter case, you will be getting
>> so-called space transactions (which involves updating sys.uet$ and
>> sys.fet$) using the ST lock, of which there can be only one.
>How do I check that?
>
select * from v$lock will definitely show you any ST lock (look at the
TYPE column).
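And to see directly whether the tablespaces involved are dictionary
managed or locally managed, something like this should tell you (ARADMIN
taken from your earlier posts):

select tablespace_name, extent_management
from dba_tablespaces
where tablespace_name in (select distinct tablespace_name
                          from dba_segments
                          where owner = 'ARADMIN')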
>Thanks in advance!
>
>Tom
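Here is a small SQL*Plus script you can run while the drop appears to
hang; it shows whether timed_statistics is on and then what each session
is actually waiting on (v$session joined to v$session_wait):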
col name form a35
col value form a20
set head off
set feedback off
select ' '||upper(name)name,value
from v$parameter
where name = 'timed_statistics'
/
set feedback on
set head on
col event form a13 trunc head "Event| Waiting For"
col p2 form 99999 trunc
col wait_time form 999 trunc head "Last|Wait|Time"
col program form a13 trunc
col command form a7 trunc head "Command"
col username form a8 trunc
col state form a10
col sid form 999 trunc
col last_call_et form 9999999 trunc head "Last Call|In Secs"
select a.sid,username,b.program program,
decode(command,0,'None',2,'Insert',3,'Select',
6,'Update',7,'Delete',10,'Drop Index',12,'Drop Table',
45,'Rollback',47,'PL/SQL',command) command,
last_call_et,
event,p1,p2,state,wait_time
from v$session_wait a,V$session b
where b.sid=a.sid
order by 1
/
regards
Srivenu
"Tom Miskiewicz" <miski...@no.spam.yahoo.com> wrote in message
news:0piq9.7$Xn5....@news.ecrc.de...
Check the number of extents, particularly if it's a Dictionary Managed
Tablespace. Last week I had to drop a user who'd managed to have close
to 50,000 extents; a single segment had over 20,000. It took Oracle
a few hours to do the processing for that. Converting that instance
(fortunately just a prototyping instance) to LMT with uniform extents
is on my to-do list :-).
--
Andrew Mobbs - http://www.chiark.greenend.org.uk/~andrewm/