
How to solve Tablespace Full Issue?

Updated Jun 06, 2018

Hi All,
I have a tablespace full issue in my environment.
Some are saying adding datafiles is good, some are saying resizing datafiles is good. I'm totally stuck and unable to choose which way to go.
Please suggest which approach is better and why.


Comments

  • 17 Mar 2015 3:27 pm Nitesh Singh

    You should consider setting a maximum size limit for your datafiles. Once a file reaches its limit, add another datafile.
    This helps with good management in most environments.
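
    A sketch of what that could look like in Oracle (the tablespace name and file paths here are made up for illustration):

    ```sql
    -- Cap the existing datafile at 8 GB; it grows in 256 MB steps until then.
    ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf'
      AUTOEXTEND ON NEXT 256M MAXSIZE 8G;

    -- Once it is full, add another datafile with the same cap.
    ALTER TABLESPACE users
      ADD DATAFILE '/u01/oradata/ORCL/users02.dbf'
      SIZE 1G AUTOEXTEND ON NEXT 256M MAXSIZE 8G;
    ```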

  • 17 Mar 2015 3:27 pm Abhijeet Mudgal

    Find out which tables occupy the most space, and check whether you have any backup tables (if they're not needed, consult with your PM before dropping them). More space is more money.
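
    One way to find the biggest consumers, run as a DBA user (the tablespace name 'USERS' is just an example):

    ```sql
    -- Top 10 segments by size in the full tablespace.
    SELECT *
    FROM  (SELECT owner, segment_name, segment_type,
                  ROUND(bytes / 1024 / 1024) AS size_mb
           FROM   dba_segments
           WHERE  tablespace_name = 'USERS'
           ORDER  BY bytes DESC)
    WHERE ROWNUM <= 10;
    ```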

  • 17 Mar 2015 3:28 pm Romil Tripathi

    Add data file and chill

  • 17 Mar 2015 3:29 pm Chandan Singh Parihar

    If it is a smallfile tablespace, each datafile can be added or resized up to 32 GB (with an 8 KB block size), so check whether the existing datafile has already reached 32 GB; if it has, add a new datafile.
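
    A sketch of that check and the two options (names and paths are illustrative):

    ```sql
    -- Check current datafile sizes for the tablespace.
    SELECT file_name,
           ROUND(bytes / POWER(1024, 3), 1)    AS size_gb,
           autoextensible,
           ROUND(maxbytes / POWER(1024, 3), 1) AS max_gb
    FROM   dba_data_files
    WHERE  tablespace_name = 'USERS';

    -- If the file is near the ~32 GB smallfile limit, add a new one:
    ALTER TABLESPACE users ADD DATAFILE '/u01/oradata/ORCL/users03.dbf' SIZE 4G;

    -- Otherwise it can simply be resized:
    ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' RESIZE 10G;
    ```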

  • 17 Mar 2015 3:29 pm Sonia Barwar

    Most importantly, when there is load there should be an equal ratio of purging from the application end; as DBAs we are stuck adding or resizing datafiles up to the 32 GB maximum. If we find a tablespace that keeps growing and is not being purged at all, we need to have a discussion with the app team to find out the reason. From the database end we can resize down to the high-water mark, but only once the app team approves, since they are the owners; then the problem can be solved.
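
    Once the app team signs off, shrinking back toward the high-water mark is just a RESIZE (the file path is made up here); Oracle refuses with ORA-03297 if you try to shrink below blocks that are still in use:

    ```sql
    -- Attempt to reclaim space; fails with ORA-03297 if any segment
    -- still has blocks allocated above the requested size.
    ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' RESIZE 5G;
    ```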

  • 17 Mar 2015 3:29 pm Jyoti Pandey

    Most of the time we add a datafile.

  • 17 Mar 2015 3:31 pm Sushma

    It is best practice to keep your datafiles to a maximum of 4 GB rather than extending them to their maximum capacity of 32 GB; I/O for search operations is better. In a 32 GB file your chunks of data will be more scattered than in a 4 GB file.

  • 17 Mar 2015 3:31 pm Sugandh

    Oracle searches for blocks within a datafile, and when you distribute blocks across n datafiles of 4 GB each, I/O will take longer and search latency will increase.

  • 17 Mar 2015 3:33 pm Sugandh

    Compare one 32 GB datafile to n datafiles of 4 GB each: which one will be better?

  • 17 Mar 2015 3:37 pm Sugandh

    To search blocks: one search and one I/O for a 32 GB datafile, whereas it takes n of them for n datafiles of 4 GB each. Recovery time also increases.

  • 17 Mar 2015 3:39 pm Jyoti Pandey

    No. When you have multiple files your workload is distributed. A large datafile takes longer to recover than multiple smaller datafiles.

  • 17 Mar 2015 3:39 pm Sugandh

    With large datafiles we have maintenance problems, and as for I/O, it depends on the size, the SQL, the disks... so many factors?

  • 17 Mar 2015 3:40 pm Jyoti Pandey

    When you break your tablespace into multiple datafiles spread across multiple mount points, you are bound to get improved I/O throughput.
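
    For example, placing each new datafile on a different mount point (the paths below are illustrative) so that concurrent I/O hits separate disks:

    ```sql
    -- Same tablespace, datafiles on different mount points.
    ALTER TABLESPACE users ADD DATAFILE '/u02/oradata/ORCL/users02.dbf' SIZE 4G;
    ALTER TABLESPACE users ADD DATAFILE '/u03/oradata/ORCL/users03.dbf' SIZE 4G;
    ```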

  • 05 Jun 2018 9:36 pm Guest

    I know this is an old question, but maybe this will help those facing it today in the age of Oracle 12c.

    Generally it will come down to individual preference; however, the consensus today is to grow a single file up to the lowest of the following supported maximums: your backup system, the OS hosting the database files, or the Oracle RDBMS limit for that version.

    While making use of multiple files can help with access speeds, it is only faster if each file is hosted on a separate partition/disk/etc. (think of RAID stripes). If all of the files are on the same disk then this is actually much slower, especially if the data to be accessed is spread across multiple DBFs.
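
    On the RDBMS limit: the ~32 GB figure quoted earlier in this thread assumes an 8 KB block size, since a smallfile datafile is capped at 4194303 (2^22 - 1) blocks. You can compute the cap for your own tablespace (the name 'USERS' is an example):

    ```sql
    -- Smallfile datafile byte limit depends on the tablespace block size.
    SELECT block_size,
           ROUND(block_size * 4194303 / POWER(1024, 3)) AS smallfile_max_gb
    FROM   dba_tablespaces
    WHERE  tablespace_name = 'USERS';
    ```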

    Hope this helps folks still facing this question today.

