Oracle: Updating a Big Table in Chunks

In order to improve the performance of these operations, it is therefore advisable to partition tables greater than 2GB into separate chunks. The data and indexes are divided, according to certain criteria, into separate partitions, each of which can be stored in its own tablespace. The selection criterion takes the form of a partition key, which divides the data in a large table into distinct sets of smaller chunks.
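As a minimal sketch of this idea, the following statement range-partitions a table on a DATE column used as the partition key. The table and column names (`orders`, `order_date`) and the quarterly boundaries are illustrative assumptions, not taken from the original article:

```sql
-- Hypothetical example: range-partitioning a large orders table by quarter,
-- with order_date as the partition key (names and ranges are assumptions).
CREATE TABLE orders (
  order_id    NUMBER,
  order_date  DATE,
  amount      NUMBER
)
PARTITION BY RANGE (order_date) (
  PARTITION p_2023_q1 VALUES LESS THAN (TO_DATE('2023-04-01','YYYY-MM-DD')),
  PARTITION p_2023_q2 VALUES LESS THAN (TO_DATE('2023-07-01','YYYY-MM-DD')),
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);
```

Each partition can then be placed in its own tablespace, so maintenance operations such as backups or index rebuilds can target one chunk of the data at a time.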

When the script is run against an Oracle 9i database, both bulk operations are faster than the regular cursor FOR loop. Against an Oracle 10g database, the bulk operation using an array size of 10 rows is actually slower than the cursor FOR loop, while the operation with an array size of 100 rows is slightly faster. This clearly demonstrates the implicit array processing being done by Oracle 10g.
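The original benchmark script is not shown here, but the comparison described above can be sketched as follows. The table name `big_table` and its `id` column are assumptions; the two loops contrast a plain cursor FOR loop (which 10g implicitly array-fetches) with an explicit bulk fetch at a given array size:

```sql
-- Sketch of the comparison described above (big_table and id are assumed names).
DECLARE
  TYPE t_ids IS TABLE OF big_table.id%TYPE;
  l_ids  t_ids;
  CURSOR c IS SELECT id FROM big_table;
BEGIN
  -- Regular cursor FOR loop: row-by-row in 9i; implicitly
  -- array-fetched (100 rows at a time) in 10g and later.
  FOR r IN (SELECT id FROM big_table) LOOP
    NULL;  -- process one row
  END LOOP;

  -- Explicit bulk fetch: the LIMIT value is the array size
  -- being benchmarked (10 vs 100 in the test above).
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_ids LIMIT 100;
    EXIT WHEN l_ids.COUNT = 0;
    NULL;  -- process l_ids as a batch
  END LOOP;
  CLOSE c;
END;
/
```

With an array size of 10, the explicit bulk fetch does more round trips than 10g's implicit 100-row fetching, which is why it loses to the plain FOR loop there.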

The next section shows alternative methods of limiting the data returned by bulk collections.


As table size grows due to more data, the performance of insert, update, delete and select SQL operations, as well as maintenance tasks such as taking backups, maintaining indexes and updating statistics, may take longer.

In this article, I have explained the concepts of partitioning tables and indexes in Oracle and how partitioning can be useful for large tables to improve performance and maintenance.

I have also given real-life examples of how these partitions and indexes can be created and used.

This chunking can be achieved using the LIMIT clause of the BULK COLLECT statement.
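A hedged sketch of this pattern, updating a big table one chunk at a time with BULK COLLECT ... LIMIT and FORALL (the table, column, and status values are illustrative assumptions):

```sql
-- Sketch: update big_table in 1000-row chunks (names and values assumed).
DECLARE
  CURSOR c IS SELECT rowid FROM big_table WHERE status = 'OLD';
  TYPE t_rids IS TABLE OF ROWID;
  l_rids  t_rids;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rids LIMIT 1000;  -- one chunk per iteration
    EXIT WHEN l_rids.COUNT = 0;
    FORALL i IN 1 .. l_rids.COUNT
      UPDATE big_table
         SET status = 'NEW'
       WHERE rowid  = l_rids(i);
    COMMIT;  -- commit per chunk to keep undo usage bounded
  END LOOP;
  CLOSE c;
END;
/
```

The LIMIT clause caps how many rows each fetch loads into the collection, so memory use stays flat no matter how large the table is; note that committing inside the loop trades atomicity for smaller undo, which is a design choice rather than a requirement of the syntax.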