Oracle PL/SQL: Bulk Processing with BULK COLLECT and FORALL
Bulk processing techniques in Oracle PL/SQL can significantly improve the performance of SQL operations by minimizing context switches between the PL/SQL and SQL engines. The two primary methods for achieving this are:

- BULK COLLECT: Retrieves multiple rows with a single fetch, improving data retrieval speed.
- FORALL: Performs INSERT, UPDATE, and DELETE operations using collections to affect multiple rows at once (a sketch combining FORALL with BULK COLLECT appears after the chunking example below).

PL/SQL is so tightly integrated with the SQL language that you might wonder why special features are needed to improve the performance of SQL statements inside PL/SQL. The explanation has everything to do with how the runtime engines for PL/SQL and SQL communicate with each other: through a context switch.
Source: Mukesh K Suthar on Oracle PL/SQL: Bulk Processing with BULK COLLECT and FORALL
BULK COLLECT
Bulk binds can improve performance when loading collections from a query. The BULK COLLECT INTO construct binds the output of the query to the collection. To test this, create the following table.
CREATE TABLE bulk_collect_test AS SELECT owner, object_name, object_id FROM all_objects;
The following code compares the time taken to populate a collection manually and using a bulk bind.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab   t_bulk_collect_test_tab := t_bulk_collect_test_tab();
  l_start NUMBER;
BEGIN
  -- Time a regular population.
  l_start := DBMS_UTILITY.get_time;

  FOR cur_rec IN (SELECT *
                  FROM   bulk_collect_test)
  LOOP
    l_tab.extend;
    l_tab(l_tab.last) := cur_rec;
  END LOOP;

  DBMS_OUTPUT.put_line('Regular (' || l_tab.count || ' rows): ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk population.
  l_start := DBMS_UTILITY.get_time;

  SELECT *
  BULK COLLECT INTO l_tab
  FROM   bulk_collect_test;

  DBMS_OUTPUT.put_line('Bulk (' || l_tab.count || ' rows): ' ||
                       (DBMS_UTILITY.get_time - l_start));
END;
/

Regular (42578 rows): 66
Bulk (42578 rows): 4

PL/SQL procedure successfully completed.
Remember that collections are held in memory, so doing a bulk collect from a large query could cause a considerable performance problem. In practice you would rarely do a straight bulk collect in this manner. Instead, you would limit the rows returned using the LIMIT clause and process the data in smaller chunks. This gives you the benefits of bulk binds without hogging all the server memory. The following code shows how to chunk through the data in a large table.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab t_bulk_collect_test_tab;

  CURSOR c_data IS
    SELECT *
    FROM   bulk_collect_test;
BEGIN
  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 10000;
    EXIT WHEN l_tab.count = 0;

    -- Process contents of collection here.
    DBMS_OUTPUT.put_line(l_tab.count || ' rows');
  END LOOP;
  CLOSE c_data;
END;
/

10000 rows
10000 rows
10000 rows
10000 rows
2578 rows

PL/SQL procedure successfully completed.
So we can see that with a LIMIT 10000 we were able to break the data into chunks of 10,000 rows, reducing the memory footprint of our application, while still taking advantage of bulk binds. The array size you pick will depend on the width of the rows you are returning and the amount of memory you are happy to use.
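The "-- Process contents of collection here" placeholder in the chunking example above is typically where FORALL comes in: each fetched chunk can be written out with a single bulk DML statement. The following is a minimal sketch of that pattern, assuming a hypothetical archive table bulk_collect_archive with the same structure as bulk_collect_test; it is illustrative rather than taken from the original article.

CREATE TABLE bulk_collect_archive AS
SELECT *
FROM   bulk_collect_test
WHERE  1 = 0;

DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab t_bulk_collect_test_tab;

  CURSOR c_data IS
    SELECT *
    FROM   bulk_collect_test;
BEGIN
  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 10000;
    EXIT WHEN l_tab.count = 0;

    -- Insert the whole chunk with a single FORALL statement.
    -- bulk_collect_archive is a hypothetical table created above for this sketch.
    FORALL i IN l_tab.first .. l_tab.last
      INSERT INTO bulk_collect_archive
      VALUES l_tab(i);
  END LOOP;
  CLOSE c_data;

  COMMIT;
END;
/

Because FORALL passes the whole collection to the SQL engine in one go, each chunk costs a single context switch for the INSERT, mirroring the saving that BULK COLLECT gives you on the fetch side.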
From Oracle 10g onward, the optimizing PL/SQL compiler converts cursor FOR LOOPs into BULK COLLECTs with an array size of 100. The following example compares the speed of a regular cursor FOR LOOP with BULK COLLECTs using varying array sizes.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab t_bulk_collect_test_tab;

  CURSOR c_data IS
    SELECT *
    FROM   bulk_collect_test;

  l_start NUMBER;
BEGIN
  -- Time a regular cursor for loop.
  l_start := DBMS_UTILITY.get_time;

  FOR cur_rec IN (SELECT *
                  FROM   bulk_collect_test)
  LOOP
    NULL;
  END LOOP;

  DBMS_OUTPUT.put_line('Regular   : ' || (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 10.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 10;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 10  : ' || (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 100.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 100;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 100 : ' || (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 1000.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 1000;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 1000: ' || (DBMS_UTILITY.get_time - l_start));
END;
/

Regular   : 18
LIMIT 10  : 80
LIMIT 100 : 15
LIMIT 1000: 10

PL/SQL procedure successfully completed.
You can see from this example that the performance of a regular FOR LOOP is comparable to a BULK COLLECT using an array size of 100. Does this mean you can forget about BULK COLLECT from 10g onward? In my opinion, no. I think it makes sense to have control of the array size. If you have very small rows, you might want to increase the array size substantially. If you have very wide rows, 100 may be too large an array size.
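One way to keep that control explicit is to drive the fetch size from a single constant (or a parameter), so it can be tuned in one place as row widths and memory budgets change. The following is a minimal sketch of this idea; the value 100 is just an assumed starting point, not a recommendation from the original article.

DECLARE
  -- Assumed starting point; tune to your row width and memory budget.
  c_limit CONSTANT PLS_INTEGER := 100;

  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab t_bulk_collect_test_tab;

  CURSOR c_data IS
    SELECT *
    FROM   bulk_collect_test;
BEGIN
  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT c_limit;
    EXIT WHEN l_tab.count = 0;

    -- Process the chunk here.
    NULL;
  END LOOP;
  CLOSE c_data;
END;
/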