asdf13 asked:
OS400 PF: how to reduce unused space in a BLOB field.
Actual issue: there is a PF with 5,000,000 records (record length 1,000 bytes, but with a BLOB field). The PF occupies about 1.5 TB (!)
A deletion job deletes older records periodically. REUSE is set to *YES.
Problem: the BLOB field size is not reduced, even though new records contain smaller documents (because of an optimized application) than the old, deleted records did.
Theoretical solution: set the PF to REUSE(*NO), write new records (with smaller documents and thus smaller BLOB sizes) to new disk space, and then run RGZPFM to remove the deleted records (with their larger BLOBs), which may reduce the file size.
But there is not enough disk space for an RGZPFM.
Is there any other way to reduce unused BLOB space in a BLOB field?
(OS400 with V7.3)
ASKER CERTIFIED SOLUTION
Here is an example of an RPG program to extract BLOBs to a stream file. Is that what you are looking for?
https://www.mcpressonline.com/programming/rpg/techtip-blobs-dont-run-embrace-them-with-rpg
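A similar extraction can also be sketched in plain SQL, assuming your 7.3 Technology Refresh level ships the QSYS2.IFS_WRITE_BINARY procedure (verify availability on your system before relying on it); the library, table, and column names below are invented for illustration:

```sql
-- Sketch: write one BLOB out to a stream file in the IFS.
-- Requires the IFS_WRITE_BINARY SQL service (later 7.3 TRs);
-- MYLIB.DOCS / DOC_BLOB / DOC_ID are placeholder names.
BEGIN
  DECLARE V_DOC BLOB(2G);

  SELECT DOC_BLOB INTO V_DOC
    FROM MYLIB.DOCS
   WHERE DOC_ID = 1;

  CALL QSYS2.IFS_WRITE_BINARY(
         PATH_NAME   => '/home/export/doc_1.pdf',
         LINE        => V_DOC,
         OVERWRITE   => 'REPLACE',
         END_OF_LINE => 'NONE');   -- no line ending: raw binary
END;
```

This avoids writing an RPG program for one-off extractions, at the cost of requiring a newer TR level.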
ASKER
Hello,
yes, the example is helpful for understanding how BLOB fields are handled with RPG/SQL.
My concern, from a system perspective, is how to make those 1.5 TB "monster files" smaller . . .
A question regarding the IBM design:
Why does the BLOB field size count as the real document size, when only a pointer points to the BLOB field space? This causes an issue, because the maximum size of a simple PF member is about 1.7 TB . . . (without a partitioned database)
All of the data resides in the physical file object: fixed fields and LOBs (BLOBs, CLOBs).
Physical files have a "data space", which contains the fixed-length portion of each record, including a pointer for each LOB field in the record.
Physical files also have an "auxiliary space", which contains the LOBs (and possibly some or all of the variable-length fields). The pointer in the data space points to a location in this "aux space".
Keyed physical files also have an access path - a built-in index.
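The split described above can be inspected from SQL via the QSYS2.SYSPARTITIONSTAT catalog view; this is a sketch, where MYLIB/MYPF are placeholders and the exact column set should be verified on your release:

```sql
-- How big is the member, and how many deleted-record
-- slots (reusable under REUSE *YES) does it still hold?
SELECT TABLE_SCHEMA, TABLE_NAME,
       NUMBER_ROWS,           -- active records
       NUMBER_DELETED_ROWS,   -- deleted-record slots awaiting reuse
       DATA_SIZE              -- bytes attributed to the member's data
  FROM QSYS2.SYSPARTITIONSTAT
 WHERE TABLE_SCHEMA = 'MYLIB'
   AND TABLE_NAME   = 'MYPF';
```

A large NUMBER_DELETED_ROWS with a large DATA_SIZE is the symptom described in the question: the space is held by the object even though the records are logically gone.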
Strategies for avoiding this problem:
1) SIMPLY DON'T STORE LOBS IN DATABASE FILES! For a small table, no problem. But for a table that could grow large, this often isn't the most efficient way to manage this type of data.
Instead, store them as stream files (XML, JPG, etc) in the IFS, and store the path in a text field, or store the path in a DATALINK field, or store them in Amazon S3 (or other local store or cloud-based storage). Additional benefits often apply: for example, no need to extract and save to the OS file system to use in web pages and other applications that can directly use the stream files.
2) If you can't remove them, then consider compressing LOBs before storing them, and decompress them when you need to use them. Trim leading and trailing spaces, then compress data using a user-defined function to reduce storage utilization.
https://www.mcpressonline.com/analytics-cognitive/db2/techtip-compress-large-object-lob-data-in-db2-for-i
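Both strategies can be sketched in SQL. Everything here is illustrative: MYLIB, the DOCS tables, and the COMPRESS/DECOMPRESS functions are invented names (the UDFs are not shipped with Db2 for i and would have to be built, e.g. along the lines of the linked article):

```sql
-- Strategy 1: keep only the IFS path in the table;
-- the document itself lives as a stream file in the IFS.
CREATE TABLE MYLIB.DOCS_IFS (
  DOC_ID   INTEGER       NOT NULL PRIMARY KEY,
  DOC_PATH VARCHAR(1000) NOT NULL   -- e.g. '/docs/2023/invoice_0001.pdf'
);

-- Strategy 2: store the LOB compressed, via hypothetical
-- user-defined functions; :document is a host variable.
INSERT INTO MYLIB.DOCS (DOC_ID, DOC_BLOB)
  VALUES (1, MYLIB.COMPRESS(:document));

-- Decompress on the fly when the document is needed.
SELECT DOC_ID, MYLIB.DECOMPRESS(DOC_BLOB)
  FROM MYLIB.DOCS
 WHERE DOC_ID = 1;
```

With strategy 2 the table still holds the data, but for document formats that compress well the aux-space footprint can shrink considerably.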
ASKER
Is there any common tool (besides RPG IV/SQL) to get information from a BLOB field defined in an SQL-created PF, or to extract the stored BLOB documents themselves?