I have a physical file that is about 600 GB in size (over 300 million records). It has four logical files with three different keyed access paths.
Each of the logical files has its own record format and does not share the record format of the physical file.
Of late, maintenance on these files has become a nightmare.
Today, when we add a new field to any physical file, these are the steps we follow (in a gist):
- Drop the existing logical files
- Rename the existing physical file
- Create a new PF object with the changes
- Copy the data from the renamed backup file
- Drop the backup
- Re-create all the logical files, with the new field added to their record formats
Although a single command executes the CL program that performs all of the above steps, the whole process takes almost a day or two to complete (since the file is 600 GB).
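To illustrate, here is a simplified sketch of that CL. The library, file, and source member names (MYLIB, MYPF, MYPFOLD, MYLF1-MYLF4, QDDSSRC) are placeholders, and the parameters are trimmed down compared with what we actually run:

    /* Simplified sketch only: object and member names are placeholders */
    PGM

        /* 1. Drop the existing logical files */
        DLTF       FILE(MYLIB/MYLF1)
        DLTF       FILE(MYLIB/MYLF2)
        DLTF       FILE(MYLIB/MYLF3)
        DLTF       FILE(MYLIB/MYLF4)

        /* 2. Rename the existing physical file so it becomes the backup */
        RNMOBJ     OBJ(MYLIB/MYPF) OBJTYPE(*FILE) NEWOBJ(MYPFOLD)

        /* 3. Create the new PF object from the changed DDS source */
        CRTPF      FILE(MYLIB/MYPF) SRCFILE(MYLIB/QDDSSRC) +
                     SRCMBR(MYPF) SIZE(*NOMAX)

        /* 4. Copy the 300+ million rows back, mapping fields by name */
        CPYF       FROMFILE(MYLIB/MYPFOLD) TOFILE(MYLIB/MYPF) +
                     MBROPT(*REPLACE) FMTOPT(*MAP *DROP)

        /* 5. Drop the backup */
        DLTF       FILE(MYLIB/MYPFOLD)

        /* 6. Re-create the logical files (new field added to their DDS) */
        CRTLF      FILE(MYLIB/MYLF1) SRCFILE(MYLIB/QDDSSRC) SRCMBR(MYLF1)
        CRTLF      FILE(MYLIB/MYLF2) SRCFILE(MYLIB/QDDSSRC) SRCMBR(MYLF2)
        CRTLF      FILE(MYLIB/MYLF3) SRCFILE(MYLIB/QDDSSRC) SRCMBR(MYLF3)
        CRTLF      FILE(MYLIB/MYLF4) SRCFILE(MYLIB/QDDSSRC) SRCMBR(MYLF4)

    ENDPGM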
I'm exploring options that might reduce the downtime when we have to make changes to this file (for example, vertically splitting it into three files joined by a surrogate key).
Another option I was exploring is using CHGPF to apply the changes directly to the physical file. Because the logical files currently don't share the record format with the physical file, using that command leaves the logical files with a different format level identifier, and the new field is unusable through the logical files.
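For reference, the CHGPF variant I tried looks roughly like this (placeholder names again), with the new field already added to the DDS source member:

    /* Apply the changed DDS to the existing file in place */
    CHGPF      FILE(MYLIB/MYPF) SRCFILE(MYLIB/QDDSSRC) SRCMBR(MYPF)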
What happens if the logical files share the same record format as the physical file? Are there any disadvantages or challenges I might face after implementing this? It would let me avoid recreating the logical files every time a new field is added, which would save a lot of downtime.
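In case it matters, I've been comparing the record format level identifiers to check whether a logical file actually shares the physical file's format (placeholder names):

    /* Compare the record format level identifiers of the PF and an LF */
    DSPFD      FILE(MYLIB/MYPF)  TYPE(*RCDFMT)
    DSPFD      FILE(MYLIB/MYLF1) TYPE(*RCDFMT)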
Please share your thoughts/inputs on this.
Also, any ideas or suggestions on maintaining large physical files would be greatly appreciated.
Looking forward to the responses.