Development Refresh - When to Re-Create LFs?



As disk space becomes more of a concern, I have been tasked with overhauling a client's Development environment refresh program.
What I have done is create one UDC table of files that can be completely cleared and another UDC table of files that need a specific amount of history kept.
A processing option is populated with a date, which determines how far back to keep data in every file listed in the second UDC table.

Now for the logic on how to keep the historical data for a file (e.g. F0911) in Dev:
1. Create a duplicate object in another library (QTEMP) without any data.
2. Use the copy file command (CPYF) to copy records where the keyed field is *GE the processing option's historical date.
3. Run Display Database Relations (DSPDBR) to an outfile, then delete all the logical files listed in that outfile.
4. Clear the physical file in the original data library.
5. Run an SQL statement to insert the records from the QTEMP file back into the original data file.
6. Delete the QTEMP file.
7. Submit a job to re-create the logical files from the DSPDBR outfile. (I made this a SBMJOB so as not to hold up the main job.)
8. Get the next file from the UDC table and loop back to step 1.
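For reference, the per-file loop above might be sketched in CL roughly like this. All of the names here are placeholders I've assumed for illustration: a data library DEVDTA, F0911's G/L date field GLDGJ, a CL variable &CUTDT already holding the cutoff date from the processing option, and a program RECRTLF that rebuilds the logicals from the DSPDBR outfile:

```cl
/* 1. Duplicate the object into QTEMP without data                  */
CRTDUPOBJ  OBJ(F0911) FROMLIB(DEVDTA) OBJTYPE(*FILE) +
           TOLIB(QTEMP) DATA(*NO)

/* 2. Copy only records at or after the historical cutoff date      */
CPYF       FROMFILE(DEVDTA/F0911) TOFILE(QTEMP/F0911) +
           MBROPT(*REPLACE) INCREL((*IF GLDGJ *GE &CUTDT))

/* 3. Capture the dependent logicals to an outfile, then delete     */
/*    each one (read QTEMP/DBROUT and DLTF each dependent LF)       */
DSPDBR     FILE(DEVDTA/F0911) OUTPUT(*OUTFILE) +
           OUTFILE(QTEMP/DBROUT)

/* 4. Clear the physical file in the original data library          */
CLRPFM     FILE(DEVDTA/F0911)

/* 5. Copy the kept records back                                    */
RUNSQL     SQL('INSERT INTO DEVDTA/F0911 +
                SELECT * FROM QTEMP/F0911') +
           NAMING(*SYS) COMMIT(*NONE)

/* 6. Delete the QTEMP copy                                         */
DLTF       FILE(QTEMP/F0911)

/* 7. Rebuild the logicals in a separate job so the main loop can   */
/*    move on to the next file                                      */
SBMJOB     CMD(CALL PGM(RECRTLF) PARM('F0911')) JOB(RECRTLF)
```

This is only a sketch of the sequence, not a tested program; in practice the CPYF selection and the DBROUT processing would be built dynamically per file.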

My question: would it be faster to re-create the logical files right after the physical file is cleared, and then SQL-insert the data back into the physical file (so the access paths are maintained during the insert)? Or should I keep it the way I have it, with the logical files being created in a separate submitted job so as not to hold up the main job looping through the UDC table of files?

Any help or input is appreciated!

IMHO (after making your two system backups), it would be simpler to use SQL to delete the records in the original files that don't meet your preservation criteria, then use RGZPFM to remove the deleted records from the PF. No need to delete the logicals.
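For comparison, the delete-in-place approach could be sketched like this, again with assumed placeholder names (DEVDTA library, F0911's G/L date field GLDGJ, and a pre-computed JDE Julian cutoff value such as 121001):

```cl
/* Delete everything older than the cutoff, then reorganize to     */
/* reclaim the space. The logical files are left alone; their      */
/* access paths are maintained during the delete, which is part    */
/* of why this can run slower on very large files.                 */
RUNSQL  SQL('DELETE FROM DEVDTA/F0911 WHERE GLDGJ < 121001') +
        NAMING(*SYS) COMMIT(*NONE)
RGZPFM  FILE(DEVDTA/F0911)
```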
Sorry, I forgot to include a piece of critical info: the client usually keeps only about 2-3 years of historical data in the Dev environments (and they have about 8 or so Dev environments), even though the data goes back to 1995.
My first version of the Dev refresh did exactly what you describe, but deleting that many records and then reorganizing was taking a long time, which is why I tried the method I posted above.
I guess it's just a matter of figuring out the most efficient method for the large amount of data in play, considering it is a two-step (and two-system-backup) process either way.