Hello Everyone,
I'm hoping someone a bit more creative than me can help figure out a way to optimize this loop or get rid of it altogether!
The loop in question is part of a larger procedure whose general intent is to transform poorly formatted data into pivot-table-optimized data. The data set can exceed 75k rows, and both the order and the amount of data are always changing. The loop (which generally takes over 5 minutes to run as-is) is just one step in the procedure, but it accounts for about 90% of the total run time.
Here is what the loop does: certain rows in the data are tagged with a "Yes", indicating that the value of that row needs to be split into three groups, determined by three cell values adding up to 100%. My loop checks each row for a "Yes" and, if it finds one, creates three copies of the row at the bottom of the data (using the % values to split the amount among the copies). It then goes back and deletes the original row so the value is not double-counted and the data is optimized for a pivot table (as many individual records [rows] as possible). To clarify this description, I have attached a file with some dummy data that represents exactly what the loop does in my real file. The code is also pasted below. Any ideas on how to optimize this code would be truly appreciated!!
Thank you in advance.
(P.S. – The Timer is just there to keep track of how long this takes, and the intention is to delete columns G:L from the data at a later point.)
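For reference, here is a minimal sketch of the transformation in Python rather than VBA, with a hypothetical column layout (`id`, `amount`, `flag`, and three percentage columns). The key optimization idea it illustrates: build a new output list in a single pass instead of inserting copies and then going back to delete the originals, which avoids the repeated row deletions that dominate the run time.

```python
def split_flagged(rows):
    """Return a new row list where each row flagged "Yes" becomes three rows,
    its amount split by the three percentage columns (assumed to sum to 1.0).
    Unflagged rows pass through unchanged; no in-place deletion is needed."""
    out = []
    for rid, amount, flag, p1, p2, p3 in rows:
        if flag == "Yes":
            # Emit three copies, each carrying its share of the amount.
            for pct in (p1, p2, p3):
                out.append((rid, amount * pct, "No", pct, 0.0, 0.0))
        else:
            out.append((rid, amount, flag, p1, p2, p3))
    return out


rows = [
    ("A", 100.0, "Yes", 0.5, 0.25, 0.25),  # split into 50 / 25 / 25
    ("B", 50.0, "No", 0.0, 0.0, 0.0),      # passes through unchanged
]
result = split_flagged(rows)
```

The same single-pass approach carries over to VBA: read the whole range into a Variant array with one `Range.Value` read, build the expanded output array in memory, and write it back with one `Range.Value` assignment, rather than touching worksheet cells (or calling `Rows.Delete`) inside the loop.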
Attachment 303384