I am using the code block below to transfer spreadsheet data into Word content controls. This works in theory (and worked well with a small dataset) but has become unbelievably slow since I expanded it to this larger dataset (~45 minutes per run). Some potentially relevant context:
- The Word document with the content controls is a legacy doc within my organization and I have limited capacity to change it (e.g., I can change tags/names or other behind-the-scenes things, but other people tab through the content controls to do manual entry and I can't do anything that would break that workflow or visibly change the appearance).
- The 'dsData' dataset represents a single row from an Excel spreadsheet with 563 columns. Not every column will have a value in any particular row; a typical row has values in 200-300 columns. However, which columns have values varies a lot from row to row, so I can't base my code around predictable subgroupings of the 563 values.
- I don't really know what I'm doing. I sort of squished this together from various sources I found online and I'm hoping there is an obvious solution. It seems like the core problem is large-scale double iteration (every content control getting checked against every column, or vice versa), but I'm not sure what to fix and had trouble googling this directly. Some things I would be interested in, if they are technically feasible:
1. Can I improve my first If statement to exclude cells with formulas but no actual value? (I think it currently only skips truly blank cells, but a lot of cells in any given row will have formulas that return nothing specific; the first sketch after this list shows the kind of check I mean.)
2. Can I drop 'used' values out of subsequent iterations? (I.e., after I get a hit on a CC.Title = dsData(i) match, can I somehow remove that particular value from later passes? The second sketch below shows what I'm picturing.)
3. Would using 'i' and 'j' counters instead of 'i' and 'For Each cc In ActiveDocument.ContentControls' have any practical value here?
4. Is there some completely different approach that would be more efficient? (The second sketch after this list is the kind of restructure I'm imagining.)
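For question 1, here is a minimal sketch of the check I have in mind ('HasRealValue' is a made-up name, and I'm assuming the values come through as Variants the way Excel's .Value does). The idea is that a formula returning an empty string still reads as "" rather than Empty, so a length test catches both truly blank cells and formula-blanks, while the IsError guard keeps error values like #N/A from making CStr blow up:

```vba
' Sketch for question 1: True only when a cell holds a real,
' non-blank value. Formula cells that evaluate to "" are skipped
' the same way truly empty cells are.
Function HasRealValue(ByVal v As Variant) As Boolean
    If IsError(v) Then Exit Function          ' #N/A etc. -> False
    HasRealValue = Len(Trim$(CStr(v))) > 0    ' Empty and "" -> False
End Function
```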
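For questions 2 and 4 together, this is a rough sketch of what I understand the dictionary approach to be (placeholder names throughout: 'titles' and 'vals' stand in for however the 563 headers and the row's values actually arrive, and I'm assuming text-type content controls): load the non-blank values into a Scripting.Dictionary keyed by title, then make a single pass over the content controls with constant-time lookups, removing each value as it's used. That would replace the nested loops entirely, and I think it also makes question 3 mostly moot since there's only one loop left.

```vba
' Sketch for questions 2 and 4: one dictionary load plus one pass
' over the content controls, instead of nested loops.
Sub FillControlsFromRow(ByVal titles As Variant, ByVal vals As Variant)
    Dim dict As Object
    Dim cc As ContentControl
    Dim i As Long

    Set dict = CreateObject("Scripting.Dictionary")

    ' Pass 1: keep only the 200-300 cells that hold real values,
    ' reusing HasRealValue from the first sketch.
    For i = LBound(vals) To UBound(vals)
        If HasRealValue(vals(i)) Then dict(CStr(titles(i))) = vals(i)
    Next i

    ' Pass 2: one trip through the document, constant-time lookups.
    Application.ScreenUpdating = False
    For Each cc In ActiveDocument.ContentControls
        If dict.Exists(cc.Title) Then
            cc.Range.Text = CStr(dict(cc.Title))
            dict.Remove cc.Title   ' question 2: drop the used value
        End If
    Next cc
    Application.ScreenUpdating = True
End Sub
```

I gather that Application.ScreenUpdating = False is itself a big speedup for Word loops like this; real code would presumably want error handling so it gets switched back on even if something fails partway through, but I left that out to keep the sketch short.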