Here is the scenario. I have a workbook with one data source that can have many rows of data (upwards of 500k in some instances). This data source is used to create approx 35 pivot tables that are all connected to approx 23 slicers. As you can imagine, changing slicer values is incredibly inefficient. It can take up to a few minutes to refresh all the pivot tables, which is pretty unacceptable for the program. I have some thoughts on how to solve this, but am having implementation troubles. I am also open to other creative solutions to this problem. I can't use PowerPivot as it is not deployed company wide.
All the slicers are on a dashboard sheet with no pivot tables. On any given pivot table sheet, there are at most 3 pivot tables and no slicers. Since the user is only looking at one sheet at a time, I think it may be best to just have the slicers update the sheet they are looking at. Here is the pseudocode I was thinking of:
1. When user is on the dashboard, disconnect all slicers from all pivot tables, so that the user can change them without all the pivots recalculating.
2. When a user goes onto any pivot table sheet, connect the slicers only to the pivot tables on that sheet.
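Here is a minimal sketch of those two steps using `Worksheet_Activate` events. I'm assuming the slicers all live in `ThisWorkbook.SlicerCaches` and that each pair of handlers goes in the relevant sheet's code module; sheet names and layout are placeholders.

```
' In the dashboard sheet's module:
' disconnect every slicer from every pivot table.
Private Sub Worksheet_Activate()
    Dim sc As SlicerCache
    Dim i As Long
    Application.ScreenUpdating = False
    For Each sc In ThisWorkbook.SlicerCaches
        ' Walk backwards because we are removing from the collection.
        For i = sc.PivotTables.Count To 1 Step -1
            sc.PivotTables.RemovePivotTable sc.PivotTables(i)
        Next i
    Next sc
    Application.ScreenUpdating = True
End Sub

' In each pivot table sheet's module:
' reconnect every slicer, but only to this sheet's pivots.
Private Sub Worksheet_Activate()
    Dim sc As SlicerCache
    Dim pt As PivotTable
    Application.ScreenUpdating = False
    For Each sc In ThisWorkbook.SlicerCaches
        For Each pt In Me.PivotTables
            sc.PivotTables.AddPivotTable pt
        Next pt
    Next sc
    Application.ScreenUpdating = True
End Sub
```

With 23 slicer caches and at most 3 pivots per sheet, the reconnect loop is at most ~69 `AddPivotTable` calls, each of which can trigger a pivot refresh, which is where the remaining slowness comes from.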
I am able to implement this, and it improves usability considerably, but it is still not quite good enough. When I loop through the slicers and reconnect them, the pivots recalculate with each newly connected slicer. This takes approx 7 seconds even with smaller data sets. I want to cut this considerably. Another thought is to only re-connect the slicers that the user has actually used. I haven't tried this yet, so I don't know what kind of performance gain I would get.
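One way to attack the per-slicer recalcs might be `PivotTable.ManualUpdate`: switch it on for each pivot before attaching slicers, so the pivots defer recalculation until it is switched back off, and you pay for roughly one refresh per pivot instead of one per slicer connection. A sketch (the `ConnectSlicersToSheet` name and the idea of calling it from `Worksheet_Activate` are my assumptions; note that `ManualUpdate` reverts to False on its own once the calling procedure ends, so the whole sequence has to happen in one procedure):

```
Sub ConnectSlicersToSheet(ws As Worksheet)
    Dim sc As SlicerCache
    Dim pt As PivotTable

    Application.ScreenUpdating = False

    ' Defer recalculation on this sheet's pivots.
    For Each pt In ws.PivotTables
        pt.ManualUpdate = True
    Next pt

    ' Attach every slicer to this sheet's pivots;
    ' no refresh should fire per connection now.
    For Each sc In ThisWorkbook.SlicerCaches
        For Each pt In ws.PivotTables
            sc.PivotTables.AddPivotTable pt
        Next pt
    Next sc

    ' Re-enable updates: each pivot recalculates once here.
    For Each pt In ws.PivotTables
        pt.ManualUpdate = False
    Next pt

    Application.ScreenUpdating = True
End Sub
```

I haven't benchmarked this against the 7-second figure, but it targets exactly the repeated-recalc behavior described above.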
I'm brainstorming on paper a bit here, but I still don't have a great solution. I would like any insight, ideas, etc. from the community. Ideally we would be able to handle large data sets with little performance loss to the user.