
Cache persistence


Matthew Berry

Feb 14, 2024, 3:45:05 PM
to xlSlim Support
Hi Russel,

Is there an easy way to persist cached objects, or the Excel data arrays created by viewPyObject, across Excel sessions and/or until manually refreshed from Excel?

I have some functions that bring in a lot of API data, which is then static once imported. It would be good to have these data imports cached, so (i) they don't have to be re-downloaded whenever I open the spreadsheets, and (ii) they don't have to be re-downloaded when Excel recalculates.

Any thoughts on making data persist, or on setting xlSlim formulas to refresh manually?

Thanks so much! 
Matt 

xlSlim Dev

Feb 15, 2024, 6:35:57 AM
to xlSlim Support
Hi Matt,

You can accomplish this with a bit of Python code. In the sample code below I use Python's pickle module to save a local copy of the dataframe created from a web request.

This function can then be called from Excel as usual. As far as I could see in testing, it behaves as expected: the data is downloaded the first time, and every subsequent run uses the local pickle file.

I put the code on Gist too as it doesn't format too well in Google Groups.

I was not sure about the formula recalculation question as all functions registered by xlSlim only recalculate when inputs change.

Regards,
Russel

"""
import logging
import os
import pandas as pd

LOG = logging.getLogger(__name__)

def read_dataframe_from_pickle_or_url(url: str, path: str) -> pd.DataFrame:
    """Read JSON data from the URL and convert to a pandas dataframe.
   
    The dataframe is cached locally at the supplied path.
   
    Pickle is used to cache the dataframe.
    """
    df = None
    path = os.path.expandvars(path)
    # If the pickle file exists use it
    if os.path.exists(path):    
        LOG.info("Reading data from %s", path)
        df = pd.read_pickle(path)
    # Else fetch the data from the URL
    else:
        LOG.info("Fetching data from %s", url)
        df = pd.read_json(url)
        # Important to pickle the dataframe so the pickle file
        # is available the next time the function runs
        LOG.info("Pickling dataframe to %s", path)
        df.to_pickle(path)        
    return df


if __name__ == "__main__":
    logging.basicConfig(level="INFO")
    url = "https://jsonplaceholder.typicode.com/todos"
    path = r"%TEMP%\df.pickle"
    df = read_dataframe_from_pickle_or_url(url, path)
    print(df.head())
"""

Matthew Berry

Feb 15, 2024, 1:19:58 PM
to xlSlim Support
Thank you for introducing me to pickle!  This solves my problem perfectly :)