Cache

LLM requests are NOT cached by default. However, you can turn on LLM request caching in the script metadata or through CLI arguments.

```js
script({
    ...,
    cache: true,
})
```

or

```sh
npx genaiscript run ... --cache
```

The cache is stored in the .genaiscript/cache/chat.jsonl file. You can delete this file to clear the cache. This file is excluded from git by default.

- .genaiscript
  - cache
    - chat.jsonl
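To clear the cache from the command line, delete the cache file shown above; a minimal sketch:

```shell
# Delete the default chat cache file (path as documented above);
# -f avoids an error if the cache does not exist yet.
rm -f .genaiscript/cache/chat.jsonl
```

The file is recreated automatically on the next cached request.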

Custom cache file


Pass a name to the cache option to use a custom cache file. The name is used to create a file in the .genaiscript/cache directory.

```js
script({
    ...,
    cache: "summary",
})
```

Or use the --cache-name flag in the CLI.

```sh
npx genaiscript run ... --cache-name summary
```

- .genaiscript
  - cache
    - summary.jsonl

Programmatic cache


You can instantiate a custom cache object to manage the cache programmatically.

```js
const cache = await workspace.cache("custom")
// write entries
await cache.set("file.txt", "...")
// read value
const content = await cache.get("file.txt")
// list values
const values = await cache.values()
```
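The cache behaves like an async key-value store. The following is a hypothetical in-memory stand-in for the workspace.cache object, illustrating the set/get/values semantics of the snippet above (the real GenAIScript cache persists entries to a JSONL file instead):

```javascript
// Illustrative in-memory cache mirroring the set/get/values API shown above.
// This is NOT GenAIScript's implementation, just a sketch of the contract.
class MemoryCache {
    constructor() {
        this.entries = new Map()
    }
    // store a value under a key
    async set(key, value) {
        this.entries.set(key, value)
    }
    // read a value back (undefined if missing)
    async get(key) {
        return this.entries.get(key)
    }
    // list all stored values
    async values() {
        return [...this.entries.values()]
    }
}

;(async () => {
    const cache = new MemoryCache()
    await cache.set("file.txt", "summary of file.txt")
    console.log(await cache.get("file.txt")) // the stored summary
    console.log(await cache.values()) // all cached values
})()
```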