Overview


The GenAIScript CLI genaiscript runs GenAIScript scripts outside of Visual Studio Code and in your automation.

The CLI is a Node.js package hosted on npm.

  • Install it locally as a devDependency in your project:

npm install -D genaiscript

  • Install it globally:

npm install -g genaiscript

  • Check that your Node.js version is at least 20.x and your npm version is at least 10.x.
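A quick way to check both versions from the terminal:

```shell
# print the installed Node.js version (expect v20 or later)
node --version
# print the installed npm version (expect 10 or later)
npm --version
```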

npx is installed with Node.js.

Using npx, you can run the CLI without any prior installation steps. npx installs the tool on demand, and it also takes care of tricky operating-system issues where the tool is not found in the path.

  • Add --yes to skip the confirmation prompt, which is useful in a CI scenario.

npx --yes genaiscript ...

  • Specify a version range to avoid unexpected behavior caused by cached installations of the CLI in npx.

npx --yes genaiscript@^1.16.0 ...

To make sure that the TypeScript definition files are written and updated, you can add the following scripts to your package.json.

{
  "scripts": {
    "postinstall": "genaiscript scripts fix",
    "postupdate": "genaiscript scripts fix",
    "genaiscript": "genaiscript"
  }
}

The genaiscript entry is also a shorthand script that makes it easier to invoke the CLI using npm run:

npm run genaiscript -- run <script> [files...]

Some optional packages used by the CLI do not support an installation behind an HTTP proxy, which is very common in an enterprise setting.

If your work environment requires going through a proxy, you can use npm install --omit=optional so that the installation skips those optional packages instead of failing.

If your work environment requires going through a proxy, you can set one of the following environment variables (HTTP_PROXY, HTTPS_PROXY, http_proxy or https_proxy) to have the CLI use a proxy, e.g. HTTP_PROXY=http://proxy.acme.com:3128.
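These are ordinary environment variables, so any child process started from the same shell, including the CLI, inherits them. A quick way to verify that the variable is visible, using the example proxy address from above:

```shell
# export the proxy for the current shell session
export HTTPS_PROXY=http://proxy.acme.com:3128
# any child process inherits the variable; print it from Node.js
node -e 'console.log(process.env.HTTPS_PROXY)'
# prints http://proxy.acme.com:3128
```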

The CLI will load the secrets from the environment variables or a ./.env file.

You can override the default .env file name by adding the --env .env.local option, or even load both.

npx genaiscript run <script> --env .env .env.local
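For illustration, a minimal .env file might hold a provider key. OPENAI_API_KEY is shown here assuming the OpenAI provider is used; the value is a placeholder.

```
OPENAI_API_KEY=<your-api-key>
```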

Creates a new script file in the genaisrc folder.

npx genaiscript scripts create <name>

Runs the TypeScript compiler to find errors in the scripts.

npx genaiscript scripts compile

Runs a script on files and streams the LLM output to stdout. Run it from the workspace root.

npx genaiscript run <script> [files...]

where <script> is the id or file path of the script to run, and [files...] are the files to run it on.

The CLI also supports UNIX-style piping.

cat README.md | genaiscript run summarize > summary.md

Run the scripts model command to list the available scripts and their model configuration. This can be useful for diagnosing configuration issues in CI/CD environments.

npx genaiscript scripts model [script]

where [script] can be a script id or a file path.

The CLI can also be imported and used as an API in your Node.js application.

Both files and --vars are variadic command-line arguments: they consume all the following entries until a new option starts, so ordering matters when mixing them. It is best to place the files first, followed by the --vars option.

genaiscript run <script> [files...] --vars key1=value1 key2=value2

  • Run: Learn how to execute genai scripts on files with streaming output to stdout, including usage of glob patterns, environment variables, and output options.
  • Convert: Learn how to apply a script to many files and extract the output.
  • Serve: Launch a local web server.
  • Video: Learn about various video-related commands.
  • Test: Learn how to run tests for your scripts using the GenAIScript CLI with support for multiple AI models.
  • Configure: Configure and validate the LLM connections.