Fix: run plugin hooks on command failure, not just success by derekmisler · Pull Request #6794 · docker/cli
Conversation
This PR is part of a trilogy:
- docker/cli <-- you are here
- docker/ai
- docker/pinata
Closes https://github.com/docker/gordon/issues/125
- What I did
Enabled CLI plugin hooks to fire on command failure, not just success. Previously, when a plugin-delegated command like docker build (buildx) or docker compose up failed, the hooks system was skipped entirely. This meant plugins like ai, scout, and debug could never show "What's next:" hints for failed builds or compose errors — exactly when users would benefit most from a suggestion like docker ai "fix my problem".
Additionally, introduced a new error-hooks plugin configuration key that lets plugins opt in to error-only hooks, separate from the existing hooks key that fires on every invocation.
- How I did it
Three coordinated changes:
- cli-plugins/manager/hooks.go — Extended RunPluginHooks to accept a cmdErrorMessage string and forward it to runHooks (previously hardcoded to ""). Refactored pluginMatch to check both hooks (always fires) and error-hooks (fires only when cmdErrorMessage is non-empty). Extracted a new matchHookConfig helper to deduplicate the comma-separated hook matching logic.
- cmd/docker/docker.go — Moved the RunPluginHooks call out of the if err == nil block so hooks fire regardless of plugin exit status. The error message is extracted the same way the native command path (RunCLICommandHooks) already does. Hooks are still skipped for "not found" errors (i.e., unknown commands) to avoid false triggers.
- *_test.go — Added comprehensive tests for:
  - pluginMatch with the new cmdErrorMessage parameter and error-hooks configuration
  - the new matchHookConfig helper
  - RunPluginHooks integration tests verifying both success and failure paths
  - invokeAndCollectHooks edge cases (no plugins, cancelled context, error-hooks skipped on success)
  - getExitCode covering StatusError, wrapped errors, and signal termination
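The two-key dispatch described above can be sketched roughly as follows. This is a simplified illustration, not the code in this PR: hookConfigured stands in for the real matchHookConfig, and the real pluginMatch also returns the matched hook string.

```go
package main

import "fmt"

// hookConfigured is a stand-in for the real matchHookConfig helper;
// the actual implementation does comma-separated prefix-token matching.
func hookConfigured(configured, subCmd string) bool {
	return configured == subCmd
}

// pluginMatchSketch illustrates the dispatch: "hooks" always fires,
// "error-hooks" only fires when the command failed.
func pluginMatchSketch(pluginCfg map[string]string, subCmd string, failed bool) bool {
	if hookConfigured(pluginCfg["hooks"], subCmd) {
		return true // "hooks" fires on every invocation
	}
	if failed && hookConfigured(pluginCfg["error-hooks"], subCmd) {
		return true // "error-hooks" fires only on failure
	}
	return false
}

func main() {
	cfg := map[string]string{"error-hooks": "build"}
	fmt.Println(pluginMatchSketch(cfg, "build", false)) // success: no hook
	fmt.Println(pluginMatchSketch(cfg, "build", true))  // failure: hook fires
}
```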
- How to verify it
These two PRs work together:
- docker/cli#6794 — Makes plugin hooks fire on command failure and adds error-hooks config
- docker/ai#384 — Registers the ai plugin's hook handler to show "ask Gordon" hints when commands fail
1. Build the custom CLI binary
git clone https://github.com/docker/cli.git
cd cli
gh pr checkout https://github.com/docker/cli/pull/6794
make -f docker.Makefile build
# Binary: ./build/docker
2. Build and install the custom docker-ai plugin
Note: task cli:install is currently broken due to a dotenv issue in an unrelated Taskfile. Run the build steps directly instead.

git clone https://github.com/docker/ai.git
cd ai
gh pr checkout https://github.com/docker/ai/pull/384
cd cli
go build -trimpath -ldflags="-s -w" -o docker-ai .
ln -sf "$PWD/docker-ai" ~/.docker/cli-plugins/docker-ai
3. Configure hooks in ~/.docker/config.json
Add the features and plugins keys (merge with your existing config):
{
"features": {
"hooks": "true"
},
"plugins": {
"ai": {
"hooks": "run",
"error-hooks": "build,buildx build,compose"
}
}
}

Important: error-hooks must include both "build" and "buildx build". docker build and docker buildx build produce different command strings internally, and the hook matching is a prefix token match — "buildx build" won't match a bare docker build. Use "compose" (not "compose up") because flags like -f appear between compose and up in the args, breaking the positional match.
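The prefix-token matching behind this caveat can be sketched like so. This is an illustrative reconstruction of the behavior described above, not the actual cli-plugins/manager implementation: each comma-separated entry must match the leading tokens of the invoked subcommand exactly, in order.

```go
package main

import (
	"fmt"
	"strings"
)

// matchHookConfig returns the first configured hook entry whose tokens
// are a prefix of the invoked subcommand's tokens (sketch).
func matchHookConfig(configured, subCmd string) (string, bool) {
	cmdTokens := strings.Fields(subCmd)
	for _, hook := range strings.Split(configured, ",") {
		hookTokens := strings.Fields(hook)
		if len(hookTokens) == 0 || len(hookTokens) > len(cmdTokens) {
			continue
		}
		matched := true
		for i, tok := range hookTokens {
			if cmdTokens[i] != tok {
				matched = false
				break
			}
		}
		if matched {
			return hook, true
		}
	}
	return "", false
}

func main() {
	// "compose" matches even with flags after it...
	fmt.Println(matchHookConfig("build,buildx build,compose", "compose -f x.yml up"))
	// ...but "compose up" does not, because "-f" sits between the tokens.
	fmt.Println(matchHookConfig("compose up", "compose -f x.yml up"))
}
```

This is why "compose" works as a catch-all while "compose up" silently never fires once a -f flag is present.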
4. Test the full flow
Failing build (the main use case):
echo "FROM nonexistent:image" > /tmp/Dockerfile
~/path/to/cli/build/docker build /tmp
# Expected output after the build error:
# What's next:
#     Debug this build failure with Gordon → docker ai "help me fix this build failure"
Failing run:
~/path/to/cli/build/docker run nonexistent:image
# Expected:
# What's next:
#     Debug this container error with Gordon → docker ai "help me fix this container error"
Failing compose:
cat > /tmp/docker-compose.yml <<'EOF'
services:
  broken:
    image: nonexistent:image
EOF
~/path/to/cli/build/docker compose -f /tmp/docker-compose.yml up
# Expected:
# What's next:
#     Debug this Compose error with Gordon → docker ai "help me fix this compose error"
Successful build (error-hooks should NOT fire):
~/path/to/cli/build/docker build -t test .
# Expected: build succeeds, NO Gordon hints shown
# (error-hooks only fire on failure)
Successful run (regular hooks fire, but plugin returns nothing):
~/path/to/cli/build/docker run --rm hello-world
# Expected: run succeeds, no hints shown
# (ai plugin returns nothing when CommandError is empty)
5. Run the tests
# CLI plugin hooks tests
cd cli
go test ./cli-plugins/manager/... ./cmd/docker/...

# AI plugin hooks tests
cd ai/cli
go test ./commands/... -run Hook
6. Restore original plugin when done
ln -sf /Applications/Docker.app/Contents/Resources/cli-plugins/docker-ai \
  ~/.docker/cli-plugins/docker-ai

And remove the features/plugins keys from ~/.docker/config.json if you don't want hooks active.
- Human readable description for the release notes
CLI plugin hooks now fire on command failure (not just success), and plugins can use "error-hooks" to show hints only when commands fail.
I'm not sure about that e2e test, @thaJeztah, it passes sometimes and fails others, and I can't figure out why.
Yeah, that test is rather flaky; not an issue with this PR; I restarted CI.
Yeah, that PR was helpful when working on this one. I saw we were hard-coding an empty "" in place of the error here
Yup; I mostly pinged @vvoland and @laurazard to check if they recalled there was a specific reason; there was this line in the PR description;
This is only available for CLI command executions.
And I do recall there was some chatter at the time about some risks with hooks being triggered recursively (with a growing number of plugins, there's also a growing amount of overhead).
 // RunPluginHooks is the entrypoint for the hooks execution flow
 // after a plugin command was just executed by the CLI.
-func RunPluginHooks(ctx context.Context, dockerCLI config.Provider, rootCmd, subCommand *cobra.Command, args []string) {
+func RunPluginHooks(ctx context.Context, dockerCLI config.Provider, rootCmd, subCommand *cobra.Command, args []string, cmdErrorMessage string) {
Just so I don't forget; this function is exported (and may be in use elsewhere), so we probably can't change the signature.
Ah, yes, maybe we're fine; at least nothing in a public repository AFAICS; https://grep.app/search?q=.RunPluginHooks
We should probably consider moving some of this to internal/, and making the "public" (cli-plugins/xxxx) packages wrappers for the internal implementation, so that we have a better separation between "these are only to be used by the CLI itself" and "these are ok(ish) for external consumers". We've been painted into a corner too many times because things were meant for the CLI itself, but happened to be exported, and now ... got used by other projects.
Forgot to post this, but in case it's useful information; I was curious what the performance impact would be. For that I chose a "minimal" call to some plugins; I picked <plugin> version, which should be very lightweight, and (I expect) not be making API calls.
First, timing when running that command as "plugin" (docker <plugin-name> version); note that some of the overhead here is in the CLI, which may still need optimization (it currently scans for CLI plugin candidates that can provide a subcommand, so effectively, all plugins may get invoked here for the "metadata" subcommand). I also set a remote context (-c remote1), pointing to an ssh:// connection;
hyperfine --runs 5 --warmup 2 'docker -c remote1 --version'
Benchmark 1: docker -c remote1 --version
  Time (mean ± σ):      20.5 ms ±   2.0 ms    [User: 10.7 ms, System: 8.5 ms]
  Range (min … max):    17.5 ms …  23.2 ms    5 runs

hyperfine --runs 5 --warmup 2 'docker -c remote1 compose version'
Benchmark 1: docker -c remote1 compose version
  Time (mean ± σ):      53.1 ms ±   2.4 ms    [User: 19.4 ms, System: 13.2 ms]
  Range (min … max):    50.0 ms …  55.5 ms    5 runs

hyperfine --runs 5 --warmup 2 'docker -c remote1 ai version'
Benchmark 1: docker -c remote1 ai version
  Time (mean ± σ):      72.6 ms ±   5.1 ms    [User: 31.8 ms, System: 17.0 ms]
  Range (min … max):    65.2 ms …  78.4 ms    5 runs

hyperfine --runs 5 --warmup 2 'docker -c remote1 buildx version'
Benchmark 1: docker -c remote1 buildx version
  Time (mean ± σ):      76.7 ms ±   3.7 ms    [User: 37.4 ms, System: 19.8 ms]
  Range (min … max):    71.7 ms …  81.2 ms    5 runs
Also looking at running the same command, but invoking the plugins directly;
hyperfine --runs 5 --warmup 2 'docker --version'
Benchmark 1: docker --version
  Time (mean ± σ):      13.6 ms ±   1.9 ms    [User: 7.1 ms, System: 5.3 ms]
  Range (min … max):    11.8 ms …  16.7 ms    5 runs

hyperfine --runs 5 --warmup 2 '~/.docker/cli-plugins/docker-compose version'
Benchmark 1: ~/.docker/cli-plugins/docker-compose version
  Time (mean ± σ):      17.9 ms ±   3.1 ms    [User: 6.5 ms, System: 3.4 ms]
  Range (min … max):    15.7 ms …  23.3 ms    5 runs

hyperfine --runs 5 --warmup 2 '~/.docker/cli-plugins/docker-ai version'
Benchmark 1: ~/.docker/cli-plugins/docker-ai version
  Time (mean ± σ):      28.4 ms ±   1.6 ms    [User: 12.7 ms, System: 5.3 ms]
  Range (min … max):    25.9 ms …  29.8 ms    5 runs

hyperfine --runs 5 --warmup 2 '~/.docker/cli-plugins/docker-buildx version'
Benchmark 1: ~/.docker/cli-plugins/docker-buildx version
  Time (mean ± σ):      54.1 ms ±  13.0 ms    [User: 20.3 ms, System: 13.4 ms]
  Range (min … max):    38.6 ms …  74.6 ms    5 runs
Looks like the ai plugin is not the slowest, but it still takes twice as long as the CLI itself (not sure why buildx is so slow though; that's surprising!)
This extra overhead may be less problematic in the "unhappy" path (something failed), although it's possible there are scripts that run a command to detect errors; I THINK (but must double-check) we skip hooks if there's no TTY attached, but it's possible we invoke the hook and just don't print the output.
The overhead is of course also "relative" to the command executed; 50ms of extra time probably doesn't matter on a failing docker pull;
hyperfine --runs 5 --warmup 2 --ignore-failure 'docker pull no-sucn-image'
Benchmark 1: docker pull no-sucn-image
  Time (mean ± σ):     822.0 ms ±  62.9 ms    [User: 25.6 ms, System: 28.1 ms]
  Range (min … max):   765.2 ms … 930.0 ms    5 runs

  Warning: Ignoring non-zero exit code.
  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
However, the way hooks currently work is that they must be registered for each command they should be called on. This means that if we want the ai plugin to be called for assistance on any command, it would be invoked after every command, regardless of whether the command was successful or failed. That means that setting a hook on every command would add at least 50ms extra; that's based on running the version subcommand, so it'll probably be more.
One thing we could consider is to introduce a separate config for "error-hooks"; i.e., only invoke the hook for errors, but not for the "happy path"; currently hooks are defined like this;
"plugins": {
  "debug": {
    "hooks": "exec"
  },
  "scout": {
    "hooks": "pull,image pull,buildx build"
  }
},
"features": {
  "hooks": "true"
}
We could add a similar config (error-hooks, failure-hooks ?); note that adding hooks for all commands would be ... slightly tedious (every command has to be enumerated in the list); not sure what's best for that; we could have an "error-handler" or wildcard support, but that may become a slippery slope.
There's another thing we must verify (just thinking out loud here, haven't thought it through, and haven't looked at the logic);
- docker compose build nowadays also hands off builds to buildx, and (similar to the CLI itself) invokes buildx as a plugin (docker compose build -> invokes ~/.docker/cli-plugins/docker-compose build -> invokes ~/.docker/cli-plugins/docker-buildx bake). I wonder how hooks are called in that scenario (could it invoke hooks multiple times, recurse?)
- similar to the above; how do we currently handle failures when invoking a hook? (we must make sure we don't trigger an "error-hook" if a "hook" or "error-hook" failed 😂)
Thanks for the thorough review and the detailed benchmarks, @thaJeztah, that's all super helpful context. And thanks @vvoland for the +1 on the direction.
I've reworked this to use the error-hooks approach. The change is contained to pluginMatch in hooks.go:
- "hooks" behavior is unchanged — fires on every invocation
- New "error-hooks" config key — only fires when the command exits with an error
Example config:
{
"plugins": {
"ai": {
"error-hooks": "build,compose,pull"
}
}
}

The docker.go changes to move RunPluginHooks outside the if err == nil block are still needed; without them, error-hooks would never get a chance to be evaluated. But the performance impact is negligible: pluginMatch is just string comparison, and when a plugin only declares error-hooks, no subprocess is spawned on success.
Re: the longer-term response header approach (WithResponseHook / X-Docker-Hint) — that sounds like the right eventual architecture, but it's beyond the scope of what I can tackle here. Happy to leave that for y'all to drive.
Thanks! Sorry for the delay; I didn't get to look before the weekend.
I gave this a try and it looks to work as expected. I'll have a closer look at the code changes, but overall looks good (I may do a follow-up to handle the signature change; we probably want to start reducing some of the exported functions (move them internal)).
Some thoughts;
I noticed your comment;
Important:
error-hooks must include both "build" and "buildx build". docker build and docker buildx build produce different command strings internally, and the hook matching is a prefix token match — "buildx build" won't match a bare docker build. Use "compose" (not "compose up") because flags like -f appear between compose and up in the args, breaking the positional match.
I think that's something we should look into; perhaps we should have separate info / fields for this, and have a "normalized" presentation of the command that was executed as a separate field;
- docker build -> docker.buildx.build
- docker run -> docker.container.run
- docker ps, docker container ps, docker container ls -> docker.container.list
And in addition, also have the "raw" command that was executed, although maybe the CLI plugin already can obtain that otherwise (through os.Args)
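The normalization above could look something like the table below. This is purely illustrative; the canonical "docker.<object>.<verb>" names are hypothetical and not an existing CLI field.

```go
package main

import "fmt"

// canonicalCommand maps raw command strings to a hypothetical
// normalized form, per the idea sketched above.
var canonicalCommand = map[string]string{
	"build":        "docker.buildx.build",
	"run":          "docker.container.run",
	"ps":           "docker.container.list",
	"container ps": "docker.container.list",
	"container ls": "docker.container.list",
}

// normalize returns the canonical name, falling back to the raw string
// so unknown commands still carry useful information.
func normalize(raw string) string {
	if c, ok := canonicalCommand[raw]; ok {
		return c
	}
	return raw
}

func main() {
	fmt.Println(normalize("container ls")) // aliases collapse to one name
	fmt.Println(normalize("volume prune")) // unmapped commands pass through
}
```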
Another thought; currently, the hook only "hints" (you can run XYZ to do something). While this was by design (we tried to keep functionality limited and only implemented for what we wanted it to be used for), there's also a case to be made to make running such commands easier; we could have some top-level flag to allow the hook to be actually doing something.
I think docker buildx already looks if the (top-level?) --debug flag is set; docker ai could potentially (haven't tried) have something along those lines; if --debug (or some other "I want you to just do things, not tell me what to do" flag) is set, it could run the command to try and provide a solution;
For example, in the "how to try this" case, we create a dockerfile at /tmp/Dockerfile. Typing docker ai "help me fix this build failure" doesn't really help, because it doesn't know "what failure??"
Passing the actual failure to it makes it give a good answer straight out of the box;
docker ai -<<'EOF'
help me explain this build failure:

echo -e 'FROM nonexistent:image\n' | docker build -
[+] Building 1.0s (2/2) FINISHED                                docker:default
 => [internal] load build definition from Dockerfile                     0.0s
 => => transferring dockerfile: 61B                                      0.0s
 => ERROR [internal] load metadata for docker.io/library/nonexistent:image  0.9s
------
 > [internal] load metadata for docker.io/library/nonexistent:image:
------
Dockerfile:1
--------------------
   1 | >>> FROM nonexistent:image
   2 |
   3 |
--------------------
ERROR: failed to build: failed to solve: nonexistent:image: failed to resolve source metadata for docker.io/library/nonexistent:image: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
EOF
Now, passing a here-doc with the output probably isn't great, but docker build does keep logs, and those could be used to get the full details of the build;
docker buildx history ls | head -n 5
BUILD ID                    NAME     STATUS      CREATED AT       DURATION
vqazgdtzh7od2iw70c691g4md   tmp      Error       16 seconds ago   1.0s
12g8bxgb19uav02two9dtfwh8            Completed   12 minutes ago   3.2s
06nee7yg0eochfnu2csthmc88   ai/cli   Completed   51 minutes ago   0.7s
mifw5xygib53v70zw81wcpiny   ai/cli   Completed   51 minutes ago   0.8s
Which can reproduce the exact logs produced, in different formats;
docker buildx history logs vqazgdtzh7od2iw70c691g4md
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 60B done
#1 DONE 0.0s

#2 [internal] load metadata for docker.io/library/nonexistent:image
#2 ERROR: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
------
 > [internal] load metadata for docker.io/library/nonexistent:image:
------

docker buildx history logs --progress rawjson vqazgdtzh7od2iw70c691g4md
{"vertexes":[{"digest":"sha256:6eae6dc06f9344c64c9025ec6845b0ff004a673e1fad264d643813b774baec58","name":"[internal] load build definition from Dockerfile"},{"digest":"sha256:6eae6dc06f9344c64c9025ec6845b0ff004a673e1fad264d643813b774baec58","name":"[internal] load build definition from Dockerfile","started":"2026-02-16T12:16:33.948710795Z","completed":"2026-02-16T12:16:33.948785337Z"},{"digest":"sha256:6eae6dc06f9344c64c9025ec6845b0ff004a673e1fad264d643813b774baec58","name":"[internal] load build definition from Dockerfile","started":"2026-02-16T12:16:33.948980753Z","completed":"2026-02-16T12:16:33.96709692Z"},{"digest":"sha256:c4737a0a7768625f7149b88406679b2d651db48fc66caa5f474715ba3117ba55","name":"[internal] load metadata for docker.io/library/nonexistent:image","started":"2026-02-16T12:16:33.975304753Z","completed":"2026-02-16T12:16:34.916714379Z","error":"pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed"}],"statuses":[{"id":"transferring dockerfile:","vertex":"sha256:6eae6dc06f9344c64c9025ec6845b0ff004a673e1fad264d643813b774baec58","name":"transferring","current":60,"timestamp":"2026-02-16T12:16:33.962916503Z","started":"2026-02-16T12:16:33.954416378Z","completed":"2026-02-16T12:16:33.962916295Z"}]}
Actually; I gave that a try; i.e., what if the error output of buildx would include the build-history id?
docker ai "help me explain the docker build error from build vqazgdtzh7od2iw70c691g4md"

Unfortunately it didn't seem to know about docker buildx history, so it thought it was Docker Build Cloud;
After giving it some hints, it ran the docker buildx history commands and came with a solution;
@thaJeztah thank you for your feedback! that was super detailed and i loved all of it.
There's another thing we must verify (just thinking out loud here, haven't thought it through, and haven't looked at the logic);
- docker compose build nowadays also hands off builds to buildx, and (similar to the CLI itself) invokes buildx as a plugin (docker compose build -> invokes ~/.docker/cli-plugins/docker-compose build -> invokes ~/.docker/cli-plugins/docker-buildx bake). I wonder how hooks are called in that scenario (could it invoke hooks multiple times, recurse?)
- similar to the above; how do we currently handle failures when invoking a hook? (we must make sure we don't trigger a "error-hook" if a "hook" or "error-hook" failed 😂)
good questions on the recursion front. i dug into this and i think the risk is pretty low:
- compose → buildx chain: when docker compose build invokes docker-buildx bake, it spawns a direct subprocess (exec.Command), not a re-entry into runDocker(). the parent CLI's hooks fire once for the original compose build invocation, and the nested buildx call doesn't trigger a second round of hooks from the parent process.
- hook failure cascading: hook errors are silently skipped (continue in invokeAndCollectHooks), so a misbehaving hook never produces a cmdErrorMessage that could trigger error-hooks for other plugins. the error isolation is already there.
the one theoretical edge case is that plugin subprocesses inherit the parent's environment (including DOCKER_CLI_HOOKS=true), so if a hook subprocess somehow re-invoked the docker CLI on a TTY, hooks could fire in that child process. in practice i can't find any hook that does this, but a hardening follow-up with a recursion guard env var (e.g., DOCKER_CLI_HOOKS_EXECUTING=1 checked in invokeAndCollectHooks) would close that off completely. happy to file an issue for that if you think it's worth tracking.
Another thought; currently, the hook only "hints" (you can run XYZ to do something). While this was by design (we tried to keep functionality limited and only implemented for what we wanted it to be used for), there's also a case to be made to make running such commands easier; we could have some top-level flag to allow the hook to be actually doing something.
I think docker buildx already looks if the (top-level?) --debug flag is set; docker ai could potentially (haven't tried) have something along those lines; if --debug (or some other "I cant you to just do things, not tell me what to do" flag), it could run the command to try and provide a solution;
For example, in the "how to try this" case, we create a dockerfile at /tmp/Dockerfile. Typing docker ai "help me fix this build failure" doesn't really help, because it doesn't know "what failure??"
nice, that build-history integration idea is really compelling, especially seeing Gordon nail the diagnosis once it had the actual logs. agreed that "help me fix this build failure" without context isn't very useful on its own.
that said, the goal for this initial pass is more modest: surface the docker ai command to users who may not know it exists, right at the moment they're most likely to want help. the hint is really an advertisement, like "hey, this thing exists", not a one-click fix. once users run docker ai, they're in an interactive session where they can paste the error, describe their setup, etc.
richer context passing (build-history IDs, structured error payloads, the --debug-style auto-fix flow) would be great as a follow-up. I think that touches enough moving parts (buildx output format, HookPluginData schema, the ai plugin's MCP tools) that it deserves its own issue. happy to file one if that's useful.
thank you, @vvoland! i can't merge this PR, unfortunately (not enough permissions) and i'm also unfamiliar with the deploy process. any advice?
LGTM, sorry for the delay; we initially planned doing a v29.2.2 release, and this had to wait for v29.3.0, but we're skipping v29.2.2 😅
I'll have a quick peek at the potential follow-up to keep that signature (and possibly phase out some of the exported functions that are ... intended for internal use)
I don't have a lot of context anymore (it's been a while since I've been looking at CLI code 😅), but I left a couple of thoughts+some nits.
Comment on lines +207 to +257
func TestMatchHookConfig(t *testing.T) {
	testCases := []struct {
		doc             string
		configuredHooks string
		subCmd          string
		expectedMatch   string
		expectedOk      bool
	}{
		{
			doc:             "empty config",
			configuredHooks: "",
			subCmd:          "build",
			expectedMatch:   "",
			expectedOk:      false,
		},
		{
			doc:             "exact match",
			configuredHooks: "build",
			subCmd:          "build",
			expectedMatch:   "build",
			expectedOk:      true,
		},
		{
			doc:             "prefix match",
			configuredHooks: "image",
			subCmd:          "image ls",
			expectedMatch:   "image",
			expectedOk:      true,
		},
		{
			doc:             "comma-separated match",
			configuredHooks: "pull,build,push",
			subCmd:          "build",
			expectedMatch:   "build",
			expectedOk:      true,
		},
		{
			doc:             "no match",
			configuredHooks: "pull,push",
			subCmd:          "build",
			expectedMatch:   "",
			expectedOk:      false,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.doc, func(t *testing.T) {
			match, ok := matchHookConfig(tc.configuredHooks, tc.subCmd)
			assert.Equal(t, ok, tc.expectedOk)
			assert.Equal(t, match, tc.expectedMatch)
		})
	}
}
I understand that this change includes a refactor of hooks.go to split pluginMatch into pluginMatch and matchHookConfig, but these tests are a bit confusing – doesn't this test cover a subset of what is now covered in TestPluginMatch?
err := tryPluginRun(ctx, dockerCli, cmd, args[0], envs)
if ccmd != nil && dockerCli.Out().IsTerminal() && dockerCli.HooksEnabled() && !errdefs.IsNotFound(err) {
	errMessage := cmdErrorMessage(err)
	pluginmanager.RunPluginHooks(ctx, dockerCli, cmd, ccmd, args, errMessage)
It's a bit concerning to me to pass plain error messages into plugins for hook invocation – we took some care previously to implement this with some concern for the potential security implications of "we now call whatever binaries are placed in ~/.docker/cli-plugins", and a large part of that idea was to just not pass that much information when invoking plugins to collect hook responses, instead getting plugins to return a template that the CLI can fill in.
This isn't a major concern because exploiting this would require tricking a user into downloading/setting up a malicious binary, but it's worth calling out that this does change the expectations here to "any plugin might be able to see full error messages, which might contain sensitive information". Maybe we should warn users once to let them know this is the case.
// Plugins can declare two types of hooks in their configuration:
//   - "hooks": fires on every command invocation (success or failure)
//   - "error-hooks": fires only when a command fails (cmdErrorMessage is non-empty)
func pluginMatch(pluginCfg map[string]string, subCmd string, cmdErrorMessage string) (string, bool) {
Seems like this function doesn't need cmdErrorMessage, just to know whether it was an error or not.
err := tryPluginRun(ctx, dockerCli, cmd, args[0], envs)
if ccmd != nil && dockerCli.Out().IsTerminal() && dockerCli.HooksEnabled() && !errdefs.IsNotFound(err) {
	errMessage := cmdErrorMessage(err)
	pluginmanager.RunPluginHooks(ctx, dockerCli, cmd, ccmd, args, errMessage)
Perhaps a better API here would be to simply pass err into pluginmanager.RunPluginHooks and check whether it's nil (and extract the error message) there.