input_metric: split large metric contexts into chunk-sized batches by thewillyhuman · Pull Request #11554 · fluent/fluent-bit


Addresses fluent#9653

Large metric scrapes (e.g., 200K+ metrics from prometheus_scrape)
were encoded as a single chunk exceeding the 2 MB chunk limit,
breaking downstream plugins such as out_opentelemetry. Add a
size-aware split path to flb_input_metrics_append() that batches
metric families into chunk-sized cmetrics (cmt) contexts using
cmt_cat_*() before encoding, so each emitted chunk stays under the
limit. Small payloads are unaffected (fast path preserved).

Signed-off-by: Guillermo Facundo Colunga <guillermo.facundo.colunga@cern.ch>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>