[GH-1146] Cap features sent to LLM during semantic ingestion update by o-love · Pull Request #1163 · MemMachine/MemMachine


Add a configurable max_features_per_update (default 50) to prevent
LengthFinishReasonError when a large profile causes the LLM response
to overflow its output token budget.

The limit flows from SemanticMemoryConf (YAML) through SemanticService
into IngestionService, and is passed as page_size to get_feature_set
in the update path.
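The plumbing described above can be sketched as follows. This is a minimal illustration, not MemMachine's actual implementation: the class and method names mirror the PR (SemanticMemoryConf, IngestionService, get_feature_set, max_features_per_update, page_size), but the signatures, the FeatureStore helper, and the dataclass layout are assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class SemanticMemoryConf:
    # Cap on profile features fetched per ingestion update (default 50).
    # Keeps the update prompt small enough that the LLM response does not
    # overflow its output token budget (LengthFinishReasonError).
    max_features_per_update: int = 50


class FeatureStore:
    """Hypothetical stand-in for the store that backs get_feature_set."""

    def __init__(self, features):
        self._features = features

    def get_feature_set(self, page_size):
        # Return at most `page_size` features for the update prompt.
        return self._features[:page_size]


class IngestionService:
    def __init__(self, store, max_features_per_update):
        self._store = store
        self._max = max_features_per_update

    def features_for_update(self):
        # The configured cap is forwarded as page_size in the update path.
        return self._store.get_feature_set(page_size=self._max)


conf = SemanticMemoryConf()
store = FeatureStore([f"feature-{i}" for i in range(200)])
service = IngestionService(store, conf.max_features_per_update)
# A 200-feature profile is capped at 50 features per update.
assert len(service.features_for_update()) == 50
```

With the default of 50, a profile with hundreds of features no longer reaches the model in one oversized prompt; raising or lowering the YAML value adjusts the cap without code changes.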
