[GH-1146] Cap features sent to LLM during semantic ingestion update by o-love · Pull Request #1163 · MemMachine/MemMachine
Add a configurable `max_features_per_update` (default 50) to prevent `LengthFinishReasonError` when a large profile causes the LLM response to overflow its output token budget. The limit flows from `SemanticMemoryConf` (YAML) through `SemanticService` into `IngestionService`, and is passed as `page_size` to `get_feature_set` in the update path.
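A minimal sketch of the capping behavior this PR describes. The names `get_feature_set`, `page_size`, and `max_features_per_update` come from the PR description; the store class and helper function here are hypothetical stand-ins, not MemMachine's actual API:

```python
# Hypothetical illustration of the update-path cap described in the PR.
# Only get_feature_set / page_size / max_features_per_update are taken
# from the PR description; everything else is a stand-in.

DEFAULT_MAX_FEATURES_PER_UPDATE = 50  # the PR's default, configurable via YAML


class FakeFeatureStore:
    """Stand-in for a profile store holding more features than fit in one prompt."""

    def __init__(self, features):
        self._features = list(features)

    def get_feature_set(self, page_size=None):
        # Return at most page_size features, mirroring how the PR uses
        # page_size to bound what is sent to the LLM.
        if page_size is None:
            return list(self._features)
        return self._features[:page_size]


def features_for_update(store, max_features_per_update=DEFAULT_MAX_FEATURES_PER_UPDATE):
    # The configured limit flows into the update path as page_size,
    # so a large profile can no longer overflow the LLM's output budget.
    return store.get_feature_set(page_size=max_features_per_update)


store = FakeFeatureStore(f"feature-{i}" for i in range(200))
print(len(features_for_update(store)))  # 50
```

Without the cap, all 200 features would be serialized into the prompt, and the model's structured response over them could exceed its output token limit.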