ReflectionConfig (dataclass)

gepa.optimize_anything.ReflectionConfig(
    skip_perfect_score: bool = False,
    perfect_score: float | None = None,
    batch_sampler: BatchSampler | Literal['epoch_shuffled'] = 'epoch_shuffled',
    reflection_minibatch_size: int | None = None,
    module_selector: ReflectionComponentSelector | Literal['round_robin', 'all'] = 'round_robin',
    reflection_lm: LanguageModel | str | None = 'openai/gpt-5.1',
    reflection_prompt_template: str | dict[str, str] | None = optimize_anything_reflection_prompt_template,
    custom_candidate_proposer: ProposalFn | None = None,
)
Controls how the LLM proposes improved candidates each iteration.
The reflection LM sees evaluation feedback (side_info) for a minibatch of
examples and proposes an improved candidate; reflection_lm is the model
used for this step (default: openai/gpt-5.1).
reflection_minibatch_size controls how many examples are shown per
reflection step (default: 1 for single-task runs, 3 otherwise). Showing a
small minibatch rather than all examples at once yields focused,
targeted improvements on that subset. Over successive iterations every
example gets attention, and the Pareto frontier preserves these
specialized gains rather than averaging them away.
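
For illustration, a minimal sketch of constructing this config using only the fields from the signature above. The concrete values here are assumptions chosen for the example, not recommendations; how the resulting config is passed to the optimizer is not shown in this section.

```python
from gepa.optimize_anything import ReflectionConfig

# Example values only; every field name and type comes from the
# signature above, but these particular settings are illustrative.
config = ReflectionConfig(
    reflection_lm="openai/gpt-5.1",   # model that proposes improved candidates
    reflection_minibatch_size=3,      # examples of feedback shown per reflection step
    module_selector="round_robin",    # rotate through components across iterations
    skip_perfect_score=True,          # skip reflection on already-perfect examples
    perfect_score=1.0,                # score treated as "perfect" for that skip
)
```

Fields left unset keep their defaults from the signature, e.g. batch_sampler='epoch_shuffled' and custom_candidate_proposer=None.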