InstructionProposalSignature¶
gepa.strategies.instruction_proposal.InstructionProposalSignature() (dataclass)
Bases: Signature
Attributes¶
default_prompt_template = "I provided an assistant with the following instructions to perform a task for me:\n```\n<curr_param>\n```\n\nThe following are examples of different task inputs provided to the assistant along with the assistant's response for each of them, and some feedback on how the assistant's response could be better:\n```\n<side_info>\n```\n\nYour task is to write a new instruction for the assistant.\n\nRead the inputs carefully and identify the input format and infer detailed task description about the task I wish to solve with the assistant.\n\nRead all the assistant responses and the corresponding feedback. Identify all niche and domain specific factual information about the task and include it in the instruction, as a lot of it may not be available to the assistant in the future. The assistant may have utilized a generalizable strategy to solve the task, if so, include that in the instruction as well.\n\nProvide the new instructions within ``` blocks."
class-attribute, instance-attribute
input_keys: list[str] = ['current_instruction_doc', 'dataset_with_feedback', 'prompt_template']
class-attribute
output_keys: list[str] = ['new_instruction']
class-attribute
prompt_template: str
class-attribute
Functions¶
validate_prompt_template(prompt_template: str | None) -> None
classmethod
Source code in gepa/strategies/instruction_proposal.py
prompt_renderer(input_dict: Mapping[str, Any]) -> str | list[dict[str, Any]]
classmethod
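The return type is a union: either a plain prompt string or a chat-style message list. A standalone sketch of the message-list branch, substituting the documented input keys into the template (the function name, user role, and message shape are assumptions, not gepa's actual renderer):

```python
from collections.abc import Mapping
from typing import Any

def render_prompt(input_dict: Mapping[str, Any]) -> list[dict[str, Any]]:
    # Substitute the two placeholders using the documented input keys.
    content = (
        input_dict["prompt_template"]
        .replace("<curr_param>", input_dict["current_instruction_doc"])
        .replace("<side_info>", input_dict["dataset_with_feedback"])
    )
    # A single user-role message illustrates the list[dict] branch of the union.
    return [{"role": "user", "content": content}]

messages = render_prompt({
    "prompt_template": "```\n<curr_param>\n```\n\n```\n<side_info>\n```",
    "current_instruction_doc": "Translate the input to French.",
    "dataset_with_feedback": "Example 1: the output was in English.",
})
print(messages[0]["content"])
```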
output_extractor(lm_out: str) -> dict[str, str]
classmethod
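The default template asks the LM to return the new instruction within ``` blocks, so the extractor presumably pulls the fenced content out of the raw completion. A rough sketch of that parsing (the regex, the last-block choice, and the fallback to the whole output are assumptions):

```python
import re

def extract_output(lm_out: str) -> dict[str, str]:
    # Take the content of the last ``` fenced block; if no fence is found,
    # fall back to the whole output (fallback behavior assumed).
    blocks = re.findall(r"```(?:\w*\n)?(.*?)```", lm_out, flags=re.DOTALL)
    text = blocks[-1].strip() if blocks else lm_out.strip()
    return {"new_instruction": text}

raw = "Here is the new instruction:\n```\nBe concise and cite sources.\n```"
print(extract_output(raw)["new_instruction"])  # → Be concise and cite sources.
```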
run(lm: LanguageModel, input_dict: Mapping[str, Any]) -> dict[str, str]
classmethod
Renders the prompt for input_dict, calls lm, and returns the extracted output.
run_with_metadata(lm: LanguageModel, input_dict: Mapping[str, Any]) -> tuple[dict[str, str], str | list[dict[str, Any]], str]
classmethod
Like run(), but also returns the rendered prompt and raw LM output.
Returns:

| Type | Description |
|---|---|
| `tuple[dict[str, str], str \| list[dict[str, Any]], str]` | A tuple of (extracted_output, rendered_prompt, raw_lm_output). |
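To illustrate the render → call → extract flow behind this return tuple, here is a self-contained stand-in that mirrors the documented shape with a stub LM. The simplified prompt, the stub, and the naive extraction are all assumptions; gepa's real method uses the class's own renderer and extractor:

```python
from collections.abc import Mapping
from typing import Any

def run_with_metadata(lm, input_dict: Mapping[str, Any]) -> tuple[dict[str, str], str, str]:
    # Render a (simplified) prompt, call the LM, extract the fenced answer,
    # and return all three pieces, mirroring the documented return tuple.
    rendered_prompt = (
        f"Improve these instructions:\n```\n{input_dict['current_instruction_doc']}\n```"
    )
    raw_lm_output = lm(rendered_prompt)
    extracted_output = {"new_instruction": raw_lm_output.split("```")[1].strip()}
    return extracted_output, rendered_prompt, raw_lm_output

# Stub LM: any callable mapping a prompt string to a completion string.
stub_lm = lambda prompt: "Here you go:\n```\nAnswer in one short sentence.\n```"
out, prompt, raw = run_with_metadata(stub_lm, {"current_instruction_doc": "Answer questions."})
print(out["new_instruction"])  # → Answer in one short sentence.
```

Returning the rendered prompt and raw output alongside the extraction is useful for logging and debugging proposal steps without re-running the LM.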