LabeledCriteria: EvalConfig & {
    evaluatorType: "labeled_criteria";
    criteria?: Criteria | Record<string, string>;
    llm?: BaseLanguageModel;
}

Configuration to load a "LabeledCriteriaEvalChain" evaluator, which prompts an LLM to judge whether the model's prediction complies with the provided criteria, and also supplies a "ground truth" reference label for the evaluator to incorporate in its judgment.

Type declaration

  • evaluatorType: "labeled_criteria"
  • Optional criteria?: Criteria | Record<string, string>

    The "criteria" to insert into the prompt template used for evaluation. See the prompt at https://smith.langchain.com/hub/langchain-ai/labeled-criteria for more information.

  • Optional llm?: BaseLanguageModel

    The language model to use as the evaluator; defaults to GPT-4 (see the example below).

Param: criteria

The criteria to use for the evaluator.

Param: llm

The language model to use for the evaluator.

Returns

The configuration for the evaluator.

Example

const evalConfig = {
  evaluators: [{
    evaluatorType: "labeled_criteria",
    criteria: "correctness",
  }],
};

Example

const evalConfig = {
  evaluators: [{
    evaluatorType: "labeled_criteria",
    criteria: { "mentionsAllFacts": "Does the submission include all facts provided in the reference?" },
  }],
};
