📊 Evaluation Metrics Guide
Evaluation metrics are essential tools for assessing the performance and quality of the assistant’s responses. They fall under three evaluation themes: generation quality, retrieval quality, and ethics and safety. This guide outlines each metric in detail, along with definitions and formulae where applicable.
🔹 1. Generation Quality Metrics
1.1 Answer Correctness
Definition: Measures how accurate the generated response is when compared to the ground truth.
Key Components:
- Semantic Similarity: Measures how closely the meaning of the generated response aligns with the meaning of the ground truth, even if different words or phrasing are used.
- Factual Similarity: Evaluates whether the facts or claims in the response are accurate and consistent with the ground truth information.
Scoring:
Higher scores indicate a closer alignment with the ground truth, reflecting both semantic and factual correctness.
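Below is a minimal sketch of how the two components could be combined into a single score, assuming claims have already been extracted from both the answer and the ground truth. The exact-match claim comparison and the 0.75/0.25 weighting are illustrative assumptions, not a prescribed method.

```python
from typing import List

def factual_f1(answer_claims: List[str], truth_claims: List[str]) -> float:
    """F1 over claims: TP = claims in both texts, FP = extra claims in the
    answer, FN = ground-truth claims the answer misses."""
    tp = len(set(answer_claims) & set(truth_claims))
    fp = len(set(answer_claims) - set(truth_claims))
    fn = len(set(truth_claims) - set(answer_claims))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def answer_correctness(answer_claims: List[str], truth_claims: List[str],
                       semantic_similarity: float,
                       factual_weight: float = 0.75) -> float:
    """Weighted blend of factual and semantic agreement (weights are illustrative)."""
    return (factual_weight * factual_f1(answer_claims, truth_claims)
            + (1 - factual_weight) * semantic_similarity)
```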
1.2 Answer Similarity
Definition: Evaluates only the semantic similarity between the generated answer and the ground truth.
Note: Unlike answer correctness, this does not account for factual correctness.
1.3 Answer Relevance
Definition: Answer Relevance measures how well the assistant’s response directly addresses the user’s input or question. It ensures that the response is pertinent, avoids irrelevant details, and fulfills the user's informational need.
Range: 0 to 1. Higher values indicate stronger alignment between the response and the original query.
- A value close to 1 means the response is highly relevant to the input.
- A lower value indicates that the response may include off-topic or incomplete information.
How It’s Measured:
- Convert each sentence or response segment into a vector using an embedding model.
- Compute the cosine similarity between the embeddings of the generated response and the user input.
- Average the similarity scores across all segments.
Formula:

$$\text{Answer Relevance} = \frac{1}{N}\sum_{i=1}^{N}\cos\left(E_{g_i},\, E_q\right)$$

Where:
- $E_{g_i}$ is the embedding of the $i$-th segment of the generated response,
- $E_q$ is the embedding of the user input, and
- $N$ is the number of segments.
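A small sketch of the measurement steps above. The sentence-transformers library and the all-MiniLM-L6-v2 model are assumptions here; any embedding model can be substituted.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from typing import List

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def answer_relevance(response_segments: List[str], user_input: str) -> float:
    """Average cosine similarity between each response segment and the query."""
    query_vec = model.encode(user_input)
    segment_vecs = model.encode(response_segments)
    sims = [
        float(np.dot(v, query_vec) / (np.linalg.norm(v) * np.linalg.norm(query_vec)))
        for v in segment_vecs
    ]
    return sum(sims) / len(sims)
```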
1.4 BLEU Score
Definition: A widely used metric based on n-gram precision and brevity penalty, used to compare generated text with reference responses.
Range: 0 (no match at all) to 1 (perfect match with the reference).
Formula:

$$\text{BLEU} = BP \cdot \exp\left(\sum_{n=1}^{N} w_n \log p_n\right), \qquad BP = \begin{cases} 1 & \text{if } c > r \\ e^{\,1 - r/c} & \text{if } c \le r \end{cases}$$

Where:
- $p_n$ is the modified precision for n-grams of size $n$,
- $w_n$ is the weight for each n-gram size (typically uniform, $w_n = 1/N$), and
- $BP$ is the brevity penalty, with $c$ the length of the generated text and $r$ the reference length.
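For reference, BLEU can be computed with NLTK's implementation, as sketched below. The example sentences are made up, and smoothing is an optional choice that avoids zero scores when a higher-order n-gram has no match.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["the cat sat on the mat".split()]   # list of tokenised references
candidate = "the cat is on the mat".split()      # tokenised generated text

score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```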
1.5 ROUGE Score
Definition: Measures overlap between n-grams of the generated text and reference text. Includes precision, recall, and F1-score.
Variants:
- ROUGE-N: Overlap of n-grams
- ROUGE-L: Longest Common Subsequence
Range: 0 to 1. A score of 0 means no overlap between the generated text and the reference (poor quality), while a score of 1 means perfect overlap (ideal match).
Formula (F1-score):

$$\text{ROUGE F1} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$

Where Precision is the fraction of n-grams in the generated text that also appear in the reference, and Recall is the fraction of n-grams in the reference that appear in the generated text.
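ROUGE variants can be computed with the rouge-score package, as in the sketch below; the example strings are illustrative only.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score("the cat sat on the mat",   # reference
                      "the cat is on the mat")    # generated text

for name, result in scores.items():
    print(f"{name}: precision={result.precision:.2f} "
          f"recall={result.recall:.2f} f1={result.fmeasure:.2f}")
```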
1.6 Faithfulness
Definition:
The Faithfulness metric measures how factually consistent a response is with the retrieved context. It ensures that the assistant does not hallucinate or fabricate information that is not grounded in the provided sources.
Range: 0 to 1. Higher scores indicate better consistency with the retrieved context.
Steps to Calculate:
- Identify all the claims made in the response.
- For each claim, verify whether it is supported or inferable from the retrieved context.
- Compute the score using the formula below.
Formula:

$$\text{Faithfulness} = \frac{\text{Number of claims in the response supported by the retrieved context}}{\text{Total number of claims in the response}}$$
Interpretation:
- A score of 1 means all claims are backed by the retrieved context.
- A lower score indicates that some claims are unsubstantiated or hallucinated.
🔹 2. Retrieval Quality Metrics
2.1 Context Recall
Definition: Measures how much of the benchmark response (reference answer) can be generated using the retrieved data points.
Formula:

$$\text{Context Recall} = \frac{\text{Number of claims in the reference response supported by the retrieved context}}{\text{Total number of claims in the reference response}}$$
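A minimal sketch of the ratio above, assuming the per-claim support judgments (normally produced by an LLM judge) are already available as booleans; the example claims are made up.

```python
from typing import Dict

def context_recall(claim_support: Dict[str, bool]) -> float:
    """Map each reference-answer claim to whether the retrieved context
    supports it, then take the supported fraction."""
    if not claim_support:
        return 0.0
    return sum(claim_support.values()) / len(claim_support)

# Worked example: two of three reference claims are grounded in the context.
print(context_recall({
    "Paris is the capital of France": True,
    "The Eiffel Tower opened in 1889": True,
    "The Louvre is the largest museum": False,
}))  # ~0.67
```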
2.2 Context Entities Recall
Definition: Evaluates how many common entities (keywords or concepts) are shared between the retrieved context and the reference response.
Formula:

$$\text{Context Entities Recall} = \frac{|\text{Entities in retrieved context} \cap \text{Entities in reference response}|}{|\text{Entities in reference response}|}$$
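A small sketch of the entity-overlap ratio, assuming entities have already been extracted (for example by a named-entity recognition step not shown here).

```python
from typing import Set

def context_entities_recall(context_entities: Set[str],
                            reference_entities: Set[str]) -> float:
    """Share of reference-response entities that also appear in the retrieved context."""
    if not reference_entities:
        return 0.0
    return len(context_entities & reference_entities) / len(reference_entities)

# Example: the reference mentions three entities, the context contains two of them.
print(context_entities_recall({"Paris", "France"}, {"Paris", "France", "Seine"}))  # ~0.67
```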
2.3 Context Precision
Definition: Measures how much of the retrieved data is actually useful for generating the correct response.
Formula:

$$\text{Context Precision} = \frac{\text{Number of retrieved context chunks relevant to the correct response}}{\text{Total number of retrieved context chunks}}$$
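A sketch of the plain ratio above, assuming a relevance judgment is available for each retrieved chunk; some frameworks use a rank-weighted variant instead, which is not shown here.

```python
from typing import List

def context_precision(retrieved_chunks: List[str], relevant_chunks: List[str]) -> float:
    """Fraction of retrieved chunks that are actually useful for the correct response."""
    if not retrieved_chunks:
        return 0.0
    relevant = set(relevant_chunks)
    useful = sum(1 for chunk in retrieved_chunks if chunk in relevant)
    return useful / len(retrieved_chunks)
```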
2.4 Noise Sensitivity
Definition: Measures how frequently the assistant generates incorrect responses due to irrelevant or misleading context.
Range: 0 (better) to 1 (worse)
Formula:

$$\text{Noise Sensitivity} = \frac{\text{Number of incorrect claims in the response}}{\text{Total number of claims in the response}}$$
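A minimal sketch of the ratio, assuming each claim in the response has already been labeled correct or incorrect; the example counts are made up.

```python
def noise_sensitivity(total_claims: int, incorrect_claims: int) -> float:
    """Incorrect claims divided by total claims; lower values are better."""
    if total_claims == 0:
        return 0.0
    return incorrect_claims / total_claims

# Example: 1 of 5 claims in the response is wrong due to misleading context.
print(noise_sensitivity(total_claims=5, incorrect_claims=1))  # 0.2
```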
🔹 3. Ethics and Safety Metrics
These are LLM-based critic metrics, evaluating the ethical quality of responses by asking specific safety-related questions and applying a majority vote mechanism.
Workflow:
- Define a critic prompt.
- Make 3 independent LLM calls.
- Apply majority vote to determine the binary outcome (e.g., Harmful or Not Harmful).
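A sketch of this majority-vote workflow. The toy critic below is a random stand-in for the real LLM call, which this guide does not specify.

```python
import random
from typing import Callable, List

def majority_verdict(prompt: str, critic: Callable[[str], bool],
                     n_calls: int = 3) -> bool:
    """Run the critic prompt several times and return the majority verdict.
    `critic` stands in for an LLM call that answers True (e.g. "Harmful")
    or False (e.g. "Not Harmful")."""
    verdicts: List[bool] = [critic(prompt) for _ in range(n_calls)]
    return sum(verdicts) > n_calls // 2

# Toy critic for demonstration only; a production critic would call an LLM.
toy_critic = lambda prompt: random.random() < 0.1
print(majority_verdict("Is this response harmful? ...", toy_critic))
```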
Evaluated Categories:
- Harmful: Promotes or causes harm.
- Malicious: Exploits or misleads for harmful purposes.
- Bias: Reinforces stereotypes or unfair treatment.
- Toxic: Uses abusive or aggressive language.
- Hateful: Encourages discrimination or hate speech.
- Sexual: Contains inappropriate or explicit sexual content.
- Violent: Incites or glorifies violence.
- Insensitive: Disrespectful to identities or situations.
- Self-harm: Promotes suicidal or self-injurious behavior.
- Manipulative: Deceptively influences user behavior.
Summary Table
| Metric | Category | Measures | Score Range / Type |
|---|---|---|---|
| Answer Correctness | Generation | Semantic + Factual Accuracy | 0–1 |
| Answer Similarity | Generation | Semantic Similarity | 0–1 |
| Answer Relevance | Generation | Relevance to Query | 0–1 |
| BLEU Score | Generation | n-gram Match + Brevity | 0–1 |
| ROUGE Score | Generation | Word Sequence Overlap | 0–1 |
| Faithfulness | Generation | Factual Consistency with Retrieved Context | 0–1 |
| Context Recall | Retrieval | Data-Answer Overlap | 0–1 |
| Context Entities Recall | Retrieval | Entity Overlap | 0–1 |
| Context Precision | Retrieval | Relevant Context Snippets | 0–1 |
| Noise Sensitivity | Retrieval | Errors Due to Noise | 0–1 (lower is better) |
| Ethics and Safety | Safety | Binary Verdicts via LLM | Yes / No |