LLM Uncertainty
The LLM module provides uncertainty quantification methods specifically designed for large language models.
Token-level Uncertainty

- Predictive entropy of the next-token distribution at each position.
- Maximum softmax probability at each token position.
- Surprisal (negative log-probability) of each generated token.
- Confidence based on the probability mass in the top-k tokens.
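All four measures can be read off the per-step logits in a single pass. The following is a minimal NumPy sketch; the function name `token_level_uncertainty`, its signature, and the default `k=5` are illustrative assumptions, not the module's actual API.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def token_level_uncertainty(logits, token_ids, k=5):
    """logits: (seq_len, vocab_size); token_ids: generated token id per step."""
    probs = softmax(logits)                                    # (T, V)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)    # predictive entropy per token
    max_prob = probs.max(axis=-1)                              # maximum softmax probability
    chosen = probs[np.arange(len(token_ids)), token_ids]
    surprisal = -np.log(chosen + 1e-12)                        # -log p(generated token)
    topk_mass = np.sort(probs, axis=-1)[:, -k:].sum(axis=-1)   # mass in the top-k tokens
    return entropy, max_prob, surprisal, topk_mass
```

Because everything is derived from the same softmax, the extra cost on top of generation is negligible.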
Sequence-level Uncertainty

- Joint probability of the entire sequence.
- Aggregated entropy over the sequence.
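Both quantities aggregate the token-level values above. A minimal sketch, assuming per-token log-probabilities and entropies are already available; the length normalization shown here is a common convention, not a requirement:

```python
import numpy as np

def sequence_log_prob(token_log_probs):
    """Joint log-probability of the sequence: sum of per-token log-probs."""
    return float(np.sum(token_log_probs))

def aggregated_entropy(token_entropies, normalize=True):
    """Sum (or length-normalized mean) of per-token predictive entropies."""
    total = float(np.sum(token_entropies))
    return total / len(token_entropies) if normalize else total
```

Summed log-probabilities systematically penalize long sequences, which is why the length-normalized variant is often preferred when comparing responses of different lengths.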
Sampling-based Uncertainty

- Self-consistency via majority voting across samples.
- Semantic entropy: entropy over semantically clustered responses.
- Predictive entropy across multiple sampled sequences.
- Mutual information between predictions and the model, separating aleatoric from epistemic uncertainty.
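The estimators below are hedged sketches over N sampled responses. `cluster_ids` stands in for the output of whatever semantic-equivalence step is used (e.g. an NLI model); the clustering itself, and all function names, are assumptions for illustration.

```python
from collections import Counter
import numpy as np

def self_consistency(answers):
    """Majority vote over sampled answers; confidence is the vote share."""
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / len(answers)

def predictive_entropy(seq_log_probs):
    """Monte Carlo estimate over samples: -(1/N) * sum of sequence log-probs."""
    return -float(np.mean(seq_log_probs))

def semantic_entropy(seq_log_probs, cluster_ids):
    """Pool probability mass within clusters of meaning-equivalent responses,
    then take the entropy over clusters."""
    z = np.asarray(seq_log_probs)
    p = np.exp(z - z.max())
    p = p / p.sum()                      # renormalize over the sample set
    mass = {}
    for pi, c in zip(p, cluster_ids):
        mass[c] = mass.get(c, 0.0) + pi
    m = np.array(list(mass.values()))
    return -float((m * np.log(m + 1e-12)).sum())

def mutual_information(sample_probs):
    """BALD-style split for one position: total = H(mean distribution),
    aleatoric = mean per-sample entropy, epistemic = total - aleatoric.
    sample_probs: (num_samples, vocab_size) from stochastic forward passes."""
    mean_p = sample_probs.mean(axis=0)
    total = -float((mean_p * np.log(mean_p + 1e-12)).sum())
    aleatoric = -float((sample_probs * np.log(sample_probs + 1e-12)).sum(axis=-1).mean())
    return total - aleatoric
```

In the mutual-information decomposition, the aleatoric term is the noise the model reports even with its parameters fixed, while the epistemic remainder grows when different samples disagree with one another.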
Generation Methods

- Uncertainty estimation from beam search scores.
- Detection of verbalized uncertainty, i.e. when the model hedges in its own words.
- Uncertainty from contrastive decoding (comparing expert and amateur models).
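The sketches below show one plausible shape for each of these methods; the hedge-phrase list, the function names, and the score handling are all assumptions, not the module's API.

```python
import re
import numpy as np

# Illustrative, deliberately small list of hedging phrases.
HEDGE_PATTERNS = [
    r"\bi'?m not sure\b", r"\bi don'?t know\b", r"\bpossibly\b",
    r"\bperhaps\b", r"\bit (?:may|might|could) be\b",
]

def verbalized_uncertainty(text):
    """Flag responses in which the model hedges in its own words."""
    return any(re.search(p, text, flags=re.IGNORECASE) for p in HEDGE_PATTERNS)

def beam_score_uncertainty(beam_log_probs):
    """Entropy of the softmax over final beam scores: low when one beam
    dominates, high when the beams are nearly tied."""
    z = np.asarray(beam_log_probs, dtype=float)
    p = np.exp(z - z.max())
    p /= p.sum()
    return -float((p * np.log(p + 1e-12)).sum())

def contrastive_gap(expert_log_probs, amateur_log_probs):
    """Per-token expert-minus-amateur log-prob gap; small gaps suggest the
    stronger model adds little information beyond the weaker one."""
    return np.asarray(expert_log_probs) - np.asarray(amateur_log_probs)
```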