A comprehensive Python library providing state-of-the-art methods for calibration, out-of-distribution detection, conformal prediction, and uncertainty estimation in deep learning.
Everything you need for uncertainty quantification in modern ML systems
Ensure your model's confidence matches its accuracy
Identify when your model encounters unfamiliar data
Distribution-free uncertainty with coverage guarantees
Abstain from predictions when uncertain
Quantify uncertainty in language model outputs
Built-in datasets and helper functions
Get started in seconds
pip install incerto
Coming soon to PyPI
git clone https://github.com/steverab/incerto.git
cd incerto
pip install -e .
Recommended for development
Simple, intuitive API for all uncertainty quantification tasks
import torch
from incerto.calibration import TemperatureScaling, ece_score
# Train your model
model = YourModel()
# ... training code ...
# Post-hoc calibration on validation set
calibrator = TemperatureScaling()
calibrator.fit(val_logits, val_labels)
# Get calibrated predictions
test_logits = model(test_data)
calibrated_probs = calibrator.predict(test_logits).probs
# Evaluate calibration
ece = ece_score(test_logits, test_labels)
print(f"Expected Calibration Error: {ece:.4f}")
from incerto.ood import Energy
from incerto.data import get_ood_benchmark
# Load OOD benchmark
id_loader, ood_loader = get_ood_benchmark(
"CIFAR10_vs_SVHN", batch_size=128
)
# Initialize Energy-based OOD detector
detector = Energy(model)
# Compute OOD scores (here on a single batch from each loader)
id_data, _ = next(iter(id_loader))
ood_data, _ = next(iter(ood_loader))
id_scores = detector.score(id_data)
ood_scores = detector.score(ood_data)
# Evaluate detection performance
from incerto.ood import auroc
auc = auroc(id_scores, ood_scores)
print(f"AUROC: {auc:.4f}")
from incerto.conformal import inductive_conformal
# Calibrate on validation set
alpha = 0.1 # Desired coverage: 1 - alpha = 90%
predictor = inductive_conformal(
model, calibration_loader, alpha
)
# Generate prediction sets with coverage guarantees
x_test = torch.randn(100, 3, 32, 32)
prediction_sets = predictor(x_test)
# Each prediction set is guaranteed to contain
# the true label with probability ≥ 90%
for i, pred_set in enumerate(prediction_sets):
print(f"Sample {i}: Predicted classes {pred_set}")
from incerto.llm import (
TokenEntropy,
SelfConsistency,
SemanticEntropy
)
# Token-level uncertainty
logits = model(input_ids) # shape: (batch, seq_len, vocab)
token_entropy = TokenEntropy.compute(logits)
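# (Per-token entropy is H_t = -sum_v p_t(v) * log p_t(v), where p_t is the
#  softmax over the vocabulary at position t; higher entropy means a less
#  certain next-token distribution.)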
# Sampling-based uncertainty
responses = [
model.generate(prompt, do_sample=True)
for _ in range(10)
]
# Self-consistency
sc_result = SelfConsistency.compute(responses)
print(f"Agreement rate: {sc_result['agreement_rate']:.2f}")
# Semantic entropy
se_result = SemanticEntropy.compute(responses)
print(f"Semantic entropy: {se_result['semantic_entropy']:.4f}")
Consistent interface across all uncertainty quantification methods
Built on PyTorch for seamless integration with your models
Implementations based on peer-reviewed publications
190 tests ensuring reliability and correctness
Simple API that doesn't sacrifice functionality
MIT licensed, free for commercial and research use
Join the community making ML models more trustworthy