incerto v0.1.0

Uncertainty Quantification for Machine Learning

A comprehensive Python library providing state-of-the-art methods for calibration, out-of-distribution detection, conformal prediction, and uncertainty estimation in deep learning.

5 Core Modules · 50+ Methods · 190 Tests

Comprehensive Uncertainty Toolkit

Everything you need for uncertainty quantification in modern ML systems

📊 Calibration

Ensure your model's confidence matches its accuracy

  • Post-hoc methods: Temperature scaling, Platt scaling, Isotonic regression
  • Training-time: Label smoothing, Focal loss, Evidential deep learning
  • Metrics: ECE, MCE, Brier score, NLL (see the sketch below)
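
As a quick illustration of the headline metric, here is a minimal, self-contained sketch of binned ECE. The helper name binned_ece and the equal-width binning are illustrative choices, not incerto's API; the library ships its own version as ece_score:

import torch

def binned_ece(probs: torch.Tensor, labels: torch.Tensor, n_bins: int = 15) -> float:
    # Expected Calibration Error: average |confidence - accuracy| gap,
    # weighted by how many samples fall in each confidence bin.
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    edges = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.float().mean() * (conf[mask].mean() - correct[mask].mean()).abs()
    return ece.item()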

🎲 OOD Detection

Identify when your model encounters unfamiliar data

  • Score-based: MSP, Energy, ODIN, MaxLogit (energy-score sketch below)
  • Distance-based: Mahalanobis, KNN
  • Training methods: Mixup, CutMix, Outlier Exposure
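
For intuition about the score-based detectors above, the energy score reduces to one line of PyTorch. This is a generic sketch; the standalone energy_score helper is illustrative, while incerto wraps the idea as incerto.ood.Energy:

import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Energy-based OOD score (Liu et al., 2020): E(x) = -T * logsumexp(f(x) / T).
    # In-distribution inputs tend to receive lower energy than OOD inputs.
    return -temperature * torch.logsumexp(logits / temperature, dim=1)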

🎯 Conformal Prediction

Distribution-free uncertainty with coverage guarantees

  • Classification: Inductive CP, APS, RAPS, Mondrian CP
  • Regression: Jackknife+, CV+
  • Finite-sample marginal coverage under exchangeability, with no distributional assumptions (sketch below)
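
The recipe behind these guarantees is short enough to sketch. Below is a minimal split (inductive) conformal classifier over softmax probabilities; split_conformal_sets is a hypothetical name, not incerto's API:

import torch

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    # Nonconformity score: 1 - probability assigned to the true class.
    n = cal_labels.shape[0]
    scores = 1.0 - cal_probs[torch.arange(n), cal_labels]
    # Finite-sample corrected quantile of the calibration scores.
    q_level = min(1.0, (n + 1) * (1 - alpha) / n)
    qhat = torch.quantile(scores, q_level, interpolation="higher")
    # Prediction set: every class whose score clears the threshold.
    return [torch.where(1.0 - p <= qhat)[0].tolist() for p in test_probs]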

🔍 Selective Prediction

Abstain from predictions when uncertain

  • Confidence thresholding strategies
  • Self-Adaptive Training (SAT)
  • Risk-coverage tradeoff analysis (sketch below)
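
To make the tradeoff concrete, here is a minimal sketch of a risk-coverage curve built by confidence thresholding; the risk_coverage helper is illustrative, not incerto's API:

import torch

def risk_coverage(probs: torch.Tensor, labels: torch.Tensor):
    # Sort predictions from most to least confident, then measure the error
    # rate (risk) among the top-k accepted predictions for every k.
    conf, pred = probs.max(dim=1)
    order = conf.argsort(descending=True)
    errors = pred[order].ne(labels[order]).float()
    k = torch.arange(1, len(errors) + 1)
    coverage = k / len(errors)
    risk = errors.cumsum(dim=0) / k
    return coverage, risk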

🤖 LLM Uncertainty

Quantify uncertainty in language model outputs

  • Token-level: Entropy, perplexity, surprisal (entropy sketch below)
  • Sequence-level: Probability, average log-prob
  • Sampling-based: Self-consistency, semantic entropy
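
The token- and sequence-level quantities are simple functions of the logits. A minimal sketch with illustrative helper names (incerto packages these as classes such as TokenEntropy):

import torch
import torch.nn.functional as F

def token_entropies(logits: torch.Tensor) -> torch.Tensor:
    # Per-token predictive entropy from LM logits of shape (batch, seq_len, vocab).
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1)

def sequence_logprob(logits: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
    # Total log-probability of the realized tokens; higher means more confident.
    # (For causal LMs, logits and ids should be shifted by one; omitted here.)
    log_p = F.log_softmax(logits, dim=-1)
    picked = log_p.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)
    return picked.sum(dim=-1)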

📦 Data & Utilities

Built-in datasets and helper functions

  • Datasets: MNIST, CIFAR-10/100, SVHN
  • OOD benchmarks with standard splits
  • Visualization and common architectures

Quick Installation

Get started in seconds

From PyPI

pip install incerto

Coming soon to PyPI

From Source

git clone https://github.com/steverab/incerto.git
cd incerto
pip install -e .

Recommended for development

Requirements

Python 3.10+ · PyTorch 2.0+ · NumPy · scikit-learn

See It In Action

Simple, intuitive API for all uncertainty quantification tasks

Calibration

import torch
from incerto.calibration import TemperatureScaling, ece_score

# Train your model
model = YourModel()
# ... training code ...

# Post-hoc calibration on validation set
calibrator = TemperatureScaling()
calibrator.fit(val_logits, val_labels)

# Get calibrated predictions
test_logits = model(test_data)
calibrated_probs = calibrator.predict(test_logits).probs

# Evaluate calibration of the temperature-scaled probabilities
ece = ece_score(calibrated_probs, test_labels)
print(f"Expected Calibration Error: {ece:.4f}")

OOD Detection

from incerto.ood import Energy
from incerto.data import get_ood_benchmark

# Load OOD benchmark
id_loader, ood_loader = get_ood_benchmark(
    "CIFAR10_vs_SVHN", batch_size=128
)

# Initialize Energy-based OOD detector
detector = Energy(model)

# Compute OOD scores on one batch from each loader
id_data, _ = next(iter(id_loader))
ood_data, _ = next(iter(ood_loader))
id_scores = detector.score(id_data)
ood_scores = detector.score(ood_data)

# Evaluate detection performance
from incerto.ood import auroc
auc = auroc(id_scores, ood_scores)
print(f"AUROC: {auc:.4f}")

Conformal Prediction

from incerto.conformal import inductive_conformal

# Calibrate on validation set
alpha = 0.1  # Desired coverage: 1 - alpha = 90%
predictor = inductive_conformal(
    model, calibration_loader, alpha
)

# Generate prediction sets with coverage guarantees
x_test = torch.randn(100, 3, 32, 32)
prediction_sets = predictor(x_test)

# Each prediction set contains the true label with
# probability >= 90%, marginally over exchangeable test data
for i, pred_set in enumerate(prediction_sets):
    print(f"Sample {i}: Predicted classes {pred_set}")

LLM Uncertainty

from incerto.llm import (
    TokenEntropy,
    SelfConsistency,
    SemanticEntropy
)

# Token-level uncertainty
logits = model(input_ids)  # shape: (batch, seq_len, vocab)
token_entropy = TokenEntropy.compute(logits)

# Sampling-based uncertainty
responses = [
    model.generate(prompt, do_sample=True)
    for _ in range(10)
]

# Self-consistency
sc_result = SelfConsistency.compute(responses)
print(f"Agreement rate: {sc_result['agreement_rate']:.2f}")

# Semantic entropy
se_result = SemanticEntropy.compute(responses)
print(f"Semantic entropy: {se_result['semantic_entropy']:.4f}")

Why incerto?

🎯 Unified API

Consistent interface across all uncertainty quantification methods

PyTorch Native

Built on PyTorch for seamless integration with your models

📚 Research-Backed

Implementations based on peer-reviewed publications

Well Tested

190 tests ensuring reliability and correctness

🚀 Easy to Use

Simple API that doesn't sacrifice functionality

📖 Open Source

MIT licensed, free for commercial and research use

Ready to Get Started?

Join the community making ML models more trustworthy