ErrorMetrics.jl
A Julia package providing error / performance metrics for comparing model outputs with observations.
Quick start
using ErrorMetrics
using Random
y = randn(100) # observations
ŷ = y .+ 0.1randn(100) # model output
mse = metric(MSE(), ŷ, y)
nse = metric(NSE(), ŷ, y)
pcor = metric(Pcor(), ŷ, y)
# with observational uncertainty
yσ = 0.2 .* ones(100)
nseσ = metric(NSEσ(), ŷ, y, yσ)

See the API page for the full list of metrics.
Available Metrics
Error-based Metrics
MSE: Mean squared error - Measures the average squared difference between predicted and observed values
NAME1R: Normalized Absolute Mean Error with 1/R scaling - Measures the absolute difference between means normalized by the range of observations
NMAE1R: Normalized Mean Absolute Error with 1/R scaling - Measures the average absolute error normalized by the range of observations
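As a quick reference, the formulas implied by these descriptions can be sketched as below; this is an illustrative sketch only, and the package's own implementations may differ in details such as NaN handling.

using Statistics
# Illustrative reference formulas based on the descriptions above (not the package source)
mse_ref(ŷ, y) = mean((ŷ .- y) .^ 2)                                    # mean squared error
name1r_ref(ŷ, y) = abs(mean(ŷ) - mean(y)) / (maximum(y) - minimum(y)) # |difference of means| / range of observations
nmae1r_ref(ŷ, y) = mean(abs.(ŷ .- y)) / (maximum(y) - minimum(y))     # mean absolute error / range of observations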
Nash-Sutcliffe Efficiency Metrics
NSE: Nash-Sutcliffe Efficiency - Measures model performance relative to the mean of observations
NSEInv: Inverse Nash-Sutcliffe Efficiency - Inverse of NSE for minimization problems
NSEσ: Nash-Sutcliffe Efficiency with uncertainty - Incorporates observation uncertainty in the performance measure
NSEσInv: Inverse Nash-Sutcliffe Efficiency with uncertainty - Inverse of NSEσ for minimization problems
NNSE: Normalized Nash-Sutcliffe Efficiency - Measures model performance relative to the mean of observations, normalized to [0,1] range
NNSEInv: Inverse Normalized Nash-Sutcliffe Efficiency - Inverse of NNSE for minimization problems, normalized to [0,1] range
NNSEσ: Normalized Nash-Sutcliffe Efficiency with uncertainty - Incorporates observation uncertainty in the normalized performance measure
NNSEσInv: Inverse Normalized Nash-Sutcliffe Efficiency with uncertainty - Inverse of NNSEσ for minimization problems
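For orientation, the classic Nash-Sutcliffe Efficiency and one common [0,1] normalization can be sketched as follows; the exact formulas used by the package, in particular for the σ-weighted and inverse variants, may differ.

using Statistics
# Illustrative sketch only (not the package source)
nse_ref(ŷ, y) = 1 - sum((ŷ .- y) .^ 2) / sum((y .- mean(y)) .^ 2)  # classic Nash-Sutcliffe Efficiency
nnse_ref(ŷ, y) = 1 / (2 - nse_ref(ŷ, y))                           # a common normalization of NSE to (0, 1]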
Correlation-based Metrics
Pcor: Pearson Correlation - Measures linear correlation between predictions and observations
PcorInv: Inverse Pearson Correlation - Inverse of Pcor for minimization problems
Pcor2: Squared Pearson Correlation - Measures the strength of linear relationship between predictions and observations
Pcor2Inv: Inverse Squared Pearson Correlation - Inverse of Pcor2 for minimization problems
NPcor: Normalized Pearson Correlation - Measures linear correlation between predictions and observations, normalized to [0,1] range
NPcorInv: Inverse Normalized Pearson Correlation - Inverse of NPcor for minimization problems
Rank Correlation Metrics
Scor: Spearman Correlation - Measures monotonic relationship between predictions and observations
ScorInv: Inverse Spearman Correlation - Inverse of Scor for minimization problems
Scor2: Squared Spearman Correlation - Measures the strength of monotonic relationship between predictions and observations
Scor2Inv: Inverse Squared Spearman Correlation - Inverse of Scor2 for minimization problems
NScor: Normalized Spearman Correlation - Measures monotonic relationship between predictions and observations, normalized to [0,1] range
NScorInv: Inverse Normalized Spearman Correlation - Inverse of NScor for minimization problems
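Both correlation families build on the standard correlation coefficients; a hedged sketch of the plain (non-normalized, non-inverse) variants, assuming the usual Pearson and Spearman definitions:

using Statistics, StatsBase
# Illustrative sketch only; the package's normalized and inverse variants apply further transformations
pcor_ref(ŷ, y) = cor(vec(ŷ), vec(y))          # Pearson correlation
pcor2_ref(ŷ, y) = pcor_ref(ŷ, y)^2            # squared Pearson correlation
scor_ref(ŷ, y) = corspearman(vec(ŷ), vec(y))  # Spearman rank correlation
scor2_ref(ŷ, y) = scor_ref(ŷ, y)^2            # squared Spearman correlation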
Adding a New Metric
1. Define the Metric Type
Create a new metric type in the ErrorMetrics.jl source:
export NewMetric
struct NewMetric <: ErrorMetric end

Requirements:
Use PascalCase for the type name
Make it a subtype of ErrorMetric
Export the type
Add a purpose function describing the metric's role
2. Implement the Metric Function
Implement the metric calculation:
function metric(::NewMetric, ŷ::AbstractArray, y::AbstractArray, yσ::AbstractArray)
# Your metric calculation here
return metric_value
end

Requirements:
Function must be named metric
The canonical ErrorMetrics API is metric(m, ŷ, y[, yσ]), where:
m: metric type instance (e.g. NewMetric())
ŷ: model simulation / estimate
y: observations
yσ (optional): observational uncertainty; if omitted, ErrorMetrics uses a ones-like default (no allocation)
For a new metric type, implement the 4-argument method:
metric(::NewMetric, ŷ, y, yσ)

You do not need to implement the 3-argument method; it is already provided and forwards to your 4-argument method.
Must return a scalar value
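Putting steps 1 and 2 together, a minimal end-to-end sketch as it might appear in the package source could look like the following; MeanAbsError and its formula are hypothetical illustrations, not existing package metrics.

export MeanAbsError
struct MeanAbsError <: ErrorMetric end  # hypothetical example metric

# 4-argument method; the provided 3-argument method forwards to it
function metric(::MeanAbsError, ŷ::AbstractArray, y::AbstractArray, yσ::AbstractArray)
    return sum(abs.(ŷ .- y) ./ yσ) / length(y)  # uncertainty-scaled mean absolute error (a scalar)
end

Once defined, metric(MeanAbsError(), ŷ, y) also works through the provided 3-argument forwarding.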
3. Define Purpose
Add a purpose function for your metric type:
import OmniTools: purpose
purpose(::Type{NewMetric}) = "Description of what NewMetric does"

4. Testing
Test your new metric by:
Running it on sample data
Comparing results with existing metrics
Verifying it works correctly with different data types and sizes
Testing edge cases (e.g., NaN values)
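A minimal test sketch using Julia's Test standard library; the data and checks are illustrative, and NewMetric stands in for your metric type.

using Test, ErrorMetrics

y = [1.0, 2.0, 3.0]
ŷ = [1.1, 2.1, 3.1]
yσ = fill(0.1, 3)

@testset "NewMetric" begin                  # hypothetical metric type
    val = metric(NewMetric(), ŷ, y, yσ)
    @test val isa Real                      # must return a scalar
    @test isfinite(val)
    # compare against an existing metric or a hand-computed reference value here
end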
Examples
Calculating Metrics
using ErrorMetrics
# Define observations and model output
y = [1.0, 2.0, 3.0] # observations
yσ = [0.1, 0.1, 0.1] # uncertainties
ŷ = [1.1, 2.1, 3.1] # model output
# Calculate MSE
mse = metric(MSE(), ŷ, y, yσ)
# Calculate correlation
correlation = metric(Pcor(), ŷ, y, yσ)
# Calculate NSE with uncertainty
nse_uncertain = metric(NSEσ(), ŷ, y, yσ)

Using Multiple Metrics
using ErrorMetrics
# Calculate multiple metrics for comparison
metrics = Dict(
:mse => metric(MSE(), ŷ, y, yσ),
:nse => metric(NSE(), ŷ, y, yσ),
:pcor => metric(Pcor(), ŷ, y, yσ),
:scor => metric(Scor(), ŷ, y, yσ)
)

Best Practices
- Documentation
Add clear documentation for your new metric
Include mathematical formulas if applicable
Provide usage examples
- Testing
Test with various data types and sizes
Verify edge cases (e.g., NaN values)
Compare with existing metrics
- Performance
Optimize for large datasets
Consider memory usage
Handle missing values appropriately (see the sketch after this list)
- Compatibility
Ensure compatibility with existing workflows
Follow the established interface
Maintain consistent error handling
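For example, one way to guard a metric call against NaN observations is to drop incomplete pairs before the call; this is a sketch only, and you should check whether the metric you use already handles NaNs internally.

using ErrorMetrics

# Drop index pairs where either series is NaN before computing a metric (sketch only)
function drop_nan_pairs(ŷ, y)
    keep = .!(isnan.(ŷ) .| isnan.(y))
    return ŷ[keep], y[keep]
end

y = [1.0, NaN, 3.0, 4.0]
ŷ = [1.1, 2.0, NaN, 4.2]
ŷc, yc = drop_nan_pairs(ŷ, y)
mse = metric(MSE(), ŷc, yc)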
Defining Purpose for Metric Types
Each metric type in ErrorMetrics.jl should have a purpose function that describes its role. This helps with documentation and provides clear information about what each metric does.
How to Define Purpose
- Make sure that the base purpose function from OmniTools is imported:
import OmniTools: purpose
- Then, purpose can be easily extended for your metric type:
# For a concrete metric type
purpose(::Type{MyMetric}) = "Description of what MyMetric does"

Best Practices
Keep descriptions concise but informative
Focus on what the metric measures and how it's calculated
Include any normalization or scaling factors in the description
For abstract types, clearly indicate their role in the type hierarchy
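For instance, a hypothetical abstract grouping type and a concrete subtype could be documented like this (names and wording are illustrative only):

# Hypothetical illustration of purpose for an abstract type vs. a concrete type
abstract type MyCorrelationMetric <: ErrorMetric end
struct MyRankMetric <: MyCorrelationMetric end

purpose(::Type{MyCorrelationMetric}) = "Abstract supertype grouping correlation-based metrics"
purpose(::Type{MyRankMetric}) = "Rank (Spearman) correlation between simulation and observations"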