NEA/WPNCS Sub-Group 14:

Performance Benchmark for Error Recovery and Experimental Coverage

Contact: Ugur Mertyurek mertyureku@ornl.gov
Motivation

The nuclear criticality safety community has been at the forefront of developing methods for uncertainty characterization to ensure the safe handling of nuclear fuel. Well-established uncertainty procedures, referred to as data adjustment/assimilation (DA), exist for quantifying biases and uncertainties in critical experiments and their similarity to applications of interest. Because biases in simulation results depend on the configuration of the critical experiments (e.g., flux spectrum, material composition, operating conditions), selecting representative experiments is essential. These experiments are identified based on their similarity to the application in terms of configuration and their response to nuclear data uncertainties. Nuclear data can be adjusted within their reported uncertainty bounds, and the results of these adjustments (assimilations) are used to calculate biases and simulate application responses.

Drawing on extensive experience, these methods, supported by sufficient experimental coverage and conservative uncertainty margins, have reliably provided safe criticality margins for existing light-water reactor (LWR) systems. However, the introduction of advanced reactors and new fuel cycle concepts presents challenges to applying these methods, because existing critical experiments must be adapted for the following scenarios:

  1. Applications under conditions outside the experimental operating range (e.g., hot vs. cold conditions).
  2. Applications exhibiting partial similarity to experiment configurations (e.g., the same moderator with a different fuel type or vice versa) or partial similarity to experimental spectra.
  3. Applications requiring justifiable coverage and confidence for safe operation.

Additionally, the requirements for uncertainty characterization are becoming increasingly stringent, given the reduced safety margins anticipated with new materials and higher fuel enrichments. Advanced uncertainty characterization involves integrating simulation results with experimental measurements to enhance predictions. This integration relies on DA techniques, necessitating a deeper understanding of their mechanics. Global collaboration is essential to facilitate the exchange of knowledge among international stakeholders, supporting the ongoing expansion of nuclear power.

The Benchmark

The objective of this benchmark is to build confidence in DA techniques and adjusted nuclear data under existing and anticipated challenging conditions. It aims to provide insights into state-of-the-art methods, such as Bayesian estimation, machine learning-based approaches, and nonparametric techniques, focusing on their capacity to:

  1. Characterize error sources.
  2. Quantify their impact on application-specific responses.

These goals will be achieved through a simplified analytical exercise (a "toy case") that replicates representative neutronic behavior across the range of operating conditions spanned by the body of experiments and the application (e.g., different fuel-to-moderator ratios). The benchmark is designed to be valuable across a wide range of applications, including criticality safety, spent fuel characterization, online reactor monitoring, and safeguards.

Problem Description

The activity involves a synthetic benchmark consisting of an analytical exercise—a toy case that mimics a group of experiments and corresponding applications at a range of operating conditions—with embedded error sources to enable comparisons of the performance of various state-of-the-art DA techniques. The performance criteria are the ability to correctly identify and adjust for the embedded error sources and the ability to accurately predict the application bias and its uncertainty. The benchmark is structured as follows:

  1. Similar Operating Conditions and Configurations: Existing LWR applications and experiments are selected to reflect similar operating conditions and configurations, allowing for straightforward comparisons.
  2. Nonlinear Behavior Simulation: Applications and experiments are designed to exhibit nonlinear behavior with respect to data adjustments. This includes cases where the experiments or applications diverge in operating conditions (e.g., differing hydrogen-to-heavy-metal ratios or state conditions). Unrealistically large covariance values are introduced to challenge DA methods and simulate extreme scenarios.
  3. Confidence in Experimental Coverage: The degree of experimental coverage provided by a given set of experiments can vary depending on the methodology used. Variations in confidence in experimental coverage can lead to significant changes in data adjustments, resulting in large differences in predicted biases and uncertainties.
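Although the benchmark does not prescribe a coverage metric, one widely used similarity measure in criticality safety is the c_k correlation coefficient, which weights the agreement of application and experiment sensitivity vectors by the nuclear data covariance. A minimal sketch, assuming NumPy and purely hypothetical sensitivity and covariance values (not benchmark data):

```python
import numpy as np

def ck(S_app, S_exp, cov):
    """c_k similarity: covariance-weighted correlation between an application
    and an experiment sensitivity vector (sandwich-rule covariances)."""
    num = S_app @ cov @ S_exp
    den = np.sqrt((S_app @ cov @ S_app) * (S_exp @ cov @ S_exp))
    return num / den

# Hypothetical illustrative values, for demonstration only
cov = np.diag([0.02, 0.05, 0.01, 0.04, 0.03]) ** 2  # relative prior covariance
S_app = np.array([-0.06, -0.25, -0.02, -0.24, -0.11])
S_exp = np.array([-0.05, -0.27, -0.03, -0.26, -0.12])
similarity = ck(S_app, S_exp, cov)  # approaches 1.0 for well-matched spectra
```

Values of c_k near 1 are commonly taken to indicate strong experimental coverage, though (as item 3 notes) the threshold and its interpretation vary by methodology.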

Expected Results and Deliverables

The participants' results include the adjustments proposed by the DA method of their choice and the predicted application responses. Participants are also welcome to report any additional secondary results that may vary from one DA method to another. An overall analysis report, detailing the participants' results and providing insight into the performance of the various methods, will be prepared.

Description

This benchmark consists of 4 experiments and 3 applications, designed to emulate criticality experiments in which the measured response is the critical eigenvalue and the sources of uncertainty are the cross sections. For simplicity, all models are assumed to take 5 input cross sections with prior Gaussian uncertainties (represented by mean values and a covariance structure). The mean values serve as the reference for calculating the reference eigenvalues of the experiments and the given application. The sensitivity profiles for the experiments and the applications are provided as five-dimensional vectors: the derivatives of the response eigenvalue with respect to the cross sections, normalized by the reference cross-section values and the calculated response. Experimental measurements are provided for all 4 experiments.
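As a point of reference for participants, a classical Bayesian DA baseline is the generalized linear least squares (GLLS) update; the benchmark itself leaves the choice of method open. The sketch below uses hypothetical numbers (not the benchmark data) to show how sensitivity vectors, a prior covariance, and measured-versus-calculated eigenvalues combine into a posterior application response:

```python
import numpy as np

# Hypothetical stand-in data (not the benchmark values): relative prior
# covariance for the 5 cross sections, sensitivity vectors for two
# experiments and one application, and calculated/measured eigenvalues.
cov_prior = np.diag([0.02, 0.05, 0.01, 0.04, 0.03]) ** 2
S_exp = np.array([[-0.05, -0.27, -0.03, -0.26, -0.12],
                  [-0.10, -0.20, -0.05, -0.15, -0.08]])
S_app = np.array([-0.06, -0.25, -0.02, -0.24, -0.11])
k_calc = np.array([1.0002, 0.9987])
k_meas = np.array([1.0000, 1.0000])
cov_meas = np.diag([1.0e-3, 1.0e-3]) ** 2  # measurement covariance

# GLLS update: choose relative cross-section adjustments that reconcile
# the calculated and measured eigenvalues within the prior uncertainties.
d = (k_meas - k_calc) / k_calc              # relative C/E discrepancies
W = S_exp @ cov_prior @ S_exp.T + cov_meas  # response-space covariance
gain = cov_prior @ S_exp.T @ np.linalg.inv(W)
dx = gain @ d                               # posterior relative adjustments
cov_post = cov_prior - gain @ S_exp @ cov_prior

# Propagate the adjustment to the application response
bias_app = S_app @ dx               # predicted relative bias
var_app = S_app @ cov_post @ S_app  # posterior variance (sandwich rule)
```

The posterior application variance is never larger than the prior one, which is exactly the property the benchmark's nonlinear and partial-coverage cases are designed to stress-test.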

The goal of this exercise is to calculate the posterior estimate of the application response, given the experimental models and measurements, for the five problems listed below. Additionally, we request coverage information regarding the completeness of the set of experiments and the sensitivity of the parameters for the calculated responses (to be discussed further in the September meeting). Each participant is also expected to provide feedback on the ability to quantify experimental relevance.

  1. Estimate the posterior application responses with the inclusion of experiment Albert only.
  2. Estimate the posterior application responses with the inclusion of experiments Albert and Bohr only.
  3. Estimate the posterior application responses with the inclusion of experiments Albert and Chadwick only.
  4. Estimate the posterior application responses with the inclusion of experiments Albert and Dyson only.
  5. Estimate the posterior application responses with the inclusion of experiments Albert, Bohr, Chadwick, and Dyson.
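The five problems above amount to repeating one adjustment over different experiment subsets, which is straightforward to script. A sketch assuming a GLLS-style update and hypothetical stand-in sensitivities and discrepancies (the real sensitivity vectors and measurements come with the benchmark package):

```python
import numpy as np

def glls_posterior(S_exp, d, cov_meas, cov_prior, S_app):
    """Posterior application bias and variance from a GLLS-style adjustment."""
    W = S_exp @ cov_prior @ S_exp.T + cov_meas
    gain = cov_prior @ S_exp.T @ np.linalg.inv(W)
    cov_post = cov_prior - gain @ S_exp @ cov_prior
    return S_app @ (gain @ d), S_app @ cov_post @ S_app

# Hypothetical stand-in sensitivities and C/E discrepancies (not benchmark data)
S = {"Albert":   np.array([-0.05, -0.27, -0.03, -0.26, -0.12]),
     "Bohr":     np.array([-0.10, -0.20, -0.05, -0.15, -0.08]),
     "Chadwick": np.array([-0.02, -0.30, -0.01, -0.28, -0.14]),
     "Dyson":    np.array([-0.08, -0.22, -0.04, -0.18, -0.10])}
disc = {"Albert": -2.0e-4, "Bohr": 1.5e-4, "Chadwick": -3.0e-4, "Dyson": 5.0e-5}
cov_prior = np.diag([0.02, 0.05, 0.01, 0.04, 0.03]) ** 2
S_app = np.array([-0.06, -0.25, -0.02, -0.24, -0.11])

# The five experiment subsets requested by the benchmark
subsets = [["Albert"], ["Albert", "Bohr"], ["Albert", "Chadwick"],
           ["Albert", "Dyson"], ["Albert", "Bohr", "Chadwick", "Dyson"]]
results = {}
for names in subsets:
    S_e = np.vstack([S[n] for n in names])
    d = np.array([disc[n] for n in names])
    cov_m = (1.0e-3) ** 2 * np.eye(len(names))  # measurement covariance
    results[tuple(names)] = glls_posterior(S_e, d, cov_m, cov_prior, S_app)
```

Comparing the five posterior variances shows how each additional experiment tightens (or fails to tighten) the application prediction, which is the comparison the benchmark asks participants to report.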

Please fill out the response template here.




Application Bravo Calculated Response: μ = 0.99596507, σ = 0.02899880

Sensitivity Profile (Sandwich Rule): [-0.05495225, -0.26728842, -0.02640413, -0.25776423, -0.12361347]
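For reference, the quoted σ follows from the sandwich rule, σ² = S Σ Sᵀ, where S is the sensitivity vector and Σ the cross-section covariance. The sketch below applies the rule to the Application Bravo profile above with a hypothetical diagonal covariance; the benchmark's actual covariance matrix is not listed here, so the quoted σ is not reproduced:

```python
import numpy as np

# Sensitivity profile from the Application Bravo line above
S = np.array([-0.05495225, -0.26728842, -0.02640413, -0.25776423, -0.12361347])

# Hypothetical diagonal relative covariance, for illustration only;
# the benchmark's real covariance structure is distributed with the package.
cov = np.diag([0.05, 0.08, 0.04, 0.07, 0.06]) ** 2

sigma = np.sqrt(S @ cov @ S)  # sandwich rule: sigma**2 = S @ cov @ S.T
```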