Performance Benchmark for Error Recovery and Experimental Coverage
The nuclear criticality safety community has been at the forefront of developing methods for uncertainty characterization to ensure the safe handling of nuclear fuel. Well-established uncertainty procedures, referred to as data adjustment/assimilation (DA), exist for quantifying biases and uncertainties in critical experiments and their similarity to the application of interest. Since biases in simulation results depend on the configuration of the critical experiments (e.g., flux spectrum, material composition, operating conditions), selecting representative experiments is essential. These experiments are identified based on their similarity to the application in terms of configuration and their response to nuclear data uncertainties. Nuclear data can be adjusted within their reported uncertainty bounds, and the results of these adjustments (assimilations) are used to calculate biases and simulate application responses.
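For reference, one widely used DA formulation is the generalized linear least-squares (GLLS) adjustment; the sketch below uses generic symbols of our own choosing rather than the notation of any particular code:

$$
\boldsymbol{\sigma}' = \boldsymbol{\sigma}_0 + \mathbf{M}\,\mathbf{S}^{\mathsf{T}}\left(\mathbf{S}\,\mathbf{M}\,\mathbf{S}^{\mathsf{T}} + \mathbf{V}\right)^{-1}\left(\mathbf{E} - \mathbf{C}\right),
\qquad
\mathbf{M}' = \mathbf{M} - \mathbf{M}\,\mathbf{S}^{\mathsf{T}}\left(\mathbf{S}\,\mathbf{M}\,\mathbf{S}^{\mathsf{T}} + \mathbf{V}\right)^{-1}\mathbf{S}\,\mathbf{M},
$$

where $\boldsymbol{\sigma}_0$ and $\mathbf{M}$ are the prior nuclear-data mean values and covariance, $\mathbf{S}$ the sensitivity matrix of the experimental responses, $\mathbf{V}$ the experimental covariance, and $\mathbf{E} - \mathbf{C}$ the measured-minus-calculated discrepancies; the adjusted data $\boldsymbol{\sigma}'$ and covariance $\mathbf{M}'$ are then propagated to the application to estimate its bias and uncertainty.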
Drawing on extensive experience, these methods, supported by sufficient experimental coverage and conservative uncertainty margins, have reliably provided safe criticality margins for existing light-water reactor (LWR) systems. However, the introduction of advanced reactors and new fuel cycle concepts presents challenges to applying these methods, owing to the need to adapt existing critical experiments for the following scenarios:
Additionally, the requirements for uncertainty characterization are becoming increasingly stringent, given the reduced safety margins anticipated with new materials and higher fuel enrichments. Advanced uncertainty characterization involves integrating simulation results with experimental measurements to enhance predictions. This integration relies on DA techniques, which necessitates a deeper understanding of their mechanics. Global collaboration is essential to facilitate the exchange of knowledge among international stakeholders, supporting the ongoing expansion of nuclear power.
The objective of this benchmark is to build confidence in DA techniques and adjusted nuclear data under existing and anticipated challenging conditions. It aims to provide insights into state-of-the-art methods, such as Bayesian estimation, machine learning-based approaches, and nonparametric techniques, focusing on their capacity to:
These goals will be achieved through a simplified analytical exercise (a "toy case") that replicates representative neutronic behavior across the range of operating conditions spanned by the body of experiments and the application, e.g., different fuel-to-moderator ratios. The benchmark is designed to be valuable across a wide range of applications, including criticality safety, spent fuel characterization, online reactor monitoring, and safeguards.
The activity involves a synthetic benchmark consisting of an analytical exercise—a toy case that mimics a group of experiments and corresponding applications at a range of operating conditions—with embedded error sources to enable comparisons of the performance of various state-of-the-art DA techniques. The performance criteria are the ability to correctly identify and adjust for the embedded error sources and the ability to accurately predict the application bias and its uncertainty. The benchmark is structured as follows:
The participants' results include the adjustments proposed by the DA method of their choice and the predicted application responses. Participants are also welcome to report any additional secondary results, which may vary from one DA method to another. An overall analysis report, detailing the participants' results and providing insight into the performance of the various methods, will be prepared.
This benchmark consists of 4 experiments and 3 applications, designed to emulate criticality experiments where the measured response is the critical eigenvalue and the sources of uncertainty are the cross-sections. For simplicity, all models are assumed to receive 5 input cross-sections with prior Gaussian uncertainties (represented by mean values and a covariance structure). The mean values are used as the reference for calculating the reference eigenvalues for the experiments and the applications. The sensitivity profiles for the experiments and the applications are provided as 5-dimensional vectors, represented as the derivatives of the response eigenvalue with respect to the cross-sections, and normalized by the reference cross-section values and the calculated response. The experimental measurements are provided for all 4 experiments.
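As an illustration of how these inputs fit together, the sketch below shows a first-order sensitivity model relating relative cross-section perturbations to the eigenvalue response. All numbers and variable names are placeholders of our own, not the benchmark data.

```python
import numpy as np

# Illustrative placeholders only: the actual benchmark distributes its own
# prior means, covariance matrix, sensitivity vectors, and measurements.
sigma0 = np.array([1.0, 2.0, 0.5, 3.0, 1.5])        # prior mean values of the 5 cross-sections
rel_std = np.array([0.02, 0.05, 0.03, 0.04, 0.01])  # relative 1-sigma prior uncertainties
M = np.diag(rel_std**2)                              # relative prior covariance (uncorrelated here)

# Relative sensitivity vector for one experiment, normalized by sigma0 and the
# calculated response:  S_i = (dk/k) / (dsigma_i/sigma_i).
S_exp = np.array([0.10, -0.25, 0.05, 0.30, -0.02])
k_ref = 1.000                                        # reference (calculated) eigenvalue

# First-order model: dk/k ~= S . (dsigma/sigma) for a relative perturbation.
d_sigma_rel = np.array([0.01, 0.0, -0.02, 0.0, 0.0])
k_perturbed = k_ref * (1.0 + S_exp @ d_sigma_rel)

# Prior relative uncertainty of the calculated eigenvalue ("sandwich rule").
k_rel_std = np.sqrt(S_exp @ M @ S_exp)
print(k_perturbed, k_rel_std)
```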
The goal of this exercise is to calculate the posterior estimate of the application response, given the experimental models and measurements for the five problems listed below. Additionally, we request coverage information regarding the completeness of the set of experiments and the sensitivity of the parameters for the calculated responses (to be discussed further in the September meeting). Each participant is also expected to provide feedback on the ability to quantify experimental relevance.
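As one example of a DA technique a participant might apply, the following is a minimal GLLS-style sketch of a posterior application estimate. The benchmark does not prescribe this method, and every numerical value and variable name below is an illustrative placeholder rather than a benchmark input.

```python
import numpy as np

# Minimal GLLS-style posterior estimate; all numbers are illustrative placeholders.
M = np.diag(np.array([0.02, 0.05, 0.03, 0.04, 0.01]) ** 2)  # prior relative covariance, 5 cross-sections
S = np.array([[0.10, -0.25, 0.05, 0.30, -0.02],             # relative sensitivities, one row per experiment
              [0.12, -0.20, 0.04, 0.28, -0.03],
              [0.08, -0.30, 0.06, 0.25, -0.01],
              [0.11, -0.22, 0.05, 0.27, -0.02]])
S_app = np.array([0.09, -0.27, 0.05, 0.29, -0.02])          # application sensitivity vector

C = np.array([1.0005, 0.9990, 1.0012, 0.9985])              # calculated experiment eigenvalues
E = np.array([1.0000, 1.0003, 0.9998, 1.0001])              # measured eigenvalues
V = np.diag(np.full(4, 1.0e-3) ** 2)                        # experimental (relative) covariance

d = (E - C) / C                                             # relative measured-minus-calculated discrepancies
G_inv = np.linalg.inv(S @ M @ S.T + V)                      # inverse of the total discrepancy covariance
K = M @ S.T @ G_inv                                         # gain mapping discrepancies to data adjustments

delta_rel = K @ d                                           # posterior relative cross-section adjustments
M_post = M - K @ S @ M                                      # posterior cross-section covariance

k_app_calc = 1.0010                                         # calculated application eigenvalue (placeholder)
bias_rel = S_app @ delta_rel                                # predicted relative bias for the application
k_app_post = k_app_calc * (1.0 + bias_rel)                  # posterior application estimate
k_app_rel_std = np.sqrt(S_app @ M_post @ S_app)             # posterior relative uncertainty

print(f"posterior k_app = {k_app_post:.5f} +/- {k_app_rel_std:.2e} (relative)")
```

In this formulation the same gain matrix yields both the data adjustments and the posterior covariance, so the reported application uncertainty reflects the prior nuclear-data uncertainty together with the information added by the experiments.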
Please fill out the response template here.