NEA/WPNCS Sub-Group 14: Performance Benchmark for Error Recovery and Experimental Coverage

Contact: Ugur Mertyurek (mertyureku@ornl.gov)

Objective:

The objective of this benchmark is to develop confidence in data assimilation (DA) techniques and adjusted nuclear data. The benchmark intends to develop insight into the mechanics of state-of-the-art methods (e.g., Bayesian estimation, machine learning-based methods, and nonparametric techniques), including their ability to characterize error sources and to quantify the impact of those sources on the responses of applications of interest. This will be facilitated via a toy analytical exercise with representative neutronic behavior. The benchmark is envisioned to have value for a range of applications, including criticality safety, spent fuel characterization, online monitoring for operating reactors, and safeguards applications.

Justification:

The nuclear criticality safety community has been a leader in developing methods for uncertainty characterization in support of the safe handling of nuclear fuel. With the introduction of advanced reactor and fuel cycle concepts, the requirements for uncertainty characterization are expected to become much more stringent due to the lower safety margins anticipated with new material concepts and proposed higher enrichments. Such uncertainty characterization involves the integration of simulation results with measurements to improve predictions. This integration can be achieved using data assimilation (DA) techniques; however, it requires a deeper understanding of how these techniques work. Sharing this understanding among all international stakeholders is necessary to support the ongoing worldwide expansion of nuclear power.

Scope:

The activity consists of a synthetic benchmark built around an analytical exercise (a toy case mimicking a group of experiments and an application) with embedded error sources, intended to compare the performance of state-of-the-art data assimilation (DA) techniques. The cases and error sources are intentionally fictitious, both to keep the analysis focused on the DA methods themselves and to limit the workload for the participants. The performance criteria are the ability to correctly identify and adjust for the embedded error sources and the ability to accurately quantify the experimental coverage for a given application.

Expected Results and Deliverables:

The participants' results include the adjustments proposed by the DA method of their choice and the resulting improved predictions. Participants are also welcome to report any secondary results, which may vary from one DA method to another. An overall analysis report detailing the participants' results and providing insight into the performance of the various methods will be prepared.

Description:

This benchmark consists of 4 experiments and 2 applications, designed to emulate criticality experiments in which the measured response is the critical eigenvalue and the sources of uncertainty are the cross-sections. For simplicity, every model takes the same 5 input cross-sections, with prior Gaussian uncertainties represented by mean values and a covariance structure. The mean values serve as the reference point for calculating the reference eigenvalues of the experiments and the applications. The sensitivity profiles for the experiments and the applications are provided as 5-dimensional vectors: the derivatives of the response eigenvalue with respect to the cross-sections, normalized by the reference cross-section values and the calculated response. Experimental measurements are provided for all 4 experiments.
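
For orientation, the sketch below (Python/NumPy) shows how the provided quantities fit together: given a relative sensitivity profile S and a relative covariance matrix C of the cross-sections, the sandwich rule gives the prior relative response uncertainty as sqrt(S C S^T). The numerical values of S and C here are placeholders, not benchmark data.

    import numpy as np

    # Hypothetical stand-ins for the benchmark inputs; the actual values
    # are distributed with the benchmark package.
    S = np.array([-0.05, -0.27, -0.03, -0.26, -0.12])  # relative sensitivity profile (5-vector)
    sigmas = np.array([0.02, 0.03, 0.01, 0.03, 0.02])  # assumed relative std. dev. of the cross-sections
    C = np.diag(sigmas**2)                             # assumed (diagonal) relative covariance

    # Sandwich rule: relative response variance = S C S^T
    var_rel = S @ C @ S
    print(f"prior relative response uncertainty: {np.sqrt(var_rel):.6f}")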

The goal of this exercise is to calculate the posterior estimates of the application responses, given the experimental models and measurements, for the five problems listed below (a minimal adjustment sketch follows the list). Additionally, we request coverage information regarding the completeness of the set of experiments and the sensitivity of the parameters for the calculated responses (to be discussed further at the September meeting). Each participant is also expected to provide feedback on the ability to quantify experimental relevance.

  1. Estimate the posterior application responses with the inclusion of experiment Albert only.
  2. Estimate the posterior application responses with the inclusion of experiments Albert and Bohr only.
  3. Estimate the posterior application responses with the inclusion of experiments Albert and Chadwick only.
  4. Estimate the posterior application responses with the inclusion of experiments Albert and Dyson only.
  5. Estimate the posterior application responses with the inclusion of experiments Albert, Bohr, Chadwick, and Dyson.
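
As one concrete realization of such a DA step, the sketch below implements a generalized linear least squares (GLLS) update, one common Bayesian adjustment technique; participants are of course free to use any method. All numerical inputs here (the covariances, Albert's sensitivity profile, the measured discrepancy) are hypothetical placeholders, not benchmark data.

    import numpy as np

    def glls_update(C, S_exp, d, C_m):
        """Single GLLS (Bayesian linear) update in relative units.
        C     : (p, p) prior relative covariance of the cross-sections
        S_exp : (n, p) relative sensitivity profiles of the n included experiments
        d     : (n,)   relative discrepancies (measured - calculated) / calculated
        C_m   : (n, n) relative covariance of the measurements
        Returns the relative parameter adjustments and the posterior covariance."""
        G = S_exp @ C @ S_exp.T + C_m        # covariance of the experiment responses
        K = C @ S_exp.T @ np.linalg.inv(G)   # gain matrix
        return K @ d, C - K @ S_exp @ C

    # Hypothetical inputs for problem 1 (experiment Albert only).
    C     = np.diag(np.array([0.02, 0.03, 0.01, 0.03, 0.02])**2)
    S_alb = np.array([[0.10, -0.20, 0.05, -0.15, 0.02]])
    d     = np.array([0.003])                # (measured - calculated)/calculated for Albert
    C_m   = np.array([[0.001**2]])

    delta, C_post = glls_update(C, S_alb, d, C_m)

    # Propagate the adjustment to an application response k_app with profile S_app.
    S_app = np.array([-0.05, -0.27, -0.03, -0.26, -0.12])
    k_app = 0.996
    k_adj   = k_app * (1.0 + S_app @ delta)            # adjusted application response
    sig_adj = k_app * np.sqrt(S_app @ C_post @ S_app)  # posterior uncertainty (sandwich rule)
    print(k_adj, sig_adj)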

Please fill out the response template here.

Application Bravo Calculated Response: μ = 0.99596507, σ = 0.02899880

Sensitivity Profile (Sandwich Rule): [-0.05495225, -0.26728842, -0.02640413, -0.25776423, -0.12361347]
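
One common way to quantify experimental relevance for the coverage question above is the c_k similarity index: the correlation between the application and experiment responses induced by the shared cross-section covariance. The sketch below uses Bravo's sensitivity profile as printed above; the covariance matrix and the Albert profile are hypothetical placeholders.

    import numpy as np

    def ck_similarity(S_app, S_exp, C):
        """c_k index: correlation between the application and experiment
        responses induced by the shared cross-section covariance (the
        sandwich rule appears in both numerator and denominator)."""
        cov   = S_app @ C @ S_exp
        sig_a = np.sqrt(S_app @ C @ S_app)
        sig_e = np.sqrt(S_exp @ C @ S_exp)
        return cov / (sig_a * sig_e)

    # Bravo's profile as given above; the covariance and the Albert
    # profile below are placeholders, not benchmark data.
    S_bravo  = np.array([-0.05495225, -0.26728842, -0.02640413, -0.25776423, -0.12361347])
    S_albert = np.array([0.10, -0.20, 0.05, -0.15, 0.02])
    C        = np.diag(np.array([0.02, 0.03, 0.01, 0.03, 0.02])**2)

    print(f"c_k(Bravo, Albert) = {ck_similarity(S_bravo, S_albert, C):.4f}")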