The document discusses the development of benchmarks for the International Criticality Safety Benchmark Evaluation Project (ICSBEP). It describes the process for developing an ICSBEP benchmark, which includes describing the experiment, evaluating the data and uncertainties, specifying the benchmark model, providing sample calculations, and ensuring quality assurance. Benchmarks undergo rigorous internal and external review to be included in the ICSBEP Handbook, which contains over 500 evaluated experiments from 20 countries to validate nuclear data and computer codes.
The document provides an overview of the IMS Question & Test Interoperability (QTI) specification, which describes a data model for representing assessment content and results. QTI allows for the exchange of assessment items between authoring tools, item banks, test builders, and learning systems. It has undergone several versions since 1999 to support additional features like adaptive testing and metadata.
The document discusses object-oriented testing strategies. It explains that in object-oriented testing, the component being tested is a class-object rather than a function. Unit testing focuses on testing each class's operations and attributes. Integration testing focuses on testing groups of collaborating classes. Validation testing is based on use case scenarios from the object-oriented analysis model. The document provides details on techniques for unit testing, integration testing, and validation testing of object-oriented systems.
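As a minimal sketch of class-level unit testing, the example below exercises a small class's operations and resulting attribute state with Python's unittest; the Account class is invented for illustration, not taken from the document.

```python
# Class-level unit test sketch: the "unit" is a class, and the test
# exercises its operations and resulting attribute state together.
import unittest

class Account:
    def __init__(self, balance: float = 0.0):
        self.balance = balance

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class TestAccount(unittest.TestCase):
    def test_deposit_updates_balance(self):
        acct = Account()
        acct.deposit(25.0)
        self.assertEqual(acct.balance, 25.0)

    def test_rejects_nonpositive_deposit(self):
        with self.assertRaises(ValueError):
            Account().deposit(0)

if __name__ == "__main__":
    unittest.main()
```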
Michael Felderer - Leveraging Defect Taxonomies for Testing - EuroSTAR 2012 (TEST Huddle)
EuroSTAR Software Testing Conference 2012 presentation on Leveraging Defect Taxonomies for Testing by Michael Felderer. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
This document describes the emergence of a new social segment of people around sixty years old. Unlike earlier generations, this generation feels young and active and has no plans to grow old or retire. They have led fulfilling lives pursuing their own passions and interests. They feel complete, curious, and competent with technology. They compete differently, cultivating their own style rather than envying youth.
This document provides an overview and tutorial on benchmarking experiments for criticality safety and reactor physics applications. It discusses the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP), which provide benchmark models and data from critical and subcritical experiments. The tutorial demonstrates how to access and analyze benchmark reports from these projects, including how they are formatted and what type of experimental data and evaluations are typically included. Key sections of a sample benchmark report are dissected, such as the description of experimental configurations and materials, evaluation of data uncertainties, and derivation of the benchmark model. The purpose of conducting such benchmarking and evaluations is to support validation of nuclear data and computer codes used in criticality safety and reactor physics analyses.
This document provides an overview of benchmarking experiments for criticality safety and reactor physics applications. It discusses the benchmarking process used by the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP). The tutorial aims to demonstrate the databases used to access benchmark experiments - the International Criticality Safety Benchmark Experiment Data (DICE) and the International Data Bank for Reactor Physics Experiments (IDAT). It outlines the typical contents of a benchmark report, including experimental data, evaluation, benchmark model specifications, sample calculations and measurements. Participation in ICSBEP and IRPhEP is highlighted as a collaborative international effort.
The document provides guidance on writing formal project reports, with a focus on the structure and content of the main sections. It discusses writing the introduction, which should include background information, the purpose and objectives, a problem statement, previous work, proposed methods, and an outline of the report format. The main body sections may include the proposed solution, design procedures, results and discussions, conclusions, and recommendations. Details are also provided on writing about design constraints and standards, the design description, and implementation.
A Test Analysis Method for Black Box Testing Using AUT and Fault Knowledge (Tsuyoshi Yumoto)
With the rapid increase in the size and complexity of software today, the scope of software testing is also expanding. The efficiency of software testing needs to improve in order to meet delivery deadlines and control the cost of software development, which means tests must be designed so that the number of test cases is sufficient and appropriate. Test analysis is the activity of refining the Application Under Test (AUT) into pieces of a proper size to which test design techniques can be applied; it is the basis for designing tests properly. However, what counts as a proper size depends on individual judgment. This paper proposes a test analysis method for black-box testing using a test category, a classification based on fault and AUT knowledge.
The document provides details on method development for chromatography. It discusses defining key terms, developing a test method plan, optimizing methods through experimental design techniques like factorial design. The method development process involves studying samples, setting goals, reviewing literature, selecting an approach, optimizing parameters, and finalizing the method. Critical parameters like column length and temperature, flow rate, mobile phase composition are identified for optimization. Formal validation is required once the method is developed.
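To picture the factorial-design step, the sketch below enumerates a full-factorial grid over a few of the critical parameters mentioned; the factor levels are invented.

```python
# Full-factorial design sketch for chromatographic method optimization:
# enumerate every combination of the chosen parameter levels.
from itertools import product

factors = {
    "column_temp_C":    [30, 40],        # hypothetical levels
    "flow_rate_mL_min": [0.8, 1.0, 1.2],
    "organic_pct":      [40, 60],        # mobile phase composition
}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} experimental runs")  # 2 * 3 * 2 = 12
for run in runs:
    print(run)
```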
The document discusses the importance of documentation in software testing. It notes that documentation is needed to record test implementation and results, and helps direct testing and reuse tests. There are different types of test documentation, including test plans, specifications, and analysis reports. Effective documentation provides benefits like training, communication, maintenance, and historical reference. Test documentation should be maintained throughout the software development life cycle.
This webinar will provide pesticides residue analysts with valuable information on software method development and data processing for the analysis of pesticide residues in food for both LC–MS and GC–MS. Technical experts will review the latest in software advances to help with data interpretation and reporting.
CIE AS Level Applied ICT Unit 4 - Systems Life Cycle (Mr G)
The document outlines the key stages in the systems life cycle including analysis, design, development, testing and implementation, evaluation and maintenance, and documentation. Analysis involves feasibility studies, data collection, and requirements specification. Design includes output, input, and process design as well as hardware and software selection. Development is when programming and data structures are created. Testing and implementation involves testing modules and the whole system, then implementing using various strategies. Evaluation assesses the system and maintenance includes perfective, corrective and adaptive changes. Documentation covers technical details for specialists and user guides.
The document discusses the process of product teardown for understanding how a product is made and functions. It involves disassembling a product and analyzing its physical components and functional behavior. The key steps are: 1) listing design issues, 2) preparing tools, 3) examining distribution/installation, 4) disassembling and measuring components, and 5) creating data sheets and models including an exploded view, bill of materials, functional model, and force flow diagram. Examples of teardown analyses of a scanner and hot glue gun are provided to illustrate the process.
A study on the efficiency of a test analysis method utilizing test-categories... (Tsuyoshi Yumoto)
This document describes a study on improving the efficiency of test analysis through utilizing test categories based on application under test (AUT) knowledge and known faults. The study proposes a method for defining test categories based on logical structures of features to guide test condition determination. A verification experiment was conducted and showed measurable improvement in test coverage when using the proposed method. The method aims to minimize variability in test analysis results by providing a standardized process for testers to follow.
This document discusses the importance of test data documentation. It defines test data as samples of valid and invalid data used for testing. Documenting test data has advantages like reusing data for regression testing and aiding user acceptance testing. Test design techniques like boundary value analysis and equivalence partitioning help identify test data by partitioning inputs. The document emphasizes generating comprehensive test data through templates and linking it to test scripts to ensure test coverage.
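As a small sketch of how boundary value analysis and equivalence partitioning yield test data, assuming a made-up valid input range of [1, 100]:

```python
# Boundary value analysis sketch: for a valid range [lo, hi], test the
# edges, their neighbors, and one representative interior value.
def boundary_values(lo: int, hi: int) -> list[int]:
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

# Equivalence partitioning pairs each partition with one representative.
partitions = {"below range": 0, "in range": 50, "above range": 101}

print(boundary_values(1, 100))  # [0, 1, 2, 50, 99, 100, 101]
print(partitions)
```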
Quality Control Study on 3-Pole MCCB - MBA SIP Report (Akshay Nair)
This document summarizes a quality control study on the production of 3-pole MCCBs at Havells India Ltd. The study found that the current process has low Cp and Cpk values, indicating high variation and rejection rates. The objectives were to analyze the production process, identify causes for low Cp and Cpk, and make improvements. Data was collected and analyzed using tools like control charts, fishbone diagrams, and Pareto charts. Improvements like fixing tooling issues and developing new springs increased Cp values, but further work is needed to meet Cpk specifications. Recommendations include improving incoming material quality and using more in-house parts. The internship provided hands-on experience with quality control management systems.
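For reference, Cp and Cpk follow directly from the specification limits and the process mean and standard deviation; a minimal sketch with invented measurements:

```python
# Process capability sketch: Cp measures spread against the spec width,
# Cpk additionally penalizes an off-center process mean.
from statistics import mean, stdev

measurements = [9.8, 10.1, 10.0, 9.9, 10.3, 10.2, 9.7, 10.0]  # invented data
usl, lsl = 10.5, 9.5                                          # spec limits

mu, sigma = mean(measurements), stdev(measurements)
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```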
The document provides an overview of fundamentals of testing including the testing process, psychology of testing, and exams. It describes the typical activities in a test process including test planning, monitoring and control, analysis, design, implementation, execution, and completion. For each activity, it outlines the common tasks and work products. It also discusses how human psychology and the different mindsets of testers and developers can impact testing. The document emphasizes the importance of independence in testing to avoid author bias and more effectively find defects.
6. FUNDAMENTALS OF SE AND REQUIREMENT ENGINEERING.ppt (PedadaSaikumar)
This document discusses requirement engineering fundamentals including requirement elicitation, analysis, and system models. It defines what requirements are, describes different types of requirements like user requirements, system requirements, functional requirements, and non-functional requirements. It also discusses requirements engineering processes like requirements elicitation and analysis, specification, validation, and the use of system models. Key activities in requirements engineering include establishing customer needs, specifying services and constraints, and generating requirements descriptions.
The document discusses test design which includes creating test scenarios and test cases to thoroughly test all features of a system. It provides templates and guidelines for writing effective test scenarios and test cases, including elements like preconditions, test steps, and expected results. The document also discusses traceability matrices to map test cases to requirements and help determine test coverage.
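One lightweight way to realize such a traceability matrix is a mapping from requirement IDs to the test cases that exercise them, from which coverage falls out directly; all IDs below are hypothetical.

```python
# Traceability matrix sketch: map requirement IDs to test cases, then
# flag requirements with no covering test case.
matrix = {
    "REQ-001": ["TC-01", "TC-02"],   # hypothetical IDs
    "REQ-002": ["TC-03"],
    "REQ-003": [],                   # not yet covered
}
uncovered = [req for req, cases in matrix.items() if not cases]
coverage = 1 - len(uncovered) / len(matrix)
print(f"requirement coverage: {coverage:.0%}, uncovered: {uncovered}")
```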
Basic Engineering Design (Part 6): Test and Evaluate (Denise Wilson)
The document describes the process of testing and evaluating components in the engineering design cycle. It emphasizes beginning with testing critical components, like sensors, in isolated and controlled environments to characterize performance before moving to more complex system-level testing. Testing should progress from controlled laboratory settings to realistic operating environments to verify functionality. Both critical and supporting components require testing to validate they meet design specifications.
Test strategy utilising mc useful tools (Mark Chappell)
1) The document outlines a high level test strategy that involves layering the project under test and identifying components in each layer. It describes identifying test basis documentation, creating a dependency matrix, and formulating an overall test "big picture".
2) Test packs will be designed based on project layers, and key documentation will be stored in a repository to facilitate test coverage analysis. A dependency matrix and big picture diagram will guide regression test selection.
3) Tools like DocIndex, InternetMiner and VisioDecompositer are used to extract and store information from documents, web pages and diagrams to generate the test basis repository, and inform the dependency matrix and big picture diagram.
The document summarizes revisions made to ISO/IEC 17025:2017, the standard for testing and calibration laboratories. Key changes include:
- Aligning the structure and language with other ISO standards and emphasizing outcomes over prescriptive requirements.
- Adding definitions for terms like "intralaboratory comparison" and updating terms like "impartiality".
- Focusing on risk-based thinking and giving laboratories more flexibility in meeting requirements.
- Emphasizing process approaches and addressing new areas like information technology and electronic documents.
- Providing two options for management system requirements - addressing specific clauses or using ISO 9001:2015.
Gaining acceptance in next generation PBK modelling approaches for regulatory... (OECD Environment)
On 10 May 2021, the OECD presented the recently published Guidance Document on the Characterisation, Validation and Reporting of Physiologically Based Kinetic (PBK) Models for Regulatory Purposes. This guidance aims to increase the confidence in the use of PBK models parameterised with data derived from in vitro and in silico methods, and help address “unfamiliar” uncertainties associated with these methods.
The webinar introduced the assessment framework for PBK models that was developed to evaluate the attributes and uncertainties of these models, including a dedicated discussion on sensitivity analysis. It also focused on the scientific workflow for characterising and validating PBK models together with a template for documenting PBK models in a systematic manner and a checklist to support model evaluation.
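To give a concrete flavor of sensitivity analysis, the sketch below perturbs each parameter of a toy one-compartment kinetic model by 1% and reports normalized sensitivities; it is not from the OECD guidance, and the parameter values are invented.

```python
# One-at-a-time sensitivity sketch on a toy one-compartment kinetic model:
# perturb each parameter by 1% and report the normalized sensitivity of
# the predicted concentration at a fixed time point.
from math import exp

def concentration(dose: float, volume: float, k_el: float, t: float) -> float:
    # C(t) = (Dose / V) * exp(-k_el * t), the one-compartment solution
    return dose / volume * exp(-k_el * t)

params = {"dose": 100.0, "volume": 42.0, "k_el": 0.2}  # invented values
t = 4.0
base = concentration(**params, t=t)

for name in params:
    bumped = dict(params, **{name: params[name] * 1.01})  # +1% perturbation
    s = ((concentration(**bumped, t=t) - base) / base) / 0.01
    print(f"sensitivity to {name}: {s:+.2f}")
```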
Check out the webinar video recording at: https://youtu.be/PT7w6PB97Ag and access the Guidance Document on the Characterisation, Validation and Reporting of Physiologically Based Kinetic (PBK) Models for Regulatory Purposes at: https://www.oecd.org/chemicalsafety/risk-assessment/guidance-document-on-the-characterisation-validation-and-reporting-of-physiologically-based-kinetic-models-for-regulatory-purposes.pdf.
This document discusses quality assurance and quality control procedures for chemical test laboratories to meet ISO/IEC 17025:2017 requirements. It covers establishing quality assurance plans, differentiating quality assurance and quality control, applying quality control practices like blanks, replicates, and laboratory controls. Quality control charts are presented as a tool to monitor analytical accuracy and precision over time.
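Control chart limits are conventionally placed at the mean plus or minus three standard deviations of the QC results; a minimal sketch with invented QC data:

```python
# Shewhart-style QC chart sketch: flag control results outside mean +/- 3s.
from statistics import mean, stdev

qc_results = [100.2, 99.8, 100.5, 99.6, 100.1, 100.3, 99.9]  # invented values
center = mean(qc_results)
s = stdev(qc_results)
ucl, lcl = center + 3 * s, center - 3 * s

print(f"center = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
for i, x in enumerate(qc_results, 1):
    if not lcl <= x <= ucl:
        print(f"point {i} out of control: {x}")
```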
This document discusses validation of analytical methods. It defines validation as establishing evidence that a process will consistently produce results meeting specifications. Validation guidelines include ICH Q2A, Q2B, FDA guidance, and pharmacopoeias. Key validation characteristics covered are specificity, linearity, range, accuracy, precision, detection/quantitation limits, robustness, and system suitability testing. Equipment validation involves design qualification, installation qualification, operational qualification, performance qualification, and process qualification in three phases: pre-validation, process validation, and validation maintenance.
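Of the characteristics listed, linearity is the most directly computable: fit response against concentration and inspect the correlation. A sketch with invented calibration data (requires Python 3.10+):

```python
# Linearity check sketch: least-squares fit of detector response vs.
# analyte concentration, with the correlation coefficient as the criterion.
from statistics import correlation, linear_regression  # Python 3.10+

conc = [10, 20, 40, 60, 80, 100]           # invented calibration levels
response = [102, 205, 398, 612, 795, 1001]  # invented instrument responses

fit = linear_regression(conc, response)
r = correlation(conc, response)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}, r = {r:.4f}")
```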
This document provides an overview of standards for requirements specification documents, including the IEEE 830-1998 standard. It discusses the purpose and contents of a requirements specification document according to IEEE 830, including an introduction, overall description, and specific requirements sections. It also mentions the IEEE 830 objectives of helping customers and suppliers agree on requirements and reducing development effort.
This document provides an overview of benchmarking experiments for criticality safety and reactor physics applications. It discusses benchmark experiment availability through demonstration databases and outlines the typical structure of a benchmark report, including experimental data, evaluation, modeling, sample calculations, and measurements. The document encourages student and young professional involvement in benchmark participation, which can provide educational opportunities, experience with computational analysis, and collaboration on senior design or thesis projects. Benchmarking cultivates engineering judgment and an analytical skill set that is valuable for nuclear professionals.
This document provides an overview and tutorial on benchmarking experiments for criticality safety and reactor physics applications. It discusses the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP), which maintain evaluated benchmark experiments. The document demonstrates the availability of benchmark experiments through the DICE and IDAT tools. It also dissects a sample benchmark report to illustrate the typical components, including experimental data, evaluation, benchmark model, sample calculations, and benchmark measurements. Finally, it provides an overview of benchmark participation and contributions by country to the ICSBEP evaluations.
The document summarizes benchmark evaluations of the NRAD reactor core conversion from HEU to LEU fuel. Key points:
- Benchmark models of the NRAD reactor were updated with new fuel composition data, reducing computational bias.
- Criticality and reactivity measurements for the 56- and 60-fuel element LEU cores were within uncertainty of calculations using MCNP.
- Future work includes additional startup tests with more fuel elements to further validate the LEU core performance.
1) The GROTESQUE experiment involved arranging small pieces of highly enriched uranium (HEU) metal into a complex geometric configuration on a steel diaphragm to achieve criticality.
2) Uncertainties in benchmark parameters like unit dimensions and positions resulted in negligible uncertainties in the calculated k-effective of less than 0.0001.
3) Radial position of units was identified as having the largest sensitivity, contributing an uncertainty of 0.0008 to k-effective (individual contributions combine in quadrature, as sketched below).
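A total benchmark uncertainty is obtained by combining independent contributions such as these in quadrature. A minimal sketch, using the 0.0008 radial-position value quoted above and invented placeholders for the other contributors:

```python
# Combine independent 1-sigma uncertainty contributions to k-effective
# in quadrature (root-sum-square), as is standard in ICSBEP evaluations.
from math import sqrt

contributions = {
    "radial position of units": 0.0008,  # quoted above
    "unit dimensions":          0.0001,  # invented placeholder
    "composition":              0.0003,  # invented placeholder
}
total = sqrt(sum(u**2 for u in contributions.values()))
print(f"total benchmark uncertainty in k-eff: +/- {total:.4f}")
```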
The document summarizes benchmark tests performed on the NRAD reactor after its conversion from HEU to LEU fuel. Key points:
- Startup tests were conducted from March to June 2010 and included initial criticality, rod worth measurements, and power calibrations up to 250 kW.
- The 60-rod LEU core configuration has been added to benchmark databases and is available for criticality safety and nuclear data validation.
- Uncertainties in fuel parameters like uranium isotopic content contribute most to the total experimental uncertainty of ±0.0027 Δk.
- Simplifications made in the benchmark model, like removing minor materials, introduce small biases of +0.0012 ± 0.0009 Δk.
This document discusses educating the next generation of nuclear criticality safety engineers through participation in international benchmark projects. It describes how evaluating benchmarks provides hands-on experience and enhances engineering skills for students. Benchmark analysis has been used to educate over 30 students in the past 25 years. Current student projects involve evaluating criticality safety and reactor physics benchmarks to develop critical thinking and judgment.
The MIRTE program conducted criticality safety experiments from 2008-2010 involving low-enriched uranium rod lattices reflected by or separated by various structural materials. MIRTE-1 experiments included configurations reflected by aluminum, glass, and water, as well as configurations with interacting arrays separated by large absorbing screens of various materials or thin absorbing plates. MIRTE-2 will continue experiments with new materials and potential modifications to the experimental device through 2013. Proprietary experimental data will be available to designated U.S. beneficiaries through non-disclosure agreements after 2017.
The document summarizes new and revised benchmark experiments included in the March 2011 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments. It describes 16 experimental series from 53 total series performed at 31 reactor facilities around the world. The benchmarks cover various reactor types including gas cooled, liquid metal fast, light water, heavy water, and others. Newly available or revised benchmarks include experiments from the High Temperature Engineering Test Reactor, Very High Temperature Reactor, SNEAK 7A/7B, ZPPR-9, -13A, -18C, IPEN/MB-01, LR-0, ZED-2, VENUS-9/1, RBMK, ZEB
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) accomplished the following in 2010:
1) Published the 2010 edition of the ICSBEP Handbook containing 25 newly approved benchmark evaluations.
2) Upgraded the ICSBEP database and user interface called DICE.
3) Approved 25 new benchmark evaluations from the United States and other countries covering a variety of experimental configurations.
1) Three HEU-beryllium experiments conducted at Oak Ridge in the 1960s were evaluated using modern computational tools to help validate models for a proposed Fission Surface Power system.
2) The experiments showed small biases of less than 0.5% Δk/k from computational models, indicating high quality data for validation.
3) Uncertainty analysis showed the new experiments would provide additional validation of computational models, especially for the beryllium reflector performance important for the Fission Surface Power design. The experiments helped reduce overall uncertainties in modeling fission reactors.
This document summarizes the International Handbook of Evaluated Reactor Physics Benchmark Experiments from March 2010. It describes 13 new or revised benchmarks from various reactor types including sodium-cooled fast reactors, lead-cooled fast reactors, very high temperature reactors, inert matrix fuel reactors, and others. The benchmarks support validation of computational methods for Generation IV reactor design and current light water, heavy water, and fundamental physics reactors. The handbook contains data from over 40 experimental series performed at 24 reactor facilities in 15 contributing countries and is available to OECD member countries and others.
This document summarizes a benchmark analysis of initial physics tests performed at the Fast Flux Test Facility (FFTF), a 400 MW sodium-cooled fast reactor. Key measurements included criticality, neutron spectra, control rod worths, temperature coefficients, and gamma/electron spectra. The benchmark model used MCNP5 simulations with some component homogenization. Most measurements showed good agreement with calculations, though rod worths were underestimated by 2-6% and below-core neutron spectra were impacted by homogenization. Future work includes developing a fully heterogeneous FFTF model and evaluating additional experimental data.
This document summarizes a benchmark analysis of start-up physics tests performed at the High Temperature Engineering Test Reactor (HTTR). The analysis evaluated cold critical configurations, excess reactivity measurements, shutdown margins, axial reaction rates, and isothermal temperature coefficients. Some challenges included limitations in available public data and conflicting reported values. Overall, there was generally good agreement between benchmark measurements and calculations, though calculations were approximately 2% higher, likely due to uncertainties in graphite composition and cross sections. Completed benchmarks from this analysis will be published in the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
This document summarizes a presentation given on criticality benchmarks for various annular core configurations of Japan's High Temperature Engineering Test Reactor (HTTR). It describes the objectives of benchmarking the HTTR, including developing models to support validation of very high temperature reactor designs. It provides details on the HTTR design specifications, fuel loading patterns for different core configurations, and results of uncertainty and sensitivity analyses. Calculated eigenvalues for different core designs were found to be within 1% of experimental values.
This document discusses providing nuclear criticality safety analysis education through benchmark experiment evaluation. It describes challenges in educating the next generation of nuclear criticality safety professionals without hands-on experience. Participating in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) provides opportunities for students to gain experience by evaluating benchmark experiments. Benchmark evaluations involve investigating experimental design and results, developing computational models, and participating in an international review process. This helps students develop analytical skills while cultivating good engineering judgment.
This document summarizes a criticality benchmark analysis of water-reflected uranium oxyfluoride slabs. It outlines the experiment background, evaluation process, results of the uncertainty and bias analyses, sample calculations comparing results using different nuclear data libraries, and current efforts to revise the benchmark. The benchmark evaluation assesses the minimum critical thickness of an infinite slab based on experimental data from 1955-1956. It analyzes uncertainties in parameters and simplifications of the model to determine bias. The detailed model results are within uncertainties of the simplified model, validating its use. An updated benchmark will be presented to the ICSBEP working group in 2010.
This document summarizes an analysis of criticality experiments performed with arrays of nested annular tanks containing highly enriched uranyl nitrate solution. The experiments were conducted in the 1980s at the Rocky Flats Critical Mass Laboratory and involved configurations with 1-6 tanks. Absorbing materials like borated concrete and cadmium plugs were also tested. MCNP models were developed and showed calculated eigenvalues were within 1-2 sigma of benchmark values. The results will be published in the International Handbook of Evaluated Criticality Safety Benchmarks.
This document summarizes an assessment of computational modeling capabilities for designing a fission surface power system for a lunar outpost. It was found that ENDF/B-VII nuclear data reduced biases compared to older data, except for subcritical and highly enriched uranium benchmarks. Beryllium reflector worth was found to have an increasing bias trend. Existing Zero Power Physics Reactor critical experiments were identified as able to validate the design without needing a new critical experiment. Uncertainty analysis found cross sections like beryllium (n,n) contribute significantly to uncertainty. Future work is outlined to further validate the design using benchmark experiments and reduce uncertainties.
This document summarizes a study assessing the feasibility of using existing Earth-to-orbit launch vehicles coupled with a nuclear thermal rocket engine to deliver a 21 metric ton payload to the lunar surface, as an alternative to proposed architectures like ESAS. The study finds that using a fleet of 6 Delta IV Heavy or Atlas V Heavy rockets for launch and in-orbit assembly of a nuclear thermal rocket stage could deliver the payload for around $2.7 billion, compared to an estimated $1.5 billion for a single Ares V launch. However, development costs are uncertain and in-space assembly techniques provide benefits like redundancy and a more flexible exploration architecture.
Threats to mobile devices are increasingly prevalent and growing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect personal devices and information.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
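To make the pre-process, infer, post-process pipeline concrete, here is a minimal sketch. It does not show the Nx AI Manager API; it assumes a generic ONNX classifier run through onnxruntime, and the model file, input layout, and camera frame are all hypothetical.

```python
# Minimal edge-AI pipeline sketch: pre-process -> inference -> post-process.
# Illustrative only; the model path, input size, and frame are made up,
# and onnxruntime stands in for whichever inference engine the target uses.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("classifier.onnx")  # hypothetical model file
input_name = session.get_inputs()[0].name

def preprocess(image: np.ndarray) -> np.ndarray:
    # Normalize to float32 and add a batch dimension (NCHW layout assumed).
    x = image.astype(np.float32) / 255.0
    return x[np.newaxis, ...]

def postprocess(logits: np.ndarray) -> int:
    return int(np.argmax(logits))

frame = np.zeros((3, 224, 224), dtype=np.uint8)  # stand-in for a camera frame
outputs = session.run(None, {input_name: preprocess(frame)})
print("predicted class:", postprocess(outputs[0]))
```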
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share foundational concepts to build on.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
1. Development of an ICSBEP
Benchmark Evaluation
J. Blair Briggs
John D. Bess
Idaho National Laboratory (INL)
www.inl.gov
2011 ANS Annual Meeting
Hollywood, Florida
June 29, 2011
2. Topics
• Purpose of the International Criticality Safety Benchmark Evaluation
Project (ICSBEP)
• Development of an ICSBEP Benchmark
• Publication in the ICSBEP Handbook (Quality Assurance)
• ICSBEP Handbook
4. Purpose of the ICSBEP
• Compile benchmark-experiment data into a standardized format that
allows analysts to easily use the data to validate calculational
techniques and cross section data.
• Evaluate the data and quantify overall uncertainties through various
types of sensitivity analyses
• Eliminate a large portion of the tedious and redundant research and
processing of experiment data
• Streamline the necessary step of validating computer codes and
nuclear data with experimental data
• Preserve valuable experimental data that will be of use for decades
8. ICSBEP CONTENT & FORMAT
1.0 DETAILED DESCRIPTION
1.1 Overview of Experiment
• Summary of the experiment, its original purpose, the parameters that
vary in a series of configurations
• Name of the facility, when the experiments were performed, the
organization that performed the experiments, and perhaps the names
of the experimenters if available
• The conclusions of the Evaluation of Experimental Data (Section 2)
should be briefly stated
9. ICSBEP CONTENT & FORMAT (Cont.)
1.2 Description of Experimental Configuration
• Detailed description of the physical arrangement and dimensions of
the experiment
• Uncertainties assigned by the experimenter
• Method of making the specific measurements
• Some measurement types such as subcritical measurements may
require more detailed information about the source and detectors
than is typically required for critical assemblies
10. ICSBEP CONTENT & FORMAT (Cont.)
1.3 Description of Material Data
• Detailed description of all materials used in the experiment as well as
significant materials in the surroundings
• Uncertainties assigned by the experimenter
• Specify source of composition data (physical or chemical analyses or
from material handbooks when only the type of material was
specified)
• Details of the methods of analysis and uncertainties
• Dates of the experiment, of the chemical analysis, and of isotopic
analysis or purification (when isotopic buildup and decay are
important)
11. ICSBEP CONTENT & FORMAT (Cont.)
1.4 Temperature Information
• The temperature at which the experiments were performed is given
and discussed.
12. ICSBEP CONTENT & FORMAT (Cont.)
1.5 Supplemental Experimental Measurements
• Additional experimental data that are not necessarily relevant to the
derivation of the benchmark model
• Subcritical measurements include a description of the measurement
technology and a discussion on the interpretation of the
measurements as well as the measured data
14. ICSBEP CONTENT & FORMAT (Cont.)
2.0 EVALUATION OF EXPERIMENTAL DATA
• Evaluation of the experimental data and conclusions
• Missing data or weaknesses and inconsistencies in published data
• Effects of uncertainties (if uncertainties are not provided, they must
be estimated)
• Summary table
• Unacceptable data are not included in Sections 3 & 4
• Unacceptable data may still be used in validation efforts if the
uncertainty is properly taken into account
• Random versus Systematic Uncertainty (see the sketch below)
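As a rough illustration of how independent uncertainty components might be folded into an overall benchmark uncertainty, here is a minimal Python sketch. The component names and values are hypothetical, not taken from any evaluation:

```python
import math

# Hypothetical one-sigma uncertainty components for a single configuration,
# expressed as an effect on keff. All values are illustrative only.
components = {
    "fuel enrichment (systematic)": 0.0012,
    "critical solution height (random)": 0.0005,
    "reflector density (systematic)": 0.0008,
}

# Independent components are commonly combined in quadrature. Note that
# systematic components stay correlated across a series of configurations,
# so unlike random components they do not average down over repeats.
total = math.sqrt(sum(u**2 for u in components.values()))
print(f"combined benchmark uncertainty: +/- {total:.4f} in keff")
```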
16. ICSBEP CONTENT & FORMAT (Cont.)
3.0 BENCHMARK SPECIFICATIONS
• Benchmark specifications provide the data necessary to
construct calculational models – Should be concise and
complete
• Retain as much detail as necessary to preserve all
important aspects of the actual experiment
• Simplifications are documented, including a description of the
transformation from the measured values to the benchmark-model
values and the uncertainties associated with that transformation
17. ICSBEP CONTENT & FORMAT (Cont.)
3.1 Description of Model
• General description of main physical features of the benchmark
model(s)
• Simplifications and approximations made to geometric configurations
or material compositions are described and justified
• Resulting biases and additional uncertainties in keff are quantified
• Justification for omitting any constituents of the materials
18. ICSBEP CONTENT & FORMAT (Cont.)
3.2 Dimensions
• Include all dimensions and information needed to completely
describe the geometry of the benchmark model(s)
• Sketches, including dimensions and labels, of the benchmark
model(s) should be used liberally
• Reviewer should be able to derive all dimensions in Section 3 from
the information included in Sections 1 and 2.
19. ICSBEP CONTENT & FORMAT (Cont.)
3.3 Material Data
• Atom densities for all materials specified for the model(s) are
concisely listed
• Provide unique or complicated formulas for deriving atom densities
(a generic sketch follows this list)
• Reviewer should be able to derive all material specifications in
Section 3 from the information included in Sections 1 and 2.
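For the common case where an atom density follows directly from a measured density and an atomic or molecular weight, here is a minimal Python sketch of the standard N = ρ·N_A/M conversion. The water data used are illustrative, not drawn from any particular evaluation:

```python
# Avogadro's number scaled so that results come out in atoms/(barn-cm),
# the unit conventionally used in benchmark material specifications.
AVOGADRO = 0.6022  # x 10^24 atoms/mol

def atom_density(partial_density_g_cm3, molecular_weight_g_mol):
    """N = rho * N_A / M, in atoms/(barn-cm)."""
    return partial_density_g_cm3 * AVOGADRO / molecular_weight_g_mol

# Illustrative example: light water at 0.9982 g/cm^3 (M = 18.0153 g/mol).
n_h2o = atom_density(0.9982, 18.0153)
print(f"N(H) = {2 * n_h2o:.5e} atoms/(barn-cm)")  # two hydrogen per molecule
print(f"N(O) = {1 * n_h2o:.5e} atoms/(barn-cm)")
```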
20. ICSBEP CONTENT & FORMAT (Cont.)
3.4 Temperature Data
• Temperature data for the model(s)
21. ICSBEP CONTENT & FORMAT (Cont.)
3.5 Experimental and Benchmark-Model keff and/or Subcritical
Parameters
• Experimental Values
• Benchmark Values (adjusted to account for bias)
• Uncertainty in the Benchmark Value (a numerical sketch follows)
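A minimal numerical sketch of how these three quantities relate, using hypothetical values; combining the experimental and simplification uncertainties in quadrature assumes the components are independent:

```python
import math

# Illustrative numbers only; they do not come from any actual evaluation.
k_exp, u_exp = 1.0000, 0.0010    # experimental keff and 1-sigma uncertainty
bias, u_bias = -0.0023, 0.0008   # net keff effect of model simplifications

# Benchmark-model keff: the experimental value adjusted for the bias, with
# the simplification uncertainty folded in (quadrature, assuming independence).
k_bench = k_exp + bias
u_bench = math.sqrt(u_exp**2 + u_bias**2)
print(f"benchmark keff = {k_bench:.4f} +/- {u_bench:.4f}")
```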
23. ICSBEP CONTENT & FORMAT (Cont.)
4.0 RESULTS OF SAMPLE CALCULATIONS
• Calculated results obtained with the benchmark-model specification
data given in Section 3
• Sample calculations only
• Discrepancies between Benchmark values (Section 3.5) and
calculated values (Section 4.0) are noted (see the comparison sketch
below)
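As a simple illustration of such a comparison, a short Python sketch with hypothetical keff values:

```python
# Hypothetical comparison of a sample calculation (Section 4.0) against the
# benchmark-model keff (Section 3.5). Values are illustrative only.
k_benchmark = 0.9977   # benchmark value, E
k_calculated = 0.9991  # sample calculated value, C

diff = k_calculated - k_benchmark
print(f"C - E = {diff:+.4f}  (C/E = {k_calculated / k_benchmark:.4f})")
# A discrepancy like this is simply noted in Section 4.0; interpreting it is
# left to the validation work that uses the benchmark.
```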
24. ICSBEP CONTENT & FORMAT (Cont.)
5.0 REFERENCES
• All formally published documents referenced in the evaluation that
contain relevant information about the experiments
• References to handbooks, logbooks, code manuals, textbooks,
personal communications with experts, etc. are given in footnotes
25. ICSBEP CONTENT & FORMAT (Cont.)
APPENDIX A Typical Input Listings
Brief comments about options chosen for calculations are included in an introductory paragraph. Any
small differences from the benchmark-model specifications in Section 3 are noted. This paragraph
states the version of the code (e.g., KENO-IV, KENO-V.a, MONK6B, etc.) that was used for the
calculations and additional information including:
• SN Codes
– Quadrature order (i.e., N)
– Scattering order for cross sections (P1, P2, P3, etc.; corrected or not corrected for
higher-order effects)
– Convergence criteria for eigenvalue and flux
– Representative mesh size (cm)
• Monte Carlo Codes (illustrated in the sketch below)
– Number of active generations
– Number of skipped generations
– Number of histories per generation or total number of histories
Unique and/or important features regarding the input may also be discussed just prior to the input
listings. Listing titles refer to the case number and the number of the table in Section 4.0 that gives
the calculated result.
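To make the Monte Carlo parameters above concrete, here is a small Python sketch with hypothetical run parameters; the 1/√N comment reflects the usual statistical behavior of Monte Carlo eigenvalue estimates:

```python
# Hypothetical Monte Carlo run parameters of the kind listed above.
skipped_generations = 50        # discarded while the fission source converges
active_generations = 500        # generations contributing to the keff estimate
histories_per_generation = 10_000

total_active = active_generations * histories_per_generation
print(f"total generations run:  {skipped_generations + active_generations}")
print(f"total active histories: {total_active:,}")
# The statistical uncertainty of the keff estimate shrinks roughly as
# 1/sqrt(total active histories), which is why these counts are reported.
```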
26. Why Such a Rigorous Format?
• Handbook or Reference Book
– For the benefit of the user
– Orderly layout to assist the user
– Information is always in the same location
– Information has been rigorously verified
• Separation of Geometry, Materials, Temperature
– Neutronics computer code input
– Allows systematic & detailed review / verification
• Not a Compilation of Technical Reports
28. Quality Assurance
Each experiment evaluation included in the Handbook
undergoes a thorough internal review by the evaluator's
organization. Internal reviewers are expected to verify:
1. The accuracy of the descriptive information given in the
evaluation by comparison with original documentation
(published and unpublished).
2. That the benchmark specification can be derived from
the descriptive information given in the evaluation.
3. The completeness of the benchmark specification.
4. The results and conclusions.
5. Adherence to format.
29. Quality Assurance (continued)
In addition, each evaluation undergoes an independent
peer review by another Technical Review Group member
at a different facility. Starting with the evaluator's submittal
in the appropriate format, independent peer reviewers are
expected to verify:
1. That the benchmark specification can be derived from
the descriptive information given in the evaluation.
2. The completeness of the benchmark specification.
3. The results and conclusions.
4. Adherence to format.
30. Quality Assurance (continued)
A third review by the Technical Review Group verifies that
the benchmark specifications and the conclusions were
adequately supported.
32. International Handbook of Evaluated Criticality
Safety Benchmark Experiments
September 2010 Edition
• 20 Contributing Countries
• Spans over 55,000 Pages
• Evaluation of 516 Experimental Series
• 4,405 Critical or Subcritical Configurations
• 24 Criticality-Alarm/Shielding Benchmark
Configurations – numerous dose points
each
• 155 fission rate and transmission
measurements and reaction rate ratios for
45 different materials
• http://icsbep.inl.gov
33. International Handbook of Evaluated Criticality
Safety Benchmark Experiments
September 2011 Edition
• 20 Contributing Countries
• Spans approximately 58,000 Pages
• Evaluation of 533 Experimental Series
• 4,551 Critical or Subcritical Configurations
• 24 Criticality-Alarm/Shielding Benchmark
Configurations – numerous dose points
each
• 155 fission rate and transmission
measurements and reaction rate ratios for
45 different materials
• Available in September 2011
• Contact email: icsbep@inl.gov