This document summarizes an analysis of failure data from digital reactor protection systems (RPS) at Combustion Engineering (CE) nuclear power plants from 1984 to 2005. The analysis estimated failure rates and common cause failure (CCF) rates for subsystems of the CE core protection calculator system (CPCS). The dominant CCF events involved incorrect data sets being uploaded to multiple channels, such as inaccurate cross-calibration of sensors. Risk assessments found that CCF unavailability and core damage probability were highest for latent failures with long durations, such as data upload errors. While software CCF was important, other human and hardware errors contributed more to total RPS risk based on actual operating experience.
Updated Digital I&C Reliability and CCF Data - John Bickel
This document discusses an updated study on common cause failures (CCFs) of core protection calculator systems (CPCS) in nuclear power plants. The study analyzed additional failure data from 1982-2006, increasing the sample size. The additional data resulted in 3 more CCF events being identified. The top CCF contributors remained inaccurate cross-calibration of reactor power signals and personnel errors in inputting data. Specific CCF and component failure rates were estimated based on the expanded data set.
This document discusses the process of optimizing a 3G radio network. It covers the various phases of network optimization including single site verification, RF optimization, service testing and parameter optimization, and regular reference route testing. It then provides details on RF optimization including preparation, targets, solutions, and the analysis of drive test data to identify issues and determine required changes. Examples are also given of antenna adjustment, drop call analysis, and neighbor list verification.
The document discusses key performance indicators (KPIs) for cellular networks and provides relationships between network components and their capacities. It also analyzes reasons for call blocking, dropping, and failures during call setup and solutions to address them, including parameter tuning, hardware checks, interference mitigation, and useful reports.
This document discusses possible reasons for SDCCH drops due to "Other Reasons" and provides steps to analyze and address the issue. It lists hardware faults, parameter definition issues, DIP status problems, frequency interference, C7 link stability, and improper MSC parameter definition as potential causes. The document advises checking logs, parameters, DIP quality, interference levels, C7 link commands, and MSC settings related to call setup, paging, location updating, authentication, ciphering, and SMS when analyzing SDCCH drops.
The document discusses optimization of the top 10 worst performing cells in a network. It covers analyzing dropped call statistics to identify problem cells, investigating reasons for dropped calls, and making configuration changes or troubleshooting issues to improve network performance. The objectives are to monitor network performance, improve it, and meet key performance indicators by focusing on optimizing the cells with the highest dropped call rates.
RF optimisation aims to identify and resolve potential faults in the network before they affect performance through activities like pre-launch optimisation, continuous optimisation, and swap management. Key aspects of optimisation include drive testing, parameter tuning, antenna adjustments, and monitoring KPIs to maintain network quality. GTL provides end-to-end optimisation services both on-site and through a virtual optimisation centre with remote analytics, tools, and concentrated RF expertise.
This document outlines key metrics and tests for a single cell functional test (SCFT). The SCFT checks:
1) Basic cell functionality like scrambling code identification, cell broadcast name, landline-mobile calls, and mobile-mobile calls between the same and other networks.
2) Radio channel quality metrics like BLER, EcNo, RSCP, and RSSI and whether values are within expected ranges.
3) Handover configurations and whether intra-site, inter-site, 3G-2G, and 2G-3G handovers occur as expected.
4) Data throughput tests including checking available data rates and latency via ping tests.
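The SCFT radio-quality checks above are essentially range tests against agreed acceptance windows. A minimal sketch of that pass/fail logic follows; the metric names and threshold values here are illustrative assumptions taken from typical 3G test plans, not limits stated in this document:

```python
# Hypothetical acceptance windows for SCFT radio metrics. Real projects take
# these from the operator's test plan, not from this sketch.
SCFT_LIMITS = {
    "RSCP_dBm": (-95, -25),   # CPICH RSCP window
    "EcNo_dB":  (-12, 0),     # CPICH Ec/No window
    "BLER_pct": (0, 2),       # downlink BLER window
}

def scft_check(measurements: dict) -> dict:
    """Return a pass/fail flag per metric against the configured windows.

    A missing measurement counts as a failure for that metric.
    """
    results = {}
    for metric, (lo, hi) in SCFT_LIMITS.items():
        value = measurements.get(metric)
        results[metric] = value is not None and lo <= value <= hi
    return results
```

For example, `scft_check({"RSCP_dBm": -70, "EcNo_dB": -8, "BLER_pct": 0.5})` would report all three metrics as passing under these assumed windows.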
How to Perform Troubleshooting Based on Counters - Abdul Muin
This document provides guidance on troubleshooting radio resource control (RRC) connection failures based on measurement counters. It describes the different phases of the RRC connection process and lists the possible failure causes within each phase. For each failure cause, it gives steps to analyze alarms, measurements, and parameters to identify the potential root causes such as coverage issues, interference, configuration errors, or hardware problems. The goal is to systematically work through each failure cause using available diagnostics to pinpoint where problems may exist.
Key performance indicators (KPIs) in telecommunications measure important aspects of network performance such as traffic levels, interference, and quality of service. Erlangs are used to quantify traffic volume. Interference can occur between networks using the same frequencies and affects accessibility, call setup success rates, and retainability. Accessibility KPIs include paging success rate, SDCCH access success rate, and SDCCH drop rate. Retainability is measured by call drop rate and handover success rate, which can be affected by signal strength, interference, and congestion.
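Since this summary leans on Erlangs to quantify traffic and on blocking as a congestion symptom, a minimal sketch of the standard Erlang B recursion may clarify how offered traffic maps to blocking probability (the example values are illustrative):

```python
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Blocking probability via the standard Erlang B recursion:

        B(E, 0) = 1
        B(E, k) = E * B(E, k-1) / (k + E * B(E, k-1))

    which avoids the factorials in the closed-form expression.
    """
    b = 1.0
    for k in range(1, channels + 1):
        b = traffic_erlangs * b / (k + traffic_erlangs * b)
    return b

# Example: 10 Erlangs of offered traffic on a cell with 15 traffic channels.
blocking = erlang_b(10.0, 15)
```

Adding channels for a fixed load always lowers blocking, which is the dimensioning trade-off the Erlang tables capture.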
3G IBS Walk Test Report dhk_v1415_tems - Ishaque Uddin
This 3G single site verification report summarizes drive test results for site DHK_V1415. Drive testing was conducted on the site on 8/09/2014. Key findings include:
- 2G Rx level and RxQual were fair or below satisfactory levels on some floors due to issues with the distributed antenna system connectivity. Additional antennas may be required.
- 3G RSCP was poor on some floors also likely due to distributed antenna system issues. The CPICH power may need increasing to improve RSCP. Additional antennas also recommended for some floors.
- The report provides plots and data on 2G and 3G serving patterns, Rx levels, RxQual, RSCP and Ec/Io across different floors.
This document describes the making of a partial discharge (PD) data acquisition system based on a National Instruments USB digitizer (NI-5133). The system was developed to measure PDs in insulation systems and present the data graphically using phase-resolved analysis. Hardware choices and software development in LabVIEW are discussed. The acquisition system was tested using artificially generated pulses and compared to measurements from a Haefely PD568 system.
Training: LTE RAN KPI & Counters (RJIL) - Satish Jadav
This document provides an introduction and overview of key performance indicators (KPIs) and associated counters for monitoring the performance of Samsung LTE networks. It describes accessibility KPIs related to session setup success rates, retainability KPIs like call drop rates, integrity KPIs involving throughput measurements, and mobility KPIs covering handover success rates. Formulas for calculating each KPI are provided along with explanations of relevant counters for each performance measurement area.
This document discusses key performance indicators (KPIs) for evaluating 3G networks. It describes various KPIs for measuring accessibility, retainability, mobility, coverage, service integrity, availability, and traffic. Formulas for calculating several KPIs are provided. Troubleshooting methods and examples are given for accessibility, retainability, and mobility-related issues. Sample daily reports and the Gsmart optimization tool interface are also shown.
This document describes an FPGA-based passive K-Delta-1-Sigma (KD1S) sigma-delta modulator designed and tested by researchers. The modulator uses eight phase-shifted clocks on an FPGA to achieve an effective sampling rate of 450 MHz without active analog components. Testing showed the design achieved a peak SNR of 58 dB and ENOB of 9.3 bits at this high sampling rate, demonstrating the benefits of this passive approach for wide bandwidth applications.
The document describes the Radio Link Control (RLC) sublayer in 3GPP LTE, including its functions, modes of operation (transparent, unacknowledged, and acknowledged), state variables, procedures for transmitting and receiving data, and retransmission processes. The RLC sublayer provides transfer of upper layer PDUs, error correction, segmentation/reassembly, reordering, duplicate detection, and supports both acknowledged and unacknowledged data transfer.
B. Pradeep Kumar has over 15 years of experience in instrumentation and control engineering. He has worked on major projects for ADMA, TOTAL, and Sohar Aromatic. On the ADMA - Zakum Central Super Complex De-Mothballing Project, he led the upgrade of the fire and gas system including migrating to an Ethernet-based solution. He has also worked on FPSO and power plant projects.
This document describes a study that generated and verified VHDL code for a modified repetitive controller to control a dynamic voltage restorer (DVR) power quality conditioner. The controller is designed to limit fault current during downstream faults and compensate for voltage sags, harmonics, and imbalances. The MATLAB HDL Coder was used to generate VHDL code from a MATLAB model of the repetitive controller. The generated code was then verified in the Modelsim simulator, demonstrating the controller design's hardware compatibility for implementation in an FPGA. The repetitive controller incorporates a feedforward term for fast response and feedback term to ensure zero steady-state error. A logic circuit detects downstream faults by monitoring load current.
An Access Point Based FEC Mechanism for Video Transmission over Wireless LANs - IEEEFINALYEARPROJECTS
The document discusses key performance indicators (KPIs) for LTE networks. It describes KPIs related to accessibility, retainability, integrity, and mobility. Accessibility KPIs measure how successfully users can access the network, including session setup success rate, RRC connection success rate, and ERAB establishment success rate. Retainability KPIs evaluate how well services are retained once connected, such as call drop rate and ERAB drop rate. Integrity KPIs relate to throughput, including downlink and uplink throughput. Mobility KPIs cover handover success rates for different types of handovers between nodes. Formulas are provided for calculating many of the KPIs.
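The KPI formulas this summary refers to are mostly success/attempt ratios computed from raw counters. A minimal sketch of that computation follows; the counter names and values are hypothetical placeholders, not actual vendor counter identifiers:

```python
def kpi_ratio(success: int, attempts: int) -> float:
    """Generic success-rate KPI in percent; 0.0 when there were no attempts."""
    return 100.0 * success / attempts if attempts else 0.0

# Hypothetical counter snapshot for one cell over a measurement period.
counters = {
    "rrc_setup_att": 2000, "rrc_setup_succ": 1980,
    "erab_setup_att": 1950, "erab_setup_succ": 1930,
    "erab_rel_abnormal": 25, "erab_rel_total": 1900,
}

# Accessibility: RRC connection and E-RAB establishment success rates.
rrc_ssr = kpi_ratio(counters["rrc_setup_succ"], counters["rrc_setup_att"])
erab_ssr = kpi_ratio(counters["erab_setup_succ"], counters["erab_setup_att"])

# Retainability: abnormal releases as a share of all releases.
erab_drop_rate = kpi_ratio(counters["erab_rel_abnormal"],
                           counters["erab_rel_total"])
```

The guard for zero attempts matters in practice: low-traffic cells would otherwise divide by zero and skew network-level aggregates.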
Machine Learning Based Session Drop Prediction in LTE Networks and its SON As... - Ericsson
Abnormal bearer session release (i.e. bearer session drop) in cellular telecommunication networks may seriously impact the quality of experience of mobile users. The latest mobile technologies enable high-granularity real-time reporting of all conditions of individual sessions, which opens the door to data analytics methods that process and monetize this data for network optimization. One such analytics example is Machine Learning (ML) to predict session drops well before the end of the session.
The GENEX Assistant is an excellent software tool for post-processing 2G and 3G drive test data. With the GENEX Assistant, you can:
- Get a panoramic view of network performance
- Locate network troubles
- Improve network quality
- Verify network planning and optimization
For post-processing of a logfile in GENEX Assistant, we need to open a new project.
The document proposes adding memory channel testing to the OCP certification process to improve reliability and address issues like row hammer failures. It outlines conducting an electrical audit and a protocol timing audit using a logic analyzer to qualitatively check that signals and timing specifications are met. The goal is not comprehensive validation but a low-cost audit. Tests would check eyes, bursts, and JEDEC spec compliance over an hour. Results would be documented for each server, slot, and channel. The testing also aims to investigate the row hammer failure mechanism and potential mitigation strategies.
Abstract— During the past year Xilinx, for the first time ever, set out to quantify the soft error rate of a multi-core microprocessor. This work extends Xilinx's 10+ years of heritage in FPGA radiation testing. Built on the 28 nanometer technology node, Xilinx's Zynq™ family of devices integrates a processor subsystem with programmable logic. The processor subsystem includes two 32-bit ARM Cortex™-A9 CPUs, two NEON™ floating point units, two SIMD processing units, L1 and L2 caches, on-chip SRAM memory, and various peripherals. The programmable logic is directly connected with the processing subsystem via ARM's AMBA™ 4 AXI interface. This programmable logic is based on the 7 Series FPGA fabric, consisting of 6-input LUTs and DFFs along with Block RAM, DSP slices, multi-gigabit transceivers, and other blocks. Tests were performed using a proton beam to analyze the soft error susceptibility of the new device. Proton beam testing was deemed acceptable since previous neutron beam and proton beam testing had shown virtually identical cross-sections for 7 Series programmable logic. The results are promising and yield a solid baseline for a typical embedded application targeting any of the Zynq SoC devices. As a foray into processor testing, this Zynq work has laid a solid foundation for future Xilinx SoC test campaigns.
Austin Lesea, Wojciech Koszek, Glenn Steiner, Gary Swift, and Dagan White Xilinx, Inc.
Paper: SELSE 2014 @ Stanford University (PDF, 456KB), 2014
Slides: (PDF, 933KB), 2014
The document describes several hardware-based data prefetching schemes that aim to reduce memory stalls by prefetching data into caches before it is needed by a program. It introduces fixed offset prefetching, stride-based prefetching, and tag correlated prefetching. It then discusses the simulation setup used to evaluate these schemes and presents results on their performance in terms of CPI, cache hit rate, and average memory access time. The tag correlated prefetching scheme achieved the best overall performance but at the cost of higher hardware complexity compared to the other schemes.
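Of the schemes this summary names, stride-based prefetching is the easiest to make concrete: track the last address and stride per load instruction, and prefetch ahead once the stride repeats. A toy sketch of that mechanism follows (the table structure and confirmation rule are a common textbook formulation, an assumption rather than the exact design evaluated in the document):

```python
class StridePrefetcher:
    """Toy stride-based prefetcher.

    Keeps a reference prediction table keyed by the load's PC, holding
    (last_addr, last_stride). A prefetch is issued only once the same
    non-zero stride is observed twice in a row for that PC.
    """

    def __init__(self):
        self.table = {}  # pc -> (last_addr, last_stride)

    def access(self, pc: int, addr: int):
        """Record a demand access; return a prefetch address or None."""
        if pc in self.table:
            last_addr, last_stride = self.table[pc]
            stride = addr - last_addr
            self.table[pc] = (addr, stride)
            if stride == last_stride and stride != 0:
                return addr + stride  # confirmed stride -> prefetch next line
        else:
            self.table[pc] = (addr, 0)  # first sighting: just record
        return None
```

Walking an array with an 8-byte stride from one load PC, the first two accesses train the entry and the third access onward triggers prefetches, which illustrates the warm-up cost that stride schemes pay relative to the (more complex) tag-correlated approach.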
[EWiLi2016] Enabling power-awareness for the Xen Hypervisor - Matteo Ferroni
Virtualization allows simultaneous execution of multi-tenant workloads on the same platform, whether a server or an embedded system. Unfortunately, it is non-trivial to attribute hardware events to multiple virtual tenants, as some system metrics relate to the whole system (e.g., RAPL energy counters). Virtualized environments thus have a rather incomplete picture of how tenants use the hardware, limiting their optimization capabilities. We therefore propose XeMPower, a lightweight monitoring solution for Xen that precisely attributes hardware events to guest workloads. It also enables attribution of CPU power consumption to individual tenants. We show that XeMPower introduces negligible overhead in power consumption, aiming to be a reference design for power-aware virtualized environments.
Full paper: http://ceur-ws.org/Vol-1697/EWiLi16_10.pdf
This document discusses the Packet Data Convergence Protocol (PDCP) sublayer in 3GPP LTE networks. It describes the key functions of PDCP including header compression, ciphering, integrity protection, and transmission of user and control plane data. It also explains PDCP's use of ROHC for header compression and the various PDCP protocol data unit formats used for control and user plane messages.
This document discusses Self-Organizing Networks (SON) and its features in LTE networks. It describes the key drivers for SON in LTE including reducing manual intervention, improving performance and user experience. The main SON features covered are self-configuration, self-optimization, and self-healing. Specific use cases explained include PCI planning, ANR, MRO and energy savings. The LTE SON framework and architecture specified by 3GPP is also summarized.
LTE KPI Optimization - A to Z Abiola.pptx - ssuser574918
1. The document discusses LTE post launch optimization, including problem causes, solutions, and case studies.
2. It describes different types of counters used to collect PM statistics, including peg, gauge, accumulator, scan, PDF, DDM, calculated, trigACC, and trigSCAN counters.
3. Potential causes of poor accessibility for E-RAB establishment are discussed, including poor coverage, alarms, high load, hardware issues, high UL interference, PCI conflicts, RACH root sequence index planning, UE camping in wrong cells, wrong system constant settings, and VSWR or cell availability issues.
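The accessibility checklist above lends itself to a simple first-pass triage: test each candidate cause against the counters and alarms at hand. A minimal sketch follows; the symptom keys and thresholds are illustrative assumptions, not values from the document or any vendor:

```python
# Hypothetical triage rules mapping observed symptoms to likely causes of
# poor E-RAB establishment accessibility. Thresholds are illustrative only.
CAUSE_CHECKS = [
    ("poor coverage",          lambda s: s.get("avg_rsrp_dbm", 0) < -110),
    ("high UL interference",   lambda s: s.get("ul_rssi_dbm", -120) > -95),
    ("high load",              lambda s: s.get("prb_util_pct", 0) > 90),
    ("VSWR/antenna issue",     lambda s: s.get("vswr", 1.0) > 1.5),
    ("cell availability",      lambda s: s.get("cell_avail_pct", 100) < 99),
]

def triage(symptoms: dict) -> list:
    """Return the checklist causes whose symptom condition fires."""
    return [name for name, check in CAUSE_CHECKS if check(symptoms)]
```

For example, a cell reporting an average RSRP of -115 dBm and otherwise nominal values would be flagged for poor coverage only, narrowing the manual investigation.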
The Run Control and Monitoring System (RCMS) controls and monitors the CMS experiment during data taking. It controls the Data Acquisition System and monitors the Detector Control System. At test runs like the Magnet Test and Cosmic Challenge, RCMS used function managers to control subsystem partitions like the tracker, calorimeters, and muon systems. RCMS was stable during these test runs and successfully recorded over 160 million cosmic ray events over one month of running. RCMS is also being used as part of the GRIDCC project to provide remote access and control of complex distributed instrumentation over the grid.
FVCAG: A framework for formal verification driven power modelling and verific... - Arun Joseph
FVCAG is a formal verification driven framework for power modeling and verification of IPs. It uses a single formal verification run to determine the preferred input pin conditions for accurate power modeling of IPs as well as identify any instances in a design where those conditions are violated. The framework was experimentally evaluated on an industry microprocessor design and found to determine power modeling criteria for standard cells and macros more quickly and with fewer errors compared to manual approaches. It identified the correct input conditions for power modeling in under a minute across thousands of IP instances.
FVCAG: A framework for formal verification driven power modelling and verific...Arun Joseph
FVCAG is a formal verification driven framework for power modeling and verification of IPs. It uses a single formal verification run to determine the preferred input pin conditions for accurate power modeling of IPs as well as identify any instances in a design where those conditions are violated. The framework was experimentally evaluated on an industry microprocessor design and found to determine power modeling criteria for standard cells and macros more quickly and with fewer errors compared to manual approaches. It identified the correct input conditions for power modeling in under a minute across thousands of IP instances.
System on Chip Based RTC in Power ElectronicsjournalBEEI
Current control systems and emulation systems (Hardware-in-the-Loop, HIL or Processor-in-the-Loop, PIL) for high-end power-electronic applications often consist of numerous components and interlinking busses: a micro controller for communication and high level control, a DSP for real-time control, an FPGA section for fast parallel actions and data acquisition, multiport RAM structures or bus systems as interconnecting structure. System-on-Chip (SoC) combines many of these functions on a single die. This gives the advantage of space reduction combined with cost reduction and very fast internal communication. Such systems become very relevant for research and also for industrial applications. The SoC used here as an example combines a Dual-Core ARM 9 hard processor system (HPS) and an FPGA, including fast interlinks between these components. SoC systems require careful software and firmware concepts to provide real-time control and emulation capability. This paper demonstrates an optimal way to use the resources of the SoC and discusses challenges caused by the internal structure of SoC. The key idea is to use asymmetric multiprocessing: One core uses a bare-metal operating system for hard real time. The other core runs a “real-time” Linux for service functions and communication. The FPGA is used for flexible process-oriented interfaces (A/D, D/A, switching signals), quasi-hard-wired protection and the precise timing of the real-time control cycle. This way of implementation is generally known and sometimes even suggested–but to the knowledge of the author’s seldomly implemented and documented in the context of demanding real-time control or emulation. The paper details the way of implementation, including process interfaces, and discusses the advantages and disadvantages of the chosen concept. Measurement results demonstrate the properties of the solution.
Arm7 microcontroller based fuzzy logic controller for liquid level control sy...IAEME Publication
This document summarizes an academic journal article that presents a liquid level control system using an ARM7 microcontroller-based fuzzy logic controller. The system uses a differential pressure transducer to measure liquid level in a tank and controls a pneumatic valve using PWM to maintain the level at a desired setpoint. Fuzzy logic is used to control the valve position based on error and change in error signals. Hardware details of the tank, sensors, actuators and ARM7 microcontroller are provided. The performance of the fuzzy controller is compared to a PID controller for liquid level control.
The energy-producing mechanism in a fusion reactor is the joining together of two light atomic nuclei. When two nuclei fuse, a small amount of mass is converted into a large amount of energy. Energy (E) and mass (m) are related through Einstein’s relation, E = mc2, by the large conversion factor c2, where c is the speed of light (about 3 × 108 metres per second, or 186,000 miles per second). Mass can be converted to energy also by nuclear fission, the splitting of a heavy nucleus. This splitting process is utilized in nuclear reactors.
The energy-producing mechanism in a fusion reactor is the joining together of two light atomic nuclei. When two nuclei fuse, a small amount of mass is converted into a large amount of energy. Energy (E) and mass (m) are related through Einstein’s relation, E = mc2, by the large conversion factor c2, where c is the speed of light (about 3 × 108 metres per second, or 186,000 miles per second). Mass can be converted to energy also by nuclear fission, the splitting of a heavy nucleus. This splitting process is utilized in nuclear reactors.
The energy-producing mechanism in a fusion reactor is the joining together of two light atomic nuclei. When two nuclei fuse, a small amount of mass is converted into a large amount of energy. Energy (E) and mass (m) are related through Einstein’s relation, E = mc2, by the large conversion factor c2, where c is the speed of light (about 3 × 108 metres per second, or 186,000 miles per second). Mass can be converted to energy also by nuclear fission, the splitting of a heavy nucleus. This splitting process is utilized in nuclear reactors.
The energy-producing mechanism in a fusion reactor is the joining together of two light atomic nuclei. When two nuclei fuse, a small amount of mass is converted into a large amount of energy. Energy (E) and mass (m) are related through Einstein’s relation, E = mc2, by the large conversion factor c2, where c is the speed of light (about 3 × 108 metres per second, or 186,000 miles per second). Mass can be converted to energy also by nuclear fission, the splitting of a heavy nucleus. This splitting processThe energy-producing mechanism in a fusion reactor is the joining together of two light atomic nuclei. When two nuclei fuse, a small amount of mass is converted into a large amount of energy. Energy (E) and mass (m) are related through Einstein’s relation, E = mc2, by the large conversion factor c2, where c is the speed of light (about 3 × 108 metres per second, or 186,000 miles per second). Mass can be converted to energy also by nuclear fission, the splitting of a heavy nucleus. This splitting process is utilized in nuclear reactors.
is utilized in nuclear reactors.
The energy-producing mechanism in a fusion reactor is the joining together of two light atomic nuclei. When two nuclei fuse, a small amount of mass is converted into a large amount of energy. Energy (E) and mass (m) are related through Einstein’s relation, E = mc2, by the large conversion.
IRJET- Patient Health Monitoring System using Can ProtocolIRJET Journal
This document describes a patient health monitoring system that uses CAN protocol to measure the heart rate and body temperature of one or more patients in real-time. The system includes sensors to detect heart rate and temperature, microcontrollers, and CAN transceivers to transmit the sensor data via CAN bus to a display. This allows doctors to monitor multiple patients' vital signs from a single display. The system aims to reduce monitoring time and increase flexibility compared to only being able to measure one patient at a time.
1) Software parallelization is required to handle the increasing scale and complexity of high-energy physics (HEP) experiments, which produce vast amounts of data from particle collisions.
2) The authors developed a programming model called Communication Capability (CoCa) that allows parallelization at different levels of granularity and reduces software complexity.
3) CoCa is based on the database transaction paradigm and allows the results of components executing in parallel to be combined while ensuring consistency, as required for HEP event reconstruction.
OPAL-RT RT13 Conference: Rapid control prototyping solutions for power electr...OPAL-RT TECHNOLOGIES
This document describes rapid control prototyping (RCP) solutions from OPAL-RT for power electronics, electric drives, and power systems. RCP allows users to build real-time experimental setups to test and validate control designs without extensive coding. The OPAL-RT solution features high-speed I/O, flexible connectivity options, and real-time simulation tools to efficiently develop and test control algorithms. Example applications discussed include electric motor drives, modular multilevel converters, and multi-terminal HVDC systems.
1) The document describes the design of a Dual Redundancy CAN-bus Controller (DRCC) based on an FPGA chip to improve reliability and real-time performance over software-based redundancy approaches.
2) The DRCC design includes two CAN controller blocks, a redundancy management block, and RAM blocks. The redundancy management block manages message transmission across the two channels and switches channels if one channel fails.
3) Simulation tests verified that the DRCC design could reliably switch channels within 25ms when one channel failed, ensuring messages are transmitted successfully with high reliability and real-time performance.
The document provides information about the structure, operation, and control of power systems. It discusses:
1) The typical structure of power systems including generation, transmission, and distribution systems organized into interconnected regional grids and pools.
2) SCADA and EMS systems which monitor power system parameters, send real-time data to control centers, and support functions like generation control, scheduling, forecasting, and contingency analysis to guide optimal system operation.
3) Key aspects of power system operation and control including load frequency control, automatic voltage control, state estimation, and flexible AC transmission systems which maintain system stability and security through monitoring and automated response.
The document describes the initialization and setup procedures between a Node B, RNC, and core network nodes in a UMTS network. It includes procedures for Node B initialization like the audit procedure, cell setup procedure, and common transport channel setup procedure. It also covers call flow scenarios for RRC connection establishment, location updates, circuit switched call setup, and handovers between nodes. The end-to-end protocol stacks for the circuit switched and packet switched domains are illustrated as well.
Overview of DuraMat software tool development(poster version)Anubhav Jain
This document provides an overview of software tools being developed by the DuraMat project to analyze photovoltaic systems. It summarizes six software tools that serve two main purposes: core functions for PV analysis and modeling operation/degradation, and tools for project planning and reducing levelized cost of energy (LCOE). The core function tools include PVAnalytics for data processing and a PV-Pro preprocessor. Tools for operation/degradation include PV-Pro, PVOps, PVArc, and pv-vision. Tools for project planning and LCOE include a simplified LCOE calculator and VocMax string length calculator. All tools are open source and designed for large PV data sets.
1. The document describes a miRNA PCR array experiment to analyze miRNA expression during osteogenesis and neurogenesis differentiation. Samples were collected from human mesenchymal stem cells (hMSCs) at different time points during differentiation and analyzed in triplicate using PCR arrays.
2. The data analysis plan involves first calculating the ΔCt for each gene of interest by normalizing to housekeeping genes. The ΔCts are then averaged for each gene within each sample group. Fold changes between groups will then be determined using the ΔΔCt method to identify differentially expressed miRNAs during differentiation.
RT15 Berkeley | Introduction to FPGA Power Electronic & Electric Machine real...OPAL-RT TECHNOLOGIES
FPGA simulation provides high-fidelity models for hardware-in-the-loop testing of electric machines and power electronics. It allows control algorithms to be tested with highly resolved non-ideal behaviors faster and at lower cost compared to physical testing. The document discusses how eFPGAsim utilizes FPGA technologies to simulate electric drive systems with models exported from finite element analysis, improving collaboration between design and control engineers.
Similar to Jh Bickel Risk Implications Of Digital Rps Operating Experience (20)
Risk Implications of Digital RPS Operating Experience
1. Risk Implications of Digital RPS Operating Experience
For presentation at the IAEA Technical Meeting on Common-Cause Failures in Digital Instrumentation and Control Systems of Nuclear Power Plants
June 19-21, 2007, Bethesda, Maryland, USA
Dr. John H. Bickel
Evergreen Safety & Reliability Technologies, LLC
2. Motivations for this work:
No prior risk or importance analysis of existing digital RPS failure experience exists
Prior NRC research reports concluded LER data too sparse to use
– Only found: 18 microprocessor failures, 4 software failures
– Suggested need to consider data from aerospace, medical, transport systems
Lack of data implied: inability to risk-inform digital I&C applications and issues
My belief:
Much more data actually exists on CE CPCS
Risks from CPCS experience should be assessed
JHBickel - ESRT, LLC
3. CE Digital Core Protection Calculator Basics:
CE High LPD, Low DNBR RPS design switched from analog Thermal Margin/Low Pressure Trip to digital Core Protection Calculators in the mid-1970s
Used 6 specially qualified minicomputers running stored computer software and addressable constants
CPCS performs static/dynamic projections of local power density and DNBR based upon:
– Ex-core neutron flux
– Pressurizer pressure
– Reactor Tcold, Thot
– RCP pump speed
– Control rod positions
CPCS generates: alarms, pre-trip, and trip safety actions
Original system was licensed on ANO-2 in 1978
Subsequently utilized at: SONGS-2/3, Waterford-3, Palo Verde-1/2/3 ... and Korean Standard NPPs
4. CE Digital Core Protection Calculator Basics:
CPCS credited for reactor trip for the following events:
– Uncontrolled Control Rod withdrawal from critical (>10^-4 power)
– Uncontrolled Boron Dilution from critical (>10^-4 power)
– Uncontrolled Control Rod withdrawal from power operation
– Dropped, or mis-positioned Control Rods
– Ejected Control Rods
– Single RCP loss of flow
– Single RCP shaft seizure
– 4-RCP loss of flow
– Electrical grid under-frequency
– Excess secondary steam flow (including turbine bypass valve malfunction)
– Excess feedwater flow
– Loss of feedwater heater
– Steam line break
– Single MSIV closure
– Rapid increase in local power
5. CE Digital CPCS Software Basics:
Software Design: “One Good Version” not “N-Version”
6. CE Digital CPCS Interchannel Communications Basics:
4 CPC computers evaluate LPD and DNBR, using neutron flux, temperature, RCS flow, and control rod position inputs in each quadrant
2 CEA computers (CEACs) monitor all quadrants for CEA deviations within groups and generate Penalty Factors transmitted to all 4 CPCs
CEACs communicate to CPCs via one-way "simplex" communication links
7. CE Reactor Protection System PRA Basics:
PRA assessments of the overall CE RPS have existed for some time (2001)
Component unavailabilities based on "time averaged" values
NUREG/CR-5500 Vol. 10:
– QRPS = 7.2E-6 (Digital CPCS, w/o Operator Action)
– QRPS = 1.6E-6 (Digital CPCS, w/ Operator Action)
Relay and breaker CCF dominates predicted QRPS:
– CCF of master trip relays (K-1 through K-4)
– CCF of reactor trip breakers is not as significant on the CE design due to configuration
8. How This Study was Carried Out:
Failure experience from the on-line NRC LER data base currently goes back to 1984 (NOTE: misses the first 6 years of ANO-2 experience)
Post-1984 CPC LERs on CE plants were evaluated
CPCS failure experience categorized by subsystem
Size of operating experience pool:
– 141 LERs (1984-2005)
– ~145.5 Rx years (or: 1.27x10^6 Rx hr)
– 70 actual CPC reactor trip demands
– 26 events involving latent CCF (including: 1 latent software CCF)
Subsystem failure rates calculated via Bayesian estimation using the Jeffreys non-informative prior
CCDP risk estimated via the ASP approach; the method highlights CCDP impact of "higher" than average unavailability
9. How Component Population Was Estimated:
Total CPCS subsystem operating time estimation was based upon the above component inventory per plant
Total CPCS operating time (for 4/4 channel CCF estimation) was simply total plant operating time
10. How Subsystem Operating Time Was Estimated
Each of the 4 CPC computers and 2 CEAC computers contains: 1 processor board, 1 memory board, 1 multiplexer board, 1 external watchdog timer
Each of the 4 CPC channels contains: 1 PZR pressure sensor, 3 ex-core neutron flux inputs, 4 RCP speed sensors, 2 Tcold and 2 Thot inputs
11. Subsystem failure rates were calculated via Bayesian estimation using the Jeffreys non-informative prior
The technique allows bounding failure rate estimation even for "0" observed failures
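The estimation technique above can be sketched numerically. A minimal sketch, assuming the standard Jeffreys prior for a Poisson failure count, which gives a Gamma(n + 1/2, T) posterior with mean (n + 0.5)/T; this reproduces both the λ ~ 0.5/T zero-failure bound and the communication-link rate quoted on slide 15:

```python
def jeffreys_rate(n_failures: int, exposure_hours: float) -> float:
    """Posterior-mean failure rate per hour under a Jeffreys prior.

    For Poisson-distributed failure counts observed over exposure T,
    the Jeffreys prior yields a Gamma(n + 1/2, T) posterior, whose
    mean is (n + 0.5) / T.
    """
    return (n_failures + 0.5) / exposure_hours

# Population exposure used on slide 15: 6 computers x 1.27e6 reactor-hours.
T = 6 * 1.27e6

# 2 observed CPCS -> Plant Computer communication-link events:
print(f"{jeffreys_rate(2, T):.1e}")  # ~3.3e-07 per hour
# "0" observed failures still yields a bounding estimate of 0.5/T:
print(f"{jeffreys_rate(0, T):.1e}")  # ~6.6e-08 per hour
```

Note how the zero-failure case does not produce a rate of zero; the half-count in the posterior mean is what makes the estimate a usable bound for failure modes never observed in the exposure period.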
12. Failure Rate and Unavailability Estimation Issues
Data needs for the risk estimation process:
– Ability to estimate CCDP given specific event demands and event-conditional system unavailabilities (such as RPS)
– Includes conditional unavailability due to specific combinations of input conditions to the digital system; certain software "bugs" are only triggered by unusual input sets
– Overall RPS unavailability must consider combinations of random and CCF events
Operating experience estimates failure rates: λ
Conversion to RPS unavailability uses an estimate of time to detect and restore: P = λ x (fault duration)
In many cases for latent digital CCFs, fault durations are many months
15. CPCS Single Subsystem Failure Rates
Also important to note: failure modes of recent regulatory concern which have not occurred in the population exposure time. Recall failure rates can be estimated as: λ ~ 0.5/T
Faults propagated via inter-channel communication:
– 2 events noted involving loss of the CPCS -> Plant Computer communications link that resulted in failure to perform Tech. Spec. required cross-checks: λ = 2.5 / (6 x 1.27x10^6 hours) = 3.3x10^-7/hr
– Other events in which communication link failure occurred without operational impairment likely occurred but were not reported in the LER data base
– Events involving a failure propagating to a CPC or CEAC would be in the LER data base if they occurred
– "0" events noted in which a communication link failure caused corruption of a CPC or CEAC channel: λ ~ 0.5 / (6 x 1.27x10^6 hours), or ~6.6x10^-8/hr
18. Results: CPCS System CCF Failure Rates
Breakdown of Common Mode Failures (pie chart):
– Inaccurate Cross Calibration of Excore Data Sets (Cross Channel, COLSS, etc.): 26%
– Computer Technicians insert Wrong Data Sets to all 4 CPCS Channels: 11%
– Reactor Vendor supplies Erroneous Data Sets input to all 4 CPCS Channels: 8%
– Inaccurate Cross Calibration of RCS Flow Data Sets (Cross Channel, COLSS, etc.): 8%
– Reactor Vendor Supplies Software Update Containing Latent Software Error: 4%
– Smaller contributors (4-11% each): Operators Fail to Confirm ASI in all four CPCS Channels when Reactor Power > 20%; Incorrect Acceptance Criteria Used for Excore Data Set Calibration Checks > 80%; High Log Power Bypass Removal Setpoints (1E-4) Incorrect; Operators Fail to Perform 12hr Auto-RESTART Surveillance on all CPCS Channels; Operators Fail to Perform Refueling Interval Surveillance on all CPCS Channels; Communication Data Link Failure to Plant Computer results in Missed Surveillances on both CEAC Channels; 2 of 2 CEACs Inoperable; 3 of 4 CPCS Neutron Flux Cross Channel Calibrations OOT
Latent software CCF represents only 4% of the CCF experience
Calibration, generating, and loading of incorrect data sets are the dominant sources of CCF
19. Types of Observed CCF Events:
Inaccurate cross-calibration of all Ex-core neutron flux
(7 events) or all RCS flow channels (2 events)
Computer technicians insert wrong addressable
constant data sets into all 4 CPCS channels (3 events)
Swapping addressable data sets between units
CE supplies erroneous data sets (2 events)
Software update provided to plant with incorrect logic
for processing of indicated failed sensors (1 event)
20. Risk significance of this failure experience?
None of the actual CCF events resulted in core damage
(all were latent faults missing a “triggering event”)
Need to consider CCDP implications of specific
failure modes
Intent: apply a risk screening process similar to the NRC
ASP program, which focuses on higher-than-average
values of system unavailability
Use: ASP-type failure rate data, SPAR plant-specific
risk models, and actual observed unavailability
CCDP = Σᵢ λᵢ × P_CPCS-CCF × HEP_NR
P_CPCS-CCF = λ_CPCS-CCF × (duration of latent fault)
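The screening relations above translate directly into code; this is a minimal sketch of the stated formulas, not a tool from the study:

```python
def p_ccf(lam_ccf_per_hr, fault_duration_hr):
    # P_CPCS-CCF: unavailability accumulated while a latent CCF persists
    return lam_ccf_per_hr * fault_duration_hr

def ccdp(initiator_freqs_per_yr, p_ccf_value, hep_nr):
    # CCDP = sum_i(lambda_i) x P_CPCS-CCF x HEP_NR
    return sum(initiator_freqs_per_yr) * p_ccf_value * hep_nr
```

The initiating-event frequencies λᵢ are summed over the events that could trigger the latent fault, and HEP_NR is the operator non-recovery probability.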
First: How sensitive is CCDP to RPS Logic CCF ?
21. How sensitive is CCDP to RPS Logic CCF ?
RPS failure considers:
– Mechanical CCF jamming of
control rods
– Relay/Breaker CCF failure
– RPS Logic CCF
– Operators fail to manually trip
– Operators fail to trip MG sets
Loss of Offsite Power
generates reactor trip
without RPS
Sensitivity studies
conducted using NRC
SPAR PRA models
22. How sensitive is CCDP to RPS Logic CCF ?
Variations in RPS-LOGIC-CCF are not risk significant until > 1×10⁻³
23. Some example risk assessments of
actual Digital CCF events
24. 1995 SONGS 2-3 Addressable Data Swapped
Rod shadowing constants (on data disks) were swapped
between adjacent SONGS units for 10,968 hours.
The units were at different power levels and burnup histories, so
their rod shadowing corrections differed.
Rod shadowing constants only impact power density
predictions when control rods dropped, or partially inserted.
P_CPCS-CCF = 2.75×10⁻⁶/hr × 10,968 hr = 3.0×10⁻²
Summing over all initiating events involving dropped control
rods and rod cycling tests yields:
CCDP < 0.488/yr × 3.0×10⁻² × 0.01 = 1.5×10⁻⁴
This is a bounding, conservative estimate; better knowledge of
the duty cycle of rod cycling tests would likely reduce it by a
factor of 10 or more.
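The slide's arithmetic can be checked directly, with all inputs exactly as given above:

```python
lam_ccf = 2.75e-6      # /hr, CCF rate for addressable data-set errors
duration_hr = 10_968   # latent fault duration of the swapped constants
p_ccf = lam_ccf * duration_hr        # = 3.0e-2 (rounded)

freq_per_yr = 0.488    # summed initiators with dropped/partially inserted rods
hep_nr = 0.01          # operator non-recovery probability
ccdp = freq_per_yr * p_ccf * hep_nr  # = 1.5e-4 (rounded), bounding value
```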
25. 1984 Erroneous Fx,y factors supplied by CE
and uploaded to SONGS-2
Incorrect Fx,y factors generated by CE and used for CPCS
LPD calculations from 2-7-84 to 3-20-84 (1,032 hrs).
Events such as this have occurred twice.
P_CPCS-CCF = 1.96×10⁻⁶/hr × 1,032 hr = 2.0×10⁻³
CCDP = 0.488/yr × 2.0×10⁻³ × 0.01 = 9.8×10⁻⁶
26. 2005 Software Design Error in Software
Upgrade at Palo Verde 2 for 2,736 hrs.
Original software design:
Trip CPC channel if sensor detected to be “Failed – Out of Range”
Software/hardware upgrade:
Use inputs from two sets of instruments and multiplexers (primary and
secondary)
Out-of-range sensor failure:
A detected failure on the primary results in switchover to the secondary.
An out-of-range failure on the secondary reverts to the “last stored good value”
CCF of all sensors of one type could result in continuous use of
“last good value” in all 4 CPCS channels rather than TRIP.
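The masked failure mode can be sketched as selection logic; the range limits and function names below are illustrative assumptions, not the actual CPC software:

```python
def in_range(value, lo=0.0, hi=100.0):
    # Hypothetical validity band; real CPC range checks differ per sensor type
    return lo <= value <= hi

def select_input(primary, secondary, last_good):
    """Post-upgrade input selection as described on the slide."""
    if in_range(primary):
        return primary
    if in_range(secondary):   # primary out of range: switch to secondary
        return secondary
    # Both out of range: freeze at the last stored good value.
    # Design error: a CCF taking out both sensor sets is masked, so all
    # channels keep computing with stale data instead of tripping.
    return last_good
```

Under the original design an out-of-range sensor tripped the channel; the upgraded logic substitutes values instead, which is what converts a sensor CCF into a protection failure.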
P_CPCS-CCF = 8 × P_Sensor-CCF × 2.75×10⁻⁶/hr × 2,736 hr
           = 8 × 8.4×10⁻⁴ × 2.75×10⁻⁶/hr × 2,736 hr = 5.0×10⁻⁵
Given CCF of instruments, no credit for operators, HEP=1.0
CCDP = 0.289/yr × 5.0×10⁻⁵ × 1.0 = 1.44×10⁻⁵
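Checking the slide's numbers as given (the factor of 8 is taken directly from the slide's formula):

```python
p_sensor_ccf = 8.4e-4                      # CCF probability for one sensor type
p_ccf = 8 * p_sensor_ccf * 2.75e-6 * 2736  # ~5.0e-5 over the 2,736-hr exposure
ccdp = 0.289 * p_ccf * 1.0                 # ~1.4e-5; HEP = 1.0, no operator credit
```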
27. P_RPS-CCF values from single events span many decades
Fault duration times drive P_RPS-CCF values
Latent data uploading
errors are dominant
unavailability contributors
Data uploading errors
larger than relay and
breaker CCF found in
NUREG/CR-5500 (which
used time-averaged
values)
28. Event-specific CCDP also dominated by data uploading errors
The latent software CCF event is smaller due to the
unlikelihood of its triggering condition.
29. Observations from this “Total Picture of RPS”:
Designers of Digital I&C not particularly surprised by relative
dominance of:
Calibration problems and human errors uploading wrong data sets
CCF due to errors by vendor in generating data sets
These failure modes also existed in NPPs with Analog I&C
CCF Unavailability and event CCDP estimates from
operating experience are dominated by latent events with
very long fault duration intervals
Software-related CCF, while important, isn’t the dominant CCF
source when actual operating experience is evaluated
Likely because software V&V processes are more rigorous than
operational controls after deployment at the NPP
Most-obvious software “bugs” generally caught by burn-in testing
and qualification programs
Software “bugs” triggered by highly unlikely input combinations are
not key sources of RPS unavailability or CCDP risk
30. What is Concluded from all this?
To assess Digital I&C risk it’s necessary to view the Total Picture of the
RPS – not just “software” or “microprocessors”:
Final trip relays and trip breakers - will still be there
Problems cross calibrating nuclear with thermal - will still be there
Human errors inputting set-points and coefficients - will still be there
When this is done - Total Picture of RPS risk emerges
NPPs with CPCS have been operating since 1978 in typical,
controlled, nuclear operations environment, which includes:
Vendor generation of cycle specific constants, set-points
Routine hardware, software upgrades developed and installed
Routine operation, trouble alarms, and alarm response
Impact of Technical Specifications, Testing, Calibrations
Actual nuclear field reliability experience is a better source of data
than non-nuclear sources or theoretical models
The ability to estimate or bound the risks of specific Digital I&C CCF
failure modes thus clearly exists