Parma 2016-05-17 - JGrass-NewAGE - Some About The State of Art - Riccardo Rigon
This describes the motivation behind the JGrass-NewAGE infrastructure. It also shows the main components that were implemented. Finally, it shows and comments on some case studies and use cases.
COVERAGE DRIVEN FUNCTIONAL TESTING ARCHITECTURE FOR PROTOTYPING SYSTEM USING ... - VLSICS Design
Time and effort for functional testing of digital logic is a big chunk of the overall project cycle in the VLSI industry. Progress in functional testing is measured by functional coverage, where the test plan defines what needs to be covered and the test results indicate the quality of the stimulus. Claiming closure of functional testing requires that functional coverage hit 100% of the original test plan. Depending on the complexity of the design and the availability of resources and budget, various methods are used for functional testing. Software simulation using logic simulators, available from Electronic Design Automation (EDA) companies, is the primary method for functional testing. The next level in functional testing is pre-silicon verification using Field Programmable Gate Array (FPGA) prototypes and/or emulation platforms for stress testing the Design Under Test (DUT). With all these efforts, the purpose is to gain confidence in the maturity of the DUT and to ensure first-time silicon success that meets the time-to-market needs of the industry. For any test environment, the bottleneck in achieving verification closure is controllability and observability, that is, the quality of the stimulus to unearth issues at an early stage, and coverage calculation. Software simulation, FPGA prototyping and emulation each have their own limitations, be it test time, ease of use, or the cost of software, tools and hardware platform. Compared to software simulation, FPGA prototyping and emulation methods pose greater challenges in quality stimulus generation and coverage calculation. Many researchers have identified the problems of bug detection and localization, but very few have touched on the concept of quality stimulus generation that leads to better functional coverage and thereby uncovers hidden bugs in an FPGA prototype verification setup. This paper presents a novel approach to address the above-mentioned issues by embedding a synthesizable active agent and coverage collector into the FPGA prototype. The proposed architecture has been used for functional and stress testing of a Universal Serial Bus (USB) Link Training and Status State Machine (LTSSM) logic module as the DUT in an FPGA prototype. The proposed solution is fully synthesizable and hence can be used both in software simulation and in a prototype system. The biggest advantage is the plug-and-play nature of this active-agent component, which allows its reuse in any USB 3.0 LTSSM digital core.
The ideology behind the hydrological modelling I do. It is a revisiting of part of a talk I gave at the CUAHSI biennial meeting in Boulder (CO) in July 2008. It promotes the modelling-by-components paradigm.
The development of embedded applications (such as Wireless Sensor Network protocols) often requires a shift to formal specifications. To ensure the reliability and performance of WSNs, such protocols must be designed following methods that reduce the error rate. Formal methods (such as automata, Petri nets, algebras, logics, etc.) have been widely used in the specification of these protocols, their analysis and their verification. After that, implementation is an important phase in deploying, testing and using those protocols in real environments. The main objective of the current paper is to formalize the transformation from a high-level specification (in Timed Automata) to a low-level implementation (in the NesC language on the TinyOS system) and to automate this transformation. The proposed transformation approach defines a set of rules that allow the passage between these two levels. We implemented our solution and illustrated the proposed approach on a protocol case study for "humidity" and "temperature" sensing in WSN applications.
FAULT MODELING OF COMBINATIONAL AND SEQUENTIAL CIRCUITS AT REGISTER TRANSFER ... - VLSICS Design
As the complexity of Very Large Scale Integration (VLSI) grows, testing becomes more tedious and difficult. At present, fault models are used to test digital circuits at the gate level or below. Using fault models at these lower levels makes testing cumbersome and leads to delays in the design cycle. In addition, developments in deep-submicron technology open the door to new defects. We must develop efficient fault detection and location methods in order to reduce manufacturing costs and time to market. There is thus a need for a new approach that tests circuits at higher levels to speed up the design cycle. This paper proposes Register Transfer Level (RTL) fault modeling for digital circuits and computes the fault coverage. The results obtained through this work establish that the fault coverage with the RTL fault model is comparable to gate-level fault coverage.
LDAC 2015 - Towards an industry-wide ifcOWL: choices and issues - Pieter Pauwels
Presentation at LDAC 2015 (http://ldac-2015.bwk.tue.nl/) in Eindhoven, together with Maria Poveda-Villalón (UPM, Madrid): Towards an industry-wide ifcOWL: choices and issues.
Spark-MPI: Approaching the Fifth Paradigm with Nikolay Malitsky - Databricks
Over the past decade, the fourth paradigm of data-intensive science rapidly became a major driving concept across application domains built around large-scale devices such as light sources and cutting-edge telescopes. The success of data-intensive projects subsequently triggered the next generation of machine learning approaches. These new AI systems represent a paradigm shift from data-processing pipelines towards the fifth paradigm of knowledge-centric cognitive applications, requiring the integration of Big Data processing platforms and HPC technologies.
The talk addresses the existing impedance mismatch between the data-intensive and compute-intensive ecosystems by presenting the Spark-MPI approach, based on the MPI Process Management Interface (PMI). The approach was originally designed for building high-performance streaming image-reconstruction pipelines at light source facilities. This talk will demonstrate Spark-MPI in the context of distributed deep learning applications by integrating the Apache Spark platform, PMI Exascale (PMIx) and the Horovod MPI-based training framework for TensorFlow.
Fault Modeling for Verilog Register Transfer Level - idescitation
As the complexity of Very Large Scale Integration (VLSI) increases, testing becomes tedious. Currently, fault models are used to test digital circuits at the gate level or at levels below the gate. Modeling faults at these levels increases the design cycle time. Hence, there is a need to explore new approaches for modeling faults at higher levels. This paper proposes fault modeling at the Register Transfer Level (RTL) for digital circuits. Using this level of modeling, results are obtained for fault coverage, area and test patterns. A software prototype, FEVER, has been developed in C which reads an RTL description and generates two output files: a modified RTL with test features, and a file consisting of a set of test patterns. The modified RTL and test patterns are then used for fault simulation and fault-coverage analysis. A comparison is performed between RTL and gate-level modeling for the ISCAS benchmarks, and the results are presented. Results are obtained using Synopsys TetraMAX, and it is shown that it is possible to achieve 100% fault coverage with no area overhead at the RTL.
Electrical Engineer in search of a team to join that values creativity, continual learning, and bringing value to the marketplace that goes beyond the bottom line. In other words, I want to do good work with great people.
This is the user guide for the quantum simulator I have developed. It includes tutorials on advanced quantum mechanics, quantum algorithms and geometric algebra, together with examples of using the simulator to do quantum mechanics, special relativity, geometric algebra and quantum computing.
An O(n)-in-time, O(1)-in-space substring-search algorithm, using an O(1)-updatable hash derived using group theory. The hash function is greyed out to prevent IP theft. The algorithm is far simpler than a trie or Knuth-Morris-Pratt. I am very surprised it has not been discovered by others.
An O(n+m)-in-time, O(1)-in-space search algorithm for a substring of length m in a target string of length n. Simpler than a trie or Knuth-Morris-Pratt. Can be reduced to O(n) using a group-theoretic O(1)-updatable hash that replaces the cumulative sum, which requires an O(m) verification step because "+" is commutative and cannot distinguish between anagrams: a+b+c = a+c+b = c+a+b, etc. I am not uploading the hash, because it would mean giving away R&D I do in quantum simulations.
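Since the group-theoretic hash is withheld, here is a minimal sketch of the O(n+m) variant described above, using the commutative cumulative sum as the O(1)-updatable filter together with the O(m) verification step the description mentions (names are mine, not the author's):

```cpp
#include <cstddef>
#include <iostream>
#include <string>

// Sliding cumulative sum as a cheap O(1)-updatable filter: when the window
// sum matches the pattern sum, an O(m) verification confirms the match,
// because "+" is commutative and cannot distinguish anagrams.
std::size_t substr_search(const std::string& text, const std::string& pat) {
    const std::size_t n = text.size(), m = pat.size();
    if (m == 0 || m > n) return std::string::npos;

    unsigned long target = 0, window = 0;
    for (std::size_t i = 0; i < m; ++i) {
        target += static_cast<unsigned char>(pat[i]);
        window += static_cast<unsigned char>(text[i]);
    }
    for (std::size_t i = 0; ; ++i) {
        // Sum matched: run the O(m) verification to reject anagrams.
        if (window == target && text.compare(i, m, pat) == 0) return i;
        if (i + m == n) return std::string::npos;
        // O(1) update: slide the window one character to the right.
        window += static_cast<unsigned char>(text[i + m]);
        window -= static_cast<unsigned char>(text[i]);
    }
}

int main() {
    std::cout << substr_search("the quick brown fox", "brown") << '\n'; // prints 10
}
```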
This is a roughly O(n) algorithm that generates the kth lexicographically ordered permutation of an n-element array from the integer k. For example, for a three-element array:
0 --> 0 1 2
1 --> 0 2 1
2 --> 1 0 2
3 --> 1 2 0
4 --> 2 0 1
5 --> 2 1 0
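One standard way to realise this mapping is the factorial number system: writing k with digit weights (n-1)!, (n-2)!, ..., 1!, each digit selects which of the remaining elements comes next. A compact sketch under that assumption (the pool erase makes this naive version O(n²); an order-statistics tree would bring it back toward linear time):

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Returns the k-th (0-based) lexicographic permutation of {0, 1, ..., n-1}
// by writing k in the factorial number system.
std::vector<int> kth_permutation(int n, std::uint64_t k) {
    std::vector<std::uint64_t> fact(n, 1);
    for (int i = 1; i < n; ++i) fact[i] = fact[i - 1] * i;

    std::vector<int> pool(n);            // remaining elements, in sorted order
    for (int i = 0; i < n; ++i) pool[i] = i;

    std::vector<int> perm;
    for (int i = n - 1; i >= 0; --i) {
        std::uint64_t digit = k / fact[i];   // index into the remaining pool
        k %= fact[i];
        perm.push_back(pool[digit]);
        pool.erase(pool.begin() + digit);    // O(n) removal; cheaper with a BIT
    }
    return perm;
}

int main() {
    for (std::uint64_t k = 0; k < 6; ++k) {  // reproduces the table above
        for (int x : kth_permutation(3, k)) std::cout << x << ' ';
        std::cout << '\n';
    }
}
```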
A class that automates conversion from a C++ recursive function to an iterative function. It allows the recursive function to preserve its structure by reproducing the call stack on an std::stack. The examples use combinatorics to illustrate usage.
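The class itself is not included in the description; the following hand-rolled sketch only illustrates the underlying idea on a combinatorial example, pushing the would-be recursive calls of C(n,k) = C(n-1,k-1) + C(n-1,k) onto an explicit std::stack:

```cpp
#include <iostream>
#include <stack>
#include <vector>

// Recursive reference: C(n, k) = C(n-1, k-1) + C(n-1, k), C(n, 0) = C(n, n) = 1.
// The iterative version preserves that structure by pushing the two
// "recursive calls" as frames onto an explicit std::stack.
long choose_iterative(int n, int k) {
    struct Frame { int n, k; };
    std::stack<Frame> calls;
    calls.push({n, k});
    long total = 0;
    while (!calls.empty()) {
        Frame f = calls.top();
        calls.pop();
        if (f.k == 0 || f.k == f.n) { ++total; continue; } // base case: one leaf
        calls.push({f.n - 1, f.k - 1});   // the two calls the recursion would make
        calls.push({f.n - 1, f.k});
    }
    return total;  // number of base-case leaves equals C(n, k)
}

int main() {
    std::cout << choose_iterative(5, 2) << '\n'; // prints 10
}
```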
Dirac demo (quantum mechanics with C++). Please note: There is a problem with... - Russell Childs
Simple demo of a framework I wrote for doing quantum mechanics in C++. It uses Dirac bras, kets, inner & tensor products, operators and so forth for the linear algebra of QM.
UML design for C++11 written to solve a problem at interview, please also see "Interview C++11 code". The UML design can be zoomed to render it more legible.
Example of dynamic programming to achieve O(n).
Interview question: N houses, each with a weighting on the value of goods. How can a burglar maximise profit if they are not allowed to visit neighbouring houses?
Solution: optimum(i) = max( optimum(i-1), optimum(i-2) + w(i) ), as implemented below.
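A minimal sketch of the O(n) dynamic program for the recurrence above, keeping only the last two optima so that space is O(1):

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// O(n) time, O(1) space: optimum(i) = max(optimum(i-1), optimum(i-2) + w(i)).
long max_loot(const std::vector<long>& w) {
    long prev2 = 0, prev1 = 0;                 // optimum(i-2), optimum(i-1)
    for (long v : w) {
        long cur = std::max(prev1, prev2 + v); // skip house i, or rob it
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1;
}

int main() {
    std::cout << max_loot({2, 7, 9, 3, 1}) << '\n'; // prints 12 (2 + 9 + 1)
}
```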
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 4. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Search and Society: Reimagining Information Access for Radical Futures - Bhaskar Mitra
The field of Information Retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Full resume dr_russell_john_childs_2013
RUSSELL JOHN CHILDS
London, SE18 4PN, UK
Email:
russelljohnchilds@gmail.com
Seattle, WA, 98122
LinkedIn:
www.linkedin.com/pub/russell-childs/4a/271/98
OBJECTIVE:
To utilise further my experience in analysis and OO architecture, design and safety-critical C++ within an organisation seeking technical software skills supported by a solid background in physics, mathematics and Best Practices. Primary skills: algorithm/data structure design and C++ HPC modelling/simulation.
QUALIFICATIONS:
Over 10 years of experience in full life cycle technical software engineering, primarily modelling complex systems.
Systematic and disciplined approach to solving engineering problems and providing software solutions to a rigorous standard (ISO 9000).
Confirmed ability to communicate technical concepts and present them to large audiences.
Proven ability in rapid assimilation, application and eidetic consolidation of new material.
Solid skills in effective mentoring, training and supervision of junior staff.
TECHNICAL SKILLS AND KNOWLEDGE:
Hardware:
Verilog (RTL, sequential/combinational logic, FSMs, timing), Vera, PLI.
Software:
C++/C++11, STL, UML, design patterns (core skills).
Tools:
Eclipse, Visual Studio 2013, Rational Rose, Intel Parallel Studio and VTune C++ profiler, gdb/dbx, Cantata for C++, QAC++, X-Designer, NetView/6000.
Platforms:
Solaris, AIX, Linux (Mint), Windows 95-7, .NET.
Standards:
ISO 9000, SIL3, CMMI3.
Knowledge and Experience:
Data structures/algorithms/graph algorithms, Statistics (regression, classifiers, Markov processes, tensor calculus), Information Theory, Digital Signal Processing, Financial Calculus, Quantum Algorithms and Computing,
Low-latency (Data-Oriented Design, cache coherency, C++ optimization for pipelining/branch prediction, OO architectural optimization), Design for Testability, ASIC/FPGA design (Altera Quartus II, TimeQuest, ModelSim), IPC, Threading: C++11 (lock-based, lock-free)/OpenMP/Intel TBB/POSIX, BSD sockets, P2P protocol BTP/1.0, Octave/MATLAB, SQL, FoxPro, Fortran.
Life cycle:
Requirements management, formal specifications, UML design, safety critical C++ coding and exhaustive, instrumented testing during “live” emulation.
EDUCATION AND TRAINING:
PhD: Particle Physics, Birmingham University, Birmingham, United Kingdom.
BSc: Physics, Liverpool University, Liverpool, United Kingdom.
Conducted postgraduate research at the CERN facility, Geneva, Switzerland.
RESEARCH ACCOMPLISHMENTS:
(1) Non-exponential solution to the travelling salesman problem in the absence of constraints on the number of salesmen. (2) Emulation of combinational logic using diffraction gratings, femtosecond semiconductor absorbers and mode-locked lasers. (3) Use of pions to facilitate catalysed fusion without muon sticking. (4) Optimal data compression via an enumeration scheme. (5) Typical-set error-correction encoding scheme.
PROFESSIONAL EXPERIENCE:
Ripple Labs, San Francisco, Senior Software Engineer.
Researcher 2009-Present
Concurrency maximisation: multivariate atomics using pointer switching between individual variables and aggregated structs for lock-free, atomic transactions. The current aim is to use atomics to automate thread-safety for safety-critical applications, where deadlock may be unavoidable because lock sequences depend on variable values. Additional work includes lock-free data structures that avoid memory reclamation by recycling reserved buffers, general techniques for linearised vectorisation, and Verilog netlist optimisation for obfuscation and vectorisation of C++ algorithms.
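A minimal sketch of the pointer-switching idea described above, assuming an immutable aggregated struct published through a single atomic pointer (all names hypothetical; safe memory reclamation is deliberately elided, since the reserved-buffer recycling mentioned is not shown):

```cpp
#include <atomic>
#include <iostream>

// Several variables are aggregated into one immutable struct; a single
// atomic pointer is switched to publish a new version, so readers always
// see a consistent multivariate snapshot without taking locks.
struct State {
    int    position;
    double velocity;
};

std::atomic<const State*> current{new State{0, 0.0}};

void update(int dp, double dv) {
    const State* old_state = current.load(std::memory_order_acquire);
    State* next = nullptr;
    do {
        delete next;  // discard a stale attempt (no-op on first pass)
        next = new State{old_state->position + dp, old_state->velocity + dv};
        // CAS retries if another writer switched the pointer first;
        // on failure, old_state is reloaded with the current value.
    } while (!current.compare_exchange_weak(old_state, next,
                                            std::memory_order_acq_rel));
    // NOTE: reclaiming old_state safely needs hazard pointers or the
    // reserved-buffer recycling mentioned above; leaked here for brevity.
}

int main() {
    update(3, 1.5);
    const State* s = current.load(std::memory_order_acquire);
    std::cout << s->position << ' ' << s->velocity << '\n'; // prints "3 1.5"
}
```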
Relationship between Taylor and Fourier coefficients of a signal, allowing classes of problems to be solved using a Taylor series and then converted to DSP.
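The relationship itself is not stated here; one standard instance, which may or may not be the one intended, assumes the signal is the boundary value of a function analytic on the unit disc:

```latex
f(z) = \sum_{n \ge 0} a_n z^n
\;\Longrightarrow\;
f(e^{i\theta}) = \sum_{n \ge 0} a_n e^{i n \theta},
\qquad
c_n = \begin{cases} a_n, & n \ge 0 \\ 0, & n < 0, \end{cases}
```

so the Taylor coefficients a_n coincide with the one-sided Fourier coefficients c_n of the boundary signal, letting a problem solved term by term in the Taylor domain be re-read as a DSP (spectral) computation.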
Foundational physics: application of Bayes nets to quantum transition statistics. Connections between the metric and purely trigonometric properties of R² planes in Rⁿ. Removal of physics constraints on Feynman path integrals in exploring the origins of the axioms of quantum theory.
Microsoft Corporation, Redmond, WA
SDE II May 2008-Aug 2009
Undertook debugging and maintenance of the code base, build optimisation and query latency reduction.
Development of a load-balancing algorithm for query distribution across Bing services.
Statistical modeling of counters.
Researcher 2005-May 2008
Entropy encoding:
A mapping has been identified from the complete set of states of a system of a given entropy to a contiguous enumeration, yielding a rate

    R = (1/N) log₂ Ω(F) → H(F) as N → ∞,

where N is the number of particles, m the number of particle states, F the particle-state frequency distribution and Ω(F) = N!/(F₁! ··· F_m!) its multiplicity.
The mapping assumes i.i.d. particle states and loses the statistical information F, which must be supplied during decoding via a header or a predictive model. Current research is aimed at application to data compression.
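The enumeration itself is not given here; as an illustration of the general idea only, below is the classic enumerative (ranking) code for fixed-weight binary strings, whose rate tends to the entropy once the frequency header is amortised (function names are mine, not the author's):

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Exact binomial coefficient via the multiplicative formula; each
// intermediate value is itself a binomial coefficient, so division is exact.
std::uint64_t binom(unsigned n, unsigned k) {
    if (k > n) return 0;
    std::uint64_t r = 1;
    for (unsigned i = 1; i <= k; ++i) r = r * (n - k + i) / i;
    return r;
}

// Rank of a fixed-weight binary string among all strings of the same length
// and weight, in lexicographic order: every '1' skips past the strings that
// have a '0' in that position, of which there are C(rest, ones).
std::uint64_t rank(const std::vector<int>& bits) {
    unsigned ones = 0;
    for (int b : bits) ones += b;
    std::uint64_t index = 0;
    for (unsigned i = 0; i < bits.size(); ++i) {
        unsigned rest = bits.size() - i - 1;
        if (bits[i] == 1) {
            index += binom(rest, ones);
            --ones;
        }
    }
    return index;
}

int main() {
    // Weight-2 strings of length 3 in lex order: 011, 101, 110.
    std::cout << rank({0, 1, 1}) << ' ' << rank({1, 1, 0}) << '\n'; // "0 2"
}
```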
Advantest, Santa Clara, CA
Senior Software Engineer
2002-2004
Undertook architecture of a framework for event-based simulation of ATE hardware modules.
Undertook creation and implementation of a requirements management strategy to promote
company CMMI compliance.
Assisted in the creation of a software coding standards manual.
Performed integration testing of C++ software modules.
Performed technical documentation and writing of formal specifications.
Undertook design/implementation of a framework for automated testing of software modules.
Sun Microsystems, Menlo Park, CA
Member of Technical Staff
2000-2002
Undertook formal design, implementation and formal testing of a Verilog/C++ behavioural model for a rapid address/data switch ASIC. The model was event driven and comprised generic objects whose collective instantiations emulated only the active parts of the network for maximal efficiency. The objects were self-configuring, to be a particular ASIC, and self-organising according to the network packets received. PLI provided a layer between the C++ model and a thin Verilog interface providing port compatibility during Verilog co-simulation.
Acted as lead designer for a template class library extension to Vera facilitating use of STL-like container classes built around an efficient self-balancing tree implementation, with O(1) amortised rebalancing rather than the O(log n) amortised cost of Red-Black and AVL trees, and O(1) min/max/predecessor/successor. The tree was an augmented structure doubling as a sorted linked list.
Designed and implemented a formal strategy for non-invasive, grey-box, automated unit and integration testing of C++ and Vera classes.
Mentored, trained and supervised an intern on one year placement with Sun.
Acted as reader/inspector for formal design reviews.
EDS, Hook, Hampshire, United Kingdom
Information Analyst
1999-2000
Engaged in formal design and implementation of fault tolerant, safety critical C++ code for
National Air Traffic Services, subject to ISO 9000 quality standard and SIL3 safety standard.
Developed GUIs under X-Designer and NetView/6000 on AIX.
Performed static code analysis (standard metric and conformance verification) using QAC++.
Performed instrumented coverage testing (100% statement, 100% entry point, 80% decision)
and McCabe metric analysis using Cantata for C++.
Undertook FMECA hazard analysis.
Participated in formal peer reviews of design, code and coverage tests.
Rolls-Royce Control Systems, Derby, Derbyshire, United Kingdom
Analyst Programmer
1998-1999
Developed physics modelling methods and supported code for the Naval Nuclear Propulsion
Programme, subject to ISO 9000 quality standard. Utilised C and embedded SQL.
Generated C shell and Perl automated test scripts.
Developed graphical user library using OpenGL, C/C++.
Developed, maintained and supported FoxPro 2.5, 2.6 and Visual FoxPro applications.
Participated in formal reviews of safety critical ADA code.
Merlin Distribution, Westbury, Wiltshire, United Kingdom
Systems Analyst/Programmer
1996-1998
Developed, implemented and supported bespoke applications in FoxPro 2.6, Visual Studio 5.
Developed optimisation algorithms for vehicle loading and route scheduling (NP-Hard).
Validated against Monte-Carlo data using minimum-χ² and maximum log-likelihood fits.
Private Research, United Kingdom
Researcher
1993-1996
Performed analysis and modelling of catalysed thermonuclear fusion (resonant formation of muon- and pion-catalysed [D-H] molecular states, fusion cross-sections, rotating-magnetic-field plasma confinement, magnetically funnelled ion energy extraction).
Performed analysis and modelling of mode-locked, pulsed (10⁻¹⁵ s) laser computer switches.
Royal Naval College, Greenwich, London, United Kingdom
Senior Lecturer
1991-1993
Lectured in atomic and nuclear physics and mathematics to MSc level.
Undertook student laboratory supervision.
Supervised student field studies.
Drafted, invigilated and assessed examination papers.
Conducted research into consequential risk and stochastic health effects of nuclear accidents.
ACADEMIC EXPERIENCE:
Postgraduate research included:
Statistical analysis of spectral mass distributions. Identification of particle resonances above phase space background. Identification of particles through quantum signatures and branching ratios. Determination of candidates for exotic gluonic states.
Determination of gamma detector spatial and energy resolution through simplified statistical imaging techniques. Calibration of the gamma detector through known particle decays.
RUSSELL JOHN CHILDS – RESUME ADDENDUM
DETAILED PROFESSIONAL AND ACADEMIC HISTORY
Microsoft Corporation
SDE II
2008-2009
Primary responsibilities:
1. Undertook debugging and maintenance of the code base, build optimisation and query latency reduction.
2. Development of a load-balancing algorithm for query distribution across Bing services.
3. Statistical modeling of counters.
Advantest, Santa Clara, CA
Senior Software Engineer
2002-2004
Primary responsibilities:
1. Developing the architecture for a framework permitting integration of C++ or Verilog models of ATE hardware modules. The framework utilised the concept of events to notify different components of system activity. Event traffic was coordinated by the framework, allowing models to communicate through event notifications. Cyclisation was regarded as a specialised type of event handling in which a rising or falling clock edge occurred at regular intervals and could be ignored by unregistered components, alleviating the problem of expensive calls across the PLI layer. Interrupts became a seamless component of the methodology.
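A toy sketch of the registration-based event scheme this paragraph describes; the real framework is not shown, so class and event names here are hypothetical:

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Toy event bus: components register only for the events they care about,
// so unregistered components never pay for, e.g., clock-edge notifications.
class EventBus {
public:
    using Handler = std::function<void(const std::string& payload)>;
    void subscribe(const std::string& event, Handler h) {
        handlers_.emplace_back(event, std::move(h));
    }
    void publish(const std::string& event, const std::string& payload) {
        for (auto& [name, h] : handlers_)
            if (name == event) h(payload);
    }
private:
    std::vector<std::pair<std::string, Handler>> handlers_;
};

int main() {
    EventBus bus;
    bus.subscribe("clk_rise", [](const std::string&) {
        std::cout << "pin driver sampled on rising edge\n";
    });
    bus.subscribe("interrupt", [](const std::string& p) {
        std::cout << "interrupt handled: " << p << '\n'; // interrupts are just events
    });
    bus.publish("clk_rise", "");
    bus.publish("interrupt", "timeout");
}
```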
2. Developing a methodology for streamlining requirements management, replacing existing ad hoc and informal requirements control. Identified replicated, redundant and conflicting requirements, moving the company toward CMMI3 compliance. The company was disinclined toward the use of commercial tools such as Requisite Pro, necessitating a strategy with minimal impact on existing processes. To this end a spreadsheet was used into which cross-referenced requirements were entered, maintained, categorised and validated for acceptance or rejection. Proposals were made for a best-practice procedure for submission and review of new requirements. CMMI compliance was made mandatory by the customer, which approved and welcomed the strategy adopted.
3. Performing integration testing of a multiple-class base and developing a framework for automating testing of all parameters in all classes such that normal, boundary and error cases were exhaustively covered. Testing was performed in vivo on objects while they were in use in their system, rather than in vitro as standalone objects.
4. Developing a framework for unit testing of individual classes.
5. Writing technical user guides and specifications reverse-engineered from class code. Defined open standards from BNF specifications, class interfaces and tester opcodes. Advised the company on the pitfalls of reverse engineering and mechanisms for mitigating proprietary creep into open architecture standards.
Sun Microsystems, Menlo Park, CA
Member of Technical Staff
2000-2002
Primary responsibilities:
1. Leading the development of a behavioural model for a rapid address/data switch ASIC:
To improve the performance of system simulations of hardware components, models were developed to act as fast surrogates for the hardware. Verilog RTL was used to synthesise the processor hardware and the models comprised a fast C++ Core wrapped by a thin Verilog layer. The Core implemented the behavioural emulation of the hardware whilst the Verilog wrapper provided a pin compatible
interface to the rest of the hardware system. This provided for direct, transparent substitution of the
model into the hardware environment.
The Verilog wrapper invoked the C++ Core at each clock cycle, unless the Core sent notification of
inactivity. The PLI layer accommodated communication between the Verilog and the C++ for which I
devised an efficient protocol. Events received by the Verilog, such as network traffic packets or
requests to access registers, were conveyed directly to the Core through this protocol. The results of
processed events were also communicated back to the Verilog wrapper through the protocol.
The C++ Core comprised a set of base classes, which were reusable across different ASICs, and a set
of final layer classes specific to the ASIC being modelled. The principal functionality of the base
classes was to provide automatic registration and boot-strapping of active Core components across
clock cycles (between which the Core went out of scope), strictly controlled resource management,
arbitration during resource contention and controlled access to system registers. The final layer
classes were specific to the switch ASIC and processed incoming network traffic, routed it to
destination clients, processed parity and unrecoverable errors and implemented network traffic flow
and freeze control.
An object oriented design methodology was undertaken in which components were decoupled to
allow for individual replacement, in the event of changes to the hardware being modelled, with
minimal impact on the remaining architecture. The model was event driven and run-time configurable
to represent any chip in the network, upon instantiation. Clock edges were treated as generic events
and asynchronous interrupts were handled seamlessly. The reusable components used messaging to
determine the runtime behaviour resulting from events spawned by clock edges and interrupts. For
example, an arbiter component could be made to use a round robin or an alternative scheduling
scheme according to messages received. To conserve memory and improve performance only the
active sections of the network and active parts of active chips were represented within a given clock
cycle. The collective behaviour of active model components provided the emulation of routing of
network traffic and the observance of protocols, such as cache coherency, in the hardware.
Function templates and a recurring template pattern were used to eliminate most runtime
polymorphism, increasing performance and type safety. Model component classes were strongly
decoupled and utilised fast, direct messaging to facilitate communication. New types could be added
to the system without recompilation of existing code.
2. Leading the design and development of a template class pre-processor and library for Vera.
Vera was used by verifiers to exercise features of the hardware prior to fabrication. To provide greater
ease of usage and introduce greater OO design flexibility I developed a pre-processor lexical
extension allowing template-class syntax to be processed into strongly typed Vera classes upon
instantiation. The use of the pre-processor enabled template class syntax to be incorporated into Vera
code non-invasively. This prevented interference with compiler and debugger error line reporting.
Building upon the capabilities of the pre-processor extension I designed a Vera template class library
that mirrored a subset of the C++ Standard Template Library. The Vera template class library
comprised a set of strongly typable container classes, self-validating data-type classes, with automatic
serialisation, and event handler classes including timer and messaging classes. The foundation for the
container classes was a highly optimised augmented binary tree class with additional pointers
providing linked-list nodes. This offered dual and simultaneous capabilities: as a binary tree, providing searches, insertions, deletions and linear indexing in O(log₂ size) time; and as a sorted linear linked list, allowing fast copying to an array for linear traversal and linear-time rebalancing. This was accommodated through the node structures of the tree. An efficient tree-balancing mechanism was implemented and subjected to rigorous mathematical proof.
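A structural sketch of a node for such a dual tree/linked-list container, with hypothetical field names; the balancing machinery and the O(1) amortised rebalancing scheme are omitted:

```cpp
#include <iostream>

// Hypothetical node layout for the augmented binary tree described above:
// tree links give O(log n) search/insert/delete; the extra prev/next links
// thread the nodes into a sorted doubly linked list for linear traversal.
struct Node {
    int   key;
    int   subtree_size;    // supports O(log n) linear indexing (select by rank)
    Node* left  = nullptr;
    Node* right = nullptr;
    Node* prev  = nullptr; // in-order predecessor: sorted-list threading
    Node* next  = nullptr; // in-order successor
};

// Linear traversal never touches the tree links at all.
void print_sorted(const Node* head) {
    for (const Node* n = head; n != nullptr; n = n->next)
        std::cout << n->key << ' ';
    std::cout << '\n';
}

int main() {
    Node a{1, 1}, b{2, 3}, c{3, 1};   // b is the root, a and c its children
    b.left = &a; b.right = &c;
    a.next = &b; b.prev = &a; b.next = &c; c.prev = &b;
    print_sorted(&a);                 // prints "1 2 3"
}
```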
3. Development and implementation of a non-invasive, grey-box, automated and repeatable test strategy
for C++ and Vera classes.
To facilitate rigorous testing of C++ and Vera classes I developed a non-invasive test strategy. Wrapper classes, deriving from classes under test (CUTs), contained overridden methods that intercepted and monitored invocations of public and internal methods in the CUTs during program execution by sandwiching calls to the base method between pre- and post-condition test code.
The wrapper classes replaced the CUTs within real-world code test harnesses designed to test the classes in anger. Test harness operation was governed by a dedicated TestCase class, which provided automated program flow validation and check-pointing, through the wrapper class overrides, together with reporting of test results. The class allowed for precise profiling of class failures under test.
Test plans were developed which enumerated the normal, boundary and error cases and provided an operations list detailing coverage of the test cases, predicted program flow and check-point values. The operations list was a description of the test harness implementation.
The test strategy was validated, by induction, through the use of the TestCase class to test itself.
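A minimal sketch of the wrapper-class idea: the CUT here is a made-up Counter class, and the pre/post-condition sandwich stands in for the fuller TestCase machinery described above:

```cpp
#include <cassert>
#include <iostream>

// Hypothetical class under test (CUT).
class Counter {
public:
    virtual ~Counter() = default;
    virtual void add(int n) { total_ += n; }
    int total() const { return total_; }
private:
    int total_ = 0;
};

// Non-invasive wrapper: derives from the CUT and sandwiches each call
// between pre- and post-condition checks, so the CUT itself is untouched.
class CounterWrapper : public Counter {
public:
    void add(int n) override {
        int before = total();            // pre-condition snapshot
        Counter::add(n);                 // forward to the real method
        assert(total() == before + n);   // post-condition check
        ++calls_observed;                // check-pointing for flow validation
    }
    int calls_observed = 0;
};

int main() {
    CounterWrapper c;                    // replaces the CUT in the harness
    c.add(3);
    c.add(4);
    std::cout << c.total() << " (" << c.calls_observed << " calls)\n"; // 7 (2 calls)
}
```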
4. Additional duties included mentoring, training and supervising an intern on one-year placement together with participation in design reviews as reader and inspector.
I undertook the training, in C++ and event driven, object oriented techniques and methodologies, of an intern on placement with Sun. I developed his capacities as a technical software engineer and encouraged independent and lateral thinking. I placed great emphasis on initiative and tenacious, disciplined problem solving. I also placed great emphasis on providing encouragement and team recognition of his accomplishments and contributions during his involvement with behavioural modelling. I supervised his work and quarterly evaluations.
As reader and inspector at formal design reviews I enforced rigorous standards in the drafting of hardware design documents. I required that they provide the requisite detail and clarity appropriate to implementation by hardware engineers and modelling and testing by verifiers.
EDS, Hook, Hampshire, UK
Information Analyst
1999-2000
Primary responsibilities:
1. Development of C++ APIs for local-local, local-remote and remote-remote client file transfers and centralised print services. The development strongly utilised object oriented methodology and the implementation required good knowledge of AIX kernel processes, primarily sockets, named pipes, shared memory and kernel printer protocols.
2. Development of a GUI enabling monitoring of system resources and activity. X-Designer and NetView/6000 were fully utilised during development.
3. Application of ISO 9000 and SIL3 safety standards. Since the development constituted part of the central architecture responsible for providing Air Traffic Control with proximity information, as part of the collision avoidance mechanism for transatlantic flights, strict standards were applied. The development was subject to peer and formal review during design, implementation and unit testing. Strict coding standards were enforced and high levels of coverage testing were formally required. I conducted FMECA hazard analysis on all module inputs and assessed risk levels spanning low severity to catastrophic failure.
Rolls-Royce Control Systems, Derby, Derbyshire, UK
Analyst Programmer
1998-1999
Primary responsibilities:
1. Development of algorithms for numerical interpolation of solutions to the neutron transport equation
across finite elements of a fission reactor core model. The development of the model was subject to
ISO 9000 standards and under contract to the Royal Navy. The core was designed for use in nuclear
powered submarines and development constituted part of the Naval Nuclear Propulsion Programme.
The algorithms were developed in C and interfaced with nuclear databases through embedded SQL.
2. Development of a graphical user library. The user library was developed to facilitate fast production
of graphical applets, for presentation on web sites and at exhibitions, and 3-D data representations.
Much of the development required optimised transformation algorithms to provide higher frame rates,
curve fitting and surface rendering.
3. General development of database applications in FoxPro 2.6/Visual FoxPro 5 and reviewing of ADA
code and design. These duties were undertaken whilst awaiting Secret Atomic security clearance from
the MoD.
Merlin Distribution, Westbury, Wiltshire, United Kingdom
Systems Analyst/Programmer
1996-1998
Principal duties:
1. Development of a route-scheduling algorithm for steel distribution across sites in England, Scotland
and Wales. Shipment of steel from production sites in Wales to other parts of the country was subject
to manual scheduling. I developed a partial solution to the problem of optimising route scheduling
based upon an algorithmic solution to the Travelling Salesman Problem I developed earlier. It
produced disjoint routes, each satisfying the Single Travelling Salesman Problem and jointly
minimising distance and number of salesmen. Although graph algorithms existed to join these routes,
I could not prove analytically that this solved the single TSP. I used sets of towns with known
solutions for Monte-Carlo simulations to obtain confidence limits on the optimality of solutions. The
algorithm was roughly O(n^4.5) in time.
2. Development of Visual FoxPro 5 inventory management software.
Royal Naval College, Greenwich, London, United Kingdom
Senior Lecturer
1991-1993
Principal Duties:
1. Lecturing in nuclear and atomic physics and mathematics. Courses ranged from basic to MSc level.
The student body comprised naval officers, submariners, health physicists and dockyard nuclear
operators. I had responsibilities for providing training in nuclear theory, special relativity, quantum
mechanics and mathematics. The courses were designed to provide training for non-technical
personnel and university graduates. Accordingly, I developed an ability to express complex principles
in mathematics and physics in a manner allowing for their assimilation by personnel from varying
educational backgrounds. I placed heavy emphasis on everyday analogy and providing explanations
from a variety of perspectives to accommodate the differences in thinking processes across the student
body. I adopted an informal approach in my relationship with my students and encouraged team
cooperation within the groups to ensure that all students were assisted in their understanding of the
complex lecture material. Much of the tutoring took place out of hours in informal settings to allow
non-technical students to be given additional help in understanding the material.
2. Supervision of student laboratories and field studies. Practical application of nuclear principles took place within student laboratories, and industrial application was demonstrated during field trips to nuclear establishments. I supervised laboratory practicals and long-term visits to Rolls-Royce and the Dounreay reprocessing and fast-breeder fission research complex.
3. Research into the risks and stochastic health effects of nuclear incidents. To ensure safe levels of operation during berthing of nuclear submarines in populated regions the college conducted research into the likelihood, severity and health implications of reactor core breaches. I conducted research into the distribution of fission products released during such incidents, the effects of prevailing weather conditions on particulate and gaseous emissions and the effects of inhalation, ingestion and physical exposure on population groups. Hazard analyses were performed which identified main risks to populations of long term health impairment, incapacitation or death and assessed the likelihood against severity of different categories of nuclear incidents.
PhD: Particle Physics, Birmingham University, Birmingham, United Kingdom
12/1990
Lecture courses: Relativistic Quantum Mechanics, Quantum Field Theory, Gauge Field Theory, Unified Field Theory, Superstring Theory, Supersymmetry Theory, General Relativity and Theoretical Particle Physics.
Research: Identification of exotic gluonium states, glueballs, produced in 300 GeV/c proton-proton collisions. Conducted partial wave analysis and meson spectral mass determination through least square and maximum likelihood fits to Gaussian, Breit-Wigner, Weibull and Granet phase-space background distributions. Potential candidates were identified through their quantum signatures and subject to deeper analysis of production cross-sections, decay branching ratios and angular decay distributions.
Experimental: Undertook complete supervision of a large, medium-grain and a small, high-grain photon calorimeter prior to and during data runs at the CERN facility, Geneva. I calibrated the calorimeters against π⁰ and η mass distributions, determined threshold operational voltage values, improved signal discrimination, and developed pattern recognition algorithms to detect and measure photon hits, energy deposition and spatial distribution within the calorimeters. I also undertook reduction of raw data to physics data for use in the analysis of the particle interactions giving rise to photon production. Monte-Carlo simulations were used to determine the effects of calorimeter geometry, together with spatial and energy resolution, on detector efficiency. Known particle decays were simulated according to branching ratio and random orientation and used to determine the average detection efficiency of the calorimeters.
BSc: Physics, Liverpool University, Liverpool, United Kingdom
07/1985
Lecture Courses: Special Relativity, Quantum Mechanics, Particle Physics, Astrophysics, Low Temperature Physics, Solid State Physics, Atomic Physics, Nuclear Physics, Mechanics, Optics, Wave Mechanics, General Theory, Physics of Nuclear Reactors, Geophysics, Mathematics and Electronics.
Dr Russell John Childs – Publications 1987-1990
1. Nucleus-Nucleus Interactions using the CERN Ω Spectrometer and a Multiparticle High-p_T Detector.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, M.T. Trainor, O. Villalobos Baillie, M.F. Votruba (Birmingham)
+ Athens, Bari, CERN, Collège de France and Paris.
IX Autumn School ‘The Physics of the Quark Gluon Plasma’, Lisbon, Dec. 1987.
2. Preliminary results on … and … Production at p_T > 1.0 GeV/c in the Central Rapidity Region in 200 GeV/c Sulphur Tungsten Interactions.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, O. Villalobos Baillie, M.F. Votruba (Birmingham) + Athens,
Bari, CERN, Collège de France and Paris.
Proceedings of the XXIIIrd Rencontres de Moriond, 'Current Issues in Particle Physics', Les Arcs, March 1988, pp 127-134.
3. A search for glueballs in the Central Region in the Reaction pp → p_f (X⁰) p_s at 300 GeV/c Using the CERN Ω Spectrometer.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, H.R. Shaylor, O. Villalobos Baillie, M.F. Votruba (Birmingham)
+ Athens, Bari, CERN, Collège de France and Paris.
American Institute of Physics, Particles and Fields 36 (1988) 340-349.
4. Direct …⁰ Production at Large p_T.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, I.C. Print, H.R. Shaylor, O. Villalobos Baillie, M.F. Votruba
(Birmingham) + Athens, Bari, CERN, Collège de France and Paris.
Nuclear Physics 7B (1989) 228-242.
5. Use of Silicon Microstrips for Precise Measurement of High Momenta.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, H.R. Shaylor, M.T. Trainor, O. Villalobos Baillie, M.F. Votruba
(Birmingham) + Athens, Bari, CERN, Collège de France and Paris.
Nuclear Instruments and Methods A274 (1989) 165-170.
6. Observation of Double φ-Meson Production in the Central Region for the Reaction pp → p_f (K⁺K⁻K⁺K⁻) p_s at 300 GeV/c.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, S.J. Prosser, H.R. Shaylor, O. Villalobos Baillie, M.F. Votruba
(Birmingham) + Athens, Bari, CERN, Collège de France.
Physics Letters 221B (1989) 221-226.
7. A Spin-Parity Analysis of the f₁(1285) and f₁(1420) Mesons Centrally Produced in the Reaction pp → p_f (K⁰_S K± π∓) p_s at 300 GeV/c.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, H.R. Shaylor, O. Villalobos Baillie, M.F. Votruba (Birmingham)
+ Athens, Bari, CERN, Collège de France.
Physics Letters 221B (1989) 216-220.
8. Observation of Centrally Produced θ/f₂(1720) in the Reaction pp → p_f (K⁺K⁻) p_s at 300 GeV/c.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, O. Villalobos Baillie, M.F. Votruba (Birmingham) + Athens,
Bari, CERN, Collège de France.
Physics Letters 227B (1989) 186-190.
9. Evidence for New States Produced in the Central Region in the Reaction pp → p_f (π⁺π⁻π⁺π⁻) p_s at 300 GeV/c.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, O. Villalobos Baillie, M.F. Votruba (Birmingham) + Athens,
Bari, CERN, Collège de France.
Physics Letters 228B (1989) 536-542.
10. Search for Non-qq̄ Mesons in the Central Region in the Reaction pp → p_f (X⁰) p_s at 300 GeV/c using the CERN Ω Spectrometer.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, O. Villalobos Baillie, M.F. Votruba (Birmingham) + Athens,
Bari, CERN, Collège de France.
Proc. Int. Conf. On High Energy Experiments and Methods, Prague, June 1989, pp 21-26.
11. Search for Non-qq̄ Mesons in Central Production.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, O. Villalobos Baillie, M.F. Votruba (Birmingham) + Athens,
Bari, CERN, Collège de France.
Proc. Int. Europhysics Conf. On High Energy Physics, Madrid, Sept. 1989.
12. A Study of Centrally Produced K* K̄* in the Final State in the Reaction pp → p_f (K⁺K⁻π⁺π⁻) p_s at 300 GeV/c.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, O. Villalobos Baillie, M.F. Votruba (Birmingham) + Athens,
Bari, CERN, Collège de France.
Zeitschrift für Physik C – Particles and Fields 46 (1990) 405-410.
13. Recent WA76 Results on Central Production.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, O. Villalobos Baillie, M.F. Votruba (Birmingham) + Athens,
Bari, CERN, Collège de France.
Proc. Int. Conf. On Hadron Spectroscopy, Ajaccio, Sept. 1989, pp 91-99.
14. A Study of the Centrally Produced …⁰ System formed in the Reaction pp → p_f (…⁰) p_s at 300 GeV/c.
I.J. Bloodworth, J.N. Carney, R. Childs, J.B.Kinson, A. Kirk, O. Villalobos Baillie, M.F. Votruba (Birmingham) + Athens,
Bari, CERN, Collège de France.
Zeitschrift für Physik C – Particles and Fields 48 (1990) 213-220.