This document proposes a theoretical quantum brain model called a Recurrent Quantum Neural Network (RQNN) to describe eye movements when tracking moving targets. The model suggests that a quantum process in the brain mediates the collective response of neurons. When simulating the model, two phenomena are observed: 1) as eye sensor data is processed, a wave packet is triggered in the quantum brain that moves like a particle, and 2) when tracking a fixed target, this wave packet moves discretely rather than continuously, resembling saccadic eye movements. The model precisely predicts eye movements, performing better than classical models like the Kalman filter.
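The classical baseline mentioned above, the Kalman filter, can be sketched for a one-dimensional constant-velocity target. All numerical values below (time step, noise covariances, number of steps) are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Minimal 1D constant-velocity Kalman tracker (the classical baseline the
# RQNN is compared against). State = [position, velocity]; we observe
# position only. All constants here are illustrative.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # measurement model: position only
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.05]])                  # measurement noise covariance

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial state covariance

rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 1.0
estimates = []
for _ in range(100):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(0.0, 0.2)     # noisy position measurement
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - (H @ x)[0, 0]                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K * y
    P = (np.eye(2) - K @ H) @ P
    estimates.append(x[0, 0])

final_error = abs(estimates[-1] - true_pos)
```

The filter's position estimate converges to within the measurement noise of the true trajectory, which is the behavior the RQNN model is claimed to improve upon.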
A STDP RULE THAT FAVOURS CHAOTIC SPIKING OVER REGULAR SPIKING OF NEURONS (IJAIA)
We compare the number of states of a Spiking Neural Network (SNN) composed of chaotic spiking neurons with the number of states of an SNN composed of regular spiking neurons, while both SNNs implement a Spike Timing Dependent Plasticity (STDP) rule that we created. We find that this STDP rule favours chaotic spiking, since the number of states is larger in the chaotic SNN than in the regular SNN. This chaotic favourability is not general; it is exclusive to this STDP rule. This research falls under our long-term investigation of STDP and chaos theory.
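The abstract does not specify the authors' rule, so the pair-based, exponential-window STDP update below is only a generic illustration of this class of plasticity rules; the amplitudes and time constant are made-up values.

```python
import math

# Generic pair-based STDP: a pre-before-post spike pair strengthens the
# synapse, a post-before-pre pair weakens it, with exponentially decaying
# windows. Constants are illustrative, not the authors' rule.
A_PLUS, A_MINUS = 0.05, 0.055   # potentiation / depression amplitudes
TAU = 20.0                      # STDP window time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post: potentiate
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:    # post before pre: depress
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)   # causal pair: w increases
w += stdp_dw(t_pre=30.0, t_post=22.0)   # anti-causal pair: w decreases
```

Applying such a rule to chaotic versus regular spike trains yields different weight trajectories, which is the kind of comparison the abstract describes at the level of network states.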
Boundedness of neural network weights using the notion of a limit of a sequence (IJDKP)
A feed-forward neural network with the backpropagation learning algorithm is considered a black-box learning classifier, since there is no certain interpretation or anticipation of the behavior of the network weights. The weights of a neural network are considered the learning tool of the classifier, and the learning task is performed by the repeated modification of those weights. This modification is performed using the delta rule, which is mainly used in the gradient descent technique. In this article a proof is provided that helps to understand and explain the behavior of the weights in a feed-forward neural network with the backpropagation learning algorithm. It also illustrates why a feed-forward neural network is not always guaranteed to converge to a global minimum. Moreover, the proof shows that the weights in the neural network are upper bounded (i.e. they do not approach infinity). Data Mining, Delta
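The delta rule the abstract refers to can be sketched for a single sigmoid unit trained by gradient descent; the AND-style task, learning rate, and epoch count below are illustrative choices, not taken from the article.

```python
import numpy as np

# Delta rule for one sigmoid unit: dw = eta * (target - output) * f'(net) * input.
# Trained here on an AND-like task as a toy example.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([0.0, 0.0, 0.0, 1.0])      # AND truth table as targets
w = np.zeros(2)
b = 0.0
eta = 0.5                                # learning rate

for _ in range(2000):
    for x_i, t_i in zip(X, t):
        y = sigmoid(w @ x_i + b)
        grad = (t_i - y) * y * (1.0 - y)   # error times sigmoid derivative
        w += eta * grad * x_i              # delta rule update
        b += eta * grad

preds = sigmoid(X @ w + b)
```

Note that the update magnitude is damped by the sigmoid derivative y(1 - y), which shrinks as the unit saturates; this damping is one intuition behind the article's claim that the weights stay bounded.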
Neuromorphic Computing indicates a broad area of research that aims at achieving means of physical information processing inspired by biological brains. As such, these systems are envisaged as the ideal approach for implementing artificial neural network concepts. With the rapid pace of development in Deep Learning, the synergy between the development of neuromorphic hardware and neural network concepts is fundamental to obtaining intelligent systems that can exploit the full potential of learning efficiently.
This talk aims at giving a broad overview of the possibilities of such synergy. First, we will quickly explore the fundamental differences between neuromorphic and traditional computing, and then we will focus on concepts, algorithms, and neural architectures that are amenable to neuromorphic implementation.
Dynamics of a chaotic spiking neuron model are studied mathematically and experimentally. The Nonlinear Dynamic State neuron (NDS) is analysed to further understand the model and improve it. Chaos has many interesting properties, such as sensitivity to initial conditions, space filling, control, and synchronization. As suggested by biologists, these properties may be exploited and play a vital role in carrying out computational tasks in the human brain. The NDS model has some limitations; in this paper the model is investigated to overcome some of these limitations in order to enhance it. Therefore, the model's parameters are tuned and the resulting dynamics are studied. The discretization method of the model is also considered. Moreover, a mathematical analysis is carried out to reveal the underlying dynamics of the model after tuning of its parameters. The results of these methods revealed some facts regarding the NDS attractor and suggest the stabilization of a large number of unstable periodic orbits (UPOs), which might correspond to memories in phase space.
Recurrent Neural Networks (RNNs) represent the reference class of Deep Learning models for learning from sequential data. Despite their widespread success, a major downside of RNNs and commonly derived ‘gating’ variants (LSTM, GRU) is the high cost of the training algorithms involved. In this context, an increasingly popular alternative is the Reservoir Computing (RC) approach, which limits the training algorithm to operating only on a restricted set of (output) parameters. RC is appealing for several reasons, including its amenability to implementation on low-power edge devices, enabling adaptation and personalization in IoT and cyber-physical systems applications.
This webinar will introduce Reservoir Computing from scratch, covering all the fundamental design topics as well as good practices. It is targeted at both researchers and practitioners interested in setting up fast-to-train Deep Learning models for sequential data.
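The RC idea of training only the output parameters can be sketched with a bare-bones Echo State Network: a fixed random recurrent reservoir plus a ridge-regression readout. Reservoir size, spectral radius, and the sine-prediction task below are illustrative choices, not recommendations from the webinar.

```python
import numpy as np

# Minimal Echo State Network: the reservoir weights are random and never
# trained; only the linear readout W_out is fitted (ridge regression).
rng = np.random.default_rng(1)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.array([u]) + W @ x)   # untrained recurrent update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(0.2 * np.arange(500))
states = run_reservoir(u[:-1])
target = u[1:]

washout = 50                                 # drop the initial transient
S, y = states[washout:], target[washout:]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)

pred = states @ W_out
mse = float(np.mean((pred[washout:] - target[washout:]) ** 2))
```

Because only W_out is fitted, training reduces to one linear solve, which is exactly why RC suits low-power edge devices.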
The presentation by Cerenaut at the orientation (2021-06-05) for the 5th WBA Hackathon, an online AI competition to implement working memory
https://wba-initiative.org/en/18687/
- Neuroscientific issues
- Architecture details
- Instructions for the CodaLab competition
A Threshold Logic Unit (TLU) is a mathematical function conceived as a crude model, or abstraction, of biological neurons. Threshold logic units are the constitutive units of an artificial neural network. In this paper a positive clock-edge-triggered T flip-flop is designed using the Perceptron Learning Algorithm, a basic design algorithm for threshold logic units. This T flip-flop is then used to design a two-bit up-counter that goes through the states 0, 1, 2, 3, 0, 1… Ultimately, the goal is to show how to design simple logic units based on threshold-logic-based perceptron concepts.
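The Perceptron Learning Algorithm named above can be sketched on a TLU; as a stand-in for the paper's T flip-flop design, the toy example below trains a TLU to compute two-input AND (the flip-flop construction itself is not reproduced here).

```python
# Threshold Logic Unit: fire (1) iff the weighted sum reaches the threshold.
def tlu(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND table
w, b, eta = [0.0, 0.0], 0.0, 1.0

# Perceptron rule: on each error, nudge weights toward the correct output.
# It converges in finitely many passes for linearly separable data.
for _ in range(20):
    for x, t in samples:
        err = t - tlu(w, b, x)
        w = [wi + eta * err * xi for wi, xi in zip(w, x)]
        b += eta * err

outputs = [tlu(w, b, x) for x, _ in samples]
```

A clocked element like the paper's T flip-flop is then built by feeding such trained TLUs' outputs back as inputs on each clock edge.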
Allocations of entropy form a convex cone; this report shows a 3D view of the convex cone of entropy allocation for bipartite and tripartite quantum systems.
Recognizing Locations on Objects by Marcus Lewis (Numenta)
Marcus gave a talk called "Recognizing Locations on Objects" during the HTM Meetup on 11/03/2017.
The brain learns and recognizes objects with independent moving sensors. It’s not obvious how a network of neurons would do this. Numenta has suggested that the brain solves this by computing each sensor’s location relative to the object, and learning the object as a set of features-at-locations. Marcus showed how the brain might determine this “location relative to the object.” He extended the model from Numenta’s recent paper, "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World," so that it computes this location. This extended model takes two inputs: each sensor’s input, and each sensor’s “location relative to the body.” The model connects the columns in such a way that a column can compute its “location relative to the object” from another column’s “location relative to object.” When a column senses a feature, it recalls a union of all locations where it has sensed this feature, then the columns work together to narrow their unions. This extended model essentially takes its sensory input and asks, “Do I know any objects that contain this spatial arrangement of features?”
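The narrowing step described above, where each column recalls a union of candidate locations and the columns jointly shrink those unions, can be sketched in a toy 1D form. The object, features, and fixed sensor offset below are invented simplifications, not Numenta's actual representation.

```python
# Toy "features-at-locations" object model: feature name at each 1D location.
object_model = {0: "edge", 1: "corner", 2: "edge", 5: "corner"}

def candidate_locations(feature):
    """Union of all locations where this feature has been sensed before."""
    return {loc for loc, f in object_model.items() if f == feature}

# Two sensors touch the object; sensor B is known to sit 1 unit right of A.
cands_a = candidate_locations("edge")     # A's union of candidate locations
cands_b = candidate_locations("corner")   # B's union of candidate locations

# Narrowing: keep only A-locations whose offset-shifted copy is consistent
# with B's candidates -- the columns "work together to narrow their unions".
narrowed_a = {loc for loc in cands_a if loc + 1 in cands_b}
```

Here a single additional touch resolves the ambiguity: only one spatial arrangement of the two features exists on the object, answering the model's question "Do I know any objects that contain this spatial arrangement of features?"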
Computational Motor Control: State Space Models for Motor Adaptation (JAIST s...) by hirokazutanaka
This is the lecture 3 note for the JAIST summer school on computational motor control (Hirokazu Tanaka & Hiroyuki Kambara). Lecture video: https://www.youtube.com/watch?v=dtpgJLRt90M
A General Principle of Learning and its Application for Reconciling Einstein’... by Jeffrey Huang
The human brain has an amazing power for solving different cognitive tasks, ranging from visual perception, playing chess, and building airplanes to the discovery of the mass-energy relation E=mc^2. However, up to now, nobody knows how the brain works or what principle underlies its power. In this presentation, a general principle of learning is proposed to understand intelligence like the human one. It attempts to reveal the logical structure underlying intelligence so that we can implement it using machines to help us smooth out differences in individual intelligence, just as machinery does for individual physical power. This can help us to build better human societies of equality, peace, and prosperity.
Malarial Parasite Classification using Recurrent Neural Network (CSCJournals)
Malaria parasite detection relies mainly on the manual examination of Giemsa-stained blood microscopic slides, which is time-consuming, tedious, and prone to error. Automatic malarial parasite analysis and classification has opened a new avenue for early malaria detection, showing the potential to overcome the drawbacks of manual strategies. This paper presents a method for automatic detection of falciparum and vivax plasmodium. Malaria cell segmentation and morphological analysis is a challenging problem due to both the complex nature of the cells and uncertainty in microscopic videos. To improve the performance of malaria parasite segmentation and classification, we segment the red blood cells (RBCs) and use an RNN for classification. Segmented RBCs are classified into normal and infected cells, and the RNN further identifies the type of each infected cell.
Efficient Lattice Rescoring Using Recurrent Neural Network Language Models
X. Liu, Y. Wang, X. Chen, M. J. F. Gales & P. C. Woodland
ICASSP 2014
I introduced this paper at the NAIST Machine Translation Study Group.
MediaEval 2015 - Automatically Estimating Emotion in Music with Deep Long-Sho... (multimediaeval)
In this paper we describe our approach for MediaEval's "Emotion in Music" task. Our method consists of deep Long-Short Term Memory Recurrent Neural Networks (LSTM-RNN) for dynamic Arousal and Valence regression, using acoustic and psychoacoustic features extracted from the songs that have previously been proven effective for emotion prediction in music. Results on the challenge test set demonstrate excellent performance for Arousal estimation (r = 0.613 ± 0.278), but not for Valence (r = 0.026 ± 0.500). Issues regarding the reliability and distribution of the test set annotations are indicated as plausible explanations for these results. By using a subset of the development set that was left out for performance estimation, we determined that the performance of our approach may be underestimated for Valence (Arousal: r = 0.596 ± 0.386; Valence: r = 0.458 ± 0.551).
http://ceur-ws.org/Vol-1436/
http://www.multimediaeval.org
MediaEval 2015 - Time-continuous estimation of real-valued dimensions of emot... (multimediaeval)
In this paper, we describe IRIT's approach to the MediaEval 2015 "Emotion in Music" task. The goal was to predict two real-valued emotion dimensions, namely valence and arousal, in a time-continuous fashion. We chose to use recurrent neural networks (RNN) for their sequence modeling capabilities. Hyperparameter tuning was performed through a 10-fold cross-validation setup on the 431 songs of the development subset. With the baseline set of 260 acoustic features, our best system achieved averaged root mean squared errors of 0.250 and 0.238, and Pearson's correlation coefficients of 0.703 and 0.692, for valence and arousal, respectively. These results were obtained by first making predictions with an RNN comprising only 10 hidden units, smoothed by a moving-average filter, and used as input to a second RNN to generate the final predictions. This system gave our best results on the official test data subset for arousal (RMSE = 0.247, r = 0.588), but not for valence, where predictions were much worse (RMSE = 0.365, r = 0.029). This may be explained by the fact that in the development subset, valence and arousal values were highly correlated (r = 0.626), which was not the case with the test data. Finally, slight improvements over these figures were obtained by adding spectral flatness and spectral valley features to the baseline set.
http://ceur-ws.org/Vol-1436/
http://www.multimediaeval.org
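Both "Emotion in Music" entries above report their results as RMSE and Pearson's r over predicted arousal/valence sequences. As a reference, here is how those two metrics are computed; the five-point sequences below are made-up data, not challenge results.

```python
import math

def rmse(pred, true):
    """Root mean squared error between two equal-length sequences."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))

def pearson_r(pred, true):
    """Pearson's correlation coefficient between two sequences."""
    n = len(pred)
    mp, mt = sum(pred) / n, sum(true) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in true))
    return cov / (sp * st)

true = [0.1, 0.3, 0.5, 0.7, 0.9]       # hypothetical annotated arousal
pred = [0.2, 0.35, 0.5, 0.65, 0.8]     # hypothetical model output

score_rmse = rmse(pred, true)
score_r = pearson_r(pred, true)
```

Note that the two metrics capture different failures: a prediction that is a scaled-down copy of the truth (as here) still scores r = 1 while its RMSE is nonzero, which is why the papers report both.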
EMNLP 2015 Reading Group @ Komachi Lab: "Morphological Analysis for Unsegmented Languages using ..." by Yuki Tomo
These are the slides used to present "Morphological Analysis for Unsegmented Languages using Recurrent Neural Network Language Model" at the EMNLP 2015 reading group held at the Komachi Laboratory, Department of Information and Communication Systems, Tokyo Metropolitan University.
Neural Networks with Anticipation: Problems and Prospects (SSA KPI)
AACIMP 2010 Summer School lecture by Alexander Makarenko. "Applied Mathematics" stream. "General Tasks and Problems of Modelling of Social Systems. Problems and Models in Sustainable Development" course. Part 6.
More info at http://summerschool.ssa.org.ua
Neuron-based time-optimal controller of horizontal saccadic eye movements (Alireza Ghahari)
A neural network model of biophysical neurons in the midbrain for controlling oculomotor muscles during horizontal human saccades is presented. Neural circuitry that includes the omnipause neuron, premotor excitatory and inhibitory burst neurons, the long-lead burst neuron, the tonic neuron, the interneuron, the abducens nucleus, and the oculomotor nucleus is developed to investigate saccade dynamics. The final motoneuronal signals drive a time-optimal controller that stimulates a linear homeomorphic model of the oculomotor plant.
Have We Missed Half of What the Neocortex Does? by Jeff Hawkins (12/15/2017), Numenta
This was a presentation given on December 15, 2017 at the MIT Center for Brains, Minds + Machines as part of their Brains, Minds and Machines Seminar Series.
You can watch the recording of the presentation after Slide 1.
In this talk, Jeff describes a theory that sensory regions of the neocortex process two inputs. One input is the well-known sensory data arriving via thalamic relay cells. We propose the second input is a representation of allocentric location. The allocentric location represents where the sensed feature is relative to the object being sensed, in an object-centric reference frame. As the sensors move, cortical columns learn complete models of objects by integrating sensory features and location representations over time. Lateral projections allow columns to rapidly reach a consensus of what object is being sensed. We propose that the representation of allocentric location is derived locally, in layer 6 of each column, using the same tiling principles as grid cells in the entorhinal cortex. Because individual cortical columns are able to model complete complex objects, cortical regions are far more powerful than currently believed. The inclusion of allocentric location offers the possibility of rapid progress in understanding the function of numerous aspects of cortical anatomy.
Jeff discusses material from these two papers. Others can be found at https://numenta.com/papers
A Theory of How Columns in the Neocortex Enable Learning the Structure of the World
URL: https://doi.org/10.3389/fncir.2017.00081
Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in the Neocortex
URL: https://doi.org/10.3389/fncir.2016.00023
Many oscillatory systems of great interest, such as networks of fireflies, neurons, and relaxation oscillators, exhibit pulsing behavior. The analysis of such oscillators has historically utilized a linear-phase model such as the Kuramoto equation to describe their dynamics. These models accurately describe the behavior of pulsing oscillators on larger timescales, but do not explicitly capture the pulsing nature of the system being analyzed. Indeed, the Kuramoto model and its derivatives abstract away the pulsing dynamics and instead use a constantly advancing phase, thereby blurring the specific dynamics in order to fit an analytically tractable framework. In this thesis, a modification is presented by introducing a phase dependence to the frequency of such oscillators. Consequently, this modification induces clear pulsing behavior, and thus introduces new dynamics, such as nonlinear phase progressions, that more accurately reflect the nature of systems such as neurons, relaxation oscillators, and fireflies. The analysis of this system of equations is presented, and the discovery of a heretofore unknown phenomenon termed periodic stability is described, in which the phase-locked state of the system oscillates between stability and instability at a frequency determined by the mean phase. The implications of this periodic stability for the system, such as oscillations in the coherence, or total degree of synchronization, of the oscillators' trajectories, are discussed. The theoretical predictions made by this novel analysis are simulated numerically and extended to real experimental systems such as electrical Wien-bridge oscillators and neurons, systems previously described using the abstract Kuramoto model. Lattices constructed using this novel model yield predictions widely observed in real biological and chemical systems, such as spiral waves. As a result, this model provides a fresh paradigm for exploring systems of coupled oscillators. The results of this work thus have clear implications for all real systems presently described by the Kuramoto model.
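The thesis's modification can be sketched as a pair of sinusoidally coupled Kuramoto-style oscillators whose frequency depends on phase. The particular frequency function below (slow over most of the cycle, fast near one "pulse" phase) is an illustrative choice to show the nonlinear phase progression, not the form derived in the thesis.

```python
import math

K = 0.8          # coupling strength (illustrative)
OMEGA = 2.0      # base frequency (illustrative)

def freq(theta):
    # Phase-dependent frequency: near-constant slow drift plus a sharp
    # speed-up around theta = pi, mimicking relaxation-oscillator pulsing.
    return OMEGA * (0.2 + 1.8 * math.exp(-4.0 * (1.0 + math.cos(theta))))

def step(th1, th2, dt=1e-3):
    # Kuramoto-style coupling, but with freq(theta) replacing a constant omega.
    d1 = freq(th1) + K * math.sin(th2 - th1)
    d2 = freq(th2) + K * math.sin(th1 - th2)
    return th1 + dt * d1, th2 + dt * d2

th1, th2 = 0.0, 2.5
for _ in range(200_000):                 # integrate 200 time units (Euler)
    th1, th2 = step(th1, th2)

phase_gap = abs(math.remainder(th1 - th2, 2 * math.pi))
```

Because freq is strictly positive, no fixed point exists and both phases keep advancing, but they do so non-uniformly, lingering and then pulsing forward, which is exactly the behavior a constant-frequency Kuramoto model blurs away.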
Incorporating Kalman Filter in the Optimization of Quantum Neural Network Par... (Waqas Tariq)
The Kalman filter has been used for the estimation of the instantaneous states of linear dynamic systems. It is a good tool for inferring missing information from noisy measurements. The quantum neural network is another approach to merging fuzzy logic with the neural network, achieved by applying quantum mechanics theory to building the structure of the neural network. The gradient descent algorithm has been widely used in training neural networks, but the problem of local minima is one of its disadvantages. This paper presents an algorithm to train the quantum neural network using the extended Kalman filter.
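The general idea of EKF training, treating the network weights as the state of an extended Kalman filter and the training targets as measurements, can be sketched for a single sigmoid neuron. The paper's quantum network and its exact equations are not reproduced; the data, noise variance, and neuron below are made-up illustrations.

```python
import numpy as np

# EKF as a trainer: state = weights, measurement = desired output.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
w_true = np.array([1.5, -2.0])           # "unknown" weights to recover
X = rng.normal(size=(200, 2))
y = sigmoid(X @ w_true)                  # noiseless targets for the sketch

w = np.zeros(2)                          # state estimate (the weights)
P = np.eye(2)                            # state covariance
Rm = 1e-2                                # assumed measurement noise variance

for x_i, t_i in zip(X, y):
    h = sigmoid(w @ x_i)                 # predicted measurement
    Hj = (h * (1.0 - h)) * x_i           # Jacobian of output w.r.t. weights
    S = Hj @ P @ Hj + Rm                 # innovation variance
    K = (P @ Hj) / S                     # Kalman gain
    w = w + K * (t_i - h)                # measurement update on the weights
    P = P - np.outer(K, Hj @ P)          # covariance update

pred_err = float(np.mean(np.abs(sigmoid(X @ w) - y)))
```

Unlike plain gradient descent with a fixed learning rate, each EKF update is scaled by the covariance P, giving a per-weight, data-dependent step size, which is the property the paper exploits to sidestep slow convergence.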
NeuroQuantology is an international, interdisciplinary, open-access, peer-reviewed journal that publishes original research and review articles on the interface between quantum physics and neuroscience. The journal focuses on exploring the neural mechanisms underlying consciousness, cognition, perception, and behavior from a quantum perspective. NeuroQuantology is published monthly.
EEG time series data analysis in a focal cerebral ischemic rat model (ijbesjournal)
The mammalian brain exists in a number of attractor states. In order to characterize these attractors, we collected time series data from EEG recordings of rat models. The time series were obtained by recording the frontoparietal, occipital, and temporal regions of the rat brain. Significant changes were observed in the dimensionalities of these brain attractors between the normal state, the focal ischemic state, and the drug-induced state. Thus, these three states were characterized by unique Lyapunov exponents, correlation dimensions, and embedding dimensions. The inverse of the Lyapunov exponent gave us the long-term coherence of the rat brain and was found to differ for the three states. The autocorrelation function measured the mean similarity of the EEG signal with itself after a time t. The degree of decay was high, indicating maximum correlation in the time series. Thus, the autocorrelation functions clearly indicate the effect of focal cerebral ischemia and induced drugs on the rat brain.
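The autocorrelation analysis described above, the mean similarity of a signal with itself after a lag t, can be sketched as follows; the synthetic sinusoidal "signal" stands in for real EEG data, which is not reproduced here.

```python
import math

def autocorrelation(x, lag):
    """Normalized autocorrelation of sequence x at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / n
    return cov / var

# Synthetic strongly periodic test signal standing in for an EEG trace.
signal = [math.sin(0.1 * i) for i in range(1000)]

r0 = autocorrelation(signal, 0)      # lag 0: perfect self-similarity
r5 = autocorrelation(signal, 5)      # short lag: still highly correlated
r300 = autocorrelation(signal, 300)  # long lag: correlation has decayed
```

For EEG, the rate at which this function decays with lag is the quantity compared across the normal, ischemic, and drug-induced states.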
Similar to Quantum brain a recurrent quantum neural network model to describe eye tracking of moving targets (20)
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Quantum Brain: A Recurrent Quantum Neural Network Model to Describe Eye Tracking of Moving Targets
Laxmidhar Behera¹, Indrani Kar¹, and Avshalom Elitzur²
arXiv:q-bio.NC/0407001 v1 2 Jul 2004
¹ Department of Electrical Engineering, Indian Institute of Technology, Kanpur 208 016, UP, INDIA
² Unit of Interdisciplinary Studies, Bar-Ilan University, 52900 Ramat-Gan, Israel
A theoretical quantum brain model is proposed using a nonlinear Schroedinger wave equation.
The model proposes that there exists a quantum process that mediates the collective response of a
neural lattice (classical brain). The model is used to explain eye movements when tracking moving
targets. Using a Recurrent Quantum Neural Network (RQNN) while simulating the quantum brain
model, two very interesting phenomena are observed. First, as eye sensor data is processed in a
classical brain, a wave packet is triggered in the quantum brain. This wave packet moves like a
particle. Second, when the eye tracks a fixed target, this wave packet moves not in a continuous but
rather in a discrete mode. This result reminds one of the saccadic movements of the eye consisting
of ’jumps’ and ’rests’. However, such a saccadic movement is intertwined with smooth pursuit
movements when the eye has to track a dynamic trajectory. In a sense, this is the first theoretical
model explaining the experimental observation reported concerning eye movements in a static scene
situation. The resulting prediction is found to be very precise and efficient in comparison to classical
objective modeling schemes such as the Kalman filter.
I. INTRODUCTION
Information processing in the brain is mediated by the
dynamics of large, highly interconnected neuronal populations. The activity patterns exhibited by the brain are
extremely rich; they include stochastic weakly correlated
local firing, synchronized oscillations and bursts, as well
as propagating waves of activity. Perception, emotion
etc. are supposed to be emergent properties of such a
complex nonlinear neural circuit.
Instead of considering one of the conventional neural
architectures [1, 2, 3, 4, 5], an alternative neural architecture is proposed here for neural computing. Indeed,
there are certain aspects of brain functions that still appear to have no satisfactory explanation. As an alternative, researchers [6, 7, 8, 9] are investigating whether
the brain can demonstrate quantum mechanical behavior. According to current research, microtubules, the basic components of neural cytoskeleton, are very likely to
possess quantum mechanical properties due to their size
and structure. The tubulin protein, which is the structural block of microtubules, has the ability to flip from
one conformation to another as a result of a shift in the
electron density localization from one resonance orbital to
another. These two conformations act as two basis states
of the system according to whether the electrons inside
the tubulin hydrophobic pocket are localized closer to α
or β tubulin. Moreover the system can lie in a superposition of these two basis states, that is, being in both states
simultaneously, which can give a plausible mechanism for
creating a coherent state in the brain. To give credence to
the possibility of existence of a quantum brain, Penrose
([10]) argued that the human brain must utilize quantum
mechanical effects when demonstrating problem solving
feats that cannot be explained algorithmically.
In this paper, instead of going into biological details
of the brain, we propose a theoretical quantum brain
model. The model is referred to as Recurrent Quantum
Neural Network (RQNN). An earlier version [11] of this
model used a linear neural circuit to set up the potential
field in which the quantum brain is dynamically excited.
The present model uses a nonlinear neural circuit. This
fundamental change in the architecture has yielded two
novel features. The wave packets, f(x, t) = |ψ(x, t)|²,
are moving like particles. Here ψ(x, t) is the solution of
the nonlinear Schroedinger wave equation that describes
the quantum brain model proposed in this paper to explain eye movements for tracking moving targets. The
other very interesting observation is that the movements
of the wave packets while tracking a fixed target are not
continuous but discrete. These observations accord with
the well-known saccadic movement of the eye [12]. In
a way, our model is the first of its kind to explain the
nature of eye movements in static scenes that consists of
”jumps” (saccades) and ”rests” (fixations). We expect
this result to inspire other researchers to further investigate the possible quantum dynamics of the brain.
II. A THEORETICAL QUANTUM BRAIN MODEL
An impetus to hypothesize a quantum brain model
comes from the brain’s necessity to unify the neuronal
response into a single percept. Anatomical, neurophysiological and neuropsychological evidence, as well as brain
imaging using fMRI and PET scans, show that separate
functional maps exist in the biological brain to code separate features such as direction of motion, location, color
and orientation. How does the brain compute all this
data to have a coherent perception? In this paper, a very
simple model of a quantum brain is proposed where a collective response of a neuronal lattice is modeled using a
Schroedinger wave equation as shown in FIG. 1. In this
figure, it is shown that an external stimulus reaches each
neuron in a lattice with a probability amplitude function
ψi . Such a hypothesis would suggest that the carrier of
the stimulus performs quantum computation. The collective response of all the neurons is given by the superposition equation:
ψ = c_1 ψ_1 + c_2 ψ_2 + .. + c_N ψ_N = Σ_{i=1}^{N} c_i ψ_i    (1)
We suggest that the time evolution of the collective response ψ is described by the Schroedinger wave equation:
iℏ ∂ψ(x, t)/∂t = −(ℏ²/2m) ∇²ψ(x, t) + V(x)ψ(x, t)    (2)

where 2πℏ is Planck's constant, ψ(x, t) is the wave function (probability amplitude) associated with the quantum object at space-time point (x, t), and m is the mass of the quantum object. Further symbols such as i
and ∇ carry their usual meaning in the context of the
Schroedinger wave equation. Another way to look at our
proposed quantum brain is as follows. A neuronal lattice
sets up a spatial potential field V (x). A quantum process
described by a quantum state ψ which mediates the collective response of a neuronal lattice evolves in the spatial
potential field V (x) according to equation (2). Thus the
classical brain sets up a spatio-temporal potential field
and the quantum brain is excited by this potential field
to provide a collective response. In the next section, we
present a possible eye-movement model for tracking a moving target.
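As a concrete illustration of how the evolution in equation (2) can be integrated numerically, the following sketch propagates a wave packet on a 1-D grid with a split-step Fourier scheme. This is our own illustration, not the paper's implementation; the harmonic stand-in potential, grid, time step, and parameter values are all assumptions.

```python
import numpy as np

# Split-step Fourier propagation of eq. (2):
# i*hbar dpsi/dt = -(hbar^2/2m) d2psi/dx2 + V(x) psi
hbar, m = 1.0, 2.5
N = 400
x = np.linspace(-10, 10, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # spectral wavenumbers
dt = 0.001

V = 0.5 * x**2                            # stand-in potential V(x)
psi = np.exp(-(x - 1.0)**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize to unit probability

half_v = np.exp(-1j * V * dt / (2 * hbar))      # half step in the potential
kin = np.exp(-1j * hbar * k**2 * dt / (2 * m))  # full kinetic step in k-space

for _ in range(1000):
    psi = half_v * psi
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi = half_v * psi

f = np.abs(psi)**2                        # wave packet f(x,t) = |psi(x,t)|^2
print(round(np.sum(f) * dx, 6))           # 1.0: the norm is conserved
```

Each factor in the split-step scheme is a pure phase, so the probability ∫ f dx stays at 1 to machine precision, mirroring the unitary evolution of equation (2).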
FIG. 1: Quantum Brain - A Theoretical Model. (Figure: a single stimulus reaches each neuron of a neural lattice with probability amplitudes ψ_1, ψ_2, ..., ψ_N; the collective response is ψ = c_1 ψ_1 + c_2 ψ_2 + .. + c_N ψ_N.)

III. AN EYE TRACKING MODEL

Let us consider a plausible biological mechanism for eye tracking using the quantum brain model proposed in section II. The mechanism of eye movements tracking a moving target consists of three stages as shown in FIG. 2: (i) stochastic filtering of noisy data that impact the eye sensors; (ii) a predictor that predicts the next spatial position of the moving target; and (iii) a biological motor control system that aligns the eye pupil along the moving target's trajectory. The biological eye sensor fans out the input signal y to a specific neural lattice in the visual cortex. For clarity, Figure 2 shows a one-dimensional array of neurons whose receptive fields are excited by the signal input y reaching each neuron through a synaptic connection described by a nonlinear map. The neural lattice responds to the stimulus by setting up a spatial potential field, V(x, t), which is a function of the external stimulus y and the estimated trajectory ŷ of the moving target:

V(x, t) = Σ_{i=1}^{n} W_i(x, t) φ_i(ν(t))    (3)

where φ_i(.) is a Gaussian kernel function, n represents the number of such Gaussian functions describing the nonlinear map that represents the synaptic connections, ν(t) represents the difference between y and ŷ, and W represents the synaptic weights as shown in FIG. 2. The Gaussian kernel function is taken as:

φ_i(ν(t)) = exp(−(ν(t) − g_i)²)    (4)

where g_i is the center of the i-th Gaussian function, φ_i. This center is chosen from the input space described by the input signal, ν(t), through uniform random sampling.

Our quantum brain model proposes that a quantum process mediates the collective response of this neuronal lattice, which sets up a spatial potential field V(x, t). This happens when the quantum state associated with this quantum process evolves in this potential field. The spatio-temporal evolution follows as per equation (2). We hypothesize that this collective response is described by a wave packet, f(x, t) = |ψ(x, t)|², where the term ψ(x, t) represents a quantum state. In a generic sense, we assume that a classical stimulus in a brain triggers a wave packet in the counterpart 'quantum brain'. This subjective response, f(x, t), is quantified using the following estimate equation:

ŷ(t) = ∫ x(t) f(x, t) dx    (5)
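A minimal sketch of how the potential field of equations (3)-(4) can be computed. The paper does not specify an implementation; the number of kernels, the grid, the centers g_i, and the weight values below are illustrative assumptions.

```python
import numpy as np

# Potential field of eqs. (3)-(4): n Gaussian kernels phi_i evaluated at the
# error signal nu, mixed by synaptic weights W_i(x) over a spatial grid.
rng = np.random.default_rng(0)
n, N = 25, 400                         # number of kernels, grid points (assumed)
x = np.linspace(-10, 10, N)
g = rng.uniform(-2, 2, size=n)         # centers g_i, sampled uniformly
W = rng.normal(0.0, 0.1, size=(n, N))  # weights W_i(x), one row per kernel

def potential(nu):
    """V(x) = sum_i W_i(x) * exp(-(nu - g_i)^2), eqs. (3)-(4)."""
    phi = np.exp(-(nu - g)**2)         # eq. (4): kernel activations
    return phi @ W                     # eq. (3): weighted sum over kernels

V = potential(0.3)
print(V.shape)                         # (400,): one value per grid point
```

The kernels depend only on the scalar error ν(t), while the weights carry the spatial dependence, so a single matrix-vector product yields V over the whole grid.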
The estimate equation is motivated by the fact that the wave packet, f(x, t) = |ψ(x, t)|², is interpreted as the probability density function. Based on this estimate, ŷ, the predictor estimates the next spatial position of the moving target. To simplify our analysis, the predictor is made silent. Thus its output is the same as ŷ. The biological motor control is commanded to fixate the eye pupil to align with the target position, which is predicted to be at ŷ. Obviously, we have assumed that the biological motor control is ideal.
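Discretized, the estimate equation (5) is just the mean of the density f(x, t) = |ψ(x, t)|². A small sketch of our own, with an assumed grid and packet:

```python
import numpy as np

# Estimate equation (5): y_hat = integral of x * f(x,t) dx,
# i.e. the mean position under the probability density f = |psi|^2.
x = np.linspace(-10, 10, 400)
dx = x[1] - x[0]
psi = np.exp(-(x - 2.0)**2 / 2)        # wave packet centered at x = 2 (assumed)
f = np.abs(psi)**2
f /= f.sum() * dx                      # normalize: integral of f dx = 1
y_hat = np.sum(x * f) * dx             # eq. (5): mean position
print(round(y_hat, 3))                 # 2.0: the estimate tracks the packet's center
```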
After the above-mentioned simplification, the closed form dynamics of the model described by Figure 2 becomes:

iℏ ∂ψ(x, t)/∂t = −(ℏ²/2m) ∇²ψ(x, t) + ζG( y(t) − ∫ x |ψ(x, t)|² dx ) ψ(x, t)    (6)

where G(.) is a Gaussian kernel map introduced to nonlinearly modulate the spatial potential field that excites the dynamics of the quantum object. In fact ζG(.) = V(x, t) where V(x, t) is given in equation (3).
FIG. 2: Conceptual framework for the Recurrent Quantum Neural Network. (Figure: the input y, with the feedback estimate ŷ subtracted, drives n Gaussian kernels with weights W_11 ... W_nN; the resulting potential V(x) excites the Schroedinger wave equation; the estimate ∫ ψ* x ψ dx feeds the predictor and motor control.)
The nonlinear Schroedinger wave equation given by
equation (6) is one-dimensional with cubic nonlinearity.
Interestingly, the closed form dynamics of the Recurrent
Quantum Neural Network (equation (6)) closely resembles a nonlinear Schroedinger wave equation with cubic
nonlinearity studied in quantum electrodynamics [13]:
iℏ ∂ψ(x, t)/∂t = ( −(ℏ²/2m) ∇² − e²/r ) ψ(x, t) + e² ( ∫ |ψ(x′, t)|² / |x − x′| dx′ ) ψ(x, t)    (7)

where m is the electron mass, e the elementary charge and r the magnitude of |x|. Also, nonlinear Schroedinger wave equations with cubic nonlinearity of the form ∂A(t)/∂t = c_1 A + c_3 |A|² A, where c_1 and c_3 are constants, frequently appear in nonlinear optics [14] and in the study of solitons [15, 16, 17, 18].
In equation (6), the unknown parameters are weights
Wi (x, t) associated with the Gaussian kernel, mass m,
and ζ, the scaling factor to actuate the spatial potential field. The weights are updated using the Hebbian learning algorithm:

∂W_i(x, t)/∂t = β φ_i(ν(t)) f(x, t)    (8)

where ν(t) = y(t) − ŷ(t).
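Equations (6) and (8) together define the closed loop: estimate ŷ, form the error ν, build the potential, propagate ψ, and update the weights. A rough single-loop discretization of our own (split-step sketch; the paper's reported N = 400, β = 0.86, m = 2.5, ζ = 2000 are used, but the time step, grid, and kernel layout are assumptions):

```python
import numpy as np

# One tracking loop combining the closed-form dynamics (6) with the
# Hebbian weight update (8).
hbar, m, zeta, beta, dt = 1.0, 2.5, 2000.0, 0.86, 1e-4
N, n = 400, 25
x = np.linspace(-10, 10, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
rng = np.random.default_rng(1)
g = rng.uniform(-2, 2, size=n)            # kernel centers g_i (assumed range)
W = np.zeros((n, N))                      # synaptic weights W_i(x)
psi = np.exp(-x**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def rqnn_step(psi, W, y):
    f = np.abs(psi)**2                    # wave packet f(x,t) = |psi|^2
    y_hat = np.sum(x * f) * dx            # estimate, eq. (5)
    nu = y - y_hat                        # error signal nu(t)
    phi = np.exp(-(nu - g)**2)            # Gaussian kernels, eq. (4)
    V = zeta * (phi @ W)                  # potential: zeta*G(.) = V(x,t)
    half = np.exp(-1j * V * dt / (2 * hbar))
    psi = half * psi                      # half potential step
    psi = np.fft.ifft(np.exp(-1j * hbar * k**2 * dt / (2 * m)) * np.fft.fft(psi))
    psi = half * psi                      # second half potential step
    W = W + dt * beta * np.outer(phi, f)  # Hebbian rule, eq. (8)
    return psi, W, y_hat

for _ in range(200):
    psi, W, y_hat = rqnn_step(psi, W, 2.0)   # a noisy y(t) would go here; dc target 2.0
```

Because the potential enters as a real phase, the packet's norm stays at 1 while the weights, and hence the potential shaping the packet, adapt toward the input.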
The idea behind the proposed quantum computing
model is as follows. As an individual observes a moving target, the uncertain spatial position of the moving
target triggers a wave packet within the quantum brain.
The quantum brain is so hypothesized that this wave
packet turns out to be a collective response of a classical
neural lattice. As we combine equations (6) and (8), it is
desired that there exist some parameters m, ζ and β such
that each specific spatial position x(t) triggers a unique
wave packet, f(x, t) = |ψ(x, t)|², in the quantum brain.
This brings us to the question whether the closed form
dynamics can exhibit soliton properties. As pointed out
above, our equation has a form that is known to possess
soliton properties for a certain range of parameters and
we just have to find those parameters for each specific
problem.
We would like to reiterate the importance of the soliton properties. According to our model, eye tracking
means tracking of a wave packet in the domain of the
quantum brain. The biological motor control aligns the
eye pupil along the spatial position of the external target that the eye tracks. As the eye sensor receives data
y from this position, the resulting error stimulates the
quantum brain. In a noisy background, if the tracking is
accurate, then this error correcting signal ν(t) has little
effect on the movement of the wave packet. Precisely, it
is the actual signal content in the input y(t) that moves
the wave packet along the desired direction which, in effect, achieves the goal of the stochastic filtering part of
the eye movement for tracking purposes.
IV. SIMULATION RESULTS
In this section we present simulation results to test
target tracking through eye movement where targets are
either fixed or moving.
For fixed target tracking, we have simulated a stochastic filtering problem of a dc signal embedded in Gaussian
noise. As the eye tracks a fixed target, the corresponding
dc signal is taken as ya(t) = 2.0, embedded in Gaussian noise with SNR (signal to noise ratio) values of 20 dB, 6 dB and 0 dB.
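The noisy input can be generated by fixing the noise variance from the target SNR via the standard power ratio; a sketch under assumed sampling choices:

```python
import numpy as np

# The fixed-target input: dc signal y_a(t) = 2.0 embedded in Gaussian noise
# at a chosen SNR, using SNR_dB = 10*log10(P_signal / P_noise).
rng = np.random.default_rng(42)
A, snr_db = 2.0, 0.0                    # dc level; 0 dB is the noisiest case
t = np.arange(0.0, 4.0, 0.001)          # 4 s sampled at 1 kHz (our own choice)
noise_var = A**2 / 10**(snr_db / 10)    # noise power for the requested SNR
y = A + rng.normal(0.0, np.sqrt(noise_var), size=t.shape)
# the sample mean of y stays close to the dc level A
```

At 0 dB the noise power equals the signal power (variance 4 here), which is why the noise envelope in FIG. 3 is so large.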
We compare the results with the performance of a
Kalman filter [19] designed for this purpose. It should
be noted that the Kalman filter has the a priori knowledge that the embedded signal is a dc signal whereas the
RQNN is not provided with this knowledge. The Kalman
filter also makes use of the fact that the noise is Gaussian and estimates the variance of the noise based on this
assumption. Thus it is expected that the performance of
the Kalman filter will degrade as the noise becomes non-Gaussian. In contrast, the RQNN model does not make
any assumption about the noise.
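For reference, the Kalman baseline for a constant signal reduces to a scalar filter. A sketch of our own, with the constant-state model and the noise variance handed to the filter, as the paper notes:

```python
import numpy as np

# Scalar Kalman filter for a constant (dc) state -- the comparison scheme.
# It is told the state is constant and is given the Gaussian measurement
# noise variance R; the RQNN receives neither. Values are illustrative.
rng = np.random.default_rng(7)
R = 4.0                                  # measurement noise variance (0 dB for A = 2)
y = 2.0 + rng.normal(0.0, np.sqrt(R), size=4000)

x_hat, P = 0.0, 1e3                      # initial estimate and covariance
for z in y:
    # predict: a constant state with no process noise carries x_hat, P over
    K = P / (P + R)                      # Kalman gain
    x_hat = x_hat + K * (z - x_hat)      # measurement update
    P = (1.0 - K) * P                    # covariance update
# x_hat converges to (approximately) the sample mean of the measurements
```

With no process noise the gain K shrinks as 1/t, so the filter effectively averages the measurements, which is optimal for Gaussian noise but degrades when that assumption fails.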
It is observed that there are certain values of β, m,
ζ and N for which the model performs optimally. A
univariate marginal distribution algorithm [11] was used
to get near optimal parameters while fixing N = 400 and ℏ = 1.0. The selected values of these parameters are as follows for all levels of SNR:

β = 0.86; m = 2.5; ζ = 2000    (9)

FIG. 3: (top) Eye tracking of a fixed target in a noisy environment of 0 dB SNR: 'a' represents the fixed target, 'b' represents target tracking using the RQNN model and 'c' represents target tracking using a Kalman filter. The noise envelope is represented by the curve 'd'; (bottom) Snapshots of the wave packets at different instances corresponding to the marker points (1,2,3) shown in the top figure. The solid line represents the initial wave packet assigned to the Schroedinger wave equation.

FIG. 4: The continuous line represents the tracking error using the RQNN model while the broken line represents the tracking error using the Kalman filter.
The comparative performance of eye tracking in terms
of rms error for all the noise levels is shown in Table I.
It is easily seen from Table I that the rms tracking error
of RQNN is much less than that of the Kalman filter.
Moreover, RQNN performs equally well for all the three
categories of noise levels, whereas the performance of the
Kalman filter degrades with the increase in noise level. In
this sense we can say that our model performs the tracking with a greater efficiency compared to the Kalman
filter. The exact nature of trajectory tracking is shown
for 0 dB SNR in FIG. 3. In this figure, the noise envelope is shown, and obviously its size is large due to the high noise content in the signal. The figure shows the trajectory of the eye movement as the eye focuses on a fixed target. To better appreciate the tracking performance, an error plot is shown in FIG. 4. Although Kalman filter tracking is continuous, the RQNN model tracking consists of 'jumps' and 'fixations'. As the alignment of the eye pupil becomes closer to the target position, the 'fixation' time also increases. Similar tracking behaviour was also observed for the SNR values of 20 and 6 dB. These theoretical results are very interesting when compared to experimental results in the field of eye-tracking. In eye-tracking experiments, it is known that eye movements in static scenes are not performed continuously, but consist of "jumps" (saccades) and "rests" (fixations). Eye-tracking results are represented as lists of fixation data. Furthermore, if the information is simple or familiar, eye movement is comparatively smooth. If it is tricky or new, the eye might pause or even flip back and forth between images. Similar results are given by our simulations. Our model tracks the dc signal, which can be thought of as equivalent to a static scene, in discrete steps rather than in a continuous fashion. This is very clearly understood from the tracking error in FIG. 4.

TABLE I: Performance comparison between Kalman filter and RQNN for various levels of Gaussian noise

Noise level (dB)   RMS error (Kalman filter)   RMS error (RQNN)
20                 0.0018                      0.000040
6                  0.0270                      0.000062
0                  0.0880                      0.000090

The other interesting aspect of the results is the movement of the wave packets. In Figure 3 (bottom), snapshots of wave packets are plotted at different instances corresponding to the marker points shown along the desired trajectory. It can be noticed that a very flat initial Gaussian wave packet first moves to the left, and then proceeds toward the right until the mean of the wave packet exactly matches the actual spatial position. A similar pattern of movement of the wave packets was also noticed in the case of 20 and 6 dB SNR. The wave packet movement is compared with our previous work [11] in FIG. 5. The initial wave packet in the previous model first splits into two parts, then moves in a continuous fashion, ultimately going into a state with a mean of approximately 2 but with high variance. In contrast, in the present model there is no splitting of the wave packet, the movement is discrete, and the variance is also much smaller. Thus the soliton behavior of the present model is very much pronounced.

FIG. 5: Wave packet movements for RQNN with linear weights

To analyze the eye movement following a moving target, a sinusoidal signal ya(t) = 2 sin(2π10t) is taken as the desired dynamic trajectory. This signal is embedded in 20 dB Gaussian noise. The parameter values for tracking this signal were fixed at β = 0.01, m = 1.75 and ζ = −250. It is observed that during the training phase, the wave packet jumps from time to time, thus changing the tracking error until a steady state trajectory following is achieved. This feature is clearly understood from the tracking error plot shown in FIG. 6. In this figure, it is shown that the wave packet jumped six times before the first smooth movement started. Again, this jump took place four times before the second smooth movement started and ultimately achieved a steady state. When the steady state is achieved, the tracking is efficient and the wave packet movement is continuous, as shown in FIG. 7. The snapshots of the wave packets are plotted for three different instances of time indicated by the marker points (1,2,3) shown in the trajectory tracking. When the signal is at position 1, the corresponding wave packet has a mean at 0. When the signal is at position 2, the corresponding wave packet has a mean at +2, and the mean of the wave packet moves to -2 when the signal goes to position 3. This type of continuous movement of the wave packet takes place after a series of 'jumps' of random nature. During continuous movement of the wave packets, trajectory tracking is smooth, denoted as smooth pursuit movement in the context of biological eye tracking. This theoretical result is very similar in nature to what has been observed experimentally [20].
FIG. 6: Saccadic and pursuit movement of the eye during dynamic trajectory following
V. CONCLUSION
The nature of eye movement has been studied in this
article using the proposed RQNN model, where the predictor and motor control are assumed to be ideal. The
most important finding is that our theoretical model of
eye-tracking agrees with previously observed experimental results. The model predicts that eye movements will
be of saccadic type while following a static trajectory.
In the case of dynamic trajectory following, eye movement consists of saccades and smooth pursuits. In this
sense, the proposed quantum brain concept in this paper
is very successful in explaining the nature of eye movements. Earlier explanation [12] for saccadic movement
has been primarily attributed to motor control mechanism whereas the present model emphasizes that such
eye movements are due to decision making process of the
brain - albeit quantum brain. Thus the significant contribution of this paper to explain biological eye-movement
as a neural information processing event may inspire researchers to study quantum brain models from the biological perspective.
The other significant contribution of this paper is the
prediction efficiency of the proposed model over the prevailing model. The stochastic filtering of a dc signal using RQNN is 1000 times more accurate compared to a
Kalman filter.
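This factor can be cross-checked from the rms errors in Table I; the ratio grows with the noise level, from about 45 at 20 dB to roughly 980 at 0 dB, so the quoted factor corresponds to the noisiest case:

```python
# Cross-checking the "1000 times" figure against the rms errors in Table I.
kalman = {20: 0.0018, 6: 0.0270, 0: 0.0880}      # rms error, Kalman filter
rqnn = {20: 0.000040, 6: 0.000062, 0: 0.000090}  # rms error, RQNN
ratios = {db: kalman[db] / rqnn[db] for db in (20, 6, 0)}
print({db: round(r) for db, r in ratios.items()})  # {20: 45, 6: 435, 0: 978}
```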
At this point the paper is silent about the exact biological connection between the classical and quantum brain since it is not clear to us. The model just assumes that the quantum brain is excited by the potential field set up by the classical brain. Another obvious question is that of decoherence. In this regard, we admit that the model proposed here is highly idealized since we have used the Schroedinger wave equation. We intend to replace the Schroedinger wave equation by a density matrix approach in our future work. Also, the phase transition analysis of the closed form dynamics, given in equation (6) with respect to the various parameters m, ζ, β and N, has been kept for future work.

FIG. 7: (top) Eye tracking of a moving target in a noisy environment of 20 dB SNR: 'a' represents a moving target, 'b' represents target tracking using the RQNN model; (bottom) Snapshots of the wave packets at different instances corresponding to the marker points (1,2,3) shown in the top figure. The solid line represents the initial wave packet assigned to the Schroedinger wave equation.
Finally, we believe that apart from the computational power derived from quantum computing, quantum learning systems will also provide a potent framework to study the subjective aspects of the nervous system [21]. The challenge to bridge the gap between physical and mental (or objective and subjective) notions of matter may be most successfully met within the framework of quantum learning systems. In this framework, we have proposed a notion of a quantum brain, and a Recurrent Quantum Neural Network has been hypothesized as a first step towards a neural computing model.

[1] M. A. Cohen and S. Grossberg, IEEE Trans. Systems, Man and Cybernetics 13, 815 (1983).
[2] S. Amari, IEEE Trans. SMC SMC-13, 741 (1983).
[3] L. Behera, M. Gopal, and S. Chaudhury, IEEE Trans. Neural Networks 7, 1401 (1996).
[4] L. Behera, S. Chaudhury, and M. Gopal, IEE Proceedings Control Theory and Applications 145, 134 (1998).
[5] D. Amit, Modeling Brain Function (Springer-Verlag, Berlin/Heidelberg, 1989).
[6] J. A. Tuszynski, S. R. Hameroff, M. V. Sataric, B. Trpisova, and M. L. A. Nip, Journal of Theoretical Biology 174, 371 (1995).
[7] G. Vitiello, International Journal of Modern Physics B 9, 973 (1995).
[8] S. Hagan, S. R. Hameroff, and J. A. Tuszynski, http://arxiv.org/abs/quant-ph/0005025 (2000).
[9] A. Mershin, D. V. Nanopoulos, and E. Skoulakis, http://arxiv.org/abs/quant-ph/0007088 (2000).
[10] R. Penrose, Shadows of the Mind (Oxford University Press, 1994).
[11] L. Behera, B. Sundaram, G. Singhal, and M. Agarawal, IEEE Trans. Neural Networks (2003), revised and submitted.
[12] A. T. Bahill and L. Stark, Scientific American 240, 84 (1979).
[13] S. Gupta and R. Zia, Journal of Computer and System Sciences 63, 355 (2001).
[14] R. Boyd, Nonlinear Optics (Academic Press, 1991).
[15] E. A. Jackson, Perspectives of Nonlinear Dynamics (Cambridge, 1991).
[16] I. Bialynicki-Birula and J. Mycielski, Annals of Physics 100, 62 (1976).
[17] A. S. Davydov, Biology and Quantum Mechanics (Pergamon Press, Oxford, 1982).
[18] A. C. Scott, F. Y. F. Chu, and D. W. McLaughlin (1973), vol. 61.
[19] M. S. Grewal and A. P. Andrews, Kalman Filtering: Theory and Practice Using MATLAB (Wiley-Interscience, 2001).
[20] A. T. Bahill, M. J. Iandolo, and B. T. Troost, Vision Research 20, 923 (1980).
[21] H. Atmanspacher, Discrete Dynamics 8, 51 (2004).