Computational Cognitive Models of Spatial Memory in Navigation Space: A Review, by Seonghyun Kim
This document reviews computational cognitive models of spatial memory in navigation. It discusses both symbolic spatial memory models and neural network-based models that have been evaluated in real-world environments or simulations. Symbolic models emphasize explicit rules and local representations, while neural networks use distributed representations learned from training data. The document also discusses spatial memory models within cognitive architectures like ACT-R. Overall, it notes that models aiming for both high biological plausibility and ability to handle complex real-world environments face significant challenges.
How Environment and Self-motion Combine in Neural Representations of Space, by Seonghyun Kim
This document discusses how head direction cells (HDCs) and grid cells (GCs) in the rat brain may support path integration through their firing patterns. Several findings suggest that HDC firing reflects angular path integration, maintaining directional firing after manipulations and depending on self-motion cues. Evidence also indicates that GC firing reflects translational path integration, preserving grid patterns across environments and rescaling grids relative to boundaries. The document concludes that HDC and GC firing patterns integrate self-motion and environmental information to reduce errors in estimating self-location.
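The angular and translational integration described above can be pictured as simple dead reckoning. The toy function below is only an illustration of the computation HDC and GC firing is thought to support, not a model from the document:

```python
import math

def path_integrate(steps, start=(0.0, 0.0), heading=0.0):
    """Dead-reckon position from (turn, distance) self-motion samples.

    Each step is (angular_change_in_radians, distance_moved).
    Accumulating the turns is the HDC-like angular integration;
    accumulating the displacements is the GC-like translational part.
    """
    x, y = start
    for turn, dist in steps:
        heading += turn                 # angular path integration
        x += dist * math.cos(heading)   # translational path integration
        y += dist * math.sin(heading)
    return x, y, heading

# A square path of four unit moves with 90-degree left turns
# should return the estimate to the origin.
pos = path_integrate([(0.0, 1.0), (math.pi / 2, 1.0),
                      (math.pi / 2, 1.0), (math.pi / 2, 1.0)])
```

In a real animal this estimate drifts, which is exactly why the document argues that environmental cues (boundaries, landmarks) are combined with self-motion to correct the error.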
Learning Anticipation via Spiking Networks: Application to Navigation Control, by Seonghyun Kim
The document describes a spiking neural network model for navigation control in robots. The network uses spike-timing dependent plasticity (STDP) as an unsupervised learning rule to enable obstacle avoidance and target approaching behaviors. Simulation results show that with STDP learning, the number of collisions decreases for obstacle avoidance. For target approaching, trajectories improve with visual input. The network provides an efficient method for navigation tasks inspired by spatial representations in rat hippocampus neurons.
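As a rough illustration of the learning rule involved, a pair-based STDP weight change can be written as a function of the pre/post spike-time difference. The amplitudes and time constant below are illustrative choices, not the paper's values:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).

    Pre-before-post spiking (dt > 0) potentiates the synapse (LTP);
    post-before-pre (dt < 0) depresses it (LTD). The effect decays
    exponentially with the spike-time difference.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # LTP
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)   # LTD
    return 0.0
```

Because the rule depends only on local spike timing, it needs no external teaching signal, which is what makes it usable as an unsupervised rule for behaviors like obstacle avoidance.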
International Journal of Engineering Research and Development (IJERD), by IJERD Editor
Particle swarm optimization (PSO) is a population-based optimization technique that can be used to train radial basis function (RBF) neural networks. PSO simulates the movement of bird flocks or fish schools. In PSO, each potential solution is a "particle" and the particles update their positions based on their own experience and the experience of neighboring particles. This paper proposes using PSO to optimize the parameters of an RBF network by detecting premature convergence and regrouping particles to introduce more diversity and avoid stagnation. Experimental results show that this regrouping PSO approach reduces stagnation compared to standard PSO.
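A minimal sketch of PSO with a regrouping step might look like the following. The stall test, constants, and regrouping radius are assumptions made for illustration, not the paper's exact mechanism:

```python
import random

def pso_regroup(f, dim=2, n=20, iters=200, bounds=(-5.0, 5.0),
                w=0.7, c1=1.5, c2=1.5, stall_limit=15, seed=0):
    """Minimize f with PSO; re-scatter particles when progress stalls.

    If the global best does not improve for `stall_limit` iterations
    (a crude premature-convergence test), particles are regrouped
    around the global best to restore diversity.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                       # personal bests
    pf = [f(x) for x in X]
    g = min(range(n), key=lambda i: pf[i])
    gbest, gf = P[g][:], pf[g]
    stall = 0
    for _ in range(iters):
        improved = False
        for i in range(n):
            for d in range(dim):
                # velocity pulled toward personal and global bests
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
                if fx < gf:
                    gf, gbest, improved = fx, X[i][:], True
        stall = 0 if improved else stall + 1
        if stall >= stall_limit:                # regroup around the best point
            span = 0.5 * (hi - lo)
            X = [[min(hi, max(lo, gbest[d] + rng.uniform(-span, span)))
                  for d in range(dim)] for _ in range(n)]
            V = [[0.0] * dim for _ in range(n)]
            stall = 0
    return gbest, gf

# Sphere function: minimum 0 at the origin.
best, val = pso_regroup(lambda x: sum(v * v for v in x))
```

The same loop could supply candidate RBF centers and widths by treating each particle as a flattened parameter vector and using the network's training error as `f`.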
Have We Missed Half of What the Neocortex Does? A New Predictive Framework ..., by Numenta
Numenta VP of Research Subutai Ahmad delivered this presentation at the Centre for Theoretical Neuroscience, University of Waterloo on October 2, 2018.
Analysis of Multi-focus Image Fusion Method Based on Laplacian Pyramid, by Rajyalakshmi Reddy
The document discusses a multi-focus image fusion method based on Laplacian pyramid decomposition. It begins with an introduction to image fusion and multi-scale transforms. It then describes the proposed Laplacian pyramid based fusion method, which decomposes images into multiple resolution levels and fuses the levels using different operators. Experimental results show the proposed method provides better visual quality and quantitative metrics than average and wavelet based fusion methods.
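The decompose-then-fuse idea can be sketched in one dimension with a toy pyramid (pair averaging standing in for Gaussian blur, sample duplication standing in for interpolation). This illustrates the principle only, not the paper's operators:

```python
def downsample(sig):
    """Halve resolution by pair averaging (stand-in for blur + decimate)."""
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def upsample(sig, n):
    """Duplicate samples back to length n (stand-in for interpolation)."""
    out = []
    for v in sig:
        out.extend([v, v])
    return out[:n]

def laplacian_pyramid(sig, levels):
    """Detail (Laplacian) levels plus a coarsest approximation."""
    pyr, cur = [], sig
    for _ in range(levels):
        low = downsample(cur)
        # each Laplacian level is the detail lost by downsampling
        pyr.append([c - u for c, u in zip(cur, upsample(low, len(cur)))])
        cur = low
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = [u + l for u, l in zip(upsample(cur, len(lap)), lap)]
    return cur

def fuse(a, b, levels=2):
    """Fuse two signals: keep the larger |detail| per sample, average the base."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    out = [[x if abs(x) >= abs(y) else y for x, y in zip(la, lb)]
           for la, lb in zip(pa[:-1], pb[:-1])]
    out.append([(x + y) / 2 for x, y in zip(pa[-1], pb[-1])])
    return reconstruct(out)

a = [1.0, 1.0, 8.0, 8.0, 1.0, 1.0, 1.0, 1.0]   # sharp detail early
b = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 8.0, 8.0]   # sharp detail late
fused = fuse(a, b)
```

The max-|detail| rule is what lets the fused result keep the in-focus structure from each source; for 2-D images the same logic applies per pixel at each pyramid level.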
Object recognition based on modeling the human visual system is an effective approach to intelligent identification and has attracted many researchers. Although machines have high computational speed, they remain far weaker than humans at recognition. Experience has shown that in many areas of image processing, biologically inspired algorithms are simpler and perform better. The human visual system first selects the salient parts of an image, as described by visual attention models, and then performs object recognition as a hierarchical operation; the HMAX model follows this scheme. HMAX is a feedforward hierarchical object recognition model whose structure and parameters are chosen to match biological characteristics of the visual cortex; it is a four-layer hierarchical neural network composed of alternating simple and complex layers. Because the human visual system is so complex, replicating it fully is practically impossible. Separate models have been proposed for each of these operations, but the human visual system performs them seamlessly, so combining the principles of these models should come closer to the human visual system and yield a higher recognition rate. In this paper, we introduce an image classification architecture that combines previous work based on the basic operations of the visual cortex. The results show that the proposed model achieves a much higher recognition rate than the original HMAX model. Simulations were performed on the Caltech101 database.
This document summarizes recent advances in human pose estimation using deep learning methods. It first discusses traditional approaches like pictorial structures. It then covers several deep learning methods including global/holistic view using joint regression, local appearance using body part detection, and combining global and local information. Other methods discussed are using motion features and pose estimation in videos. Evaluation metrics like PCP and PDJ are also introduced. The document outlines many key papers in this area and provides examples of network architectures and results.
This document summarizes research investigating cortico-striatal projections in mice using the Allen Mouse Brain Connectivity Atlas. The researchers plan to compare projections from different cortical layers and regions to better understand basal ganglia circuitry. ImageJ software will be used to quantify projections from cerebral cortex to striatum and between cortical regions. Specifically, the strongest projections will be examined from divergent cortical maps to the striatum in a point-for-point manner. Multiple experiments will be compared to investigate projection patterns within the striatum.
The document summarizes research on statistical connectivity in neocortical neural microcircuits. It finds that statistical structural connectivity, based on placing neuron morphologies randomly in a 3D volume, predicts the distribution of functional synaptic connectivity between neurons as measured by histograms of branch order and path distance. However, it notes limitations such as small sample sizes and uncertainty that putative synapses are truly functional.
HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES, by sipij
Human action recognition is still a challenging problem, and researchers are investigating it using different techniques. We propose a robust approach for human action recognition, achieved by extracting stable spatio-temporal features in terms of the pairwise local binary pattern (P-LBP) and the scale-invariant feature transform (SIFT). These features are used to train an MLP neural network during the training stage, and action classes are inferred from the test videos during the testing stage. The proposed features capture individuals' motion and its consistency well, and accuracy remains high on a challenging dataset. The experimental evaluation is conducted on a benchmark dataset commonly used for human action recognition. In addition, we show that our approach outperforms the individual features, i.e., using only spatial or only temporal features.
This document discusses unsupervised and supervised approaches to object retrieval.
It begins by covering unsupervised approaches, describing common local and global features used for object retrieval like SIFT, HOG, and deep features. It also discusses feature aggregation methods like bag-of-features and Fisher vectors.
The document then reviews state-of-the-art results, noting methods that achieved mean average precision scores over 0.8 on standard datasets using techniques like selective match kernels and sum-pooled convolutional features.
It concludes by proposing that future work could explore improved features, distance metrics, and added supervision, suggesting that object retrieval may benefit from a dual supervised/unsupervised learning approach.
Recognizing Locations on Objects, by Marcus Lewis (Numenta)
Marcus gave a talk called "Recognizing Locations on Objects" during the HTM Meetup on 11/03/2017.
The brain learns and recognizes objects with independent moving sensors. It’s not obvious how a network of neurons would do this. Numenta has suggested that the brain solves this by computing each sensor’s location relative to the object, and learning the object as a set of features-at-locations. Marcus showed how the brain might determine this “location relative to the object.” He extended the model from Numenta’s recent paper, "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World," so that it computes this location. This extended model takes two inputs: each sensor’s input, and each sensor’s “location relative to the body.” The model connects the columns in such a way that a column can compute its “location relative to the object” from another column’s “location relative to object.” When a column senses a feature, it recalls a union of all locations where it has sensed this feature, then the columns work together to narrow their unions. This extended model essentially takes its sensory input and asks, “Do I know any objects that contain this spatial arrangement of features?”
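The recall-a-union-then-narrow step can be sketched with plain sets. The toy objects, features, and movements below are invented purely for illustration of the mechanism:

```python
# Objects as maps from (x, y) location to a feature label.
OBJECTS = {
    "mug":  {(0, 0): "edge", (1, 0): "flat", (0, 1): "curve"},
    "bowl": {(0, 0): "edge", (1, 0): "curve", (0, 1): "curve"},
}

def candidates(feature):
    """Union of all (object, location) pairs where this feature occurs."""
    return {(o, loc) for o, feats in OBJECTS.items()
            for loc, f in feats.items() if f == feature}

def narrow(cands, movement, feature):
    """Shift each candidate by the sensor movement; keep the consistent ones."""
    dx, dy = movement
    out = set()
    for o, (x, y) in cands:
        nloc = (x + dx, y + dy)
        if OBJECTS[o].get(nloc) == feature:
            out.add((o, nloc))
    return out

# Sense "edge" (ambiguous), move right one step, sense "flat":
# only one object contains that spatial arrangement of features.
c = candidates("edge")
c = narrow(c, (1, 0), "flat")
```

Each sensed feature starts as an ambiguous union of possible locations, and every movement-plus-sensation prunes the union, which is the single-column analogue of the columns narrowing their unions together.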
Jeff Hawkins Human Brain Project Summit Keynote: "Location, Location, Locatio...", by Numenta
The document summarizes Jeff Hawkins' presentation on a proposed framework for understanding intelligence and cortical computation. Some key points:
- Hawkins proposes that grid cells exist in the neocortex and represent the location of sensory input relative to objects. Each cortical column learns a complete model of objects, including their location spaces.
- Objects can be composed of other objects via displacement cells, allowing efficient learning of new combinations without relearning parts. Behaviors can also be represented as sequences of displacements.
- This framework provides insights into neuroscience, concepts, limits of intelligence, and has implications for building true artificial intelligence based on distributed object-centric representations.
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT, by ijsc
Image segmentation is a technique to locate certain objects or boundaries within an image, and it plays a crucial role in many medical imaging applications. Many algorithms and techniques have been developed to solve image segmentation problems. In high resolution images, spectral patterns alone are not sufficient for segmentation because of the variability of spectral and structural information, so spatial-pattern or texture techniques are used. The Holder exponent is therefore an efficient technique for segmenting high resolution medical images. The proposed method is implemented in Matlab and verified on various kinds of high resolution medical images. The experimental results show that the proposed segmentation system is more efficient than existing segmentation systems.
Hybrid Pixel-Based Method for Multimodal Medical Image Fusion Based on Integr..., by Dr.NAGARAJAN. S
Medical imaging plays a vital role in medical diagnosis and treatment. However, each imaging modality yields information only in a limited domain, so information collected from distinct modalities of the same patient must be analyzed together. This led to the introduction of image fusion in medicine and the progression of image fusion techniques. Image fusion is the amalgamation of significant data from multiple images into fewer images, generally a single one. The fused image is more informative and precise than the individual source images and contains the most important information. The main objective of image fusion is to incorporate all essential data from the source images in a form that is pertinent and comprehensible for both human and machine recognition. Image fusion is the strategy of combining images from distinct modalities into a single image [1]; the resultant image is used in a variety of applications such as medical diagnosis, tumor identification, and surgical treatment [2]. Before fusing images from two distinct modalities, it is essential to preserve their features so that the fused image is free from inconsistencies and artifacts.
Remote Sensing Image Scene Classification, by Gaurav Singh
This project proposes methods for classifying scenes in remote sensing images. It compares the accuracy of traditional bag-of-visual-words (BoVW) models using handcrafted features to a bag-of-convolutional features (BoCF) model using deep learning. It also applies a grey wolf optimizer (GWO) algorithm for image segmentation. Results show BoCF doubled the accuracy of BoVW, and combining BoVW with GWO improved accuracy over BoVW alone. The project concludes more work is needed to better combine remote sensing data and deep learning for classification.
MARKOV CHAIN AND ADAPTIVE PARAMETER SELECTION ON PARTICLE SWARM OPTIMIZER, by ijsc
The Particle Swarm Optimizer (PSO) is a complex stochastic process, so analyzing its stochastic behavior is not easy. The choice of parameters plays an important role, since it is critical to the performance of PSO. As far as our investigation is concerned, most of the relevant research is based on computer simulations, and little of it on theoretical analysis. In this paper, a theoretical approach is used to investigate the behavior of PSO. First, a state of PSO is defined that contains all the information needed for its future evolution, and the memoryless property of this state is investigated and proved. Second, using this notion of state and dividing the whole PSO process into a countable number of stages (levels), a stationary Markov chain is established. Finally, based on the properties of a stationary Markov chain, an adaptive method for parameter selection is proposed.
(PyCon Korea 2020) A Neuron-Based Artificial Brain Simulator Implemented in Python, by Seonghyun Kim
* Presentation slides from PyCon Korea 2020.
Brain science was the root of modern artificial neural networks!
This talk presents a brain-science perspective on artificial neural networks, along with a case study of a Python-based neuromorphic network model that simulates the firing of brain cells.
A neuromorphic network is not simply conventional deep learning with a different cell structure. Brain simulation can overcome biological limitations that make real experiments difficult, and it can play an important role in uncovering the brain's information-processing mechanisms and in finding targets for treatments of various brain disorders.
I hope this talk inspires new ideas for the many researchers working on machine learning.
This document discusses backpropagation and how it relates to learning in the brain. It introduces some key concepts:
1) Synaptic plasticity allows connections between neurons to be modified through learning, but it does not by itself explain how these modifications are coordinated across a neural network.
2) Backpropagation lets neural networks learn by computing error signals that are sent backward through the network to update synaptic weights. However, backpropagation is not biologically plausible.
3) A new method called neural gradient representation uses activity differences between neural patterns to update synapses in a way that may be more consistent with biological circuits compared to backpropagation. This provides a potential solution for how the brain could learn using feedback without literally implementing backpropagation.
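One way to picture an activity-difference update is a Hebbian rule driven by the gap between a "free" activity phase and a feedback-nudged phase. The function below is a generic sketch of that idea; the names and constants are invented for illustration and are not taken from the document:

```python
def activity_difference_update(w, pre, post_free, post_clamped, lr=0.1):
    """Hebbian-style update from the difference between two activity phases.

    post_free:    unit activity under the network's own prediction.
    post_clamped: activity when feedback nudges the unit toward the target.
    The weight moves so that future free activity looks more like the
    clamped activity, using only locally available quantities, i.e. an
    activity-difference stand-in for an explicitly computed gradient.
    """
    return w + lr * pre * (post_clamped - post_free)

w_new = activity_difference_update(0.5, 1.0, 0.2, 0.8)
```

The appeal of this family of rules is that the "error" never has to be transported as a separate signal: it is implicit in how feedback changes the neuron's own activity.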
Theories of Error Back Propagation in the Brain: A Review, by Seonghyun Kim
The document discusses several theories of how error backpropagation may occur in the brain. It begins by introducing artificial neural networks and backpropagation, then asks how the simplified neuron models used in artificial networks compare to real neurons.
The rest of the document summarizes different proposed models of biological backpropagation:
- Temporal-error models propose that errors are represented locally over time through anti-Hebbian and Hebbian plasticity rules rather than being explicitly computed.
- Predictive coding models represent errors explicitly using error nodes that receive inhibition from value nodes and compute differences.
- Dendritic error models propose that pyramidal neurons compute errors in their apical dendrites, relating these models to predictive coding.
Enriching Word Vectors with Subword Information, by Seonghyun Kim
1) The document proposes a new word vector model that represents words as the sum of their character n-gram vectors to better capture morphological information.
2) It tests this model on nine languages and shows it outperforms previous models on word similarity and analogy tasks.
3) Representing words as combinations of character n-grams allows the model to learn representations for out-of-vocabulary words.
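The subword decomposition in 1) can be sketched directly. Wrapping the word in boundary markers "<" and ">" makes prefixes and suffixes distinct n-grams, and the bracketed whole word is kept as one unit as well:

```python
def char_ngrams(word, n_min=3, n_max=6):
    """All character n-grams of a boundary-marked word, plus the word itself."""
    w = f"<{word}>"
    grams = {w[i:i + n]
             for n in range(n_min, n_max + 1)
             for i in range(len(w) - n + 1)}
    grams.add(w)   # the full bracketed word is also a unit
    return grams

# With n = 3 only, "where" yields five trigrams plus the whole word.
g = char_ngrams("where", 3, 3)
```

A word's vector is then the sum of the vectors of these n-grams, which is what lets the model produce representations for out-of-vocabulary words from their subword pieces.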
Identify objects based on modeling the human visual system, as an effective method in intelligent identification, has attracted the attention of many researchers.Although the machines have high computational speed but are very weak as compared to humans in terms of diagnosis. Experience has shown that in many areas of image processing, algorithms that have biological backing had more simplicity and better performance. The human visual system, first select the main parts of the image which is provided by the visual featured model, then pays to object recognition which is a hierarchical operations according to this, HMAX model is also provided. HMAX object recognition model from the group of hierarchical models without feedback that its structure and parameters selected based on biological characteristics of the visual cortex. This model is a hierarchical model neural network with four layers, is composed of alternating layers that are simple and complex. Due to the high complexity of the human visual system is virtually impossible to replicate it. For each of the above, separate models have been proposed but in the human visual system, this operation is performed seamlessly, thus, by combining the principles of these models is expected to be closer to the human visual system and obtain a higher recognition rate. In this paper, we introduce an architecture to classify images based on a combination of previous work is based on the basic operation of the visual cortex. According to the results presented, the proposed model compared with the main HMAX model has a much higher recognition rate. Simulations was performed on the database of Caltech101.
This document summarizes recent advances in human pose estimation using deep learning methods. It first discusses traditional approaches like pictorial structures. It then covers several deep learning methods including global/holistic view using joint regression, local appearance using body part detection, and combining global and local information. Other methods discussed are using motion features and pose estimation in videos. Evaluation metrics like PCP and PDJ are also introduced. The document outlines many key papers in this area and provides examples of network architectures and results.
This document summarizes research investigating cortico-striatal projections in mice using the Allen Mouse Brain Connectivity Atlas. The researchers plan to compare projections from different cortical layers and regions to better understand basal ganglia circuitry. ImageJ software will be used to quantify projections from cerebral cortex to striatum and between cortical regions. Specifically, the strongest projections will be examined from divergent cortical maps to the striatum in a point-for-point manner. Multiple experiments will be compared to investigate projection patterns within the striatum.
The document summarizes research on statistical connectivity in neocortical neural microcircuits. It finds that statistical structural connectivity, based on placing neuron morphologies randomly in a 3D volume, predicts the distribution of functional synaptic connectivity between neurons as measured by histograms of branch order and path distance. However, it notes limitations such as small sample sizes and uncertainty that putative synapses are truly functional.
HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES sipij
Human action recognition is still a challenging problem and researchers are focusing to investigate this
problem using different techniques. We propose a robust approach for human action recognition. This is
achieved by extracting stable spatio-temporal features in terms of pairwise local binary pattern (P-LBP)
and scale invariant feature transform (SIFT). These features are used to train an MLP neural network
during the training stage, and the action classes are inferred from the test videos during the testing stage.
The proposed features well match the motion of individuals and their consistency, and accuracy is higher
using a challenging dataset. The experimental evaluation is conducted on a benchmark dataset commonly
used for human action recognition. In addition, we show that our approach outperforms individual features
i.e. considering only spatial and only temporal feature.
This document discusses unsupervised and supervised approaches to object retrieval.
It begins by covering unsupervised approaches, describing common local and global features used for object retrieval like SIFT, HOG, and deep features. It also discusses feature aggregation methods like bag-of-features and Fisher vectors.
The document then reviews state-of-the-art results, noting methods that achieved mean average precision scores over 0.8 on standard datasets using techniques like selective match kernels and sum-pooled convolutional features.
It concludes by proposing future attempts could explore improving features, distance metrics, and incorporating supervision, suggesting object retrieval may benefit from a dual supervised/unsupervised learning approach.
Recognizing Locations on Objects by Marcus LewisNumenta
Marcus gave a talk called "Recognizing Locations on Objects" during the HTM Meetup on 11/03/2017.
The brain learns and recognizes objects with independent moving sensors. It’s not obvious how a network of neurons would do this. Numenta has suggested that the brain solves this by computing each sensor’s location relative to the object, and learning the object as a set of features-at-locations. Marcus showed how the brain might determine this “location relative to the object.” He extended the model from Numenta’s recent paper, "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World," so that it computes this location. This extended model takes two inputs: each sensor’s input, and each sensor’s “location relative to the body.” The model connects the columns in such a way that a column can compute its “location relative to the object” from another column’s “location relative to object.” When a column senses a feature, it recalls a union of all locations where it has sensed this feature, then the columns work together to narrow their unions. This extended model essentially takes its sensory input and asks, “Do I know any objects that contain this spatial arrangement of features?”
Jeff Hawkins Human Brain Project Summit Keynote: "Location, Location, Locatio...Numenta
The document summarizes Jeff Hawkins' presentation on a proposed framework for understanding intelligence and cortical computation. Some key points:
- Hawkins proposes that grid cells exist in the neocortex and represent the location of sensory input relative to objects. Each cortical column learns a complete model of objects, including their location spaces.
- Objects can be composed of other objects via displacement cells, allowing efficient learning of new combinations without relearning parts. Behaviors can also be represented as sequences of displacements.
- This framework provides insights into neuroscience, concepts, limits of intelligence, and has implications for building true artificial intelligence based on distributed object-centric representations.
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENTijsc
Image segmentation is a technique to locate certain objects or boundaries within an image. Image
segmentation plays a crucial role in many medical imaging applications. There are many algorithms and
techniques have been developed to solve image segmentation problems. Spectral pattern is not sufficient in
high resolution image for image segmentation due to variability of spectral and structural information.
Thus the spatial pattern or texture techniques are used. Thus the concept of Holder Exponent for
segmentation of high resolution medical image is an efficient image segmentation technique. The proposed
method is implemented in Matlab and verified using various kinds of high resolution medical images. The
experimental results shows that the proposed image segmentation system is efficient than the existing
segmentation systems.
Hybrid Pixel-Based Method for Multimodal Medical Image Fusion Based on Integr...Dr.NAGARAJAN. S
Medical imaging plays a vital role in medical diagnosis and treatment. However, distinct imaging modality yields information only in limited domain. Studies are done for analysis information collected from distinct modalities of same patient. This led to the introduction of image fusion in the field of medicine and the progression of image fusion techniques. Image fusion is characterized as the amalgamation of significant data from numerous images and their incorporation into seldom images, generally a solitary one. This fused image will be more instructive and precise than the indi- vidual source images that have been utilized, and the resultant fused image comprises paramount information. The main objective of image fusion is to incorporate all the essential data from source images which would be pertinent and comprehensible for human and machine recognition. Image fusion is the strategy of combining images from distinct modalities into a single image [1]. The resultant image is utilized in variety of applications such as medical diagnosis, identification of tumor and surgery treatment [2]. Before fusing images from two distinct modalities, it is essential to preserve the features so that the fused image is free from inconsistencies or artifacts in the output.
Remote Sensing Image Scene ClassificationGaurav Singh
This project proposes methods for classifying scenes in remote sensing images. It compares the accuracy of traditional bag-of-visual-words (BoVW) models using handcrafted features to a bag-of-convolutional features (BoCF) model using deep learning. It also applies a grey wolf optimizer (GWO) algorithm for image segmentation. Results show BoCF doubled the accuracy of BoVW, and combining BoVW with GWO improved accuracy over BoVW alone. The project concludes more work is needed to better combine remote sensing data and deep learning for classification.
MARKOV CHAIN AND ADAPTIVE PARAMETER SELECTION ON PARTICLE SWARM OPTIMIZERijsc
Particle Swarm Optimizer (PSO) is a complex stochastic process, so analyzing its stochastic behavior is not easy. The choice of parameters plays an important role, since it is critical to the performance of PSO. As far as our investigation is concerned, most of the relevant research is based on computer simulations, and little of it on a theoretical approach. In this paper, a theoretical approach is used to investigate the behavior of PSO. Firstly, a state of PSO is defined that contains all the information needed for the future evolution. Then the memory-less property of this state is investigated and proved. Secondly, by using the concept of the state and suitably dividing the whole process of PSO into a countable number of stages (levels), a stationary Markov chain is established. Finally, according to the properties of a stationary Markov chain, an adaptive method for parameter selection is proposed.
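As a concrete illustration of the "state" the abstract refers to, here is a minimal PSO sketch in Python. The parameter values (inertia w=0.7, acceleration coefficients c1=c2=1.5) and the quadratic objective are assumptions for illustration, not taken from the paper; the point is that the tuple (positions, velocities, personal bests, global best) contains everything needed for the next iteration, which is what makes the process memory-less.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration. (positions, velocities, pbest, gbest) is the
    'state': everything needed to generate the next step."""
    new_pos, new_vel = [], []
    for x, v, p in zip(positions, velocities, pbest):
        r1, r2 = random.random(), random.random()
        v_next = w * v + c1 * r1 * (p - x) + c2 * r2 * (gbest - x)
        new_vel.append(v_next)
        new_pos.append(x + v_next)
    return new_pos, new_vel

def objective(x):
    return x * x  # illustrative objective: minimize f(x) = x^2

random.seed(0)
positions = [random.uniform(-5, 5) for _ in range(10)]
velocities = [0.0] * 10
pbest = positions[:]
gbest = min(pbest, key=objective)
for _ in range(50):
    positions, velocities = pso_step(positions, velocities, pbest, gbest)
    # Update personal and global bests from the new state.
    pbest = [x if objective(x) < objective(p) else p
             for x, p in zip(positions, pbest)]
    gbest = min(pbest, key=objective)
```

Note that the update for each particle reads only the current state, never the trajectory history, which is exactly the Markov property the paper formalizes.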
PyCon Korea 2020) A Neuron-based Artificial Brain Simulator Implemented in PythonSeonghyun Kim
* Presentation slides from PyCon Korea 2020.
Brain science is the root of modern artificial neural networks!
This talk shares a brain-science perspective on artificial neural networks and a case study of a Python-based neuromorphic network model that simulates the firing of brain cells.
A neuromorphic network is not simply conventional deep learning with a different cell structure.
Brain simulation can overcome biological limitations that make real experiments difficult, and it can play a very important role in uncovering the brain's information-processing mechanisms and in identifying targets for treatments of various brain disorders.
I hope this talk inspires new ideas for the many researchers working on machine learning.
This document discusses backpropagation and how it relates to learning in the brain. It introduces some key concepts:
1) Synaptic plasticity allows connections between neurons to be modified through learning, but it does not explain how these modifications coordinate across a neural network.
2) Backpropagation provides a method for neural networks to learn through calculating error signals that are sent backward through the network to update synaptic weights. However, backpropagation is not biologically plausible.
3) A new method called neural gradient representation uses activity differences between neural patterns to update synapses in a way that may be more consistent with biological circuits compared to backpropagation. This provides a potential solution for how the brain could learn using feedback without literally implementing backpropagation.
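The activity-difference idea in (3) can be caricatured in a few lines: treat the gap between a feedback-driven target activity and the actual activity as the teaching signal, and apply a local, Hebbian-style update. This is a minimal sketch of the general principle, not the specific algorithm the document describes; the linear neuron, random patterns, and learning rate are assumptions.

```python
import numpy as np

def activity_difference_update(w, pre, actual, target, lr=0.1):
    """Local update driven by an activity difference:
    dw_ij = lr * (target_i - actual_i) * pre_j.
    The 'error' is a difference between two activity patterns,
    not an explicitly backpropagated gradient."""
    return w + lr * np.outer(target - actual, pre)

rng = np.random.default_rng(0)
pre = rng.random(4)        # presynaptic activity pattern
w = rng.random((3, 4))     # synaptic weights
target = rng.random(3)     # feedback-driven ("clamped") activity

# Repeated local updates move the actual output toward the target.
for _ in range(200):
    actual = w @ pre
    w = activity_difference_update(w, pre, actual, target)
```

Each synapse only needs quantities available at its own pre- and post-synaptic sites, which is why such rules are considered more biologically plausible than backpropagation.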
Theories of error back propagation in the brain reviewSeonghyun Kim
The document discusses several theories of how error backpropagation may occur in the brain. It begins by introducing artificial neural networks and backpropagation. It then discusses questions about how unrealistic the neuron models in artificial networks are compared to real neurons.
The rest of the document summarizes different proposed models of biological backpropagation:
- Temporal-error models propose that errors are represented locally over time through anti-Hebbian and Hebbian plasticity rules rather than being explicitly computed.
- Predictive coding models represent errors explicitly using error nodes that receive inhibition from value nodes and compute differences.
- Dendritic error models propose that pyramidal neurons compute errors in their apical dendrites, relating these models to predictive coding.
Enriching Word Vectors with Subword InformationSeonghyun Kim
1) The document proposes a new word vector model that represents words as the sum of their character n-gram vectors to better capture morphological information.
2) It tests this model on nine languages and shows it outperforms previous models on word similarity and analogy tasks.
3) Representing words as combinations of character n-grams allows the model to learn representations for out-of-vocabulary words.
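The n-gram decomposition in (1)–(3) can be sketched as follows. The boundary symbols `<` and `>` follow the standard subword-model convention; the vector dimensionality and the `ngram_vecs` lookup table are illustrative assumptions, not details from the paper.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word wrapped in boundary symbols."""
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word, ngram_vecs, dim=10):
    """A word vector as the sum of its n-gram vectors. Unseen n-grams
    default to zero, so out-of-vocabulary words still get a vector."""
    return sum((ngram_vecs.get(g, np.zeros(dim)) for g in char_ngrams(word)),
               start=np.zeros(dim))

# The 3-grams of "where" include '<wh' and 're>', which carry
# information about the word's prefix and suffix.
print(char_ngrams("where", 3, 3))  # ['<wh', 'whe', 'her', 'ere', 're>']
```

Because the same n-gram vectors are shared across words, morphologically related words (e.g. plural or inflected forms) automatically end up with similar representations.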
BERT: Pre-training of Deep Bidirectional Transformers for Language UnderstandingSeonghyun Kim
The document discusses BERT, which stands for Bidirectional Encoder Representations from Transformers. BERT uses bidirectional Transformers to pre-train deep contextual representations of language. It was trained on two unsupervised prediction tasks using large text corpora. Experimental results showed that BERT achieved state-of-the-art results on eleven natural language understanding tasks, including question answering and textual inference. The document outlines the model architecture of BERT and the pre-training and fine-tuning methods used.
The document discusses the history and development of artificial intelligence and chatbots from early examples like Pygmalion and Galatea in ancient times to modern chatbot builders. It describes important early AI programs and chatbots like Colossus, the Turing test, ELIZA, ALICE, and Mitsuku. It also discusses the Korean chatbot Shimshim and provides examples of conversations with ELIZA, ALICE, and Mitsuku. Finally, it lists many popular modern chatbot builders used to create conversational agents.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making. They monitor common gases, weather parameters, and particulates.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. 
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 10⁴ M⊙ and a rich and dense population of massive stars, supermassive young star clusters represent the most massive star-forming environments, dominated by the feedback from massive stars and gravitational interactions among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution, with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
Phenomics assisted breeding in crop improvementIshaGoswami9
The global population is increasing and will reach about 9 billion by 2050; due also to climate change, it is difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics of multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomic information for crop improvement at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages, from the cell, tissue, organ, and individual plant to the plot and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
BREEDING METHODS FOR DISEASE RESISTANCE.pptxRASHMI M G
Plant breeding for disease resistance is a strategy to reduce crop losses caused by disease. Plants have an innate immune system that allows them to recognize pathogens and provide resistance. However, breeding for long-lasting resistance often involves combining multiple resistance genes.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...Wasswaderrick3
In this book, we use conservation of energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/ velocity and then from this we derive the Pouiselle flow equation, the transition flow equation and the turbulent flow equation. In the situations where there are no viscous effects , the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross sectional areas connected together. We also extend our techniques of energy conservation to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes equation of terminal velocity and turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
The thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive, live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poorquality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
3. Spatial Navigation
▪ Prefrontal cortex to represent task space (i.e. goals and values of cues and
action options)
Verschure et al., 2014
4. Place Cell
▪ Place cell show location-specific firing
Hippocampal CA1
Medial entorhinal cortex (MEC)
O'Keefe and Dostrovsky, 1971
5. Place Cell
▪ Place cell has ‘forward sweeps’ (journey-dependent activity)
that is related to the specific route for goal-directed navigation
Grieves et al., 2016, eLIFE
6. Goal-directed Navigation
▪ ‘Tree search’ model for goal-directed navigation model
▪ A rat faces a maze, in which different turns lead to states and
rewards
Daw, 2012, IEEE
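The "tree search" idea on this slide can be sketched as a recursive lookahead over maze states: expand each possible turn, back up the rewards, and pick the best branch. The T-maze, its transition table, and the reward values below are hypothetical illustrations, not taken from Daw (2012).

```python
def tree_search(state, transitions, rewards, depth):
    """Best achievable return from `state`, expanding the decision tree
    `depth` turns ahead (model-based evaluation)."""
    if depth == 0 or state not in transitions:
        return rewards.get(state, 0.0)
    return max(rewards.get(state, 0.0) +
               tree_search(nxt, transitions, rewards, depth - 1)
               for nxt in transitions[state].values())

# A toy T-maze: at the stem the rat can turn left or right.
transitions = {"stem": {"left": "left_arm", "right": "right_arm"}}
rewards = {"right_arm": 1.0}  # food only in the right arm

# Evaluate each turn by searching ahead, then choose the best one.
best = max(transitions["stem"],
           key=lambda a: tree_search(transitions["stem"][a],
                                     transitions, rewards, 2))
print(best)  # 'right'
```

This is the model-based strategy the slide alludes to: the animal evaluates options by simulating future states rather than by cached action values.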
7. Place Cell and Goal
▪ Place cell also related to the reward location
Hok et al., 2007
8. Goal-related Cell in mPFC
▪ Cells with spatial correlates have been found in the mPFC of
the rat performing a goal-oriented task
Hok et al., 2005
9. Research Question & Aims
▪ How does the neural network perform spatial information
processing and computation?
▫ Place cells could be crucial for goal-directed navigation
▫ mPFC cells process the spatial information of the goal location
▪ To learn:
▫ Current location and motor controls
▫ Previous place, current place, and predicted place
10. Method
▪ Experiment environment
▫ Prometheus simulation
▫ Visual sensor (20 landmarks) and sound sensor (2 s)
▪ Neuron model
▫ Artificial neural network model
▫ Gaussian-like activity curve model as a node in the neural network
▫ Hebbian learning
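The method's two ingredients, Gaussian-like activity nodes and Hebbian learning, can be combined in a minimal sketch. The 5×5 grid of place-field centers, the field width, and the learning rate are assumptions for illustration, not parameters from the Prometheus simulation.

```python
import numpy as np

def place_cell_activity(position, centers, sigma=0.5):
    """Gaussian-like tuning curve: each node fires most strongly
    when the agent is at its field center."""
    d2 = np.sum((centers - position) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def hebbian_update(w, pre, post, lr=0.01):
    """Plain Hebbian rule: strengthen synapses between co-active cells."""
    return w + lr * np.outer(post, pre)

# A 2D arena tiled by a 5x5 grid of place fields (an assumption).
xs = np.linspace(0, 2, 5)
centers = np.array([(x, y) for x in xs for y in xs])
w = np.zeros((25, 25))

# Traverse a straight path; co-activation of successive place-cell
# patterns binds each place to the next, encoding the route.
path = [np.array([t, t]) for t in np.linspace(0, 2, 20)]
prev = place_cell_activity(path[0], centers)
for pos in path[1:]:
    cur = place_cell_activity(pos, centers)
    w = hebbian_update(w, prev, cur)  # link previous place -> current place
    prev = cur
```

After training, the strongest weights chain the visited fields in order, which is one way a purely Hebbian network can come to predict the next place along a route.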
16. Conclusion
▪ Place cells successfully predicted the goal location using a
neural network model with a Hebbian learning paradigm
▪ Place cells represent a secondary place field
17. Discussion
▪ A robot that was controlled by the neural network model
successfully encoded the path to the goal location
▪ However, the path was not optimal
Hier et al., 2011
18. Discussion
▪ What is the role of CA1 place cells in this neural network?
▫ CA1 only receives synaptic input from CA3 and EC neurons
▪ Cognitive map algorithm in this neural network model will be
adapted to neuromorphic neural network model