This document is a seminar report submitted by Arun R. Nair to the Department of Computer Science at GITAM University in partial fulfillment of the requirements for a Master of Technology degree in Computer Science and Technology. The report discusses the Blue Brain project, which aims to create a virtual brain through detailed computer simulation of the mammalian brain down to the molecular level. The report covers topics such as the working of the natural brain, brain simulation, how the Blue Brain project works, potential applications, and advantages and limitations.
This seminar report discusses the Blue Brain project, which aims to create the world's first virtual brain through detailed computer modeling and simulation of the brain's biological systems. The report provides background on how the natural human brain works, including sensory input, integration, and motor output. It then describes how the Blue Brain project will simulate brain microcircuits using a supercomputer, with the goals of understanding brain function and diseases. Potential applications include gathering data, understanding cognition, developing treatments for disorders, and laying the foundation for whole brain simulations.
This seminar report discusses the Blue Brain project, which aims to create the world's first virtual brain through detailed computer modeling and simulation of the brain's biological systems. The report provides background on how the natural human brain works, including sensory input, integration, and motor output. It then describes how the Blue Brain project will simulate brain microcircuits using a supercomputer, with the goals of understanding brain function and disorders, and enabling applications like drug discovery. Potential applications and limitations of the project are also discussed.
The document discusses the Blue Brain project, which aims to simulate the human brain on a
supercomputer. It provides details on how the project uses neuron-level modeling and supercomputers
like IBM's Blue Gene to simulate small networks of neurons and ultimately work towards simulating the
entire human brain. The document also discusses how uploading and simulating an actual human brain
may be possible using nanobots to scan brain structure and activity at a microscopic level.
Scientists are currently researching how to create an artificial brain that can think, respond, make decisions, and keep anything in memory. The main aim is to upload the human brain into a machine, so that a person can think and make decisions without any effort. After the death of the body, the virtual brain would act as that person. Thus, even after someone's death, we would not lose their knowledge, intelligence, personality, feelings, and memories, which could be used for the development of human society. Technology is growing faster than ever, and IBM is now researching how to create such a virtual brain, called the “Blue Brain”.
The document discusses the concept of a "blue brain" or virtual brain being developed by IBM to function like the human brain. It explains that a virtual brain is an artificial brain that can think and respond like the natural brain. The key reasons for developing a virtual brain are to preserve human intelligence after death and to make intelligent brains available to society. Current research involves simulating the brain's systems to create a 3D model and uploading a person's life experiences and brain structure into a computer through the use of nanobots. Challenges include developing very powerful hardware, software, and nanobots to interface the natural and virtual brains. Potential advantages include remembering things without effort and understanding animal thinking, while disadvantages include dependency on computers.
BRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSING by Dharshika Shreeganesh
Image processing is an active research area, within which medical image processing is a highly challenging field. Medical imaging techniques are used to image the inner portions of the human body for medical diagnosis. A brain tumor is a serious, life-altering disease. Image segmentation plays a significant role in image processing, as it helps extract suspicious regions from medical images. In this paper we propose segmentation of brain MRI images using the K-means clustering algorithm followed by morphological filtering, which removes the mis-clustered regions that are inevitably formed after segmentation, for detection of the tumor location.
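The K-means-plus-morphological-filtering pipeline described in that abstract can be sketched in miniature. The toy image, intensity values, cluster count, and 3x3 structuring element below are illustrative assumptions, not the paper's actual data or parameters.

```python
import random

def kmeans_1d(pixels, k=3, iters=20, seed=0):
    """Toy 1-D K-means over grayscale intensities."""
    random.seed(seed)
    centers = random.sample(sorted(set(pixels)), k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pixels:
            groups[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        # Recompute each center as its group's mean (keep old center if empty).
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def erode(mask):
    """3x3 erosion; border pixels are cleared."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    """3x3 dilation."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if 0 <= y + dy < h and 0 <= x + dx < w:
                            out[y + dy][x + dx] = 1
    return out

# Toy 8x8 "MRI": background = 10, a 3x3 bright "tumor" block, one bright
# mis-clustered speck, and one mid-intensity tissue pixel.
img = [[10] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(2, 5):
        img[y][x] = 200
img[6][6] = 200
img[0][0] = 60

pixels = [p for row in img for p in row]
centers = kmeans_1d(pixels, k=3)
bright = max(range(3), key=lambda i: centers[i])  # the "tumor" cluster
mask = [[int(min(range(3), key=lambda i: abs(p - centers[i])) == bright)
         for p in row] for row in img]

# Opening (erosion then dilation) removes isolated mis-clustered specks
# while preserving the compact tumor region.
opened = dilate(erode(mask))
```

The isolated bright pixel at (6, 6) is removed by the opening, while the 3x3 tumor block survives, which is exactly the "avoid mis-clustered regions" role the abstract assigns to morphological filtering.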
The Blue Brain project aims to create a virtual brain through detailed computer simulations. It seeks to reverse engineer the brain by simulating a cortical column of rat neurons using supercomputers. The goal is to understand how human intelligence and memory works at the neuronal level. If successful, it could lead to cures for neurological diseases and development of artificial general intelligence capable of human-level thought. However, issues around privacy, security and human dependence on technology remain challenges.
The document discusses the Blue Brain project, which aims to create a virtual brain by simulating the human brain on supercomputers. The project involves acquiring data on neuronal structures from brain slices, simulating the behavior of neurons and synapses using software like NEURON, and visualizing the results. The long term goals are to understand how human memory and cognition work, develop treatments for brain disorders, and potentially upload a human brain digitally. The project uses IBM's Blue Gene supercomputers and requires large memory, processing power, and nanobots to interface with natural brains. Applications include remembering without effort, using intelligence after death, and drug discovery without animal testing.
The Blue Brain Project is an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level. The aim of the project, founded in May 2005 by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, is to study the brain's architectural and functional principles.
The Blue Brain Project aims to create a virtual brain by simulating the brain on a supercomputer. It was started in 2005 as a collaboration between IBM and the Brain Mind Institute in Switzerland. The project uses the IBM Blue Gene supercomputer and neuron simulation software to model brain circuits at the molecular level with the goal of understanding brain function and treating neurological diseases. The Blue Brain simulation has shown flashes of activity resembling those in natural brains, demonstrating its potential to provide insights into human consciousness.
The Blue Brain project aims to create the first virtual brain by simulating the brain down to the molecular level on supercomputers. It involves modeling neurons, connections between neurons, and brain circuits through intensive computation. The goal is to understand how the human brain works and potentially lead to treatments for brain diseases. In the future, it may be possible to upload a human brain into a computer through nanobots scanning brain structure and activity, allowing one to live on digitally after death.
The document discusses the Blue Brain Project, which aims to simulate the mammalian brain through detailed modeling down to the molecular level. The project is led by Henry Markram and uses an IBM Blue Gene supercomputer. The goal is to gain a complete understanding of brain function and enable faster treatments for brain diseases. A long-term goal is to fully simulate the human brain to better understand its complex mechanisms.
The document discusses the concept of a virtual brain called the "Blue Brain" project being developed by IBM. It aims to simulate the human brain by uploading its contents into a computer system. This would allow human intelligence and knowledge to persist even after death and be utilized by society. Current research involves using nanobots to scan and interface a physical brain with a computer that would mimic its structure and functions through vast memory, processing power and algorithms. Both advantages like eternal knowledge and disadvantages like dependency on computers are discussed.
3D-DOCTOR is advanced 3D imaging software developed by Able Software Corp. that uses object-oriented technologies to extract information from medical imaging files such as CT, MRI, and PET scans to create 3D models for analysis. It supports various file formats and can process large 3D volumes. The software allows 3D visualization, measurements, and shape analysis of image data over time. However, 3D-DOCTOR is expensive software that requires frequent upgrades to maintain.
The document summarizes the Blue Brain Project, which aims to simulate the mammalian brain through detailed modeling and reverse engineering. Key points include:
- The project uses supercomputers and neuronal modeling software to simulate brain circuits and functions.
- It involves data acquisition of real neurons, building virtual neurons and networks, and simulating their electrical activity.
- Long term goals include fully simulating the human brain to understand cognition and treat neurological diseases.
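The Blue Brain Project itself uses detailed multi-compartment neuron models (via software such as NEURON) on Blue Gene hardware, but the basic idea of simulating a neuron's electrical activity, as in the bullet points above, can be illustrated with a minimal leaky integrate-and-fire model. All parameter values below are generic textbook-style choices, not the project's actual model.

```python
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire neuron.

    Integrates dV/dt = (-(V - v_rest) + r_m * I) / tau with Euler steps of
    size dt (ms); whenever V crosses v_thresh, a spike is recorded and V is
    reset. Returns the membrane-potential trace and the spike times (ms).
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(current):
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:          # threshold crossed: emit a spike, reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return trace, spikes

# 100 ms of simulation: no input for the first 20 ms, then a constant
# depolarizing current step strong enough to drive repetitive firing.
dt = 0.1
current = [0.0] * 200 + [2.0] * 800
trace, spikes = simulate_lif(current, dt=dt)
```

With these parameters the steady-state voltage under the current step sits above threshold, so the model fires regularly once the input switches on; real compartmental models add dendritic geometry and ion-channel kinetics on top of this same integrate-and-threshold core.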
The document discusses mind reading computers that can summarize a person's mental state by analyzing facial expressions and head gestures using video cameras and machine learning. It can identify features like facial expressions that indicate emotions, thoughts, and mental workload. The technology works by tracking facial feature points and modeling the relationship between expressions and mental states over time. Potential applications include monitoring human interactions, detecting driver states, and developing assistive technologies like mind-controlled wheelchairs. Issues involve ensuring reliability and addressing ethical concerns around predicting future behaviors.
This document provides an overview of the Blue Brain Project, which aims to create a virtual model of the brain through detailed computer simulations. It discusses the goals of creating an accurate whole brain model to better understand brain function and disorders. The architecture of the Blue Gene supercomputer is described, which will be used to model neural microcircuits at a high level of biological detail. The document outlines the basic components needed to reconstruct a microcircuit, including neuron morphology, ion channels, synapse properties, and connectivity statistics.
Artificial Intelligence And Machine Learning PowerPoint Presentation Slides C... by SlideTeam
These Artificial Intelligence and Machine Learning PowerPoint slides highlight the differences between machine intelligence, machine learning, and deep learning. The deck covers the advantages, disadvantages, learning techniques, and types of supervised and unsupervised machine learning, along with reinforcement learning. It also introduces expert systems in artificial intelligence, including their examples, characteristics, constituents, uses, advantages, and drawbacks, and compiles the deep learning process, recurrent neural networks, and convolutional neural networks, ending with an introduction to the kinds, algorithms, trends, and use cases of artificial intelligence.
An Efficient Spam Detection Technique for IoT Devices Using Machine Learning by Venkat Projects
The document proposes a machine learning framework to detect spam on IoT devices. It evaluates five machine learning models on a dataset of IoT device inputs and features to compute a "spamicity score" for each device. This score indicates how trustworthy a device is based on various parameters. The results show the proposed technique is effective at spam detection compared to existing approaches.
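The summary does not detail the five models or the device features, so the following is only a minimal sketch of the core idea: combining several per-device model scores into a single spamicity score that reflects how trustworthy the device is. The heuristic "models" and the feature names (`messages_per_min`, `unpatched`, `failed_logins`) are invented for illustration.

```python
def spamicity(device, models, weights=None):
    """Combine several model scores (each in [0, 1]) into one
    spamicity score for a device; higher means less trustworthy."""
    scores = [m(device) for m in models]
    if weights is None:
        weights = [1.0 / len(models)] * len(models)  # equal weighting
    return sum(w * s for w, s in zip(weights, scores))

# Toy stand-ins for trained models: simple heuristics over device features.
models = [
    lambda d: min(1.0, d["messages_per_min"] / 100.0),  # traffic volume
    lambda d: 1.0 if d["unpatched"] else 0.0,           # firmware state
    lambda d: min(1.0, d["failed_logins"] / 20.0),      # auth anomalies
]

thermostat = {"messages_per_min": 4, "unpatched": False, "failed_logins": 1}
botnet_cam = {"messages_per_min": 250, "unpatched": True, "failed_logins": 40}

low = spamicity(thermostat, models)    # well-behaved device, score near 0
high = spamicity(botnet_cam, models)   # compromised device, score near 1
```

In the actual framework the component scores would come from trained classifiers rather than hand-written heuristics, but the aggregation into a per-device score is the same shape.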
Neuromorphic computing is an emerging interdisciplinary field that takes inspiration from biology to design hardware models of neural systems. Specifically, it uses very-large-scale integrated circuits containing analog electronic circuits to mimic the neurobiological architectures in the nervous system, as conceived by Carver Mead in the late 1980s. Two examples are Neurogrid, a mixed-analog-digital multichip system emulating a million neurons and a billion connections using subthreshold analog logic, and IBM's TrueNorth, which contains 16 neuromorphic cores and is completely digital. Both aim to achieve the scale and low-power operation of the biological brain through novel computing architectures.
Machine Learning - Breast Cancer Diagnosis by Pramod Sharma
Machine learning helps in making smart decisions faster. In this presentation, measurements carried out on FNAC samples were analysed, and the results were validated using 20 percent of the data. The data used for the proof of concept is from the UCI Repository.
This document discusses mind reading technology that can analyze a person's facial expressions in real time to infer their mental state. It works by tracking facial feature points and using dynamic Bayesian networks to model the relationship between expressions and mental states. Potential applications include improving human-computer interaction, monitoring human interactions, and detecting driver states like drowsiness. However, issues around privacy and predicting future behavior must still be addressed.
Techniques of Brain Cancer Detection from MRI using Machine Learning by IRJET Journal
The document discusses techniques for detecting brain cancer from MRI scans using machine learning. It first provides background on brain tumors and MRI. It then outlines the cancer detection process, including pre-processing the MRI data, segmenting the images, extracting features, and classifying tumors using techniques like CNNs, SVMs, MLP, and Naive Bayes. The document reviews related work applying these techniques and compares their results, finding accuracy can be improved with larger, higher resolution datasets.
Blue Brain Technology is an attempt to reverse engineer the human brain and create simulations inside a computer. This way, we can access someone's brain even when they are not around.
This document summarizes a student project on stroke prediction using machine learning algorithms. The students collected two datasets on stroke from Kaggle, one benchmark and one non-benchmark. They preprocessed the data, addressed imbalance, and performed feature engineering. Various classification algorithms were tested on the data, including KNN, decision trees, SVM, and Naive Bayes. The results were evaluated to determine the most accurate model for predicting stroke risk based on attributes like age, gender, medical history, and lifestyle factors. The project aims to help identify individuals at high risk of stroke so preventative actions can be taken.
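Of the classifiers that project tested, k-nearest neighbours is the simplest to sketch. The training rows and the feature set (age, average glucose, BMI) below are hypothetical stand-ins for the Kaggle data, and a real pipeline would scale the features first so that no single one dominates the distance.

```python
import math

def knn_predict(train, query, k=3):
    """k-nearest-neighbours majority vote over (features, label) rows."""
    nearest = sorted(train, key=lambda row: math.dist(row[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical rows: (age, avg_glucose, bmi) -> 1 = stroke, 0 = no stroke.
train = [
    ((70, 210.0, 32.0), 1), ((68, 190.0, 30.0), 1), ((75, 230.0, 35.0), 1),
    ((30,  85.0, 22.0), 0), ((25,  90.0, 24.0), 0), ((40, 100.0, 26.0), 0),
]

# An elderly, high-glucose query falls among the high-risk rows.
pred = knn_predict(train, (72, 205.0, 31.0))
```

The same scaffold extends to the project's other classifiers by swapping `knn_predict` for a trained decision tree, SVM, or Naive Bayes model and comparing their accuracy on a held-out split.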
Quantum computing has several potential applications in medical technology:
1) Drug design - Quantum computers could enable faster and more cost-effective drug discovery by screening large databases of molecular structures.
2) DNA sequencing and analysis - Quantum computers could sequence and analyze DNA much faster than traditional computers, allowing for more reliable genetic testing and predictions.
3) Personalized healthcare - Quantum computers could store and analyze vast amounts of patient health data to provide personalized surveillance and predictions about a patient's future health.
1. IBM is developing the first virtual brain, called Blue Brain, to act as an artificial human brain.
2. Blue Brain will use a supercomputer with vast storage and processing power, and an interface between the human brain and computer, to upload human brain data so the knowledge and intelligence of a person can be preserved after death.
3. Creating a virtual brain that thinks and makes decisions like the human brain could help address issues like memory loss and allow human knowledge and intelligence to continue benefiting society even after death.
This technical seminar report is submitted by Srinivasulu Reddy J. in partial fulfillment of the Bachelor of Technology degree in Computer Science and Engineering. The report discusses Blue Brain, a project aiming to build a synthetic brain by reverse-engineering the mammalian brain down to the molecular level. The report is submitted to the Department of Computer Science and Engineering at Annamacharya Institute of Technology and Sciences to fulfill degree requirements.
The Blue Brain Project is an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level. The aim of the project, founded in May 2005 by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, is to study the brain's architectural and functional principles.
The Blue Brain Project aims to create a virtual brain by simulating the brain on a supercomputer. It was started in 2005 as a collaboration between IBM and the Brain Mind Institute in Switzerland. The project uses the IBM Blue Gene supercomputer and neuron simulation software to model brain circuits at the molecular level with the goal of understanding brain function and treating neurological diseases. The Blue Brain simulation has shown flashes of activity resembling those in natural brains, demonstrating its potential to provide insights into human consciousness.
The Blue Brain project aims to create the first virtual brain by simulating the brain down to the molecular level on supercomputers. It involves modeling neurons, connections between neurons, and brain circuits through intensive computation. The goal is to understand how the human brain works and potentially lead to treatments for brain diseases. In the future, it may be possible to upload a human brain into a computer through nanobots scanning brain structure and activity, allowing one to live on digitally after death.
The document discusses the Blue Brain Project, which aims to simulate the mammalian brain through detailed modeling down to the molecular level. The project is led by Henry Markram and uses a IBM Blue Gene supercomputer. The goal is to gain a complete understanding of brain function and enable faster treatments for brain diseases. A long term goal is to fully simulate the human brain to better understand its complex mechanisms.
The document discusses the concept of a virtual brain called the "Blue Brain" project being developed by IBM. It aims to simulate the human brain by uploading its contents into a computer system. This would allow human intelligence and knowledge to persist even after death and be utilized by society. Current research involves using nanobots to scan and interface a physical brain with a computer that would mimic its structure and functions through vast memory, processing power and algorithms. Both advantages like eternal knowledge and disadvantages like dependency on computers are discussed.
3D-DOCTOR is an advanced 3D imaging software developed by Able Software Corp. that uses object-oriented technologies to extract information from medical imaging files like CT, MRI, PET scans to create 3D models for analysis. It supports various file formats and can process large 3D volumes. The software allows 3D visualization, measurements, and shape analysis of image data over time. However, 3D-DOCTOR is an expensive software that requires frequent upgrades to maintain.
The document summarizes the Blue Brain Project, which aims to simulate the mammalian brain through detailed modeling and reverse engineering. Key points include:
- The project uses supercomputers and neuronal modeling software to simulate brain circuits and functions.
- It involves data acquisition of real neurons, building virtual neurons and networks, and simulating their electrical activity.
- Long term goals include fully simulating the human brain to understand cognition and treat neurological diseases.
The document discusses mind reading computers that can summarize a person's mental state by analyzing facial expressions and head gestures using video cameras and machine learning. It can identify features like facial expressions that indicate emotions, thoughts, and mental workload. The technology works by tracking facial feature points and modeling the relationship between expressions and mental states over time. Potential applications include monitoring human interactions, detecting driver states, and developing assistive technologies like mind-controlled wheelchairs. Issues involve ensuring reliability and addressing ethical concerns around predicting future behaviors.
This document provides an overview of the Blue Brain Project, which aims to create a virtual model of the brain through detailed computer simulations. It discusses the goals of creating an accurate whole brain model to better understand brain function and disorders. The architecture of the Blue Gene supercomputer is described, which will be used to model neural microcircuits at a high level of biological detail. The document outlines the basic components needed to reconstruct a microcircuit, including neuron morphology, ion channels, synapse properties, and connectivity statistics.
Artificial Intelligence And Machine Learning PowerPoint Presentation Slides C...SlideTeam
Artificial Intelligence And Machine Learning PowerPoint Presentation Slides arrange insightful data using industry-best design practices. Highlight the differences between machine intelligence, machine learning, and deep learning through our PPT format. Utilize this PowerPoint slideshow to present advantages, disadvantages, learning techniques, and types of supervised machine learning. Further, cover the merits, demerits, and types of unsupervised machine learning. Communicate important details concerning reinforcement learning. Familiarize your viewers with the expert system in artificial intelligence. Outline examples, characteristics, constituents, uses, advantages, drawbacks, and other aspects of the expert system. Compile the deep learning process, recurrent neural networks, and convolutional neural networks through this PowerPoint theme. Present an impactful introduction to artificial intelligence. Introduce kinds, algorithms, trends, and use cases of artificial intelligence. This presentation is not only easy-to-follow but also very convenient to edit, even if you have no prior design experience. Smash the download button and start instant personalization. Our Artificial Intelligence And Machine Learning PowerPoint Presentation Slides Complete Deck are explicit and effective. They combine clarity and concise expression. https://bit.ly/3hKg7PV
an efficient spam detection technique for io t devices using machine learningVenkat Projects
The document proposes a machine learning framework to detect spam on IoT devices. It evaluates five machine learning models on a dataset of IoT device inputs and features to compute a "spamicity score" for each device. This score indicates how trustworthy a device is based on various parameters. The results show the proposed technique is effective at spam detection compared to existing approaches.
Neuromorphic computing is an emerging interdisciplinary field that takes inspiration from biology to design hardware models of neural systems. Specifically, it uses very-large-scale integrated circuits containing analog electronic circuits to mimic the neurobiological architectures in the nervous system, as conceived by Carver Mead in the late 1980s. Two examples are Neurogrid, a mixed-analog-digital multichip system emulating a million neurons and billion connections using subthreshold analog logic, and IBM's TrueNorth, which contains 16 neuromorphic cores and is completely digital. Both aim to achieve the scale and low power operation of the biological brain through novel computing architectures.
Machine Learning - Breast Cancer DiagnosisPramod Sharma
Machine learning is helping in making smart decisions faster. In this presentation measurements carried out on FNAC was analysed. The results were validated using 20 percent of the data. The data used for POC is from UCI Repository/
A TECHNICAL SEMINAR REPORT
ON
BLUE BRAIN
Submitted by
ARUN.R.NAIR
(2220213101)
in partial fulfilment for the award of the degree
of
MASTER OF TECHNOLOGY
In
COMPUTER SCIENCE & TECHNOLOGY
GITAM UNIVERSITY
(Declared as Deemed to be University u/S 3 of UGC Act, 1956)
HYDERABAD CAMPUS
2013-2014
GITAM UNIVERSITY
(Declared as Deemed to be University u/S 3 of UGC Act,
1956)
HYDERABAD CAMPUS
DEPARTMENT OF COMPUTER SCIENCE
BONAFIDE CERTIFICATE
Certified that this seminar report titled “Blue Brain” is the
bonafide work done by Arun.R.Nair who carried out the work under my
supervision.
SEMINAR INCHARGE HEAD OF THE DEPARTMENT
SHANTHI.M M.TECH. (PHD) Dr.S.PHANI KUMAR M.TECH.PH.D
INTERNAL EXAMINER
ACKNOWLEDGMENT
The elation and gratification of this seminar will be incomplete without
mentioning all the people who helped me to make it possible, whose gratitude and
encouragement were invaluable to me.
Firstly, I would like to thank GOD, almighty, our supreme guide, for
bestowing his blessings upon me in my entire endeavour. I express my sincere
gratitude to Dr.S.Phani Kumar, Head of Department for his support and guidance.
I also like to thank Mrs.M.Shanthi (Assistant Professor) for her valuable
words of advice.
I am also thankful to all the other lecturers in our department and students of
my class for their support and suggestions.
ARUN.R.NAIR
i
ABSTRACT
Scientists today are researching how to create an artificial brain that can think, respond, make decisions, and store memories. The main aim is to upload the contents of a human brain into a machine, so that a person's thinking and decision-making can continue without effort, and after the death of the body the virtual brain can act in the person's place. Even after a person dies, we would not lose the knowledge, intelligence, personality, feelings and memories of that person, and these could be used for the development of human society. Technology is advancing faster than anything else, and IBM is now researching how to create a virtual brain, called the "Blue Brain". If the project succeeds, this would be the first virtual brain in the world. IBM, in partnership with scientists at the Brain and Mind Institute of Switzerland's Ecole Polytechnique Federale de Lausanne (EPFL), will begin simulating the brain's biological systems and output the data as a working three-dimensional model that recreates the high-speed electrochemical interactions taking place within the brain's interior. These include cognitive functions such as language, learning, perception and memory, in addition to brain malfunctions such as psychiatric disorders like depression and autism. From there, the modelling will expand to other regions of the brain and, if successful, shed light on the relationships between the genetic, molecular and cognitive functions of the brain.
ii
LIST OF FIGURES
2.1 Medial view of the left hemisphere of human brain. . . . . . . . . . . 6
4.1 The Blue Gene/L supercomputer architecture . . . . . . . . . . . . . 13
4.2 Elementary building blocks of neural microcircuits. . . . . . . . . . . 14
4.3 Reconstructing the neocortical column. . . . . . . . . . . . . . . . . 17
4.4 The data manipulation cascade . . . . . . . . . . . . . . . . . . . . . 24
CHAPTER 1
INTRODUCTION
The human brain is the most valuable creation of God. Man is called intelligent because of the brain. The brain translates the information delivered by nerve impulses, which enables the person to react. But we lose the knowledge held in a brain when the body is destroyed at death, knowledge that might have been used for the development of human society. What would happen if we could create an artificial brain and upload the contents of a natural brain into it?
1.1 Blue Brain
"Blue Brain" is the name of the world's first virtual brain: a machine that can function as a human brain. Scientists today are researching how to create an artificial brain that can think, respond, make decisions, and store memories. The main aim is to upload a human brain into a machine, so that a person can think and make decisions without effort, and after the death of the body the virtual brain can act in the person's place. Even after a person dies, we would not lose the knowledge, intelligence, personality, feelings and memories of that person, and these could be used for the development of human society. No one has fully understood the complexity of the human brain; it is more complex than any circuitry in the world. So the question arises: "Is it really possible to create a human brain?" The answer is "Yes", because whatever man has created, he has always followed nature.
1
When man did not yet have a device called the computer, it was a big question for all. Technology is growing faster than anything else. IBM is now researching how to create a virtual brain, called the "Blue Brain". If the project succeeds, this would be the first virtual brain in the world. Within 30 years, we may be able to scan ourselves into computers. Is this the beginning of eternal life?
1.2 What is Virtual Brain?
A virtual brain is an artificial brain: not the natural brain itself, but a machine that can act like one. It can think like a brain, take decisions based on past experience, and respond as a natural brain would. This is possible using a supercomputer with a huge amount of storage capacity and processing power, together with an interface between the human brain and the artificial one. Through this interface, the data stored in the natural brain can be uploaded into the computer, so that the knowledge and intelligence of a person can be kept and used forever, even after that person's death.
1.3 Why we need Virtual Brain?
We are developed today because of our intelligence. Intelligence is an inborn quality that cannot be created. Some people have this quality and can think to an extent that others cannot reach. Human society is always in need of such intelligence, and of such an intelligent brain to keep.
2
But that intelligence is lost along with the body at death. The virtual brain is a solution: the brain and its intelligence would stay alive even after death. We also often face difficulty remembering things such as people's names and birthdays, the spellings of words, proper grammar, important dates, history and facts. In a busy life, everyone wants to be relaxed; can't we use a machine to assist with all of this? A virtual brain may be the solution. What if we uploaded ourselves into a computer and were simply aware within it, or even lived in a computer as a program?
1.4 How it is possible?
First, it is helpful to describe the basic ways in which a person might be uploaded into a computer. Raymond Kurzweil recently presented an interesting paper on this topic, in which he describes both invasive and noninvasive techniques. The most promising approach is the use of very small robots, or nanobots. These robots would be small enough to travel through our circulatory system. Travelling into the spine and brain, they would be able to monitor the activity and structure of our central nervous system.
They would be able to provide an interface with computers that is as close to our mind as possible while we still reside in our biological form. Nanobots could also carefully scan the structure of our brain, providing a complete readout of the connections between each neuron.
3
They would also record the current state of the brain. This information, when entered into a computer, could then continue to function as us. All that is required is a computer with enough storage space and processing power. But is the pattern and state of neuron connections in our brain truly all that makes up our conscious selves? Many people believe firmly that we possess a soul, while some very technical people believe that quantum forces contribute to our awareness. Here, however, we think technically. Note that we need not know how the brain actually functions in order to transfer it to a computer; we need only know the medium and its contents. The mystery of how we achieved consciousness in the first place, or how we maintain it, is a separate discussion. This concept may appear very difficult and complex, so we must first understand how the human brain actually works.
4
CHAPTER 2
WORKING OF NATURAL BRAIN
2.1 Getting to know more about Human Brain
The brain essentially serves as the body's information processing centre. It receives signals from sensory neurons (nerve cell bodies and their axons and dendrites) in the central and peripheral nervous systems, and in response it generates and sends new signals that instruct the corresponding parts of the body to move or react in some way. It also integrates signals received from the body with signals from adjacent areas of the brain, giving rise to perception and consciousness. The brain weighs about 1,500 grams (3 pounds) and constitutes about 2 percent of total body weight. It consists of three major divisions:
The massive paired hemispheres of the cerebrum
The brainstem, consisting of the thalamus, hypothalamus, epithalamus, subthalamus, midbrain, pons and medulla oblongata
The cerebellum
The human ability to feel, interpret and even see is controlled, in computer-like calculations, by the remarkable nervous system. The nervous system seems almost magical because we cannot see it, yet it works through electric impulses throughout the body. The nervous system is one of the world's most intricately organized electrical mechanisms.
5
Not even engineers have come close to making circuit boards and computers as delicate and precise as the nervous system. To understand this system, one has to know the three simple functions that it puts into action: sensory input, integration and motor output.
Fig. 2.1. Medial view of the left hemisphere of human brain.
2.1.1 Sensory Input
When our eyes see something or our hands touch a warm surface, the sensory cells, also known as neurons, send a message straight to the brain. This action of getting information from the surrounding environment is called sensory input, because we are putting things into the brain by way of the senses.
2.1.2 Integration
Integration is best known as the interpretation of things we have felt, tasted, and
touched with our sensory cells, also known as neurons, into responses that the body
recognizes.
6
This process is all accomplished in the brain, where many, many neurons work together to understand the environment.
2.1.3 Motor Output
Once our brain has interpreted all that we have sensed, whether by touching, tasting, or using any other sense, it sends a message through neurons to effector cells, muscle or gland cells, which actually work to perform our requests and act upon our environment.
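The three functions above form a simple cycle that can be sketched in code. The sketch below is purely illustrative: the function names, the stimulus string and the response rule are all invented for this example, not taken from the report.

```python
# Minimal, illustrative sketch of the sensory input -> integration ->
# motor output cycle described above. All names here are made up.

def sensory_input(environment):
    """Sensory cells (neurons) encode a stimulus as a message to the brain."""
    return {"stimulus": environment}

def integration(impulse):
    """Many neurons together interpret the impulse into a known response."""
    return "withdraw hand" if impulse["stimulus"] == "warm surface" else "ignore"

def motor_output(response):
    """Effector cells (muscle or gland cells) act on the interpretation."""
    return f"muscles perform: {response}"

action = motor_output(integration(sensory_input("warm surface")))
print(action)  # muscles perform: withdraw hand
```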
2.2 How we see, hear, feel, & smell?
2.2.1 Nose
Once the smell of food has reached your nose, which is lined with hairs, it
travels to an olfactory bulb, a set of sensory nerves. The nerve impulses travel through
the olfactory tract, around, in a circular way, the thalamus, and finally to the smell
sensory cortex of our brain, located between our eye and ear, where it is interpreted to
be understood and memorized by the body.
2.2.2 Eye
Seeing is one of the most pleasing senses of the nervous system. This cherished action is primarily conducted by the lens, which focuses a seen image; the vitreous humour, which bends and projects the image onto the retina; and the retina, which translates the image and light with a set of cells. The retina is at the back of the eyeball, where rods and cones, along with other cells and tissues, convert the image into nerve impulses that are transmitted along the optic nerve to the brain, where the image is kept in memory.
7
2.2.3 Tongue
A set of microscopic buds on the tongue divides everything we eat and drink into four kinds of taste: bitter, sour, salty, and sweet. These buds have taste pores, which convert the taste into a nerve impulse and send the impulse to the brain along a sensory nerve fibre. Upon receiving the message, the brain classifies the different kinds of taste. This is how we can relate the taste of one kind of food to another.
2.2.4 Ear
Once a sound wave has entered through the eardrum, it travels to a large structure called the cochlea. In this snail-like structure, the sound waves are divided into pitches. The vibrations of the pitches in the cochlea are measured by the organ of Corti, which transmits the vibration information to a nerve that sends it to the brain for interpretation and memory.
8
CHAPTER 3
BRAIN SIMULATION
A comparative discussion of the natural brain and the simulated brain is given below.

1. INPUT
Natural brain: In the body's nervous system, the neurons are responsible for message passing. The body receives input through the sensory cells, which produce electric impulses that are received by the neurons. The neurons transfer these electric impulses to the brain.
Simulated brain: In a similar way, an artificial nervous system can be created. Scientists have already created artificial neurons by replacing them with silicon chips, and it has been tested that these neurons can receive input from the sensory cells. So the electric impulses from the sensory cells can be received through the artificial neurons and sent to a supercomputer for interpretation.

2. INTERPRETATION
Natural brain: The electric impulses received by the brain from the neurons are interpreted in the brain. This interpretation is accomplished by means of certain states of many neurons.
Simulated brain: The interpretation of the electric impulses received by the artificial neurons can be done by means of a set of registers. The different values in these registers represent different states of the brain.

3. OUTPUT
Natural brain: Based on the states of the neurons, the brain sends electric impulses representing the responses, which are then received by the sensory cells of the body in order to respond. Which part of the body's sensory cells receives the response depends on the state of the neurons in the brain at that time.
Simulated brain: Similarly, based on the states of the registers, the output signal can be given to the artificial neurons in the body, which will be received by the sensory cells.

4. MEMORY
Natural brain: There are certain neurons in our brain that represent certain states permanently. When required, these states are interpreted by the brain so that we can remember past things. To remember something, we force the neurons to represent certain states of the brain permanently; for any interesting or serious matter this happens implicitly.
Simulated brain: Data can likewise be stored permanently using secondary memory. In a similar way, the required states of the registers can be stored permanently, and when required this information can be retrieved and used.

5. PROCESSING
Natural brain: When we take a decision, think about something, or make a computation, logical and arithmetic calculations are done in our neural circuitry. The stored past experience and the current input are used, and the states of certain neurons change to give the output.
Simulated brain: In a similar way, decision making can be done by the computer using stored states and the received input, by performing arithmetic and logical calculations.
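The register-based scheme in the comparison above can be sketched as a tiny state machine. This is only an illustration of the idea (registers hold the "brain state", secondary memory persists it, and a rule over the state produces the output); the class, its fields and the threshold rule are all invented for this example, not part of any actual design.

```python
# Illustrative sketch (not the project's actual design) of the comparison
# above: register values stand for brain states, secondary memory keeps
# past states, and a simple rule over the state produces the output.

class ArtificialInterpreter:
    def __init__(self):
        self.registers = {}   # current "brain state" (INTERPRETATION)
        self.memory = []      # persisted past states (MEMORY)

    def receive(self, sensor, impulse):
        """INPUT: an impulse from a sensory cell updates a register."""
        self.registers[sensor] = impulse
        self.memory.append(dict(self.registers))  # store the state

    def respond(self):
        """PROCESSING/OUTPUT: a rule over the stored state gives the signal."""
        return "signal" if any(v > 5 for v in self.registers.values()) else "idle"

brain = ArtificialInterpreter()
brain.receive("touch", 9)
print(brain.respond())  # signal
```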
CHAPTER 4
HOW THE BLUE BRAIN PROJECT WILL WORK
4.1 Goals & Objectives
The Blue Brain Project is the first comprehensive attempt to reverse-engineer
the mammalian brain, in order to understand brain function and dysfunction through
detailed simulations. The mission in undertaking The Blue Brain Project is to gather
all existing knowledge of the brain, accelerate the global research effort of reverse
engineering the structure and function of the components of the brain, and to build a
complete theoretical framework that can orchestrate the reconstruction of the brain of
mammals and man from the genetic to the whole brain levels, into computer models
for simulation, visualization and automatic knowledge archiving by 2015.
Biologically accurate computer models of mammalian and human brains
could provide a new foundation for understanding functions and malfunctions of the
brain and for a new generation of information-based, customized medicine.
4.2 Architecture of Blue Gene
Blue Gene/L is built using system-on-a-chip technology in which all
functions of a node (except for main memory) are integrated onto a single
application-specific integrated circuit (ASIC). This ASIC includes 2 PowerPC 440
cores running at 700 MHz. Associated with each core is a 64-bit “double” floating
point unit (FPU) that can operate in single instruction, multiple data (SIMD) mode.
11
Each (single) FPU can execute up to 2 “multiply-adds” per cycle, which
means that the peak performance of the chip is 8 floating point operations per cycle
(4 under normal conditions, with no use of SIMD mode). This leads to a peak
performance of 5.6 billion floating point operations per second (gigaFLOPS or
GFLOPS) per chip or node, or 2.8 GFLOPS in non- SIMD mode. The two CPUs
(central processing units) can be used in “co-processor” mode (resulting in one CPU
and 512 MB RAM (random access memory) for computation, the other CPU being
used for processing the I/O (input/output) of the main CPU) or in “virtual node”
mode (in which both CPUs with 256 MB each are used for computation). So, the
aggregate performance of a processor card in virtual node mode is: 2 x node = 2 x
2.8 GFLOPS = 5.6 GFLOPS, and its peak performance (optimal use of double FPU)
is: 2 x 5.6 GFLOPS = 11.2 GFLOPS. A rack (1,024 nodes = 2,048 CPUs) therefore
has 2.8 teraFLOPS or TFLOPS, and a peak of 5.6 TFLOPS. The Blue Brain Project’s
Blue Gene is a 4-rack system that has 4,096 nodes, equal to 8,192 CPUs, with a peak
performance of 22.4 TFLOPS. A 64-rack machine should provide 180 TFLOPS, or
360 TFLOPS at peak performance.
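The node-level arithmetic above can be checked directly: 700 MHz, times 4 floating point operations per FPU cycle (2 multiply-adds), times 2 cores, gives the quoted per-node peak.

```python
# Checking the per-node peak-performance arithmetic quoted above.
clock_hz = 700e6                 # PowerPC 440 core clock
flops_per_cycle_per_core = 4     # 2 multiply-adds = 4 FLOPs (SIMD mode)
cores_per_node = 2

peak_per_node = clock_hz * flops_per_cycle_per_core * cores_per_node
print(peak_per_node / 1e9)       # 5.6  (GFLOPS per node, SIMD peak)

non_simd = peak_per_node / 2     # no use of the double FPU's SIMD mode
print(non_simd / 1e9)            # 2.8  (GFLOPS per node)

card_peak = 2 * peak_per_node    # processor card: 2 nodes, virtual node mode
print(card_peak / 1e9)           # 11.2 (GFLOPS)
```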
12
Fig. 4.1. The Blue Gene/L supercomputer architecture
4.3 Modelling the Microcircuit
The scheme shows the minimal essential building blocks required to recon-
struct a neural microcircuit. Microcircuits are composed of neurons and synaptic
connections. To model neurons, the three-dimensional morphology, ion channel
composition, and distributions and electrical properties of the different types of
neuron are required, as well as the total numbers of neurons in the microcircuit and
the relative proportions of the different types of neuron. To model synaptic
connections, the physiological and pharmacological properties of the different types
of synapse that connect any two types of neuron are required, in addition to statistics
on which part of the axonal arborization is used (presynaptic innervation pattern) to
contact which regions of the target neuron (postsynaptic innervations pattern), how
many synapses are involved in forming connections, and the connectivity statistics
between any two types of neuron.
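The building blocks listed above (neuron morphology, ion-channel composition, counts and proportions; synapse physiology, innervation patterns and connectivity statistics) amount to a data model. The sketch below is one hypothetical way to organize them; the class names, fields and example values are mine, not the project's schema.

```python
# A minimal, illustrative data model for the microcircuit building blocks
# listed above. All names and numbers are invented for this example.
from dataclasses import dataclass

@dataclass
class NeuronType:
    name: str              # e.g. "layer 5 thick tufted pyramidal"
    morphology_id: str     # reference to a 3-D reconstruction
    ion_channels: dict     # channel name -> density (arbitrary units here)
    count_in_circuit: int  # total neurons of this type in the microcircuit

@dataclass
class SynapseType:
    pre: str                      # presynaptic neuron type
    post: str                     # postsynaptic neuron type
    connection_probability: float # connectivity statistic for the pair
    synapses_per_connection: int  # synapses forming one connection

microcircuit = {
    "neurons": [NeuronType("L5_TTPC", "morph_001",
                           {"Na_t": 0.2, "K_v": 0.1}, 2500)],
    "synapses": [SynapseType("L5_TTPC", "L5_TTPC", 0.1, 5)],
}
print(microcircuit["neurons"][0].name)  # L5_TTPC
```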
13
Fig. 4.2. Elementary building blocks of neural microcircuits.
Neurons receive inputs from thousands of other neurons, which are intricately
mapped onto different branches of highly complex dendritic trees and require tens of
thousands of compartments to accurately represent them. There is therefore a
minimal size of a microcircuit and a minimal complexity of a neuron’s morphology
that can fully sustain a neuron. A massive increase in computational power is
required to make this quantum leap - an increase that is provided by IBM’s Blue
Gene supercomputer. By exploiting the computing power of Blue Gene, the Blue
Brain Project aims to build accurate models of the mammalian brain from first
principles.
14
The first phase of the project is to build a cellular-level (as opposed to a genetic- or
molecular-level) model of a 2-week-old rat somatosensory neocortex corresponding
to the dimensions of a neocortical column (NCC) as defined by the dendritic
arborizations of the layer 5 pyramidal neurons.
The combination of infrared differential interference microscopy in brain
slices and the use of multi-neuron patch-clamping allowed the systematic
quantification of the molecular, morphological and electrical properties of the
different neurons and their synaptic pathways in a manner that would allow an
accurate reconstruction of the column. Over the past 10 years, the laboratory has
prepared for this reconstruction by developing the multi-neuron patch-clamp
approach, recording from thousands of neocortical neurons and their synaptic
connections, and developing quantitative approaches to allow a complete numerical
breakdown of the elementary building blocks of the NCC.
The recordings have mainly been in the 14-16-day-old rat somatosensory
cortex, which is a highly accessible region on which many researchers have
converged following a series of pioneering studies driven by Bert Sakmann. Much of
the raw data is located in our databases, but a major initiative is underway to make
all these data freely available in a publicly accessible database. The so-called
‘blueprint’ of the circuit, although not entirely complete, has reached a sufficient level of
refinement to begin the reconstruction at the cellular level. Highly quantitative data
are available for rats of this age, mainly because visualization of the tissue is optimal
from a technical point of view.
15
This age also provides an ideal template because it can serve as a starting point from
which to study maturation and ageing of the NCC. As NCCs show a high degree of
stereotypy, the region from which the template is built is not crucial, but a sensory
region is preferred because these areas contain a prominent layer 4 with cells
specialized to receive input to the neocortex from the thalamus; this will also be
required for later calibration with in vivo experiments.
The NCC should not be overly specialized, because this could make
generalization to other neocortical regions difficult, but areas such as the barrel
cortex do offer the advantage of highly controlled in vivo data for comparison. The
mouse might have been the best species to begin with, because it offers a spectrum of
molecular approaches with which to explore the circuit, but mouse neurons are small,
which prevents the detailed dendritic recordings that are important for modelling the
nonlinear properties of the complex dendritic trees of pyramidal cells (75-80% of the
neurons). The image shows the microcircuit in various stages of reconstruction. Only
a small fraction of the reconstructed, three-dimensional neurons is shown. Red indicates
the dendritic and blue the axonal arborizations. The columnar structure illustrates the
layer definition of the NCC. From left to right:
The microcircuits for layers 2, 3, 4 and 5.
A single thick tufted layer 5 pyramidal neuron located within the column.
One pyramidal neuron in layer 2, a small pyramidal neuron in layer 5 and the large
thick tufted layer 5 pyramidal neuron.
An image of the NCC, with neurons located in layers 2 to 5.
16
Fig. 4.3. Reconstructing the neocortical column.
4.4 Simulating the Microcircuit
Once the microcircuit is built, the exciting work of making the circuit function can
begin. All the 8192 processors of the Blue Gene are pressed into service, in a
massively parallel computation solving the complex mathematical equations that
govern the electrical activity in each neuron when a stimulus is applied. As the
electrical impulse travels from neuron to neuron, the results are communicated via
inter-processor communication (MPI). Currently, the time required to simulate the
circuit is about two orders of magnitude larger than the actual biological time
simulated.
17
The Blue Brain team is working to streamline the computation so that the circuit can
function in real time - meaning that 1 second of biological activity can be modelled in
one second of computation.
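The parallel loop described above (each processor advances its neuron's membrane equation for a timestep, then spikes are exchanged over MPI) can be illustrated with a serial stand-in. Here a leaky integrate-and-fire neuron stands in for the project's detailed compartmental models, the spike-routing rule is invented, and the list of voltages plays the role of the processors; none of this is the project's actual code.

```python
# Serial, illustrative stand-in for the parallel simulation loop above.
# A leaky integrate-and-fire neuron replaces the real compartmental models;
# the "exchange" step plays the role MPI plays on Blue Gene.

DT, TAU, V_TH, V_RESET = 0.1, 10.0, 1.0, 0.0

def step(v, input_current):
    """One forward-Euler step of dv/dt = (-v + I) / tau."""
    return v + DT * (-v + input_current) / TAU

voltages = [0.0] * 4              # four "processors", one neuron each
inputs = [2.0, 0.0, 0.0, 0.0]     # external stimulus drives neuron 0
total_spikes = 0

for t in range(1000):
    spikes = []
    for i, v in enumerate(voltages):
        v = step(v, inputs[i])
        if v >= V_TH:             # threshold crossed: emit a spike
            spikes.append(i)
            v = V_RESET
        voltages[i] = v
    total_spikes += len(spikes)
    # "MPI exchange": a spike becomes input current to the next neuron
    inputs = [2.0, 0.0, 0.0, 0.0]
    for i in spikes:
        inputs[(i + 1) % 4] += 1.5

print(total_spikes > 0)  # True: the driven neuron fires repeatedly
```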
4.5 Interpreting the Results
Running the Blue Brain simulation generates huge amounts of data. Analyses
of individual neurons must be repeated thousands of times. And analyses dealing
with the network activity must deal with data that easily reaches hundreds of
gigabytes per second of simulation. Using massively parallel computers the data can
be analyzed where it is created (server-side analysis for experimental data, online
analysis during simulation).
Given the geometric complexity of the column, a visual exploration of the
circuit is an important part of the analysis. Mapping the simulation data onto the
morphology is invaluable for an immediate verification of single cell activity as well
as network phenomena. Architects at EPFL have worked with the Blue Brain
developers to design a visualization interface that translates the Blue Gene data into
a 3D visual representation of the column. A different supercomputer is used for this
computationally intensive task.
The visualization of the neurons’ shapes is a challenging task given the fact
that a column of 10,000 neurons rendered in high quality mesh accounts for
essentially 1 billion triangles for which about 100GB of management data is
required. Simulation data with a resolution of electrical compartments for each
neuron accounts for another 150GB.
18
As the electrical impulse travels through the column, neurons light up and
change color as they become electrically active. A visual interface makes it possible
to quickly identify areas of interest that can then be studied more extensively using
further simulations. A visual representation can also be used to compare the
simulation results with experiments that show electrical activity in the brain.
4.6 Data Manipulation Cascade
Building the Blue Column requires a series of data manipulations. The first
step is to parse each three-dimensional morphology and correct errors due to the in
vitro preparation and reconstruction. The repaired neurons are placed in a database
from which statistics for the different anatomical classes of neurons are obtained.
These statistics are used to clone an indefinite number of neurons in each class to
capture the full morphological diversity.
The next step is to take each neuron and insert ion channel models in order to
produce the array of electrical types. The field has reached a sufficient stage of
convergence to generate efforts to classify neurons, such as the Petilla Convention - a
conference held in October 2005 on anatomical and electrical types of neocortical
interneuron, established by the community. Single-cell gene expression studies of
neocortical interneurons now provide detailed predictions of the specific
combinations of more than 20 ion channel genes that underlie electrical diversity. A
database of biologically accurate Hodgkin-Huxley ion channel models is being
produced.
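The kind of model such a Hodgkin-Huxley channel database stores can be sketched with the classic squid-axon potassium n-gate: voltage-dependent opening and closing rates give a steady-state open fraction at each voltage. The rate constants below are the textbook squid-axon values, shown only for illustration; the project's models are fitted to neocortical data.

```python
# Sketch of Hodgkin-Huxley channel kinetics: the classic squid-axon
# potassium n-gate (textbook rate constants, illustrative only).
import math

def alpha_n(v):
    """Opening rate (1/ms) at membrane voltage v (mV)."""
    return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))

def beta_n(v):
    """Closing rate (1/ms) at membrane voltage v (mV)."""
    return 0.125 * math.exp(-(v + 65.0) / 80.0)

def n_inf(v):
    """Steady-state open fraction of the gate at voltage v."""
    a, b = alpha_n(v), beta_n(v)
    return a / (a + b)

# Depolarization opens the gate, raising K+ conductance (g = g_max * n**4)
print(n_inf(-65.0) < n_inf(-20.0))  # True
```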
19
The simulator NEURON is used with automated fitting algorithms running
on Blue Gene to insert ion channels and adjust their parameters to capture the
specific electrical properties of the different electrical types found in each anatomical
class. The statistical variations within each electrical class are also used to generate
subtle variations in discharge behaviour in each neuron. So, each neuron is
morphologically and electrically unique. Rather than taking 10,000 days to fit each neuron’s
electrical behaviour with a unique profile, density and distribution of ion channels,
applications are being prepared to use Blue Gene to carry out such a fit in a day.
These functionalized neurons are stored in a database. The three-dimensional
neurons are then imported into Blue Builder, a circuit builder that loads neurons into
their layers according to a “recipe” of neuron numbers and proportions. A collision
detection algorithm is run to determine the structural positioning of all axo-dendritic
touches, and neurons are jittered and spun until the structural touches match
experimentally derived statistics.
Probabilities of connectivity between different types of neuron are used to
determine which neurons are connected, and all axo-dendritic touches are converted
into synaptic connections. The manner in which the axons map onto the dendrites
between specific anatomical classes and the distribution of synapses received by a
class of neurons are used to verify and fine-tune the biological accuracy of the
synaptic mapping between neurons. It is therefore possible to place 10-50 million
synapses in accurate three-dimensional space, distributed on the detailed three-
dimensional morphology of each neuron.
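The touch-to-synapse step described above can be sketched as follows: structural axo-dendritic touches are kept as synapses only for neuron pairs selected by a connection probability. The data layout, probability and touch positions below are made up for the example; the real pipeline fine-tunes against experimentally derived statistics.

```python
# Illustrative sketch of converting axo-dendritic touches into synapses
# using a connection probability. All numbers here are invented.
import random

random.seed(0)  # deterministic for the example

def connect(touches, p_connect):
    """touches: {(pre, post): [touch positions]} -> list of synapses."""
    synapses = []
    for (pre, post), positions in touches.items():
        if random.random() < p_connect:   # this pair is connected
            for pos in positions:         # every touch becomes a synapse
                synapses.append((pre, post, pos))
    return synapses

touches = {(0, 1): [0.2, 0.5, 0.9], (0, 2): [0.4], (1, 2): [0.1, 0.7]}
syns = connect(touches, p_connect=0.5)
print(len(syns))  # 2: only the (1, 2) pair is selected with this seed
```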
20
The synapses are functionalized according to the synaptic parameters for the
different classes of synaptic connection, within the statistical variations of each class;
dynamic synaptic models are used to simulate transmission, and synaptic learning
algorithms are introduced to allow plasticity. The distance from the cell body to each
synapse is used to compute the axonal delay, and the circuit configuration is
exported.
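The delay computation mentioned above reduces to distance divided by conduction velocity. The sketch below uses an assumed velocity of 0.5 m/s, a plausible figure for thin unmyelinated cortical axons chosen purely for illustration, not a number from the project.

```python
# The axonal-delay step described above, with assumed numbers:
# delay = path distance from soma to synapse / conduction velocity.

def axonal_delay_ms(distance_um, velocity_m_per_s=0.5):
    # 0.5 m/s is an assumed unmyelinated-axon velocity (illustrative only)
    distance_m = distance_um / 1e6               # micrometres -> metres
    return distance_m / velocity_m_per_s * 1e3   # seconds -> milliseconds

print(axonal_delay_ms(500))  # 1.0: a 500 um axonal path gives a 1 ms delay
```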
The configuration file is read by a NEURON subroutine that calls up each
neuron and effectively inserts the location and functional properties of every synapse
on the axon, soma and dendrites. One neuron is then mapped onto each processor
and the axonal delays are used to manage communication between neurons and
processors. Effectively, processors are converted into neurons, and MPI
(message-passing interface) based communication cables are converted into axons
interconnecting the neurons - so the entire Blue Gene is essentially converted into a
neocortical microcircuit. We developed two software programs for simulating such
large-scale networks with morphologically complex neurons.
A new MPI version of NEURON has been adapted by Michael Hines to run
on Blue Gene. The second simulator uses the MPI messaging component of the
large-scale Neocortical Simulator (NCS), which was developed by Philip Goodman,
to manage the communication between NEURON-simulated neurons distributed on
different processors.
21
The latter simulator will allow embedding of a detailed NCC model into a
simplified large-scale model of the whole brain. Both of these simulators have
already been tested, produce identical results and can simulate tens of thousands of
morphologically and electrically complex neurons (as many as 10,000 compartments
per neuron with more than a dozen Hodgkin-Huxley ion channels per compartment).
Up to 10 neurons can be mapped onto each processor to allow simulations of
the NCC with as many as 100,000 neurons. Optimization of these algorithms could
allow simulations to run at close to real time. The circuit configuration is also read
by a graphic application, which renders the entire circuit in various levels of textured
graphic formats.
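A quick back-of-envelope check of the capacity figure above. The processor count (8,192 for the initial Blue Gene/L installation) is an assumption used only for illustration:

```python
# Illustrative capacity check; the processor count is an assumption.
processors = 8192                 # assumed Blue Gene/L node count
neurons_per_processor = 10        # upper bound quoted in the text
max_neurons = processors * neurons_per_processor
print(max_neurons)  # 81920 -- on the order of 100,000 neurons
```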
Real-time stereo visualization applications are programmed to run on the
terabyte SMP (shared memory processor) Extreme series from SGI (Silicon
Graphics, Inc.). The output from Blue Gene (any parameter of the model) can be fed
directly into the SGI system to perform in silico imaging of the activity of the inner
workings of the NCC. Eventually, the simulation of the NCC will also include the
vasculature, as well as the glial network, to allow capture of neuron-glia interactions.
Simulations of extracellular currents and field potentials, and the emergent
electroencephalogram (EEG) activity will also be modelled.
4.7 Whole Brain Simulations
The main limitations for digital computers in the simulation of biological
processes are the extreme temporal and spatial resolution demanded by some
biological processes, and the limitations of the algorithms that are used to model
biological processes. If each atomic collision is simulated, the most powerful
supercomputers still take days to simulate a microsecond of protein folding, so it is,
of course, not possible to simulate complex biological systems at the atomic scale.
However, models at higher levels, such as the molecular or cellular levels, can
capture lower-level processes and allow complex large-scale simulations of
biological processes.
The Blue Brain Project’s Blue Gene can simulate an NCC of up to 100,000
highly complex neurons at the cellular level, or as many as 100 million simple
neurons (about the same number of neurons found in a mouse brain). However, simulating
neurons embedded in microcircuits, microcircuits embedded in brain regions, and
brain regions embedded in the whole brain as part of the process of understanding
the emergence of complex behaviors of animals is an inevitable progression in
understanding brain function and dysfunction, and the question is whether whole-
brain simulations are at all possible. Computational power needs to increase about 1-
million-fold before we will be able to simulate the human brain, with 100 billion
neurons, at the same level of detail as the Blue Column. Algorithmic and simulation
efficiency (which ensure that all possible FLOPS are exploited) could reduce this
requirement by two to three orders of magnitude.
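The million-fold figure above follows directly from the neuron counts; a small worked check:

```python
# Worked check of the scaling estimate in the text.
human_brain_neurons = 100e9   # ~100 billion neurons
blue_column_neurons = 1e5     # NCC simulated at Blue Column detail
scale_up = human_brain_neurons / blue_column_neurons
print(f"{scale_up:.0e}")      # 1e+06 -- about a million-fold

# Algorithmic/simulation efficiency could cut this by 10^2 to 10^3:
print(f"{scale_up / 1e3:.0e} to {scale_up / 1e2:.0e}")  # 1e+03 to 1e+04
```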
Simulating the NCC could also act as a test-bed to refine algorithms required to
simulate brain function, which can be used to produce field programmable gate array
(FPGA)-based chips. FPGAs could increase computational speeds by as much as two
orders of magnitude. The FPGAs could, in turn, provide the testing ground for the
production of specialized NEURON solver application-specific integrated circuits
(ASICs) that could further increase computational speed by another one to two
orders of magnitude.
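Multiplying out the orders of magnitude quoted above shows why simulation "in principle with current technology" is plausible. All of these factors are the text's rough estimates, not measurements:

```python
# Combining the rough speedup estimates from the text (illustrative only).
required = 1e6                  # million-fold increase needed (see above)
efficiency = (1e2, 1e3)         # algorithmic/simulation efficiency gains
fpga = 1e2                      # FPGA speedup, up to two orders of magnitude
asic = (1e1, 1e2)               # ASIC speedup on top of FPGAs
low = efficiency[0] * fpga * asic[0]    # 1e5: pessimistic combination
high = efficiency[1] * fpga * asic[1]   # 1e7: optimistic combination
print(low <= required <= high)  # True: the needed factor falls in range
```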
It could therefore be possible, in principle, to simulate the human brain
even with current technology. The computer industry is facing what is known as a
discontinuity, with increasing processor speed leading to unacceptably high power
consumption and heat production. This is pushing a qualitatively new transition in
the types of processor to be used in future computers. These advances in computing
should begin to make genetic- and molecular-level simulations possible. Fig. 4.4
summarizes the software applications and data manipulation required to model the
brain with biological accuracy.
Fig. 4.4. The data manipulation cascade
Experimental results that provide the elementary building blocks
of the microcircuit are stored in a database. Before three-dimensional neurons are
modelled electrically, the morphology is parsed for errors, and for repair of
arborisations damaged during slice preparation.
The morphological statistics for a class of neurons are used to clone multiple copies
of neurons to generate the full morphological diversity and the thousands of neurons
required in the simulation. A spectrum of ion channels is inserted, and conductances
and distributions are altered to fit the neuron’s electrical properties according to
known statistical distributions, to capture the range of electrical classes and the
uniqueness of each neuron’s behaviour (model fitting/electrical capture).
A circuit builder is used to place neurons within a three-dimensional
column, to perform axo-dendritic collisions and, using structural and functional
statistics of synaptic connectivity, to convert a fraction of axo-dendritic touches into
synapses. The circuit configuration is read by NEURON, which calls up each
modelled neuron and inserts the several thousand synapses onto appropriate cellular
locations. The circuit can be inserted into a brain region using the brain builder. An
environment builder is used to set up the stimulus and recording conditions. Neurons
are mapped onto processors, with integer numbers of neurons per processor. The
output is visualized, analysed and/or fed into real-time algorithms for feedback
stimulation.
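The data manipulation cascade above can be summarized as a pipeline of stages. In this minimal sketch the stage names follow the text, but the function bodies are placeholders, not project code:

```python
# Sketch of the Fig. 4.4 data-manipulation cascade as a pipeline.
# Stage names follow the text; the bodies are placeholders.

def repair_morphology(raw):
    """Parse for errors; repair arborisations damaged in slice preparation."""
    return {**raw, "repaired": True}

def clone_neurons(template, n):
    """Clone copies to generate the full morphological diversity."""
    return [dict(template, clone_id=i) for i in range(n)]

def fit_electrical_model(neuron):
    """Insert ion channels and fit conductances (electrical capture)."""
    return dict(neuron, channels_fitted=True)

def build_circuit(neurons):
    """Place neurons in 3D; convert axo-dendritic touches into synapses."""
    return {"neurons": neurons, "synapses": len(neurons) * 1000}

raw = {"morphology": "L5 pyramidal"}
neurons = [fit_electrical_model(n)
           for n in clone_neurons(repair_morphology(raw), 5)]
circuit = build_circuit(neurons)
print(circuit["synapses"])  # 5000
```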
CHAPTER 5
APPLICATIONS OF BLUE BRAIN PROJECT
5.1 What can we learn from Blue Brain?
Detailed, biologically accurate brain simulations offer the opportunity to
answer fundamental questions about the brain that cannot be addressed with
any current experimental or theoretical approach. These include:
5.1.1 Defining functions of the basic elements
Despite a century of experimental and theoretical research, we are unable to
provide a comprehensive definition of the computational function of different ion
channels, receptors, neurons or synaptic pathways in the brain. A detailed model will
allow fine control of any of these elements and allow a systematic investigation of
their contribution to the emergent behaviour.
5.1.2 Understanding complexity
At present, detailed, accurate brain simulations are the only approach that
could allow us to explain why the brain needs to use many different ion channels,
neurons and synapses, a spectrum of receptors, and complex dendritic and axonal
arborizations, rather than the simplified, uniform types found in many models.
5.1.3 Exploring the role of dendrites
This is the only current approach to explore the dendritic object theory, which
proposes that three-dimensional voltage objects are generated continuously across
dendritic segments regardless of the origin of the neurons, and that spikes are used to
maintain such dendritic objects.
5.1.4 Revealing functional diversity
Most models engineer a specific function, whereas a spectrum of functions
might be possible with a biologically based design. This approach also offers the
possibility of determining the manner in which representations of information are
imprinted in the circuit for memory storage and retrieval, and could reveal the part
that different types of neuron play in these crucial functions.
5.1.5 Tracking the emergence of intelligence
This approach offers the possibility to re-trace the steps taken by a network of
neurons in the emergence of electrical states used to embody representations of the
organism and its world.
5.1.6 Identifying points of vulnerability
Although the neocortex confers immense computational power to mammals,
defects are common, with catastrophic cognitive effects. At present, a detailed model
is the only approach that could produce a list of the most vulnerable circuit
parameters, revealing likely candidates for dysfunction and targets for treatment.
5.1.7 Simulating disease and developing treatments
Such simulations could be used to test hypotheses for the pathogenesis of
neurological and psychiatric diseases, and to develop and test new treatment
strategies.
5.1.8 Providing a circuit design platform
Detailed models could reveal powerful circuit designs that could be
implemented in silicon chips for use as intelligent devices in industry.
5.2 Applications of Blue Brain
5.2.1 Gathering and Testing 100 Years of Data
The most immediate benefit is to provide a working model into which the
past 100 years of knowledge about the microstructure and workings of the
neocortical column can be gathered and tested. The Blue Column will therefore also
produce a virtual library to explore the microarchitecture of the neocortex in 3D
and access all key research relating to its structure and function.
5.2.2 Cracking the Neural Code
The Neural Code refers to how the brain builds objects using electrical
patterns. In the same way that the neuron is the elementary cell for computing in the
brain, the NCC is the elementary network for computing in the neocortex. Creating
an accurate replica of the NCC, one which faithfully reproduces the emergent electrical
dynamics of the real microcircuit, is an absolute requirement for revealing how the
neocortex processes, stores and retrieves information.
5.2.3 Understanding Neocortical Information Processing
The power of an accurate simulation lies in the predictions that can be
generated about the neocortex. Indeed, iterations between simulations and
experiments are essential to build an accurate copy of the NCC. These iterations are
therefore expected to reveal the function of individual elements (neurons, synapses,
ion channels, and receptors), pathways (mono-synaptic, disynaptic, multisynaptic
loops) and physiological processes (functional properties, learning, reward, goal-
oriented behaviour).
5.2.4 A Novel Tool for Drug Discovery for Brain Disorders
Understanding the functions of different elements and pathways of the NCC will
provide a concrete foundation to explore the cellular and synaptic bases of a wide
spectrum of neurological and psychiatric diseases. The impact of receptor, ion
channel, cellular and synaptic deficits could be tested in simulations and the optimal
experimental tests can be determined.
5.2.5 A Global Facility
A software replica of an NCC will allow researchers to explore hypotheses of
brain function and dysfunction, accelerating research. Simulation runs could
determine which parameters should be used and measured in the experiments. An
advanced 2D, 3D and immersive 3D visualization system will allow “imaging” of
many aspects of neural dynamics during processing, storage and retrieval of
information. Such imaging experiments may be impossible in reality or may be
prohibitively expensive to perform.
5.2.6 A Foundation for Whole Brain Simulations
With current and foreseeable future computer technology it seems unlikely
that a mammalian brain can be simulated with full cellular and synaptic complexity
(above the molecular level). An accurate replica of an NCC is therefore required in
order to generate reduced models that retain critical functions and computational
capabilities, which can be duplicated and interconnected to form neocortical brain
regions. Knowledge of the NCC architecture can be transferred to facilitate
reconstruction of subcortical brain regions.
5.2.7 A Foundation for Molecular Modeling of Brain Function
An accurate cellular replica of the neocortical column will provide the first
and essential step to a gradual increase in model complexity moving towards a
molecular level description of the neocortex with biochemical pathways being
simulated.
A molecular level model of the NCC will provide the substrate for interfacing
gene expression with the network structure and function. The NCC lies at the
interface between the genes and complex cognitive functions. Establishing this link
will allow predictions of the cognitive consequences of genetic disorders and allow
reverse engineering of cognitive deficits to determine the genetic and molecular
causes. This level of simulation will become a reality with the most advanced phase
of Blue Gene development.
CHAPTER 6
ADVANTAGES AND LIMITATIONS
6.1 Advantages
We could remember things without any effort.
Decisions could be made without the presence of the person.
Even after a person’s death, his or her intelligence could still be used.
The activity of different animals could be understood; that is, by interpreting the
electric impulses from an animal’s brain, its thinking could be understood.
It would allow the deaf to hear via direct nerve stimulation, and would also be
helpful for many psychological disorders: by downloading the contents of a brain
that had been uploaded into a computer, a person could be relieved of mental
illness.
6.2 Limitations
Further, these technologies will open up many new dangers, and we will be
susceptible to new forms of harm.
We would become dependent upon computer systems.
Others may use technical knowledge against us.
Computer viruses would pose an increasingly critical threat.
The real threat, however, is the fear that people will have of new technologies.
That fear may culminate in large-scale resistance. Clear evidence of this type of
fear is found today with respect to human cloning.
CHAPTER 7
FUTURE PERSPECTIVE
The synthesis era in neuroscience started with the launch of the Human Brain
Project and is an inevitable phase triggered by a critical amount of fundamental data.
The data set does not need to be complete before such a phase can begin. Indeed, it is
essential to guide reductionist research into the deeper facets of brain structure and
function. As a complement to experimental research, it offers rapid assessment of the
probable effect of a new finding on pre-existing knowledge, which can no longer be
managed completely by any one researcher.
Detailed models will probably become the final form of databases that are
used to organize all knowledge of the brain and allow hypothesis testing, rapid
diagnoses of brain malfunction, as well as development of treatments for
neurological disorders. In short, we can hope to learn a great deal about brain
function and dysfunction from accurate models of the brain. The time taken to build
detailed models of the brain depends on the level of detail that is captured. Indeed,
the first version of the Blue Column, which has 10,000 neurons, has already been
built and simulated; it is the refinement of the detailed properties and calibration of
the circuit that takes time.
A model of the entire brain at the cellular level will probably take the next
decade. There is no fundamental obstacle to modelling the brain, and it is therefore
likely that we will have detailed models of mammalian brains, including that of man,
in the near future. Even if overestimated by a decade or two, this is still just a ‘blink
of an eye’ in relation to the evolution of human civilization. As with Deep Blue, Blue
Brain will allow us to challenge the foundations of our understanding of intelligence
and generate new theories of consciousness.
CHAPTER 8
CONCLUSION
In conclusion, we may be able to transfer ourselves into computers at some
point. Most arguments against this outcome are seemingly easy to circumvent: they
are either simple-minded, or simply require further time for the technology to
mature. The only serious threats raised may also be overcome by combining
biological and digital technologies.
REFERENCES
[1] 30th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society (EMBS 2008), 2008.
[2] Henry Markram, “The Blue Brain Project”, Nature Reviews
Neuroscience, February 2006.
[3] “Simulated brain closer to thought”, BBC News, 22 April 2009.
[4] “Project Milestones”, Blue Brain.
http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085
[5] Duncan Graham-Rowe, “Mission to build a simulated brain
begins”, New Scientist, June 2005, pp. 1879-85.
[6] Blue Gene: http://www.research.ibm.com/bluegene
[7] The Blue Brain Project: http://bluebrainproject.epfl.ch