Brain Project Proposal
1. Brain Perception Project Proposal
As computer science develops rapidly, computer-generated reality, in other words digital virtual reality, is reaching into our everyday lives more and more comprehensively. This might be exactly what the "future world" looked like to people living in the 1960s.
At the same time, I keep wondering: how do we ultimately perceive the environments around us? Is the analog world really so different from the digital one? Is there another layer of the "virtual"? What do the concepts of "true" and "false" mean to our brains? What will "virtual reality" be like 20 or 50 years from now?
Inspired by a quote from Dr. Joe Dispenza, a neuroscientist, that "It's not your eyes that see things, but your brain that sees things," I would like to propose a project that brings perception down to the brain level. The project simulates the way the brain functions in the process of human perception, hopefully covering vision, hearing, olfaction, taste, and touch; it shows how people can alter or manipulate perception by controlling the way the brain functions while the input signal stays exactly the same; and it raises people's awareness of how we may perceive things in the future. At the current stage I would like to start with vision, since by a commonly cited estimate about 95% of the information we take in during everyday life comes through vision.
The actual work would be an interactive foam or plastic model of a human brain with two kinds of inputs and one kind of output. In the brain's visual cortex, which includes areas V1, V2, V3, V4, and V5, cells are tuned to simple properties such as orientation, spatial frequency, and color. Theoretically, filters modeled on these areas can carry out the neuronal processing of spatial frequency, orientation, motion, direction, speed (and thus temporal frequency), and many other spatiotemporal features. So one input would be an actual sequence of images for the "eyes" to see, and the other would be signals sent by participants to manipulate those areas, which determine what the brain will "see." The output would be the altered sequence, changing in real time, something that does not really exist in our brains. So basically, participants are allowed to reconstruct a new sequence of images by changing the way the brain "sees" things, and then project it out.
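
To make this concrete, a minimal software sketch of the idea might look like the following, assuming a Gabor filter bank as a stand-in for the orientation- and spatial-frequency-tuned cells of V1. The names gabor_kernel and simulate_v1 and the parameters theta, frequency, and sigma are illustrative, not part of the proposal itself.

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(theta, frequency, sigma=4.0, size=21):
        # Gabor kernel tuned to one orientation (theta, in radians) and one
        # spatial frequency (cycles per pixel): a crude model of a V1 cell.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        x_rot = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        carrier = np.cos(2.0 * np.pi * frequency * x_rot)
        return envelope * carrier

    def simulate_v1(frame, theta, frequency):
        # One tuned "channel" of seeing: a participant's control signals
        # would change theta and frequency here in real time.
        response = fftconvolve(frame, gabor_kernel(theta, frequency), mode="same")
        return np.abs(response)  # rectified response, like a firing rate

    # Two inputs, one output: a sequence of frames plus the participant's
    # settings go in; the altered sequence comes out for projection.
    sequence = [np.random.rand(128, 128) for _ in range(3)]
    altered = [simulate_v1(f, theta=np.pi / 4, frequency=0.1) for f in sequence]

A real installation would of course replace the random frames with a live camera feed and map the participants' signals onto the filter parameters; the point of the sketch is only that the same input, filtered differently, "looks" different.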
Well, I’m also looking for actual related data. If I can find them, I’ll be happy to
use them to make a data re-construction rather than simulation as well.