The document discusses a new "Sine Wave theory of Pixel" proposed by Mutawaqqil Billah. The theory proposes designing pixels as sine waves rather than rectangles to better match how light comes into the eye as waves. This new pixel design could eliminate 90-95% of unnecessary processing points and greatly improve visual quality for visual devices, computer vision, and robot vision. The document contains comments from discussions on this theory, with Billah providing additional details on the potential benefits and applications of the sine wave pixel design.
The document discusses using binary classifiers to improve speech recognition of unknown persons. It proposes using a series of binary classifiers that sequentially reduce variations in speech data by distinguishing factors like gender, age, location, etc. This prepares the data for a main classifier by narrowing the variations, making recognition of words and conversion to text easier compared to directly training a single classifier on all variations. Binary classifiers are also suggested for other pattern recognition problems involving large datasets.
This document discusses pixel-based designs using sine wave theory. It shows pixel samples, some spanning more than a single row, that undulate in random directions with varying wavelengths and amplitudes. The document includes figures of 79 pixel design samples and information about the author.
This document proposes a method for storing visual, audio, and memory data in a nonlinear network of sections and stations. Data would be inserted randomly into different sized sections. Stations would be connected to sections and have individual rankings. When searching for data, stations with higher rankings would be queried first. If a station finds data, its ranking increases, and if not, its ranking decreases. This network aims to allow frequently searched data to be found quickly through higher ranked connections.
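The ranking dynamics described here can be sketched in a few lines of Python; the station names, section contents, and rank increments below are illustrative assumptions, not details from the document:

```python
class Station:
    def __init__(self, name, section):
        self.name = name
        self.rank = 0
        self.section = set(section)  # data items reachable from this station

def search(stations, item):
    """Query stations in descending rank order; adjust ranks on hit/miss."""
    for st in sorted(stations, key=lambda s: s.rank, reverse=True):
        if item in st.section:
            st.rank += 1   # found: this station is queried earlier next time
            return st.name
        st.rank -= 1       # not found: demote
    return None

# Data inserted into different-sized sections (layout is illustrative).
stations = [Station("a", {1, 2}), Station("b", {3, 4, 5}), Station("c", {6})]

for _ in range(3):
    search(stations, 4)    # frequently searched data promotes station "b"
print([(s.name, s.rank) for s in stations])
```

Repeated searches for the same item steadily raise the rank of the station that holds it, which is the mechanism the summary describes for making frequently searched data fast to find.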
- The document proposes a new type of tree structure to recognize spoken words from sound files containing multiple speakers, without needing to parse the sound data.
- The tree would insert sound file data as numbers and examine each number/bundle of numbers to find words. Each level of the tree would have up to 20 branches corresponding to numbers 0-19.
- Words from a training set would be inserted into the tree by their number sequences. The tree could then be used to find words within long number strings, even if they start in any position.
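The lookup scheme sketched in these points resembles a trie keyed on the numbers 0–19; a minimal Python sketch follows (the number sequences and words below are invented for illustration):

```python
# Sketch of the proposed tree: each level branches on a value 0-19, and
# words from a training set are inserted as number sequences. The
# encodings below are made up for illustration.

def insert(trie, seq, word):
    """Insert a word's number sequence into the nested-dict trie."""
    node = trie
    for n in seq:
        node = node.setdefault(n, {})
    node["word"] = word  # mark the end of a known word

def find_words(trie, stream):
    """Scan a long number string, trying a match from every start position."""
    hits = []
    for start in range(len(stream)):
        node = trie
        for i in range(start, len(stream)):
            if stream[i] not in node:
                break
            node = node[stream[i]]
            if "word" in node:
                hits.append((start, node["word"]))
    return hits

trie = {}
insert(trie, [3, 7, 1], "yes")
insert(trie, [7, 1, 4, 2], "no")
print(find_words(trie, [5, 3, 7, 1, 4, 2]))
```

Because matching is retried from every position, words are found even when they begin anywhere inside the long number string, as the summary describes.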
This document proposes a new "sine wave theory of pixels" that models pixels as undulating sine waves rather than fixed rectangles. The theory claims this better simulates how light travels as waves to the eye and would provide more useful visual information than the current pixel format. It argues this could improve computer vision tasks by bringing performance closer to that of the human eye. The document requests experts review this theory and considers it a significant discovery that could revolutionize the field of computer vision.
The document proposes an alternative technique to pixels for extracting objects from images. The author suggests using curved lines of the same or varying sizes instead of the standard rectangular pixels. This would make it easier to cluster data and obtain natural objects from images, as the screen data is more curvilinear than rectangular in real life. More research is needed to determine the optimal shape, size, and characteristics of the curved lines for different applications. The author believes this non-linear approach would be more effective than the current pixel method that restricts image analysis to a linear framework.
This document proposes a colony-based data storage system with multiple layers containing ranked colonies, towns, and groups. Data is stored in simple containers within groups. As data is searched for more frequently, its ranking increases and it can move to higher layers and towns. This models how frequently accessed data rises in the brain's storage layers. Data not in use remains accessible in lower layers.
16 OpenCV Functions to Start your Computer Vision journey.docx (ssuser90e017)
This article discusses 16 OpenCV functions for computer vision tasks with Python code examples. It begins with an introduction to computer vision and why OpenCV is useful. It then covers functions for reading/writing images, changing color spaces, resizing images, rotating images, translating images, thresholding images, adaptive thresholding, image segmentation with watershed algorithm, bitwise operations, edge detection, image filtering, contours, SIFT, SURF, feature matching, and face detection. Code examples are provided for each function to demonstrate its use.
This document provides an introduction to OpenCV, an open source computer vision library. It discusses what computer vision is, including examples of applications like self-driving cars and facial recognition. It then defines OpenCV as a library for real-time computer vision that is cross-platform and can be used with Python. Digital images are explained as pixel matrices, with grayscale images having one channel and color images having three RGB channels. NumPy is also introduced as a library that OpenCV relies on for numerical operations and array processing of images.
Michael Abrash's "What VR could, should, and almost certainly will be within ..." (SteamDB)
Compelling consumer-priced VR hardware is coming within two years according to the document. Valve has been researching and developing VR prototypes and believes VR will transform entertainment if presence, the feeling of truly being somewhere else, can be achieved. Key factors for presence are wide field of view, high resolution, low latency, low persistence, and precise tracking. Valve is collaborating with Oculus and believes they could deliver strong presence for PC VR within two years if they continue to improve their hardware.
CppCat, an Ambitious C++ Code Analyzer from Tula (Andrey Karpov)
This article was originally published (in Russian) at the website siliconrus.com. It is an interview with Evgeniy Ryzhkov by an author and editor at siliconrus.com, Konstantin Panphilov. The article was translated and published at our blog by the editors' permission.
Today we had a conversation with Evgeniy Ryzhkov, CEO of the "Program Verification Systems" company developing software products in the area of software testing systems and static code analysis systems. The company currently offers two products, PVS-Studio and a recently released CppCat. Both are static analyzers for C++ code.
The Nature of Code via Cinder - Modeling the Natural World in C++ (Nathan Koch)
Modeling the natural world through Daniel Shiffman's book "The Nature of Code" - a look at the nature of animation and interaction as found in the natural world.
I use Cinder and C++ for my examples and explain why one would use those tools as opposed to say, Processing, or Flash.
A neural network is a series of algorithms that attempts to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates.
Megapixel Value Debunked | The Truth about Megapixel Value of a Camera (Vikrant Mane)
This presentation explains the truth about the Megapixel Value of a Camera.
This is an excerpt from Skillshare Course on 'How to Choose the Perfect Smartphone For You'. Follow this link : http://skl.sh/2pFyZ9P to Join this Class & 14,246 other Premium Classes for a minimal fee of $10 Per Month. Grab it now.
Paco Viñoly, Designing in a Developer World, WarmGun 2013 (500 Startups)
The document provides guidelines for how design and engineering teams at Square work together to achieve the best possible product outcomes. It begins with an example case study of how the teams collaborated to design an accurate one-pixel straight line. The presentation outlines that design and engineering are interdependent and cannot exist without each other. It emphasizes starting projects with all teams involved from the beginning, understanding requirements, designing all states, and engineers making designs come to life beyond initial visions. Common language and proper communication between the disciplines is key.
From biological to artificial neurons. An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system.
This workshop teaches the basics of neural networks by walking through a simple implementation in Python. It also explains the script's layer processing units and how they function using simple matrix operations such as the Hadamard and dot products. Basic features such as a learning-rate modifier (alpha) and bias units are implemented as well.
Data Science - Part XVII - Deep Learning & Image Processing (Derek Kane)
This lecture provides an overview of Image Processing and Deep Learning for the applications of data science and machine learning. We will go through examples of image processing techniques using a couple of different R packages. Afterwards, we will shift our focus and dive into the topics of Deep Neural Networks and Deep Learning. We will discuss topics including Deep Boltzmann Machines, Deep Belief Networks, & Convolutional Neural Networks and finish the presentation with a practical exercise in handwriting recognition.
What is "deep learning" and why is it suddenly so popular? In this talk I explore how Deep Learning provides a convenient framework for expressing learning problems and using GPUs to solve them efficiently.
Computer vision aims to build machines that can see like humans. The document introduces computer vision, discussing how it takes images from cameras and analyzes them using software to understand scenes. While deep learning is popular, understanding first principles is important for tasks where data is limited, to understand failures, and because curiosity drives humans to understand how things work. The document outlines the modules to be covered, including imaging, features, 3D reconstruction from single and multiple views, and perception.
The document provides an overview of artificial intelligence and machine learning techniques for image classification using small datasets. It describes how to build a basic convolutional neural network from scratch or fine-tune a pre-trained model like VGG16 to classify images of cats and dogs with only 2000 training examples. Fine-tuning the top layers of VGG16 improved accuracy from 79% using just bottleneck features to 98%, showing how transfer learning can boost performance for small datasets.
The document discusses how early lens design progress was hindered by slow hand calculations and lack of modern materials. It provides examples of simple lens designs that were possible even pre-computer but had limited applications without modern technologies. The document emphasizes that while computers have advanced design capabilities, fundamental design ideas and theories are more important. It provides several examples of innovative lens designs the author developed through conceptual thinking alone. The document cautions against overuse of new technologies like freeform surfaces and metasurfaces without consideration of conventional design alternatives.
This document discusses lane detection techniques for self-driving cars. It begins by introducing autonomous vehicles and their ability to navigate without human input using sensors and machine learning. The document then focuses on implementing a simple lane detection method in OpenCV to detect straight lanes using steps like edge detection, defining a region of interest, and applying Hough transforms to detect lines. It shows how to optimize the detected lines and add them to the original image or video frames to identify lanes. The goal of this technique is to help enable self-driving capabilities like navigation and accident avoidance.
What can neural networks do in the context of art? (style transfer, DeepDream, etc.) How does image recognition play into all of this? Presented at creAIte 2017.
1. The document discusses various methods for implementing K-nearest neighbors algorithm for pattern matching large datasets efficiently.
2. Method one involves dividing each property into equal sections based on the property's dynamic range in the dataset, and assigning data to sections. Test data can then be quickly matched to training data based on matching section numbers rather than exact values.
3. Method two improves on method one by creating a tree structure using the section assignments, allowing even faster matching by traversing the tree to find matching training data.
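Method one can be sketched as follows; the section count, value ranges, and sample points are assumptions for illustration:

```python
# Rough sketch of "method one": divide each property's dynamic range into
# equal sections, tag every training point with its section numbers, and
# match a test point by section numbers instead of exact values.

def section_of(value, lo, hi, n_sections):
    """Map a value onto one of n_sections equal bins over [lo, hi]."""
    if hi == lo:
        return 0
    idx = int((value - lo) / (hi - lo) * n_sections)
    return min(idx, n_sections - 1)  # clamp the top edge into the last bin

def section_key(point, ranges, n_sections=4):
    return tuple(section_of(v, lo, hi, n_sections)
                 for v, (lo, hi) in zip(point, ranges))

training = [(0.1, 5.0), (0.9, 1.0), (0.2, 4.8)]
ranges = [(0.0, 1.0), (0.0, 6.0)]

# Index training data by section key for fast candidate lookup.
index = {}
for p in training:
    index.setdefault(section_key(p, ranges), []).append(p)

test_point = (0.15, 4.9)
candidates = index.get(section_key(test_point, ranges), [])
print(candidates)
```

The dictionary lookup replaces a scan over exact values, which is the speedup the summary attributes to section matching; method two's tree would organize the same section numbers level by level.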
The document discusses several methods for efficiently using classifiers in computer vision tasks that involve large datasets. It proposes using multiple classifiers together by dividing the data and assigning each classifier a subset, such as individual columns, rows, blocks, or variable blocks of pixels from images. This allows large datasets to be broken down into smaller pieces that fit individual classifiers. The methods also involve using genetic algorithms to help determine optimal combinations and structures of classifiers.
Similar to sine_wave_theory_of_pixel_comments (20)
The document proposes a method called MB Predefined K nearest neighbor to improve K nearest neighbor classification when some property values may be incorrect. It divides properties randomly into small packets to create multiple trees, assigns unique IDs to leaf nodes, and builds a master tree from the leaf nodes. During training, it analyzes data routes through the master tree to create match lists for each leaf node. When classifying a new data point, it uses the data point's leaf node IDs to quickly retrieve the best match list and perform classification, improving accuracy over standard KNN even if some property values differ from the training data.
The document proposes a method for storing data without containers by assigning numeric values to letters and using the numeric sequence of words to calculate their memory location, removing the need for indexing structures. It suggests creating a new operating system that can directly access any memory location as needed to implement this approach. The method aims to allow words to provide clues to find their stored data by deriving a memory location directly from their sequence of letters.
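The addressing idea resembles a hand-rolled hash function: letters map to numbers, and a word's sequence yields its storage slot directly. A minimal sketch, assuming a base-26 encoding and a fixed table size (the document specifies neither):

```python
# Sketch: derive a storage location directly from a word's letters instead
# of keeping a separate index structure. The base-26 encoding and the
# fixed table size are illustrative assumptions.

TABLE_SIZE = 1009  # a prime, standing in for addressable memory slots

def location(word):
    """Fold the word's letter values into a slot number."""
    loc = 0
    for ch in word.lower():
        loc = (loc * 26 + (ord(ch) - ord("a"))) % TABLE_SIZE
    return loc

store = [None] * TABLE_SIZE
store[location("cat")] = "data for cat"
print(store[location("cat")])  # the word itself leads back to its data
```

A real system would also need collision handling, which the summary does not address.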
The document proposes an alternative to using pixels for image analysis. The author suggests using curved lines of the same or varying sizes instead of uniform pixels to more easily identify objects and extract logical information from images. Curved lines could capture adjacent area information without calculating gradients over the whole image and help cluster data. Pixels restrict analysis to a linear representation whereas the real world is non-linear, so a non-linear technique using curved lines may perform better by mimicking natural shapes. Further research is needed to determine optimal line curvatures, lengths, and whether lines should be uniformly or variably shaped for different applications.
comment 2:
Just for a second, consider that my new pixel design may be a better design for our eye to process screen data, since it would reduce many unnecessary calculations; that would be a start.

But for all visual devices, print media, computer vision, and robot vision, I am certain that my design is far better than the current one, and you will observe that within a very short time. Products in all four fields will undoubtedly adopt this design; to me that is a mathematical certainty, whether anyone believes it or not. Now that this new pixel design has been proposed, the old design is history.
There is a slight possibility that our eye actually receives light as quantized, with something like this new pixel design already placed in the eye and creating the undulation by itself, just as we would do with the new design for robots.
But my idea is that both are happening: light comes as a real wave, and our eye receives this real wave while also using this new pixel design to eliminate points and reduce work.
Think of light as coming in as a quantized wave: one ray is not a straight line but an undulating line, where each particle is part of the undulation. They all reach the eye, and the eye receives the whole string of points or particles one after another through this design placed in the eye. Because the ray undulates, it brings information about adjacent areas, which helps find anomalies: all same-colored points can be eliminated, leaving only the edge or gradient points. This is the MB quantum theory: light comes quantized, but still as an undulating wave.
So there would be three theories of light: wave theory, where light is a pure wave; quantum theory, where it is purely quantized without a wave; and MB quantum theory, where it is quantized but still arrives as a wave.
This might also resolve the difference between quantum theory and wave theory and properly explain the experiments that support each.
comment 3:
Thank you for your nice comments.

As I said earlier, to accept this for our eyes we really need to do research to find out the reality; it certainly opens doors for further work on this subject. We have not yet discovered how our eye detects the wave, but that does not mean it will never be discovered. If we do research on this, we might find something that explains how the eye detects the wave. Please do not think the research into how our eye functions is complete.
Please read the article; I have described many different designs for various applications. For visual devices alone, I propose giving all points or pixels a wave structure. That removes sharp edges, and the undulation helps blend linear and non-linear data, finds edges quickly, and improves visual quality by avoiding the unnecessary coloring of pixels that we do now. You can see the example where I break a pixel into four parts, with some explanation.
I have also proposed a two-layer system for computer and robot vision, where all small points have a wave structure and some points combine into a bigger pixel. In that case we can do whatever we want: increase or decrease the wavelength, change the angle anywhere from 0 to 360 degrees as a user option for a better view, change the amplitude, even move in random directions with random wavelength, amplitude, and frequency. We can try many things with that, but that is probably for later. We should start with a simple version, keeping in mind that it could eliminate around 90% of the points. More research will suggest new algorithms that accomplish computer and robot vision tasks with much less work.
So people with long experience in computer and robot vision should have no problem using this for all visual devices, print media, and computer and robot vision.
Now, for the physics part of it, these are the new ideas:

1. Light comes to our eye as an undulating wave, and our eye has no arrangement like the one I proposed for the new pixel design.
comment 4:
"2) What is the effect on changing wavelength on a fixed physical structure? If you use a row to detect a
wave then surely your looking for waves that are an integer multiple of that row's length? "
The wave undulates and brings information about the adjacent area, for example one row up and one row down. Think of a ray coming toward you from the front: it goes down and up and brings back information about whether a point is a possible gradient or edge point. The row is not detecting a wave; it is finding anomalies in the data.
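Read this way, the undulating row acts as a sampling path that flags value changes rather than detecting a wave. A minimal sketch of that reading, with an invented image and threshold:

```python
import math

# Sketch: sample a grayscale image along a sine path around a base row and
# keep only points whose value differs from the previous sample, i.e.
# candidate edge/gradient points; runs of the same color are dropped.

def sine_path_edges(image, row, amplitude=1, wavelength=4, threshold=0):
    height, width = len(image), len(image[0])
    kept, prev = [], None
    for x in range(width):
        # Undulate one row up / one row down around the base row.
        y = row + round(amplitude * math.sin(2 * math.pi * x / wavelength))
        y = max(0, min(height - 1, y))
        v = image[y][x]
        if prev is not None and abs(v - prev) > threshold:
            kept.append((x, y))  # anomaly: adjacent-area value changed
        prev = v
    return kept

# Flat region on the left, an edge to a brighter region on the right.
img = [[0, 0, 0, 0, 9, 9, 9, 9] for _ in range(3)]
print(sine_path_edges(img, row=1))  # only the crossing into the bright region
```

On this toy image only one point survives, illustrating the claim that same-colored points are eliminated while edge points remain.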
comment 5:
Thank you for your comment. A diagram would make it clear, but basically it is just the sine function we draw on graph paper, with many variations: same or different amplitude, same or different wavelength; horizontal, vertical, or any angle between 0 and 360 degrees; going straight to the opposite point in the row, or going down and up; splitting in the middle once or many times; changing amplitude and wavelength many times within a row; or moving in a random direction with random wavelength and amplitude.
Basically, all possible undulating waves. Natural processes have little linearity in them; most natural processes are non-linear. When something happens in nature, you have to give it freedom of movement, the same way a growing tree branches in different directions with different lengths.
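All of the listed variations can be generated from one parametric sine curve; a small Python sketch with arbitrary parameter values:

```python
import math

# Sketch: generate points of a sine-wave "pixel" path with a chosen
# amplitude, wavelength, and rotation angle (0-360 degrees), so the same
# formula covers horizontal, vertical, and diagonal variants.

def sine_pixel_path(length, amplitude, wavelength, angle_deg, steps=50):
    a = math.radians(angle_deg)
    points = []
    for i in range(steps + 1):
        t = length * i / steps
        u = amplitude * math.sin(2 * math.pi * t / wavelength)
        # Rotate the (t, u) curve by the requested angle.
        x = t * math.cos(a) - u * math.sin(a)
        y = t * math.sin(a) + u * math.cos(a)
        points.append((x, y))
    return points

# A few of the variations described: same curve, different parameters.
horizontal = sine_pixel_path(10, amplitude=1, wavelength=4, angle_deg=0)
vertical = sine_pixel_path(10, amplitude=1, wavelength=4, angle_deg=90)
diagonal = sine_pixel_path(10, amplitude=2, wavelength=5, angle_deg=45)
print(len(horizontal), horizontal[0])
```

Splitting or re-parameterizing mid-row, as the comment also describes, would simply mean switching amplitude or wavelength partway through the loop.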
Author of this article:
Mutawaqqil Billah
Independent Research Scientist,
B.Sc in Computer Science and Mathematics,
Ramapo College of New Jersey, USA