An automated C++ program was created to process atomic force microscopy (AFM) image data, reducing manual processing time from 24 hours to less than 2 hours. The program analyzes AFM image cross-sections, automatically extracting height, width, and area values with 99.5% precision compared to manual processing. It prompts the user for input, imports the AFM data, analyzes slopes and derivatives to identify cross-section boundaries, then calculates metrics such as height, width, and area, which it outputs to an Excel file.
Cory E. Bethrant
Polymer Brush Data Processor
Christopher Kemper Ober Lab Group, Department of Materials Science and Engineering, Cornell University
Cory Bethrant, CCMR REU, Howard University
Retrieving data from a single Atomic Force Microscopy (AFM) image can take up to 24 hours when done manually, because the process requires multiple programs as well as multiple processing steps. To save human labor and make analysis consistent, without human error, a C++ based program was created to carry out automated data processing. Compared to manual processing, the automated data processing reaches 99.5% precision, and the time for analysis is reduced to less than 2 hours.
Introduction
Data processing can be very time-consuming, even in modern research, often taking more time than the research itself. In this case, AFM images had to have a cross-section collected and normalized. The user would then extract the detailed information, such as the height and width of the cross-section, from the raw data using the Gwyddion software. Finally, the user exports the data as an Origin-friendly file in order to calculate the integral of the cross-section; using Origin, the area of the cross-section is calculated. This process has to be repeated twice for each scan to be processed, and usually two scans are taken of the same line. The result is a large, cumbersome process that can take up to an entire day to complete.
The program allows a user to batch-collect the height, width, and area of a cross-section. This reduces manual processing to only collecting a cross-section and normalizing it; the rest is automated. The resulting process length is reduced to between one and two hours.
The main limitation is that the AFM image cross-section has to be relatively clean (although some noise is filtered out).
Algorithm
The program begins by prompting the user for the number of files to be processed. It then prompts for a custom sensitivity, which lets the user tune the analysis manually if the desired results are not achieved at the default settings. The user then submits the sample name for batch execution and enters the names of the lines to be processed.
The program then allocates dynamic memory for the following:
i. slopepositiveFLAG – indicates whether a dataset's slope is positive
ii. slopenegativeFLAG – indicates whether a dataset's slope is negative
iii. x – contains all x-coordinates
iv. y – contains all y-coordinates
v. slope – contains the slope of each dataset
vi. firstDery – contains the first derivative of each dataset with respect to y
vii. secDery – contains the second derivative of each dataset with respect to y
viii. filename – contains all filenames
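The eight parallel buffers above could be grouped in one structure. The listing later in this report uses raw new[] arrays, so the std::vector grouping below is only an illustrative alternative; the sizes are assumptions inferred from how the arrays are indexed (numberoflines segments require numberoflines + 1 points), and the per-file filename list is omitted.

```cpp
#include <vector>

// Illustrative grouping of the report's dynamically allocated per-point arrays.
struct CrossSection {
    std::vector<bool>   slopepositiveFLAG;   // segment slope is positive
    std::vector<bool>   slopenegativeFLAG;   // segment slope is negative
    std::vector<double> x, y;                // all point coordinates
    std::vector<double> slope;               // slope of each segment
    std::vector<double> firstDery, secDery;  // first and second derivatives

    explicit CrossSection(int numberoflines)
        : slopepositiveFLAG(numberoflines, false),
          slopenegativeFLAG(numberoflines, false),
          x(numberoflines + 1), y(numberoflines + 1),
          slope(numberoflines), firstDery(numberoflines), secDery(numberoflines) {}
};
```

With std::vector, the null checks and explicit delete[] calls in the listing become unnecessary, since allocation failure throws and cleanup is automatic.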
The program then imports the data and processes it. It begins by determining the slope of each dataset; the first and second derivatives of each dataset are then calculated, and each dataset is flagged as positive, negative, or neither. Data chunks of 5 points are created and used to calculate a slightly longer-term data trend. The program then loops through the points from left to right, lowering the triggering slope on each iteration of the loop. Once the program finds a point in the middle of the left border, it scans in the left direction to find the beginning of the border, and that position is saved. The process is repeated on the right border in the opposite direction (Figure 1.1). Once the bounds of the cross-section are known, it is relatively simple for the program to calculate everything else. The baseline is the average of the two borders' y-positions; this is subtracted from the maximum y-position within the bounds to obtain the height. The left border's x-position is subtracted from the right border's to obtain the width. The program then uses numerical approximation to find the area under the curve. The information is then output to an Excel file for easy creation of a chart.
Figure 1.1: Demonstration of program accuracy.
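Once the two border indices are known, the metric calculations described above (baseline, height, width, and trapezoidal area) can be sketched as a standalone function. This is a minimal illustration of the described math, assuming the borders are already found; the function and parameter names are not taken from the program's source.

```cpp
#include <cstddef>
#include <vector>

struct Metrics { double baseline, height, width, area; };

// Computes the cross-section metrics between two border indices,
// following the steps described above.
Metrics computeMetrics(const std::vector<double>& x, const std::vector<double>& y,
                       std::size_t left, std::size_t right) {
    Metrics m{};
    // Baseline: average of the two borders' y-positions.
    m.baseline = (y[left] + y[right]) / 2.0;
    // Height: maximum y within the bounds, minus the baseline.
    double maxY = y[left];
    for (std::size_t i = left; i <= right; ++i)
        if (y[i] > maxY) maxY = y[i];
    m.height = maxY - m.baseline;
    // Width: x-distance between the borders.
    m.width = x[right] - x[left];
    // Area: trapezoidal rule on (y - baseline), a simple numerical approximation.
    m.area = 0.0;
    for (std::size_t i = left; i < right; ++i)
        m.area += ((y[i] + y[i + 1]) / 2.0 - m.baseline) * (x[i + 1] - x[i]);
    return m;
}
```

For a symmetric triangular peak y = {0, 1, 2, 1, 0} at unit x-spacing, this yields a baseline of 0, a height of 2, a width of 4, and an area of 4.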
Performance
As long as the left and right borders are calculated correctly, everything else will always be correct. Because of this, the program checks for bound-related errors such as:
i. a left and right border of zero (no border detected);
ii. a negative height (baseline incorrect due to noise);
iii. an integral that is negative or zero (baseline and border incorrect due to noise, or no border detected).
If any of these errors are detected, the scans are performed again with different sensitivity parameters. If the program still cannot process the data, taking a new cross-section while being sure to remove significant noise will always solve the issue.
The program has a completion time of 2 seconds per cross-section, which is significantly faster than calculating those values manually. The data is also automatically placed in an Excel spreadsheet, eliminating another step.
The program can work with any amount of data from a single sample at once. Processing multiple samples at once could easily be added, but the time savings would not be significant because the cross-sections would still have to be created the same way.
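The retry behavior described above can be sketched as a loop that lowers the sensitivity step by step until a scan yields plausible bounds. Here detectBorders is a stand-in for the program's actual border scan, and the error tests are simplified versions of checks (i)–(iii); everything below is illustrative, not the program's code.

```cpp
#include <functional>

struct Bounds { int left; int right; };

// Rescans with progressively lower sensitivity until the bounds look valid.
// Returns false if no sensitivity in the sweep produces usable borders,
// in which case a cleaner cross-section should be taken.
bool scanWithRetries(double sensitivity, double step,
                     const std::function<Bounds(double)>& detectBorders,
                     Bounds& out) {
    for (double s = sensitivity; s > 0.0; s -= step) {
        Bounds b = detectBorders(s);
        bool noBorder  = (b.left == 0 && b.right == 0);  // error (i): no border detected
        bool badBounds = (b.right <= b.left);            // simplified stand-in for (ii)/(iii)
        if (!noBorder && !badBounds) {
            out = b;
            return true;
        }
    }
    return false;
}
```

Passing the detector as a callable keeps the retry policy separate from the scan itself, so the sweep logic can be tested with a synthetic detector.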
slope = new double[numberoflines];
firstDery = new double[numberoflines];
secDery = new double[numberoflines];
if (slopepositiveFLAG == nullptr || slopenegativeFLAG == nullptr ||
    slope == nullptr || firstDery == nullptr || secDery == nullptr ||
    x == nullptr || y == nullptr){
    cout << "Error: memory could not be allocated" << '\n'
         << "Program Terminated. Email Cory Bethrant @ cory.bethrant@gmail." << '\n' << '\n'
         << "Press Enter to Close Program." << '\n';
    delete[] filename;
    delete[] x;
    delete[] y;
    delete[] slope;
    delete[] firstDery;
    delete[] secDery;
    delete[] slopepositiveFLAG;
    delete[] slopenegativeFLAG;
    cin.get();
    return 0;
}
else
    // Unit type determination based on filename string //
    unitType = "m";
// Import Data //
for (int i = 0; i <= numberoflines; i++){
    f >> x[i] >> y[i];
}
for (int i = 0; i < numberoflines; i++){
    // Perform Calculations //
    slopepositiveFLAG[i] = false;
    slopenegativeFLAG[i] = false;
    slope[i] = ((y[i+1] - y[i]) / (x[i+1] - x[i]));
    slopeTotal += slope[i];
    slopeAVG = slopeTotal / 5;
    if ((i+1) % 5 == 0){
        // Calculate Data Chunk Information //
        slopeAVG = (slopeTotal / (i+1));
        slopeTotal = 0;
    }
    // Print Current Data Set Values and Set Flags //
    cout << "Data Set " << i+1 << '\n'
         << "x1: " << x[i] << unitType << '\t' << "y1: " << y[i] << unitType << '\n'
         << "x2: " << x[i+1] << unitType << '\t' << "y2: " << y[i+1] << unitType << '\n';
    if (x[i+1] == y[i+1] && x[i] == y[i]){
        cout << "The two data points are the same." << '\n' << '\n';
    }
    else if ((y[i+1] - y[i]) == 0){
        cout << "The slope of Data Set " << i+1 << " is Undefined." << '\n' << '\n';
    }
    else if (slope[i] > 0.03){
        cout << "The slope of Data Set " << i+1 << " is Positive." << '\n';
        slopepositiveFLAG[i] = true;
        cout << "The slope of Data Set " << i+1 << " equals: " << slope[i] << '\n' << '\n';
    }
    else if (slope[i] < -0.03){
        cout << "The slope of Data Set " << i+1 << " is Negative." << '\n';
        slopenegativeFLAG[i] = true;
        cout << "The slope of Data Set " << i+1 << " equals: " << slope[i] << '\n' << '\n';
    }
    else {
        cout << "The slope of Data Set " << i+1 << " is Neutral." << '\n';
        slopepositiveFLAG[i] = false;
        slopenegativeFLAG[i] = false;
        cout << "The slope of Data Set " << i+1 << " equals: " << slope[i] << '\n' << '\n';
    }
    // Determine if Pattern Border Has Been Reached //
    if (slope[i] > sensitivity && onPattern == false && patternDone == false){
        cout << "Left Pattern Border Reached." << '\n' << '\n';
        for (int f = 1; f < numberoflines; f++){
            if (slopepositiveFLAG[i-f] == false && slopepositiveFLAG[i-f-1] == false){
                break;
            }
            if (slopepositiveFLAG[i-f] == true){
                j++;
            }
        }
        leftborderPOS = x[i-j-1];
        leftborderPOSint = i - j - 1;
        onPattern = true;
        slopePositiveCount = 0;
        slopeNegativeCount = 0;
        slopeNeutralCount = 0;
        j = 0;
    }
    if (slope[i] < -sensitivity && onPattern == true && patternDone == false &&
        rightborder == false){
        cout << "Right Pattern Border Reached." << '\n' << '\n';
        rightborder = true;
        rightborderPOS = x[i];
        patternDone = true;
        slopePositiveCount = 0;
        slopeNegativeCount = 0;
        slopeNeutralCount = 0;
    }
    if (onPattern == true && rightborder == true){
        if (slopenegativeFLAG[i] == true){
            ;  // still on the descending edge; keep scanning
        }
        else {
            onPattern = false;
            rightborderPOS = x[i];
            rightborderPOSint = i;
            k = -1;
        }
    }
    // Calculate Derivatives //
    double t = 1e-7;  // step size; the slide's "10^-7" would be XOR in C++, not an exponent
    firstDery[i] = ( 4.0 / 3.0 * (y[i + 1] - y[i - 1]) / (2.0 * t)
                   - 1.0 / 3.0 * (y[i + 2] - y[i - 2]) / (4.0 * t) );
    secDery[i] = ( 4.0 / 3.0 * (firstDery[i + 1] - firstDery[i - 1]) / (2.0 * t)
                 - 1.0 / 3.0 * (firstDery[i + 2] - firstDery[i - 2]) / (4.0 * t) );
    // Print Derivative Information //
    if (i < 2 || i > numberoflines - 2){
        cout << "First Derivative can't be calculated currently." << '\n';
    }
    else {
        cout << "First Derivative is: " << firstDery[i] << '\n';
    }
    if (i < 4 || i > numberoflines - 4){
        cout << "Second Derivative can't be calculated currently." << '\n' << '\n' << '\n';
    }
    else {
        cout << "Second Derivative is: " << secDery[i] << '\n' << '\n' << '\n';
    }
    // Determine Data Trend for Data Chunk //
    if ((i+1) % 5 == 0 && slopeAVG > 0.5 && i > 0){
        cout << br << '\n' << "Data Trend for last 5 points is Positive." << '\n'
             << "Average Slope: " << slopeAVG << '\n' << br << '\n' << '\n';
        TrendPositive = true;
        if (slopeAVG > 0.5){
            slopePositiveCount++;
            if (slopeAVG > 2){
                slopePositiveCount = 3;
            }
        }
        else if (slopeAVG < -0.5){
            slopeNegativeCount++;
            if (slopeAVG < -2){
                slopeNegativeCount = 3;
            }
        }
        else {
            slopeNeutralCount++;
            if (slopeAVG > 2){
                slopeNeutralCount = 3;
            }
        }
        slopeAVG = 0;
    }
    else if ((i+1) % 5 == 0 && slopeAVG < -0.5 && i > 0){
        cout << br << '\n' << "Data Trend for last 5 points is Negative." << '\n'
             << "Average Slope: " << slopeAVG << '\n' << br << '\n' << '\n';
        slopeTotal = 0;
        TrendNegative = true;
        if (slopeAVG > 0.5){
            slopePositiveCount++;
            if (slopeAVG > 2){
                slopePositiveCount = 3;
            }
        }
        else if (slopeAVG < -0.5){
            slopeNegativeCount++;
            if (slopeAVG < -2){
                slopeNegativeCount = 3;
            }
        }
        else {
            slopeNeutralCount++;
            if (slopeAVG > 2){
                slopeNeutralCount = 3;
            }
        }
        slopeAVG = 0;
    }
    else if ((i+1) % 5 == 0 && i > 0){
        cout << br << '\n' << "Data Trend for last 5 points is Neutral." << '\n'
             << "Average Slope: " << slopeAVG << '\n' << br << '\n' << '\n';
        slopeTotal = 0;
        if (slopeAVG > 0.5){
            slopePositiveCount++;
            if (slopeAVG > 2){
                slopePositiveCount = 3;
            }
        }
        else if (slopeAVG < -0.5){
            slopeNegativeCount++;
            if (slopeAVG < -2){
                slopeNegativeCount = 3;
            }
        }
        else {
            slopeNeutralCount++;
            if (slopeAVG > 2){
                slopeNeutralCount = 3;
            }
        }
        slopeAVG = 0;
    }
}
// End of Data Loop //
double max = 0;
for (int i = leftborderPOSint; i < rightborderPOSint; i++){
    if (y[i] > max)
        max = y[i];
}
baseline = (y[leftborderPOSint] + y[rightborderPOSint]) / 2;
// Find Integral //
integral = 0.0;
for (int i = leftborderPOSint; i <= rightborderPOSint; i++){
    integral += ((((y[i] + y[i+1]) / 2) - baseline) * (x[i+1] - x[i]));
}
// Final Calculations //
height = max - baseline;
width = rightborderPOS - leftborderPOS;
// Error Catching (if an error is detected, the line is rescanned with lower sensitivity) //
if (width <= 0 || integral <= 0 || height <= 0 || leftborderPOSint <= 0){
    patternDone = false;
    rightborder = false;
    onPattern = false;
    TrendNegative = false;
    TrendPositive = false;
    leftborderPOS = 0.0;
    rightborderPOS = 0.0;
    baseline = 0.0;
    height = 0.0;
    width = 0.0;
    slopeTotal = 0.0;
    slopeAVG = 0.0;
    integral = 0.0;
    rightborderPOSint = 0;
    leftborderPOSint = 0;
    j = 0;
    k = -1;
    for (int q = 2; q < numberoflines; q++){
        for (int v = 1; (width <= 0 || integral <= 0 || height <= 0 ||
                         leftborderPOSint <= 0) && v < (sensitivity / .0001); v++){
            if (slope[q] > (sensitivity - (v * .0001)) && onPattern == false &&
                patternDone == false){
                for (int f = 1; f < numberoflines; f++){
                    if (slopepositiveFLAG[q-f] == false){
                        break;
                    }
                    if (slopepositiveFLAG[q-f] == true){
                        j++;
                    }
                    leftborderPOS = x[q-j-1];
                    leftborderPOSint = q - j - 1;
                    onPattern = true;
                    slopePositiveCount = 0;
                    slopeNegativeCount = 0;
                    slopeNeutralCount = 0;
                    j = 0;
                }
            }
        }
    }
    for (int q = numberoflines - 2; q >= 0; q--){
        for (int v = 1; (width <= 0 || integral <= 0 || height <= 0 ||
                         rightborderPOS == 0) && v < (sensitivity / .0001); v++){
            k = -1;
            slopePositiveCount = 0;
            slopeNegativeCount = 0;
            slopeNeutralCount = 0;
            if (slope[q] < -(sensitivity - (v * .0001)) && onPattern == true &&
                patternDone == false && rightborder == false){
                rightborder = true;
                rightborderPOS = x[q];
                if (onPattern == true && rightborder == true){
                    if (slopenegativeFLAG[q] == true){
                        ;  // still on the descending edge; keep scanning
                    }
                    else {
                        onPattern = false;
                        patternDone = true;
                        rightborderPOS = x[q];
                        rightborderPOSint = q;
                        k = -1;
                    }
                }
                rightborderPOS = x[q];
                rightborderPOSint = q;
                k = -1;
                slopePositiveCount = 0;
                slopeNegativeCount = 0;
                slopeNeutralCount = 0;
            }
            for (int i = leftborderPOSint; i < rightborderPOSint; i++){
                if (y[i] > max)
                    max = y[i];
            }
        }
        baseline = (y[leftborderPOSint] + y[rightborderPOSint]) / 2;
        if (leftborderPOSint == 0){
            leftborderPOSint = 1;
        }
        // Find Integral //
        for (int i = leftborderPOSint; i <= rightborderPOSint; i++){
            integral += ((((y[i] + y[i+1]) / 2) - baseline) * (x[i+1] - x[i]));
        }
        // Final Calculations //
        height = max - baseline;
        width = rightborderPOS - leftborderPOS;
    }
}
// Find Integral //
integral = 0.0;
for (int i = leftborderPOSint; i <= rightborderPOSint; i++){
    integral += ((((y[i] + y[i+1]) / 2) - baseline) * (x[i+1] - x[i]));
}
// Output //
cout << brlg << '\n' << "Final Statistics for Pattern: " << filename[d] << '\n'
     << "Left Border: x = " << leftborderPOS << unitType << '\n'
     << "Right Border: x = " << rightborderPOS << unitType << '\n' << '\n'
     << "Baseline Value: y = " << baseline << unitType << '\n'
     << "Max: y = " << max << unitType << '\n'
     << "Width: x = " << width << unitType << '\n'
     << "Height: y = " << height << unitType << '\n'
     << "Integral of Pattern is: " << integral << unitType << '\n' << '\n';
// Output Data to Excel File //
MyExcelFile << filename[d] << "," << width << "," << height << "," << integral << endl;
// Reset Loop //
patternDone = false;
rightborder = false;
onPattern = false;
TrendNegative = false;
TrendPositive = false;
leftborderPOS = 0.0;
rightborderPOS = 0.0;
baseline = 0.0;
height = 0.0;
width = 0.0;
slopeTotal = 0.0;
slopeAVG = 0.0;
integral = 0.0;
numberoflines = 0;
slopeNeutralCount = 0;
slopeNegativeCount = 0;
slopePositiveCount = 0;
rightborderPOSint = 0;
leftborderPOSint = 0;
j = 0;
k = -1;
f.clear();
f.seekg(0);
f.close();
d++;
}
MyExcelFile.close();
// Delete All Dynamically Allocated Memory //
delete[] filename;
cout << "Thanks for using this program :)" << '\n' << '\n' << "Press Enter to Close...";
cin.get();
return 0;
}
Conclusion
Since each cross-section can be processed within 2 seconds, the time savings gained by automating this process will grow as more samples are analyzed. The program will allow more research to be completed and less human error to occur.
Acknowledgments
I would like to thank Wei-Liang Chen, Dr. Christopher Ober, and the Ober lab for their help and support with this project. This work was supported in part by the Cornell Center for Materials Research with funding from the Research Experience for Undergraduates program (DMR-1460428 and DMR-1120296) and the NSF PREM program (DMR-1205608). This work made use of the Cornell Center for Materials Research Shared Facilities, which are supported through the NSF MRSEC program (DMR-1120296).