Sheng Li completed work in the Spacecraft Relative Motion Control Lab over the summer of 2016. This included developing multiple-object tracking for the motion capture system, building a simple estimator for the model predictive control (MPC) controller in Simulink, and gaining an understanding of the robot controller code. Key achievements were realizing multiple-object tracking by modifying variable dimensions, developing a simple estimator to account for camera measurement errors, and learning how to control the robot motion with the existing code.
Tensorflow in practice by Engineer, by Donghwi Cha
- The document is an introduction to the machine learning framework TensorFlow, covering key concepts like computation graphs, operations, sessions, training, replication, and clustering.
- Key aspects discussed include how Tensorflow executes operations as a static computation graph, uses sessions to run graphs and tensors to hold values, and supports data parallelism through replication across devices/workers.
- The document provides examples of building neural network models in Tensorflow and discusses techniques for training models like backpropagation and distributing training using data parallelism.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/dec-2019-alliance-vitf-facebook
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Joseph Spisak, Product Manager at Facebook, delivers the presentation "PyTorch Deep Learning Framework: Status and Directions" at the Embedded Vision Alliance's December 2019 Vision Industry and Technology Forum. Spisak gives an update on the Torch deep learning framework and where it’s heading.
The slides shown here have been used for talks given to scientists in informal contexts.
Python is introduced as a valuable tool for both producing and evaluating data.
The talk is essentially a guided tour of the author's favourite parts of the Python ecosystem. Besides the Python language itself, NumPy and SciPy as well as Matplotlib are mentioned.
A last part of the talk concerns itself with code execution speed. With this concern in mind, Cython and f2py are introduced as means of gluing different languages together and speeding Python up.
The source code for the slides, code snippets and further links are available in a git repository at
https://github.com/aeberspaecher/PythonForScientists
PyTorch is one of the most widely used deep learning libraries in the Python community. This talk is a basic-to-advanced guide to implementing deep learning models using PyTorch; the goal is to introduce PyTorch and show how to use it for a deep learning project.
It covers the basic concepts of deep learning, explaining model structure and the backpropagation method, and builds an understanding of autograd in PyTorch (plus data parallelism in PyTorch).
Threads and Callbacks for Embedded Python, by Yi-Lung Tsai
Python is a great choice for writing customized plug-ins for existing applications, and extending existing applications with Python programs is also practical. For large systems, multi-threaded programming is ubiquitous, along with asynchronous programming such as event routing. This presentation focuses on dealing with threads and callbacks while embedding Python in other applications.
C++: How I learned to stop worrying and love metaprogramming, by cppfrug
This presentation walks through some direct applications of metaprogramming in C++(11/14), with the goal of demonstrating its usefulness in an application context.
The document provides an introduction to Objective-C, including background information on its origins and current usage. It discusses key Objective-C concepts like classes, methods, memory management, and the differences between static, stack and heap memory. Code examples are provided to demonstrate how to declare classes, instantiate objects, call methods, and handle memory allocation and release of objects.
1. Arrays declared with a fixed size limit the program size, while dynamically allocated arrays using heap memory allow the size to be determined at runtime.
2. The heap segment is used for dynamic memory allocation using functions like malloc() and new to request memory from the operating system as needed.
3. Deallocation of dynamically allocated memory is required using free() and delete to avoid memory leaks and ensure memory is returned to the operating system.
Despite its slow interpreter, Python is a key component in high-performance computing (HPC). Python is easy to use. C++ is fast. Together they are a beautiful blend. A new tool, pybind11, makes this approach even more attractive for HPC code. It focuses on the niceties that C++11 brings. Beyond the syntactic sugar around the Python C API, it is interesting to see how pybind11 handles the vast differences between the two languages, and what matters to HPC.
Go Native: Squeeze the juice out of your 64-bit processor using C++, by Fernando Moreira
The document discusses native 64-bit development using C/C++. It begins with introductions from the presenter and asks attendees about their experiences. The talk's schedule is then outlined, covering introducing 64-bit processors, advantages over 32-bit, how native 64-bit development looks, data models, common pitfalls, optimization tips, real-world tips, code analysis and debugging tools, and prospecting the costs of 64-bit development. The presentation aims to help developers optimize their code and leverage 64-bit capabilities.
Notes about moving from Python to C++, PyCon TW 2020, by Yung-Yu Chen
The document discusses moving from Python to C++. It notes that Python is commonly used for application development due to its ease of use, while C++ is used for high-performance computing kernels due to better performance. It recommends binding Python and C++ together, using C++ for performance critical parts and Python for user interfaces and scripting. Pybind11 is identified as a good tool for binding Python and C++ that provides modern C++ features and is easy to use.
Smart pointers help solve memory management issues like leaks by automatically freeing memory when an object goes out of scope. They support the RAII idiom where resource acquisition is tied to object lifetime. Common smart pointers include std::shared_ptr, std::unique_ptr, std::weak_ptr, which help avoid leaks from exceptions or cycles in object graphs. make_shared is preferable to separate allocation as it can allocate the object and control block together efficiently in one allocation.
If you've tried Apache Solr 1.4, you've probably had a chance to take it for a spin indexing and searching your data, and getting acquainted with its powerful, versatile new features and functions. Now, it's time to roll up your sleeves and really master what Solr 1.4 has to offer.
Boost.Python allows extending C++ code with Python by exposing C++ functions, classes, and objects to Python. It provides a simpler approach than other tools by using only C++. Boost.Python handles interfacing C++ and Python types and memory management. The document discusses exposing C++ code like functions, classes with inheritance and special methods, constants, and enums. It also covers passing objects between C++ and Python like lists and custom types. While Boost.Python enables seamless integration, there are still challenges around complex C++ features and performance. The document ends by demonstrating embedding Python in C++.
PyTorch constructs dynamic computational graphs that allow for maximum flexibility and speed for deep learning research. Dynamic graphs are useful when the computation cannot be fully determined ahead of time, as they allow the graph to change on each iteration based on variable data. This makes PyTorch well-suited for problems with dynamic or variable sized inputs. While static graphs can optimize computation, dynamic graphs are easier to debug and create extensions for. PyTorch aims to be a simple and intuitive platform for neural network programming and research.
The document provides an overview of rvalue references and move semantics in C++, which allow avoiding unnecessary copying by allowing objects to be moved instead of copied in certain situations. It discusses the differences between lvalues and rvalues, the purpose of rvalue references, how move semantics and std::move work, and briefly covers forwarding references and perfect forwarding. The objective is to leave the reader with a high-level understanding of these C++11 features rather than detailed specifics.
Pythran: Static compiler for high performance, by Mehdi Amini, PyData SV 2014
Pythran is an ahead-of-time compiler that turns modules written in a large subset of Python into C++ meta-programs that can be compiled into efficient native modules. It mainly targets compute-intensive parts of the code, so it comes as no surprise that it focuses on scientific applications that make extensive use of NumPy. Under the hood, Pythran analyzes the program inter-procedurally and performs high-level optimizations and parallel code generation. Parallelism can be found implicitly in Python intrinsics or NumPy operations, or explicitly specified by the programmer using OpenMP directives directly in the Python source code. Either way, the input code remains fully compatible with the Python interpreter. While the idea is similar to Parakeet or Numba, the approach differs significantly: code generation is performed offline, not at runtime. Pythran generates heavily templated C++11 code that makes use of the NT2 meta-programming library and relies on any standard-compliant compiler to generate the binary code. We propose to walk through some examples and benchmarks, exposing the current state of what Pythran provides as well as the limits of the approach.
This document contains information about strings in C programming including how they are represented in memory, standard string functions like strlen(), strcpy(), strcmp(), etc. It also provides examples of using these string functions. The document discusses arrays of pointers as an alternative to 2D character arrays for storing strings to avoid memory wastage. It includes questions and answers related to strings and structures.
PyTorch crash course: Introduction to PyTorch deep learning framework and step by step guide to configuring PyCharm for using a remote server for implementing deep learning, plus a summary of Linux's most relevant commands.
[JavaOne 2011] Models for Concurrent Programming, by Tobias Lindaaker
The document discusses models for concurrent programming. It summarizes common misconceptions about threads and concurrency, and outlines some of the core abstractions and tools available in Java for writing concurrent programs, including threads, monitors, volatile variables, java.util.concurrent classes like ConcurrentHashMap, and java.util.concurrent.locks classes like ReentrantLock. It also discusses some models not currently supported in Java like parallel arrays, transactional memory, actors, and Clojure's approach to concurrency using immutable data structures, refs, and atoms.
C is a widely used programming language developed in the 1970s. It is efficient and commonly used for system software and applications. Variables in C have automatic, static, or allocated storage classes. Static variables retain their value between function calls. Hashing is used to convert data into integers to enable fast searching when there is no inherent ordering. Include files can be nested and precompiled headers improve compilation speed. Pointers can have a null value to represent no target. calloc() allocates memory for an array and initializes elements to 0 while malloc() only allocates raw memory.
This document discusses speaker diarization, which is the process of segmenting an audio stream into homogeneous segments according to speaker identity. It covers feature extraction methods like MFCCs, segmentation using Bayesian Information Criteria to compare Gaussian mixture models, and clustering algorithms like k-means and hierarchical agglomerative clustering. Dendrogram visualizations are used to identify natural speaker clusters. The overall goal is to partition audio recordings of discussions or debates into homogeneous segments to attribute speech segments to individual speakers.
The document discusses different types of memory areas in C++ including stack, heap, static, and const data areas. It compares pointers and references, explaining that pointers can be null while references must always refer to a valid object. The document also covers memory management topics like new and delete operators, placement new, and smart pointers. Common memory problems are outlined along with solutions like using destructors and smart pointers to avoid leaks.
Statistical Machine Learning for Text Classification with scikit-learn and NLTK, by Olivier Grisel
This document discusses using machine learning algorithms and natural language processing tools for text classification tasks. It covers using scikit-learn and NLTK to extract features from text, build predictive models, and evaluate performance on tasks like sentiment analysis, topic categorization, and language identification. Feature extraction methods discussed include bag-of-words, TF-IDF, n-grams, and collocations. Classifiers covered are Naive Bayes and linear support vector machines. The document reports typical accuracy results in the 70-97% range for different datasets and models.
Seeing with Python, presented at PyCon AU 2014, by Mark Rees
This document discusses the history and current state of computer vision. It begins with definitions of computer vision from the 1980s, focusing on machine vision and automatically analyzing images. It then provides a 2014 definition that emphasizes duplicating human vision abilities through electronic image perception and understanding using models from various fields. The document notes computer vision involves more than just image capture, including image processing, algorithm development, and display control. It also lists and briefly describes several popular Python libraries for computer vision tasks, such as PIL, Scipy ndimage, Mahotas, PCV, SimpleCV, and OpenCV. It concludes with resources for learning more about computer vision and Python.
The most-cited orthopedics journal in 2015 was Spine, with 38,769 citations. It had an impact factor of 2.439 and an immediacy index of 0.334. Within its category, it ranked 16th of 74 journals, placing it in quartile Q1.
This document describes the objectives and steps for completing Task 2 on information literacy. The objectives are to search the Fama catalog for monographs on "Cuidados enfermeros para pacientes críticos y enfermos terminales" (nursing care for critical and terminally ill patients) using Boolean operators, import 5 relevant monographs into Mendeley, and create a bibliography in Vancouver format. The document explains the search strategy used in Fama, the process of importing the results into Mendeley, and how to configure Mendeley to cite in Vancouver format.
The document presents information about a law student at Universidad Yacambú in Venezuela. The student is Carlos I. Marciales E., ID number V-14267702. He is taking a Private International Law course on exequatur and extradition.
Alvin Rohit is a multidisciplinary designer with over 30 digital projects delivered ranging from web, mobile apps, and eLearning. He has strong skills in visual design, interaction design, typography, and design software like Photoshop, Illustrator, Premiere, and After Effects. Some of his project experiences include designing the Meditravel medical tourism mobile app, the Navdeck data analytics iOS app, and the K12 eLearning government project built in Articulate Storyline. He aims to gain user insights and create intuitive designs that integrate visuals and interactions following industry standards.
The document is about tuberculosis. It describes tuberculosis as an infectious disease caused by the bacillus Mycobacterium tuberculosis that generally affects the lungs. It explains the different diagnostic methods, such as chest X-ray and immunological and microbiological tests. It also covers the radiological manifestations of the disease in the lungs, lymph nodes, and pleura.
This document discusses PGE's proposal to expand natural gas generation at its Carty plant near Boardman, Oregon and how it relates to Oregon's climate goals. It notes that adding two new natural gas units would significantly increase greenhouse gas emissions, even more so when accounting for upstream methane leaks. Oregon's climate policy commitments and goals would not be met if the Carty expansion moves forward given the massive methane pollution. Activists plan to intervene and provide the PUC with new methane information and climate science to justify stronger rules that prevent increased long-term methane usage and ensure Oregon's climate targets are achieved.
The Maltreatment and Adolescent Pathways (MAP) Project, by Christine Wekerle
The MAP Project studies maltreatment and adolescent pathways. It involves tracking youth with open child welfare cases every 6 months to assess violence, mental health, substance use, and risky sexual practices. The project has multiple studies funded by various sources. It uses standardized tests and collects data from over 500 youth involved with child welfare in Ontario. Results show high rates of maltreatment, bullying, dating violence, and delinquent behaviors in this population.
Lecture 4 from the COSC 426 graduate class on Augmented Reality. Taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury. August 1st 2012
Dynamic memory allocation allows programs to request additional memory at runtime as needed, rather than having a fixed amount allocated at compile time. The malloc function allocates memory on the heap and returns a pointer to it. This memory must later be freed using free to avoid memory leaks. Calloc initializes allocated memory to zero, while realloc changes the size of previously allocated memory. C supports dynamic memory allocation through library functions like malloc, calloc, free and realloc.
How Automated Vulnerability Analysis Discovered Hundreds of Android 0-days (Priyanka Aash)
Death from a million bugs. Android has become one of the world’s most deployed operating systems. Recently researchers have been focused on uncovering vulnerabilities in the Android smartphone ecosystem. This session will present newly developed automated vulnerability analysis techniques that resulted in the discovery of hundreds of previously unknown vulnerabilities.
Learning Objectives:
1: Learn how to use automated vulnerability analysis to ID security bugs at scale.
2: Learn about state-of-the-art and novel techniques for automated vulnerability analysis.
3: Learn proven techniques to find vulnerabilities in bootloaders, kernel drivers and apps.
(Source: RSA Conference USA 2018)
Using an Array #include <stdio.h> #include <mpi.h>.pdf (giriraj65)
Using an Array:
#include <stdio.h>
#include <mpi.h>
int main(int argc, char** argv) {
int rank, size;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
// Define topology: neighbors of each node (-1 marks an unused slot)
int topology[5][3] = {{1, 4, -1}, {0, 2, 3}, {1, -1, -1}, {0, 4, -1}, {0, 3, -1}};
// Initialize index and edge arrays
int index[5] = {0};
int edges[5][3] = {{0}};
// Count each node's neighbors and copy them into the edge array
int i = 0;
int j = 0;
for (i = 0; i < 5; i++)
{
for (j = 0; j < 3 && topology[i][j] != -1; j++)
{
edges[i][j] = topology[i][j];
index[i]++;
}
}
// Display topology for each process
for (i = 0; i < size; i++)
{
if (rank == i)
{
printf("Process %d has %d neighbors: ", i, index[i]);
for (j = 0; j < index[i]; j++)
{
printf("%d ", edges[i][j]);
}
printf("\n");
}
MPI_Barrier(MPI_COMM_WORLD);
}
MPI_Finalize();
return 0;
}
MPI Functions:
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv)
{
int rank, size;
MPI_Init(&argc, &argv); // Initialize MPI
MPI_Comm_rank(MPI_COMM_WORLD, &rank); // Get the rank of the current process
MPI_Comm_size(MPI_COMM_WORLD, &size); // Get the total number of processes
int nnodes = 5; // Number of nodes in the topology
int nedges = 10; // Number of edge entries in the topology
// Cumulative count of edges up to and including each node
int index[5] = {2, 5, 6, 8, 10};
int edges[10] = {4, 1, 0, 2, 3, 1, 0, 4, 0, 3}; // List of all edges
MPI_Comm graph_comm; // Create a new communicator for the graph topology
MPI_Graph_create(MPI_COMM_WORLD, nnodes, index, edges, 0, &graph_comm); // Create the graph topology
int count; // Number of neighbors
int* neighbors; // Neighbor Ranks
MPI_Graph_neighbors_count(graph_comm, rank, &count); // Get the number of neighbors for the current process
neighbors = (int*) malloc(count * sizeof(int)); // Allocate memory for the array of neighbor ranks
MPI_Graph_neighbors(graph_comm, rank, count, neighbors); // Get the neighbor ranks for the current process
// Display each process's neighbors in the topology
printf("Process %d has %d neighbors:", rank, count);
int i;
for (i = 0; i < count; i++)
{
printf(" %d", neighbors[i]); // Print the neighbor ranks
}
printf("\n");
free(neighbors); // Release the neighbor array
MPI_Finalize();
return 0;
}
I've provided my own code above in case that helps or makes it easier on you guys. My output isn't quite right and I've been at it for a while; if someone could fix the output and explain it to me, I'd be very happy :)
4. Use Graph topology to create the following one. Once you create your topology, use one process (e.g., process 0) to display the number of neighbors and its neighbors at each node (i.e., process). Use the following two methods to check your topology. (20 points) i) Use two arrays, index and edges, to display the number of neighbors and its neighbors for each node. ii) Use two functions in MPI, "MPI_Graph_neighbors_count" and "MPI_Graph_neighbors".
llinn@scholar-fe06: /470 $ mpirun -n 5 ./hmwk5q4-b
Process 0 has 2 neighbors: 4 1
Process 1 has 3 neighbors: 0 2 3
Process 2 has 1 neighbors: 1
Process 3 has 2 neighbors: 4 0.
MeCC: Memory Comparison-based Code Clone Detector (영범 정)
The document describes MeCC, a memory comparison-based code clone detector. MeCC estimates program semantics by analyzing programs statically to produce abstract memories. Abstract memories map abstract addresses to abstract values. MeCC detects clones by comparing abstract memories and identifying similarities. This allows MeCC to find semantic clones that are syntactically different but have identical behaviors, such as clones involving control replacements, capturing procedural effects, or more complex transformations.
MeCC: Memory Comparison based Clone Detector (Sung Kim)
The document describes MeCC, a memory comparison-based code clone detector. MeCC estimates the semantics of programs by analyzing them to produce abstract memories, which are maps from abstract addresses to abstract values. MeCC detects clones by comparing abstract memories and calculating their similarity scores. Unlike previous clone detectors, MeCC can detect semantic clones that are syntactically different due to code transformations like statement reordering, variable replacement, and statement splitting.
The document discusses whether the PyPy implementation of Python is ready for production use. It provides an overview of PyPy, benchmarks various workloads against CPython, and evaluates PyPy based on common criteria for determining if a software project is production-ready. While some workloads are slower on PyPy and it fails with some Python modules, it meets most criteria and provides performance improvements for CPU-bound tasks. Overall, the document concludes PyPy could be considered for production use, especially given its advantages in scalability and upcoming improvements to its just-in-time compiler and Python 3 support.
After hot discussions on the article about "The Big Calculator" I felt like checking some other projects related to scientific computations. The first program that came to hand was the open-source project OpenMS dealing with protein mass spectrometry. This project appeared to have been written in a very serious and responsible way. Developers use at least Cppcheck to analyze their project. That's why I didn't hope to find anything sensational left unnoticed by that tool. On the other hand, I was curious to see what bugs PVS-Studio would be able to find in the code after Cppcheck. If you want to know this too, follow me.
The document summarizes MLOps using Protobuf in Unity for a 3D game called FunAI. It discusses using Unity and MLAgents to build a learning environment, training models in Python and playing them in a Unity docker container. The key steps are:
1. Building a Unity environment with MLAgents to get observations from sensors and take actions through behaviors.
2. Recording data from the Unity environment and using it to train models in Python.
3. Serializing the data with Protobuf for efficient communication between Python and Unity via gRPC.
4. Dockerizing the training process and playing trained models to deploy the MLOps pipeline.
Python has become the most widely used language for developing tools in the security field. This talk focuses on the different ways an analyst can take advantage of the Python programming language from both a defensive and an offensive point of view.
From the defensive point of view, Python is one of the best options as a pentesting tool because of the large number of modules that can help us develop our own tools for analyzing a target.
From the offensive point of view, we can use Python to gather information about a target both passively and actively. The end goal is to obtain as much knowledge as possible in the context we are auditing. The main points to cover include:
1. Introduction to Python for cybersecurity projects (5 min)
2. Pentesting tools (10 min)
3. Python tools from the defensive point of view (10 min)
4. Python tools from the offensive point of view (10 min)
Beyond Breakpoints: A Tour of Dynamic Analysis (Fastly)
Despite advances in software design and static analysis techniques, software remains incredibly complicated and difficult to reason about. Understanding highly-concurrent, kernel-level, and intentionally-obfuscated programs are among the problem domains that spawned the field of dynamic program analysis. More than mere debuggers, the challenge of dynamic analysis tools is to be able record, analyze, and replay execution without sacrificing performance. This talk will provide an introduction to the dynamic analysis research space and hopefully inspire you to consider integrating these techniques into your own internal tools.
The fundamentals and advance application of Node will be covered. We will explore the design choices that make Node.js unique, how this changes the way applications are built and how systems of applications work most effectively in this model. You will learn how to create modular code that’s robust, expressive and clear. Understand when to use callbacks, event emitters and streams.
Profiling in Python provides concise summaries of the key profiling tools:
cProfile and line_profiler profile execution time and identify slow lines of code. memory_profiler profiles memory usage with line-by-line or time-based output. YEP extends profiling to compiled C/C++ extensions, such as Cython modules, which the standard Python profilers do not cover.
This document describes the steps to convert a TensorFlow model to a TensorRT engine for inference. It includes steps to parse the model, optimize it, generate a runtime engine, serialize and deserialize the engine, as well as perform inference using the engine. It also provides code snippets for a PReLU plugin implementation in C++.
Scientific Computing with Python Webinar --- May 22, 2009 (Enthought, Inc.)
This document provides information about NumPy memory mapped arrays and Enthought Python Distribution (EPD). It discusses how memory mapped arrays allow large datasets to be accessed as NumPy arrays while residing in disk files rather than main memory. It also advertises EPD software and training courses for scientific computing in Python.
Adam Sitnik, "State of the .NET Performance" (Yulia Tsisyk)
MSK DOT NET #5
2016-12-07
In this talk Adam will describe how the latest changes in .NET are affecting performance.
Adam wants to go through:
C# 7: ref locals and ref returns, ValueTuples.
.NET Core: Spans, Buffers, ValueTasks
And how all of these things help build zero-copy streams aka Channels/Pipelines which are going to be a game changer in the next year.
This document summarizes Adam Sitnik's presentation on .NET performance. It discusses new features in C# 7 like ValueTuple, ref returns and locals, and Span. It also covers .NET Core improvements such as ArrayPool and ValueTask that reduce allocations. The presentation shows how these features improve performance through benchmarks and reduces GC pressure. It provides examples and guidance on best using new features like Span, pipelines, and unsafe code.
Building Network Functions with eBPF & BCC (Kernel TLV)
eBPF (Extended Berkeley Packet Filter) is an in-kernel virtual machine that allows running user-supplied sandboxed programs inside of the kernel. It is especially well-suited to network programs and it's possible to write programs that filter traffic, classify traffic and perform high-performance custom packet processing.
BCC (BPF Compiler Collection) is a toolkit for creating efficient kernel tracing and manipulation programs. It makes use of eBPF.
BCC provides an end-to-end workflow for developing eBPF programs and supplies Python bindings, making eBPF programs much easier to write.
Together, eBPF and BCC allow you to develop and deploy network functions safely and easily, focusing on your application logic (instead of kernel datapath integration).
In this session, we will introduce eBPF and BCC, explain how to implement a network function using BCC, discuss some real-life use-cases and show a live demonstration of the technology.
About the speaker
Shmulik Ladkani, Chief Technology Officer at Meta Networks,
Long time network veteran and kernel geek.
Shmulik started his career at Jungo (acquired by NDS/Cisco) implementing residential gateway software, focusing on embedded Linux, Linux kernel, networking and hardware/software integration.
Some billions of forwarded packets later, Shmulik left his position as Jungo's lead architect and joined Ravello Systems (acquired by Oracle) as tech lead, developing a virtual data center as a cloud-based service, focusing around virtualization systems, network virtualization and SDN.
Recently he co-founded Meta Networks where he's been busy architecting secure, multi-tenant, large-scale network infrastructure as a cloud-based service.
The document discusses Java memory allocation profiling using the Aprof tool. It explains that Aprof works by instrumenting bytecode to inject calls that count and track object allocations. This allows it to provide insights on where memory is being allocated and identify potential performance bottlenecks related to garbage collection.
A Summary of Work in Spacecraft Relative Motion Control Lab
Sheng Li
Aerospace Engineering, University of Michigan, Ann Arbor, MI
This document records my work in the Spacecraft Relative Motion Control Lab in the summer of 2016, from May to July. The Lab is still under construction. Its purpose is to scale down the motion of real spacecraft to the lab scale in both time and length, and to emulate the motion of spacecraft with an omni-directional robot. My summer work focused on the camera tracking system, robot control, and the MPC controller. My main achievements include realizing the multiple-object tracking function for the motion capture codes that run on a Raspberry Pi, finding the method of automatic C code generation for the MPC Simulink model, building a simple version of an estimator for the MPC controller in Simulink, and gaining a basic understanding of the robot controller codes developed by Dominic Liao-McPherson and Richard Sutherland and of how to control the motion of the robot with their codes. My colleague Weitao Sun has developed the scaling blocks in Simulink. An integrated Simulink model that includes the scaling blocks, MPC controller blocks, and estimator blocks is to be built as the next step.
I. Multiple-object Tracking
The original object tracking codes were developed by Pedro Donato on the basis of codes from NaturalPoint Inc., and they support only single-object tracking. To serve the purpose of the Spacecraft Relative Motion Control Lab, multiple-object tracking codes had to be developed, which I did by modifying the original single-object tracking codes.
The original single-object tracking codes were designed to run on Linux and read the broadcast position and attitude data from the OptiTrack motion capture system, which runs on Windows. Because of the variable types used in the original codes, their function is restricted to single-object tracking.
The first step of realizing the multiple-object tracking function is to increase the dimension of the variables defined in the header file run_motion_capture.h, to create room for multiple objects.
• The definition of structure motion_capture_obs in the original header file:
struct motion_capture_obs{
double time;
// position (x,y,z,roll,pitch,yaw)
double pose[6];
};
struct optitrack_message {
int ID;
float x, y, z;
float qx, qy, qz, qw;
};
• The new definition of structure motion_capture_obs in the modified header file:
struct motion_capture_obs{
int ID[objlimit]; // The ID of objects
char name[objlimit][256];
double time;
// position (x,y,z,roll,pitch,yaw)
double pose[objlimit][6];
};
struct optitrack_message {
int ID[objlimit];
char name[objlimit][256];
float x[objlimit], y[objlimit], z[objlimit];
float qx[objlimit], qy[objlimit], qz[objlimit], qw[objlimit];
};
where objlimit is the upper limit of the number of objects to be tracked, defined as follows at the very beginning of the header file:
#define objlimit 20 // 20 is a customized number
In addition, the definition of the global variables also should be slightly changed:
• Original codes:
#ifndef MOTION_CAPTURE_GLOBALS
#define MOTION_CAPTURE_GLOBALS
EXTERN struct motion_capture_obs mcap_obs[2];
EXTERN pthread_mutex_t mcap_mutex;
EXTERN FILE *mcap_txt;
#endif
• Modified codes:
#ifndef MOTION_CAPTURE_GLOBALS
#define MOTION_CAPTURE_GLOBALS
EXTERN struct motion_capture_obs mcap_obs[objlimit]; //increase dim.
EXTERN pthread_mutex_t mcap_mutex;
EXTERN FILE *mcap_txt;
EXTERN int nRigidBodies; //add a global counter for objects
#endif
The rest of the header file remains the same.
The second step is to modify the PacketClient.cpp file, which contains the major functions that transfer the camera data broadcast from OptiTrack. However, the only function that needs to be modified is void Unpack_to_code(char* pData, struct optitrack_message *optmsg), since it is the function called in run_motion_capture.c. The purpose of this modification is also to increase the dimension of the variables.
• Original codes:
void Unpack_to_code(char* pData, struct optitrack_message *optmsg)
{
int major = NatNetVersion[0];
int minor = NatNetVersion[1];
char *ptr = pData;
int MessageID = 0;
memcpy(&MessageID, ptr, 2); ptr += 2;
int nBytes = 0;
memcpy(&nBytes, ptr, 2); ptr += 2;
if(MessageID == 7) // FRAME OF MOCAP DATA packet
{
// frame number
int frameNumber = 0; memcpy(&frameNumber, ptr, 4); ptr += 4;
// number of data sets (markersets, rigidbodies, etc)
int nMarkerSets = 0; memcpy(&nMarkerSets, ptr, 4); ptr += 4;
for (int i=0; i < nMarkerSets; i++)
{
// Markerset name
char szName[256];
strcpy(szName, ptr);
int nDataBytes = (int) strlen(szName) + 1;
ptr += nDataBytes;
// marker data
int nMarkers = 0; memcpy(&nMarkers, ptr, 4); ptr += 4;
for(int j=0; j < nMarkers; j++)
{
float x = 0; memcpy(&x, ptr, 4); ptr += 4;
float y = 0; memcpy(&y, ptr, 4); ptr += 4;
float z = 0; memcpy(&z, ptr, 4); ptr += 4;
}
}
// unidentified markers
int nOtherMarkers = 0; memcpy(&nOtherMarkers, ptr, 4); ptr += 4;
for(int j=0; j < nOtherMarkers; j++)
{
float x = 0.0f; memcpy(&x, ptr, 4); ptr += 4;
float y = 0.0f; memcpy(&y, ptr, 4); ptr += 4;
float z = 0.0f; memcpy(&z, ptr, 4); ptr += 4;
}
// rigid bodies
int nRigidBodies = 0;
memcpy(&nRigidBodies, ptr, 4); ptr += 4;
if (nRigidBodies > 1) {
printf("Error: Number of rigid bodies = %d\n", nRigidBodies);
}
for (int j=0; j < nRigidBodies; j++)
{
// rigid body position/orientation
memcpy(&(optmsg->ID), ptr, 4); ptr += 4;
memcpy(&(optmsg->x), ptr, 4); ptr += 4;
memcpy(&(optmsg->y), ptr, 4); ptr += 4;
memcpy(&(optmsg->z), ptr, 4); ptr += 4;
memcpy(&(optmsg->qx), ptr, 4); ptr += 4;
memcpy(&(optmsg->qy), ptr, 4); ptr += 4;
memcpy(&(optmsg->qz), ptr, 4); ptr += 4;
memcpy(&(optmsg->qw), ptr, 4); ptr += 4;
}
}
else
{
printf("Unrecognized Packet Type.\n");
}
}
• Modified codes for multiple-object tracking:
void Unpack_to_code(char* pData, struct optitrack_message *optmsg)
{
int major = NatNetVersion[0];
int minor = NatNetVersion[1];
char *ptr = pData;
// message ID
int MessageID = 0;
memcpy(&MessageID, ptr, 2); ptr += 2;
// size
int nBytes = 0;
memcpy(&nBytes, ptr, 2); ptr += 2;
if(MessageID == 7) // FRAME OF MOCAP DATA packet
{
// frame number
int frameNumber = 0; memcpy(&frameNumber, ptr, 4); ptr += 4;
// number of data sets (markersets, rigidbodies, etc)
int nMarkerSets = 0; memcpy(&nMarkerSets, ptr, 4); ptr += 4;
for (int i=0; i < nMarkerSets; i++)
{
// Markerset name
char szName[256];
strcpy(szName, ptr);
int nDataBytes = (int) strlen(szName) + 1;
//add a new field “NAME” to help identify different objects
strcpy(optmsg->name[i], ptr);
/////////////////////////////
ptr += nDataBytes;
// marker data
int nMarkers = 0; memcpy(&nMarkers, ptr, 4); ptr += 4;
for(int j=0; j < nMarkers; j++)
{
float x = 0; memcpy(&x, ptr, 4); ptr += 4;
float y = 0; memcpy(&y, ptr, 4); ptr += 4;
float z = 0; memcpy(&z, ptr, 4); ptr += 4;
}
}
// unidentified markers
int nOtherMarkers = 0; memcpy(&nOtherMarkers, ptr, 4); ptr += 4;
for(int j=0; j < nOtherMarkers; j++)
{
float x = 0.0f; memcpy(&x, ptr, 4); ptr += 4;
float y = 0.0f; memcpy(&y, ptr, 4); ptr += 4;
float z = 0.0f; memcpy(&z, ptr, 4); ptr += 4;
}
// rigid bodies
// int nRigidBodies = 0;
// nRigidbodies Globally declared in header
memcpy(&nRigidBodies, ptr, 4); ptr += 4;
for (int j=0; j < nRigidBodies; j++)
{
// rigid body position/orientation
// forming arrays
memcpy(&(optmsg->ID[j]), ptr, 4); ptr += 4;
memcpy(&(optmsg->x[j]), ptr, 4); ptr += 4;
memcpy(&(optmsg->y[j]), ptr, 4); ptr += 4;
memcpy(&(optmsg->z[j]), ptr, 4); ptr += 4;
memcpy(&(optmsg->qx[j]), ptr, 4); ptr += 4;
memcpy(&(optmsg->qy[j]), ptr, 4); ptr += 4;
memcpy(&(optmsg->qz[j]), ptr, 4); ptr += 4;
memcpy(&(optmsg->qw[j]), ptr, 4);
ptr += 94; // 94 is a very important number, obtained from lots of tests.
}
}
else
{
printf("Unrecognized Packet Type.\n");
}
}
*The last increment of the pointer ptr is 94, which leads the pointer exactly to the beginning of the next object's information. The number 94 was obtained from multiple tests. As a result, optmsg contains the position and orientation information of multiple objects.
The last step is to modify the top-level code motion_capture_standalone.c. This code is executed at the top level to start threads, store data in the buffer, and print the position and orientation information of the tracked objects. The function void *print_motion_capture(void *userdata) needs modification.
• Original codes:
void *print_motion_capture(void *userdata){
struct motion_capture_obs mc;
/* Local copy of mcap_obs object (to update) */
while(1){
fprintf(stdout,"%lf,%lf,%lf,%lf,%lf,%lf,%lf\n",mc.time,
mc.pose[0],mc.pose[1],mc.pose[2],mc.pose[3],mc.pose[4],mc.pose[5]);
fflush(stdout);
pthread_mutex_lock(&mcap_mutex);
memcpy(&mc, mcap_obs, sizeof(struct motion_capture_obs));
pthread_mutex_unlock(&mcap_mutex);
usleep(100000); /* 10Hz Hard coded for now */
}
return NULL;
}
• Modified codes (the dimension of mc increased):
void *print_motion_capture(void *userdata){
struct motion_capture_obs mc;
/* Local copy of mcap_obs object (to update) */
while(1){
for (int j=0; j < nRigidBodies; j++)
{
printf("%lf,%d,%s,%lf,%lf,%lf,%lf,%lf,%lf\n",mc.time,
mc.ID[j],mc.name[j],mc.pose[j][0],mc.pose[j][1],mc.pose[j][2],
mc.pose[j][3],mc.pose[j][4],mc.pose[j][5]);
fflush(stdout);
}
pthread_mutex_lock(&mcap_mutex);
memcpy(&mc, mcap_obs, sizeof(struct motion_capture_obs));
pthread_mutex_unlock(&mcap_mutex);
usleep(100000); /* 10Hz Hard coded for now */
}
return NULL;
}
The remaining file util.c does not need modification.
Method of running the object tracking codes:
1. Connect the camera system to power supply;
2. Run the Windows system and the OptiTrack software;
3. Build a rigid body and broadcast its information in OptiTrack;
4. Run console on Linux (Raspbian on Raspberry PI);
5. Type command: cd ~/…/Camera/motioncapture_multi (… depends on the path of
Camera folder);
6. Type command: make clean (clean the existing binary executable files);
7. Type command: make (generate the new binary executable files);
8. Type command: cd bin;
9. Type command: ./motion_capture_standalone to run the object tracking system on
Linux.
II. A Simple Version of Estimator for MPC
Since there are errors in determining the position and attitude of a satellite in space, due to the limited accuracy of the measurement instruments (sensors), we need an estimator in the control model to make the measurement of the state more accurate. The estimator takes a weighted average between the state of the satellite calculated by the mathematical model and the state measured by the sensors. In the Spacecraft Relative Motion Control Lab, the measurement instruments are the cameras and the corresponding software system, which produce random and systematic errors. Therefore, I built a simple version of an estimator for the MPC Simulink model based on the error of the camera system. The block diagram shows the working principle of the estimator.
Fig. 1. Conceptual Estimator Block Diagram
Note that the Random block is an imitation source of the measurement error produced by the camera system.
I call this a simple estimator because: 1. the source of inaccuracy is a simple random function; 2. the mathematical model used is the linear Hill-Clohessy-Wiltshire dynamic model; 3. the averaging method is a simple arithmetic average (for further study, a weighted average could be applied).
This is the original MPC loop:
Fig. 2. Original MPC Loop by Christopher Peterson
This is the modified MPC loop with estimator:
Fig. 3. Modified MPC Loop with Estimator
This is the sub-system of the estimator block:
Fig. 4. Estimator Sub-system
Camera Simu function:
function x_cam = fcn(x)
x_cam = zeros(6,1);
for i = 1:1:6
if i<=3
rand_add=(rand(1,1)-.5)*2*.1; %scaled for distance
else
rand_add=(rand(1,1)-.5)*2*.0001; %scaled for speed
end
x_cam(i) = x(i) + rand_add;
end
end
Avg function:
function x_est = fcn(x_cam, x_mm)
x_est = zeros(6,1);
for i = 1:1:6
x_est(i) = (x_cam(i) + x_mm(i))/2;
end
end
This is the Hill-Clohessy-Wiltshire (linear) sub-system (MPC.A and MPC.B are constants prescribed by the Simulink model):
Fig. 5. Hill-Clohessy-Wiltshire (linear) Sub-system
The following figures show the ideal MPC simulation results and the results of MPC with the estimator. The latter are more realistic and reasonable.
i. Ideal MPC simulation results
Fig. 6. Original MPC Simulation Trajectory
Fig. 7. Instantaneous Velocity Change in Radial Direction (ideal MPC)
Fig. 8. Instantaneous Velocity Change in In-Track Direction (ideal MPC)
ii. Simulation results of MPC with estimator
Fig. 9. Simulation Trajectory of MPC with Estimator
Fig. 10. Instantaneous Velocity Change in Radial Direction (MPC with estimator)
Fig. 11. Instantaneous Velocity Change in In-Track Direction (MPC with estimator)
III. Method of Running the Robot Control Code
The following steps give an instruction of the method of running the robot with the robot controller
and camera system:
1. Connect the camera system to power supply;
2. Run the Windows system and the OptiTrack software;
3. Build a rigid body and broadcast its information in OptiTrack;
4. Connect the Arduino and Raspberry PI with USB cable;
5. Connect Raspberry PI to a portable power supply (5V, 1A), and wait for 30 seconds;
6. Have a computer connect to the Lab router (Jasper);
7. Remotely log in to the Raspberry Pi over SSH by typing ssh pi@10.0.1.4 in the console; the password is Yoda;
8. Locate the path of robot_controller folder;
9. Type cd ~/…/robot_controller/src in console;
10. Run the binary executable file by typing ./t1.out;
If the controller codes are modified, re-compiling is needed before running the robot:
1. Use the console to get to the src level in the robot_controller folder;
2. Type make in the console and re-compiling will be done.
For now, the robot can be guided to the origin of the ground coordinate system from any position (as long as it is detected by the camera system). There is still a problem with guiding the robot along a specific trajectory (e.g., a circular trajectory centered at the origin).
IV. Automatic C Code Generation Using Simulink
Although it is very easy to auto-generate C code from Simulink with a properly built model (just a click of the Build Model button), this is the hardest part to complete, because what we have in the present Simulink model is a purely theoretical MPC model. If we want MPC C code that can control the robot, we need to work out the interface between the robot controller and the MPC Simulink model; we should then be able to build that interface using the S-Function Builder block provided in Simulink. I need to learn more about both the S-Function Builder and the robot controller code.