This document provides an introduction to higher mathematics. It covers topics in logic, proofs, number theory, and functions. The introduction defines the logical operations and formulas used to combine statements in mathematics. Logical operations allow complex statements to be built from simpler ones and include negation, conjunction, disjunction, and implication. Brief biographies are provided for mathematicians mentioned in the text, such as George Boole, who contributed to the development of logic.
A Buffer Overflow Study: Attacks and Defenses (2002) – Aiim Charinthip
This document provides an overview of buffer overflow attacks and defenses. It discusses stack and heap overflows, and how programs can be exploited by overwriting memory buffers. It then summarizes various protection solutions, including Libsafe and the Grsecurity kernel patch, which make the stack and heap non-executable to prevent execution of injected code. The document serves as an introduction to buffer overflows and techniques for mitigating these vulnerabilities.
This dissertation by Hualong Feng presents a computational method for simulating 3D vortex sheets using an adaptive triangular panel/particle method. The document includes an introduction outlining the motivation to study vortex sheets and vortex rings. It then provides mathematical background on vortex methods and previous computational work modeling 2D and axisymmetric vortex sheets. The core chapters describe the author's development of a discretization and adaptive refinement approach for representing and computing the dynamics of 3D vortex sheets, including the generation of vorticity at density interfaces. The method is applied to simulations of vortex ring instability and the collision of vortex rings. Results are also presented for vortex sheet computations in 3D density-stratified flows.
This document is the master's thesis of Tamás Martinec titled "Real-Time Non-Photorealistic Shadow Rendering". The thesis discusses non-photorealistic rendering (NPR) techniques, real-time shadow rendering algorithms, and presents an example of combining hatching-based NPR with shadow mapping to generate stylized shadows in real-time. The thesis is divided into chapters covering NPR techniques and styles, real-time shadow rendering methods, graphics hardware and shaders, and a demonstration implementing hatching and shadowed hatching shaders.
This document describes a project to create a realistic car driving simulation using rigid body dynamics and physics modeling. It outlines the objective of physically simulating a car in a 3D OpenGL environment. The simulation will include realistic graphics rendering as well as modeling of forces such as braking, acceleration, and tire traction. It discusses prior work in flight and racing simulations and the techniques to be implemented, such as bump mapping, environment mapping, lighting effects, and motion blur. The document also covers integrating controls and sound and developing a racing game interface. The overall goals are high simulation accuracy and stability for a realistic driving experience.
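The forces named above (braking, acceleration, tire traction, drag) are typically combined in a per-frame force integration loop. A minimal Python sketch of that idea; the mass and force constants here are made-up illustrative values, not parameters from the project:

```python
# Minimal sketch of the longitudinal force integration a driving
# simulation performs each frame (illustrative constants, not from the project).

def step(v, throttle, brake, dt, mass=1200.0, engine_force=4000.0,
         brake_force=6000.0, drag_coeff=0.4257):
    """Advance the car's forward speed v (m/s) by one time step dt (s)."""
    f_traction = throttle * engine_force           # engine driving force
    f_brake = brake * brake_force if v > 0 else 0  # brakes only resist motion
    f_drag = drag_coeff * v * v                    # aerodynamic drag
    accel = (f_traction - f_brake - f_drag) / mass
    return max(0.0, v + accel * dt)                # speed never goes negative

v = 30.0                   # cruising at 30 m/s
for _ in range(100):       # brake hard for 1 second (dt = 0.01 s)
    v = step(v, throttle=0.0, brake=1.0, dt=0.01)
print(round(v, 2))
```

With these constants the braking deceleration is about 5 m/s² plus a small drag contribution, so one second of hard braking sheds a bit over 5 m/s.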
Information extraction systems: aspects and characteristics – George Ang
This document provides a survey of information extraction systems and techniques. It discusses the main components and design approaches of information extraction, including manual and automatic pattern discovery. It also reviews several important prior information extraction systems and approaches to wrapper generation, including both supervised and unsupervised methods. The document serves to describe the state of the art in information extraction and provide an overview of the field.
This document describes a semester project involving rigid body sound synthesis. The project uses modal synthesis in the frequency domain to generate contact sounds based on forces from a rigid body simulation. It utilizes various math libraries like Armadillo for linear algebra and SVDLIBC for sparse matrix decompositions. The simulation models rigid bodies using finite elements, computes their vibration modes, and plays back the resulting sounds using SDL audio. Key aspects covered include matrix formation, material parameters, modal analysis calculations, and audio playback implementation.
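Modal synthesis as described above amounts to summing exponentially damped sinusoids, one per vibration mode. A minimal sketch with invented mode frequencies, dampings, and gains; in the project the actual modes come from the finite-element analysis, not hand-picked values like these:

```python
import numpy as np

def modal_impulse_response(freqs, dampings, gains, sr=44100, dur=0.5):
    """Sum of exponentially damped sinusoids: the impulse response of a
    set of vibration modes (frequencies in Hz, dampings in 1/s)."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, d, g in zip(freqs, dampings, gains):
        out += g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))   # normalize to [-1, 1]

# Three made-up modes of a small struck object.
sound = modal_impulse_response([440.0, 1120.0, 2350.0],
                               [8.0, 15.0, 30.0],
                               [1.0, 0.5, 0.25])
print(len(sound))   # 0.5 s of audio at 44.1 kHz
```

The resulting array could then be handed to an audio callback such as the SDL playback mentioned in the summary.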
This document provides an overview of logical approaches to analyzing the security of distributed systems. It discusses cryptographic protocols, web services, and modeling tools. The document is divided into three sections. The first section describes cryptographic protocols and web services. The second section discusses tools for modeling these systems using first-order logic. The third section presents symbolic models for cryptographic protocols and a proposed model for analyzing web services security.
Discrete Mathematics - Mathematics For Computer Science – Ram Sagar Mourya
This document is a table of contents for a textbook on mathematics for computer science. It lists 10 chapters that cover topics like proofs, induction, number theory, graph theory, relations, and sums/approximations. Each chapter is divided into multiple sections that delve deeper into the chapter topic, with descriptive section titles providing a sense of what each chapter covers at a high level.
Fundamentals of Computational Fluid Dynamics – H. Lomax, T. Pulliam, D. Zingg – Rohit Bapat
This document provides an overview of computational fluid dynamics (CFD) and summarizes its key steps and concepts. It discusses the fundamentals of CFD, including conservation laws, governing equations, finite difference approximations, semi-discrete and finite volume methods, and time-marching algorithms. The document is intended to introduce readers to the basic theory and methods in CFD for modeling fluid flow and transport phenomena.
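Finite difference approximations, mentioned above as a CFD building block, can be illustrated with a second-order central difference. The test function sin(x) is just an illustrative choice, not an example from the book:

```python
import math

def central_diff(f, x, h):
    """Second-order central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Approximate d/dx sin(x) at x = 0 (exact value: cos(0) = 1).
for h in (0.1, 0.01):
    err = abs(central_diff(math.sin, 0.0, h) - 1.0)
    print(h, err)   # error shrinks roughly like h**2
```

Halving the spacing h quarters the error, which is the second-order accuracy that finite-difference CFD schemes rely on.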
This document is a guide for playing the game Civilization. It covers strategies for the early game such as initial exploration, technology and city building. It also discusses managing cities, military strategy, civics policies, religion, diplomacy, and dealing with unhappiness. The guide is intended to help new players with their early moves and general city and empire management throughout the game.
This document discusses machine learning and large data sets. It covers types of algorithms, including deterministic and adaptive models. It describes the enormous growth of digital data and challenges of data analysis. Machine learning is defined, and applications like credit analysis, autonomous vehicles and medical diagnosis are mentioned. Analyzing large data sets through manual and automated methods like data mining is also discussed. Examples of large data set analysis include images, videos and medical areas like mammography and colonoscopy. The conclusion is that computational analysis of large amounts of data presents opportunities.
This document provides an introduction to integral calculus and demonstrates how to perform integral calculations using the computer algebra system Sage. It covers key integral calculus concepts such as the definition of the integral, Riemann sums, the Fundamental Theorem of Calculus, and techniques for evaluating integrals such as substitution, integration by parts, and trigonometric substitutions. It also discusses applications of integrals to computing areas, volumes, arc lengths, averages, and centers of mass. The document is intended as a preliminary version of an instructional text on integral calculus using Sage.
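The Riemann-sum definition of the integral mentioned above can be sketched in plain Python (the document itself uses Sage; this only illustrates the underlying idea). Approximating the integral of x² on [0, 1], whose exact value by the Fundamental Theorem of Calculus is 1/3:

```python
def riemann_midpoint(f, a, b, n):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Integral of x**2 on [0, 1]; exact value is 1/3.
approx = riemann_midpoint(lambda x: x * x, 0.0, 1.0, 1000)
print(approx)
```

With 1000 subintervals the midpoint rule is already accurate to better than one part in a million here.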
Computer vision: Handbook of Computer Vision and Applications, Volume 1 – Sen... – Ta Nam
This document provides a summary of a book chapter about image sensors. The chapter discusses:
1. Solid-state image sensing, including fundamentals of photosensing, photocurrent processing, transportation of photosignals, and architectures of image sensors.
2. HDRC imagers that provide log compression at the pixel site for natural visual perception, with random pixel access and optimized SNR through bandwidth control per pixel.
3. Image sensors in TFA (Thin Film on ASIC) technology, which uses thin-film detectors deposited directly on readout integrated circuits for compact camera modules.
This document is a draft of a textbook titled "Applied Calculus" written by Karl Heinz Dovermann, a professor of mathematics at the University of Hawaii. It is dedicated to his wife and sons. The textbook covers topics in calculus including definitions of derivatives, integrals, and applications of calculus through 12 chapters with sections on background concepts, derivatives, applications of derivatives, integration, and prerequisites from precalculus.
Vector spaces, vector algebras, and vector geometries – Richard Smith
Vector spaces over an arbitrary field are treated. Exterior algebra and linear geometries based on vector spaces are introduced. Scalar product spaces and the Hodge star are included.
This document is a manual for JIU (Java Imaging Utilities), an open source Java library for image processing. It introduces JIU, describes its image data types and classes for loading, saving and manipulating images. It provides overviews of operations, codecs, color processing and the GUI capabilities of JIU. It also gives guidance for developers on writing custom image operations and codecs to extend JIU's functionality.
This document provides an outline for the course MBA 604 Introduction to Probability and Statistics. It lists 11 topics that will be covered in the course, including data analysis, probability, random variables, sampling distributions, estimation, hypothesis testing, regression, and analysis of variance. The course is taught by Muhammad El-Taha in the Department of Mathematics and Statistics at the University of Southern Maine.
Master's Thesis: A reuse repository with automated synonym support and cluster... – Laust Rud Jacobsen
Having a code reuse repository available can be a great asset for a programmer. But locating components can be difficult if only static documentation is available, due to vocabulary mismatch. Identifying informal synonyms used in documentation can help alleviate this mismatch. The cost of creating a reuse support system is usually fairly high, as much manual effort goes into its construction.
This project has resulted in a fully functional reuse support system with clustering of search results. By automating the construction of a reuse support system from an existing code reuse repository, and giving the end user a familiar interface, the system constructed in this project makes the desired functionality available. The constructed system is easy to use thanks to a familiar browser-based front-end. An automated method, latent semantic indexing (LSI), is used to handle synonyms and, to some degree, polysemous words in indexed components.
In the course of this project, the reuse support system has been tested using components from two sources, the retrieval performance measured, and found acceptable. Clustering usability is evaluated and clusters are found to be generally helpful, even though some fine-tuning still has to be done.
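The LSI method mentioned in the abstract is, at its core, a truncated SVD of a term-document matrix: terms that co-occur get merged in the latent space, so a query can match a component whose documentation uses a synonym. A toy sketch of the idea; the vocabulary and documents below are invented, not taken from the thesis:

```python
import numpy as np

# Tiny term-document matrix (rows = terms, columns = component docs).
terms = ["delete", "remove", "file", "sort", "list"]
A = np.array([[1, 0, 0],   # "delete" appears in doc 0
              [0, 1, 0],   # "remove" appears in doc 1
              [1, 1, 0],   # "file" co-occurs with both of the above
              [0, 0, 1],   # "sort" appears in doc 2
              [0, 0, 1]],  # "list" appears in doc 2
             dtype=float)

# Rank-2 truncated SVD: the latent space merges co-occurring terms.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T   # rows of Vk = docs in latent space

def latent_query(word):
    """Fold a one-word query into the latent space."""
    q = np.array([1.0 if t == word else 0.0 for t in terms])
    return (q @ Uk) / sk

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

q = latent_query("delete")
sims = [cosine(q, Vk[d]) for d in range(3)]
print([round(x, 3) for x in sims])
```

Because "delete" and "remove" co-occur with "file", a query for "delete" scores highly against the document that only says "remove", while the unrelated "sort"/"list" document scores near zero; this is the synonym handling the abstract refers to.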
This document is an introduction to plasma physics that covers several key topics:
1. It defines plasma as a gas of charged particles and discusses the conditions needed for a plasma state, including Debye shielding and plasma parameters.
2. It describes different models for plasma description including fluid, MHD, and two-fluid models. It also covers continuity, Euler, and state equations.
3. It discusses MHD equilibria and waves, including Alfvén and magnetosonic modes.
4. It examines MHD discontinuities and shocks.
5. It presents the two-fluid description and generalized Ohm's law.
6. It explores waves in dispersive media.
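Debye shielding (item 1 above) is characterized by the Debye length, sqrt(eps0 · kB · T_e / (n_e · e²)). A quick sketch evaluating it; the glow-discharge parameters are illustrative assumptions, not values from the text:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB = 1.381e-23     # Boltzmann constant, J/K
E = 1.602e-19      # elementary charge, C

def debye_length(T_e, n_e):
    """Debye shielding length (m) for electron temperature T_e (K)
    and electron density n_e (m^-3)."""
    return math.sqrt(EPS0 * KB * T_e / (n_e * E * E))

# An illustrative laboratory glow discharge:
# T_e ~ 2 eV (~23 000 K), n_e ~ 1e16 m^-3.
lam = debye_length(23000.0, 1e16)
print(f"{lam:.2e} m")   # on the order of a tenth of a millimetre
```

Charge imbalances are screened out over distances longer than this, which is one of the conditions for the plasma state the document describes.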
This document provides an overview and outline of computer graphics lecture notes. It covers topics such as raster displays, basic line drawing, curves, transformations, 3D objects, camera models, visibility, lighting and reflection. The document was created by the Computer Science Department at the University of Toronto for a computer graphics course and is copyrighted.
This document provides an introduction to queueing theory. It discusses key concepts such as random variables, probability distributions, performance measures, Little's law and the PASTA property. It then examines several common queueing models including the M/M/1, M/M/c, M/Er/1, M/G/1 and G/M/1 queues. For each model it derives the equilibrium distribution and discusses measures like mean queue length and waiting time. The goal is to give an overview of basic queueing theory concepts and common single-server and multi-server queues.
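For the M/M/1 queue mentioned above, the equilibrium measures have closed forms: with utilization rho = lambda/mu, the mean number in system is rho/(1 - rho), and Little's law L = lambda · W then gives the mean sojourn time. A small sketch; the arrival and service rates are illustrative, not from the text:

```python
def mm1_metrics(lam, mu):
    """Equilibrium measures for the M/M/1 queue (arrival rate lam,
    service rate mu, both per unit time; requires lam < mu)."""
    rho = lam / mu        # server utilization
    L = rho / (1 - rho)   # mean number in system
    W = L / lam           # mean sojourn time, by Little's law L = lam * W
    Lq = L - rho          # mean queue length, excluding the job in service
    return rho, L, W, Lq

rho, L, W, Lq = mm1_metrics(lam=2.0, mu=3.0)
print(rho, L, W, Lq)   # 2/3 utilization, 2 in system on average, sojourn time 1
```

Note how quickly L blows up as rho approaches 1, which is the key qualitative lesson of the M/M/1 model.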
This document provides an introduction and overview of data structures and algorithms. It discusses linked lists, binary search trees, heaps, sets, queues, and the AVL tree data structure. It also covers sorting algorithms like merge sort, quicksort, and insertion sort as well as numeric algorithms for primality testing, base conversions, finding greatest common divisors, and more. The goal is to provide annotated references and examples of how to implement and use various common data structures and algorithms.
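Two of the numeric algorithms mentioned above, greatest common divisors and primality testing, have classic short implementations; a sketch:

```python
def gcd(a, b):
    """Euclid's algorithm for the greatest common divisor."""
    while b:
        a, b = b, a % b
    return a

def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:   # only need divisors up to sqrt(n)
        if n % i == 0:
            return False
        i += 1
    return True

print(gcd(252, 198))                           # 18
print([p for p in range(20) if is_prime(p)])   # [2, 3, 5, 7, 11, 13, 17, 19]
```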
This document contains notes from a trigonometry course. It includes 10 chapters that cover topics like geometric foundations, the Pythagorean theorem, angle measurement, trigonometric functions, graphing trigonometric functions, inverse trigonometric functions, and working with trigonometric identities. Each chapter also includes supplemental problems for additional practice.
This document contains notes from a trigonometry class taught by Steven Butler at Brigham Young University in Fall 2002. It is divided into 9 chapters that cover topics such as geometric foundations, the Pythagorean theorem, angle measurement, trigonometry with right triangles, trigonometry with circles, graphing trigonometric functions, inverse trigonometric functions, and working with trigonometric identities. Each chapter contains sections that explain key concepts and include supplemental practice problems.
The document discusses spherical harmonics and their properties and applications. Spherical harmonics are orthogonal functions defined on the surface of a sphere that can be used to represent functions defined over the spherical domain, similar to how Fourier series represent functions over a 1D or 2D domain. The document first reviews mathematical fundamentals including orthogonal functions and spherical coordinates. It then defines spherical harmonics and describes some of their key properties such as rotational invariance. Finally, it discusses two applications of spherical harmonics in computer graphics: representing environment maps and performing real-time spherical harmonic lighting calculations for dynamic scenes.
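The orthogonality underlying the summary above can be checked numerically for the two lowest-order spherical harmonics, written out explicitly below; the grid resolution is an arbitrary choice for this sketch:

```python
import math
import numpy as np

# Two real spherical harmonics written out explicitly:
#   Y_0^0 = sqrt(1/(4*pi))
#   Y_1^0 = sqrt(3/(4*pi)) * cos(theta)
def Y00(theta, phi):
    return math.sqrt(1.0 / (4.0 * math.pi)) * np.ones_like(theta)

def Y10(theta, phi):
    return math.sqrt(3.0 / (4.0 * math.pi)) * np.cos(theta)

# Numerically integrate over the sphere: dOmega = sin(theta) dtheta dphi.
theta, phi = np.meshgrid(np.linspace(0, np.pi, 400),
                         np.linspace(0, 2 * np.pi, 400))
dA = np.sin(theta) * (np.pi / 399) * (2 * np.pi / 399)

norm = np.sum(Y10(theta, phi) ** 2 * dA)                  # ~1: unit norm
overlap = np.sum(Y00(theta, phi) * Y10(theta, phi) * dA)  # ~0: orthogonal
print(round(float(norm), 3), round(float(overlap), 3))
```

This orthonormality is what lets a function on the sphere, such as an environment map, be projected onto spherical-harmonic coefficients the same way a Fourier series projects onto sinusoids.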
This document outlines the contents of a graduate-level quantum mechanics course. It introduces the major sources consulted in developing the course material and provides an overview of the fundamental concepts covered, including the breakdown of classical physics, polarization of photons, ket and bra spaces, operators, eigenvalues and eigenvectors, observables, measurements, expectation values, and more. The document then delves into specific topics like position and momentum, quantum dynamics, angular momentum, approximation methods, scattering theory and others.
This document provides a report for the proposed CarbonSat mission to observe greenhouse gases from space. It discusses the scientific background and justification for the mission, including the need to improve understanding and quantification of regional carbon fluxes and budgets. The mission objectives are outlined as observing regional, country-scale, and local carbon fluxes. Requirements for the mission include precision of better than 1% for CO2 and 3% for CH4 observations from space, with high spatial and temporal resolution. The proposed mission architecture involves a satellite carrying a spectrometer payload in a sun-synchronous orbit.
This document is Cosimo Fedeli's dissertation submitted to the University of Heidelberg for the degree of Doctor of Natural Sciences. The dissertation explores strong gravitational lensing by galaxy clusters through both analytical and numerical methods. It presents a novel semi-analytic method for computing strong lensing cross sections of clusters that reproduces results from full numerical simulations faster. Applying this method to a cluster population, it finds that mergers significantly increase the probability of strong lensing. It also analyzes the effects of early dark energy on strong lensing statistics and explores selection effects and scaling relations of clusters.
This document provides an overview and printing history of the book "Lessons In Electric Circuits, Volume III – Semiconductors" by Tony R. Kuphaldt. It discusses the topics that will be covered in the book, including solid-state device theory, diodes and rectifiers, bipolar junction transistors, and more. The printing history section notes that the book was originally published in 2000 and has since had four subsequent editions, with new sections and corrections added over time. The latest edition discussed is the fifth edition from July 2007.
The document is the R Language Definition and provides details about the R programming language. It discusses the different types of objects in R like vectors, lists, and data frames. It also describes how expressions are evaluated in R through functions, control structures, arithmetic operations, and indexing. Permission is granted to distribute copies of the manual provided the copyright is preserved.
This document is a master's thesis that examines localization techniques in wireless sensor networks. It provides background on wireless sensor networks and how they emerged from military applications but are now used in various civil applications. The thesis focuses on developing and analyzing new localization algorithms. It presents the results of experiments measuring received signal strength indication (RSSI) from wireless sensor nodes, which indicate significant fluctuations that could limit the reliability of localization schemes. Overall, the thesis evaluates localization methods and develops new algorithms to improve positioning accuracy in wireless sensor networks.
Reconstruction of Surfaces from Three-Dimensional Unorganized Point Sets / Ro...Robert Mencl
This document is a dissertation written by Robert Mencl to earn a Doctor of Natural Sciences degree from the University of Dortmund. The dissertation proposes a new algorithm for reconstructing surfaces from unorganized 3D point clouds. The algorithm uses the Euclidean minimum spanning tree to create an environment graph, then incrementally constructs the surface by adding triangles while ensuring the resulting triangles satisfy necessary conditions to approximate the underlying surface. The dissertation provides detailed descriptions of the algorithm's components and theoretical analysis to prove properties like the triangles constructed will have bounded edge lengths and converge to the natural neighbor embedding of the surface.
This document provides an overview of GPS theory and practice. It begins with the history and development of global surveying techniques leading to GPS. It then covers the key components of GPS including the space, control, and user segments. Following this, it discusses reference systems, satellite orbits, satellite signals, observable data, and effects impacting measurements. The document concludes by addressing surveying applications of GPS including terminology, observation techniques, equipment, and planning GPS surveys.
This document proposes a system to allow a robot to automatically find a path to a predefined goal in uncontrolled environments. The system has three main modules: 1) An artificial vision module that obtains a quantified representation of the robot's vision using local feature detection and visual words. 2) A reinforcement learning module that receives the vision input and sensor data to compute the state and reward. The state is a normalized vector and sensor data, and reward is based on distance to the goal. 3) A behavior control module. The robot is tested using Sony Aibo to seek the goal and change behavior based on experience, but does not find the optimal route.
This document provides an introduction to queueing theory, covering basic concepts from probability theory used in queueing models like random variables, generating functions, and common probability distributions. It then discusses fundamental queueing models and relations, including Kendall's notation for describing queueing systems and Little's Law relating average queue length and waiting time. Specific queueing models are analyzed like the M/M/1, M/M/c, M/Er/1, M/G/1, and G/M/1 queues.
This document provides an introduction to queueing theory. It discusses key concepts such as random variables, probability distributions, performance measures, Little's law and the PASTA property. It then examines several common queueing models including the M/M/1, M/M/c, M/Er/1, M/G/1 and G/M/1 queues. For each model it derives the equilibrium distribution and discusses measures like mean queue length and waiting time. The goal is to provide the fundamental mathematical techniques for analyzing queueing systems.
This document describes Kerry Steven Hall's dissertation research on using air-coupled ultrasonic tomography to image concrete elements. The research aims to integrate recent developments in air-coupled ultrasonic measurements with advanced tomography technology to apply them to concrete structures. Finite element models are developed and used to simulate measurement configurations and optimize data collection procedures. Non-contact and semi-contact ultrasonic sensors are developed and tested on concrete cylinder and block specimens. Tomographic reconstructions with error calculations are performed to image inclusions and defects within the concrete. Issues related to applying the techniques to full-scale concrete structures are also discussed.
This document provides course notes on information visualization. It covers topics such as the history of information visualization, techniques for visualizing different data types like hierarchies, networks, and multidimensional data. It also discusses concepts in visual perception and lists many examples of visualization systems developed over the years for different data types. The document is intended as a reference for students taking a course on information visualization.
This document provides a preface and table of contents for a book titled "I do like CFD, VOL.1" by Katate Masatsuka. It discusses governing equations and exact solutions for computational fluid dynamics. The preface notes that it is the intellectual property of the author and protected by copyright, with permission required for modification or reproduction. It provides contact information for the author and notes that the PDF version is hyperlinked for ease of navigation. A hard copy version is also available for purchase.
This document describes a research project aimed at extracting the cortical surface and separating the hemispheres in MRI datasets using 3D image segmentation techniques. For cortical surface extraction, a conditional dilation approach is used to "open" closed cavities in the segmented cortex to obtain a surface with hollow sphere topology. For hemisphere separation, marker volumes are defined and dilated to grow segmentation masks for each hemisphere, addressing challenges like marker volumes growing into each other. Experimental results demonstrate the feasibility of the proposed approaches.
This document is Roman Zeyde's 2013 master's thesis from the Technion submitted in partial fulfillment of the requirements for a Master of Science degree in Computer Science. The thesis describes research on computational electrokinetics, which involves developing a numerical scheme to solve the governing equations for electrokinetic phenomena such as electrophoresis and ion exchange. The numerical scheme is based on a finite volume method in spherical coordinates. Results are presented comparing the numerical solutions to asymptotic analytical solutions for steady-state velocity profiles.
Metatron Technology Consulting 's MySQL to PostgreSQL ...webhostingguy
This document provides a guide for migrating a database from MySQL to PostgreSQL. It discusses key differences between the two databases, including features available in one but not the other. It also provides references for porting SQL functions and tools to help with the migration process. Common problems that may occur during migration like error messages are also addressed.
This document is a textbook for high school students studying physics. It is titled "The Free High School Science Texts: A Textbook for High School Students Studying Physics". The textbook is published under the GNU Free Documentation License, which allows users to copy, distribute, and modify the document. The textbook covers various topics in physics, including units, waves, geometrical optics, vectors, forces, and Newton's laws of motion. It provides explanations, examples, and important equations for each topic.
Rapport de stage: Optimisation d'une starshade
Abstract
Before looking for extraterrestrial life, detecting exoplanets is one of the most promising quests in astronomy. However, the scientific and technological difficulties involved make the detection and observation of exoplanets, especially exo-Earths, very challenging. In this paper, I describe concretely what we are looking for and the various methods used so far in the search for exoplanets, paying particular attention to direct imaging, which enables spectral characterization of a planet.
Suppressing the light coming from the host star by orders of magnitude to reveal the faint light coming from the exoplanet is one of the most efficient ways of characterizing an exo-Earth and, perhaps, finding life. This can be done with internal or external occulters (coronagraphs or starshades), which come with numerous different properties. I discuss here how to compute the physical expression of the light intensity when a starshade is combined with a telescope. I also explain how to build the shape of an occulter through an apodization function derived from a numerical and analytical optimization, which I implemented. I then investigate all the related parameters, such as the diameter, the petal length, and the inner working angle, to highlight the various behaviors of the apodization over the range of data corresponding to the science we aim to do.
Chapter 1
Science of extra-solar planets
1.1 Exo-Planets
Are we alone in the Universe? This fundamental and philosophical question could find a simple scientific yes-or-no answer by finding life in other solar systems, other Earths around other stars. Since the first exoplanets were discovered, research in this field has grown tremendously, and more than 450 planets have been catalogued, creating a huge archive of varied planets. There are planets with masses from 2 M⊕ ([1]) to 13 MJup (the limiting mass for thermonuclear fusion of deuterium [2]); radii vary from 2 R⊕ ([3]) to 2 RJup ([4]) and temperatures from 50 K ([5]) to 1100 K ([6]). The upper boundaries are quite difficult to establish, as the limit between gas giants and brown dwarfs (aborted stars) is still not really clear and can be defined from different formation scenarios rather than from mass alone ([7] & [8]). Gravity is also important: it depends on density and radius, and it strongly shapes the form and consistency of the surface. Moreover, if we look for life, the age of the star is another important factor, since young stars are not good candidates: it took Earth at most one billion years for life to appear. The next and last step in the search for life deals with the characterization of the atmosphere. Nowadays, a few planets show evidence of carbon dioxide, methane and sodium ([9], [10] & [11]).
In order to find candidate planets sheltering life, we have to distinguish the kinds of exoplanets: gas giants, ice giants, terrestrial planets, etc. For a given terrestrial planet, habitability is defined by the presence of liquid water (the habitable zone, hereafter HZ). If life exists, we can then search for it using biomarkers. Many more hot Jupiter-like planets have been found than terrestrial-like exoplanets, and most of them have been classified into different kinds. In the next two parts we give a general description of gas giants and terrestrial planets, which is where the search for life starts.
1.1.1 Gas Giants
By analogy with the Jovian planets of our solar system, gas giants are also called hot/cold Jupiters. They mainly represent a class of gaseous planets that almost always have the mass of Jupiter or more, above a rough boundary of 10 M⊕. Their types have been classified by David Sudarsky according to characteristics such as temperature and composition ([12]). Gas giants like Jupiter and Saturn are mostly composed of hydrogen and helium, the most abundant elements in the Sun, whereas ice giants like Uranus and Neptune are primarily made of heavier components such as oxygen, carbon, nitrogen, and sulfur. They also differ in the size of their respective cores, around which the gas orbits. The scientific community first attempted to observe gas giants far away from their host stars, as is the case for Jupiter, Saturn, Uranus and Neptune. However, a significant proportion of discoveries of gas giants very close to their host stars (which helped their detection), along with further discoveries of retrograde orbits, made scientists rethink their ideas of star-system formation.
Hydrogen, helium, methane and ammonia are the main components detected in gas giants. Age and temperature are important properties, and the latter is principally governed by the distance to the host star: closer planets are hotter planets. Their temperature leads to different structures as the fluid becomes either gaseous or solid. Moreover, younger gas giants are significantly more luminous, which makes them easier to observe by direct imaging in the near infrared.
The big question of whether a detection is a gas giant, a brown dwarf, or a binary star is still relevant. There are many observations of speculative close gas giants that need confirmation before they can be officially recognized as new exoplanets.
1.1.2 Terrestrial exoplanets
As previously stated, exo-Earths are the Holy Grail of exoplanet research. Up to now, the lightest exoplanet discovered is GJ 581e, with a minimum mass of 1.9 M⊕ ([13]), and the smallest is CoRoT-Exo-7b, with a radius of 1.76 R⊕ ([3]). In our quest for life, we mainly look around Sun-like stars, even if it might look anthropocentric to search for similar biological markers. However, starting from the various forms that life takes on our own planet, we can still encompass a huge range of other possible forms of life.
The key parameters describing a possible Earth twin are orbit, mass, radius, visible/infrared spectrum, and also their variations over time. These parameters and their combinations provide us with information about numerous other properties, like effective temperature, density (→ surface gravity with the help of the radius) and albedo (→ surface reflectance).
The first step is to define a habitable zone. In our galaxy, disruptive gravitational forces or strong emissions of infrared radiation and X-rays could make it impossible for life to develop close to the galactic center, while in the outer regions of the galaxy the abundance of heavy elements decreases due to galactic chemical evolution. Next, as the life forms we know could not inhabit planets like Neptune or Venus, the stellar habitable zone has been defined to set a range of distances around the star where the conditions that encourage the development of life, mainly the presence of liquid water, are met ([14]). To find this range, we can use our own solar system as a baseline: here the habitable zone goes from 0.7 AU to 1.5 AU (Earth being, of course, at 1 AU). We scale this interval by the square root of the stellar luminosity, which leads to the following relation:

Habitable Zone (AU) ∈ [0.7 − 1.5] × √(L∗/L☉)
(see figure 1.1). We can also translate the habitable zone in AU into an angular distance through

θ(″) = a(AU)/d(pc) = √(L∗/L☉)/d(pc)

which leads to a range of 70 milli-arcseconds (mas) to 120 mas (taking the ratio L∗/L☉ between 0.5 and 1.5).
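As a quick numerical check of the habitable-zone scaling and the angular-separation relation above, here is a minimal Python sketch; the choice of a Sun-like star (L∗/L☉ = 1) at 10 pc is an illustrative assumption, not a value from the text:

```python
import math

# HZ of the Sun taken as 0.7-1.5 AU (from the text), scaled by sqrt(L*/Lsun).
# The 10 pc example distance below is an assumption for illustration.

def habitable_zone_au(l_ratio):
    """Inner and outer HZ edges in AU for a star of luminosity L*/Lsun."""
    scale = math.sqrt(l_ratio)
    return 0.7 * scale, 1.5 * scale

def angular_separation_mas(a_au, d_pc):
    """theta(arcsec) = a(AU) / d(pc), converted to milli-arcseconds."""
    return 1000.0 * a_au / d_pc

inner, outer = habitable_zone_au(1.0)        # Sun-like star: 0.7 AU, 1.5 AU
print(angular_separation_mas(inner, 10.0))   # 70.0 mas at 10 pc
print(angular_separation_mas(outer, 10.0))   # 150.0 mas at 10 pc
```

For a Sun-like star at 10 pc, the inner HZ edge thus sits at 70 mas, which is the lower end of the range quoted above.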
A useful parameter expressing this angular separation is the Inner Working Angle (IWA). It is the angle between the host star and the planet as seen from the telescope, and it takes the values above in the section devoted to the starshade.
Figure 1.1: Habitable zone (HZ) in Earth radii as a function of stellar mass. As we can see, for increasing mass the luminosity is higher and the HZ is pushed outwards. For a Sun-like star, Earth is between the [0.7 - 1.5] AU boundaries, whereas Mars is just at the limit of the HZ. Figure credit: GFDL.
In spite of the increase in discoveries thanks to transit and radial-velocity methods, characterization of the atmosphere requires a higher level of planet detection. Direct imaging is the key to getting spectroscopic data. Currently, transit spectroscopy provides many spectral characterizations ([15]) but remains inefficient for terrestrial planets close to their host stars (except around some dwarfs), in the habitable zone. The wavelengths of interest lie in a range
of 1 to 12 µm, which reveals information about water, methane and carbon monoxide signatures. These also help us understand the surface type, clouds, and atmospheric retention. Finally, O2 and O3, as biogenic tracers, will indicate evidence of life ([16]).
Figure 1.2: Histograms of the number of exoplanets discovered as a function of radius and mass (figure credits:
exoplanet.eu).
1.2 Detection methods
1.2.1 Radial Velocity
Radial velocity has so far been the most prolific method for detecting exoplanets. Also called Doppler spectroscopy, it relies on gravitational laws and the Doppler effect. Historically, it was also the first method used by astronomers ([17]) to discover a Jupiter-class planet, around 51 Pegasi.
The exoplanets in a system have elliptical orbits, and the host star moves in a small counter-orbit around the common barycenter due to the attraction of its planets. This movement changes the radial velocity of the star as seen from Earth, and its spectral lines show small blue shifts and red shifts.
At first, the errors of radial-velocity measurements were too large to detect exoplanets. For example, Jupiter induces an additional movement of the Sun of 13 m/s, while the measurement errors were of order 1000 m/s. However, in 1988, the Canadian astronomers Bruce Campbell, G. A. H. Walker, and S. Yang suggested that a planet was orbiting the star Gamma Cephei, using a method that allowed them to detect radial-velocity movements with a precision of 15 m/s. Now the High Accuracy Radial velocity Planet Searcher (HARPS) at La Silla Observatory in Chile can reach a precision of almost 0.97 m/s (in comparison, Earth causes a fluctuation of 10 cm/s in the Sun).
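The reflex-velocity figures above can be recovered from momentum conservation, M∗ v∗ ≈ Mp vp. A minimal Python sketch, where the masses and orbital speeds are standard reference values assumed for illustration (not taken from the text):

```python
# Stellar reflex velocity from momentum conservation: M_star * v_star = M_p * v_p.
# All constants below are assumed standard values, not from the text.

M_SUN = 1.989e30      # kg
M_JUPITER = 1.898e27  # kg
M_EARTH = 5.972e24    # kg

def reflex_velocity(m_planet_kg, v_orbit_m_s, m_star_kg=M_SUN):
    """Radial-velocity amplitude (m/s) the planet induces on its star."""
    return m_planet_kg * v_orbit_m_s / m_star_kg

# Jupiter orbits at ~13.07 km/s, Earth at ~29.78 km/s:
print(round(reflex_velocity(M_JUPITER, 13070), 1))  # ~12.5 m/s (the text's 13 m/s)
print(round(reflex_velocity(M_EARTH, 29780), 3))    # ~0.089 m/s (the text's 10 cm/s)
```

This order-of-magnitude estimate neglects eccentricity and inclination but reproduces the quoted amplitudes.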
This method provides only a lower limit on the planet's mass, since radial velocity measures Mp · sin(i), where i is the inclination of the orbital plane. Further astrometric observations tracking the movement of the star may turn an exoplanet discovery into a brown-dwarf detection. As of now, 425 planets have been discovered this way, with numerous ground missions already working: AFOE, the Anglo-Australian Planet Search Program, the Automated Planet Finder, the California & Carnegie Planet Search, Coralie at the Leonard Euler Telescope, Elodie, Sophie, the Exoplanet Tracker, HARPS, the Hobby-Eberly Telescope, the Magellan 6.5 m Telescope, McDonald Observatory, the N2K Consortium, the TNG High Resolution Spectrograph, and UVES. In the coming years, Absolute Astronomical Accelerometry, Carmenes, HARPS-N, OWL and PRVS will complete the huge panel of missions using radial velocity ([18]).
Figure 1.3: Principle of the radial velocity method: when the star moves away, the Doppler shift is towards the red, and when the star gets closer, the shift moves to the blue. Figure credit: NASA/JPL image.
1.2.2 Transit
First proposed by Otto Struve in 1951, the idea is to study the luminosity of a star with a reasonably sized telescope. Periodic variations of the luminosity could come from a planet passing between the star and the Earth. In that case, the line of sight must lie close to the planet's orbital plane; otherwise, no planet can be detected. Calculations with geometric probabilities suggest that almost 5% of stars could have a detectable exo-planet.
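The geometric estimate can be sketched with two standard relations: for a circular orbit of radius a, the probability that the line of sight is aligned well enough for a transit is roughly R∗/a, and the transit depth is (Rp/R∗)². Illustrative numbers:

```python
R_sun = 6.957e8    # m
R_jup = 7.149e7    # m
R_earth = 6.371e6  # m
AU = 1.496e11      # m

def transit_depth(r_planet, r_star):
    """Fractional drop in stellar flux while the planet crosses the disk."""
    return (r_planet / r_star) ** 2

def transit_probability(r_star, a):
    """Geometric probability that a circular orbit of radius a transits."""
    return r_star / a

print(transit_depth(R_jup, R_sun))            # ~1% for a Jupiter-sized planet
print(transit_depth(R_earth, R_sun))          # ~0.008% for an Earth
print(transit_probability(R_sun, 0.05 * AU))  # ~9% for a hot Jupiter at 0.05 AU
```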
Here, detection occurs through a drop in luminosity, and characterization is possible through spectral analysis of the light received; this is the primary transit. Deriving the composition and scale height from the absorption of starlight passing through the planet's atmosphere is called transit spectroscopy. Next, when the planet is almost behind the star, the secondary eclipse happens and provides a direct detection of the planet's spectrum. We can therefore get information about composition, reflectivity and temperature. The radius, mass and orbit follow from the timing and depth of the dip. When the planet goes behind the star, we can get the light from the planet by subtracting the star's light; this semi-direct method also allows planet characterization. For example, the Spitzer Space Telescope (NASA) managed to produce a 7.5-14.7 µm spectrum for the transiting extrasolar giant planet HD 189733b ([19]).
Figure 1.4: Principle of the transit method. Figure credit: NASA/JPL image
Currently 81 planets have been discovered with the ongoing ground missions: Alsubai's Project, ASP, BEST, E.P.R.G., HATNetwork, LCOGT and UStAPS (also doing lensing), MEarth, MONET, OGLE III, PASS, PIRATE, PISCES, STARE, SuperWASP, STEPSS, UNSWEPS Project, Tennessee Automatic Photoelectric Telescope, TrES, TRESCA, Vulcan South, WHAT, XO Project. In space, CoRoT, EPOCh, the Fabra-ROA Camera, Gaia and Kepler have already found many planets; Plato and GEST are planned space telescopes, and GITPO and STELLA are planned ground-based missions ([18]).
1.2.3 Gravitational Lensing
In order to use gravitational lensing for the detection of exoplanets, we first need a background star. When another star passes between Earth and the background star, its gravitational field acts like a lens and the light of the background star is modified. If the foreground star hosts a planet, the planet's gravitational field will contribute to the lens and can be detected from the Earth (see figure 1.5). However, this event is very rare and only occurs once for each planet, as the alignment cannot happen again, making confirmation impossible.
This method provides some advantages: the mass can be measured, Earth-like masses can be reached, and the angular separation is also known. We can even detect planets in other galaxies. But there are drawbacks: the observation time required is huge, as is the number of stars which need to be observed for a small number of detections. Moreover, the mass and orbit size depend on the properties of the host star, which also need to be known.
With gravitational lensing, 10 planets have been observed, especially with the use of OGLE (the Optical Gravitational Lensing Experiment) and the other projects: University of St. Andrews Planet Search (UStAPS), Las Cumbres Observatory Global Telescope Network (LCOGT), MACHO, Microlensing Planet Search Project (MPS). In the future, GEST will join them ([18]).
Figure 1.5: Principle of gravitational lensing: the lensing effect of the star is disturbed by the presence of an exoplanet. Figure credit: Abe et al.
1.2.4 Pulsar Timing
Pulsars are neutron stars with strong magnetic fields and are the fastest spinning objects discovered so far. Two beams of radiation are ejected at the poles of the star, and on Earth we receive a brief pulse each time a beam sweeps across our line of sight. This required alignment between the Earth and a pulsar makes the proportion of pulsars we cannot observe from Earth rather large. However, a planet around a pulsar makes them both orbit around their center of mass. Similar to the radial velocity method, measuring the periodic changes in the time between pulses gives an estimate of the semi-major axis of the planet's orbit, and a lower limit on the planet's mass ([20]).
In practice, planets around pulsars are not very interesting in the search for life in our galaxy. Indeed, pulsars are created by stellar explosions such as supernovae, which may prevent life from developing in such systems. At this moment, only 8 planets in 5 different planetary systems have been discovered with this method, by the ongoing mission Pulsar Planet Detection.
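The size of the timing signal is the light-travel time across the pulsar's reflex orbit, τ ≈ (Mp/Mpsr)(a/c) sin i. A rough sketch with illustrative values (an Earth-mass planet at 0.36 AU around a 1.4 solar-mass pulsar, loosely inspired by the PSR B1257+12 planets):

```python
AU = 1.496e11       # m
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg
M_earth = 5.972e24  # kg

def timing_amplitude(m_planet, m_pulsar, a, sin_i=1.0):
    """Peak delay (s) in pulse arrival times due to the pulsar's reflex orbit."""
    a_pulsar = a * m_planet / m_pulsar   # pulsar's orbit about the barycenter
    return a_pulsar * sin_i / c          # light-travel time across that orbit

tau = timing_amplitude(M_earth, 1.4 * M_sun, 0.36 * AU)
print(tau)  # a few 1e-4 s: sub-millisecond residuals, measurable for pulsars
```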
1.2.5 Astrometry
Figure 1.6: Principle of the astrometry method. Figure credit: http://www.astro.wisc.edu/
Basically, astrometry is used for determining positions of stars. Using the already well-known coordinates of nearby stars, it determines the location of an unknown star in the same image by comparison. In the detection of extra-solar planets, we measure the displacement of a star around the supposed center of mass of the system composed of the star and its planets, providing the position and mass of the planet (see figure 1.6). Many white dwarfs have been discovered this way, as they induce large variations. However, the accuracy needed for exo-planets is very demanding, and ground-based astrometry is not powerful enough, due for example to the distorting effects of the Earth's atmosphere. As of now, only one planet has been discovered with astrometry (with HST Astrometry, [21]).
There will be space missions using this method in the next few years, such as NASA's SIM Lite (Space Interferometry Mission), projected to launch in 2015, the European Space Agency's Gaia, due to launch in 2012, and the Origins Billion Star Survey (OBSS), under study; and on the ground: STEPS, the Radio Interferometric Planet Search (RIPL), PRIMA-DDL (VLTI, under study), the Keck Interferometer, ASPENS. All of them will detect terrestrial planets orbiting close to their stars with astrometric techniques ([18]).
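The astrometric signal can be sketched with the standard small-angle relation: the star orbits the barycenter at a∗ = a · Mp/M∗, so its angular displacement seen from distance d is α ≈ (Mp/M∗)(a/d), which comes out directly in arcseconds when a is in AU and d in pc. Illustrative numbers for a Sun-Jupiter analog:

```python
M_sun = 1.989e30  # kg
M_jup = 1.898e27  # kg

def astrometric_signature(m_planet, m_star, a_au, d_pc):
    """Angular wobble of the star about the barycenter, in arcseconds."""
    return (m_planet / m_star) * a_au / d_pc

alpha = astrometric_signature(M_jup, M_sun, 5.2, 10.0)
print(alpha * 1e3)  # ~0.5 milliarcseconds for a Sun-Jupiter analog at 10 pc
```

The sub-milliarcsecond scale of this wobble is why ground-based astrometry, limited by the atmosphere, is not powerful enough.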
1.2.6 Direct Imaging
About direct imaging
Providing a direct picture of exo-planets is the most difficult challenge to take up. Their extremely faint luminosity requires very high contrast imaging. For a young gas giant (1 MJup, 70 Myr) at a distance of 10 pc, a contrast of 10−7 with its host star is required ([22]). In comparison, for exo-Earths, still at 10 pc, a contrast of 10−10 is necessary. These values correspond to a difference of magnitude from 17.5 to 25 mag (as m = −2.5 log10(F) + C, with F the flux and C a constant). There are some directly imaged exoplanets (12 as of now, [18]).
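The quoted magnitude differences follow directly from the formula above (the constant C cancels when taking a difference of magnitudes):

```python
import math

def contrast_to_delta_mag(contrast):
    """Magnitude difference for a given planet/star flux ratio, from m = -2.5 log10(F) + C."""
    return -2.5 * math.log10(contrast)

print(contrast_to_delta_mag(1e-7))   # 17.5 mag: young gas giant at 10 pc
print(contrast_to_delta_mag(1e-10))  # 25.0 mag: exo-Earth at 10 pc
```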
In the detection of Earth-like planets, one issue for achieving high contrast is dust. Primordial disks of dust and asteroids dating from the formation of stellar systems may still remain, like our Jupiter and Neptune Trojans, plus dust coming from the numerous collisions between asteroids and comets. All of that constitutes the exozodiacal dust. It is a source of background noise contaminating the spectrum. For example, the zodiacal dust in our solar system is the most luminous component after the Sun ([23]); it is called the local zodi. An Earth's signature might appear as a clump inside such a disk, so that even if the exozodiacal disk is not helpful for a detection, its structure might reveal the presence of an unseen planet or could be a sign of the system's orbital dynamics.
The exozodiacal light is measured by the ratio between the infrared and stellar luminosities: (LIR/L∗) ≈ 10−6 − 10−5 in most cases. In comparison, the local zodi, the zodiacal disk in our asteroid belt, has a ratio (LIR/L∗) approximately equal to 10−7, which is the reference value, 1 zodi. Even if it represents most of the noise, the exozodiacal light provides information about the inclination of the planetary system, by supposing a circular distribution around the star. Therefore, dust may provide a clue about the planets' orbits, which is very important for science. The field of exozodiacal disks is of great importance for understanding the behavior of planetary systems as well as planetary formation and system evolution.
Another issue lies in interference between wavefronts. Optical aberrations cause speckles on the image that can be confused with a planet signal. Speckles are also created by small thermal variations. These aberrations are time dependent, which makes subtraction by calibration difficult. The use of deformable mirrors and adaptive optics reduces atmospheric distortions for ground-based telescopes and the speckle intensity. However, 100% efficiency cannot be achieved and there will always be residuals.
From visible light to infrared, there are several ways to detect exoplanets. In the infrared, the main method is the use of an interferometer: a few telescopes connected together to synthesize a larger telescope with high resolving power. A nulling interferometer can also be built to reduce the intensity of the host star and reveal the faint light of the planet, e.g. the canceled Darwin mission and the ongoing ground-based Large Binocular Telescope. Many projects are already in use or planned for the future ([18]): the Keck Interferometer ([24]) and the Very Large Telescope Interferometer VLTI (ESO, Paranal), both ground-based, and others are planned: the Antarctic Plateau Interferometer (API) and the CARLINA Hypertelescope Project on Earth, and the Space Infrared Interferometric Telescope (SPIRIT, NASA) in space.
The coronagraph, invented by the French astronomer Bernard Lyot in the 1930s, is a powerful instrument first used to observe the corona around the Sun. It simulates an artificial eclipse by blocking the light with an occulting spot or mask while the surrounding light stays undisturbed. In the search for exoplanets, the coronagraph helps to recover the faint light coming from a planet around its star. There are several types of coronagraphs, from the band-limited coronagraph (present on JWST's Near-Infrared Cam) and the phase-mask coronagraph to the apodized pupil Lyot coronagraph and the optical vortex coronagraph. More information about coronagraphs is listed by Guyon ([25]) and classified by Quirrenbach ([26]). The space missions are: the Pupil mapping Exoplanet Coronagraphic Observer (PECO, under study) and the Super-Earth Explorer (SEE-COAST, under project [27]); the ground-based missions: Spectro-Polarimetric Imaging (SPHERE, under construction [28]) and the Gemini Planet Imager (GPI, under construction [22]).
The last method, the subject of our interest, is the external occulter. The idea of building an occulter to block the light coming from a star was put forward by Lyman Spitzer in 1962 ([29], see figure 1.7). As one might think, a circular-shaped screen could block the light of a star to help observe the faint light coming from a nearby planet. However, sharp edges cause Fresnel diffraction effects that brighten the shadow created by the occulter and let starlight into the telescope. An apodization has to remove the sharp edges of the occulter. Through advanced calculations, many shapes were designed in the early 1980s ([30]), but finally the petal shape was adopted (a 20-point star-shaped mask was another candidate, [31]). To achieve the best performance, the external occulter has to remain accurately at its position during the observation time. In our case, the telescope will be
in an orbit around the Sun-Earth L2 point and the occulter has to follow the orbit. To avoid noise coming from the Sun's reflection on the occulter, the observations have to be restricted to those with an angular position to the Sun from 45 to 85 degrees (or slightly beyond if the occulter can be tilted, [32]). Currently, there is no such mission in operation, but a few missions are being studied, e.g. the proposed THEIA (Telescope for Habitable Exoplanets and Interstellar/Intergalactic Astronomy) and NWO (New Worlds Observer). For now, the future JWST (James Webb Space Telescope) is the best candidate for an occulter mission on a short time scale.
Figure 1.7: Principle of a starshade: the light coming from the star is stopped whereas the weak light from the planet
is not stopped. Figure credit: Northrop Grumman Corporation
Achieving direct imaging is the most important step in the search for life. However, since indirect methods like astrometry or radial velocity provide direct measurements of the masses, orbital parameters and coordinates of the planets, combining these methods would complete the characterization and supply all the information we want about an exoplanet ([33]).
Comparison between coronagraph and external occulter
Both occulter and coronagraph act in the same way: suppressing the light of the star to reveal the faint light of the orbiting planet. However, if they do almost the same thing, one can wonder why anyone would send a huge spacecraft thousands of kilometers away from the telescope. An external occulter presents several advantages over internal coronagraphs.
But first, we will discuss some of the drawbacks of an occulter. As the word 'external' says, an occulter requires another spacecraft to maneuver it, and thus the lifetime of the occulter depends on fuel consumption and on the mission design. Since micro-engineering for high-performance coronagraphs is expensive, the cost of an occulter in orbit is not that much higher, but it increases with its size. The travel time between targets is also a burden for the occulter: almost two weeks are necessary to move to the next target (a Design Reference Mission is built to optimize all these time constraints as much as possible by reducing the impacted parameters). Finally, deformations of the occulter arising from manufacturing, deployment or micro-meteorite hits in flight may lower the contrast and create speckles.
The small size of a coronagraph may be a source of material imperfections. One parameter controlled only by the external occulter, since it can move backward and forward, is the inner working angle: it is not fixed, and there is no limit on the outer working angle either (a coronagraph's outer working angle is set by the deformable mirror correcting its speckles, which an occulter does not need). Still on the inner working angle: for a coronagraph it depends on the wavelength (IWA ∝ λ/D), so the higher the wavelength, the larger the inner working angle. Because exo-Earths will be found at small separations, coronagraphs are usually designed to provide a spectrum between 250 and 1000 nm ([34]). Starshades are typically not limited in the size of the bandpass, whereas for internal coronagraphs, starlight suppression over a broad band is more challenging and typically limited to 10-20%. The contrast obtained is also better for external occulters, and 10−10 is achieved in most cases (see the results section); this is helped by the fact that the primary mirror and supporting optics of the telescope have fewer constraints with an occulter. Finally, as a Lyot-type coronagraph is much more complex, simple reasoning shows that with fewer optical elements, less signal is lost.
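The IWA difference can be illustrated numerically. The 2 λ/D coefficient for the coronagraph and the 50 m / 50,000 km starshade geometry below are assumptions for illustration, not design values from the text:

```python
import math

RAD_TO_MAS = 180 / math.pi * 3600 * 1e3  # radians to milliarcseconds

def coronagraph_iwa(wavelength, d_tel, k=2.0):
    """Coronagraph inner working angle, k * lambda / D (k ~ 2 is illustrative)."""
    return k * wavelength / d_tel * RAD_TO_MAS

def starshade_iwa(d_shade, z):
    """Geometric starshade IWA, D/(2z): set by position, not by wavelength."""
    return d_shade / (2 * z) * RAD_TO_MAS

d_tel = 6.5  # m, a JWST-sized telescope
for lam in (0.5e-6, 1.0e-6, 2.0e-6):
    print(lam, coronagraph_iwa(lam, d_tel))  # grows linearly with wavelength
print(starshade_iwa(50.0, 5.0e7))            # ~103 mas at any wavelength
```

Moving the starshade closer to or farther from the telescope changes D/(2z), which is the tunability mentioned above.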
The following table sums up the characteristics we compare between a coronagraph and the external equivalent:
Characteristics Coronagraph External Occulter
Cost + -
Signal - +
Lifetime + -
Deployment + -
Scattered light - +
Position control - +
Higher suppression - +
Inner Working Angle - +
Usable for any telescope instruments - +
Performance in spectral characterization - +
Instead of wondering whether an occulter is better than a coronagraph, the idea of using both kinds of light suppression together has been considered. Combining an occulter with a coronagraph makes the design considerably more complicated: the occulter has to be designed for the coronagraph, and moving the occulter requires the coronagraph to be changed. For each new configuration, tests need to be done to verify the ability of the system. These are called hybrid occulters. They could reduce the size of the occulter and therefore the distance, saving time and fuel. Occulters with an apodized pupil Lyot coronagraph and an achromatic interfero-coronagraph have been described by Cady ([35]), showing their advantages and drawbacks.
JWST
JWST is a large space telescope, optimized for infrared observations, scheduled for launch in 2014. Its goals are to find the first galaxies, study planetary system formation, and find evidence of reionization ([36]) during a five-year mission, though it will carry enough fuel to run for over 10 years. It is the result of an international collaboration between NASA, the European Space Agency (ESA), and the Canadian Space Agency (CSA). The James Webb Space Telescope was named after a former NASA Administrator.
The telescope will stand at the Sun-Earth Lagrangian 2 point, 1.5 million km from the Earth, after a trip of 30 days. It is built around a large 6.5-meter mirror and offers 4 scientific instruments covering infrared wavelengths: NIRCam (Near Infrared Camera), NIRSpec (Near Infrared Spectrograph), TFI (Tunable Filter Imager) and MIRI (Mid Infrared Instrument). Here we explain most of their abilities in the case of a starshade, at low resolution.
• NIRCam: ensuring observations between 0.6 and 5 microns (two arms, a short one from 0.6 µm to 2.5 µm and a longer one from 2.5 µm to 5 µm), with high sensitivity between 2.2 and 5 microns, the NIRCam is the first camera of the JWST. It will be used for the search for planetary companions, mainly Jupiter-sized planets, for the search for protoplanetary disks, for stellar populations in nearby galaxies, and also for the characterization of galaxies at very high redshift and the mapping of dark matter. NIRCam is composed of two modules for broad- and intermediate-band imaging, where traditional focal plane coronagraphic mask plates will be used, and two different wavelength channel outputs. Lastly, NIRCam will be used as a wavefront control module to check the alignment and shape of the 18 hexagonal mirror segments ([37]).
• NIRSpec: also covering 0.6 µm to 5 µm, with different quantum efficiency above and below 1.0 µm (respectively ≥ 80% and ≥ 70%), the Near Infrared Spectrograph uses a 0.2 arcsec slit with resolutions of 100, 1000 and 2700. Low resolution is advised with regard to sensitivity and exposure time, whereas higher resolution could be used for giant planets.
• TFI: the Tunable Filter Imager is, as its name says, an imager. The fact that the light is not dispersed considerably reduces the contrast sensitivity, making this instrument inappropriate for use with a starshade.
• MIRI: the Mid Infrared Instrument is composed of two spectrographs, one for low resolution and one for medium resolution. Giant planets with an external occulter could be the goal of this instrument; however, efficiency similar to NIRCam and NIRSpec is only reached for a habitable zone at 400 mas, and the low resolution spectrograph cannot be combined with filters. These particularities jeopardize a sufficient efficiency in the search for exo-Earths with a starshade.
A starshade for JWST is part of the New Worlds Observer mission ([38] & [39]), alongside THEIA ([40]), CESO (Celestial Exoplanet Survey Occulter, [41]) and O3 (Occulting Ozone Observatory, [42]), which all use apodized or binary occulters optimized for a wide variety of wavelengths. JWST by itself will require the help of an external occulter to be capable of directly imaging planets in the habitable zone; the Hubble successor is one of the perfect candidates for this mission. It would be the fastest and most affordable path to the discovery of life, as the resulting cost of this kind of mission has been estimated at about 1 billion dollars ([43]).
Figure 1.8: James Webb Space Telescope, with the on-axis primary mirror of 6.5 meters diameter composed of 18
hexagonal mirrors. Figure credit: NASA
The starshade will be launched after the JWST (in the best case, 6 months after) and will be in orbit around the Sun-Earth L2 point. Due to the large distance between the occulter and the telescope, it has to cover many thousands of km for each star. In the 5-year planned mission, almost one week of travel is required between each star observation, 24 hours for imaging and 2 weeks for the spectroscopy science. A Design Reference Mission (DRM, [34], [44] & [45]) is built to optimize the travel time and the number of discoveries based on the frequency of Earth-like planets. Thus the DRM would maximize the number of planets discovered, their spectral characterizations and the production of orbital fits, while at the same time maximizing the percentage of the target list observed. In the end, for a planet occurrence rate of 0.3, there would be 5 habitable Earth-mass planets discovered for a small fraction of JWST observing time (say 7%), and the probability of zero discoveries would be 0.004 ([45]).
Figure 1.9: Route followed by the starshade to join the Sun-Earth Lagrangian 2 point, for a case of a 50 meter
occulter at 50000 km, launched 3 years after the space telescope. Figure credit: W.Cash et al.
Chapter 2
Starshade: design, optimization and
properties
2.1 Design and optimization
2.1.1 Free-Space propagation from starshade to telescope
As the name tells us, an external occulter, also called a starshade, obscures the light coming from a star. We place a large occulter in the path of the light between the star and our telescope; the latter has to remain in the shadow produced by the occulter. We manipulate the design and position of the occulter to have control over the size of the shadow, and also over the contrast between the stellar and planet luminosities.
Our purpose is to get an expression for the light intensity after propagation over a finite distance. One of the unwanted effects of a circular disk mask is the Poisson spot, a bright spot at the center of the shadow whose irradiance is nearly the same as without any occulter ([46]). The resulting diffraction pattern depends on the size, shape and distance of the starshade relative to the telescope.
On paper, we work with the electric field in the telescope's pupil plane to express the light observed, since the intensity is given by the squared modulus of the electric field. We start with Babinet's theorem ([47]): the light propagating from an unobstructed star (Eu) is the same as the light coming from an on-axis hole (Eh) plus the light coming from the complement of that hole (Eo):
Eu = Eh + Eo.
Next we can write the plane wave equation in terms of the polar coordinates (ρ, φ) of the telescope pupil plane (ρ = 0 being the center of the plane and ρmax the top):
Eu(ρ, φ) = E0 e^(2πiz/λ),
with z being the distance between the occulter and the telescope. As previously said, a circular-shaped screen cannot stop the light of a star without Fresnel diffraction effects. Spitzer ([29]) therefore devised a way to suppress the Poisson spot with the help of an apodization function A(r, θ), meaning that we consider the occulter as being partially attenuated, with r and θ the polar coordinates of the occulter. As we assume circular symmetry, we have A(r, θ) = A(r) (and similarly for φ). This function is equal to 1 for total obscurity and 0 when all the light propagates, so that it describes the whole occulter. All these tools combined, we have ([46]):
Eo = Eu − Eh,
Eu(ρ) = E0 e^(2πiz/λ),
Eh(ρ) = E0 (2π/(iλz)) e^(2πiz/λ) e^(πiρ²/λz) ∫₀ᴿ J₀(2πrρ/λz) A(r) e^((πi/λz)r²) r dr,
with J₀ the Bessel function of the first kind, order 0, and Eh(ρ) being the Fresnel integral, which gives us the total field for the occulter-telescope system:
Eo(ρ) = E0 e^(2πiz/λ) [ 1 − (2π/(iλz)) ∫₀ᴿ A(r) J₀(2πrρ/λz) e^((πi/λz)(r²+ρ²)) r dr ].
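As a sanity check of this expression, the Poisson spot appears directly for an unapodized disk (A(r) = 1) on axis (ρ = 0, where J₀ = 1): evaluating the Fresnel integral numerically gives |Eo(0)| = |E0|, i.e. the center of the shadow is as bright as the free field. A minimal sketch with illustrative disk radius and distance:

```python
import numpy as np

lam = 0.5e-6   # wavelength, m
z = 5.0e7      # occulter-telescope distance, m (50,000 km)
R = 25.0       # disk radius, m; unapodized, so A(r) = 1

# On axis (rho = 0) the Bessel factor is J0(0) = 1, so
#   E_o(0)/E0 = 1 - (2*pi/(i*lam*z)) * integral_0^R e^{i*pi*r^2/(lam*z)} r dr
r = np.linspace(0.0, R, 200_001)
integrand = np.exp(1j * np.pi * r**2 / (lam * z)) * r
dr = r[1] - r[0]
integral = np.sum((integrand[1:] + integrand[:-1]) / 2) * dr  # trapezoid rule

E_on_axis = 1.0 - (2 * np.pi / (1j * lam * z)) * integral
print(abs(E_on_axis))  # ~1.0: the Poisson spot is as bright as the free field
```

This is precisely why the apodization A(r) is needed: a hard-edged disk gives no dark shadow on axis at all.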
2.1.2 Binary apodization, shape of the occulter
A continuously graded occulter cannot be built. Spitzer ([29]) therefore proposed building a binary occulter with a workable shape. Thus we approximate the continuous apodization by a binary occulter composed of an even number N of identical petals arranged around a circular central part ([31], then [38]). We use a similar new expression of the image-plane electric field
E(ρ, φ) = ∫∫_S e^(−2πiρ cos(θ−φ)) r dr dθ,
with S being the mathematical description of the petal-shaped mask. Using, among others, the Jacobi-Anger expansion
([48]), we compute the following result for a propagated field with an occulter ([49] & [50]):
Eo(ρ, φ) = Eo(ρ) − E0 e^(2πiz/λ) Σ_{j=1}^∞ (2π(−1)^j/(iλz)) ∫₀ᴿ e^(πi(r²+ρ²)/λz) J_jN(2πrρ/λz) (sin(jπA(r))/(jπ)) r dr × 2 cos(jN(φ − π/2)),
with Eo(ρ) being the previous electric field for a graded apodization and J_jN the (jN)th order Bessel function, whose contribution decreases exponentially for high N and j > 0, so that Eo(ρ) becomes predominant. Indeed, the terms in the sum over j converge to 0 quickly enough that our optimization codes only require Eo(ρ) ([31]).
Next, the translation between the apodization function and the final shape of the occulter is easy: the angular width ∆θ(r) is expressed as
∆θ(r) = (2π/N) A(r),
where N is the number of petals. Hence the width of the petals, R∆θ, directly maps the apodization function, with R being the radius:
∆ = R∆θ = (2πR/N) A(r).
As N → ∞ we recover a graded occulter. However, to keep the occulter feasible, calculations of the average suppression over the shadow profile at the telescope for different wavelengths have shown that 16 petals are a good compromise ([51] & [38]).
2.1.3 Optimization of the occulter apodization function
Among the numerous variables, we can fit the best occulter to the physical constraints. As previously said, we need a contrast (or a suppression) of about 10−10 in the focal plane. A starshade can be optimized for diameter, suppression level, wavelength range, shadow size, petal length and inner working angle (IWA). Other linked variables, like the telescope aperture size itself, can also be changed when studying the addition of external occulters to general space telescopes.
Parameters
To shape an occulter, several parameters come into play. Here we describe the main characteristics of the occulter, and we will see in the results section how they behave together.
The diameter is one of the most important features. As the suppression increases with the size, bigger occulters would be more favorable. However, while the contrast increases with the size for a constant geometric IWA, so do the distance, the cost and the time needed to move it. A good frame size for an occulter is 60-100 meters in diameter for a large telescope of the size of JWST, since the size of the occulter depends on the size of the shadow. As of now, an occulter larger than 80 meters would encounter technological issues.
Then, with the addition of a binary apodization function, we establish a difference between a circular central part and the petals. Their lengths modify the suppression achieved, and they are also subject to deployment concerns, i.e. longer petals would be harder to bloom and control. Decreasing the size of the petals reduces the suppression for a reasonable number of petals, as we come closer to the circular occulter without apodization.
The size of the shadow is important too. If we are to suppress the light coming from the star, the shadow provided by the occulter needs to be large enough to cover the telescope aperture over the wavelengths studied. The shadow needs at least a 1 meter margin to make sure we achieve the required contrast ([52] & [43], and see section 2.2.6). Even if we first consider the contrast as a goal, it can also be considered a parameter we obtain after optimization. How the contrast depends on the shape will be explained in depth in section 2.2.3.
The Fresnel number is defined by
F = D²/(λz),
with z the distance of the occulter. For a given Fresnel number, the form of the shadow, or the contrast, created by
the occulter will remain the same. Thus, for a defined wavelength, we are able to establish a proportionality between
the distance z and the diameter D. Moreover, since the inner working angle equals IWA = D/(2z), we have:
F = 2 · IWA · D / λ.
Thereby, the science we are looking for sets the range of parameters describing the starshade, and for a given Fresnel number we can see how IWA, diameter and wavelength are correlated. To keep the same contrast, the Fresnel number needs to be identical. For a constant diameter, λ · z remains constant; therefore the starshade can be used for observations at longer wavelengths by moving it closer to the telescope (this in turn increases the IWA).
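The scaling argument can be sketched numerically; the 50 m occulter at 50,000 km matches the case in figure 1.9, while the wavelength is illustrative:

```python
def fresnel_number(d, lam, z):
    """F = D^2 / (lambda * z)."""
    return d**2 / (lam * z)

def iwa(d, z):
    """Geometric inner working angle D / (2z), in radians."""
    return d / (2 * z)

# Illustrative 50 m occulter at 50,000 km, observed at 0.5 um
d, z, lam = 50.0, 5.0e7, 0.5e-6
F = fresnel_number(d, lam, z)
print(F, 2 * iwa(d, z) * d / lam)  # both ~100: F = 2 * IWA * D / lambda

# Observing at twice the wavelength: keep F (hence the contrast) fixed by
# halving z, which doubles the IWA, the trade-off described above
print(fresnel_number(d, 2 * lam, z / 2), iwa(d, z / 2) / iwa(d, z))
```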
The range of wavelengths we look at matters in our case of a starshade for JWST. Spectral characterization of life signatures like O2, O3, H2O, CO2 or CH4 is optimally performed between 0.7 µm and 2 µm. In the case of the JWST, the NIRSpec and NIRCam detectors work best between 0.6 and 5 µm, with quantum efficiency depending on the wavelength. However, outside the optimal band pass, especially toward the red above 2 µm, the starshade starts to leak starlight, considerably reducing the contrast. This light takes the form of speckles in the focal plane, and thus filters with good out-of-band rejection are required to sufficiently suppress this red leak ([43]).
In summary the basic parameters for the starshade are
• occulter diameter
• petal length
• inner working angle
• distance (related to occulter & IWA)
• wavelength range
• shadow size.
We next use them as the ’x’ values of the apodization function and write the physical equation of the electric field
so that we can easily manipulate these parameters.
Analytical optimization
A useful tool to shape the petals through the apodization function is the hypergaussian function, which has proven to be a convenient mathematical expression for the apodization. Developed by Cash ([38]), the function is based on the following expression:
A(r) = 1, ∀ r ≤ a,
A(r) = exp(−((r − a)/b)^n), ∀ r ≥ a,
where a is the radius of the central part of the occulter and b is the complementary distance which gives us the radius of the occulter, i.e. the petal length. As the hypergaussian is a mathematical function, the exponential is endless, and the definition of the inner working angle has been provided by Cash at A_IWA = 1/e, corresponding to a transmission of almost 63%. a and b are given intrinsically by the value of the occulter central part (OCP) and the occulter diameter (D): a = OCP · D and b = (1 − OCP) · D. n is a parameter for the petal shape, set to the value 6 ([52]). The hypergaussian function presents the advantage of being an analytical expression, reducing the computation time, contrary to the following optimization, which requires minimization over many calculation paths. Moreover, the hypergaussian is monochromatic and therefore independent of the number of wavelengths; for each step requiring a wavelength, the average value λ = 1.7 µm is taken, corresponding to the maximum of the broadband of interest. For smaller wavelengths, the contrast will be better (see figure 2.5).
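A minimal sketch of the hypergaussian profile, including the petal-width mapping ∆ = (2πr/N)A(r) from section 2.1.2. The OCP value used here is an assumption for illustration; the text does not fix it:

```python
import math

def hypergaussian(r, a, b, n=6):
    """Cash's hypergaussian apodization: fully opaque inside r <= a, then a smooth fall-off."""
    if r <= a:
        return 1.0
    return math.exp(-((r - a) / b) ** n)

# Illustrative values: 50 m occulter with OCP = 0.5 (assumed, not from the text),
# following the text's a = OCP * D and b = (1 - OCP) * D
D = 50.0
OCP = 0.5
a, b = OCP * D, (1 - OCP) * D

print(hypergaussian(a, a, b))      # 1.0 at the edge of the central disk
print(hypergaussian(a + b, a, b))  # ~0.368 = 1/e: Cash's IWA definition
# Local petal width at radius r for N = 16 petals: (2*pi*r/N) * A(r)
N = 16
r = a + 0.5 * b
print(2 * math.pi * r / N * hypergaussian(r, a, b))
```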
Another function, given by Copi & Starkman ([53]), is based on the transmission function τ(r), which equals 1 − A(r). This transmission function expresses the expansion of the diffraction pattern through Chebyshev polynomials. They wrote the transmission function as
τ_N(y) = Σ_{n=0}^{N} c_n y^n,
where N is the order of the occulter, and with
y = ((r/R)² − ε)/(1 − ε),
where ε is the fractional radius of the center of the occulter, which gives us, for a fourth-order occulter:
A4(y) = 1 − τ4(y) = 1 − (35y^4 − 84y^5 + 70y^6 − 20y^7).
They took ε = 0.15 in their work. It turns out that the hypergaussian function is similar to the first order of the Copi & Starkman development, which makes it a less general but analytical way of optimization.
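The fourth-order polynomial can be checked at its endpoints: τ4(1) = 35 − 84 + 70 − 20 = 1, so A4 falls smoothly from full obscurity at the center of the occulter to full transmission at the edge:

```python
def a4(y):
    """Fourth-order Copi & Starkman apodization, A4(y) = 1 - tau4(y)."""
    return 1.0 - (35 * y**4 - 84 * y**5 + 70 * y**6 - 20 * y**7)

print(a4(0.0))  # 1.0: fully opaque at the center of the occulter
print(a4(0.5))  # 0.5: halfway point of the profile
print(a4(1.0))  # 0.0: fully transparent at the outer edge
```

One can also check that dτ4/dy = 140 y³ (1 − y)³ ≥ 0 on [0, 1], so the profile decreases monotonically, as an apodization should.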
I wrote a code to generate starshade profiles with the hypergaussian function which calculates the contrast for a
range of parameters. I integrated the calculation with the existing functions for numerical optimization described in
the next section, in order to compare both approaches.
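The profile generator can be sketched as follows; this is a simplified stand-in for the thesis code (names ours), taking a and b directly rather than deriving them from OCP and D:

```python
import numpy as np

def hypergaussian(r, a, b, n=6):
    """Hypergaussian apodization: A = 1 over the opaque central disk
    (r <= a) and exp(-((r - a)/b)^n) over the petals (r > a)."""
    r = np.asarray(r, dtype=float)
    A = np.ones_like(r)
    petals = r > a
    A[petals] = np.exp(-((r[petals] - a) / b) ** n)
    return A

# Cash's 1/e definition of the occulter size: A(a + b) = exp(-1), so
# the inner-working-angle radius sits at r = a + b.
```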
Numerical optimization
The main numerical way to get an optimized apodization function has been developed by Robert J. Vanderbei [51] using linear programming. This mathematical method, also used in economics, management and engineering, applies the following operation:

Minimize: (c · A)
Subject to: (m · A) {≥, =, ≤} b & A {≥, ≤} d,

with c and A vectors, m a matrix, and b and d vectors corresponding to the constraints.
In our case, the goal is to get the best occulter shape by using Fourier optics for the system telescope + starshade. To do so, we reproduce the approach used by Vanderbei et al. ([31]) and constrain the intensity ratio of the light over the pupil plane to be less than 10^−10. As the intensity is not linear, we settle the matter by expressing the constraint on the related electric field so that:
|Eo(ρ)|^2 ≤ 10^−10 |E0|^2

with

Eo(ρ) = E0 e^(2πiz/λ) [ 1 − (2π/(iλz)) ∫_0^R A(r) J0(2πrρ/(λz)) e^((πi/λz)(r^2 + ρ^2)) r dr ].
We can already see that the first exponential e^(2πiz/λ) disappears in the calculation of the modulus. As written above, the electric field is described by the apodization function A(r). We use linear programming to minimize the sum of the apodization function, which is described in matrix format by the scalar product c · A(r), with c a simple unit vector, so that c · A(r) = Σi A(ri). This objective function has little impact on the results in this case, because the contrast goal is placed on the constraints (together with other constraints described below). Since Eo(ρ) is complex, the assumption is made that the constraint

Re(Eo)^2 + Im(Eo)^2 ≤ 10^−10

will correspond to
−10^−5/√2 ≤ Re(Eo) ≤ 10^−5/√2
−10^−5/√2 ≤ Im(Eo) ≤ 10^−5/√2,

with the amplitude E0 removed and the √2 coming from the constraint expression scaled to a circle of radius 10^−5.
Thus, we get a system of four inequalities. In order to make it linear for optimal solutions ([54]), we write in the code the integral inside Eo(ρ) as the sum of all the area elements under the curve, Σi J0(2πri ρ/(λz)) e^((πi/λz)(ri^2 + ρ^2)) ri ∗ A(ri), like a Riemann sum approximation of the integral. For convenience, we write

J0(2πri ρ/(λz)) e^((πi/λz)(ri^2 + ρ^2)) ri → φi, χi

(φi and χi being the real and imaginary parts of this kernel), so that

Re(Eo) ∝ Σi φi ∗ A(ri) & Im(Eo) ∝ Σi χi ∗ A(ri).
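This Riemann-sum evaluation of Eo(ρ), with φi and χi folded into a single complex kernel, can be sketched as follows (a midpoint-rule sketch with illustrative values, not the thesis code):

```python
import numpy as np
from scipy.special import j0

def occulter_field(A_func, R, rho, lam, z, n=4000):
    """Eo(rho)/E0 by a midpoint Riemann sum of the Fresnel integral,
    dropping the overall phase e^(2 pi i z/lam), which disappears in
    the modulus.  A_func is the apodization profile A(r)."""
    dr = R / n
    r = (np.arange(n) + 0.5) * dr                       # midpoint radial grid
    kernel = (j0(2*np.pi*r*rho/(lam*z))
              * np.exp(1j*np.pi*(r**2 + rho**2)/(lam*z)) * r)
    return 1.0 - (2*np.pi/(1j*lam*z)) * np.sum(A_func(r) * kernel) * dr
```

As a check, a fully opaque disk (A ≡ 1) gives |Eo/E0| = 1 on axis (the Arago spot), which the quadrature reproduces, while A ≡ 0 trivially gives Eo = E0.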
Moreover, we can add constraints on the apodizer itself. Firstly, by definition, A(r) will be bounded by 0 and 1. Secondly, we impose A(r) to be equal to one for the central part of the occulter (the fully opaque central disk). Thirdly, in the monochromatic case, the natural solution of such an optimization is a "bang-bang" solution, which is a discontinuous function (like a bar-code). In order to avoid this problem, we add a smoothness constraint σ bounding the second derivative of A(r). We also add a constraint on the first derivative so that the petal width decreases monotonically (the width of the petal is directly proportional to the apodization function). We finally have for A(r):

A(r) = 1 ∀ 0 ≤ r ≤ a,
0 ≤ A(r) ≤ 1 ∀ a ≤ r ≤ R,
A′(r) ≤ 0 ∀ 0 ≤ r ≤ R,
|A″(r)| ≤ σ ∀ 0 ≤ r ≤ R.
The first and second derivatives are expressed using, respectively, the finite difference expressions:

lim_{h→0} (A(r + h) − A(r))/h and lim_{h→0} (A(r + h) − 2A(r) + A(r − h))/h^2.

Here, h corresponds to the spacing of the n points taken for the calculations, so that lim_{h→0} becomes lim_{n→∞}. The linear programming formalism allows different constraints to be combined: we simply stack all of them in one large matrix:

Minimize: c · A(r) = ( .. 1 .. ) · A(r),

Subject to:
−10^−5/√2 ≤ Re(Eo) ≤ 10^−5/√2,
−10^−5/√2 ≤ Im(Eo) ≤ 10^−5/√2,
A(r < a) = 1,
A(r ≥ a) ∈ [0, 1],
A′(r) ≤ 0,
|A″(r)| ≤ σ,

and it returns a table of the function A(r) in the case where the data set can converge to a solution. If not, the calculation stops and returns an error message.
As Eo depends on the wavelength, the calculations have to be done over a sufficient range of λ covering the bandpass
we are looking at, meaning that all the constraints are also repeated for each wavelength.
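The stacking can be sketched with scipy.optimize.linprog. Below, only the apodizer-side rows are kept and all sizes are illustrative toy values; in the real problem, each Re(Eo)/Im(Eo) row (repeated per wavelength and per pupil point) is appended to A_ub in exactly the same way with bound ±10^−5/√2:

```python
import numpy as np
from scipy.optimize import linprog

def toy_apodizer_lp(n=40, n_disk=10, sigma=0.01):
    """Toy LP with the apodizer-side constraints only: A = 1 on the
    central disk, 0 <= A <= 1, A' <= 0 and |A''| <= sigma (finite
    differences per grid index).  Minimizing sum(A) pushes the profile
    down as fast as the smoothness constraint allows."""
    I = np.eye(n)
    D1 = (np.eye(n, k=1) - I)[:-1]                       # A[i+1] - A[i]
    D2 = (np.eye(n, k=1) - 2*I + np.eye(n, k=-1))[1:-1]  # 2nd difference
    A_ub = np.vstack([D1, D2, -D2])                      # A' <= 0, |A''| <= sigma
    b_ub = np.concatenate([np.zeros(n - 1), np.full(2*(n - 2), sigma)])
    A_eq, b_eq = I[:n_disk], np.ones(n_disk)             # opaque central disk
    return linprog(np.ones(n), A_ub=A_ub, b_ub=b_ub,
                   A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * n)
```

If the constraint set is infeasible, linprog reports it through res.success and res.status, matching the error-message behavior described above.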
The contrast as a parameter
I used the first optimization code described above to generate starshade designs for a large range of parameters. This approach is appropriate for designing a starshade for a specific goal, but for a systematic study of the parameter space the optimizer may or may not deliver a result; e.g. there may not be a possible starshade profile achieving 10^−10 suppression for a given set of constraints (diameter, IWA, shadow size, petal length). Given the existing code for the basic optimization of the starshade described above, I wrote a new optimization code that includes the contrast in the objective function of the optimizer, following a method described by Cady ([35]): the contrast becomes the result of the optimization, no longer a constraint.
Starting from:

Re(Eo) ≤ 10^−5/√2 & Re(Eo) ≥ −10^−5/√2,
Im(Eo) ≤ 10^−5/√2 & Im(Eo) ≥ −10^−5/√2,
the square root of the contrast 10^−10 becomes a parameter k, such that:

Re(Eo) ≤ k/√2 & Re(Eo) ≥ −k/√2,
Im(Eo) ≤ k/√2 & Im(Eo) ≥ −k/√2.
The trick is to remove the contrast from the constraints. To do so, we write:

Re(Eo) − k/√2 ≤ 0 & Re(Eo) + k/√2 ≥ 0,
Im(Eo) − k/√2 ≤ 0 & Im(Eo) + k/√2 ≥ 0.
Then, the vector expressing the apodization requires the addition of one term, the contrast k. We rewrite the matrix rows to express Re(Eo) − k/√2 (and likewise for the imaginary part), so that:

( .. φi, χi .. −1/√2 ) · ( A(ri), k )^T = Σi φi, χi ∗ A(ri) − k/√2,
and the last modification is in the minimized scalar product c · A(r). As we need to get the best contrast, we choose

c = ( 0, .., 0, 1 )^T, so that c · ( A(ri), k )^T = k,

and minimizing this scalar product gives us the minimum value of k. Constraints dealing with the first and second derivatives of the apodization function just need to be resized to the dimension of the new vector, with a 0 in the entry corresponding to k. With this scheme, the sum of the apodization profile is no longer minimized. This does not impact the result since the optimizer is entirely driven by the constraints in this case. With this new version of the code, the optimizer delivers a result whatever the set of constraints may be. The output of the optimizer is both the apodization profile and the contrast, concatenated in one long vector. This code is better adapted for the parameter space study we describe in the next section.
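The mechanics of the augmented vector can be demonstrated on a toy problem (toy numbers, not a real Fresnel kernel) where the apodizer is pinned by equality rows, so the optimum is simply k = √2 |φ · A|:

```python
import numpy as np
from scipy.optimize import linprog

def min_k(phi, A_fixed):
    """Append the contrast k to the unknown vector, give the field rows
    an extra -1/sqrt(2) column (phi.A - k/sqrt(2) <= 0 and
    -phi.A - k/sqrt(2) <= 0), and minimize c = (0, ..., 0, 1)."""
    n = len(A_fixed)
    s = 1.0 / np.sqrt(2.0)
    A_ub = np.array([np.append(phi, -s), np.append(-phi, -s)])
    A_eq = np.hstack([np.eye(n), np.zeros((n, 1))])  # pin the apodizer
    res = linprog(np.append(np.zeros(n), 1.0),
                  A_ub=A_ub, b_ub=np.zeros(2),
                  A_eq=A_eq, b_eq=np.asarray(A_fixed, dtype=float),
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[-1]
```

With φ = (1, 2) and A = (0.5, 0.25), for instance, φ · A = 1 and the optimizer returns k = √2.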
Creation of widths at tips and gaps
Another improvement I made is giving the apodization function a concrete physical realization. The numerical solutions have petals of limited size, corresponding to the size of the array used for A(r); however, the width of the tip of the petal is unconstrained and may reach unrealistic values. The analytical optimization has endless petals that need to be truncated at some radius, which is studied in section 2.2.2. The problem is the same for the "valleys" between two petals. The purpose is to create a width characterizing both tip and gap size at the same time (they can take different values).

Figure 2.1: Positions of the tip and gap on an external occulter. We want to build widths of the order of a millimeter, and as the petal size is more than 10 meters, gap and tip widths will not be observable.

In order to do so, we add two lines of constraints ([35]): the first requires the gap width at the bottom of the petal (r1), set by 1 − A(r1), to be at least ∆gap; the second requires the petal width at the edge (r2), set by A(r2), to be at least a thickness ∆tip. We use ∆ = R∆θ(r) = R (π/N) A(r) (here, the factor of 2 disappears as we deal with half of the petal) to write the following new constraints:
(π/N) R ∗ (1 − A(r1)) ≥ ∆gap,
(π/N) R ∗ A(r2) ≥ ∆tip,

i.e.:

−(π/N) ∗ A(r1) ≥ ∆gap/R − π/N,
(π/N) ∗ A(r2) ≥ ∆tip/R,

i.e., in our matrix:

( .. 0 .. −π/N .. 0 .. ) · A(r) ≥ ∆gap/R − π/N (row acting at r1),
( .. 0 .. 0 .. π/N .. ) · A(r) ≥ ∆tip/R (row acting at r2).
As we ask the function to make a leap at two points, the smoothness constraints might need to be relaxed to allow the optimization to converge. Here, I did this by nullifying the expression of the second derivative on three points around the gap so that the discontinuity is not taken into account.
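The two width rows can be generated as in the following sketch (our helper, using the "row · A(r) ≥ rhs" convention above, with i_gap and i_tip the grid indices of r1 and r2):

```python
import numpy as np

def tip_gap_rows(n, i_gap, i_tip, R, N, d_gap, d_tip):
    """Minimum-feature-width rows from Delta = (pi/N) R A(r):
    gap width (pi/N) R (1 - A(r1)) >= d_gap and
    tip width (pi/N) R A(r2) >= d_tip."""
    row_gap = np.zeros(n)
    row_gap[i_gap] = -np.pi / N                # acts on A(r1)
    row_tip = np.zeros(n)
    row_tip[i_tip] = np.pi / N                 # acts on A(r2)
    return (row_gap, d_gap / R - np.pi / N), (row_tip, d_tip / R)
```

For instance, with R = 37.5 m, N = 16 petals and 2 mm features, an apodization of 0.9 at the gap radius leaves a gap of (π/16) · 37.5 · 0.1 ≈ 0.74 m, comfortably above the limit.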
Of course, we can combine both improvements in order to get the best suppression for these new petals. A realistic number for manufacturability would be a minimum size of 2 mm for these features. This lower limit is sufficient for the tip and gap widths without changing the efficiency of the starshade, as we can observe in figure 2.2 (this is easy to imagine given the size of a petal, between 15 and 20 meters). More calculations have to be made in order to see the difference in behavior between the tip and the gap over all ranges of diameter, petal length and inner working angle. Following similar logic, the addition of tensioning elements and some changes in the petal structure have been used to see how the contrast varies between configurations ([49]). However, in general, the more constraints we add to the problem, the larger the starshade becomes.

Figure 2.2: Contrast as a function of the width at tips and gaps (both the same) for an occulter of 75 meters in diameter, at 100 mas and with an occulter central part of 50%. The contrast remains under the 10^−10 requirement up to 3 mm and starts rising exponentially beyond, exceeding 10^−10. Here, changes in the program shifted the value at 0 mm by 1 · 10^−11 relative to the original code, creating a small discrepancy in the results.
2.2 Global study of the parameter space and results
2.2.1 Goal
We now have enough tools to describe the starshade properties giving the suppression we are looking for. We run many calculations to explore the abilities of the starshade over the range of parameters describing the stellar system. The inner working angle will take values from 80 to 120 mas, corresponding to most of the range of the habitable zone. We start with starshade diameters of 60 up to 100 meters. 100 meters would certainly be unrealistic, but we include a large range of diameters to understand the behavior of the starshade according to its design parameters. The distance between the occulter and the telescope is between 50,000 km and 100,000 km. Next, the petal length varies between 30% and 70% of the total size. More than 70% would be unfeasible, and less than 30% would bring the occulter close to one without apodization. The shadow size will take values from −1 to 8 meters (the margin in comparison to the telescope size). At first, the contrast, as a constraint, will be set at 10^−10 as planned. Then, we want to optimize this value with regard to all the others.
About time of calculation
Each creation of an apodizer profile for a given data set, in the case of numerical optimization, runs between 10 and 20 minutes depending on the precision (number of points, constraints). Typically we use 4000 points along the apodizer profile, 11 wavelengths and 11 points along the shadow profile at the telescope aperture. For figures like figure 2.11, figure 2.9, or figure 2.6, the diameter takes almost 8 values, the inner working angle 5 values, the central part 9 values, which gives us almost 360 different panels. Thus we have 360 ∗ 20 ∗ 60/3600 = 120 hours of calculation. This long time can be reduced by decreasing the number of wavelengths, the number of points across the apodization profile A(r), or the number of points across the shadow profile at the telescope aperture. However, this means less accurate values (useful in practice only for testing the programs). Moreover, trying to get a higher accuracy or changing the values of the shadow oversize, gap and tip sizes, or any other parameter (like changing the step of the diameter to 1 or 2 meters) will considerably increase the time of calculation.
On the contrary, since the hypergaussian function comes from an analytical optimization, the apodization is already known and we only calculate the contrast it returns. Because the hypergaussian is always better at shorter wavelengths, we calculate the monochromatic contrast at the longest wavelength of the band. This monochromatic character also reduces the time and makes the hypergaussian easier to manipulate. Each calculation takes less than a minute, mainly limited by the Fresnel propagation from the occulter to the aperture. However, for the second method of calculating an analytical optimization, seen in 2.2.7, we need to compute a complete numerical apodization, so it takes as much time as the numerical solution.
2.2.2 First comparison between analytical and numerical methods
For similar properties, we have a look at the behavior of the two optimization methods for the occulter’s shape.
Figure 2.3 shows the differences between linear programming and the hypergaussian in the apodization function.

Figure 2.3: Apodization function for numerical optimization with linear programming and analytical optimization with the hypergaussian function, for an occulter diameter of 75 m, an IWA of 100 mas, and a central part of 50%, in the case of a JWST-like telescope.

We can see that there are some steps in the numerical apodization. We also note that the hypergaussian function is fitted to the constraints so that it goes down faster than the numerical one. Moreover, because the hypergaussian function is exponential, it never reaches zero, but it reaches very small values quickly, as the typical value for the hypergaussian exponent (which is characteristic of the fall) is n = 6. It is difficult to define where to end the written apodizer.
Indeed, in our plot, the true diameter of the analytical apodizer is not known. In order to calculate the IWA, Cash ([38]) defined the diameter where the transmission reaches 1 − 1/e. In this way, we understand why the difference is well marked. A 50 meter diameter defined in this way would actually be a 60 meter diameter, tip to tip ([38]). However, the propagation takes into account the whole apodizer profile, up to where it reaches zero. Figure 2.4 shows us how the contrast varies when we expand the diameter to its true value. To do so, we run over a coefficient which extends the radius for the propagation calculation, from 1 up to 1.6.
As we can see, the radius needs to be multiplied by at least 1.3 to satisfy the constraints and provide a good suppression (here, the 60 meter occulter made with the analytical optimization provides 10^−8 suppression). The occulter diameter thus goes from 60 to 78 meters (and similarly for bigger starshades).

Figure 2.4: Behavior of the contrast as a function of the true diameter. We multiply the value of the radius by a coefficient which expresses how much longer the true diameter is (in the case of a 60 meter diameter). Beyond a certain diameter the truncation of the hypergaussian has virtually no effect (here beyond 75 m). Therefore, we use a coefficient from the 1/e diameter to include the entire tip of the hypergaussian. Since the propagation is calculated for the entire array, the result does not depend on the coefficient value.
As previously said, the hypergaussian function is monochromatic. Figure 2.5 shows us how the contrast changes when
we change the maximum wavelength. Clearly, increasing the range of the spectrum, and therefore the wavelength
of interest, will damage the contrast quickly and thus a bigger starshade will be required. Later in section 2.2.7, we
will create the hypergaussian in another way by fitting a numerical optimization to it in order to find a consistent
definition for both approaches.
Figure 2.5: Contrast of the hypergaussian apodization as a function of the wavelength. Smaller wavelengths will
provide a better suppression, and increasing the wavelength will deteriorate the contrast in a logarithmic way.
2.2.3 Contrast as a function of the diameter
Here, we have a look at the general behavior of the contrast subject to variations of the diameter at different inner working angles, for a constant central part size of the occulter. We run over the whole range of diameters and inner working angles.
In figure 2.6, we can see a logarithmic behavior of the contrast. As the inner working angle gets smaller, the suppression requirement of 10^−10 is achieved only for high values of the diameter. On the contrary, high inner working angles easily reach the suppression for smaller diameters. This is consistent with the intuition: bigger occulter = higher suppression = smaller inner working angle.

Figure 2.6: Values of the contrast as a function of the occulter diameter for different inner working angles (80 to 120 mas). Left: obtained with the numerical optimization. Right: obtained with the help of the hypergaussian function. For the numerical optimization, we notice that up to 85 meters would be sufficient to cover the whole range of inner working angles wanted.

Figure 2.7: Left: shape of the apodization for a similar contrast between numerical and analytical optimization at 62 m. Right: their comparison in suppression at various diameters, overlaying at 62 m. Very early, the numerical optimization becomes more efficient than the analytical one, with a peak at around 85 meters. At 75 m, the numerical solution is nearly 9 times better (contrast for the numerical: 4.383·10^−11; contrast for the analytical: 4.301·10^−10). Similarly, the contrast of 4.383·10^−11 reached by the 75 m numerical solution requires a diameter of almost 93 meters with an analytical solution.
In the second plot of figure 2.7, we compare the two methods for a given inner working angle (here, 100 mas). We can see that for deeper contrast, the difference between the hypergaussian and linear programming apodizations gets larger, and for the 10^−10 requirement, we have a difference of almost 12 meters in the occulter diameter. This is evidence of the advantage of an occulter built with a numerically optimized apodization in comparison to the hypergaussian function. If we increase the value of the inner working angle, the separation between the two methods goes down, to as little as 8 meters at 120 mas. However, at 100 mas, an occulter smaller than 62 meters would preferably be built with the help of the hypergaussian function. The first plot is similar to figure 2.15 and shows what the shapes of both apodizations look like for a similar contrast (our 62 meter occulter in that case). It is interesting to see that even for a similar contrast, the shape is actually quite different. But we have to remember that the size of this analytical apodization is defined up to the 1/e transmission point and will in fact correspond to a diameter of at least 10 meters more (considering the supposedly infinite length of the function).
2.2.4 Distance of the occulter as a function of the diameter
In this part, we change the variables. Using the fact that the inner working angle, diameter and distance between the occulter and the telescope are linked through IWA = (Diameter D)/(2 ∗ Distance z), we get the distance with:

z(km) = (D(m)/2) ∗ 1/(IWA(mas) ∗ 10^−3 ∗ (1/3600) ∗ (π/180)) ∗ 10^−3
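The conversion can be checked with a few lines (a sketch, not the thesis code):

```python
import math

def occulter_distance_km(diameter_m, iwa_mas):
    """z = D / (2 * IWA) with the IWA converted from milliarcseconds
    to radians, returned in kilometers."""
    iwa_rad = iwa_mas * 1e-3 / 3600.0 * math.pi / 180.0
    return diameter_m / (2.0 * iwa_rad) * 1e-3

# e.g. a 75 m occulter at 100 mas sits at roughly 77,000 km, consistent
# with the ~75,000 km quoted for the numerical design in figure 2.8.
```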
For a given ratio between the central part and petal length (here, 49%), we select the 10^−10 contrast. For each diameter, we select the smallest distance between the occulter and the telescope. In figure 2.8, the plot describes the distance from the occulter to the telescope required to reach 10^−10 suppression. The numerical and analytical optimizations show two similar behaviors but with different configurations of data. We can see, for example, that an angular resolution of 100 mas in the extra-solar system will require an occulter of about 89 m at a distance of 90,000 km for the hypergaussian apodization. In comparison, the numerical apodization requires an occulter of about 75 m at 75,000 km. Calculations of the distance are made with the formula IWA = D/2z. By translating the curve, we clearly see how much we gain in occulter size and distance: almost 15 meters in diameter and 15,000 km in distance on average.

Figure 2.8: Distance of the occulter for a contrast less than 10^−10 as a function of its diameter, for inner working angles from 80 to 120 mas. Left: by numerical optimization. Right: by analytical optimization. In the case of the analytical apodization, each missing diameter means that no design could be generated for this set of constraints (too small a diameter).
2.2.5 Action of the petal length on the contrast
Using the basic optimization scheme with the contrast as a constraint, the program stops each time the constraints cannot be satisfied. This method presents the advantage of directly showing the limit of the starshade. However, we do not get any information about what the best contrast would be for a given set of constraints. Here, we make a run over different sizes of petal and occulter in order to see how these two parameters behave together. In figure 2.9,
Figure 2.9: Minimum of the petal length ratio (first plot) and petal length (second plot) as a function of the diameter
in order to reach the suppression requirement of 10−10 . Increasing the size of the diameter means decreasing the
petal length, which is more practical for the starshade’s engineering.