This document is a resume for Mark Yashar. It summarizes his expertise in data analysis, scientific computing, physics, and high performance computing. It lists his qualifications including various programming languages, operating systems, and software. It also outlines his educational background including a PhD in Physics from UC Davis, and professional experience including roles doing data analysis, scientific modeling and simulation, and research.
Mark Yashar
864 Coachman Place, Clayton, California 94517 (United States)
Cell: 530-574-1834
mark.yashar@gmail.com
http://www.linkedin.com/in/markyashar
https://github.com/markyashar
DATA ANALYSIS/SCIENCE | SCIENTIFIC COMPUTING/PROGRAMMING | HIGH PERFORMANCE COMPUTING | PHYSICS
Experienced data analyst, physicist, and engineer with expertise in scientific computing and
numerical modeling methods. Experience and knowledge of image processing, algorithm
development, data visualization, and machine learning, with particular attention to detail
and excellent written and oral communication skills. Knowledge and experience with
technical/scientific writing, as well as team building, project management, and leadership
skills. Ability to solve high-level technical and scientific problems from both a holistic and
a granular point of view. Qualifications include:
COMPUTING/SOFTWARE
Operating Systems: Windows, Linux (Red Hat, CentOS, Ubuntu), Unix, Mac OS.
Programming Languages and Data Analysis Packages: Python (including the NumPy,
matplotlib, SciPy, and scikit-learn libraries), C/C++ (including object-oriented
programming and associated use of the gdb and ddd debuggers and Eclipse),
MATLAB/Octave, Fortran, Perl, R, Unix shell scripting, IDL, Mathematica, HTML,
Java, JavaScript, Berkeley DB XML, MySQL, Common Astronomy Software
Applications (CASA), Image Reduction and Analysis Facility (IRAF), SuperMongo,
MeqTrees, Weather Research and Forecasting Model (WRF), NCAR Command
Language (NCL), netCDF Operators (NCO).
Other Software Applications: LaTeX, Excel (including use of formulas, functions,
and plotting features and capabilities), Concurrent Versions System (CVS), VMware
Workstation, Liferay Enterprise. Knowledge of FTP and HTTP protocols.
High-Performance Computing (e.g., Cray XE6), Hadoop, MapReduce.
SCIENTIFIC/TECHNICAL
Monte Carlo methods and techniques
Markov Chain Monte Carlo (MCMC), including Bayesian analysis
Machine Learning
Data, signal, and image processing and analysis; error analysis and statistics
Data visualization
Numerical modeling, simulation
Scientific/technical writing
MANAGEMENT & LEADERSHIP
Excellent written and oral communications
Cost/benefit ratio analysis and risk management
Leadership and teambuilding
Project Management
Financial accountabilities and grant writing
EDUCATION
12/2008
University of California, Davis (UCD), Davis, California
PhD in Physics
Dissertation: "Topics in Microlensing and Dark Energy"
Gained skills and experience in Monte Carlo, MCMC, Bayesian, and Kolmogorov-Smirnov
methods and techniques, and in data visualization; also gained experience and skills in
MATLAB, Fortran, C, Perl, and UNIX shell scripting.
Advisor: Dr. Andreas Albrecht
01/1999
San Francisco State University (SFSU), San Francisco, California
MS in Physics
05/1994
San Francisco State University, San Francisco, California
BA in Physics, Concentration in Astronomy
PROFESSIONAL DEVELOPMENT
09/2016-10/2016
Introduction to Data Science with Python: 6-week course, Metis, San Francisco, CA.
Course Instructors: Ramesh Sampath (ramesh@sampathweb.com) and TJ Bay
(spintronic@gmail.com). Please see https://github.com/markyashar/sf16_ids1/ and my
LinkedIn profile for further details.
EXPERIENCE
03/2016-08/2016
Business Analyst (temporary contractor position), Visa Inc. Digital Operations, Foster City,
California (CA)
Supervisor: Tina Pan (tpan@visa.com), Director of Fraud Operations, Visa Inc. Digital
Operations
Visa Checkout fraud research and analysis
Weekly fraud monitoring (including extensive use of Excel)
Python code analysis
02/2012-02/2014
Postdoctoral Scholar-Employee, University of California, Berkeley (UCB), Department of
Earth and Planetary Science, Berkeley, California (CA)
Supervisor: Dr. Inez Fung, Professor of Atmospheric Science; Contributor to the 2007
Nobel Peace Prize awarded to the United Nations Environmental Program (UNEP)
Intergovernmental Panel on Climate Change (IPCC)
Carried out research focused on mesoscale and regional (forward or "bottom-up")
atmospheric transport modeling and analysis of anthropogenic and biogenic carbon
dioxide emissions from northern California for multi-scale estimation and
quantification of atmospheric CO2 concentrations.
Made extensive use of WRF (written mostly in Fortran), the WRF-Chem coupled
weather-air quality model for atmospheric transport simulations, and the
Vegetation Photosynthesis and Respiration Model (WRF-VPRM) biospheric model to
simulate CO2 biosphere fluxes and atmospheric CO2 concentrations. This work also
involved the use of the R statistical scripting language, the NCAR Command
Language (NCL), MATLAB, Python, Google Earth (KML and KMZ), and Ferret for
additional pre- and post-processing, modification, and visualization of netCDF files,
further enabling and expediting data analysis of simulation results (see the short
post-processing sketch at the end of this entry).
Troubleshot WRF, WRF-Chem, and VPRM simulation runs and results to
increase data efficiency and to decrease time to solve problems.
Installed, compiled, built, and configured WRF, WRF-Chem, and VPRM on the
National Energy Research Scientific Computing Center (NERSC) multi-core
supercomputing system "Hopper" and submitted batch job scripts to this system to
run the WRF model simulations.
Assisted students and post-docs in installing and configuring WRF and WRF-Chem.
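A minimal Python sketch of the kind of netCDF post-processing and visualization described above. The output file name "wrfout_d01.nc" and the tracer variable name "CO2_ANT" are placeholders assumed for illustration; actual WRF-Chem variable names depend on the model configuration.

```python
# Illustrative WRF-Chem output post-processing sketch (assumed file and
# variable names; not taken from the original analysis).
import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset

with Dataset("wrfout_d01.nc") as ds:                        # hypothetical output file
    co2 = np.asarray(ds.variables["CO2_ANT"][0, 0, :, :])   # first time step, surface level
    lat = np.asarray(ds.variables["XLAT"][0, :, :])         # WRF latitude field
    lon = np.asarray(ds.variables["XLONG"][0, :, :])        # WRF longitude field

plt.pcolormesh(lon, lat, co2, shading="auto")
plt.colorbar(label="CO2 tracer (ppmv)")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Surface CO2 tracer, first output time")
plt.savefig("co2_surface.png", dpi=150)
```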
02/2009-02/2012
Postdoctoral Research Associate, University of Illinois Urbana-Champaign (UIUC),
National Center for Supercomputing Applications (NCSA), Champaign, Illinois
Supervisor: Dr. Athol Kemball, Associate Professor of Astronomy, Center Affiliate of NCSA
Conducted research and development in Square Kilometer Array (SKA) calibration
and processing algorithms and computing, with a focus on cost and feasibility
studies of radio imaging algorithms and direction-dependent calibration errors
with the Technology Development Project (TDP) Calibration Processing Group
(CPG) at UIUC.
Evaluated the computational costs of non-deconvolved images for a number of
existing radio interferometry algorithms used to deal with non-coplanar baselines
in wide-field radio interferometry and co-authored a corresponding internal
technical report ("Computational Costs of Radio Imaging Algorithms Dealing with
the Non-Coplanar Baselines Effect: I") with A. Kemball.
Implemented and utilized numerical and imaging simulations, in conjunction with
the MeqTrees software package and the CASA software package (written
mostly in C++), to address cost, feasibility, dynamic range, and image fidelity issues
related to calibration and processing for SKA and the dependence of these issues on
certain key antenna and feed design parameters such as sidelobe level and mount
type. Numerical simulations included Monte Carlo simulations (written in Python)
to test analytical expressions.
Installed, built, compiled, configured, and set up the C++ software development
environment for CASA (including use of the gdb, ddd, and Eclipse C++ debuggers) and
all its dependencies, and made modifications to C++ code as necessary.
Installed, configured, and maintained the Java-based Liferay Portal software bundled
with the Apache Tomcat application server and connected to a MySQL database on a
Linux machine.
Installed, configured, maintained, and utilized VMware Workstation on a host
Linux system.
05/2006-12/2008
Research Assistant, UCD, Department of Physics, Davis, CA
Supervisor: Dr. Andreas Albrecht, Professor of Physics
Carried out a research project with Professor Andreas Albrecht's research group that
involved an MCMC analysis of a dark energy quintessence model (known as the
Inverse Power Law (IPL) or Ratra-Peebles model) and that included the utilization of
Dark Energy Task Force (DETF) data models simulating current and future data
sets from new and proposed observational programs.
Wrote, modified, and submitted batch job scripts to run MATLAB MCMC code on a
Linux computing cluster to expedite the running of the MCMC simulations and the
generation of MCMC output.
Generated simulated data sets for a Lambda-CDM background cosmology as well as
a case where the dark energy was provided by a specific IPL model. Then used an
MCMC algorithm to map the likelihood around each fiducial model via a Markov
chain of points in parameter space, starting with the fiducial model and moving to a
succession of random points in space using a Metropolis-Hastings stepping
algorithm (an illustrative sketch of such a sampler appears at the end of this entry).
From the associated likelihood contours, found that the respective
increase in constraining power with higher-quality data sets produced by the analysis
gave results broadly consistent with the DETF for the dark energy
parameterization that they used. Also found, consistent with other findings, that for
a universe containing dark energy described by the IPL potential, a cosmological
constant can be excluded by high-quality "Stage 4" experiments at well over 3
sigma.
Troubleshot and debugged simulation runs and results.
Lead author of a paper published in Physical Review D on the research results, using
Monte Carlo, MCMC, Metropolis-Hastings, Bayesian, and various mathematical
modeling and uncertainty quantification methods/skills.
Assisted a graduate student in generating 3-dimensional chi-squared plots with
MATLAB that helped develop intuition into the actual physical behavior of the
Albrecht-Skordis (AS) dark energy quintessence model, providing a better understanding
than would have been allowed by running the full MCMC on the larger parameter
space. This systematic investigation revealed some numerical problems and issues
in the student's analysis of the AS model.
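As an illustration of the Metropolis-Hastings stepping algorithm referenced above, the Python/NumPy sketch below samples a simple two-parameter Gaussian log-likelihood. The likelihood is a hypothetical stand-in; the actual work used DETF data models and MATLAB MCMC code.

```python
# Minimal Metropolis-Hastings sampler (illustrative; hypothetical likelihood).
import numpy as np

def log_likelihood(theta):
    # Stand-in Gaussian log-likelihood in two dark-energy-like parameters.
    w0, wa = theta
    return -0.5 * ((w0 + 1.0) ** 2 / 0.05 ** 2 + wa ** 2 / 0.5 ** 2)

def metropolis_hastings(log_like, theta0, n_steps=50000, step=(0.02, 0.2), seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    ll = log_like(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + rng.normal(0.0, step)   # symmetric Gaussian proposal
        ll_new = log_like(proposal)
        if np.log(rng.random()) < ll_new - ll:     # Metropolis accept/reject step
            theta, ll = proposal, ll_new
        chain[i] = theta
    return chain

chain = metropolis_hastings(log_likelihood, theta0=(-1.0, 0.0))
print("posterior mean:", chain.mean(axis=0), "posterior std:", chain.std(axis=0))
```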
01/2004-01/2006
Research Assistant/Participating Guest, Lawrence Livermore National Laboratory,
Livermore, CA, and UCD, Department of Physics
Supervisor: Dr. Kem Cook
Engaged in a research project with Dr. Kem Cook at LLNL that expanded and
extended the work of others and involved the utilization and development of
reddening models, star formation histories, color-magnitude diagrams (CMDs), and
microlensing population models of the Large Magellanic Cloud (LMC) to constrain
the locations of microlensing source stars and microlensing objects (Massive
Compact Halo Objects, MACHOs) in the LMC and the
Milky Way halo, using data on 13 microlensing source stars obtained by the MACHO
collaboration with the Hubble Space Telescope (HST).
Carried out a 2-dimensional Kolmogorov-Smirnov (KS) test along with Monte Carlo
simulations to quantify the probability that the observed microlensing source stars
were drawn from a specific model population.
Utilized and modified C, Fortran, and Perl code and UNIX shell scripts during the
course of this research project to implement and carry out the Monte Carlo simulations
and KS tests. The SuperMongo interactive plotting package was used for visualization of
some simulation results. Simulation results were described and discussed in the PhD
dissertation.
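A rough Python sketch of Monte Carlo calibration of a KS statistic, as a 1-D analogue of the 2-D test described above. The Gaussian "model population" and the 13 stand-in measurements are hypothetical; they are not the LMC source-star models or the MACHO/HST data.

```python
# Monte Carlo calibration of a KS statistic (1-D illustrative analogue).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
observed = rng.normal(0.2, 1.0, size=13)     # stand-in for 13 source-star measurements

def draw_model_sample(n):
    # Hypothetical model population for the source stars.
    return rng.normal(0.0, 1.0, size=n)

# KS statistic of the observed sample against a large model realization.
d_obs = stats.ks_2samp(observed, draw_model_sample(20000)).statistic

# Distribution of the statistic when 13 points really are drawn from the model.
d_mc = np.array([
    stats.ks_2samp(draw_model_sample(13), draw_model_sample(20000)).statistic
    for _ in range(1000)
])
p_value = np.mean(d_mc >= d_obs)
print(f"KS statistic = {d_obs:.3f}, Monte Carlo p-value = {p_value:.3f}")
```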
01/2003-05/2003
Student Project, UCD, Department of Physics, Davis, CA
Wrote computer code in Fortran and IDL for the final project in a graduate-level
computational physics course instructed by Dr. John Rundle; the code computed a
closed orbital ellipse of an extrasolar planet orbiting a single star using data input
by the user. The program queried the user to enter various orbital and physical
parameters of the planet-star system and used this data to calculate the observed
effective equilibrium blackbody temperature of the extrasolar planet for a given
orbital phase. The program also calculated the planet-to-star flux ratios at given
orbital phases.
Generated plots showing the shape and size of the orbit, orbital speed vs. orbital
phase, planet temperature vs. orbital phase, and planet-to-star flux ratios vs. orbital
phase, which also indicated whether the inputs entered met the criteria
for a habitable planet.
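A short Python sketch of the kind of calculation this project performed (the original code was written in Fortran and IDL). The orbital and stellar parameters are arbitrary example values, and the flux ratio below assumes thermal emission only.

```python
# Illustrative extrasolar-planet orbit and equilibrium-temperature calculation.
import numpy as np

def orbital_radius(a, e, theta):
    """Star-planet separation for true anomaly theta (radians)."""
    return a * (1.0 - e ** 2) / (1.0 + e * np.cos(theta))

def equilibrium_temperature(t_star, r_star, r, albedo=0.3):
    """Effective blackbody equilibrium temperature of the planet."""
    return t_star * np.sqrt(r_star / (2.0 * r)) * (1.0 - albedo) ** 0.25

AU, R_SUN, R_JUP = 1.496e11, 6.957e8, 7.149e7          # metres (example constants)
theta = np.linspace(0.0, 2.0 * np.pi, 360)             # orbital phase angles
r = orbital_radius(a=0.3 * AU, e=0.2, theta=theta)     # example eccentric orbit
t_planet = equilibrium_temperature(t_star=5778.0, r_star=R_SUN, r=r)
flux_ratio = (R_JUP / R_SUN) ** 2 * (t_planet / 5778.0) ** 4   # thermal flux ratio
print(f"T_eq range: {t_planet.min():.0f}-{t_planet.max():.0f} K, "
      f"max flux ratio: {flux_ratio.max():.2e}")
```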
09/2002-12/2008
Reader, UCD, Department of Physics, Davis, CA
Graded homework assignments and exams, and recorded and calculated grades
(using Excel) for undergraduate and graduate physics courses including classical
mechanics, electricity and magnetism, mathematical methods in physics,
astrophysics, introductory astronomy and cosmology, and quantum mechanics.
Held office hours.
Proctored exams.
09/2002-05/2003
Research Assistant, UCD, Department of Physics, Davis, CA
Supervisor: Dr. Matt Richter, Research Physicist
Processed, extracted, and displayed spectra of stars and circumstellar
material obtained by the Texas Echelon Cross Echelle Spectrograph (TEXES) for the
mid-infrared, used with the NASA Infrared Telescope Facility, using Fortran (for data
extraction) and IDL (for data visualization and display).
08/1999-08/2001
Data Aide, Stanford Linear Accelerator Center (SLAC), Particle Astrophysics Group, Menlo
Park, CA
Supervisors: Professor Elliott Bloom, Dr. Paul Kunz
Handled and processed data and maintained the data archive for the Unconventional
Stellar Aspect (USA) X-ray astronomy experiment at SLAC.
Downloaded raw data files from the Naval Research Laboratory (NRL) and
processed them to create FITS files for scientists' local use.
Submitted batch jobs to other computing systems and clusters.
Wrote and assisted in writing and developing Perl and UNIX shell scripts to
automate and expedite many of the data handling, processing, and
archive maintenance tasks. Also copied the raw data files to computer tape
cartridges.
Assisted in scheduling and setting up USA teleconferences.
Assisted in supporting, maintaining, and administering printer systems and
software and Windows operating systems on multiple machines.
Maintained the inventory of Particle Astrophysics group computers, other hardware,
and software licenses.
01/1999-06/1999
Student, San Francisco State University, Department of Physics and Astronomy, San
Francisco, CA
Engaged in a laboratory project for an astronomy lab course instructed by Dr.
Adrienne Cool in which possible cataclysmic variable (CV) star candidates were
identified from light curves and R vs. H-alpha plots using R and H-alpha CCD images
taken with the Hubble Space Telescope (HST) Wide Field Planetary Camera 2
(WFPC2) of the central regions of the globular star cluster NGC 6397. Used the IRAF,
SAOTNG, and SuperMongo software packages in the analysis.
Carried out an observational project on variable stars for this astronomy lab course
using a 10-inch Epoch telescope-CCD system and the IRAF and SAOTNG software
packages. Created a web page using HTML to post project results online.
GRANTS AND SUPPORTED RESEARCH ASSOCIATESHIPS
National Science Foundation Grant (02/2009-02/2012, A. Kemball)
Department of Energy (2007, A. Albrecht; 2012, I. Fung)
National Science Foundation (2004, K. Cook, R. Becker)
PROFESSIONAL AFFILIATIONS/MEMBERSHIPS
American Physical Society (2002-Present)
American Geophysical Union (2013-Present)
PUBLICATIONS
M. Yashar, B. Bozek, A. Albrecht, A. Abrahamse, M. Barnard, Exploring Parameter
Constraints on Quintessential Dark Energy: The Inverse Power Law Model, Physical Review
D, 79, 103004, 2009.
M. Barnard, A. Abrahamse, A. Albrecht, B. Bozek, M. Yashar, A measure of the impact of
future dark energy experiments based on discriminating power among quintessence models,
Physical Review D, 78, 043528, 2008; Erratum: Physical Review D, 80, 129903(E), 2009.
M. Barnard, A. Abrahamse, A. Albrecht, B. Bozek, M. Yashar, Exploring Parameter
Constraints on Quintessential Dark Energy: the Albrecht-Skordis model, Physical Review D,
77, 103502, 2008.
INTERNAL TECHNICAL REPORTS
M. Yashar, A. Kemball, Computational Costs of Radio Imaging Algorithms Dealing with the
Non-coplanar Baselines Effect: I, TDP Calibration and Processing Group Memo #3
(http://www.astro.kemball.net/Publish/files/ska_tdp_memos/cpg_memo3_v1.1.pdf), 2010.
A. Kemball, T. Cornwell, M. Yashar, Calibration and Processing Constraints on Antenna and
Feed Designs for SKA: I, TDP Calibration and Processing Group Memo #4
(http://www.astro.kemball.net/Publish/files/ska_tdp_memos/CP_Antenna_Feed.pdf),
2009.
WORKSHOPS AND CONFERENCE PARTICIPATION
American Geophysical Union Fall Meeting, Moscone Center, San Francisco, CA,
December 9-13, 2013
Basic WRF Winter Tutorial, National Center for Atmospheric Research (NCAR),
Boulder, CO, January 28 - February 1, 2013
Model Evaluation Tools (MET) WRF Tutorial, NCAR, Boulder, CO, February 4-5,
2013
SKA Calibration and Processing F2F Group Meeting, Hyatt Regency O'Hare Hotel,
Chicago, IL, January 15, 2010
5th Annual Cosmology in Northern California (CINC '08), Kavli Institute for Particle
Astrophysics and Cosmology (KIPAC), Stanford Linear Accelerator Center, April 18,
2008
4th Annual Cosmology in Northern California (CINC '07), University of California,
Davis, May 11, 2007
Cosmo 2006 International Workshop on Particle Physics and the Early Universe,
Granlibakken Conference Center and Resort, Tahoe City, CA, September 24-29,
2006
REFERENCES
Inez Fung, Sc.D.
Professor of Atmospheric Science
(Contributor to the 2007 Nobel Peace Prize awarded to the United Nations Environmental
Program (UNEP) Intergovernmental Panel on Climate Change (IPCC))
Department of Earth and Planetary Science
Department of Environmental Science, Policy, and Management
University of California, Berkeley
McCone Hall, Room 307, Mail Code 4767
Berkeley, CA 94720-4767
(510)-643-9367
ifung@berkeley.edu
Andreas Albrecht, Ph.D.
Distinguished Professor and Chair, Physics
Department of Physics
University of California, Davis
Physics, Room 175
One Shields Avenue
Davis, CA 95616
(530)-752-5989
albrecht@physics.ucdavis.edu
Athol Kemball, Ph.D.
Associate Professor of Astronomy
Center Affiliate of NCSA
Astronomy Department
University of Illinois, Urbana-Champaign
203 Astronomy Building, MC-221
1002 W. Green Street
Urbana, IL 61801
(217)-333-7898
akemball@illinois.edu
David Elvins, M.S.
Lab Manager
Department of Earth and Planetary Science
University of California, Berkeley
McCone Hall, Room 307, Mail Code 4767
Berkeley, CA 94720-4767
(510)-643-8336
elvins@berkeley.edu
John Rundle
Distinguished Professor, Physics
Interdisciplinary Professor of Physics, Civil Engineering, and Geology
Department of Physics; Department of Earth and Planetary Sciences
University of California, Davis
Physics, Room 534B
One Shields Avenue
Davis, CA 95616
(530)-752-6416
rundle@physics.ucdavis.edu