The document describes building a World Wide Telescope by federating astronomical data from different archives and making it accessible via web services. It explains how the Sloan Digital Sky Survey data was put online by designing typical science questions, loading the data into a SQL database with spatial-indexing extensions, and building tools and interfaces for fast querying. Performance results show that queries are typically answered in seconds to minutes. Lessons learned include the importance of sequential scan speed, covering indices, common operations such as counting and binning, and the use of materialized views and spatial indices.
2. How to build the World Wide Telescope?
Web Services & Grid Enable Virtual Observatory
http://www.astro.caltech.edu/nvoconf/
http://www.voforum.org/
The Internet will be the world’s best telescope:
It has data on every part of the sky
In every measured spectral band: optical, x-ray, radio..
As deep as the best instruments (2 years ago).
It is up when you are up.
The “seeing” is always great
(no working at night, no clouds, no moons, no ...).
It’s a smart telescope:
links objects and data to literature on them.
W3C & IETF standards provide:
Naming
Authorization / Security / Privacy
Distributed Objects
Discovery, Definition, Invocation, Object Model
Higher-level services: workflow, transactions, DB, ...
A great test bed for .NET ideas
3. Steps to World Wide Telescope
Define a set of Astronomy Objects and methods.
Based on UDDI, WSDL, XSL, SOAP, dataSet
Use them locally to debug ideas
Schema, Units,…
Dataset problems
Typical use scenarios.
Federate different archives
Each archive is a web service
Global query tool accesses them
Working on this with
Sloan Digital Sky Survey and Caltech/Palomar.
Especially Alex Szalay et al. at JHU
4. Why Astronomy Data?
It has no commercial value
No privacy concerns
Can freely share results with others
Great for experimenting with algorithms
It is real and well documented
High-dimensional data (with confidence intervals)
Spatial data
Temporal data
Many different instruments from
Many different places and
Many different times
Federation is a goal
The questions are interesting
How did the universe form?
There is a lot of it (petabytes)
[Multi-wavelength sky images: IRAS 100µ, ROSAT ~keV, DSS optical, 2MASS 2µ, IRAS 25µ, NVSS 20cm, WENSS 92cm, GB 6cm]
5. Step 1: Putting SDSS Online (Scenario Design)
Astronomers proposed 20 questions
Typical of things they want to do
Each would require a week of programming in tcl / C++ / FTP
Goal: make it easy to answer questions
DB and tools design motivated by this goal
Implemented utility procedures
JHU built GUI for Linux clients
Q1: Find all galaxies without unsaturated pixels within 1' of a given point (ra=75.327, dec=21.023).
Q2: Find all galaxies with blue surface brightness between 23 and 25 mag per square arcsecond, -10<super galactic latitude (sgb)<10, and declination less than zero.
Q3: Find all galaxies brighter than magnitude 22, where the local extinction is >0.75.
Q4: Find galaxies with an isophotal surface brightness (SB) larger than 24 in the red band, with an ellipticity>0.5, and with the major axis of the ellipse having a declination of between 30" and 60" arc seconds.
Q5: Find all galaxies with a de Vaucouleurs profile (r^1/4 falloff of intensity on disk) and photometric colors consistent with an elliptical galaxy.
Q6: Find galaxies that are blended with a star; output the deblended galaxy magnitudes.
Q7: Provide a list of star-like objects that are 1% rare.
Q8: Find all objects with unclassified spectra.
Q9: Find quasars with a line width >2000 km/s and 2.5<redshift<2.7.
Q10: Find galaxies with spectra that have an equivalent width in Hα >40Å (Hα is the main hydrogen spectral line).
Q11: Find all elliptical galaxies with spectra that have an anomalous emission line.
Q12: Create a gridded count of galaxies with u-g>1 and r<21.5 over 60<declination<70 and 200<right ascension<210, on a grid of 2', and create a map of masks over the same grid.
Q13: Create a count of galaxies for each of the HTM triangles which satisfy a certain color cut, like 0.7u-0.5g-0.2i<1.25 && r<21.75; output it in a form adequate for visualization.
Q14: Find stars with multiple measurements that have magnitude variations >0.1. Scan for stars that have a secondary object (observed at a different time) and compare their magnitudes.
Q15: Provide a list of moving objects consistent with an asteroid.
Q16: Find all objects similar to the colors of a quasar at 5.5<redshift<6.5.
Q17: Find binary stars where at least one of them has the colors of a white dwarf.
Q18: Find all objects within 30 arcseconds of one another that have very similar colors: that is, where the color ratios u-g, g-r, r-i agree to within 0.05m.
Q19: Find quasars with a broad absorption line in their spectra and at least one galaxy within 10 arcseconds. Return both the quasars and the galaxies.
Q20: For each galaxy in the BCG data set (brightest cluster galaxy), in 160<right ascension<170, -25<declination<35, count the galaxies within 30" of it that have a photoz within 0.05 of that galaxy.
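For flavor, here is a minimal sketch of what one of these questions (Q9) might look like in SQL. The table and column names (SpecObj, SpecLine, specClass, sigma, wave) are assumptions modeled on the later SkyServer schema, not taken from this deck:
select s.specObjID, s.z as redshift
from SpecObj s
join SpecLine l on l.specObjID = s.specObjID -- one row per detected spectral line
where s.specClass = 3 -- 3 = quasar (assumed code value)
and s.z between 2.5 and 2.7
and 2.3548 * l.sigma / l.wave * 299792.458 > 2000 -- Gaussian FWHM converted to a velocity width in km/s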
6. Two Kinds of SDSS Data in an SQL DB
(objects and images all in DB)
15M photo objects with ~400 attributes each
50K spectra with ~30 lines per spectrum
7. Spatial Data Access – SQL extension
(Szalay, Kunszt, Brunner) http://www.sdss.jhu.edu/htm
Added Hierarchical Triangular Mesh (HTM)
table-valued function for spatial joins.
Every object has a 20-deep Mesh ID.
Given a spatial definition:
Routine returns up to ~10 covering triangles.
Spatial query is then up to ~10 range queries.
Very fast: 10,000 triangles / second / cpu.
Based on SQL Server extended stored procedures (a usage sketch follows the figure below)
[Figure: recursive HTM subdivision. Triangle 2 splits into triangles 2,0 2,1 2,2 2,3; triangle 2,3 splits further into 2,3,0 2,3,1 2,3,2 2,3,3]
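To make the join pattern concrete, here is a minimal sketch of an HTM spatial query. The cover function dbo.fHtmCoverCircleEq and the htmID column are illustrative assumptions, not the exact API of the deck's HTM extension:
select o.objID, o.ra, o.dec
from dbo.fHtmCoverCircleEq(185.0, -0.5, 1.0) c -- ~10 covering triangles for a 1' circle
join PhotoObj o
on o.htmID between c.htmIDstart and c.htmIDend -- one index range query per triangle
where dbo.fDistanceArcMinEq(185.0, -0.5, o.ra, o.dec) < 1.0 -- exact distance test trims the corners of the cover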
8. Q15: Fast Moving Objects
Find near earth asteroids:
Finds 3 objects in 11 minutes
(or 52 seconds with an index)
Ugly,
but consider the alternatives (C programs and files and ...)
SELECT r.objID as rId, g.objId as gId,
dbo.fGetUrlEq(g.ra, g.dec) as url
FROM PhotoObj r, PhotoObj g
WHERE r.run = g.run and r.camcol=g.camcol
and abs(g.field-r.field)<2 -- nearby
-- the red selection criteria
and ((power(r.q_r,2) + power(r.u_r,2)) > 0.111111 )
and r.fiberMag_r between 6 and 22 and r.fiberMag_r < r.fiberMag_g
and r.fiberMag_r < r.fiberMag_i
and r.parentID=0 and r.fiberMag_r < r.fiberMag_u
and r.fiberMag_r < r.fiberMag_z
and r.isoA_r/r.isoB_r > 1.5 and r.isoA_r>2.0
-- the green selection criteria
and ((power(g.q_g,2) + power(g.u_g,2)) > 0.111111 )
and g.fiberMag_g between 6 and 22 and g.fiberMag_g < g.fiberMag_r
and g.fiberMag_g < g.fiberMag_i
and g.fiberMag_g < g.fiberMag_u and g.fiberMag_g < g.fiberMag_z
and g.parentID=0 and g.isoA_g/g.isoB_g > 1.5 and g.isoA_g > 2.0
-- the matchup of the pair
and sqrt(power(r.cx -g.cx,2)+ power(r.cy-g.cy,2)+power(r.cz-g.cz,2))*(10800/PI())< 4.0
and abs(r.fiberMag_r-g.fiberMag_g)< 2.0
10. Performance (on current SDSS data)
Run times: on a $15k Compaq server
(2 CPUs, 1 GB RAM, 8 disks)
Some take 10 minutes
Some take 1 minute
Median ~ 22 sec.
GHz processors are fast!
(10 MIPS/IO, 200 instructions/byte)
2.5M records/s/CPU
[Chart: elapsed and CPU time per query (seconds, log scale 1 to 1,000) for Q01 through Q20]
[Chart: IO count vs. CPU seconds per query; points cluster along the ~1,000 IOs per CPU second line, i.e. ~64 MB of IO per CPU second]
11. Sequential Scan Speed Is Important
In high-dimensional data, the best way is to search.
A sequential scan of a covering index is 10x faster:
seconds vs. minutes (see the index sketch below).
SQL scans at 2M records/s/CPU (!)
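As a concrete illustration of the covering-index point (the index and table names here are assumptions, not from the deck): on SQL Server 2000 a covering index is simply a composite index containing every column a query touches, so the scan reads the narrow index rather than the wide base table.
create index Stars_ugriz on Stars (u, g, r, i, z)
-- a color scan like Q7 now reads five values per row from this index
-- instead of the ~400-attribute base rows: roughly 10x fewer bytes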
12. Cosmo: 64-bit SQL Server & Windows
Computing the cosmological constant
[Chart: CPU time (hrs, log scale) vs. number of galaxies (millions, 0-100) for 1, 4, 32, and 256 GB of main memory, with reference lines at a day, a week, a month, a year, and a decade]
Compares simulated & observed galaxy distributions
Measure the distance between each pair of galaxies
A lot of work (10^8 x 10^8 = 10^16 steps; see the sketch below)
Good algorithms make this ~N log2 N
Needs LARGE main memory
Using Itanium donated by Compaq:
64-bit Windows & SQL Server
(Alex Szalay, Adrian Pope @ JHU).
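A hedged sketch of why the naive pair count is so expensive (table and function names are illustrative; the real Cosmo code uses tree-based algorithms rather than this join):
select cast(dbo.fDistanceArcMinEq(a.ra, a.dec, b.ra, b.dec) as int) as bin,
count(*) as pairs
from Galaxy a
join Galaxy b on a.objID < b.objID -- every unordered pair once: ~10^16 steps for 10^8 galaxies
group by cast(dbo.fDistanceArcMinEq(a.ra, a.dec, b.ra, b.dec) as int)
order by bin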
13. Where We Are Today
One Astronomy Archive Web Service works
Federating 3 web services (JHU, Caltech, Space Telescope)
WWT is a great .NET application
Federating heterogeneous data sources.
Cooperating organizations
An Information At Your Fingertips challenge.
SDSS DB is a data mining challenge:
get your personal copy at http://research.microsoft.com/~gray/sdss
Papers about this at:
http://SkyServer.SDSS.org/
http://research.microsoft.com/~gray/ (see paragraph 1)
DB available for experiments
14. Sloan Digital Sky Survey
http://www.sdss.org/
For the last 12 years astronomers
have been building a telescope (with funding from the Sloan Foundation, NSF, and a dozen universities): $90M.
Y2000: engineer, calibrate, commission: now public data.
5% of the survey, 600 sq degrees, 15 M objects
60GB, ½ TB raw.
This data includes most of the known high z quasars.
It has a lot of science left in it but….
Now the data is arriving:
250GB/nite (20 nights per year) = 5TB/y.
100 M stars, 100 M galaxies, 1 M spectra.
15. What we learned from the 20 Queries
All have fairly short SQL programs --
a substantial advance over tcl / C++
Many are sequential
one-pass and two-pass over data
Covering indices make scans run fast
Table-valued functions are wonderful,
but their limitations are painful.
Counting, binning, histograms VERY common
Spatial indices helpful
Materialized view (Neighbors) helpful (see the sketch below)
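A minimal sketch of the Neighbors materialized-view pattern (schema, radius, and function names are assumptions for illustration): pairs of objects within 30 arcseconds are computed once, so proximity questions like Q18 and Q20 become plain joins.
-- build once (in practice the HTM range join sketched above keeps this tractable)
select a.objID, b.objID as neighborObjID,
dbo.fDistanceArcMinEq(a.ra, a.dec, b.ra, b.dec) as distanceMins
into Neighbors
from PhotoObj a
join PhotoObj b on a.objID <> b.objID
where dbo.fDistanceArcMinEq(a.ra, a.dec, b.ra, b.dec) < 0.5 -- 30 arcseconds
-- then a Q18-style query is a simple join:
select n.objID, n.neighborObjID
from Neighbors n
join PhotoObj p on p.objID = n.objID
join PhotoObj q on q.objID = n.neighborObjID
where abs((p.u - p.g) - (q.u - q.g)) < 0.05
and abs((p.g - p.r) - (q.g - q.r)) < 0.05
and abs((p.r - p.i) - (q.r - q.i)) < 0.05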
16. An Easy One
Q7: Find rare star-like objects.
Found 14,681 buckets;
the first 140 buckets hold 99% of the objects
Time: 62 seconds
CPU-bound: 226k records/second (2 CPUs),
250 KB/s.
select cast((u-g) as int) as ug, -- bin each color into 1-magnitude buckets
cast((g-r) as int) as gr,
cast((r-i) as int) as ri,
cast((i-z) as int) as iz,
count(*) as Population -- stars per color-space bucket
from stars
group by cast((u-g) as int), cast((g-r) as int),
cast((r-i) as int), cast((i-z) as int)
order by count(*) -- order buckets by population
17. An Easy One
Q15: Find asteroids
Sounds hard, but
there are 5 pictures of each object at 5 different times
(color filters), so we can "see" velocity.
The image pipeline computes velocity.
Computing it from the 5 color x,y positions would also be fast.
Finds 1,303 objects in 3 minutes at 140 MBps
(could go 2x faster with more disks)
select objId, dbo.fGetUrlEq(ra,dec) as url, -- return object ID & URL
sqrt(power(rowv,2)+power(colv,2)) as velocity
from photoObj -- check each object
where (power(rowv,2) + power(colv,2)) -- square of velocity
between 50 and 1000 -- huge values = error