  • Applications are first-class services: users don’t have to specify paths to executables, working directories, etc. Examples of what the operations may look like:
    • Security, state management, and scheduling are all configured optionally
    • Jobs can even run on a Windows box, without a scheduler
    • Different services can exist in the same container, all accessible by separate URLs
    • Clients send command-line arguments and input files to services (Base64-encoded binary)
  • 1. Configuration file: metadata (usage, etc.), binary location, default arguments, parallel flag. 2. Service properties: Globus and scheduler information, database information, MPI location, etc. 3. Application writers don’t have to write a single line of code to expose an application as a Web service; users pass the command line and list of files as arguments.
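The configuration-file idea in the note above can be sketched as a tiny parser. This is a hypothetical illustration: the element names (`binaryLocation`, `defaultArgs`, `parallel`) are assumptions for the sketch, not Opal's actual schema.

```python
# Hypothetical sketch of an Opal-style application configuration.
# Element names are illustrative assumptions, not Opal's actual schema.
import xml.etree.ElementTree as ET

CONFIG = """
<appConfig>
  <metadata usage="meme [options] sequence_file">MEME motif discovery</metadata>
  <binaryLocation>/usr/local/bin/meme</binaryLocation>
  <defaultArgs>-dna -nmotifs 3</defaultArgs>
  <parallel>true</parallel>
</appConfig>
"""

def parse_app_config(xml_text):
    """Extract the fields a wrapper service would need to launch the binary."""
    root = ET.fromstring(xml_text)
    return {
        "usage": root.find("metadata").get("usage"),
        "binary": root.find("binaryLocation").text,
        "default_args": root.find("defaultArgs").text.split(),
        "parallel": root.find("parallel").text == "true",
    }

config = parse_app_config(CONFIG)
```

Given such a file, the wrapper has everything it needs to build the command line on behalf of the user, which is why no application-specific code is required.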
  • Tomcat: servlet container and hosting environment. Axis: SOAP engine.
  • All the above operations are defined by the Opal WSDL. Operations stay the same for all applications; no additional operations are required, since every application can be invoked via the command line.
  • MEME: discovers motifs (highly conserved regions) in groups of related DNA or protein sequences. MAST: searches sequence databases using motifs.

Presentation Transcript

  • Kepler, Opal and Gemstone Amarnath Gupta University of California San Diego
  • Changing Needs for Scientific Process
    • Observe → Hypothesize → Conduct experiment →
    • Analyze data → Compare results and Conclude →
    • → Predict
    Traditional Scientific Process (before computers) Yesterday… at least for some of us!
  • What’s different in today’s science?
    • Observe → Hypothesize → Conduct experiment →
    • Analyze data → Compare results and Conclude →
    • → Predict
    Today’s scientific process More to add to this picture: network, Grid, portals, +++
    • Observing / Data: microscopes, telescopes, particle accelerators, X-rays, MRIs, microarrays, satellite-based sensors, sensor networks, field studies…
    • Analysis, Prediction / Models and model execution: potentially large computation and visualization
  • A Brief Recap
    • What are Scientific Workflow Systems trying to achieve?
      • Creation of a problem solving environment over distributed and mostly autonomous platforms
      • Seamless access to resources and services
      • Service composition and reuse
      • Scalability
      • Detached execution and yet, user interaction
      • Reliability and fault tolerance
      • “Smart” re-runnability
      • Reproducibility of execution
      • Information discovery as an aid to workflow design
  • What is Kepler?
    • Derived from an earlier scientific data-flow system called Ptolemy-II, which is
      • Designed to model heterogeneous, concurrent systems for engineering applications
      • An actor-based workflow paradigm
    • Kepler adds to Ptolemy-II
      • New components for scientific workflows
      • Structural and Semantic type management
      • Semantic annotation and annotation propagation mechanisms
      • Distributed execution capabilities
        • Execution in a grid framework
  • Promoter Identification Workflow Source: Matt Coleman (LLNL)
  • Promoter Identification Workflow
  • Enter initial inputs, Run and Display results
  • Custom Output Visualizer
  • Kepler System Architecture (diagram: Kepler GUI extensions and the Vergil GUI over Authentication, SMS, Actor & Data Search, Type System Extensions, Provenance Framework, Kepler Object Manager, Documentation, and Smart Re-run / Failure Recovery, layered on the Kepler Core Extensions and Ptolemy)
  • What is an Actor-based Workflow?
    • An actor-based workflow is a graph with three components
      • Actors: passive (parameterized) programs specified by their input and output signatures
        • Ports: an actor has a set of input and output ports, each specified by the signature of the data tokens passing through it
        • No call semantics
        • Attributes
      • Dataflow connections: a connectivity specification that designates the flow of data from one actor to another
        • Relation: an intermediate data-holding station
      • Director: an execution control model that coordinates the execution behavior of a workflow
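The three components above can be sketched in a few lines of Python. This is an illustration of the actor/connection/director split, not Kepler's actual (Java) API; all names are invented for the sketch.

```python
# Sketch: actors with i/o signatures, dataflow connections, and a director
# that fires any actor whose input ports all hold tokens.
class Actor:
    """A passive, parameterized program specified by its i/o signature."""
    def __init__(self, name, fn, inputs, outputs):
        self.name, self.fn, self.inputs, self.outputs = name, fn, inputs, outputs

class Director:
    """Coordinates execution: fire each ready actor, move tokens along connections."""
    def run(self, actors, connections, tokens):
        # tokens: {(actor_name, port): value}
        # connections: (src_name, src_port, dst_name, dst_port)
        fired, progress = set(), True
        while progress:
            progress = False
            for a in actors:
                ready = a.name not in fired and all(
                    (a.name, p) in tokens for p in a.inputs)
                if not ready:
                    continue
                outs = a.fn(*[tokens[(a.name, p)] for p in a.inputs])
                for port, val in zip(a.outputs, outs):
                    tokens[(a.name, port)] = val
                    for s, sp, d, dp in connections:
                        if s == a.name and sp == port:
                            tokens[(d, dp)] = val     # the "relation" holds data
                fired.add(a.name)
                progress = True
        return tokens

double = Actor("double", lambda x: (x * 2,), ["in"], ["out"])
inc = Actor("inc", lambda x: (x + 1,), ["in"], ["out"])
conns = [("double", "out", "inc", "in")]
result = Director().run([double, inc], conns, {("double", "in"): 5})
```

Note that the actors expose no call semantics to each other: only the director decides when anything fires.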
  • Composite Actors
    • Composite actor AW
      • A pair ( W,Σ W ) comprising a subworkflow W and a set of distinguished ports Σ W  freeports( W ), the i/o-signature of W
      • The i/o-signatures of the subworkflow W and of the composite actor AW containing W match, i.e., Σ W = ports( AW )
      • An actor can be “refined” by treating it as a workflow and adding other restrictions around it
    • Workflow abstraction
      • One can substitute a subworkflow as a single actor
      • The subworkflow may have a different director than the higher-level workflow
  • Mineral Classification Workflow
  • PointInPolygon algorithm
  • Execution Model
    • Actors
      • Asynchronous : Many actors can be ready to fire simultaneously
        • Execution ("firing") of a node starts when (matching) data is available at a node's input ports.
        • Locally controlled events
          • Events correspond to the “firing” of an actor
        • Actor:
          • A single instruction
          • A sequence of instructions
        • Actors fire when all the inputs are available
    • Directors are the WF Engines that
      • Implement different computational models
      • Define the semantics of
        • execution of actors and workflows
        • interactions between actors
    • Process Network (PN) Director
      • Each actor executes as a separate thread or process
      • Data connections represent queues of unbounded size.
        • Actors can always write to output ports, but may get suspended (blocked) on input ports without a sufficient number of data tokens.
      • Performs buffer management, deadlock detection, allows data forks and merges
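The PN semantics above map naturally onto threads and blocking queues. A minimal sketch, assuming one thread per actor and a fixed token count per run (real PN actors loop indefinitely):

```python
# Sketch of Process Network semantics: each actor runs in its own thread,
# channels are FIFO queues, reads block until a token arrives, and writes
# never block (queue.Queue() with no maxsize models the unbounded channel).
import threading, queue

def actor(fn, in_q, out_q, n_tokens):
    """Fire n_tokens times: block on input, compute, write to output."""
    def loop():
        for _ in range(n_tokens):
            token = in_q.get()          # blocks (suspends) when channel is empty
            out_q.put(fn(token))        # never blocks on an unbounded channel
    t = threading.Thread(target=loop)
    t.start()
    return t

source_to_a, a_to_b, sink = queue.Queue(), queue.Queue(), queue.Queue()
t1 = actor(lambda x: x * 2, source_to_a, a_to_b, 3)
t2 = actor(lambda x: x + 1, a_to_b, sink, 3)
for v in (1, 2, 3):
    source_to_a.put(v)
t1.join(); t2.join()
results = [sink.get() for _ in range(3)]
```

The buffer management and deadlock detection that a real PN director performs are out of scope for this sketch.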
  • The Director
    • Execution Phases
      • pre-initialize method of all actors
        • Run once per workflow execution
        • Are the data types of all actor ports known? Are transport protocols known?
      • type-check
        • Are connected actors type compatible?
      • run*
        • initialize
          • Executed per run
          • Are all the external services (e.g., web services) working?
          • Replace dead services with live ones…
        • iteration*
          • pre-fire
            • Are all data in place?
          • fire*
          • post-fire
            • Any updates for local state management?
      • wrap-up
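The phase ordering above can be sketched as a driver loop. Method names mirror the slide's phase names; this is an illustration, not Ptolemy's actual actor interface.

```python
# Sketch of the director's execution phases as a driver loop.
class LifecycleActor:
    def __init__(self):
        self.log = []
    def preinitialize(self): self.log.append("preinitialize")  # once per workflow
    def typecheck(self):     self.log.append("typecheck")      # connected actors compatible?
    def initialize(self):    self.log.append("initialize")     # once per run
    def prefire(self):       self.log.append("prefire"); return True   # all data in place?
    def fire(self):          self.log.append("fire")
    def postfire(self):      self.log.append("postfire"); return True  # update local state
    def wrapup(self):        self.log.append("wrapup")

def run_workflow(actor, runs=1, iterations=2):
    actor.preinitialize()
    actor.typecheck()
    for _ in range(runs):                  # run*
        actor.initialize()
        for _ in range(iterations):        # iteration*
            if actor.prefire():
                actor.fire()
                actor.postfire()
    actor.wrapup()

a = LifecycleActor()
run_workflow(a)
```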
  • Polymorphic Actors: Components Working Across Data Types and Domains
    • Actor Data Polymorphism:
      • Add numbers (int, float, double, Complex)
      • Add strings (concatenation)
      • Add complex types (arrays, records, matrices)
      • Add user-defined types
    • Actor Behavioral Polymorphism:
      • In dataflow, add when all connected inputs have data
      • In a time-triggered model, add when the clock ticks
      • In discrete-event, add when any connected input has data, and add in zero time
      • In process networks, execute an infinite loop in a thread that blocks when reading empty inputs
      • In CSP, execute an infinite loop that performs rendezvous on input or output
      • In push/pull, ports are push or pull (declared or inferred) and behave accordingly
      • In real-time CORBA*, priorities are associated with ports and a dispatcher determines when to add
    By not choosing among these when defining the component, we get a huge increment in component re-usability. But how do we ensure that the component will work in all these circumstances? Source: Edward Lee et al. http://ptolemy.eecs.berkeley.edu/
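One way to see the behavioral polymorphism: keep the computation domain-independent and let the domain supply the firing rule. A sketch covering two of the disciplines listed above (all names invented for the illustration):

```python
# Sketch of behavioral polymorphism: the same "add" component under two
# firing disciplines. The firing rule belongs to the domain, not the component.
def add(*available):
    """Domain-independent computation: sum whatever tokens are handed over."""
    return sum(available)

def fire_dataflow(inputs):
    """Dataflow: fire only when ALL connected inputs have data."""
    if all(v is not None for v in inputs):
        return add(*inputs)
    return None   # not ready yet

def fire_discrete_event(inputs):
    """Discrete-event: fire when ANY connected input has data."""
    present = [v for v in inputs if v is not None]
    return add(*present) if present else None
```

Because `add` never encodes a firing rule, the same component is reusable in every domain; the type system is what ensures it still behaves correctly in each.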
  • GEON: Geosciences Network
    • Multi-institution collaboration between IT and Earth Science researchers
    • Funded by NSF “large” ITR program
    • GEON Cyberinfrastructure provides:
      • Authenticated access to data and Web services
      • Registration of data sets and tools, with metadata
      • Search for data, tools, and services, using ontologies
      • Scientific workflow environment
      • Data and map integration capability
      • Visualization and GIS mapping
    www.geongrid.org
  • LiDAR Introduction (diagram: Survey → Point Cloud (x, y, z, n, …) → Process & Classify → Interpolate / Grid → Analyze / “Do Science”) Sources: R. Haugerud, USGS; D. Harding, NASA
  • LiDAR Difficulties
    • Massive volumes of data
      • 1000s of ASCII files
      • Hard to subset
      • Hard to distribute and interpolate
    • Analysis requires high performance computing
    • Traditionally: Popularity > Resources
  • A Three-Tier Architecture
    • GOAL: Efficient LiDAR interpolation and analysis using GEON infrastructure and tools
      • GEON Portal
      • Kepler Scientific Workflow System
      • GEON Grid
    • Use scientific workflows to glue/combine different tools and the infrastructure
    Portal Grid
  • Lidar Workflow Process
    • Configuration phase
    • Subset: DB2 query on DataStar
    • Interpolate: Grass RST, Grass IDW, GMT…
    • Visualize: Global Mapper, FlederMaus, ArcIMS
    (Diagram: Portal → Grid; Subset → Analyze (move, process) → Visualize (move, render, display), with scheduling/monitoring and output processing/translation alongside)
  • Lidar Processing Workflow, using Fledermaus (diagram: Subset via IBM DB2 on Datastar → NFS-mounted disk (d1) → Analyze on the Arizona cluster (move, process → d2, a grid file) → Visualize (move, render → Fledermaus scene file (.sd) → display in iView3D/browser))
  • Lidar Workflow Portlet
    • User selections from GUI
      • Translated into a query and a parameter file
      • Uploaded to remote machine
    • Workflow description created on the fly
    • Workflow response redirected back to portlet
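The "workflow description created on the fly" step can be sketched as template substitution. The placeholder names below are assumptions for the illustration, not the actual GEON template vocabulary.

```python
# Sketch: user selections from the portlet GUI are substituted into a
# workflow template to produce the workflow description on the fly.
from string import Template

WORKFLOW_TEMPLATE = Template("""\
<workflow>
  <subset query="$query"/>
  <interpolate algorithm="$algorithm" resolution="$resolution"/>
  <visualize tool="$tool"/>
</workflow>
""")

def build_workflow(selections):
    """selections: dict mapping template placeholders to GUI choices."""
    return WORKFLOW_TEMPLATE.substitute(selections)

desc = build_workflow({
    "query": "SELECT x,y,z FROM lidar WHERE region='NW'",
    "algorithm": "spline",
    "resolution": "5m",
    "tool": "ArcIMS",
})
```

The resulting description, together with the parameter file, is what gets uploaded to the remote machine for execution.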
  • (Diagram: Lidar post-processing workflow portlet. A spatial query and parameter XML from the client/GEON portal create a workflow description, mapped onto the grid via Pegasus and submitted as a Kepler workflow. DB2 with ArcSDE serves x, y, z and attribute raw data to a compute cluster over an NFS-mounted disk; Grass surfacing algorithms (spline, IDW, block mean, …) process it into binary/ASCII grids, text files, and Tiff/Jpeg/Gif outputs; ArcInfo and ArcIMS render maps; users can download data.)
  • Portlet User Interface - Main Page
  • Portlet User Interface - Parameter Entry 1
  • Portlet User Interface - Parameter Entry 2
  • Portlet User Interface - Parameter Entry 3
  • Behind the Scenes: Workflow Template
  • Filled Template
  • Example Outputs
  • With Additional Algorithms
  • Kepler System Architecture (architecture diagram repeated as a roadmap)
  • The Hybrid Type System
    • Every port of an actor has a type signature
      • Structural Types
        • Any type system admitted by the actor
          • DBMS data types, XML schema, Hindley-Milner type system …
      • Semantic Types
        • An expression in a logical language to specify what a data object means
        • In the SEEK project, such a statement is expressed in a DL over an ontology
          • MEASUREMENT ⊓ ∃ ITEM_MEASURED . SPECIES_OCCURRENCE
    • A workflow is well-typed if, for every pair of connected ports:
      • The structural type of the output port is a subtype of that of the input port
      • The semantic type of the output port is logically subsumed by that of the input port
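The well-typedness rule above can be sketched as a per-connection check. Both the structural subtype table and the semantic subsumption table below are toy stand-ins for the real type system and ontology reasoner.

```python
# Sketch of the hybrid well-typedness check: each connection must pass both a
# structural subtype test and a semantic subsumption test.
STRUCTURAL_SUBTYPES = {"int": {"int", "float"}, "float": {"float"}}  # int <: float
SEMANTIC_SUBSUMPTION = {  # concept -> concepts that subsume it
    "SpeciesOccurrence": {"SpeciesOccurrence", "Measurement"},
}

def structural_ok(out_t, in_t):
    return in_t in STRUCTURAL_SUBTYPES.get(out_t, {out_t})

def semantic_ok(out_t, in_t):
    return in_t in SEMANTIC_SUBSUMPTION.get(out_t, {out_t})

def well_typed(connections):
    """connections: list of (output_port, input_port) pairs, each port a
    (structural_type, semantic_type) tuple."""
    return all(structural_ok(o[0], i[0]) and semantic_ok(o[1], i[1])
               for o, i in connections)

ok = well_typed([(("int", "SpeciesOccurrence"), ("float", "Measurement"))])
bad = well_typed([(("float", "Measurement"), ("int", "Measurement"))])
```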
  • Hybridization Constraints
    • A hybridization constraint
      • a logical expression connecting instances of a structural type with instances of the corresponding semantic type for a port
      • For a relational type r(site, day, spp, occ)
    • I/O Constraint
      • A constraint relating the input and output port signatures of an actor
    • Propagating hybridization constraints
    Having a tuple in r implies that there is a measurement y of type species-occurrence corresponding to the tuple’s occ value x_occ
  • How can my (grid) application become a Kepler actor?
    • By making it a web service
      • For applications that have a command line interface
      • OPAL can convert the application into a web service
    • What is Opal?
      • a Web services wrapper toolkit
        • Pros: Generic, rapid deployment of new services
        • Cons: Less flexible implementation, weak data typing due to use of generic XML schemas
  • Opal is an Application Wrapping Service (diagram: Gemstone, PMV/Vision, and Kepler clients call Application Services, Security Services (GAMA), and State Mgmt, which reach a Condor pool, an SGE cluster, and a PBS cluster through Globus)
  • The Opal Toolkit: Overview
    • Enables rapid deployment of scientific applications as Web services (< 2 hours)
    • Steps
      • Application writers create configuration file(s) for a scientific application
      • Deploy the application as a Web service using Opal’s simple deployment mechanism (via Apache Ant)
      • Users can now access this application as a Web service via a unique URL
  • Opal Architecture (diagram: Opal Web services in an Axis engine inside a Tomcat container, configured by container properties (scheduler, security, database setups) and per-service configuration (binary, metadata, arguments), dispatching to cluster/grid resources)
  • Service Operations
    • Get application metadata: Returns metadata specified inside the application configuration
    • Launch job: Accepts list of arguments and input files (Base64 encoded), launches the job, and returns a jobID
    • Query job status: Returns status of running job using the jobID
    • Get job outputs: Returns the locations of job outputs using the jobID
    • Get output as Base64: Returns an output file in Base64 encoded form
    • Destroy job: Uses the jobID to destroy a running job
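A client's view of the six operations above can be sketched against an in-memory stub; a real client would make SOAP calls to the deployed service's URL instead. The class and method names are invented for the sketch.

```python
# Sketch of the Opal service operations from a client's perspective, with an
# in-memory stub standing in for a deployed service. Input files travel
# Base64-encoded, as the slide describes.
import base64, itertools

class OpalServiceStub:
    _ids = itertools.count(1)

    def __init__(self, metadata):
        self.metadata, self.jobs = metadata, {}

    def get_app_metadata(self):
        return self.metadata

    def launch_job(self, args, input_files):
        job_id = f"job{next(self._ids)}"
        decoded = {n: base64.b64decode(b64) for n, b64 in input_files.items()}
        self.jobs[job_id] = {"status": "RUNNING", "args": args,
                             "inputs": decoded,
                             "outputs": {"stdout.txt": b"done"}}
        return job_id

    def query_status(self, job_id):
        return self.jobs[job_id]["status"]

    def get_outputs(self, job_id):
        return list(self.jobs[job_id]["outputs"])

    def get_output_as_base64(self, job_id, name):
        return base64.b64encode(self.jobs[job_id]["outputs"][name]).decode()

    def destroy_job(self, job_id):
        self.jobs[job_id]["status"] = "DESTROYED"

svc = OpalServiceStub("MEME motif discovery")
jid = svc.launch_job("-dna -nmotifs 3",
                     {"seqs.fasta": base64.b64encode(b">s1\nACGT\n").decode()})
```

Since every application is driven by a command line and file list, this one fixed interface covers all wrapped applications.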
  • MEME+MAST Workflow using Kepler
  • Kepler Opal Web Services Actor
  • Opal and Gemstone
  • Opal Summary
    • Opal enables rapidly exposing legacy applications as Web services
      • Provides features like Job management, Scheduling, Security, and Persistence
    • More information, downloads, documentation:
      • http://nbcr.net/services/
  • Kepler System Architecture (architecture diagram repeated as a roadmap)
  • Joint Authentication Framework
    • Requirements:
      • Coordinating between the different security architectures
        • GEON uses GAMA, which requires a single certificate authority.
        • SEEK uses LDAP, which has a centralized certificate authority with distributed subordinate CAs.
      • To connect LDAP with GAMA
      • Coordinating between 2 different GAMA servers
      • Single sign-on/authentication at the initialize step of the run for multiple actors that are using authentication
        • This has issues related to single GAMA repository vs multiple, and requires users to have accounts on all servers.
        • Kepler needs to be able to handle expired certificates for long-running workflows and/or for users who use it for a long time.
        • A trust relation between the different GAMA servers must be established in order to allow for single authentication.
  • Functional Prototype Completed
    • APIs and tests cases in place
    • More work required on certificate renewal and multiple server access
  • Vergil is the GUI for Kepler
    • Actor ontology and semantic search for actors
    • Search → Drag and drop → Link via ports
    • Metadata-based search for datasets
  • Back to Kepler - Actor Search
    • Kepler Actor Ontology
      • Used in searching actors and creating conceptual views (= folders)
    • Currently 160 Kepler actors added!
  • Data Search and Usage of Results
    • Kepler DataGrid
      • Discovery of data resources through local and remote services
        • SRB,
        • Grid and Web services,
        • DB connections
      • Registry of datasets on the fly using workflows
  • Vergil Updates
    • To make it more useful to the user
      • Updated actor icons
      • Menu redesign
    • Improve readability
    • Develop cohesive visual language
    • Follow standard HCI principles
    • Improve organization
    (Icon legend: Composite, DB Query, Computation or Operation, Transformation, Filter, File Operation, Web Service)
  • Kepler Archives
    • Purpose : Encapsulate WF data and actors in an archive file
      • … inlined or by reference
      • … version control
        • More robust workflow exchange
        • Easy management of semantic annotations
        • Plug-in architecture (Drop in and use)
        • Easy documentation updates
    • A jar-like archive file (.kar) including a manifest
    • All entities have unique ids (LSID)
    • Custom object manager and class loader
    • UI and API to create, define, search and load .kar files
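The jar-like archive idea can be sketched with the standard zipfile module. The manifest keys below are assumptions for the illustration, not the actual .kar manifest format.

```python
# Sketch of a jar-like .kar archive: a zip file carrying a manifest plus
# entity definitions, with the archive addressed by an LSID.
import io, zipfile

def create_kar(entries, lsid):
    """entries: {filename: bytes}. Returns the archive contents as bytes."""
    manifest = f"Manifest-Version: 1.0\nlsid: {lsid}\n".encode()
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as kar:
        kar.writestr("MANIFEST.MF", manifest)       # manifest first, jar-style
        for name, data in entries.items():
            kar.writestr(name, data)
    return buf.getvalue()

kar_bytes = create_kar({"MultiplyDivide.moml": b"<entity .../>"},
                       "urn:lsid:localhost:kar:1:1")
names = zipfile.ZipFile(io.BytesIO(kar_bytes)).namelist()
```

A custom object manager and class loader would then resolve actors out of such archives by LSID at load time.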
  • KAR File Example
    <entity name="Multiply or Divide" class="ptolemy.kernel.ComponentEntity">
      <property name="entityId" value="urn:lsid:localhost:actor:80:1"
                class="org.kepler.moml.NamedObjId"/>
      <property name="documentation" class="org.kepler.moml.DocumentationAttribute"/>
      <property name="class" value="ptolemy.actor.lib.MultiplyDivide"
                class="ptolemy.kernel.util.StringAttribute">
        <property name="id" value="urn:lsid:localhost:class:955:1"
                  class="ptolemy.kernel.util.StringAttribute"/>
      </property>
      <property name="multiply" class="org.kepler.moml.PortAttribute">
        <property name="direction" value="input" class="ptolemy.kernel.util.StringAttribute"/>
        <property name="dataType" value="unknown" class="ptolemy.kernel.util.StringAttribute"/>
        <property name="isMultiport" value="true" class="ptolemy.kernel.util.StringAttribute"/>
      </property>
      <property name="divide" class="org.kepler.moml.PortAttribute">
        <property name="direction" value="input" class="ptolemy.kernel.util.StringAttribute"/>
        <property name="dataType" value="unknown" class="ptolemy.kernel.util.StringAttribute"/>
        <property name="isMultiport" value="true" class="ptolemy.kernel.util.StringAttribute"/>
      </property>
      <property name="output" class="org.kepler.moml.PortAttribute">
        <property name="direction" value="output" class="ptolemy.kernel.util.StringAttribute"/>
        <property name="dataType" value="unknown" class="ptolemy.kernel.util.StringAttribute"/>
        <property name="isMultiport" value="false" class="ptolemy.kernel.util.StringAttribute"/>
      </property>
      <property name="semanticType00"
                value="http://seek.ecoinformatics.org/ontology#ArithmeticMathOperationActor"
                class="org.kepler.sms.SemanticType"/>
    </entity>
  • Kepler Object Manager
    • Designed to access local and distributed objects
    • Objects: data, metadata, annotations, actor classes, supporting libraries, native libraries, etc. archived in kar files
    • Advantages:
      • Reduce the size of Kepler distribution
        • Only ship the core set of generic actors and domains
      • Easy exchange of full or partial workflows for collaborations
      • Publish full workflows with their bound data
        • Becomes a provenance system for derived data objects
      • => Separate workflow repository and distributions easily
  • Initial Work on Provenance Framework
    • Provenance
      • Track origin and derivation information about scientific workflows, their runs and derived information (datasets, metadata…)
    • Need for Provenance
      • Association of process and results
      • reproduce results
      • “explain & debug” results (via lineage tracing, parameter settings, …)
      • optimize: “Smart Re-Runs”
    • Types of Provenance Information :
      • Data provenance
        • Intermediate and end results including files and db references
      • Process (=workflow instance) provenance
        • Keep the wf definition with data and parameters used in the run
      • Error and execution logs
      • Workflow design provenance (quite different)
        • WF design is a (little supported) process (art, magic, …)
        • for free via cvs: edit history
        • need more “structure” (e.g. templates) for individual & collaborative workflow design
  • Kepler Provenance Recording Utility
    • Parametric and customizable
      • Different report formats
      • Variable levels of detail
        • Verbose-all, verbose-some, medium, on error
      • Multiple cache destinations
    • Saves information on
      • User name, Date, Run, etc…
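The parametric recorder described above can be sketched as a verbosity-filtered event log. The level names follow the slide; the record fields (user, run, date) match the last bullet. The class and field names are invented for the sketch.

```python
# Sketch of a parametric provenance recorder: the configured verbosity level
# decides which events are kept; each record carries user, run id, and date.
import datetime

LEVELS = {"on-error": 0, "medium": 1, "verbose-some": 2, "verbose-all": 3}

class ProvenanceRecorder:
    def __init__(self, level="medium", user="unknown"):
        self.level, self.user, self.records = LEVELS[level], user, []

    def record(self, run_id, event, min_level):
        """Keep the event only if the configured level covers min_level."""
        if self.level >= LEVELS[min_level]:
            self.records.append({
                "user": self.user, "run": run_id, "event": event,
                "date": datetime.datetime.now().isoformat(),
            })

rec = ProvenanceRecorder(level="medium", user="agupta")
rec.record(1, "workflow started", "medium")
rec.record(1, "token passed on relation r3", "verbose-all")  # filtered out
rec.record(1, "actor failed", "on-error")
```

Swapping the `records` list for a file, database, or other cache destination is what the "multiple cache destinations" bullet points at.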
  • Provenance: Possible Next Steps
    • Kepler Provenance
      • Deciding on terms and definitions
      • .kar file generation, registration and search for provenance information
      • Possible data/metadata formats
      • Automatic report generation from accumulated data
      • A GUI to keep track of the changes
      • Adding provenance repositories
      • A relational schema for the provenance info in addition to the existing XML
  • What other system functions does provenance relate to?
    • Failure recovery
    • Smart re-runs
    • Semantic extensions
    • Kepler Data Grid
    • Reporting and Documentation
    • Authentication
    • Data registration
    Re-run only the updated/failed parts. Guided documentation generation and updates.
  • Where Kepler Meets the Grid
    • Abstract Grid workflow actors
      • Stage-execute-fetch (sub-)workflows
        • Copy files from one resource to computation node
        • Perform execution – possibly through a grid job scheduler
        • Get the result files back and continue with the rest of the workflow
      • Actors
        • Authenticate actor
          • over Globus Grid, SRB and databases
        • Copy actor
          • for both stage and fetch
        • Job executor actor
          • special wrappers for ssh-based execution, web-service clients, Grid job runner proxies, and actors for Nimrod- and APST-based submissions
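The stage-execute-fetch pattern described above can be sketched with local file copies and a subprocess standing in for grid transfer and job submission; all paths and names are invented for the sketch.

```python
# Sketch of a stage-execute-fetch (sub-)workflow: copy the input to the
# "computation node", run the job there, and fetch the result back.
import pathlib, shutil, subprocess, sys, tempfile

def stage_execute_fetch(input_file, command, output_name, fetch_dir):
    work = pathlib.Path(tempfile.mkdtemp())          # stand-in for the remote node
    staged = work / pathlib.Path(input_file).name
    shutil.copy(input_file, staged)                  # stage: move input to the node
    out = work / output_name
    with out.open("wb") as fh:                       # execute: run the job there
        subprocess.run(command + [str(staged)], stdout=fh, check=True)
    fetched = pathlib.Path(fetch_dir) / output_name
    shutil.copy(out, fetched)                        # fetch: bring the result back
    return fetched

src = pathlib.Path(tempfile.mkdtemp()) / "in.txt"
src.write_text("abc")
result = stage_execute_fetch(
    str(src),
    [sys.executable, "-c",
     "import sys; sys.stdout.write(open(sys.argv[1]).read().upper())"],
    "out.txt", tempfile.mkdtemp())
```

In the real setting the copies would go through GridFTP/SRB and the execution through a scheduler, but the three-step shape of the workflow is the same.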
  • Where Kepler Meets the Grid
        • Monitoring actor
          • Light monitoring: user notified of actor failures (e.g. in NIMROD) only upon completion
          • Medium monitoring: same, with immediate notification
          • Heavy monitoring: notifies on every communication, including immediate actor failure
        • Filter actor
          • Filtering and subsetting remote data of different formats
        • Data Discovery actor
        • Service Discovery actor
        • Storage actor
        • Transformation and Query actors
          • Shim generation
          • Querying of databases and mediators
  • Hot Topics in Kepler
    • http://kepler-project.org/Wiki.jsp?page=HotTopics
  • To Sum Up
    • … is an open-source system and collaboration
      • is a ~3-year-old project
      • grows by application pull from contributors
      • most topics are designed jointly
      • is developed by multiple developers under different projects in different countries
      • is now being used in actual scientific research
    • The screenshots shown were results of initial success!
    • There is a lot more to cover and work on…
      • New foci at SDSC-Kepler around provenance and distributed computing
  • Questions… Thanks! Amarnath Gupta [email_address] +1 (858) 822-0994 http://www.sdsc.edu