Discusses some of the challenges around applying semantics at scale (tens of billions of triples and larger). Describes some of the patterns that can be used to meet those challenges.
Challenges and patterns for semantics at scale
1. C O M P U T E | S T O R E | A N A L Y Z E
Challenges and Patterns for Semantics at Scale
Rob Vesse
rvesse@cray.com
@RobVesse
2. Overview
● Background
● Challenges & Patterns
  ● Obtaining Data
  ● Input Format
  ● Blank Nodes
  ● Graph Partitioning
  ● Benchmarking
3. Background
● PhD in Computer Science
● Open Source
  ● Apache Jena
  ● dotNetRDF
● Software Engineer at Cray Inc
  ● In Analytics R&D
  ● Last 5 years
● Cray sells a range of analytics products
  ● Cray Graph Engine
    ● Massively scalable parallel RDF database and SPARQL engine
    ● Runs on GX and XC hardware platforms
    ● GX nodes are roughly equivalent to an r3.8xlarge EC2 instance
4. Background - Terminology
● What do we mean by at scale?
  ● Typical customers have 10s of billions of triples
  ● Some are around the 100 billion mark
● What do we mean by parallelism?
  ● On node, i.e. multiple threads/processes
  ● Across nodes, i.e. multiple machines
5. Challenge #1 - Obtaining Data
● Most data does not start out as RDF
  ● Relational databases, spreadsheets, structured/semi-structured data, flat files etc.
  ● It varies depending on customer domain
● Therefore the first challenge is to get the data into RDF
● Problems
  ● Many ETL tools don't support RDF as an output format
  ● Even if tools do support it they are not scalable
    ● E.g. D2RQ (http://d2rq.org)
6. Pattern #1 - Leverage Big Data
● Lots of big data projects can be used to implement ETL pipelines
  ● E.g. MapReduce, Spark, Flume, Sqoop
● There are some libraries available that provide basic plumbing for this e.g.
  ● Apache Jena Elephas
    ● http://jena.apache.org/documentation/hadoop/index.html
● Unfortunately ETL tends to be very customer and data specific
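The per-record step of such an ETL pipeline can be sketched as a simple map function from tabular rows to NTriples lines. This is a minimal illustration, not from the talk: the column names, the `http://example.org/` URI scheme, and the `row_to_ntriples` helper are all assumptions, and in a real MapReduce or Spark job this function would be the map phase (e.g. `rows.flatMap(row_to_ntriples)`).

```python
# Hypothetical sketch: map one tabular record (a dict) to NTriples lines.
# URI scheme and column names are illustrative only.

def row_to_ntriples(row, base="http://example.org/"):
    """Convert a dict-like record into a list of NTriples lines."""
    subject = f"<{base}person/{row['id']}>"
    triples = []
    for column, value in row.items():
        if column == "id":
            continue
        predicate = f"<{base}schema/{column}>"
        # Escape backslashes and quotes per the NTriples grammar
        literal = str(value).replace("\\", "\\\\").replace('"', '\\"')
        triples.append(f'{subject} {predicate} "{literal}" .')
    return triples

lines = row_to_ntriples({"id": 1, "name": "Alice", "city": "Seattle"})
```

Because each record maps to its own lines with no shared state, the same function parallelises trivially across workers, which is exactly why the big data frameworks named above fit this job.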
7. Challenge #2 - Input Format
● What data format should we be using?
● There are at least four widely used standard serialisations:
  ● NTriples/NQuads, Turtle/TriG, RDF/XML and JSON-LD
  ● Plus the variety of lesser used formats e.g. TriX, RDF/JSON, HDT, RDF/Thrift, Sesame Binary RDF etc.
● Choice of format affects how you process it
  ● Parallel processing
  ● Error tolerance
  ● State tracking
8. Pattern #2 - Use NTriples/NQuads
● Simple but effective
● Can be arbitrarily split into chunks
  ● E.g. pick some number of bytes, split into chunks, seek from chunk boundaries to find actual line boundaries, process line by line
● Extremely error tolerant
  ● Every line can be processed independently without needing any shared state
● Even this has challenges:
  ● Verbose format, so large datasets require extremely large files
  ● Blank nodes can still be problematic
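The chunk-splitting step described above can be sketched in a few lines: pick nominal byte offsets, then extend each chunk forward to the next newline so every chunk holds only whole NTriples statements. This is a minimal single-machine sketch of the idea, not any particular framework's implementation (Hadoop input formats such as those in Jena Elephas do the equivalent internally); the `chunk_boundaries` helper is an assumption.

```python
import io

def chunk_boundaries(path, chunk_size):
    """Compute (start, end) byte ranges aligned to line boundaries.

    Workers can then read their ranges independently, because every
    NTriples statement is a single self-contained line.
    """
    boundaries = []
    with open(path, "rb") as f:
        f.seek(0, io.SEEK_END)
        size = f.tell()
        start = 0
        while start < size:
            end = min(start + chunk_size, size)
            if end < size:
                # Seek from the nominal boundary to the actual line boundary
                f.seek(end)
                f.readline()
                end = f.tell()
            boundaries.append((start, end))
            start = end
    return boundaries
```

Each `(start, end)` range can then be handed to a separate thread, process, or machine, each of which seeks to `start` and parses line by line, which is the parallelism property that motivates choosing NTriples/NQuads in the first place.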
9. Challenge #3 - Blank Node Identifiers
● Specifications say that a blank node identifier is file scoped
  ● I.e. _:foo in a.nt is a different node from _:foo in b.nt
  ● And _:foo is the same node throughout a.nt
● Need to consistently assign identifiers despite processing the data in chunks on different physical nodes
  ● Preferably without resorting to global state/synchronisation

a.nt:
<urn:a> <urn:link> _:foo .
_:foo <urn:link> <urn:b> .
# Many 100,000s of lines later
<urn:z> <urn:link> _:foo .

b.nt:
_:foo <urn:value> "example" .
_:bar <urn:value> "other" .
10. Pattern #3 - Derived Blank Node Identifiers
● Derive identifiers from a combination of their local identifier and a scope identifier
  ● E.g. _:foo and a.nt
● Derivation method doesn't matter provided it is:
  ● Scope aware
  ● Deterministic
● Some possibilities:
  ● One-way hash e.g. MD5
  ● Mathematical transform
  ● Seeded random number generator (RNG)
● Apache Jena uses a seeded RNG
  ● Scope awareness achieved by seeding the RNG based upon the filename
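The one-way-hash possibility listed above can be sketched in a couple of lines: hash the scope identifier together with the local identifier, so any worker, on any machine, derives the same output for the same input with no coordination. The `derive_blank_node_id` helper is illustrative; it uses MD5 as the slide suggests, whereas Apache Jena's actual implementation uses the seeded-RNG approach instead.

```python
import hashlib

def derive_blank_node_id(local_id, scope):
    """Derive a stable, scope-aware blank node identifier.

    Deterministic: every worker that sees _:foo in a.nt computes the
    same result. Scope aware: _:foo in b.nt maps to a different one.
    No shared state or synchronisation is needed.
    """
    # NUL separator prevents ("ab", "c") colliding with ("a", "bc")
    digest = hashlib.md5(f"{scope}\x00{local_id}".encode()).hexdigest()
    return f"_:b{digest}"
```

For example, `derive_blank_node_id("foo", "a.nt")` always yields the same identifier, while `derive_blank_node_id("foo", "b.nt")` yields a different one, matching the file-scoping rule from Challenge #3.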
11. Challenge #4 - Graph Partitioning
● Open problem
  ● NP Hard
● Large graphs are never going to be processable on a single node
  ● Need to partition across multiple nodes
● Partitioning affects both storage and processing of a graph
  ● May need different schemes depending on desired processing
12. Pattern #4 - Domain Specific/Avoid It!
● For specific workloads a domain specific partitioning will be best
  ● Needs knowledge of data and workload
  ● E.g. Educating the Planet with Pearson
● If you can then avoid it!
  ● Take advantage of increasingly capable hardware
  ● Large memory sizes, non-volatile memory, RDMA, high speed interconnects, SSDs
13. Challenge #5 - Benchmarking
● Many of the classic benchmarks were developed by academics
  ● E.g. LUBM, SP2B
  ● Often aren't representative of actual customer problems
● Many data generators are single threaded
  ● Difficult to generate large-scale datasets
14. Pattern #5 - Change Benchmarks
● Linked Data Benchmark Council (LDBC)
  ● Industry working group that develops standardised benchmarks
  ● Equivalent to the Transaction Processing Council (TPC) in the relational database industry
  ● http://ldbcouncil.org
● Design your own
  ● https://github.com/rvesse/sparql-query-bm
● Improve an existing one
  ● https://github.com/rvesse/lubm-uba
  ● LUBM 8k (~1 billion triples) can be generated in under 7 minutes, a 10x speed up
15. Questions?
rvesse@cray.com
@RobVesse