This document provides an overview of trends from the 2015 Bio-IT World Expo presented by Chris Dagdigian of BioTeam. It discusses trends around DevOps, automation, converged infrastructure, compute, storage, cloud computing, and specific projects. Key points include the need for sysadmins to learn scripting and automation, the growing role of APIs, and why object storage is the future for managing scientific data and metadata. Specific examples highlight high-performance computing configurations, small-file ingest solutions, low-cost storage approaches, and a petascale storage system built with Intel Lustre, Linux, and ZFS on commodity hardware.
This is a custom "Bio IT trends/problems" deck that I did for a general but highly technical audience at the 2014 Internet2 Technology Exchange conference.
Download of the raw PPT is disabled; contact me at chris@bioteam.net if a direct copy or PDF of the presentation would be useful.
Taming Big Science Data Growth with Converged Infrastructure - The BioTeam Inc.
2014 BioIT World Expo presentation
"Many of the largest NGS sites have identified IO bottlenecks as their number one concern in growing their infrastructure to support current and projected data growth rates. In this talk Aaron D. Gardner, Senior Scientific Consultant, BioTeam, Inc. will share real-world strategies and implementation details for building converged storage infrastructure to support the performance, scalability and collaborative requirements of today's NGS workflows. "
For a copy of this presentation please email: chris@bioteam.net
This was a 30 min talk intended as one of the opening/overview presentations before a full-day deep dive into ScienceDMZ design patterns and architectures.
Direct downloads are not enabled. Contact me directly (chris@bioteam.net) if you for some odd reason want a copy of this slide deck!
2014 BioIT World - Trends from the Trenches - Annual presentation - Chris Dagdigian
Talk slides from the annual "trends from the trenches" address at BioITWorld Expo. 2014 Edition.
### Email chris@bioteam.net if you'd like a PDF copy of this deck ###
BioIT World 2016 - HPC Trends from the Trenches - Chris Dagdigian
As presented at BioIT World 2016. In one of the more popular presentations of the Expo, Chris delivers a candid assessment of the best, the worthwhile, and the most overhyped information technologies (IT) for life sciences. He’ll cover what has changed (or not) in the past year around infrastructure, storage, computing, and networks. This presentation will help you understand IT to build and support data intensive science.
Video link from the presentation: biote.am/bs
[Note: email chris@bioteam.net if you would like a PDF copy of this presentation]
This is a very short slide deck I did for a 10-minute slot on a http://pistoiaalliance.org/ webinar. The slides do not fully cover what I intend to talk about so if the webinar is recorded and available afterwards I'll update this description with the recording URL.
PDF copy of the slides available upon request ("chris@bioteam.net")
This is a massive slide deck I used as the starting point for a 1.5 hour talk at the 2012 www.nerlscd.org conference. Mixture of old and (some) new slides from my usual stuff.
Facilitating Collaborative Life Science Research in Commercial & Enterprise E... - Chris Dagdigian
This is a talk I put together for a http://www.neren.org/ seminar called "Bridging the Gap: Research Facilitation". Tried to give a biotech/pharma view for a mostly academic audience.
Bio-IT & Cloud Sobriety: 2013 Beyond The Genome Meeting - Chris Dagdigian
October 2013 "Beyond the Genome" presentation slides. Talk is mostly focused on issues around IaaS cloud usage for "Bio-IT" and life science informatics & scientific computing.
PDF SLIDES AVAILABLE DIRECTLY - PLEASE EMAIL "CHRIS@BIOTEAM.NET" FOR SLIDES
Mapping Life Science Informatics to the Cloud - Chris Dagdigian
Infrastructure cloud platforms such as those offered by Amazon Web Services are not designed and built with scientific research as the primary use case. These presentation slides cover the current state of mapping life science research and HPC technique onto “the cloud” and how to work around the common engineering, orchestration and data movement problems.
[Note: I've replaced the 2011 version of this talk deck with a slightly updated version as delivered at the AIRI Petabyte Challenge Meeting]
BioITWorld 2013 presentation - Best practices for building multi-tenant HPC clusters for Pharma/BioTech
Essentially a mini case study of a recent deployment of a multi-petabyte, 1000+ CPU core Linux cluster in the Boston area.
Please email me at: chris@bioteam.net if you would like the actual PDF file itself.
Disruptive Innovation: how do you use these theories to manage your IT? - Mark Madsen
The term disruptive innovation was popularized by Harvard professor Clayton Christensen in his 1997 book “The Innovator’s Dilemma.” Nearly 20 years later “Disrupt!” is a popular leadership mantra that is more frequently uttered than experienced. You can't productize it. You can't always control it – at least what effects it has in practice. You aren't necessarily going to like every product of innovation. So are you sure you want it? If so, how do you promote a culture in which innovation can flower – and, potentially, thrive? Because that's probably the best that you can do.
Perhaps there's a better framing for innovation than just "disruption." This session is an overview of commoditization and innovation theories, followed by basic things you can do to apply that theory to your daily job of architecting, choosing, and managing a data environment in your company.
BI isn't big data and big data isn't BI (updated) - Mark Madsen
Big data is hyped, but isn't hype. There are definite technical, process and business differences in the big data market when compared to BI and data warehousing, but they are often poorly understood or explained. BI isn't big data, and big data isn't BI. By distilling the technical and process realities of big data systems and projects we can separate fact from fiction. This session examines the underlying assumptions and abstractions we use in the BI and DW world, the abstractions that evolved in the big data world, and how they are different. Armed with this knowledge, you will be better able to make design and architecture decisions. The session is sometimes conceptual, sometimes detailed technical explorations of data, processing and technology, but promises to be entertaining regardless of the level.
Yes, it’s about the data normally called “big”, but it’s not Hadoop for the database crowd, despite the prominent role Hadoop plays. The session will be technical, but in a technology preview/overview fashion. I won’t be teaching you to write MapReduce jobs or anything of the sort.
The first part will be an overview of the types, formats and structures of data that aren’t normally in the data warehouse realm. The second part will cover some of the basic technology components, vendors and architecture.
The goal is to provide an overview of the extent of data available and some of the nuances or challenges in processing it, coupled with some examples of tools or vendors that may be a starting point if you are building in a particular area.
Data lakes, data exhaust, web scale, data is the new oil. Vendors are throwing new terms and analogies at us to convince us to buy their products as the market around data technologies grows. We change data persistence and transaction layers because "databases don't scale" or because data is "unstructured". If data had no structure then it wouldn't be data, it would be noise. Schema on read, schema on write, schemaless databases; they imply structure underlying the data. All data has schema, but that word may not mean what you think it means.
This presentation will describe concepts of data storage and retrieval from technology prehistory (i.e. before the 1980s) and examine the design principles behind both old and new technology for managing data because sometimes post-relational is actually pre-relational. It is important to separate what is identical to things that were tried in the past from new twists on old topics that deliver new capabilities.
Directly related to these topics are performance, scalability and the realities of what organizations do with data over time. All of these topics should guide architecture decisions to avoid the trap of creating technical debts that must be paid later, after systems are in place and change is difficult.
Big Data Meets HCI—How South African Insurance Provider King Price Gives Deve... - Dana Gardner
Transcript of a discussion on how an insurance innovator built a modern hyperconverged infrastructure environment that rapidly replicates databases to accelerate developer agility.
IT Performance Management Handbook for CIOs - Vikram Ramesh
Learn why measuring performance on individual devices and systems often leaves admins flying blind when it comes to SLA management and identifying performance bottlenecks. This in-depth e-Guide talks about how VirtualWisdom4 can give administrators a live, up-to-the-second view across the system-wide IT infrastructure.
(PDF available upon request). This is an updated version of the 2012 BioITWorld Boston talk that I gave 6 weeks later at Bio-IT World Asia in June 2012. Some slide content was updated and revised, and I also deleted a number of slides in an attempt to shorten the talk since I'm known to speak fast. There was legit concern I'd be unintelligible to non-native English speakers!
Rethinking The Data Warehouse: Emerging Practices and Technologies to Meet To... - Senturus
Senturus special guest Mark Madsen, keynote speaker at the TDWI World Conference, shares his insights into the five major issues facing data warehouses and his solution to increase agility and flexibility. View the webinar video recording and download this deck: http://www.senturus.com/resources/rethinking-the-data-warehouse/.
Current data warehouses are not architected to meet current analytics requirements including end user self-service, multiple tools, huge data volumes, visualizations and deeper analysis needs. Hear Mark’s strategic insights for how to solve these issues.
Senturus, a business analytics consulting firm, has a resource library with hundreds of free recorded webinars, trainings, demos and unbiased product reviews. Take a look and share them with your colleagues and friends: http://www.senturus.com/resources/.
The talk presents the evolution of big-data systems from single-purpose MapReduce frameworks to fully general computational infrastructures. In particular, I will follow the evolution of Hadoop and show the benefits and challenges of a new architectural paradigm that decouples the resource management component (YARN) from the specifics of the application frameworks (e.g., MapReduce, Tez, REEF, Giraph, Naiad, Dryad, Spark, ...). We argue that besides the primary goals of increasing scalability and programming-model flexibility, this transformation dramatically facilitates innovation.
In this context, I will present some of our contributions to the evolution of Hadoop (namely, work-preserving preemption and predictable resource allocation), and comment on the fascinating experience of working on open-source technologies from within Microsoft. The current Hadoop APIs (HDFS and YARN) provide the cluster equivalent of an OS API. With this as a backdrop, I will present our attempt to create the equivalent of stdlib for the cluster: the REEF project.
Carlo A. Curino received a PhD from Politecnico di Milano, and spent two years as Post Doc Associate at CSAIL MIT leading the relational cloud project. He worked at Yahoo! Research as Research Scientist focusing on mobile/cloud platforms and entity deduplication at scale. Carlo is currently a Senior Scientist at Microsoft in the Cloud and Information Services Lab (CISL) where he is working on big-data platforms and cloud computing.
(CMP303) ResearchCloud: CfnCluster and Internet2 for Enterprise HPC - Amazon Web Services
Biogen built ResearchCloud for large-scale processing of research data. This extension of our infrastructure capability allows us to be more nimble, process more data, scale as needed, and collaborate with external organizations. In this session, learn about the design choices we took into account when building ResearchCloud. We cover our implementation of Internet2 and AWS Direct Connect, and the challenges we encountered when scaling to speeds of 10 gigabits per second. We also discuss the architecture of InstantHPC, which combines CfnCluster with GlusterFS using secure templates.
Introduction to Convolutional Neural Networks with TensorFlow - Etsuji Nakai
Explaining the basic mechanism of the convolutional neural network with sample TensorFlow code.
Sample codes: https://github.com/enakai00/cnn_introduction
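The core mechanism such slides walk through is the sliding-window convolution. As a minimal pure-Python sketch of that operation (illustrative only; not taken from the linked repository):

```python
# Minimal sketch of the 2D "valid" convolution at the heart of a CNN layer.
# (Strictly speaking, CNN layers compute cross-correlation: no kernel flip.)
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Multiply the kernel against the window anchored at (i, j).
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 2x2 vertical-edge kernel lights up where pixel values jump left-to-right.
image = [[0, 0, 1, 1] for _ in range(4)]
kernel = [[1, -1], [1, -1]]
print(conv2d(image, kernel))  # each of the 3 output rows is [0.0, -2.0, 0.0]
```

Real CNN layers stack many such kernels, add a bias and a nonlinearity, and learn the kernel weights by gradient descent; the deck's TensorFlow samples do this part.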
Annual address covering trends, emerging requirements, pain points and infrastructure issues in the "Bio-IT" aka life science informatics and HPC realm; Email me if you want a PDF of this talk - chris@bioteam.net
Bio-IT Trends From The Trenches (digital edition) - Chris Dagdigian
Note: Contact me directly at dag@bioteam.net if you would like a PDF download of these slides
This is Chris Dagdigian’s 10th year delivering his no-holds-barred, candid state-of-the-industry address at BioIT World, and we are not going to let a pandemic stop him.
Instead of his typical talk, five distinguished panelists will join Chris for a spirited discussion on Current Events and Scientific Computing and the impacts of the COVID-19 Pandemic:
Microservices 101: From DevOps to Docker and beyond - Donnie Berkholz
Containers and microservices are two of the fastest-growing trends in technology, enabled by a modern approach to software development and deployment called DevOps. This talk will delve into the increasingly mainstream trend of DevOps, the Docker and containers ecosystem including current enterprise adoption, and how they combine to form a new style of software architecture dubbed microservices. We'll close by looking at real-world examples of containers and microservices architectures at leading-edge companies.
How to survive your technology career transition from old-school IT to the new school of cloud and DevOps using the power of community and side projects.
Defining a Practical Path to Artificial Intelligence - Roman Chanclor
With the evolution of purpose-built AI infrastructures and the advancement of graphics processing units (GPUs) that enable massively parallel, deep analysis in real time, cognitive computing may become the norm in data centers in record time. But how?
CIM 4.0 is part of 4IR, but it is not properly understood or explained. This presentation offers some additional views and perspectives to aid students to understand the value and impact of CIM 4.0.
Every company eventually encounters that “do-or-die” moment when the product cycle reaches the maturity stage and it’s necessary to pursue innovations. Borys Pratsiuk, Ph.D., Head of R&D Engineering at Ciklum describes how companies across all sectors and industries can use Research and Development as a Service to update and improve their products or services.
Blockchain, IoT and AI are foundational to the Fourth Industrial Revolution -... - David Terrar
This is my keynote session at Channel Live on 12 September 2019. It covers blockchain, explaining what it is and isn't, and why it is so transformationally relevant for any business model. I go through real-world case studies, not just proofs of concept.
I touch on which implementations and frameworks exist and should be considered, then talk about what the future looks like.
Blockchain is significant, and business systems will evolve because of it. I move on to IoT (M2M) and Industry 4.0 and why they are important, explaining the underlying factors and the opportunity for a reseller. I then demystify AI: what it is and why you should be interested now. Lastly I talk through why you should factor blockchain, IoT and AI into your plans.
Tiny slide deck from a 5-min lightning talk covering a recent project involving live replication of 2-petabytes of scientific data.
Please leave feedback if you'd like to see this as a long-form technical blog article or conference talk, thanks!
Cloud Sobriety for Life Science IT Leadership (2018 Edition) - Chris Dagdigian
Candid/blunt AWS advice for research IT and life science IT leadership. Hard lessons learned from many years of AWS consulting. Contact dag@bioteam.net if you want a PDF copy of this presentation
20 slides for a 10 minute talk!
Short presentation for the 2012 Amazon Web Services re:Invent conference. This briefly covers some Computer Aided Engineering (CAE) simulation work done on Amazon using CST Studio software. Interesting Linux/Windows/Cloudbursting use case.
Talk slides as delivered at the 2012 Bio-IT World Conference in Boston, MA
This is my annual "state of the state" address that has become somewhat popular.
A presentation given at the 2011 Amazon AWS Genomics meeting held in Seattle, WA.
This is a 30 minute talk I gave focusing mainly on practical tools, tips and methods for bootstrapping and orchestration on the cloud.
Covers examples of:
Ubuntu Cloud Init
AWS Cloud Formation
Opscode Chef
MIT StarCluster
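For flavor, a hypothetical Ubuntu Cloud-Init user-data file in the spirit of the bootstrapping examples above might look like this (illustrative sketch only; the package names and paths are placeholders, not from the talk):

```yaml
#cloud-config
# Install a couple of packages and run a one-time bootstrap step on first boot.
packages:
  - nfs-common
  - python3-pip
runcmd:
  - [ mkdir, -p, /mnt/scratch ]
  - [ sh, -c, "echo 'node bootstrapped' >> /var/log/bootstrap.log" ]
```

Passed as EC2 instance user data, this runs once at first boot; tools like CloudFormation or StarCluster then layer cluster-wide orchestration on top of per-node bootstrapping like this.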
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, backed by an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
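One representative hardening step of the kind such a walkthrough typically covers is least-privilege RBAC; a minimal hypothetical Role might look like this (the namespace and names are placeholders, not from the talk):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a        # hypothetical namespace
  name: pod-reader
rules:
  - apiGroups: [""]        # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]  # read-only; no create/delete
```

Bound to a service account via a RoleBinding, this grants read-only access to pods in one namespace and nothing else, instead of the cluster-wide defaults teams often ship with.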
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
7. 7
BioTeam.
Independent Consulting Shop
Run by scientists forced to learn IT to “get science done”
Virtual company with nationwide staff
15+ years “bridging the gap” between hardcore science, HPC & IT
8. 8
Honest. Objective. Vendor & Technology Agnostic.
10. 10
Disclaimer
I speak mainly from my own personal experiences
… and what I learn via osmosis from coworkers, clients & colleagues
Not an expert. Not a pundit. “Thought Leader”? Hell no!
11. 11
Be cynical. Filter my words through your own insight & experience.
12. 12 (image: shanelin via flickr)
Tick …
Tick …
Tick …
Insufferable, huh? Let’s talk trends …
And steal/recycle some bits from last year …
19. 19
Years ago we could stage cheap kit in the lab or a nearby telco closet
The easy period is over.
20. 20
This approach has not been viable for years now… real solutions are required
27. Homework Assignment #1
2015 Bio IT World Expo
‣ 11:10am Tomorrow - Track 1
‣ “Infrastructure, Architecture, and Organization: Data Engineering at Scale at the Broad”
• Chris Dwan - Assistant Director, Research Computing and Data Services, Broad Institute of MIT and Harvard
• … and the DDN/Qumulo stuff preceding Dwan should be interesting as well
40. 40
Chef, Puppet, Ansible, SaltStack …
Pick what works for you (and that all can agree on)
… and commit to learning, using and evangelizing it
… ideally across the enterprise (not just Research)
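Whichever tool you pick, they all converge on the same core idea: declare a desired state, check it, and only change what drifts. A minimal sketch of that idempotent pattern in plain Python (the function name and file path are illustrative, not any particular tool's API):

```python
# Minimal sketch of the "declare desired state, converge" pattern that
# Chef/Puppet/Ansible/SaltStack implement at scale. Names are illustrative.

def ensure_line(path, line):
    """Ensure `line` is present in the file at `path`. Idempotent:
    running it twice changes nothing the second time."""
    try:
        with open(path) as f:
            if line in (l.rstrip("\n") for l in f):
                return False  # already converged; nothing to do
    except FileNotFoundError:
        pass  # file absent; we'll create it below
    with open(path, "a") as f:
        f.write(line + "\n")
    return True  # state was changed

if __name__ == "__main__":
    import os, tempfile
    cfg = os.path.join(tempfile.mkdtemp(), "sshd_config")
    print(ensure_line(cfg, "PermitRootLogin no"))  # True: file was converged
    print(ensure_line(cfg, "PermitRootLogin no"))  # False: already in state
```

The point of learning one of these tools is exactly this check-then-fix discipline, applied to packages, services, and users instead of single lines.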
41. 41
Hey Network Engineers …
Same API-driven automation trends are steamrolling your way
You’ll need more than a Cisco certification in the future
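The "API-driven" part looks roughly like this: instead of CLI-over-SSH, modern switch OSes expose REST endpoints you script against. A hedged sketch; the endpoint path and payload shape below are hypothetical, for illustration only:

```python
# Sketch of API-driven network automation: build a REST request that would
# create a VLAN on a switch. The /api/v1/vlans endpoint is made up here;
# real switch APIs (NX-API, Arista eAPI, RESTCONF) differ in detail.
import json

def vlan_request(switch, vlan_id, name):
    """Assemble the URL and JSON body for a (hypothetical) VLAN-create call."""
    url = f"https://{switch}/api/v1/vlans"       # illustrative endpoint
    payload = {"vlan_id": vlan_id, "name": name}
    return url, json.dumps(payload)

url, body = vlan_request("tor-sw01.example.org", 110, "hpc-storage")
print(url)
# In practice you would POST this, e.g. with the `requests` library,
# and drive it from the same config-management tool as your servers.
```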
44. Homework Assignment #2
2015 Bio IT World Expo
‣ 2:55pm Today - Track 1
‣ “Accelerating Biomedical Research Discovery: The 100G Internet2 Network – Built and Engineered for the Most Demanding Big Data Science Collaborations”
• Christian Todorov, Director, Network Services Management, Internet2
‣ Why? Internet2 has some of the most interesting non-cloud SDN stuff in production today
47. 47
Unchanged in 2015:
Even our lab instruments know how to submit jobs to the common HPC resource allocation & scheduling tools
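"Instruments submitting jobs" usually just means a small wrapper script handing work to the scheduler. A sketch assuming Slurm (`sbatch` is the real submit command; the partition and pipeline script names are made up):

```python
# Sketch of an instrument-side submission helper for a Slurm cluster.
# Partition name and script path are illustrative, not from the talk.

def build_submit_cmd(script, cores=4, partition="ngs"):
    """Assemble a Slurm sbatch command line for a pipeline script."""
    return ["sbatch",
            "--partition", partition,
            "--ntasks", str(cores),
            script]

cmd = build_submit_cmd("/opt/pipelines/demux.sh", cores=8)
print(" ".join(cmd))
# On a host with Slurm installed you would then run:
#   subprocess.run(cmd, check=True)
# The same pattern works for Grid Engine (qsub), PBS, or LSF (bsub).
```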
48. 48
Unchanged in 2015:
Still feels like a solved problem. Compute is a commodity.
Most of the interesting action is with ‘outlier’ projects
Ok. Designing a 60,000 CPU core HPC environment is still hard :)
Compute power is rarely challenging these days.
49. 49
Compute: NICs & Disks
At 10Gig and higher speeds, server, NIC and disk configurations can quietly become the new bottleneck. Pay careful attention to NIC selection and consider host/kernel tuning when playing at 10Gig and above.
Latest Intel Haswell stuff is driving 40Gig NICs today
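One concrete piece of that host tuning: at 10Gig and above, default kernel TCP buffers are usually far too small for long fat pipes. A small sketch that just inspects the default receive buffer your OS hands out (the rule-of-thumb numbers in the comments are generic, not from the talk):

```python
# Inspect the default TCP receive buffer size, a common starting point for
# 10Gig+ host tuning.
import socket

def default_rcvbuf():
    """Return the OS default SO_RCVBUF for a fresh TCP socket, in bytes."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        return s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    finally:
        s.close()

print(f"default SO_RCVBUF: {default_rcvbuf()} bytes")
# Rule of thumb: buffer >= bandwidth x round-trip time. A 10Gbps path with
# 20ms RTT wants ~25MB in flight; OS defaults are typically a few hundred KB,
# hence sysctl tuning (net.core.rmem_max and friends on Linux).
```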
50. 50
Compute: Diversity Trend Still Strong
GPU Compute Block
GPU Visualization Block
Phi Coprocessor Block
Dev/Application Block
Hadoop/HDFS Block
HPC Life Science Informatics via modular “Building Blocks”
Large Memory Block
Very Large Memory Block
Fat Nodes (large SMP)
Thin Nodes (fastest CPUs)
Flash/SSD Analytic Block
Homogeneous Research IT is fading away in favor of “capability-based” computing via standard HPC building blocks
54. 54
Compute: FPGAs & Phis
Exotic hardware has its place but will not rule the world
2015: Thoughts on hardware acceleration largely unchanged
Why not?
“... the activation energy required for a scientist to use this stuff is generally quite high ...”
Best deployed as a point solution for a pain point or as a component of a large-scale / high-value analytical workflow
55. Homework Assignment #3
2015 Bio IT World Expo
‣ 1:10pm Today - Lunch Presentation II
‣ “Optimizing Genomic Sequence Searches to Next-Generation Intel Architectures”
• Bhanu Rekepalli, Ph.D., Senior Scientific Consultant & Principal Investigator, BioTeam Inc.
‣ Interested in Phi? Bhanu has deep experience and will be talking at lunch about Intel Phi used for massively scalable BLAST searches
56. 56
Topic / Trend: Converged Infrastructure
Yep.
100Gig line card in the wild
57. 57
HyperConvergence in Research IT
Not yet a widespread trend. Something to watch though
Warning/Caveat:
We see it mostly in very large greenfield deployments
Feels like scale-out petabyte+ NAS a few years ago — this may be an area that most of us “watch” to see how the big orgs approach it … and then we copy them 2-3 years later
58. 58
HyperConvergence in Research IT
“ISP Model” seeing use within large campus network upgrade projects
Small Examples
Ultra-converged Virtualization blocks packed with CPU/Disk/Flash/NICs
Avere kit front-ending on-prem NAS/Object + Google & Amazon Object Stores
DDN Disk Arrays running native applications via onboard hypervisors
iRODS + Object Store efforts (including Cleversafe/BioTeam work …)
59. 59
HyperConvergence in Research IT
One Big Example (and topic to watch …)
Infiniband EDR. Particularly the Mellanox stuff
Mellanox ConnectX-4 VPI and EDR Infiniband gets you:
684 ports of 100Gig performance in one director class switch
Split personality host adaptors supporting IB, Ethernet or Both
Infiniband: 56Gb/s FDR or 100Gb/s EDR
Ethernet: 1GbE, 10GbE, 40GbE, 56GbE, 100GbE
60. 60
Infiniband Convergence
EDR in the Core & FDR @ the Edge enable large non-blocking HPC designs
This is enabling some cool stuff in large greenfield projects:
Infiniband for parallel filesystem access AND low-latency MPI apps
10/40/100 Gig Ethernet wherever you need it
7-figure CAPEX cost savings at very large scale (**)
Compute, storage & message passing on one managed fabric
63. 63
2015 Converged IT Summit
1st ever BioTeam conference series (w/ CHI of course)
2-day meeting of the minds / Total focus on life science topics
Brochures @ BioTeam Booth #357
September 9-10, 2015
Intercontinental Hotel, San Francisco USA
http://convergeditsummit.com
65. 65
Cloud-based Science:
Still real. Still useful. Still growing strong.
Still polluted by marketeers and thick layers of BS
One quick recap and then a few slides on some BioTeam 2015 “firsts”
66. 66
Cloud-based Science:
Cost and economics ARE NOT the primary drivers
Primary cloud/science driver is CAPABILITY:
Lab instruments are now capable of “write to cloud” operation
Ease of data ingest/exchange
Neutral meeting ground for collaboration w/ competitors
IaaS environments like AWS/Google offer capabilities that we can’t easily match on-premise
And many more …
67. 67
Cloud-based Science:
BioTeam IaaS Cloud Milestones (2014-2015)
Our first use of a 10Gig Direct Connect circuit to Amazon
Our first use of Internet2 as layer2/3 transit for AWS Direct Connect
Our first large-scale production project on Google Compute
Our first large-scale use of Google Genomics API
69. Homework Assignment #4
2015 Bio IT World Expo
‣ 10:40am Thursday - Track 3
‣ “Next-Generation Sequencing and Cloud Scale: A Journey to Large-Scale Flexible Infrastructures in AWS”
• Jason Tetrault, Associate Director, Business and Information Architect, R&D IT, Biogen
74. 74
Storage:
Lots of attention to this area in the 2014 talk
Check out the 2014 Trends slides online at http://slideshare.net/chrisdag
No time to recap all of the stuff that has only modestly changed
Today: Quick recap followed by new stuff/thoughts/trends
75. 75
Storage: Quick Recap
Still a huge pain point
Still amazing ways to waste large amounts of money
Petabyte-class storage has not been scary for years now
Single Tier of Scale-out NAS or Parallel FS insufficient in 2015
Multiple Tiers are a requirement; probably multi-vendor
76. 76
Storage: Reasonable Tier Example
5-40TB SSD/Flash tier for ingest & IOPS-sensitive workflows
50-400TB tier (SATA,SAS,SSD mix) for active processing
Petabyte-capable (Cloud/Object/SATA) nearline tier
100TB - 1PB “Trash” Tier (optional)
100TB - 500TB Fast Scratch (optional)
77. 77
Storage: Object Is the Future
Not a trend. Yet. I’m still a believer though …
Object storage is the future of
scientific data at rest.
Expect a lot more on this in 2016 talk …
78. 78
Storage: Object Is the Future
Don’t believe me?
Check out how many object vendors
are on the show floor this week!
Amplidata, Avere*, CleverSafe, DDN, Swiftstack Inc. etc. etc.
79. 79
Storage: Object Is the Future
This is what my metadata looks like on a POSIX filesystem:
Owner
Group membership
Read/write/execute permissions based on Owner or Group
File size
Creation/Modification/Last-access Timestamps
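That list above is literally everything a POSIX filesystem gives you, and `os.stat` surfaces all of it in a handful of fields:

```python
# Everything POSIX knows about a file fits in one stat() call.
import os, tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"ACGT" * 10)   # stand-in for real sequence data
    path = f.name

st = os.stat(path)
print("owner uid:", st.st_uid)        # Owner
print("group gid:", st.st_gid)        # Group membership
print("mode bits:", oct(st.st_mode))  # read/write/execute permissions
print("size:", st.st_size)            # File size (40 bytes here)
print("mtime:", st.st_mtime)          # Modification timestamp
os.unlink(path)
# That's the whole list. Instrument, PI, consent status, retention class:
# nowhere to put them without a side database.
```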
80. 80
Storage: Object Is the Future
This is what I WOULD LIKE TO TRACK on a PER-FILE basis:
What instrument produced this data?
What funding source paid to produce this data?
What revision was the instrument/flowcell at?
Who is the primary PI or owner of this data? Secondary?
What protocol was used to prepare the sample?
Where did the sample come from?
Where is the consent information?
Can this data be used to identify an individual?
What is the data retention classification for this file?
What is the security classification for this file?
Can this file be moved offsite?
etc. etc. etc.
…
81. 81
Storage: Object Is the Future
Historically metadata has been tracked via several methods:
DIY LIMS and Relational Database Systems
iRODS or other “metadata aware” systems
… all at significant human, development & operational cost
82. 82
Storage: Object Is the Future
My gut feeling for the future:
Economics and tech benefits like erasure coding will draw interest
But most adoption will be motivated by the ease with which arbitrary metadata can be stored with each file or object … and later searched / sorted / retrieved based on queries against the stored metadata
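To make that concrete, here is a toy in-memory model of the idea: arbitrary key/value metadata travels with each object and can be queried later. Real object stores (S3, Swift, CleverSafe and friends) expose the same concept through their own APIs; the class and field names below are invented for illustration:

```python
# Toy model of what object storage adds over POSIX: arbitrary metadata
# attached per object, queryable later. Illustrative only.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # key -> (data bytes, metadata dict)

    def put(self, key, data, **metadata):
        self._objects[key] = (data, metadata)

    def query(self, **criteria):
        """Return keys whose metadata matches every criterion."""
        return [k for k, (_, md) in self._objects.items()
                if all(md.get(f) == v for f, v in criteria.items())]

store = ObjectStore()
store.put("run42/sample1.bam", b"...", instrument="HiSeq-07",
          pi="Smith", grant="R01-XYZ", consent="broad", offsite_ok=False)
store.put("run43/sample9.bam", b"...", instrument="HiSeq-07",
          pi="Jones", grant="R01-ABC", consent="none", offsite_ok=True)

print(store.query(instrument="HiSeq-07", pi="Smith"))
# -> ['run42/sample1.bam']
```

The wish-list from the previous slide (instrument, funding source, consent, retention class …) maps directly onto those keyword arguments, with no LIMS or relational database in the path.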
83. 83
Storage: Object Is the Future
Advice for the audience:
It will take years for our field to get here. You've got time!
When evaluating Object Storage Solutions:
… consider scoring or evaluating them on how well they handle metadata search and retrieval operations
84. 84
Storage: Object Is the Future
Would you like object storage with your iRODS?
Go talk to the CleverSafe people on the show floor
We did some neat stuff with them related to using iRODS with CleverSafe object store backend
86. 86
Storage: Where the Action Is
Primary storage is still challenging, but …
The really interesting work lies at the edges:
Small & Cheap Storage
Ludicrously Large Storage
Some examples …
87. 87
Storage War Story 1: Small Ingest
Pharmaceutical company with an ingest issue
Funky lab instrument puts 30,000 tiny files in one directory
Copying each experiment across 1Gb to SAN took HOURS
Root cause: SAN system choking on tiny file metadata
Trivial from a size viewpoint — ~6GB per experiment
88. 88
Storage War Story 1: Small Ingest
This was a VERY interesting project
There are MANY large/expensive systems that can handle small file ingest. Don’t need BIG and can’t afford EXPENSIVE
Incredibly difficult to find a small 5-10TB usable solution with the right mix of hardware to handle small-file ingest
Winner: NexSan due to their FASTier smart SSD caching
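A common software-side workaround for the problem described above (separate from the hardware fix the slide landed on): bundle the thousands of tiny files into one archive before crossing the wire, so the target array sees one large sequential write instead of tens of thousands of metadata operations. A sketch with made-up file counts and names:

```python
# Aggregate an instrument's tiny-file output into a single tar before copy,
# turning ~N metadata operations into one sequential write. File counts and
# names here are illustrative, not the client's actual workload.
import os, tarfile, tempfile

src = tempfile.mkdtemp()
for i in range(200):  # stand-in for the instrument's 30,000 tiny files
    with open(os.path.join(src, f"well_{i:05d}.dat"), "wb") as f:
        f.write(b"\x00" * 512)

bundle = os.path.join(tempfile.mkdtemp(), "experiment.tar")
with tarfile.open(bundle, "w") as tar:
    tar.add(src, arcname="experiment")

with tarfile.open(bundle) as tar:
    print(len(tar.getnames()))  # 201: the directory entry plus 200 files
```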
95. 95
Storage War Story 2: Backblaze
Backblaze ‘pod’ style still lives on
Now with much better fault tolerance
No more custom wiring harnesses!
What can YOU do with 45x 6TB drives at
rock bottom price?
99. 99
Storage War Story 2: Backblaze
Hope to blog about this Summer 2015
… including updated ‘real world’ cost
… and their 30 drive totally silent chassis (!)
100. 100
One last storage war story …
From a lab with long history of
innovative storage projects
101. 101
Storage War Story 3: Petascale Disruption
Very cool 2014 project
BioTeam + Pineda Lab @ Johns Hopkins
Intel Lustre + Linux + ZFS + Commodity HW
102. 102
Storage War Story 3:
Petascale Disruption
2 Petabytes (raw) / 1.4PB usable for $165,000
PUE of 1.5 = $10,000/year in electrical savings
Performance close to much more $$$ options
Expect to see more details released in 2015
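The back-of-envelope math on those numbers (using decimal terabytes; a binary-TiB reading shifts the figure slightly):

```python
# Cost per usable terabyte for the Lustre+ZFS build described above.
usable_tb = 1.4 * 1000             # 1.4PB usable, in decimal TB
cost = 165_000                     # USD, from the slide
print(round(cost / usable_tb, 2))  # ~117.86 USD per usable TB
```

Roughly $118/TB usable at the time, which is the "disruption" in the slide title: that figure undercut comparable vendor-integrated parallel filesystem quotes by a wide margin.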
104. 104
Last section of this talk is going to
discuss what keeps me up at night.
105. 105
This is what I expect my hardest projects
will involve 2015-2016 and beyond …
106. 106
Tipping Point #1
Effort/cost of generating or acquiring vast piles of data in 2015 is far less than the real-world cost of storing and managing that data through a realistic lifecycle.
107. 107
Tipping Point #2
Scientists still believe storage is cheap & near-infinite.
Data triage no longer sufficient. Scientists rarely asked to articulate a scientific/business case for storage.
108. 108
Tipping Point #3
Centralized infrastructure models are not sufficient and must be modified. Data & compute WILL span sites and locations with or without active IT involvement.
“Data Spread” is real. We need to start preparing now.
110. 110
“Center Of Gravity” Problem
Current methods involving centralized storage and bringing “users” and “compute” very close “… to the data” are going to face significant problems in 2015 and beyond.
111. 111
“Center Of Gravity” Pain #1
Terabyte class instruments. Everywhere. Gulp.
We cannot stop this trend - large scale data generation will span labs, buildings, campus sites & WANs
112. 112
“Center Of Gravity” Pain #2
Collaborations & Peta-scale Open Access Data
The future of large scale genomics|informatics increasingly involves multi-party / multi-site collaboration. Also: Petabytes of free data (!!)
113. 113
“Center Of Gravity” Pain #3
Object Storage Less Effective @ Single Site
Object storage is the future of scientific data at rest. Some major side benefits (erasure coding, etc.) can only be realized when 3 or more sites are involved
114. 114
“Center Of Gravity” Summarized
Data spread is unavoidable. Effectively Unstoppable.
We have a WAN-scale data movement/access problem.
There are ~2 viable approaches going forward ...
115. 115
Option 1 - “Stay Centralized”
Still totally viable but much faster connectivity to instruments & collaborators will be essential
Nutshell: Significant investment in edge/WAN connectivity required, likely requiring bandwidth exceeding 10Gbps
116. 116
Option 2 - “Go With The Flow”
Embrace the distributed & “cloudy” future where compute & storage span multiple zones
Nutshell: Still requires massive bandwidth upgrades to support metadata-aware or location-aware access & compute
118. 118
Terabyte-scale data movement is going to be an informatics “grand challenge” for the next 2-3+ years
And far harder/scarier than previous compute & storage challenges
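The arithmetic behind the "grand challenge" claim, at idealized line rate (real throughput over a WAN is usually well below these figures):

```python
# Best-case transfer times for terabyte-scale data, ignoring protocol
# overhead, TCP tuning and packet loss, all of which only make it worse.
def transfer_seconds(terabytes, gbps):
    bits = terabytes * 8e12        # decimal TB -> bits
    return bits / (gbps * 1e9)

print(round(transfer_seconds(1, 1) / 3600, 1))  # 1TB over 1Gbps: ~2.2 hours
print(round(transfer_seconds(1, 10) / 60, 1))   # 1TB over 10Gbps: ~13.3 minutes
```

At 1Gbps a single terabyte-class instrument run monopolizes the link for hours, and a petabyte takes months; that is why the edge/WAN upgrades in the previous slides are not optional.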
120. Long history of engagement & cooperation
Research IT vs. Enterprise IT
‣ Historically our infrastructure requirements often surpassed what the Enterprise uses to sustain day to day operation
‣ We’ve spent ~20 years working closely with Enterprise IT to enable “data intensive science”
‣ Relatively easy to align informatics IT infrastructure with established vendor, product, technology and architecture standards
‣ This held true until this year …
123. 123
Issue #1
Current LAN/WAN stacks bad for emerging use case
Existing technology we’ve used for decades has been architected to support many small network flows; not a single big data flow
124. 124
Issue #2
Ratio of LAN:WAN bandwidth is out of whack
We will need faster links to “outside” than most organizations have anticipated or accounted for in long-term technology planning
125. 125
Issue #3
Core, Campus, Edge and “Top of Rack” bandwidth
Enterprise networking types can be *smug* about 10Gbps at the network core. Boy are they in for a bad surprise.
126. 126
Issue #4
Bigger blast radius when stuff goes wrong
Compute & storage can be logically or physically contained to minimize disruption/risk when Research does stupid things. Networks, however, touch EVERYTHING EVERYWHERE. Major risk.
127. 127
What We Need:
- Ludicrous bandwidth @ network core
- Very fast (10-40Gbps) ToR, Edge, Campus links
- 1Gbps - 10Gbps connections to “outside”
- Switches/Routers/Firewalls that can support small #s of very large data flows
129. 129
Issue #5
Social, trust & cultural issues
We lack the multi-year relationship and track record we’ve built with facility, compute & storage teams. We are “strangers” to many WAN and SecurityOps types
130. 130
Issue #6
Our “deep bench” of internal expertise is lacking
Research IT usually has very good “shadow IT” skills but we don’t have homegrown experts in BGP, Firewalls, Dark Fiber, Routing etc.
132. 132
Issue #7
Cisco. Cisco. Cisco.
The elephant in the room. Cisco is rarely the 1st choice for greenfield efforts in this space but Cisco shops often refuse to entertain any alternatives. Massive existing install base & on-premise expertise must be balanced, recognized & carefully handled.
133. 133
Issue #8
Firewalls, SecOps & Incumbent Vendors
Legacy security products supporting 10Gbps can cost $150,000+ and still utterly fail to perform without heroic tuning & deep config magic. Alternatives exist but there is massive institutional inertia to overcome.
Deeply Challenging Issue.
135. 135
‣ Peta-scale becoming the norm, not exception
‣ Compute is a commodity; Storage getting there
‣ Historically it has been pretty easy to integrate “Research Computing” with “Enterprise” facilities and operational standards
‣ We can no longer assume the majority of our infrastructure will reside in a single datacenter
136. 136
‣ We need a massive increase in end-to-end network connectivity & bandwidth
‣ … and kit that can handle large data flows
‣ Current state of “Enterprise” LAN/WAN networking is not aligned with emerging needs:
• Cost, Capability, Performance, Security …
137. 137
‣ New hardware, reference architectures, best practices and methods will be required
‣ There is no easy path forward …
139. 139
‣ Science DMZ
• Only viable reference architecture & collection of operational practices / philosophy BioTeam has seen to date
• In-use today. Real world. No BS.
• High level visibility & support within US.GOV, grant funding agencies and supporters of data intensive science and R&E networks
140. 140
‣ BioTeam has three current ScienceDMZ projects going on right now. Speeds ranging from 10Gig to 100Gig
‣ This is likely just the beginning of a long and difficult transformation in our world
‣ We are going to try to collect useful public info at http://sciencedmz.org starting this summer
143. 143
‣ Science DMZ Overview Webinar
• May 18, 2-4pm EDT
• http://bioteam-events.webex.com
• No BS; No Hype; No Marketing
- 60 min of content from the inventors of Science DMZ (ES.NET, of course!)
- 60 min for questions/discussion
144. 144
‣ Announcing BioTeam 100Gig ConvergedIT Lab
• Hosted at Texas Advanced Computing Center (“TACC”)
• Compute/storage/networking/security kit all available for use/experimentation
• Access to TACC 100Gig Internet2 circuit
• Access to STAMPEDE and other TACC Supercomputers
• Support from Intel, Juniper and many other vendors (Hint, hint!)
• Goal #1: Showcase and test ScienceDMZ reference architectures for LifeSci
• Goal #2: Have a killer demo for SuperComputing 2016 :)