Soaring through the Clouds - Oracle Fusion Middleware Partner Forum 2016 – Lucas Jellema
The Oracle ACE team has a new mission: complete a complex end-to-end business flow across at least ten Oracle PaaS Services – in front of a live audience. This session will demonstrate how a document-driven human workflow triggers an integration flow to update a 3rd party application that in turn emits events that are processed in real time, resulting in findings that are published through a REST API in a user-friendly front end. Expect guest appearances by an interesting Oracle PaaS cast, including Doc CS, PCS, OSN, Sites CS and ICS and also featuring DBaaS, JCS and SOA CS, Application Container Cloud with a touch of MCS and IoT CS and finally a JET [app] cruising through the clouds. Our flight plan depends a little bit on the weather forecast: we do need a cloudy sky to realize our full potential. The team will perform some live hacking in the various cloud services to complete and tweak the end-to-end flow. We will divulge some of the behind-the-scenes challenges and our findings beyond slideware and C-level promises. A very special guest star will be participating in this session – demonstrating an important attraction of cloud-based development.
Introducing Node.js in an Oracle technology environment (including hands-on) – Lucas Jellema
This presentation introduces Node.js in a few simple, straightforward steps. First, Node.js is presented as just JavaScript, familiar from the browser; then HTTP handling is discussed with the core http module and subsequently using Express. Running Oracle JET from Node.js is explained. The implementation of APIs - REST services supporting various [operations on] resources - is discussed. The single-threaded nature of Node.js is presented, along with the essentials of asynchronous programming, working with callbacks and using the async module. The node-oracledb database driver is introduced and demonstrated. Finally, further steps are suggested. This presentation is supported by a set of resources that constitute a three-hour hands-on session - sources are on GitHub: https://github.com/lucasjellema/sig-nodejs-amis-2016.
RightScale Conference Santa Clara 2011: Looking for configurations that work across clouds? Want to pull configurations from Git? Learn how RightScripts™ and Chef power ServerTemplates. We will present best practices for modular, agile configuration management.
AAI-1304 Technical Deep-Dive into IBM WebSphere Liberty – WASdev Community
A detailed look into the philosophy, architecture and design of the most flexible, simple and scalable Java EE Application Server on the market today: the WebSphere Liberty profile. These slides describe the motivation behind this project, and the key characteristics that are encouraging so many Java EE users to move their applications to Liberty.
In recent years, containers have become a key component of modern application design. Increasingly, developers are breaking their applications apart into smaller components and distributing them across a pool of compute resources. It is relatively easy to run a few containers on your laptop, but building and maintaining an entire infrastructure to run and manage distributed applications is hard and requires a lot of undifferentiated heavy lifting.
In this session, we discuss some of the core architectural principles underlying Amazon ECS, a highly scalable, high performance service to run and manage distributed applications using the Docker container engine. We walk through a number of patterns used by our customers to run their microservices platforms, to run batch jobs, and for deployments and continuous integration. We explore the advanced scheduling capabilities of Amazon ECS and dive deep into the Amazon ECS Service Scheduler, which optimizes for long-running applications by monitoring container health, restarting failed containers, and load balancing across containers.
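The service scheduler behavior described above (monitoring container health and restarting failed containers to maintain a desired count) is, at its core, a reconciliation loop. A minimal, hypothetical sketch in Python — the names are illustrative and not the actual ECS API:

```python
# Hypothetical sketch of a service scheduler's reconciliation loop:
# keep `desired` healthy tasks running, replacing any that fail.
import itertools

_task_ids = itertools.count(1)  # illustrative task id generator

def reconcile(tasks, desired):
    """Return the next task set: drop unhealthy tasks, start replacements."""
    healthy = [t for t in tasks if t["healthy"]]
    # Start new tasks until the desired count is reached again.
    while len(healthy) < desired:
        healthy.append({"id": next(_task_ids), "healthy": True})
    return healthy

# One task has failed its health check; the scheduler replaces it and
# scales up to the desired count of three.
tasks = [{"id": 0, "healthy": True}, {"id": -1, "healthy": False}]
tasks = reconcile(tasks, desired=3)
```

Real schedulers layer placement constraints and load balancer registration on top of this basic loop.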
Azure Virtual Machines Deployment Scenarios – Brian Benz
Architecture and Scenarios for deploying Database and middleware applications on Azure Virtual Machines including SQL Server, Oracle, Hadoop, and others.
Introduction to Desired State Configuration (DSC) – Jeffery Hicks
Desired State Configuration (DSC) is the last major component of the Monad Manifesto, which brought us Windows PowerShell. DSC will change the way you manage your datacenter. Instead of managing a server, you will manage its configuration. DSC is known as a “make it so” technology: you define a desired server configuration and the server will make it happen. This session will provide an overview of DSC.
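The “make it so” idea boils down to a test-and-set pattern: each resource can check whether the system already matches the desired state, and converge it if not. A rough stand-in illustration in Python (DSC itself is PowerShell; these class and method names are invented for the sketch):

```python
# Illustrative "make it so" pattern behind declarative configuration
# (a Python stand-in, not actual DSC syntax).
class FileLine:
    """Desired state: a config 'file' contains a given line."""
    def __init__(self, store, line):
        self.store = store          # dict standing in for a file system
        self.line = line

    def test(self):                 # is the system already compliant?
        return self.line in self.store.get("config", [])

    def set(self):                  # converge: make it so
        self.store.setdefault("config", []).append(self.line)

def converge(resources):
    for r in resources:
        if not r.test():            # only act when the state has drifted
            r.set()

system = {}
converge([FileLine(system, "MaxConnections=100")])
```

Because `set` only runs when `test` fails, running the same configuration repeatedly is idempotent — the property that lets DSC continuously re-apply a configuration.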
AWS RDS Oracle - What is missing for a fully managed service? – Daniel Hillinger
With the Relational Database Service (RDS), Amazon Web Services (AWS) offers a managed service for many database products (e.g. Oracle, Postgres and MySQL).
AWS takes over many of the standard DBA tasks and has automated them. But what is still missing before you really don't have to take care of anything anymore?
Which topics are fully managed and where do you have to actively work on solutions yourself?
In a world where an automatic backup is just a checkmark in a web interface, it is worth taking a closer look.
VMworld 2013: Architecting VMware Horizon Workspace for Scale and Performance – VMworld
VMworld 2013
Jared Cook, VMware
Andrew Johnson, VMware
Kit Colbert, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
AUDWC 2016 - Using SQL Server 2016 AlwaysOn Availability Groups for SharePoi... – Michael Noel
SQL Server 2016 provides for unprecedented high availability and disaster recovery options for SharePoint farms in the form of AlwaysOn Availability Groups. Using this new technology, SharePoint architects can provide for near-instant failover at the data tier, without the risk of any data loss. In addition, the latest version of this technology, available with SQL Server 2016, allows for replicas of SharePoint databases to be stored in the cloud in Microsoft’s Azure cloud offering. This technology, which will be demonstrated live, completely changes the data tier design options for SharePoint and revolutionises high availability options for a farm. This session covers in step-by-step detail the exact configuration required to enable this functionality for a SharePoint 2013 farm, based on the best practices, tips and tricks, and real-world experience of the presenter in deploying this technology in production.
Understand the differences between SQL AlwaysOn options, and determine the requirements to deploy the technologies
Examine how SQL Server 2016 AlwaysOn Availability Groups can provide aggressive Service Level Agreements (SLAs) with a Recovery Point Objective (RPO) of zero and a Recovery Time Objective (RTO) of a few seconds.
See the exact steps required to enable SQL Server 2016 AlwaysOn Availability Groups for a SharePoint 2013 On-Premises environment, including options for storing replicas in Microsoft’s Azure cloud service.
SharePoint 24x7x365 Architecting for High Availability, Fault Tolerance and D... – Eric Shupps
Building SharePoint farms for development and testing is easy. But building highly available farms to meet enterprise service level agreements that are fault tolerant, scalable and fully recoverable? Not so simple. Learn how to plan, design and implement a highly available on-premises farm architecture for 2016 and 2019 using proven, field-tested techniques and practical guidance.
See the latest features of SSIS in ADF. We will show you how to join your Azure-SSIS Integration Runtime (IR) to an ARM VNet, so you can use Azure SQL Managed Instance to host your SSISDB and access data on premises. You will learn how to select Enterprise Edition for your IR, enabling you to use advanced/premium features, e.g. Oracle/Teradata/SAP BW connectors, CDC components, Fuzzy Grouping/Lookup transformations, etc. You will also learn how to customize your IR via a custom setup interface to modify system configurations/install additional components, e.g. (un)licensed 3rd party/Open Source extensions, assemblies, drivers, tools, APIs, etc. Finally, we will show you how to trigger/schedule/orchestrate SSIS package executions as first-class activities in ADF pipelines.
Embrace and Extend - First-Class Activity and 3rd Party Ecosystem for SSIS in... – Sandy Winarko
This session focuses on the deeper integration of SQL Server Integration Services (SSIS) in Azure Data Factory (ADF) and the broad extensibility of Azure-SSIS Integration Runtime (IR). We will first show you how to provision Azure-SSIS IR – dedicated ADF servers for lifting & shifting SSIS packages – and extend it with custom/3rd party components. Preserving your skillsets, you can then use the familiar SQL Server Data Tools (SSDT)/SQL Server Management Studio (SSMS) to design/deploy/configure/execute/monitor your SSIS packages in the cloud just like you do on premises. Next, we will guide you to trigger/schedule SSIS package executions as first-class activities in ADF pipelines and combine/chain them with other activities, allowing you to inject/splice built-in/custom/3rd party tasks/data transformations in your ETL/ELT workflows, automatically provision Azure-SSIS IR on demand/just in time, etc. And finally, you will learn about the licensing model for ISVs to develop paid components/extensions and join the growing 3rd party ecosystem for SSIS in ADF with a few examples from our partners.
Flying to clouds - can it be easy? Cloud Native Applications – Jacek Bukowski
Nowadays the terms "cloud" and "microservice" are used all the time, even overused. Must every system be "microservices" deployed in the "cloud"? Definitely not! However, once you see that your system may benefit from that architecture, the next question is how to get there - how to fly to the clouds?
Spring has always been about simplifying the complicated aspects of your enterprise system. Netflix went to a microservice architecture long before the term was even coined. Both have contributed a great deal to open source software. How can you benefit from the joined forces of both?
JDD 2016 - Jacek Bukowski - "Flying To Clouds" - Can It Be Easy? – PROIDEA
Putting Kafka In Jail – Best Practices To Run Kafka On Kubernetes & DC/OS – Lightbend
Apache Kafka, part of Lightbend Fast Data Platform, is a distributed streaming platform that is best suited to run close to the metal on dedicated machines in statically defined clusters. For most enterprises, however, these fixed clusters are quickly becoming extinct in favor of mixed-use clusters that take advantage of all infrastructure resources available.
In this webinar by Sean Glover, Fast Data Engineer at Lightbend, we will review leading Kafka implementations on DC/OS and Kubernetes to see how they reliably run Kafka in container orchestrated clusters and reduce the overhead for a number of common operational tasks with standard cluster resource manager features. You will learn specifically about concerns like:
* The need for greater operational know-how to do common tasks with Kafka in static clusters, such as applying broker configuration updates, upgrading to a new version, and adding or decommissioning brokers.
* The best way to provide resources to stateful technologies while in a mixed-use cluster, noting the importance of disk space as one of Kafka’s most important resource requirements.
* How to address the particular needs of stateful services in a model that natively favors stateless, transient services.
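One of the operational tasks listed above, decommissioning a broker, means reassigning its partitions to the surviving brokers. A deliberately simplified round-robin sketch of that computation in Python (hypothetical; real Kafka uses the `kafka-reassign-partitions` tool and replica placement rules, not this logic):

```python
# Simplified sketch of partition reassignment when a broker is
# decommissioned (illustrative only, not Kafka's actual algorithm).
def assign_partitions(partitions, brokers):
    """Spread partition ids round-robin across the given brokers."""
    return {p: brokers[i % len(brokers)] for i, p in enumerate(partitions)}

# Broker 2 is being decommissioned: its partitions must move to 0 and 1,
# and some other partitions shift as a side effect of rebalancing.
before = assign_partitions(range(6), brokers=[0, 1, 2])
after = assign_partitions(range(6), brokers=[0, 1])
moved = [p for p in range(6) if before[p] != after[p]]
```

Even this toy version shows why the task needs operational care: a naive rebalance moves more partitions than strictly necessary, and each move is real network and disk traffic.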
Cloud-Native DevOps: Simplifying application lifecycle management with AWS | ... – Amazon Web Services
Organizations are migrating to the cloud in order to increase their agility and eliminate undifferentiated heavy lifting. At the same time, they’re embracing DevOps principles in order to deliver functionality faster and improve operational performance. Taken together, it’s possible to deliver agile, reliable applications with less overhead than ever before. However, it’s not always optimal to emulate traditional approaches to DevOps and configuration management in the cloud. No matter where you are in your DevOps journey, join us in this session to learn how to use AWS application lifecycle management services to focus on your mission, not your tooling.
DevOps core principles
CI/CD basics
CI/CD with ASP.NET Core Web API and Angular app
IaC: why and what?
Demo using Azure and Azure DevOps
Docker: why and what?
Demo using Azure and Azure DevOps
Kubernetes: why and what?
Demo using Azure and Azure DevOps
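The CI/CD basics in the outline above reduce to a simple rule: stages run in order and the pipeline stops at the first failure, so a broken build or failing test never reaches deployment. A tiny, tool-agnostic sketch (stage names are illustrative, not tied to Azure DevOps):

```python
# Tiny, tool-agnostic sketch of a CI/CD pipeline: stages run in order and
# the pipeline stops at the first failing stage (names are illustrative).
def run_pipeline(stages):
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:                 # fail fast: later stages never run
            break
    return results

results = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),       # a failing test...
    ("deploy", lambda: True),      # ...prevents deployment
])
```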
Going Serverless - an Introduction to AWS Glue – Michael Rainey
Going "serverless" is the latest technology trend for enterprises moving their processing to the cloud, including data integration and ETL tools. But what does that mean, and when should I use serverless ETL? In this session, we'll dive into the world of Amazon's fully managed data processing service called AWS Glue. With no server to provision or resources to allocate, and an easy-to-populate metadata catalog, AWS Glue allows data engineers to focus on their craft: building data transformations and pipelines. Gaining an understanding of the similarities and differences between traditional ETL tools, such as Oracle Data Integrator, and Glue will prepare attendees for the new world of data integration. Presented at Collaborate 18.
DevOps, Continuous Integration and Deployment on AWS: Putting Money Back into... – Amazon Web Services
Organizations around the globe are leveraging the cloud to accomplish world-changing missions. This session will address how AWS can help organizations put more money toward their mission and scale outreach and operations to achieve more with less. Hear some of AWS’s most advanced customers on how their organizations handle DevOps, continuous integration and deployment. Learn how these practices allow them to rapidly develop, iterate, test and deploy highly-scalable web applications and core operational systems on AWS. The discussion will focus on best practices, lessons learned, and the specific technologies and services they use.
Configuration Management in the Cloud - AWS Online Tech Talks – Amazon Web Services
Learning Objectives:
- Learn how to use AWS OpsWorks, AWS CodeDeploy, and AWS CodePipeline to build a reliable and consistent development pipeline
- Understand continuous integration and delivery for Infrastructure as Code
- Learn how to get started with these services.
Azure: Docker Container orchestration, PaaS (Service Fabric) and High avail... – Alexey Bokov
Deep dive into Azure cloud technologies, starting with common considerations about technology choices and then going deep into some of them. First we start with Azure Container Service and Docker container orchestration using Mesos or Swarm. The next part is about PaaS v2, called Azure Service Fabric - a crash course and a deep dive into some parts of SF. After that we go through high availability and disaster recovery in Azure:
- Azure DNS - cloud API for DNS records hosting
- Traffic Manager – load balancing and fault-tolerance on DNS level
- Azure Load Balancer – load balancing on transport level
- Application Gateway – load balancing on application level
Last part of deck is about IaaS based services and some updates for storage service:
* Azure Batch for computational tasks
* VM Scale sets
* Storage - managed disks and cool storage
DEVNET-1007 Network Infrastructure as Code with Chef and Cisco – Cisco DevNet
Automation of infrastructure is one of the key tenets of DevOps. Chef has been at the vanguard of "Infrastructure as Code", where the configuration and management of your applications and servers is automated and tracked as source code. This infrastructure source code may be tested, shared and tracked just like any other software project. Traditionally, configuration management has meant physical, virtual and cloud servers, but Cisco and Chef are working together to extend this into networking. This session will provide an introduction to Chef and the current state of Cisco integrations, network automation scenarios and the roadmap ahead.
Deep Dive: OpenStack Summit (Red Hat Summit 2014) – Stephen Gordon
This deck begins with a high-level overview of where OpenStack Compute (Nova) fits into the overall OpenStack architecture, as demonstrated in Red Hat Enterprise Linux OpenStack Platform, before illustrating how OpenStack Compute interacts with other OpenStack components.
The session will also provide a grounding in some common Compute terminology and a deep-dive look into key areas of OpenStack Compute, including the:
Compute APIs.
Compute Scheduler.
Compute Conductor.
Compute Service.
Compute Instance lifecycle.
Intertwined with the architectural information are details on horizontally scaling and dividing compute resources as well as customization of the Compute scheduler. You’ll also learn valuable insights into key OpenStack Compute features present in OpenStack Icehouse.
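The Compute scheduler customization mentioned above builds on Nova's filter-scheduler idea: candidate hosts are first filtered by hard constraints, then the survivors are weighed to pick the best fit. A bare-bones sketch in Python (field names and the single RAM filter/weigher are simplifications of what Nova actually does):

```python
# Sketch of the filter-scheduler idea behind OpenStack Compute (Nova):
# filter out hosts that cannot fit the instance, then weigh the rest.
def schedule(hosts, req_ram):
    # Filter step: hard constraint, host must have enough free RAM.
    candidates = [h for h in hosts if h["free_ram"] >= req_ram]
    if not candidates:
        raise RuntimeError("no valid host found")
    # Weigher step: prefer the host with the most free RAM (spreading).
    return max(candidates, key=lambda h: h["free_ram"])["name"]

hosts = [
    {"name": "compute1", "free_ram": 2048},
    {"name": "compute2", "free_ram": 8192},
    {"name": "compute3", "free_ram": 512},
]
chosen = schedule(hosts, req_ram=4096)
```

Customizing the scheduler then amounts to plugging in additional filters (availability zones, host aggregates) and weighers (stacking vs. spreading).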
Developing Distributed High-performance Computing Capabilities of an Open Sci... – Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus... – Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
May Marketo Masterclass, London MUG May 22 2024.pdf – Adele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
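The core trade-off behind the journey described above: event sourcing stores immutable events and derives current state by replaying them, while CRUD stores and mutates the current state directly. A minimal, illustrative contrast in Python (the event names and record shape are invented, not Wix's actual model):

```python
# Contrast sketch: event sourcing derives state by replaying immutable
# events, while CRUD mutates the current record in place (illustrative).
def replay(events):
    """Event sourcing: fold the event log into the current state."""
    state = {}
    for event, payload in events:
        if event == "created":
            state = dict(payload)
        elif event == "price_changed":
            state["price"] = payload
    return state

# Event-sourced: full history is retained, state is a projection of it.
log = [("created", {"sku": "A1", "price": 10}), ("price_changed", 12)]
es_state = replay(log)

# CRUD: one mutable record, no history kept.
crud_state = {"sku": "A1", "price": 10}
crud_state["price"] = 12
```

Both paths end at the same current state; what differs is the auditing and replay the event log buys you, versus the simpler state management CRUD affords.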
How to Position Your Globus Data Portal for Success: Ten Good Practices – Globus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Globus Connect Server Deep Dive - GlobusWorld 2024 - Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Understanding Globus Data Transfers with NetSage - Globus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
SOCRadar Research Team: Latest Activities of IntelBroker - SOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled what has happened over the past few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... - Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success; you're actively contributing to the ongoing development and support of BoxLang.
Globus Compute with IRI Workflows - GlobusWorld 2024 - Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of this work, the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how jobs are run on HPC systems. One of these routes is looking at Globus Compute as a replacement for the current method of managing tasks, and we describe a brief proof of concept showing how Globus Compute could help schedule jobs and serve as a tool to connect compute at different facilities.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. In Session 2 on May 16th, 2024, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, in Session 3 on May 28th, 2024, we addressed remaining questions and concerns.
For more Tendenci AMS events, check out www.tendenci.com/events
Accelerate Enterprise Software Engineering with Platformless - WSO2
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Quarkus Hidden and Forbidden Extensions - Max Andersen
Quarkus has a vast extension ecosystem and is known for its supersonic, subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Paketo Buildpacks: the best way to build OCI images? DevopsDa... - Anthony Dahanne
Buildpacks have been around for more than 10 years! At first, they were used to detect and build an application before deploying it to certain PaaS platforms. More recently, their latest generation, the Cloud Native Buildpacks (a CNCF incubating project), lets us build Docker (OCI) images. Are they a good alternative to the Dockerfile? What are the Paketo buildpacks? Which communities support them, and how?
Come find out in this ignite session.
Large Language Models and the End of Programming - Matt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient... - Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. This is where custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
TechBeats #2
1. Journey from monolith to microservices
Utilizing microservice patterns with monoliths
Chris Gianelloni @wolf31o2
2. In the beginning…
Applause had several ways to deploy and manage software.
• Custom system management tool (sysdeploy)
  • Basically an SSH wrapper for manually created systems
• Custom Docker image management tool (Platypus)
  • Standardized AMIs, built w/ Packer, including Docker daemon
  • Services in Docker containers w/ configuration using SaltStack
  • Provides A/B testing and health checks
• Packer + Chef + Terraform
  • Packer + Chef to bake AMIs
  • Terraform to deploy using ASGs
• Mesosphere DC/OS
  • OSS orchestration for “Docker” containers
3. sysdeploy
A typical “old school” configuration system: written completely in-house, limited in capabilities, and its author has long since departed the company.
• SSH wrapper to copy files and run commands
• No instance management
• No user management
• No rollback features
• No documentation
  • Unfamiliar code base to everyone
  • Unable to look up problems
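In essence, sysdeploy amounted to a loop like the following. This is a hypothetical reconstruction for illustration, not the actual tool; host names, paths, and commands are invented:

```python
import subprocess

def deploy(hosts, artifact, command, dry_run=False):
    """Copy an artifact to each host and run a command there.
    That is the entire 'deployment system': no inventory, no
    rollback, no user management."""
    executed = []
    for host in hosts:
        steps = [
            ["scp", artifact, f"{host}:/opt/app/"],  # copy files
            ["ssh", host, command],                  # run command
        ]
        for argv in steps:
            executed.append(" ".join(argv))
            if not dry_run:
                # check=True aborts the whole run on the first failing
                # host, leaving earlier hosts on the new version
                subprocess.run(argv, check=True)
    return executed

plan = deploy(["web1.example.com", "web2.example.com"],
              "app.tar.gz", "systemctl restart app", dry_run=True)
```

The dry-run plan makes the limitations visible: a failure mid-loop leaves the fleet half-upgraded, with no record of who deployed what.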
4. Platypus
In-house microservice deployment and service management
• Leverages CloudFormation for infrastructure
• Template-based system
  • INI-style configuration files
  • Output lookups
• Leverages SaltStack for some configuration management
  • Uses roles for service management
• Services in Docker containers
• Supports health checking
• Supports A/B deployments
• Supports manual rollback
• Tied to AWS
  • Lots of ELBs
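The “INI-style configuration files” with “output lookups” can be pictured roughly as follows. This is a hypothetical sketch, not Platypus itself; the section, keys, and stack outputs are invented to show how CloudFormation outputs get substituted into service config:

```python
import configparser
from string import Template

# Pretend these came from a CloudFormation DescribeStacks output lookup.
stack_outputs = {"DbEndpoint": "db.internal:5432", "ElbName": "svc-elb-1"}

raw = """
[service]
name = reporting
image = reporting:1.4
db_url = postgres://$DbEndpoint/reports
elb = $ElbName
"""

parser = configparser.ConfigParser()
parser.read_string(raw)

# Resolve every $Output reference against the stack outputs.
resolved = {key: Template(value).substitute(stack_outputs)
            for key, value in parser["service"].items()}
```

The appeal is that service config stays declarative while infrastructure details (endpoints, load balancer names) are injected at deploy time; the cost is that the lookup convention is bespoke and AWS-shaped.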
5. Packer + Chef + Terraform
Utilizes common, public, OSS tools
• Common tools with existing user bases and communities
  • Basically “best of breed” tools
• Packer for AMIs
• Chef for installing and configuring software
• Terraform for deploying baked AMIs
• Plethora of documentation for each tool
• Chef Server optional
• Composable and reusable pieces
  • Output lookups (Chef + Terraform)
6. Mesosphere DC/OS
Mesos, Marathon, and Metronome (and more)
• Consolidated and unified platform
• Leverages common OSS technologies
• Standardized application and service management
• Health checking for services
• Supports single-shot or scheduled tasks
• Service discovery
• Metrics and log collection
• Integrated data services
• Configuration rollbacks
• Canary deployments
• Universe packages
7. Why DC/OS?
Applause chose DC/OS to leverage previous work while also moving to a scalable system built from open source components. This frees the Platform Delivery team to provide new capabilities to the Applause Hosting Platform that serve our business needs.
• Open source with a vibrant and active community
• Strong feature set around an integrated platform
• Ability to colocate diverse workloads
  • Microservices
  • Data services
  • AI / Machine learning / Analytics
• Simple interfaces using API, CLI, and GUI
• Enterprise features and support
• Appreciation for memes
8. DC/OS Architecture
The software layer is where containers execute to provide services. This includes Marathon applications, Metronome jobs, and Mesos frameworks.
The platform layer is where the Mesosphere DC/OS services execute; these run in the host operating system.
The infrastructure layer provides the hosts and operating system on which our stack runs, such as Amazon Web Services.
9. DC/OS Node Types
Master nodes host DC/OS services and provide the orchestration layer, service discovery, and administrative interfaces.
Public agent nodes are public-facing and handle API routing and load balancing of incoming requests to backend services. These are agent nodes with a public role.
Private agent nodes are internal and host all other services. Services communicate via East-West load balancing.
11. Mesosphere Universe packages + Application services
Packages and services which provide base value to the platform, to be used by all Applause services:
• ecr-login - AWS Elastic Container Registry login process
  • Provides and updates credentials for fetching images
• marathon-lb - North-South load balancer
  • Provides ingress load balancing from public slaves to services running in private slaves
• hdfs - Hadoop Distributed File System
  • Provides shared storage for artifacts, logs, etc.
  • Provides storage layer for AI/ML and analytics processes
• linkerd - HTTP proxy
  • Provides service discovery and service mesh
  • Provides East-West load balancing across private slaves
• kong - API gateway
  • Provides API routing to specific endpoints
• spark - Data processing framework
  • Provides processing framework for AI/ML and analytics
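An application service rides on top of these platform pieces as a Marathon app definition. The sketch below is hypothetical (the service name, image, and values are invented, not an actual Applause service); it shows the general JSON shape, including the marathon-lb label that requests North-South routing:

```python
import json

# Hypothetical Marathon app definition for a Docker-packaged service.
app = {
    "id": "/applause/reporting",
    "cpus": 0.5,
    "mem": 512,
    "instances": 2,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/reporting:1.4",
            "network": "BRIDGE",
            "portMappings": [{"containerPort": 8080, "hostPort": 0}],
        },
    },
    # marathon-lb only exposes apps in its HAPROXY_GROUP, so this label
    # is what routes traffic from the public slaves to the service.
    "labels": {"HAPROXY_GROUP": "external"},
    "healthChecks": [{"protocol": "HTTP", "path": "/health"}],
}

# Deployment is a PUT/POST of this document to Marathon's /v2/apps API.
payload = json.dumps(app, indent=2)
```

The image reference is where ecr-login earns its keep: without refreshed ECR credentials on every agent, the pull would fail.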
12. How do we build DC/OS?
• Chef cookbook wrapping the community cookbook: https://supermarket.chef.io/cookbooks/dcos
• Custom recipes
  • Monitoring agent
  • Docker Engine installation and configuration
  • Enhanced Networking (ena) driver
  • Logging aggregation agent
  • System users via data bag
  • DC/OS volumes (volume0, etc.)
  • DC/OS workdir configuration
  • Cookbook “bake_time”
• Packer templates to create “shared” images
  • Start from “official” CentOS base images
  • Patch
  • Reboot
  • Remove old kernels
  • Run Chef
  • Cleanup
13. Chef wrapper “secret sauce”
Disable some Chef resources by modifying them at converge time:

  # These are resources which need to be modified in the upstream dcos
  # cookbook to prevent them from executing at bake time
  [
    { template: '/usr/src/dcos/genconf/config.yaml' },
    { execute: 'dcos-genconf' },
    { file: '/usr/src/dcos/genconf/serve/dcos_install.sh' },
    { execute: 'preflight-check' },
    { execute: 'dcos_install' },
  ].each do |res|
    ruby_block "action-nothing-#{res.keys.first}[#{res.values.first}]" do
      block do
        r = resources(res)
        r.action([:nothing])
      end
      only_if { node['chef-applause-dcos']['bake_time'] }
    end
  end
15. How do we deploy DC/OS?
• Terraform
  • Derived from Mesosphere’s AWS CloudFormation templates
  • Originally a 1:1 translation
  • Evolved over time, more customizations
• VPC per cluster
  • Masters have public addresses / ELB for discovery
  • Private slaves have only internal addresses
  • Public slaves are behind ALB
• Autoscaling Groups + Launch Configs
  • One group per DC/OS role
  • Launch Configs write out node-specific Chef configuration
  • Executes Chef client in cloud-init at boot
• IAM instance profiles used
  • One profile per DC/OS role
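The “Launch Configs write out node-specific Chef configuration” step can be pictured as rendering a small cloud-init script per DC/OS role. A simplified, hypothetical sketch follows; the attribute names, file paths, and chef-client invocation are illustrative, not Applause's actual templates:

```python
from string import Template

# cloud-init user data baked into each role's Launch Configuration.
USER_DATA = Template("""#!/bin/bash
# Write node-specific Chef attributes, then converge at first boot.
cat > /etc/chef/attributes.json <<'EOF'
{"chef-applause-dcos": {"role": "$role", "bake_time": false}}
EOF
chef-client --json-attributes /etc/chef/attributes.json
""")

# One Autoscaling Group, and hence one Launch Config, per DC/OS role.
user_data = {role: USER_DATA.substitute(role=role)
             for role in ("master", "slave", "slave_public")}
```

Note how bake_time flips to false here: the same wrapper cookbook that skipped the DC/OS install resources at Packer bake time runs them for real when an instance boots in its ASG.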
17. Development to deployment workflow for DC/OS
• Chef wrapper changes
  • Pull Request made
  • Tested with ChefDK and Test Kitchen
  • Merged to master
  • Tested again
  • Pushed to Chef server
• Packer job executed
  • Creates AMIs in AWS accounts
• Terraform job executed
  • Creates AWS resources
    • IAM accounts, profiles, instance profiles
    • VPC, subnets, security groups
    • ASGs, ELBs/ALBs
  • Creates Terraform outputs
18. Development to deployment workflow for Applause services
• Application Terraform updated
  • Databases, caches, storage buckets, etc.
• Service repository updated
  • Pull Request made
    • Unit tests, integration tests
  • Merge to master (or deployment branch)
    • Unit tests
    • Docker image
    • Integration tests
    • Code coverage
    • Push image
• Service deployment / promotion to DC/OS
  • Metronome
  • Marathon
  • Kong
19. What now?
• Migrate more workloads from legacy hosting
  • Data science
  • Analytics
  • Build and test
  • Other products
• Integrate services with in-cluster resources
  • Data services
• Migrate scheduled jobs to Metronome
  • Chronos
  • cron
  • Applause platform
• Migrate long-running tasks to Metronome
• Kubernetes in-cluster
20. We’re hiring
Check out our careers page: https://www.applause.com/working-at-applause/