The CloudStack usage service is used to track consumption of resources in Apache CloudStack for reporting and billing purposes. This talk will give an overview of the service before diving deeper into how data is processed from the CloudStack database into the different usage types before being aggregated into billable units or time slices in the usage database.
Adam Dagnall: Advanced S3 compatible storage integration in CloudStack (ShapeBlue)
Adam's slides from his talk at the CloudStack European User group meetup, March 13, London. To provide tighter integration between the S3 compatible object store and CloudStack, Cloudian has developed a connector to allow users and their applications to utilize the object store directly from within the CloudStack platform in a single sign-on manner with self-service provisioning. Additionally, CloudStack templates and snapshots are centrally stored within the object store and managed through the CloudStack service. The object store offers protection of these templates and snapshots across data centres using replication or erasure coding.
Paul Angus - CloudStack Container Service (ShapeBlue)
A walkthrough of the recently released update to ShapeBlue’s CloudStack Container Service (CCS). This update brings CCS bang up-to-date by running the latest version of Kubernetes (v1.11.3) on the latest version of Container Linux. CCS also now makes use of CloudStack’s new CA framework to automatically secure the Kubernetes environments it creates.
What I’m going to talk about
‣Briefly what we do and for whom
‣Where we started
‣The kind of data we deal with
‣How it all fits together
‣A few things we learned along the way
‣Q+A
Jenkins, jclouds, CloudStack, and CentOS by David Nalley (buildacloud)
Setting up continuous integration for a single project can be a pretty daunting task. Doing that for hundreds of projects becomes a challenge of a different magnitude. Not only are there capacity problems, but some tests are destructive to the testing environment, and some have esoteric environment demands. See how this is solved in the real world using Jenkins, jclouds, and CloudStack to build an on-demand build infrastructure.
About David Nalley
David Nalley is the Vice President, Infrastructure at the Apache Software Foundation and a CloudStack PMC member.
Kubernetes is an open source container cluster orchestration platform founded by Google. This presentation covers an overview of its main concepts, plus how it fits into Google Cloud Platform. It was delivered by Kit Merker at DevNexus 2015 in Atlanta.
From VMs to Containers: Decompose and Migrate Old Legacy JavaEE Application (Jelastic Multi-Cloud PaaS)
It is not a trivial task to decompose large monolithic enterprise applications and migrate them to a multi-container architecture without a redesign. In this presentation, we dive deep into the decomposition process and show ways to make it smooth with the help of auto-scaling and load balancing. Learn about the main roadblocks and possible solutions when migrating GlassFish-based applications from VMs to containers, based on real experience.
Go through the results of our latest large-scale study of Docker usage in real environments, and analyze the impact on operations and monitoring.
Lifting the Blinds: Monitoring Windows Server 2012 (Datadog)
Operating systems monitor resources continuously in order to effectively schedule processes.
In this webinar, Evan Mouzakitis (Datadog) discusses how to get operational data from Windows Server 2012 using a variety of native tools.
Common Patterns of Multi Data-Center Architectures with Apache Kafka (Confluent)
Whether you know you want to run Apache Kafka in multiple data centers and need practical advice or you are wondering why some organizations even need more than one cluster, this online talk is for you.
In this short session, we’ll discuss the basic patterns of multi-datacenter Kafka architectures, explore some of the use-cases enabled by each architecture and show how Confluent Enterprise products make these patterns easy to implement.
Visit www.confluent.io for more information.
Sydney-based cloud consultancy Cloudten's Richard Tomkinson shows how masterless Puppet can be used in concert with AWS services, including Lambda, to automate server builds and manage code deployments.
Guaranteeing Storage Performance by Mike Tutkowski (buildacloud)
This session will introduce the basics of primary storage in CloudStack. Additionally, I discuss the challenges of guaranteeing storage performance in a cloud and how by leveraging the latest enhancements to CloudStack, storage administrators can deliver consistent, repeatable performance to 10s, 100s or 1,000s of application workloads in parallel. I'll review the CloudStack enhancements in detail, outline the management benefits they provide and discuss common go-to-market approaches.
About Mike Tutkowski
Mike Tutkowski, a member of the CloudStack PMC, develops software for the Apache Software Foundation's CloudStack project to help drive improvements in its storage component and to integrate SolidFire more deeply into the product.
Cassandra Day Denver 2014: Setting up a DataStax Enterprise Instance on Micro... (DataStax Academy)
Speaker: Joey Filichia, Business Intelligence Consultant
There are many options for Cloud Providers, but according to the Gartner Magic Quadrant 2014 for IaaS Solutions, Amazon AWS and Microsoft Azure are both leaders and visionaries. DataStax provides instructions for provisioning an Amazon Machine Image. This discussion will provide guidance on setting up a single-node DataStax Enterprise cluster using an Ubuntu 14.04 Server and a Windows Azure Virtual Machine. Using the DataStax Enterprise production installation in text mode, we will install DSE end to end during the presentation.
Monitoring Docker at Scale - Docker San Francisco Meetup - August 11, 2015 (Datadog)
In this session I showed how to build a multi-container app from beginning to end, using Docker, Docker Machine, Docker Compose and everything in between. You can even try it out yourself using the link in the deck to a repo on GitHub.
Introduction and Overview of OpenStack for IaaS (Keith Basil)
These slides supported a presentation at the 2013 Red Hat Summit.
It covers:
✦ Introduction to OpenStack
✦ OpenStack Architecture
✦ Understanding the Elastic Cloud
✦ OpenStack in the Real World
Having worked with AKS for more than 3 months, I want to share my experience. I discuss the benefits of AKS and some issues you might encounter. K8s is damn close to a silver bullet in terms of how simple it is to work with.
[devops REX 2016] DevOps at Scale: what we do, what we learned at ... (devops REX)
Adrien Blind and Laurent Dussault, Société Générale @ devops REX 2016
As part of a large Continuous Delivery transformation initiative, we contextualized and deployed a triptych of complementary practices: Agile, Craftsmanship, and DevOps. Highlighting a "double wall of confusion", running awareness workshops, providing hands-on coaching, aligning the goals of Dev and Ops teams, building an automated delivery platform (Jira, GitHub, Puppet, Docker, "APIfication" of the infrastructure)... In this session, two DevOps coaches focus on the support they provided in the field.
One-Man Ops with Puppet & Friends.
If you're getting started in Amazon AWS, here are 7 tools that will help you be successful, a few tips to make your life easier, and some common pitfalls to avoid.
"Puppet and Apache CloudStack" by David Nalley, Citrix, at Puppet Camp San Francisco 2013. Find a Puppet Camp near you: puppetlabs.com/community/puppet-camp/
(ARC402) Deployment Automation: From Developers' Keyboards to End Users' Scre... (Amazon Web Services)
Some of the best businesses today are deploying their code dozens of times a day. How? By making heavy use of automation, smart tools, and repeatable patterns to get process out of the way and keep the workflow moving. Come to this session to learn how you can do this too, using services such as AWS OpsWorks, AWS CloudFormation, Amazon Simple Workflow Service, and other tools. We'll discuss a number of different deployment patterns, and what aspects you need to focus on when working toward deployment automation yourself.
DevOps, Continuous Integration and Deployment on AWS: Putting Money Back into... (Amazon Web Services)
Organizations around the globe are leveraging the cloud to accomplish world-changing missions. This session will address how AWS can help organizations put more money toward their mission and scale outreach and operations to achieve more with less. Hear some of AWS’s most advanced customers on how their organizations handle DevOps, continuous integration and deployment. Learn how these practices allow them to rapidly develop, iterate, test and deploy highly-scalable web applications and core operational systems on AWS. The discussion will focus on best practices, lessons learned, and the specific technologies and services they use.
OpenStack Summit 2013 Hong Kong - OpenStack and Windows (Alessandro Pilotti)
OpenStack summit session about how to deploy Windows instances using Cloudbase-Init and Heat!
The session covers all the issues you might encounter, for example how to choose the right KVM VirtIO drivers.
Do any VMs contain a particular indicator of compromise? E.g. run a YARA signature over all executables on my virtual machines and tell me which ones match.
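A real IoC sweep of this kind would compile YARA rules with the yara-python package and call rules.match() on each file; since that needs a native dependency, the self-contained sketch below illustrates the same sweep idea by matching executables against a set of known-bad SHA-256 hashes instead. The directory layout is hypothetical, and the sample hash is simply the SHA-256 of zero-byte input, standing in for a real indicator.

```python
# Sketch: sweep a directory tree for files whose contents match a known
# indicator of compromise (IoC). A production scan would use YARA rules
# via yara-python; this variant matches SHA-256 hashes instead so it
# runs with only the standard library.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    # SHA-256 of empty input, used here as a stand-in for a real IoC hash
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str) -> list:
    """Return paths under root whose contents match a known-bad hash."""
    hits = []
    for p in Path(root).rglob("*"):
        if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256:
            hits.append(p)
    return hits
```

With YARA the structure is the same: walk the tree, run the compiled rules per file, and collect the matches.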
Bare Metal to OpenStack with Razor and Chef (Matt Ray)
Slides from the OpenStack Spring 2013 Summit workshop presented by Egle Sigler (@eglute) and Matt Ray (@mattray) from Rackspace and Opscode respectively. Please refer to http://anystacker.com/ for additional content.
Docker containers have been making inroads into the Windows and Azure world. Docker has now replaced the traditional Azure IaaS and PaaS services, offering superior container versions that are more responsive, cost-effective, and agile. In this session for the Charlotte Azure User Group, we will take an in-depth look at the intersection of Docker and Azure, and how Docker is empowering next-gen Azure services.
Here's the link to CAG meetup for the event - https://www.meetup.com/Charlotte-Microsoft-Azure/events/fpftgmyxjbjb/
The Building Blocks of QuestDB, a Time Series Database (javier ramirez)
Talk delivered at Valencia Codes Meetup, June 2024.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review some of the changes we have made over the past two years to deal with late and unordered data, non-blocking writes, read-replicas, and faster batch ingestion.
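Rich time semantics means operations like time-bucketed aggregation are built into the query language (QuestDB exposes this as SAMPLE BY). As a rough illustration of the semantics, not of QuestDB's implementation, here is the same operation in plain Python:

```python
# Sketch of time-bucket downsampling, the kind of operation a
# time-series database offers natively (e.g. QuestDB's SAMPLE BY).
# Timestamps are epoch seconds; purely illustrative.
from collections import defaultdict

def sample_by(rows, bucket_seconds):
    """Average the value column per time bucket.

    rows: iterable of (timestamp, value) pairs.
    Returns a sorted list of (bucket_start, avg_value).
    """
    buckets = defaultdict(list)
    for ts, value in rows:
        buckets[ts - ts % bucket_seconds].append(value)
    return sorted((b, sum(v) / len(v)) for b, v in buckets.items())

rows = [(0, 1.0), (30, 3.0), (65, 10.0)]
print(sample_by(rows, 60))  # [(0, 2.0), (60, 10.0)]
```

A database that treats timestamps as first-class can push this bucketing into the storage layer instead of materializing every row first.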
There was a time when almost every software component required paying for a license. Fortunately, thanks to free and open source software, today you can build practically any application using free components.
But if the software is free, who develops it? Does the free software community work altruistically? Can you develop free software professionally? In fact, some say that open source as we knew it no longer exists, and that what we have today is something else.
In this talk I will discuss how open source code can be monetized, and some of the conflicts you may run into along the way.
I will also explain how we at QuestDB develop an open source database while keeping a stable team that makes a living from it. I will also comment on some problematic situations that very prominent projects have faced, or are facing today.
QuestDB: The building blocks of a fast open-source time-series database (javier ramirez)
(talk delivered at OSA CON 23)
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed.
We will learn how it deals with data ingestion, and which SQL extensions it implements for working with time-series efficiently.
We will also review some of the changes we have made over the past two years to deal with late and unordered data, non-blocking writes, read-replicas, and data deduplication.
How we built QuestDB Cloud, a Kubernetes-based SaaS around QuestDB... (javier ramirez)
QuestDB is a high-performance open source database. Many people told us they would like to use it as a service, without having to manage the machines. So we got to work on a solution that would let us launch QuestDB instances with fully managed provisioning, monitoring, security, and upgrades.
A few Kubernetes clusters later, we managed to launch our QuestDB Cloud offering. This talk is the story of how we got there. I will talk about tools such as Calico, Karpenter, CoreDNS, Telegraf, Prometheus, Loki, and Grafana, but also about challenges such as authentication, billing, and multi-cloud, and about what you have to say no to in order to survive in the cloud.
Ingesting Over Four Million Rows Per Second With QuestDB Timeseries Database ... (javier ramirez)
How would you build a database to support sustained ingestion of several hundreds of thousands rows per second while running near real-time queries on top?
In this session I will go over some of the technical decisions and trade-offs we applied when building QuestDB, an open source time-series database developed mainly in Java, and how we can achieve over four million row writes per second on a single instance without blocking or slowing down the reads. There will be code and demos, of course.
We will also review some of the changes we have made over the past two years to deal with late and unordered data, non-blocking writes, read-replicas, and faster batch ingestion.
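One common way to handle late and unordered data, sketched below under the assumption of a simple commit-lag design rather than QuestDB's actual internals, is to keep recent rows in a sorted in-memory buffer and commit only rows older than the newest timestamp minus a lag:

```python
# Sketch of an out-of-order ingestion buffer: rows within a "lag"
# window stay in memory and are kept sorted; anything older than the
# newest timestamp minus the lag is committed in timestamp order.
# Illustrates the general commit-lag idea, not QuestDB's actual code.
import bisect

class OooBuffer:
    def __init__(self, lag):
        self.lag = lag
        self.pending = []    # sorted list of (ts, value)
        self.committed = []  # rows flushed in timestamp order

    def ingest(self, ts, value):
        bisect.insort(self.pending, (ts, value))
        watermark = self.pending[-1][0] - self.lag
        # flush everything at or below the watermark, oldest first
        while self.pending and self.pending[0][0] <= watermark:
            self.committed.append(self.pending.pop(0))

buf = OooBuffer(lag=10)
for ts in [100, 95, 103, 120]:   # 95 arrives late
    buf.ingest(ts, None)
print([ts for ts, _ in buf.committed])  # [95, 100, 103]
```

The late row (95) still lands in its correct position because the sort happens inside the lag window before anything is committed; rows later than the watermark (120 here) wait in the buffer.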
Deduplicating and analysing time-series data with Apache Beam and QuestDB (javier ramirez)
Time series data pipelines tend to prioritise speed and freshness over completeness and integrity. In such scenarios, it is very common to ingest duplicate data, which may be fine for many analytical use cases, but is very inconvenient for others.
There are many open source databases built specifically for the speed and query semantics of time series, and most of them lack automatic deduplication of events in near real-time. One such database is QuestDB, which requires a manual batch process to deduplicate ingested data.
In this talk, we will see how we can successfully use Apache Beam to deduplicate streaming time series, which can then be analysed by a time series database.
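Conceptually, the dedup step keys each event by (series, timestamp) and drops repeats. In Apache Beam this would be a stateful DoFn or beam.Distinct inside a window; the self-contained sketch below uses a plain generator with a bounded seen-set, and all names and sizes are illustrative:

```python
# Sketch of streaming deduplication keyed on (series, timestamp).
# A Beam pipeline would express this as a stateful DoFn or Distinct
# over a window; here it is a plain generator for illustration.
from collections import OrderedDict

def dedupe(events, max_keys=10_000):
    """Yield events whose (series, ts) key has not been seen recently."""
    seen = OrderedDict()
    for series, ts, value in events:
        key = (series, ts)
        if key in seen:
            continue                    # drop the duplicate
        seen[key] = True
        if len(seen) > max_keys:
            seen.popitem(last=False)    # evict the oldest key
        yield series, ts, value

stream = [("cpu", 1, 0.5), ("cpu", 1, 0.5), ("cpu", 2, 0.7)]
print(list(dedupe(stream)))  # [('cpu', 1, 0.5), ('cpu', 2, 0.7)]
```

The bounded seen-set is the key trade-off: it keeps memory constant but can only catch duplicates that arrive within the retention window, which is usually acceptable for near-real-time pipelines.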
Relational databases were created a long time ago for a simpler world. Even if they are still awesome tools for generic workloads, there are some things they cannot do well.
In this session I will speak about purpose-built databases that you can use for specific business scenarios. We will see the type of queries you can run on a Graph database, a Document Database, and a Time-Series database. We will then see how a relational database could also be used for the same use cases, just in a much more complex way.
Your Timestamps Deserve Better than a Generic Database (javier ramirez)
If you are storing records with a timestamp in your database, it is very likely a time series database can make your life easier.
However, time series databases are still the great unknown for a large part of the tech community.
In this talk, I will show you what use cases they are good for, what they give you that you cannot get from a traditional database, and when it is a good idea (and when it is not) to use them.
For the demos, we will be using QuestDB, the fastest open-source time series database.
How to design a database that can ingest over four million ... (javier ramirez)
In this session I will cover the technical decisions we made when building QuestDB, an open source, Postgres-compatible time-series database, and how we manage to write over four million rows per second without blocking or slowing down queries.
I will talk about things like (zero) garbage collection, instruction vectorization using SIMD, rewriting instead of reusing to shave off microseconds, taking advantage of advances in processors, hard drives, and operating systems (such as io_uring support), and the balance between user experience and performance when considering new features.
Processing and analysing streaming data with Python. Pycon Italy 2022 (javier ramirez)
Data used to be a batch thing, but more and more we get unbounded streams of data, fast or slow, that we need to process and analyse in near real time.
In this talk I’ll show you how you can use Apache Flink and QuestDB to build reliable streaming data pipelines that can grow as much as you need.
QuestDB: ingesting a million time series per second on a single instance. Big... (javier ramirez)
In this session I will show you the technical decisions we made when building QuestDB, the open source, Postgres compatible, time-series database, and how we can achieve a million row writes per second without blocking or slowing down the reads.
AWS services and infrastructure, and the upcoming region in Aragón (javier ramirez)
AWS is building an infrastructure region in Aragón. OK, but what does that mean? Is it so different from a conventional data center, or from other cloud providers? (Spoiler: yes). In this session I explain why. Video available at https://catedrasamcadt.unizar.es/noticias/el-momento-tecnologico-actual-contado-por-trabajadores-de-amazon-web-services/
What is this serverless development thing? Which languages can I use? How do I handle things like authentication, saving to a database, or sending notifications? Does this scale? I will try to answer all these questions, and a few more, in this session, where I will do a small demo of building a very simple app and deploying it to the cloud without worrying about managing infrastructure. Talk first delivered at AlcarriaConf 2021.
AWS launched publicly in March 2006 with just one service, starting the age of the public cloud. You might think that after 15 years everything in cloud has already been invented, but that's simply not the case.
In this session I want to show you how AWS is reinventing the cloud in areas like computing, machine learning, databases and analytics, or cloud infrastructure.
Real-time data analytics with Apache Flink and Apache Beam (javier ramirez)
Working in real time with fast-moving data is not trivial, especially at high data volumes. Apache Flink and Apache Beam are designed specifically for that use case. In this talk I will cover the challenges of real-time analytics, the architecture of Apache Flink, what Apache Beam is, and how companies use these tools for everything from trivial processes to handling billions of events per day with millisecond latencies. And of course, we will do a demo :)
Getting started with streaming analytics: Setting up a pipeline (javier ramirez)
In this session I will show you how to create a simple streaming analytics pipeline, first using open source tools and developing locally, then moving to a VM, and finally moving to fully managed AWS services. The session will serve as an introduction to some details of Apache Kafka, Apache Flink, ElasticSearch, Amazon Managed Streaming for Kafka, Kinesis Data Analytics, and Amazon ElasticSearch. It will be an almost slideless presentation, as I will spend most of the time at the command line and in the IDE.
Getting started with streaming analytics: Deep Dive (javier ramirez)
Now that we know how to create simple streaming analytics pipelines, it is time to learn something more interesting. In this session I will show you how to add Complex Event Processing to your Apache Flink (or Kinesis Data Analytics) application using Java. For those of you who prefer SQL, I will show you how to run streaming analytics using only SQL.
Getting started with streaming analytics: streaming basics (1 of 3) (javier ramirez)
In this webinar we explain some of the problems of streaming analytics, and why they are different from batch/big data analytics. Then we introduce some basic streaming concepts, like event queues, event processors, event time vs processing time, and delivery guarantees. We end this first part of the series by presenting a few of the most common open source components for streaming (Kafka, Spark, Flink, Cassandra, or ElasticSearch) and we mention the different options you have to run them on AWS.
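The event-time vs processing-time distinction is easiest to see with a toy example: the same events, windowed by each notion of time, produce different counts. A minimal sketch, with timestamps in seconds and values purely illustrative:

```python
# Sketch: three events counted in 10-second windows, once by event
# time (when they happened) and once by processing time (when they
# arrived). A late event lands in different windows under each notion.
from collections import Counter

events = [
    # (event_time, processing_time)
    (1, 2),
    (9, 11),    # late: happened in window 0-10, processed in 10-20
    (12, 13),
]

def window_counts(times, size=10):
    """Count items per window of `size` seconds, keyed by window start."""
    return dict(Counter(t - t % size for t in times))

print(window_counts(t for t, _ in events))  # event time: {0: 2, 10: 1}
print(window_counts(p for _, p in events))  # processing time: {0: 1, 10: 2}
```

Delivery guarantees and watermarks exist precisely to reconcile these two clocks: a framework like Flink waits (up to a bound) for late events so that event-time windows come out right.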
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus... (Globus)
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... (Globus)
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Developing Distributed High-performance Computing Capabilities of an Open Sci... (Globus)
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam (takuyayamamoto1800)
These slides show a simulation example and how to compile the solver.
The Helmholtz equation can be solved with helmholtzFoam, and the Helmholtz equation with uniformly dispersed bubbles can be simulated with helmholtzBubbleFoam.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
GraphSummit Paris - The art of the possible with Graph TechnologyNeo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its supersonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
How to Position Your Globus Data Portal for Success: Ten Good PracticesGlobus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Enhancing Research Orchestration Capabilities at ORNL.pdfGlobus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
5. Adding a new server:
* call (using a landline) or send a fax to the provider
* pay via bank transfer
* wait for a few days/weeks
* set up the server in your own server room
* hope it won't break
6. Deploying software:
* code locally on your OS
* submit to CVS and manually build
* send package and SQL separately to IT
* wait until the time slot they give you (next week, probably)
* test (by hand) that everything is working
* hope it won't break
9. Devops work areas
Provisioning infrastructure
Deploying with confidence
Monitoring and alerting
Security and disaster prevention
Self-healing
Performance
10. Provisioning infrastructure: AppEngine
“Zero ops” applications. Just deploy and forget*
Of course you still need to worry about monitoring, backups, security… but infrastructure and scaling are automatic
*you need to adjust to the sandbox
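The “just deploy” model works because App Engine's standard environment only expects a request handler from you — no server process, load balancer, or scaling configuration. As a minimal sketch (the handler name and response text are illustrative, not from the talk), a plain WSGI callable is all the application code such a platform needs:

```python
# Minimal WSGI application: a platform like App Engine can serve a callable
# like this directly; provisioning and scaling happen outside our code.
def app(environ, start_response):
    body = b"Hello from the cloud"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Local smoke test without any server: call the WSGI interface directly.
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

response_body = b"".join(app({}, fake_start_response))
print(captured["status"], response_body)  # 200 OK b'Hello from the cloud'
```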
27. Google Cloud Storage
Static files with free CDN for public content
Very cheap (up to $0.01 per GB/month)
Convenient command line for copying, managing or rsyncing files
40. Some data center outages reported in 2015:
* Amazon Web Services
* Apple iCloud
* Microsoft Azure
* IBM Softlayer
* Google Cloud Platform
* And of course every hosting provider with scheduled maintenance operations (Rackspace, DigitalOcean, OVH...)
56. Storage and big data services
* Cloud SQL: Managed MySQL
* Cloud Datastore: NoSQL
* BigQuery: Big data analytics
* Dataflow: Streaming big data
* Dataproc: Managed Hadoop and Spark
* Pub/Sub: High-performance message queue
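The point of a managed message queue like Pub/Sub is to decouple publishers from subscribers: producers enqueue and move on, consumers process at their own pace. A toy stdlib sketch of that pattern (this is an illustration of the idea, not the google-cloud-pubsub API):

```python
import queue
import threading

# A "topic" modeled as an in-process queue; Cloud Pub/Sub provides the same
# decoupling as a managed, distributed service.
topic = queue.Queue()
received = []

def subscriber():
    while True:
        msg = topic.get()    # blocks until a message arrives
        if msg is None:      # sentinel: shut down
            break
        received.append(msg)

t = threading.Thread(target=subscriber)
t.start()

for event in ["signup", "payment", "logout"]:
    topic.put(event)         # publishers just enqueue and move on
topic.put(None)
t.join()

print(received)              # ['signup', 'payment', 'logout']
```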
57. Let's add CDN and DNS
So you can manage all your services from a single point
58. Why not the cloud
It's too slow
I am limited in what I can do
I will get vendor lock-in
I cannot legally host my data in the cloud
Google will spy on my data
59. Google will spy on my data
* GCP is not Gmail: it comes with an SLA
* Encryption at rest and in transit
* Bring your own keys
* ISO standards
60. Cannot host my data on the cloud
Public/private hybrid clouds
Cloud carrier interconnect, direct peering and CDN interconnect
72. javier ramirez - @supercoco9
https://teowaki.com
Ačiū - Thank you
Editor's Notes
you know some of the names of traditional, non-distributed relational databases:
MySQL
MariaDB
Oracle
PostgreSQL
SQL Server
IBM DB2
SQLite
SAP HANA
“A squirrel did take out half of our Santa Clara data centre two years back [in 2011].”
— Mike Christian, Yahoo Director of Engineering, at a conference in 2012
that's the reason why Google wraps its submarine fibre cables in Kevlar, so shark bites won't damage them
Rackspace was once taken down when a truck driver had an accident during a delivery to the data centre
hurricanes, truck drivers, sharks eating trans-oceanic cables, and of course electronic and mechanical failures, human errors, and malicious attacks
Parameters W and R can also be configured to LOCAL_QUORUM, so they need agreement only from local nodes and not across datacenters
by combining global quorum for writes and local quorum for reads, Netflix gets ~500 ms from the time data is written in one region until it can be read from another, while keeping reads very fast
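The guarantee behind these settings is that a read quorum and a write quorum must overlap in at least one replica, which holds exactly when W + R > N. A small sketch with the standard Dynamo/Cassandra parameters (N replicas, W write acknowledgements, R read responses):

```python
def quorums_overlap(n, w, r):
    """A write acknowledged by W replicas and a read from R replicas are
    guaranteed to share at least one replica exactly when W + R > N."""
    return w + r > n

# QUORUM on N replicas is floor(N/2) + 1
n = 3
quorum = n // 2 + 1                         # 2 of 3
assert quorums_overlap(n, quorum, quorum)   # QUORUM writes + QUORUM reads: consistent
assert not quorums_overlap(n, 1, 1)         # ONE/ONE: a read may miss the latest write
print("quorum =", quorum)                   # quorum = 2
```

LOCAL_QUORUM applies the same condition within a single datacenter, which is why it avoids cross-datacenter round trips.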
of course this doesn't give you high availability, but it at least prevents data loss to an extent (depending on your backup practices)