Speaker: Markus Karg
This talk shares some insight into the current project status and the road ahead of the technology formerly known as JAX-RS. If you maintain existing JAX-RS applications, or write new RESTful Microservices in Java, this is the strategic session you need to attend.
Introducing Scylla Manager: Cluster Management and Task Automation (ScyllaDB)
By centralizing cluster administration and automating recurring tasks, Scylla Manager brings greater predictability and control to Scylla-based environments.
In this webinar, you will learn about Scylla Manager’s recurrent repair capabilities, including why recurrent repair is critical for Scylla production cluster administration, and why keeping it manual results in errors and suboptimal performance.
We will present a demo of how to set up and run recurrent and ad-hoc repairs on a Scylla cluster, and give you a sneak peek of the Scylla Manager roadmap, which includes cluster management, rolling upgrades, and integrated monitoring.
No matter how resilient your database infrastructure is, backups are still needed to defend against catastrophic failures, be it the unlikely hardware failure of all data centers or the more likely, all-too-human user error. Acknowledging the importance of good backup procedures, Scylla Manager now natively supports backup and restore operations. In this talk, we will learn more about how that works and the guarantees provided, as well as how to set it up to ensure maximum resiliency for your cluster.
Sematext's DevOps Evangelist, Stefan Thies (@seti321), takes a Docker Logging tour through the different log collection options Docker users have, the pros and cons of each, specific and existing Docker logging solutions, tooling, the role of syslog, log shipping to ELK Stack, and more. Q&A session at end.
Real Time Data Processing With Spark Streaming, Node.js and Redis with Visual... (Brandon O'Brien)
Contact:
https://www.linkedin.com/in/brandonjobrien
@hakczar
Code examples available at https://github.com/br4nd0n/spark-streaming and https://github.com/br4nd0n/spark-viz
A demo and explanation of building a streaming application using Spark Streaming, Node.js and Redis, with a real-time visualization. Includes discussion of Spark and Spark Streaming internals, including RDD partitioning, code and data distribution, and cluster resource allocation.
Scylla allows us to create highly performant and scalable systems. However, to achieve good results and prevent our Scylla cluster from being overloaded, we need to properly write our client application and configure the driver. Join this session to learn some practical tips that can help you make your applications faster and more available.
DataStax: Backup and Restore in Cassandra and OpsCenter (DataStax Academy)
Cassandra and OpsCenter offer a range of backup and restore capabilities. I will start with a basic overview of Cassandra backup/restore, walking through the operational steps to provide the understanding required to perform an on-disk backup and restore. Expanding on this overview, I'll cover the limitations (including schema requirements) and their impact on the restore process. Further, I'll discuss commit log archiving and point-in-time restore operations. After covering the underlying operations, I'll wrap up with a discussion of how OpsCenter automates this process and leverages S3.
A 20-minute talk about how WePay runs Airflow. Discusses usage and operations, and also covers running Airflow on Google Cloud.
Video of the talk is available here:
https://wepayinc.box.com/s/hf1chwmthuet29ux2a83f5quc8o5q18k
Sparking up Data Engineering: Spark Summit East talk by Rohan Sharma (Spark Summit)
Learn about the Big Data processing ecosystem at Netflix and how Apache Spark sits in this platform. I talk about typical data flows and data pipeline architectures used at Netflix, and address how Spark is helping us gain efficiency in our processes. As a bonus, I'll touch on some unconventional use cases, contrary to typical warehousing/analytics solutions, that are being served by Apache Spark.
Jorge de la Cruz [Veeam Software] | RESTful API – How to Consume, Extract, St... (InfluxData)
Jorge de la Cruz [Veeam Software] | RESTful API – How to Consume, Extract, Store, and Visualize Data with InfluxDB and Grafana | InfluxDays Virtual Experience NA 2020
Homologous Apache Spark Clusters Using Nomad with Alex Dadgar (Databricks)
Nomad is a modern cluster manager by HashiCorp, designed for both long-lived services and short-lived batch processing workloads. The Nomad team has been working to bring a native integration between Nomad and Apache Spark.
By running Spark jobs on Nomad, both Spark developers and the engineering organization benefit. Nomad’s architecture allows it to have an incredibly high scheduling throughput. To demonstrate this, HashiCorp scheduled 1 million containers in less than five minutes. That speed means that large Spark workloads can be immediately placed, minimizing job runtime and job start latencies.
For an organization, Nomad offers many benefits. Since Nomad was designed for both batch and services, a single cluster can service both an organization's Spark workloads and all of its service-oriented jobs. That, coupled with the fact that Nomad uses bin-packing to place multiple jobs on each machine, means organizations can achieve higher density, which saves money and makes capacity planning easier.
In the future, Nomad will also have the ability to enforce quotas and apply chargebacks, allowing multi-tenant clusters to be easily managed. To further increase the performance of Spark on Nomad, HashiCorp would like to ingest HDFS locality information to place the compute near the data.
Scaling an invoicing SaaS from zero to over 350k customers (Speck&Tech)
ABSTRACT: Fatture in Cloud was born in late 2013 on a single-server machine and scaled from zero to 35k customers by the end of 2018. Then we faced the mandatory electronic invoicing that came into effect in Italy on 1st January 2019, and we experienced huge growth to 350k customers in a few months. In these 5 years, I've learned a lot about cloud architecture, scalability, optimization, and DevOps, and we eventually achieved 99.99% uptime even during the huge growth period.
BIO: Daniele Ratti is the Founder and CEO of Fatture in Cloud, currently the leading invoicing platform in Italy, counting more than 350k customers.
A presentation about the deployment of an ELK stack at bol.com
At bol.com we use Elasticsearch, Logstash and Kibana in a log-search system that allows our developers and operations people to easily access and search through log events coming from all layers of our infrastructure.
The presentation explains the initial design and its failures, then the latest design (mid-2014) and its improvements, and finally gives a set of tips on scaling Logstash and Elasticsearch.
These slides were first presented at the Elasticsearch NL meetup on September 22nd 2014 at the Utrecht bol.com HQ.
This presentation covers how to set up an Airflow instance as a cluster spanning multiple machines, instead of the traditional single-machine deployment. In addition, it covers an added step you can take to ensure high availability in that cluster.
Building Data Product Based on Apache Spark at Airbnb with Jingwei Lu and Liy... (Databricks)
Building data products requires a Lambda Architecture to bridge batch and streaming processing. AirStream is a framework built on top of Apache Spark that allows users to easily build data products at Airbnb. It has proven that Spark is impactful and useful in production for mission-critical data products.
On the streaming side, hear how AirStream integrates multiple ecosystems with Spark Streaming, such as HBase, Elasticsearch, MySQL, DynamoDB, Memcache and Redis. On the batch side, learn how to apply the same computation logic in Spark over large data sets from Hive and S3. The speakers will also go through a few production use cases, and share several best practices on how to manage Spark jobs in production.
Modernizing Infrastructures for Fast Data with Spark, Kafka, Cassandra, React... (Lightbend)
The Big Data industry emerged in response to the unprecedented sizes of data sets collected by Internet companies and the particular needs they had to store and use that data.
Today, the need to process that data more quickly is morphing Big Data architectures into Fast Data architectures. This session discusses the forces driving this trend and the most popular tools that have emerged to address particular design challenges:
Spark - For sophisticated processing of data streams, as well as traditional batch-mode processing.
Kafka - For durable and scalable ingestion and distribution of data streams.
Cassandra - For scalable, flexible persistence.
Reactive Platform: Lagom, Akka, and Play - For integration of other components and building microservices.
Mesos - For cluster resource management.
---
About the presenter:
Dean Wampler, Ph.D. is the Architect for Big Data Products and Services and a member of the office of the CTO at Lightbend. He is designing the product strategy and technical architecture for Lightbend's Spark on Mesos products and emerging streaming tools built around Spark and Lightbend’s ConductR and Akka products. Dean has written books on Scala, Functional Programming, and Hive for O'Reilly. He speaks at and co-organizes many industry conferences. He also organizes several Chicago-area user groups and contributes to many open-source projects, including Apache Spark. Dean has a Ph.D. in Physics from the University of Washington.
Financial Times is increasing its digital revenue by allowing business people to make data-driven decisions. Providing an Airflow-based platform where data engineers, data scientists, BI experts and others can run language-agnostic jobs was a huge swing. One of the most successful steps in the platform's development was building our own execution environment, allowing stakeholders to self-deploy jobs without cross-team dependencies, on top of the practically unlimited scale of Kubernetes.
In this talk we will share how we have integrated and extended Airflow at Financial Times. The main topics we will cover include:
- Providing team-level security isolation
- Removing cross-team dependencies
- Creating an execution environment for independently creating and deploying R, Python, Java, Spark, etc. jobs
- Reducing latency when sharing data between task instances
- Integrating all these features on top of Kubernetes
A fairly short (26 slides) presentation covering the GlassFish community and product (v2 and upcoming modular v3) as well as Java EE 5 and upcoming Java EE 6.
Simplifying Migration from Kafka to Pulsar - Pulsar Summit NA 2021 (StreamNative)
Complex, large-scale implementations of OSS systems, Kafka included, involve customizations and in-house developed tools and plugins. Transitioning from one system to another is a complicated process, and making it iterative increases the chance of success. In this talk we'll take a look at the Kafka Adaptor, which enables the use of Kafka Connect sinks in the Pulsar ecosystem.
Kubernetes is great for deploying stateless containers, but what about the big data ecosystem? Episode 3 of our Kubernetes series covers how DC/OS enables you to connect your Kubernetes-based applications to co-located big data services.
Slides cover:
1. Why persistence is challenging in distributed architectures
2. How DC/OS helps you take advantage of the services available in the big data ecosystem
3. How to connect Kubernetes to your data services through networking
4. How Apache Flink and Apache Spark work with Kubernetes to enable real-time data processing on DC/OS
Cloud State of the Union for Java Developers (Burr Sutter)
This presentation provides a broad overview of what is going on in the Cloud computing world for Java developers, presented on Dec 21st, 2010 at the Atlanta Java Users Group (ajug.org); no audio was recorded.
The features released between Java 11 and Java 17 have given developers a greater opportunity to improve application development productivity, as well as code expressiveness and readability. In this deep-dive session, you will discover all the recent Project Amber features added to the Java language, such as Records (including Records serialization), Pattern Matching for `instanceof`, switch expressions, sealed classes, and hidden classes. The main goal of Project Amber is to bring Pattern Matching to the Java platform, which will impact both the language and the JDK APIs. You will discover record patterns, array patterns, as well as deconstruction patterns, through constructors, factory methods, and deconstructors.
You can find the code shown here: https://github.com/JosePaumard/devoxx-uk-2021
This talk (delivered at QConLondon 2016) covers the evolution of Coursera's nearline architecture, delves into our latest generation system, and then covers the flagship application of the architecture (evaluating programming assignments).
Dynamic Languages & Web Frameworks in GlassFish (IndicThreads)
Dynamic languages such as JRuby, Groovy, and Jython are increasingly playing an important role in the web these days. The associated frameworks, such as Rails, Grails, and Django, are gaining importance because of the agility they provide.
The GlassFish project provides an easy-to-use and robust development and deployment platform for hosting these web applications. It also enables the various languages to leverage the investment in your existing Java Platform, Enterprise Edition (Java EE platform) infrastructure. This session gives an overview of various Dynamic Languages and associated Web frameworks that can be used on the GlassFish project.
It starts with a brief introduction to JRuby and details on how the GlassFish project provides a robust development and deployment platform for Rails, Merb, Sinatra and other similar applications without pain. As a basis for further discussion, this presentation shows the complete lifecycle for JRuby-on-Rails applications on GlassFish v2 and v3. It discusses the various development options provided by GlassFish v3, demonstrates how popular Rails applications can be easily deployed on GlassFish without any modification, and shows how the v3 Gem can be used as an effective alternative to WEBrick and Mongrel. It also demonstrates debugging of Rails applications using the NetBeans IDE. For enterprise users, it shows how JMX and other mechanisms can be used to monitor Rails applications.
It also talks in detail about the Groovy/Grails and Python/Django development and deployment models in the context of GlassFish v3. By following the simple deployment steps the presentation shows, developers will be able to deploy their existing web applications on the GlassFish project. The session also describes the known limitations and workarounds for each of them.
The talk will show a working sample created in different frameworks and deployed on GlassFish v3. The demo will show how different features of the underlying GlassFish runtime are easily accessible to the frameworks running on top of it.
SouJava May 2020: Apache Camel 3 - the next generation of enterprise integration (Claus Ibsen)
In this session, we'll discuss:
- What's Apache Camel: an overview of Camel, what you use it for, and why you should care.
- Camel 3: demos of how Camel 3, Camel K and Camel Quarkus all work together, with insights into Camel's role in the next major release of Red Hat Integration products.
- Camel K: this serverless integration platform provides low-code/no-code capabilities, where integrations can be snapped together quickly using the power of integration patterns and Camel's extensive set of connectors.
- Camel Quarkus: combining the fast runtime of Quarkus with Knative and Camel K brings awesome serverless features, such as auto-scaling, scaling to zero, and event-based communication, together with great integration capabilities from Apache Camel.
You will also hear about the latest Camel sub-project, Camel Kafka Connector, which makes it possible to use all the Camel components as Kafka Connect connectors.
Finally, we share details of the roadmap for what is coming up in the Camel projects.
After the presentation, we have about 30 minutes of Q&A, answering questions from the audience.
Kubernetes Multitenancy - KubeCon NA 2019 (Karl Isenberg)
Cruise has been working on self-driving cars for six years and growing exponentially for most of that time. Two years ago they started using Kubernetes, betting on namespace-level multitenancy to provide isolation between teams and projects. Today they have over 40 internal tenants, 100,000 pods, 4,000 nodes, and… an embarrassing number of KubeDNS replicas.
This session will take you through the motivations, story, and results of migrating to multitenant Kubernetes, along with some hard-earned Pro Tips from the trenches.
You'll also learn about the open source tooling they built around Spinnaker, Vault, Google Cloud, and Istio in order to integrate with their multitenant Kubernetes clusters.
Come see how they went from barely isolated to very isolated and saved a few million dollars doing it!
Similar to Jakarta RESTful Web Services: Status Quo and Roadmap | JakartaOne Livestream
Applied Domain-Driven Design Blueprints for Jakarta EE (Jakarta_EE)
Domain-Driven Design (DDD) is an architectural approach that strongly focuses on materializing the business domain in enterprise software through disciplined object-oriented analysis. This session demonstrates first-hand how DDD can be elegantly implemented using Jakarta EE via an open source project named Cargo Tracker.
Cargo Tracker maps DDD concepts like entities, value objects, aggregates and repositories to Jakarta EE code examples in a realistic application. We will also see how DDD concepts like the bounded context are invaluable to designing pragmatic microservices.
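As a rough, hypothetical illustration of that mapping (invented for this summary, not actual Cargo Tracker code), a DDD value object and entity might look like this in Jakarta Persistence:

```java
import jakarta.persistence.Embeddable;
import jakarta.persistence.Embedded;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

// A DDD value object: no identity of its own, compared by value,
// mapped as an embeddable.
@Embeddable
class TrackingId {
    private String id;

    protected TrackingId() { } // required by JPA

    TrackingId(String id) { this.id = id; }

    public String value() { return id; }
}

// A DDD entity: has identity and a lifecycle, mapped as a JPA entity
// that embeds the value object.
@Entity
public class Cargo {

    @Id
    @GeneratedValue
    private Long surrogateId; // persistence identity, hidden from the domain

    @Embedded
    private TrackingId trackingId; // domain identity as a value object

    protected Cargo() { } // required by JPA

    public Cargo(TrackingId trackingId) { this.trackingId = trackingId; }
}
```

Aggregates and repositories follow the same pattern: the aggregate root is an entity like the one above, and a repository wraps the EntityManager behind a domain-oriented interface.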
Jakarta EE 9 Milestone Release Party
Presenter: Kevin Sutter, IBM - co-release lead for Jakarta EE 9, co-project lead for the Jakarta EE Platform Project, member of the EE4J PMC, and member of the Jakarta EE Steering and Spec Committees
Kubernetes Native Java and Eclipse MicroProfile | EclipseCon Europe 2019 (Jakarta_EE)
In this presentation we will cover some of those challenges, discuss how one of those standards efforts (Eclipse MicroProfile) has helped move the Java community forward, and give a hint at some changes happening in the Java language and frameworks, with the Quarkus project as an example.
Speaker: Mark Little, Red Hat
Jakarta for dummEEs | JakartaOne Livestream (Jakarta_EE)
Speaker: Kevin Sutter
We have finally made some real progress with Jakarta EE in 2019! Specifications, APIs, TCKs, Maven artifacts, Implementations, Releases, and, yes, even a little bit of required process. If you want to get caught up quickly on all of the activities, this session is for you. We will discuss the potential impact to both implementors as well as application developers as we move away from the JCP-defined javax world to the open-source world of Jakarta EE.
Jakarta EE Meets NoSQL at the Cloud Age | JakartaOne Livestream (Jakarta_EE)
Speaker: Otavio Santana
Jakarta NoSQL is the first specification of the new era of Java EE, now at its Eclipse Foundation home as Jakarta EE. The goal of this specification is to ease the integration of NoSQL databases into Java applications, with a standard API that supports more than 30 NoSQL vendors, and rising.
Turbocharged Java with Quarkus | JakartaOne Livestream (Jakarta_EE)
Speaker: Marcus Biel
I will demonstrate how we can create a native executable with Quarkus, and how fast we can scale a large cluster of Quarkus containers in the cloud. Last but not least, I will show you how much fun it is to develop a REST + JPA based application with the help of Quarkus.
Building Interoperable Microservices With Eclipse MicroProfile | JakartaOne Li... (Jakarta_EE)
Speaker: Ivar Grimstad
Eclipse MicroProfile is a collection of community-driven open source specifications that define an enterprise Java microservices platform. This session gives an introduction to Eclipse MicroProfile and the tools available to get started building portable microservices with a minimum of effort. The features of MicroProfile will be explained in a down-to-earth and easily understandable way.
First Steps with Globus Compute Multi-User Endpoints (Globus)
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researchers' workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we encountered were that each researcher had to set up and manage their own single-user Globus Compute endpoint, and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges, and we share an update on our progress here.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis (Globus)
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Globus Compute with IRI Workflows - GlobusWorld 2024 (Globus)
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work, the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help schedule jobs and be a tool to connect compute at different facilities.
Cyaniclab: Software Development Agency Portfolio.pdf (Cyanic Lab)
CyanicLab, an offshore custom software development company based in Sweden, India, and Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
Enhancing Project Management Efficiency_ Leveraging AI Tools like ChatGPT.pdf (Jay Das)
With the advent of artificial intelligence (AI) tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT and Bard, organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
Large Language Models and the End of Programming (Matt Welsh)
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Accelerate Enterprise Software Engineering with Platformless (WSO2)
Key takeaways:
- Challenges of building platforms and the benefits of platformless.
- Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
- How Choreo enables the platformless experience.
- How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
- Demo of an end-to-end app built and deployed on Choreo.
Globus Connect Server Deep Dive - GlobusWorld 2024 (Globus)
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Enhancing Research Orchestration Capabilities at ORNL.pdf (Globus)
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
Developing Distributed High-performance Computing Capabilities of an Open Sci... (Globus)
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ... (Juraj Vysvader)
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but they did reach 63K downloads (powering possibly tens of thousands of websites).
An Enterprise Resource Planning (ERP) system includes various modules that reduce a business's workload. Additionally, it organizes workflows, which drives enhanced productivity. Here is a detailed explanation of the ERP modules; going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
3. Application Servers: Old-School Scaling
[Diagram: an AS-specific cluster made of a cluster controller and application-server nodes 1…n; each node runs the stack Bare Metal → Operating System → Java Runtime → Java EE Application Server → JAX-RS Container → JAX-RS Applications. Captions: "Clustered Application Server Nodes", "e.g. GlassFish including Jersey", "AS-specific Cluster".]
4. Cloud Computing: The Modern Approach
[Diagram: a Kubernetes cluster made of a cluster controller and Kubernetes nodes 1…n; each node runs the stack Bare Metal → Operating System → Kubernetes → Docker → Java Runtime → JAX-RS Container → JAX-RS Applications. Captions: "Elastic creation and disposal of lightweight processes (e.g. Kubernetes with Docker)", "Kubernetes Cluster".]
[Diagram: the JAX-RS application stack (JAX-RS Applications → JAX-RS Container → Java Runtime) replicated across many identical Docker containers.]
6. RESTful Devices
- 5 × 2.5 cm
- 1 core @ 1 GHz
- 256 MB RAM
- 100 Mb/s LAN
- $10

Try to start GlassFish...
7. Effects upon JAX-RS
[Diagram: the generalized cloud stack (Bare Metal → Operating System → Container Orchestration → Container Runtime → Java Runtime → JAX-RS Container → JAX-RS Applications), reduced to the layers JAX-RS itself covers: Java Runtime → JAX-RS Container → JAX-RS Applications.]

This reduced stack:
- should start and stop instantly
- should consume fewer resources
- must provide an HTTP server
- must read external config
- must accept application-provided resources
9. Roadmap
JAX-RS 2.2: Java SE Bootstrap API (DELAYED)
- instant on/off
- low resource
- includes HTTP server
- reads external config
- (optional) support for MicroProfile Config API
https://github.com/eclipse-ee4j/jaxrs-api/wiki/Roadmap
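To make these bullets concrete, here is a minimal sketch of such a bootstrap. It uses the SeBootstrap API shape that eventually shipped in Jakarta REST 3.1 (package jakarta.ws.rs), so the names may differ from the JAX-RS 2.2 proposal discussed in this talk; treat it as illustrative only:

```java
import java.util.Set;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.SeBootstrap;
import jakarta.ws.rs.core.Application;

// A single resource class: the whole "application" in one file.
@Path("hello")
class HelloResource {
    @GET
    public String hello() {
        return "Hello from a plain JVM process!";
    }
}

class HelloApplication extends Application {
    @Override
    public Set<Class<?>> getClasses() {
        return Set.of(HelloResource.class);
    }
}

public class Main {
    public static void main(String[] args) throws InterruptedException {
        // Configuration comes from the application here, but could equally
        // be read from an external source provided by a controller or
        // orchestrator (see slide 14 below).
        SeBootstrap.Configuration config = SeBootstrap.Configuration.builder()
                .host("0.0.0.0")
                .port(8080)
                .build();

        // Starts an implementation-chosen embedded HTTP server;
        // no application server installation is involved.
        SeBootstrap.start(new HelloApplication(), config)
                .thenAccept(instance -> System.out.println(
                        "Listening at " + instance.configuration().baseUri()));

        Thread.currentThread().join(); // keep the JVM alive
    }
}
```

With only a JAX-RS implementation on the classpath, this runs as a plain java process, which is exactly what the container and small-device scenarios above call for.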
10. Roadmap
JAX-RS 2.3:
- Support for CDI 2.0 SE
- Deprecate @Context
- Support for ContextResolver<Jsonb> (accept application-provided resources)
https://github.com/eclipse-ee4j/jaxrs-api/wiki/Roadmap
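A hedged sketch of what those two items could mean in application code: plain CDI injection replacing the JAX-RS-proprietary @Context, and an application-provided ContextResolver<Jsonb> handing a configured JSON-B instance to the runtime. The @Inject usage shows the proposed direction, not shipped JAX-RS 2.x behavior:

```java
import jakarta.inject.Inject;
import jakarta.json.bind.Jsonb;
import jakarta.json.bind.JsonbBuilder;
import jakarta.json.bind.JsonbConfig;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.UriInfo;
import jakarta.ws.rs.ext.ContextResolver;
import jakarta.ws.rs.ext.Provider;

// Proposed direction: standard CDI injection instead of the
// JAX-RS-proprietary @Context annotation.
@Path("orders")
public class OrdersResource {

    @Inject // today this field would be annotated @Context
    UriInfo uriInfo;

    @GET
    public String list() {
        return "orders at " + uriInfo.getAbsolutePath();
    }
}

// Application-provided resource: a ContextResolver that hands the
// runtime a custom-configured JSON-B instance.
@Provider
class JsonbResolver implements ContextResolver<Jsonb> {
    @Override
    public Jsonb getContext(Class<?> type) {
        return JsonbBuilder.create(new JsonbConfig().withFormatting(true));
    }
}
```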
14. JAX-RS 2.2: Java SE Bootstrap API
- instant on/off – boot in one second
- low resource – run on limited devices
- includes HTTP server – choose by actual need
- reads external config – from app, controller or orchestrator
- optional support for MicroProfile Config API – cloud standards
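As a sketch of the "reads external config" point, assuming the optional MicroProfile Config support is used: settings such as the HTTP port could be resolved from whatever the app, controller or orchestrator provides. The key name server.port is invented for illustration:

```java
import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.config.ConfigProvider;

public class ExternalConfigSketch {
    public static void main(String[] args) {
        // MicroProfile Config merges system properties, environment
        // variables and microprofile-config.properties in a defined
        // precedence order, so an app, controller or orchestrator can
        // override values without code changes.
        Config config = ConfigProvider.getConfig();

        // "server.port" is a hypothetical key, used only for illustration.
        int port = config.getOptionalValue("server.port", Integer.class)
                .orElse(8080);

        System.out.println("HTTP server would bind to port " + port);
    }
}
```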