An Introduction to Apache Geode (incubating) - Anthony Baker
Geode is a data management platform that provides real-time, consistent access to data-intensive applications throughout widely distributed cloud architectures.
Geode pools memory (along with CPU, network and optionally local disk) across multiple processes to manage application objects and behavior. It uses dynamic replication and data partitioning techniques for high availability, improved performance, scalability, and fault tolerance. Geode is both a distributed data container and an in-memory data management system providing reliable asynchronous event notifications and guaranteed message delivery.
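To make the replication and partitioning idea concrete, here is an illustrative sketch, not Geode's actual algorithm, of spreading keys across members by hashing, with a configurable number of redundant copies (the member names are hypothetical):

```python
# Illustrative sketch of hash-based data partitioning with redundancy,
# similar in spirit to (but much simpler than) Geode's partitioned regions.
import hashlib

MEMBERS = ["server1", "server2", "server3"]  # hypothetical cluster members

def owners(key, redundancy=1):
    """Return the primary member plus `redundancy` backup members for a key."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    primary = h % len(MEMBERS)
    # Backups land on the next members in the ring, wrapping around,
    # so losing one member never loses every copy of an entry.
    return [MEMBERS[(primary + i) % len(MEMBERS)] for i in range(redundancy + 1)]

print(owners("customer:42"))  # primary member followed by its backup
```

Because the mapping is deterministic, any member can compute an entry's owners without consulting a central directory, which is what lets this style of partitioning scale.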
Pivotal GemFire has had a long and winding journey: starting in 2002, winding through VMware and Pivotal, and finding its way to Apache in 2015. Companies using GemFire have deployed it in some of the most mission-critical, latency-sensitive applications in their enterprises, making sure tickets are purchased in a timely fashion, hotel rooms are booked, trades are made, and credit card transactions are cleared. This presentation discusses:
- A brief history of GemFire
- Architecture and use cases
- Why we are taking GemFire Open Source
- Design philosophy and principles
But most importantly: how you can join this exciting community to work on the bleeding edge in-memory platform.
Apache Geode (incubating) is the core of Pivotal GemFire, now available as an open source project governed by the Apache Software Foundation Incubator. The legacy of Pivotal GemFire and the ASF community uniquely position Geode as a secret ingredient for modern-day data management architectures.
These types of architectures require a robust in-memory data grid solution to handle a variety of use cases, ranging from enterprise-wide caching to real-time transactional applications at scale. In addition, as growth in memory size and network bandwidth continues to outpace that of disk, the importance of managing large pools of RAM at scale increases. It is essential to innovate at the same pace.
Apache Geode (incubating) has all the right ingredients to do for RAM what HDFS has done for direct-attached disks. The excitement (and funding!) in this area of the big data ecosystem is palpable, and the ASF is the place where the innovation is happening. Come to this session to understand a brief history of Geode, its architecture and use cases, and its design philosophy and principles, but most importantly: how you too can participate in the in-memory data center revolution.
Using the awesome power of Spring Boot with Spring Data Geode to build highly scalable, distributed Spring/Java applications on Apache Geode or Pivotal GemFire.
Scale Out Your Big Data Apps: The Latest on Pivotal GemFire and GemFire XD - VMware Tanzu
Companies across all industries and sizes are investing in strategic custom applications to enhance their competitive advantages. Developing these applications requires continuous improvement, based on insights gleaned from collecting and analyzing the data that they generate.
Building high-performing, scalable, and reliable Big Data applications requires a new set of tools and technologies. Pivotal GemFire is a distributed in-memory NoSQL data management solution for creating high-scale custom applications. Pivotal GemFire XD supports structured data as part of the industry's first Hadoop-based platform for creating closed-loop analytics solutions, enabling businesses to continuously optimize real-time automation in their applications.
YARN Containerized Services: Fading the Lines Between On-Prem and Cloud - DataWorks Summit
Apache Hadoop YARN is the modern distributed operating system for big data applications. In Apache Hadoop 3.1.0, YARN added a service framework that supports long-running services. This new capability goes hand in hand with the recent improvements in YARN to support Docker containers. Together these features have made it significantly easier to bring new applications and services to YARN.
In this talk you will learn about the YARN service framework, its new containerization capabilities, and how it lays the foundation for a hybrid and uniform architecture for compute and storage across on-prem and multi-cloud environments. This will include examples highlighting how easy it is to bring applications to the YARN service framework as well as how to containerize applications.
Here's what to expect in this talk:
- Motivation for YARN service framework and containerization
- YARN service framework overview
- YARN service examples
- Containerization overview
- Containerization for Big Data and non-Big Data workloads - wait, that's everything
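The YARN service framework discussed above is driven by a declarative JSON spec (a "Yarnfile"). Here is a hedged sketch of what such a spec can look like; the service name, image, and values are purely illustrative, and the field names follow the Hadoop 3.1 services API:

```json
{
  "name": "sleeper-service",
  "version": "1.0.0",
  "components": [
    {
      "name": "sleeper",
      "number_of_containers": 2,
      "artifact": {
        "id": "library/ubuntu:16.04",
        "type": "DOCKER"
      },
      "launch_command": "sleep 900000",
      "resource": {
        "cpus": 1,
        "memory": "256"
      }
    }
  ]
}
```

A spec in this shape would typically be launched with something like `yarn app -launch sleeper-service spec.json`.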
Cloud-Native PostgreSQL is a Kubernetes Operator for Postgres written by EDB entirely from scratch in the Go language and relying exclusively on the Kubernetes API.
This webinar covered:
- About DevOps & Cloud Native
- Overview of Cloud Native Postgres
- Storage for Postgres workloads in Kubernetes
- Start Using Cloud-Native Postgres
- Demo
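As a taste of what the demo covers, here is a hedged sketch of a minimal Cluster manifest in the shape used by Cloud-Native PostgreSQL; the field names follow the operator's CRD as documented at the time, and the name and storage size are illustrative:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3          # one primary plus two replicas
  storage:
    size: 1Gi           # persistent volume per instance
```

Applying a manifest like this (e.g. `kubectl apply -f cluster.yaml`) asks the operator to create and then manage the Postgres instances.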
Hive LLAP: A High Performance, Cost-effective Alternative to Traditional MPP ... - DataWorks Summit
At Walmart Labs, we serve close to 200 million customers every week across our 11,000+ stores and online channels all over the world. As part of our data lake initiatives, we started a full-fledged migration to Hadoop-based solutions for all our data needs, at lower cost than traditional RDBMS/MPP solutions. While we have seen significant success in migrating from traditional RDBMS-based data warehouses to Hadoop-based data lake solutions, one challenge we have faced is migrating end users to Hadoop, due to query latency issues. To solve this problem, and to reduce the cost of the solution, Walmart Labs started using Hive LLAP.
In this session, we will introduce you to Hive LLAP and its architecture, share best practices for deployment to achieve sub-second query performance, and compare its cost with traditional RDBMS systems for the same use case.
Building Apps with Distributed In-Memory Computing Using Apache Geode - PivotalOpenSourceHub
Slides from the Meetup on Monday, March 7, 2016, just before the beginning of #GeodeSummit, covering an introduction to the technology and community that is Apache Geode, the in-memory data grid.
In April 2015, Apache Geode (incubating) was born from Pivotal's GemFire, the distributed in-memory database. However, the donation of over 1M LOC was just the beginning of the journey. In this talk we discuss how the GemFire engineering team has adapted their development infrastructure, processes, and culture to embrace the "Apache Way". We present lessons learned and best practices for new and incubating open source projects in areas of initial code submission, IP clearance, governance policies, code review, and community building. We discuss the challenges the team faced and how we changed internal communication and software design processes to a community-driven model. In particular, we highlight effective strategies for growing a project community and embracing new members. Finally, we show how changing to the open source model has increased both productivity and quality.
Hadoop {Submarine} Project: Running Deep Learning Workloads on YARN - DataWorks Summit
Deep learning is useful for enterprise tasks in fields such as speech recognition, image classification, AI chatbots, and machine translation, just to name a few.
To train deep learning/machine learning models, applications such as TensorFlow, MXNet, Caffe, and XGBoost can be leveraged, and sometimes these applications are used together to solve different problems.
To make distributed deep learning/machine learning applications easy to launch, manage, and monitor, the Hadoop community has introduced the Submarine project along with other improvements such as first-class GPU support, container-DNS support, and scheduling improvements. These improvements make distributed deep learning/machine learning applications run on YARN as simply as running them locally, letting machine-learning engineers focus on algorithms instead of worrying about underlying infrastructure. With these improvements, YARN can also better manage a shared cluster that runs deep learning/machine learning alongside other services and ETL jobs.
In this session, we will take a closer look at the Submarine project as well as these other improvements, and show how to run deep learning workloads on YARN, with demos. Attendees can start running these workloads on YARN after this talk.
Speakers:
Sunil Govindan, Staff Engineer
Hortonworks
Zhankun Tang, Staff Engineer
Hortonworks
Demand for cloud is through the roof. Cloud is turbocharging the Enterprise IT landscape with agility and flexibility, and discussions of cloud architecture now dominate Enterprise IT. Cloud enables many ephemeral, on-demand use cases, which is a game-changing opportunity for analytic workloads. But all of this comes with the challenge of running enterprise workloads in the cloud securely and with ease.
In this session, we will take you through Cloudbreak as a solution that simplifies provisioning and managing enterprise workloads while providing an open and common experience for deploying workloads across clouds. We will discuss the challenges (and opportunities) of running enterprise workloads in the cloud and go through a live demo of how the latest from Cloudbreak enables enterprises to easily and securely run Apache Hadoop. This includes a deep-dive discussion of Ambari Blueprints, recipes, custom images, and enabling Kerberos, which are all key capabilities for enterprise deployments.
Speakers
Jeff Sposetti, VP Product Management, Hortonworks
Attila Kanto, Principal Engineer, Hortonworks
In this webinar, we will discuss different open-source models and different ways open source communities are organized. Understanding these key concepts is essential when selecting a strategic open-source platform. We will explore how the PostgreSQL community ensures that it stays independent, remains vibrant, drives innovation, and provides a reliable long-term platform for strategic IT projects.
This is an introduction to Apache Geode (incubating) that explains some of the key concepts of the project, including how to create a build from the source code, how to create a local cluster with and without Docker, and how to create Teeny, a simple but very scalable URL shortener application.
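The core of a URL shortener like Teeny can be sketched in a few lines of Python; here a plain dict stands in for the distributed Geode region the real application would use, and the base-62 encoding is illustrative rather than Teeny's actual scheme:

```python
# Minimal URL-shortener sketch: a counter encoded in base 62 becomes the
# short code, and a dict (a Geode region in the real app) maps code -> URL.
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols
region = {}   # short code -> long URL
counter = 0   # monotonically increasing id

def encode(n):
    """Encode a non-negative integer as a base-62 short code."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, r = divmod(n, len(ALPHABET))
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

def shorten(url):
    global counter
    code = encode(counter)
    region[code] = url
    counter += 1
    return code

def resolve(code):
    return region.get(code)

code = shorten("https://geode.apache.org/")
print(code, "->", resolve(code))
```

In the real application the region (and the counter) would be distributed across the cluster, which is exactly the part Geode provides.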
Next Generation Scheduling for YARN and K8s: For Hybrid Cloud/On-prem Environ... - DataWorks Summit
The scheduler of a container orchestration system, such as YARN or K8s, is a critical component that users rely on to plan resources and manage applications.
Assessing where we are today: YARN effectively has two powerful schedulers (the Fair and Capacity schedulers), and both serve many strong use cases in the big data ecosystem. YARN can scale up to 50k nodes per cluster, schedule 20k containers per second, and is extremely efficient at managing batch workloads.
The K8s default scheduler is an industry-proven solution for efficiently managing long-running services. As more big data apps move to K8s and the cloud, however, many features, such as hierarchical queues for better multi-tenancy, fair resource sharing, and preemption, are either missing or not yet mature enough to support big data apps running on K8s.
At this point, no existing solution addresses the need for a unified resource scheduling experience across platforms. That makes it extremely difficult to manage workloads running in different environments, from on-premises to cloud.
Hence, evolving a common scheduler that builds on the proven capabilities of YARN and K8s and improves on them for the cloud will focus on use cases like:
Better bin-packing scheduling (and gang scheduling)
Autoscale up and shrink policy management
Effectively run batch workloads and services with clear SLA’s
In summary, as a separate initiative, we are improving core scheduling capabilities to manage both K8s and YARN clusters in a cloud-aware way, and the above-mentioned cases will be the core focus of this initiative. More details of our work will be presented in this talk.
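The bin-packing placement mentioned above can be sketched as a first-fit pass; this is a deliberate simplification, since real YARN and K8s schedulers also weigh CPU, affinity, queues, and fairness:

```python
# First-fit bin packing: place each container request on the first node
# with enough free memory. A sketch of the idea, not a real scheduler.
def place(requests, nodes):
    """requests: list of (name, mem_mb); nodes: dict of node -> free mem_mb.
    Returns a dict of name -> node, or name -> None when nothing fits."""
    free = dict(nodes)
    placement = {}
    for name, mem in requests:
        chosen = next((n for n, cap in free.items() if cap >= mem), None)
        if chosen is not None:
            free[chosen] -= mem
        placement[name] = chosen
    return placement

print(place([("web", 512), ("db", 2048), ("job", 1024)],
            {"node1": 1024, "node2": 4096}))
# -> {'web': 'node1', 'db': 'node2', 'job': 'node2'}
```

Gang scheduling adds the further constraint that all of an application's containers must be placeable before any of them start.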
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. Postgres is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
This webinar will explore:
- Evolution of replication in Postgres
- Streaming replication
- Logical replication
- Replication for high availability
- Important high availability parameters
- Options to monitor high availability
- HA infrastructure to patch the database with minimal downtime
- EDB Postgres Failover Manager (EFM)
- EDB tools to create a highly available Postgres architecture
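The replication topics above revolve around a handful of settings; here is a hedged sketch of the minimal ones, with illustrative values that should be checked against the documentation for your Postgres version:

```ini
# On the primary (postgresql.conf):
wal_level = replica        # emit enough WAL for a standby to replay
max_wal_senders = 10       # allow concurrent replication connections

# On the standby (PostgreSQL 12+: postgresql.conf plus a standby.signal file):
primary_conninfo = 'host=primary.example.com user=replicator password=secret'
```

Tools such as EDB Failover Manager then sit on top of this replication layer to automate failover.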
Deep Dive - Usage of on premises data gateway for hybrid integration scenarios - Sajith C P Nair
Presentation delivered by Sajith C P, Integration Architect, at the 2017 Global Integration Bootcamp, Bangalore.
https://www.biztalk360.com/gib2017-india/#speakers
In this session the speaker talked about the 'on-premises data gateway', a secure centralized gateway that can be used for accessing on-premises data from various Azure services. He took a deep dive into how it works, how to install it, and various methods to troubleshoot connectivity. He concluded the session with a few demos of its use in Azure Logic Apps, Microsoft Flow, Power Apps, and Power BI.
Using Apache Calcite for Enabling SQL and JDBC Access to Apache Geode and Oth... - Christian Tzolov
When working with Big Data & IoT systems we often feel the need for a common query language. System-specific languages usually require longer adoption time and are harder to integrate within existing stacks.
To fill this gap, some NoSQL vendors are building SQL access to their systems. Building a SQL engine from scratch is a daunting job, and frameworks like Apache Calcite can help you with the heavy lifting. Calcite allows you to integrate a SQL parser, a cost-based optimizer, and JDBC with your NoSQL system.
We will walk through the process of building a SQL access layer for Apache Geode (an in-memory data grid). I will share my experience, pitfalls, and technical considerations such as balancing between SQL/RDBMS semantics and the design choices and limitations of the data system.
Hopefully this will enable you to add SQL capabilities to your preferred NoSQL data system.
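Calcite adapters are usually wired up through a JSON model handed to the JDBC driver. Here is a hedged sketch for a Geode-backed schema; the factory class and operand keys follow the Calcite Geode adapter documentation at the time, and the host, port, and region names are illustrative:

```json
{
  "version": "1.0",
  "defaultSchema": "geode",
  "schemas": [
    {
      "name": "geode",
      "type": "custom",
      "factory": "org.apache.calcite.adapter.geode.rel.GeodeSchemaFactory",
      "operand": {
        "locatorHost": "localhost",
        "locatorPort": "10334",
        "regions": "BookMaster",
        "pdxSerializablePackagePath": "org.apache.calcite.adapter.geode.domain.*"
      }
    }
  ]
}
```

With a model like this, a standard JDBC connection string (e.g. `jdbc:calcite:model=model.json`) exposes the Geode regions as SQL tables.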
In this session we review the design of the newly released off-heap storage feature in Apache Geode, and discuss use cases and potential directions for additional capabilities of this feature.
In this session we review the design and current capabilities of a partially completed feature in Apache Geode: the ability to act as a backend for Redis client applications. We'll explore potential use cases and the future directions in which this capability might evolve.
#GeodeSummit - Large Scale Fraud Detection using GemFire Integrated with Gree... - PivotalOpenSourceHub
In this session we explore a case study of a large-scale government fraud detection program that prevents billions of dollars in fraudulent payments each year leveraging the beta release of the GemFire+Greenplum Connector, which is planned for release in GemFire 9. Topics will include an overview of the system architecture and a review of the new GemFire+Greenplum Connector features that simplify use cases requiring a blend of massively parallel database capabilities and accelerated in-memory data processing.
An Introduction to Apache Geode (incubating) - Geode is a data management platform that provides real-time, consistent access to data-intensive applications throughout widely distributed cloud architectures.
This talk provides an in-depth overview of the key concepts of Apache Calcite. It explores the Calcite catalog, parsing, validation, and optimization with various planners.
IoT Architecture - are traditional architectures good enough? - Guido Schmutz
Independent of the source of data, the integration of event streams into an Enterprise Architecture becomes more and more important in a world of sensors, social media streams, and the Internet of Things. Events have to be accepted quickly and reliably, and they have to be distributed and analysed, often with many consumers or systems interested in all or part of the events. Depending on the size and quantity of such events, this can quickly reach the range of Big Data. How can we efficiently collect and transmit these events? How can we make sure that we can always report on historical events? How can these new events be integrated into the traditional infrastructure and application landscape?
Starting with a product- and technology-neutral reference architecture, we will then present different solutions using open source frameworks and the Oracle stack, both on premises and in the cloud.
This is my Spring 2015 studio project. The 2nd Year Foundation Studio focused on developing an existing parking lot for UVa's sports facilities into a mixed-use student housing area. My project focused on creating spaces for interaction between students and fans at the center of game-day activity.
In this session we review the design and current capabilities of the Spring Data GemFire API that supports Geode, and explore additional use cases and the future directions in which the Spring API and underlying Geode support might evolve.
Part 4: Custom Buildpacks and Data Services (Pivotal Cloud Platform Roadshow) - VMware Tanzu
Custom Buildpacks & Data Services
The primary goals of this session are to:
Give an overview of the extension points available to Cloud Foundry users.
Provide a buildpack overview with a deep focus on the Java buildpack (my target audience has been Java conferences)
Provide an overview of service options, from user-provided to managed services, including an overview of the V2 Service Broker API.
Provide two hands-on lab experiences:
Java Buildpack Extension
via customization (add a new framework component)
via configuration (upgrade to Java 8)
Service Broker Development/Management
Deploy a service broker for “HashMap as a Service (HaaSh)”.
Register the broker and make the plan public.
Create an instance of the HaaSh service.
Deploy a client app, bind it to the service, and test it.
Pivotal Cloud Platform Roadshow is coming to a city near you!
Join Pivotal technologists and learn how to build and deploy great software on a modern cloud platform. Find your city and register now http://bit.ly/1poA6PG
PHP is the top platform for building and modernizing IBM i applications. In this webinar, Erwin discusses how features of the application server can be leveraged to streamline the development process as well as fast-tracking the management of the PHP environment.
Building microservices for APIs with Helidon and a CI/CD pipeline - DonghuKIM2
An introduction to Oracle Helidon, a framework based on Eclipse MicroProfile, and a demonstration of microservice development using it.
• A demonstration of automating API document validation, service builds, testing, and deployment to Oracle Kubernetes Engine using Oracle Wercker, a CI/CD service for microservices.
o Building microservice with Helidon MP and Helidon SE
o Validating API document against backend microservice with Dredd and Apiary
o Building CI/CD pipeline with Wercker and Oracle Kubernetes Engine
Oracle Coherence Strategy and Roadmap (OpenWorld, September 2014) - jeckels
The Oracle Coherence strategy and roadmap session from OpenWorld 2014. Includes details on the 12.1.3 Cloud Application Foundation release (including WebLogic integration), a road map for the 12.2.1 release, and notable features including JCache (JSR-107) support, Memcached adapters, federated caching, recoverable caching, security enhancements, multitenancy support and more. As usual, all items and statements contained herein are subject to change based on slide 3 of this presentation.
Apigee Deploy Grunt Plugin - API Management Lifecycle Tool that makes your life easier by providing a JavaScript pluggable framework for API development.
Timings API: Performance Assertion during the functional testing - PetrosPlakogiannis
Most teams at GoDaddy follow a CI/CD process in which Selenium is commonly used for test automation. When they realized that Selenium WebDrivers provide access to the same APIs as real browsers, including the widely supported W3C performance API, the concept for the cicd-perf-api webservice was born! By injecting some JavaScript code through the WebDriver object, performance data can be collected and posted back to the webservice. The response from the webservice includes a boolean field that testers can use for assertions, just like they would with functional checks; the field indicates whether performance was above or below the baseline. In this talk Petros gives an introduction to the Timings API concept, and Marcel joins online to discuss its future.
Java 9 New Features | Java Tutorial | What’s New in Java 9 | Java 9 Features ... - Edureka!
( Java Training - https://www.edureka.co/java-j2ee-soa-training ) This Edureka Java 9 New Features tutorial explains the latest Java 9 features. Topics covered: 1. What is Java? 2. Where is Java used? 3. Features of Java 4. Java Environment 5. How does Java work? 6. Data Types in Java. Check out our Java playlist: https://goo.gl/ES3dI3
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
UiPath Test Automation using UiPath Test Suite series, part 6 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into integrating generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI - Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster and former Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
GridMate - End to end testing is a critical piece to ensure quality and avoid... - ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Full-RAG: A modern architecture for hyper-personalization - Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
The key is to manage large quantities of data reliably under extreme load, with accuracy and resilience.
Big Data == data lake (any and all data)
Fast Data == processing streams of events in real-time
All about… Data Access
Scale Out rather than Scale Up
Throughput (or number of operations) increases as more nodes are added to the cluster
Data is stored in distributed, highly concurrent, in-memory data structures to minimize context switching and contention
Data is replicated & partitioned for fast, predictable read/write throughput
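The partitioning and replication bullets above can be sketched roughly as follows. Geode's partitioned regions hash each key into a fixed set of buckets (113 by default) and place redundant copies of each bucket on distinct members; the class and method names below are illustrative, not Geode APIs.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of hash-based partitioning with redundancy.
// Not Geode internals: names and placement policy are hypothetical.
public class PartitionSketch {
    static final int BUCKETS = 113; // Geode's default bucket count for partitioned regions

    // Map a key to one of the fixed buckets; every member computes the
    // same bucket for the same key, so any node can route a read/write.
    static int bucketFor(Object key) {
        return Math.abs(key.hashCode() % BUCKETS);
    }

    // Assign a bucket's primary plus `redundancy` extra copies to distinct
    // members (simple round-robin placement for illustration).
    static List<Integer> membersFor(int bucket, int memberCount, int redundancy) {
        List<Integer> members = new ArrayList<>();
        for (int i = 0; i <= redundancy; i++) {
            members.add((bucket + i) % memberCount);
        }
        return members;
    }

    public static void main(String[] args) {
        int bucket = bucketFor("order-42");
        // With 4 members and redundancy 1, each bucket lives on 2 distinct members.
        List<Integer> members = membersFor(bucket, 4, 1);
        System.out.println("bucket=" + bucket + " members=" + members);
    }
}
```

Because bucket ownership is spread across members, adding nodes spreads the buckets thinner, which is what makes throughput scale out rather than up.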
In a nutshell… how Apache Geode is implemented under the hood:
Stores data in memory with puts.
Stores data on disk, synchronously (the default) or asynchronously, for persistence and overflow.
Oplogs are append-only; compaction is necessary.
HDFS integration is new, and Geode can feed event streams into Apache Spark for processing.
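The append-only oplog bullet can be sketched as a tiny in-memory operation log with a compaction pass: writes are always appended, so overwritten and destroyed entries accumulate until compaction rewrites the log with only the latest live value per key. The class and method names are hypothetical, not Geode's actual disk-store implementation.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of an append-only operation log with compaction.
public class OplogSketch {
    // Each record is an (op, key, value) triple appended in arrival order.
    static final class Op {
        final String type, key, value;
        Op(String type, String key, String value) {
            this.type = type; this.key = key; this.value = value;
        }
    }

    private final List<Op> log = new ArrayList<>();

    void put(String key, String value) { log.add(new Op("PUT", key, value)); }
    void destroy(String key)           { log.add(new Op("DESTROY", key, null)); }

    int size() { return log.size(); }

    // Compaction replays the log to find the latest live value per key,
    // then rewrites the log with only those entries, reclaiming the space
    // taken by overwritten and destroyed records.
    void compact() {
        Map<String, String> live = new LinkedHashMap<>();
        for (Op op : log) {
            if (op.type.equals("PUT")) live.put(op.key, op.value);
            else live.remove(op.key);
        }
        log.clear();
        live.forEach((k, v) -> log.add(new Op("PUT", k, v)));
    }
}
```

The append-only layout is what makes writes fast and sequential; compaction is the price paid later, off the critical path.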
Misconceptions about Spring…
Spring is a Web Application Framework
Spring’s programming model is unique and Spring uses its own conventions
Built on fundamental OO principles (POJO)
Software Design Patterns (IoC/DI, AOP) and…
Open Standards (OSS)
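The POJO point above can be illustrated without any framework at all: objects receive their dependencies through constructors and stay container-agnostic, which is exactly what an IoC container automates. A minimal hypothetical sketch (no Spring APIs involved; all names are illustrative):

```java
// Plain POJOs wired by constructor injection: the objects know nothing
// about any container, which is the core idea behind Spring's IoC/DI model.
public class DiSketch {
    interface GreetingService { String greet(String name); }

    static class SimpleGreetingService implements GreetingService {
        public String greet(String name) { return "Hello, " + name; }
    }

    // The dependency is supplied from outside (by a container, or by hand),
    // so Greeter can be unit-tested with any GreetingService implementation.
    static class Greeter {
        private final GreetingService service;
        Greeter(GreetingService service) { this.service = service; }
        String welcome(String name) { return service.greet(name) + "!"; }
    }

    public static void main(String[] args) {
        Greeter greeter = new Greeter(new SimpleGreetingService());
        System.out.println(greeter.welcome("Geode"));
    }
}
```

A Spring container would construct and wire these same classes from configuration instead of the hand-written `new` calls in `main`.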
Apache Geode is a complex technology…
Too many configuration options and settings.
Inconsistent behavior between XML configuration (i.e., cache.xml) and the API.
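As one illustration of the two configuration paths, the same replicated region can be declared in cache.xml or created through the Java API (roughly `cache.createRegionFactory(RegionShortcut.REPLICATE).create("example")`); keeping the two in sync is one source of the inconsistency noted above. The fragment below is a minimal sketch, with the region name chosen for illustration:

```xml
<!-- Minimal cache.xml sketch: declares a replicated region named "example". -->
<cache xmlns="http://geode.apache.org/schema/cache"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://geode.apache.org/schema/cache
                           http://geode.apache.org/schema/cache/cache-1.0.xsd"
       version="1.0">
  <region name="example" refid="REPLICATE"/>
</cache>
```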