The document describes Kick-R, a solution for running R scripts on 36 cores on AWS to speed up processing. It demonstrates Kick-R running a random forest script on sample spam data more than ten times faster on 36 AWS cores than locally on a laptop. Instructions are provided for building, running, and cleaning up the Kick-R environment.
This document summarizes how Scala and Hadoop are used at eBay. It discusses:
- Why Scala is used, including its functional capabilities and JVM compatibility.
- Why Hadoop is used to process eBay's petabytes of data across its large cluster.
- How Scalding, a Scala library, allows complex Hadoop jobs to be written concisely and tested effectively, improving on other frameworks like Pig and Cascading.
Code examples show how tasks like collaborative filtering, search query analysis, and Markov chains can be implemented in a readable way using Scalding.
Regular expressions that produce parse trees - Aaron Karper
Presenting a regular expression engine that produces parse trees in a single pass by modifying the standard non-deterministic finite-state automaton algorithm. Based on my master's thesis.
Dapper Tool - A Bundle to Make your ECL Neater - HPCC Systems
Have you ever written a long project for a simple column rename and thought, this should be easier? What about nicely named output statements? Yeah they bother me too. Oh, and DEDUP(SORT(DISTINCT()))? There is a better way! Learn how Dapper can help!
This talk is about using Hive in practice. We will go through some of the specific use cases for which Hive is currently being used at Last.fm, highlighting its strengths and weaknesses along the way.
OCF.tw's talk about "Introduction to Spark" - Giivee The
Sharing an introduction to Spark at the invitation of OCF and OSSF.
If you have any interest in 財團法人開放文化基金會 (OCF) or 自由軟體鑄造場 (OSSF),
please check http://ocf.tw/ or http://www.openfoundry.org/
Thanks also to CLBC for providing the venue.
If you would like to work in a great working environment,
feel free to contact CLBC at http://clbc.tw/
Kite: efficient and available release consistency for the datacenter - Vasilis Gavrielatos
Kite is a replicated key-value store that provides release consistency and high availability. It uses an efficient fast-path/slow-path mechanism to provide release consistency semantics while minimizing synchronization overhead. The fast path uses eventual consistency for common reads and writes, while acquiring locks for synchronization operations like acquires and releases. The slow path is used when the fast path times out, adding a broadcast round to restore consistency. Microbenchmarks and experiments with lock-free data structures show that Kite outperforms a baseline implementation by up to 3x for workloads with synchronization operations.
AWS re:Invent 2019 - DAT328 Deep Dive on Amazon Aurora PostgreSQL - Grant McAlister
Amazon Aurora with PostgreSQL compatibility is a relational database service that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. In this session, we review the functionality in order to understand the architectural differences that contribute to improved scalability, availability, and durability. You'll also get a deep dive into the capabilities of the service and a review of the latest available features. Finally, we walk you through the techniques that you can use to migrate to Amazon Aurora.
Deep Dive on the Amazon Aurora PostgreSQL-compatible Edition - DAT402 - re:In... - Amazon Web Services
Amazon Aurora is a fully-managed relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. The initial launch of Amazon Aurora delivered these benefits for MySQL. We have now added PostgreSQL compatibility to Amazon Aurora. In this session, Amazon Aurora experts discuss best practices to maximize the benefits of the Amazon Aurora PostgreSQL-compatible edition in your environment.
The document discusses Java 8 streams and stream performance. It provides background on streams and why they were introduced in Java 8. It discusses sequential and parallel streams, how to visualize them, and practical benefits. It covers microbenchmarking and a case study comparing a sequential grep implementation to a parallelized version. Key points are that streams can improve readability but performance must be tested, parallelism helps if the workload is large enough to outweigh overhead, and stream sources need to be splittable for parallelism.
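The abstract's closing point - that parallelism only pays off when the workload outweighs the overhead, and that the stream source must be splittable - can be sketched in a few lines of plain Java (an illustrative example, not code from the talk; the `countPrimes` helper is hypothetical):

```java
import java.util.stream.IntStream;

public class StreamDemo {
    // Same pipeline run sequentially or in parallel; the result is
    // identical, only the execution strategy differs.
    static long countPrimes(int limit, boolean parallel) {
        IntStream range = IntStream.rangeClosed(2, limit);
        if (parallel) {
            // rangeClosed is cheaply splittable, so the work divides
            // evenly across the common ForkJoinPool's workers.
            range = range.parallel();
        }
        return range.filter(StreamDemo::isPrime).count();
    }

    // Deliberately naive trial division: enough per-element work
    // that parallelism can outweigh its coordination overhead.
    static boolean isPrime(int n) {
        for (int i = 2; (long) i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        long sequential = countPrimes(100_000, false);
        long parallel = countPrimes(100_000, true);
        System.out.println(sequential + " " + parallel); // both counts agree
    }
}
```

As the talk's key points suggest, whether the parallel variant is actually faster must be measured (e.g. with a microbenchmark harness); correctness of the split is guaranteed by the splittable range source, but the speedup is not.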
This document provides an overview of the key topics to be covered in a session about Amazon ElastiCache, including an overview of ElastiCache and Redis, best practices for scaling clusters, security and encryption, usage patterns, and questions. Specific topics that will be discussed are scaling clusters with online re-sharding, security and encryption options in ElastiCache, and various usage patterns and best practices.
Node has revolutionized modern runtimes. Its async-by-default strategy boasts 3x the throughput of Java. And yet, the language runs 5x slower than C++ (when JS is interpreted).
This talk is an advanced intro into the world of Node where we take a closer look under the hood. What's the event loop? Why are there multiple compilers for JS in Node/V8? How many threads are actually used in Node and for what purpose? We'll answer these questions and more as we go over libuv, v8, the node core library, npm, and more.
If you're developing with Node, want to start, or are just curious about how it works, please check it out!
(ARC311) Extreme Availability for Mission-Critical Applications | AWS re:Inve... - Amazon Web Services
More and more businesses are deploying their mission-critical applications on AWS, and one of their concerns is how to improve the availability of their services, going beyond traditional availability concepts. In this session, you will learn how to architect different layers of your application―beginning with an extremely available front-end layer with Amazon EC2, Elastic Load Balancing, and Auto Scaling, and going all the way to a protected multitiered information layer, including cross-region replicas for relational and NoSQL databases. The concepts that we will share, using services like Amazon RDS, Amazon DynamoDB, and Amazon Route 53, will provide a framework you can use to keep your application running even with multiple failures. Additionally, you will hear from Magazine Luiza, in an interactive session, on how they run a large e-commerce application with a multiregion architecture using a combination of features and services from AWS to achieve extreme availability.
Want to get ramped up on how to use Amazon's big data web services and launch your first big data application on AWS? Join us on our journey as we build a big data application in real-time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. We review architecture design patterns for big data solutions on AWS, and give you access to a take-home lab so that you can rebuild and customize the application yourself.
How I Learned to Stop Worrying and Love the Cloud - Wesley Beary, Engine Yard - SV Ruby on Rails Meetup
Wesley Beary: Cloud computing scared the crap out of me - the quirks and nightmares of provisioning computing and storage on AWS, Terremark, Rackspace, etc. - until I took the bull by the horns. Let me now show you how I tamed that bull.
Learn how to easily get started with cloud computing using fog. It gives you the reins within any Ruby application or script. If you can control your infrastructure choices, you can make better choices in development and get what you need in production.
You'll get an overview of fog and concrete examples to give you a head start on your own provisioning workflow.
Xlab #1: Advantages of functional programming in Java 8 - XSolve
Presentation from the xlab workshop about the functional programming features introduced in Java 8: how to work with streams and lambdas in theory and practice.
re:Invent 2020 DAT301 Deep Dive on Amazon Aurora with PostgreSQL Compatibility - Grant McAlister
Amazon Aurora with PostgreSQL compatibility is a relational database managed service that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source PostgreSQL. This session highlights Aurora with PostgreSQL compatibility’s key capabilities, including low-latency read replicas and Multi-AZ deployments; reviews the architectural enhancements that contribute to Aurora’s improved scalability, availability, and durability; and digs into the latest feature releases. Finally, this session walks through techniques to migrate to Aurora.
Apache Spark for Library Developers with William Benton and Erik Erlandson - Databricks
As a developer, data engineer, or data scientist, you’ve seen how Apache Spark is expressive enough to let you solve problems elegantly and efficient enough to let you scale out to handle more data. However, if you’re solving the same problems again and again, you probably want to capture and distribute your solutions so that you can focus on new problems and so other people can reuse and remix them: you want to develop a library that extends Spark.
You faced a learning curve when you first started using Spark, and you’ll face a different learning curve as you start to develop reusable abstractions atop Spark. In this talk, two experienced Spark library developers will give you the background and context you’ll need to turn your code into a library that you can share with the world. We’ll cover: Issues to consider when developing parallel algorithms with Spark, Designing generic, robust functions that operate on data frames and datasets, Extending data frames with user-defined functions (UDFs) and user-defined aggregates (UDAFs), Best practices around caching and broadcasting, and why these are especially important for library developers, Integrating with ML pipelines, Exposing key functionality in both Python and Scala, and How to test, build, and publish your library for the community.
We’ll back up our advice with concrete examples from real packages built atop Spark. You’ll leave this talk informed and inspired to take your Spark proficiency to the next level and develop and publish an awesome library of your own.
TDC 2012 - Patterns e Anti-Patterns em Ruby - Fabio Akita
Talk presented at The Developers Conference 2012 in São Paulo, explaining patterns and anti-patterns in Ruby for those just starting to learn the language.
Traceur - Javascript.next - Now! RheinmainJS April 14th - Carsten Sandtner
The document discusses Traceur, a compiler that allows developers to write JavaScript code using ECMAScript 6 features while targeting browsers that do not yet support these features natively. It provides an overview of Traceur's capabilities, how to use it through command line, Grunt, or Gulp builds, and the benefits of using a compiler like Traceur to write ES6 code now while targeting older browsers through compilation to ES5. However, it also notes that Traceur does not support all ES6 features and requires a runtime, so developers must carefully consider if their projects truly need ES6 features.
Slides from JEEConf 2018 talk "Virtual Machine for Regular Expressions". It describes how and why to implement a custom regular expression engine for matching arbitrary sequences.
Cassandra + Spark (You’ve got the lighter, let’s start a fire) - Robert Stupp
Slides from my talk at Cassandra Days Germany 2016 in Munich and Berlin. Please find the code used for the live demo at https://github.com/snazy/cstar-spark-demo
Building a Scalable Distributed Stats Infrastructure with Storm and KairosDB - Cody Ray
Many startups collect and display stats and other time-series data for their users. A supposedly-simple NoSQL option such as MongoDB is often chosen to get started... which soon becomes 50 distributed replica sets as volume increases. This talk describes how we designed a scalable distributed stats infrastructure from the ground up. KairosDB, a rewrite of OpenTSDB built on top of Cassandra, provides a solid foundation for storing time-series data. Unfortunately, though, it has some limitations: millisecond time granularity and lack of atomic upsert operations which make counting (critical to any stats infrastructure) a challenge. Additionally, running KairosDB atop Cassandra inside AWS brings its own set of challenges, such as managing Cassandra seeds and AWS security groups as you grow or shrink your Cassandra ring. In this deep-dive talk, we explore how we've used a mix of open-source and in-house tools to tackle these challenges and build a robust, scalable, distributed stats infrastructure.
This document summarizes Rails on Oracle and the Oracle enhanced ActiveRecord adapter. It discusses the main components, how the adapter maps data types between Ruby/Rails and Oracle, and how it handles legacy schemas, PL/SQL CRUD procedures, and full-text indexes. It also provides information on testing, contributing, reporting issues and related libraries.
Full-RAG: A modern architecture for hyper-personalization - Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
UiPath Test Automation using UiPath Test Suite series, part 5 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of the CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Similar to Kick-R: Get your own R instance with 36 cores on AWS
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curricula, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra, and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI - Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed — Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 — Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Communications Mining Series - Zero to Hero - Session 1 — DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we walk through the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Building RAG with self-deployed Milvus vector database and Snowpark Container... — Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... — Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
What do a Lego brick and the XZ backdoor have in common? — Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case share much more than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training sessions. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when not pursuing her passion for computers and for Geeko she cultivates her curiosity for astronomy (hence her nickname, deneb_alpha).
TrustArc Webinar - 2024 Global Privacy Survey — TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Kick-R: Get your own R instance with 36 cores on AWS
1. Kick-R: Get your own R instance with 36 cores on AWS
Kiwamu Okabe @ Centillion Japan Co.,Ltd.
2. One day, my boss said...
☆ Boss: "Hey, my R script needs much time to run..."
☆ Me: "O.K. I'll try to fix it using AWS!"
3. My solution named "Kick-R"
6. How to run?
$ make
$ make ssh-config > ~/.ssh/config
$ ssh kick-r
ubuntu@ip-10-189-135-202:~$ R --version | head -1
R version 3.0.2 (2013-09-25) -- "Frisbee Sailing"
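Once logged in, it is worth confirming that the instance really exposes all 36 cores before handing it a heavy R job. A minimal sketch of that check (assuming a Linux instance; on a 36-core machine both commands should print 36, and elsewhere they will print whatever the local core count is):

```shell
# Count the logical cores visible to the OS.
nproc
# Equivalent check read straight from the kernel's CPU table;
# both counts should agree.
grep -c '^processor' /proc/cpuinfo
```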
7. How to use on Emacs?
M-x R
/ssh:kick-r:
X forwarding is available.
8. How to clean up the whole environment?
make distclean