1. Hello ANTLR: ANother Tool for Language Recognition
2. Where can we use ANTLR?
3. Why not just use regular expressions?
4. Tools under ANTLR umbrella
5. ANTLR basic syntax
6. ANTLR on a real example
The presentation covers ANTLR and how to test grammars written with it. We will discuss what a grammar is and how input is parsed into its corresponding parse tree, and then focus on the stages of the parsing process. We will then look at what ANTLR is and at some of the companies that use it. Towards the end we show how to test whether an input string is correct with respect to a grammar using TestRig, along with a demonstration.
You may refer to the following blog:
https://blog.knoldus.com/2016/04/29/testing-grammar-using-antlr4-testrig-grun/
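As a rough sketch of what TestRig does under the hood (not taken from the blog post), the snippet below drives a generated parser directly and prints the parse tree and the syntax-error count for an input string. The grammar name Hello, its start rule r, and the generated HelloLexer/HelloParser classes are assumptions for illustration.

```java
// Hedged sketch: assumes ANTLR 4 has already generated HelloLexer/HelloParser from a
// tiny grammar such as:  grammar Hello; r : 'hello' ID ; ID : [a-z]+ ; WS : [ \t\r\n]+ -> skip ;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;

public class GrammarCheck {
    public static void main(String[] args) {
        HelloLexer lexer = new HelloLexer(CharStreams.fromString("hello antlr"));
        HelloParser parser = new HelloParser(new CommonTokenStream(lexer));
        ParseTree tree = parser.r();                       // parse starting from rule `r`
        System.out.println(tree.toStringTree(parser));     // LISP-style tree, like `grun ... -tree`
        System.out.println("syntax errors: " + parser.getNumberOfSyntaxErrors());
    }
}
```

If the input does not match the grammar, the error count is non-zero and ANTLR's default listener reports the offending line and column.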
Presenter: Kenn Knowles, Software Engineer, Google & Apache Beam (incubating) PPMC member
Apache Beam (incubating) is a programming model and library for unified batch & streaming big data processing. This talk will cover the Beam programming model broadly, including its origin story and vision for the future. We will dig into how Beam separates concerns for authors of streaming data processing pipelines, isolating what you want to compute from where your data is distributed in time and when you want to produce output. Time permitting, we might dive deeper into what goes into building a Beam runner, for example atop Apache Apex.
Presented at All Things Open 2022
Presented by Danny McCormick
Title: Streaming Data Pipelines With Apache Beam
Abstract: Handling big data presents big problems. Along with traditional concerns like scalability and performance, the increasingly common need for live streaming data processing introduces problems like late or incomplete data from flaky data sources. Apache Beam is an open source, unified model for defining both batch and streaming data-parallel processing pipelines that addresses these challenges. Using one of the open source Beam SDKs, you can build a program that defines a pipeline to be executed by one of Beam’s supported distributed processing back-ends, which include Apache Flink, Apache Spark, and Google Cloud Dataflow.
This talk will explore some problems associated with processing large datasets at scale and how you can write Apache Beam pipelines that address those issues. It will include a demo of a basic Beam streaming pipeline.
Takeaways: an understanding of some challenges associated with large datasets, the Apache Beam model, and how to write a basic Beam streaming pipeline
Audience: anyone dealing with big datasets or interested in data processing at scale.
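As a rough illustration of the kind of pipeline the abstract describes (my own minimal sketch, not the demo from the talk), here is a Beam pipeline in Java. It uses a bounded in-memory source instead of an unbounded one so it stays self-contained; the output path "counts" is arbitrary.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class WordCountSketch {
    public static void main(String[] args) {
        // The runner (Flink, Spark, Dataflow, or the local DirectRunner) is chosen via options.
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline p = Pipeline.create(options);

        p.apply(Create.of("late data", "streaming data", "late data"))  // stand-in for an unbounded source
         .apply(Count.perElement())                                     // what to compute
         .apply(MapElements.into(TypeDescriptors.strings())
                 .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
         .apply(TextIO.write().to("counts"));                           // where to produce output

        p.run().waitUntilFinish();
    }
}
```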
Kafka's basic terminology, its architecture, its protocol, and how it works.
Kafka at scale, its caveats, guarantees and use cases offered by it.
How we use it @ZaprMediaLabs.
Batch Processing at Scale with Flink & Iceberg (Flink Forward)
Flink Forward San Francisco 2022.
Goldman Sachs's Data Lake platform serves as the firm's centralized data platform, ingesting 140K (and growing!) batches per day of Datasets of varying shape and size. Powered by Flink and using metadata configured by platform users, ingestion applications are generated dynamically at runtime to extract, transform, and load data into centralized storage where it is then exported to warehousing solutions such as Sybase IQ, Snowflake, and Amazon Redshift. Data Latency is one of many key considerations as producers and consumers have their own commitments to satisfy. Consumers range from people/systems issuing queries, to applications using engines like Spark, Hive, and Presto to transform data into refined Datasets. Apache Iceberg allows our applications to not only benefit from consistency guarantees important when running on eventually consistent storage like S3, but also allows us the opportunity to improve our batch processing patterns with its scalability-focused features.
by
Andreas Hailu
Exactly-Once Financial Data Processing at Scale with Flink and Pinot (Flink Forward)
Flink Forward San Francisco 2022.
At Stripe we have created a complete end to end exactly-once processing pipeline to process financial data at scale, by combining the exactly-once power from Flink, Kafka, and Pinot together. The pipeline provides exactly-once guarantee, end-to-end latency within a minute, deduplication against hundreds of billions of keys, and sub-second query latency against the whole dataset with trillion level rows. In this session we will discuss the technical challenges of designing, optimizing, and operating the whole pipeline, including Flink, Kafka, and Pinot. We will also share our lessons learned and the benefits gained from exactly-once processing.
by
Xiang Zhang & Pratyush Sharma & Xiaoman Dong
OSA Con 2022: Arrow in Flight - New Developments in Data Connectivity (Altinity Ltd)
David Li - Voltron Data
Apache Arrow has allowed developers to increase the speed and efficiency of analytics by making it easier for libraries and databases to share data without serialization. This talk provides an overview of the Arrow project, highlighting use cases where Arrow has accelerated analytics workflows by as much as 100x, and previews where Arrow is going, with special attention to database connectivity.
Improving Kafka at-least-once performance at Uber (Ying Zheng)
At Uber, we are seeing an increasing demand for Kafka at-least-once delivery (acks=all). So far, we have been running a dedicated at-least-once Kafka cluster with special settings. With a very low workload, the dedicated at-least-once cluster has been working well for more than a year. When we tried to allow at-least-once producing on the regular Kafka clusters, producing performance was the main concern. We spent some effort on this issue in recent months and managed to reduce at-least-once producer latency by about 80% with code changes and configuration tuning. These improvements also help increase Kafka throughput and reduce Kafka end-to-end latency when acks=0.
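For context (a minimal sketch of my own, not Uber's code), an at-least-once producer is mostly a matter of client-side configuration; broker-side settings such as min.insync.replicas also matter but are omitted here, and the broker address and topic name are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AtLeastOnceProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");   // placeholder address
        props.put(ProducerConfig.ACKS_CONFIG, "all");                         // wait for the full in-sync replica set
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);          // retry instead of dropping on transient errors
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5");                      // small batching window to amortize latency
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key", "value"));
        }
    }
}
```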
Maxim Fateev - Beyond the Watermark: On-Demand Backfilling in Flink (Flink Forward)
http://flink-forward.org/kb_sessions/beyond-the-watermark-on-demand-backfilling-in-flink/
Flink has consistency guarantees and an efficient checkpointing model, which make it a good fit for Uber’s money-related use cases, such as driver incentives. However, Flink’s time-progress model is built around a single watermark, which is incompatible with Uber’s business need for generating aggregates retroactively. The talk covers our solution for on-demand backfilling. It also outlines other abstractions and features we expect Flink to support as it matures.
Data all over the place! How SQL and Apache Calcite bring sanity to streaming... (Julian Hyde)
The revolution has happened. We are living in the age of the deconstructed database. Modern enterprises are powered by data, and that data lives in many formats and locations, in-flight and at rest, but somewhat surprisingly, the lingua franca for data remains SQL.
In this talk, Julian describes Apache Calcite, a toolkit for relational algebra that powers many systems including Apache Beam, Flink and Hive. He discusses some areas of development in Calcite: streaming SQL, materialized views, enabling spatial query on vanilla databases, and what a mash-up of all three might look like.
He also describes how SQL is being extended to handle streaming, and the challenges that will need to be solved if it is to become standard.
A talk given by Julian Hyde at Lyft, San Francisco, on 2018/06/27.
Machine Learning on Streaming Data using Kafka, Beam, and TensorFlow (Mikhail...) - Confluent
Are you already using Apache Kafka as your primary messaging platform for streaming events? Would you like to extend your streaming platform for machine learning? Join us to learn about building a streaming machine learning pipeline with Kafka, Beam and TensorFlow on Google Cloud Platform using Confluent Cloud, Dataflow and Cloud Machine Learning Engine.
Apache Spark Listeners: A Crash Course in Fast, Easy Monitoring (Databricks)
The Spark Listener interface provides a fast, simple and efficient route to monitoring and observing your Spark application - and you can start using it in minutes. In this talk, we'll introduce the Spark Listener interfaces available in core and streaming applications, and show a few ways in which they've changed our world for the better at SpotX. If you're looking for a "Eureka!" moment in monitoring or tracking of your Spark apps, look no further than Spark Listeners and this talk!
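A minimal sketch of the idea (mine, not SpotX's): register a listener on the SparkContext and override only the callbacks you care about. The application name and the job that triggers events are arbitrary.

```java
import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerJobEnd;
import org.apache.spark.scheduler.SparkListenerStageCompleted;
import org.apache.spark.sql.SparkSession;

public class ListenerSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("listener-sketch").master("local[*]").getOrCreate();

        // SparkListener's methods are all no-ops by default; override just the events to observe.
        spark.sparkContext().addSparkListener(new SparkListener() {
            @Override
            public void onStageCompleted(SparkListenerStageCompleted stage) {
                System.out.println("stage completed: " + stage.stageInfo().name());
            }
            @Override
            public void onJobEnd(SparkListenerJobEnd jobEnd) {
                System.out.println("job " + jobEnd.jobId() + " ended: " + jobEnd.jobResult());
            }
        });

        spark.range(1, 1_000_000).count();  // run a small job so the listener has something to report
        spark.stop();
    }
}
```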
Apache Kafka’s Transactions in the Wild! Developing an exactly-once KafkaSink... (Hosted by Confluent)
Apache Kafka is one of the most commonly used connectors with Apache Flink for exactly-once streaming use cases. The combination of both systems allows you to build mission-critical systems that require low end-to-end latency and exactly-once processing, e.g. banks processing transactions. In Apache Flink 1.14, we released a new KafkaSink based on Apache Flink’s unified Sink interface that natively supports both streaming and batch execution.
However, we needed to stretch Kafka’s transactions API to fully support exactly-once processing in Flink. In this talk, we will start with a quick recap of Apache Kafka’s transactions and Flink’s checkpointing mechanism. Then, we describe the two-phase commit protocol implemented in KafkaSink in-depth and emphasize the difficulties we have overcome when applying Kafka’s transaction API to longer-lasting transactions.
We explain how we ensure performant writing to Apache Kafka and how the KafkaSink recovery works.
In summary, this talk should give users a deep dive into how Apache Flink leverages Apache Kafka’s transactions and developers an overview of what they have to consider when using Apache Kafka’s transactions.
Apache Calcite: A Foundational Framework for Optimized Query Processing Over ... (Julian Hyde)
A talk given at ACM SIGMOD 2018 in support of the paper <a href="https://arxiv.org/abs/1802.10233"> Calcite: A Foundational Framework for Optimized Query Processing Over Heterogeneous Data Sources</a>.
Apache Calcite is a foundational software framework that provides query processing, optimization, and query language support to many popular open-source data processing systems such as Apache Hive, Apache Storm, Apache Flink, Druid, and MapD. Calcite's architecture consists of a modular and extensible query optimizer with hundreds of built-in optimization rules, a query processor capable of processing a variety of query languages, an adapter architecture designed for extensibility, and support for heterogeneous data models and stores (relational, semi-structured, streaming, and geospatial). This flexible, embeddable, and extensible architecture is what makes Calcite an attractive choice for adoption in big-data frameworks. It is an active project that continues to introduce support for the new types of data sources, query languages, and approaches to query processing and optimization.
This is an overview of interesting features of Apache Pulsar. Keep in mind that at the time I gave this presentation I had not used Pulsar yet; these are just my first impressions from its list of features.
Static Membership: Rebalance Strategy Designed for the Cloud (Boyang Chen, Confluent)
In this presentation, we introduce static membership (KIP-345) and share the story of adopting it at Pinterest. Static membership aims to improve the availability of stream applications, consumer groups, and other applications built on top of them. The original rebalance protocol relies on the group coordinator to allocate entity ids to group members. These generated ids are ephemeral and change when members restart and rejoin. For consumer-based apps, this "dynamic membership" can cause a large percentage of tasks to be re-assigned to different instances during administrative operations such as code deploys, configuration updates, and periodic restarts. For large-state applications, shuffled tasks need a long time to recover their local state before processing, causing applications to be partially or entirely unavailable. At Pinterest, group membership is stable between administrative operations. Motivated by this observation, we modified Kafka's group management protocol to allow group members to provide persistent entity ids. Group membership remains unchanged based on those ids, so no rebalance is triggered. We can conveniently leverage Kubernetes or other cloud management frameworks to provide the entity ids. By adopting static membership in the realtime infrastructure at Pinterest, applications resume processing only a few seconds after administrative operations finish. Previously, with dynamic membership, it could take more than 30 minutes before applications resumed. The talk is organized as follows: we first review Kafka's group management protocol and demonstrate high-availability use cases that dynamic membership is unable to support. Then we share the design and adoption story of static membership. At the end, we do a live demo to show the impact of static membership.
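A hedged sketch of the client-side change static membership requires (not Pinterest's code): each instance supplies a stable group.instance.id and a session timeout long enough to cover a restart. The broker address, topic, and the instance id "stream-app-pod-3" are made up for illustration.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class StaticMemberConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");      // placeholder address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "stream-app");
        props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "stream-app-pod-3");  // stable id per instance (KIP-345)
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "300000");           // survive a restart without a rebalance
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r -> System.out.println(r.key() + " -> " + r.value()));
        }
    }
}
```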
Temporal-Joins in Kafka Streams and ksqlDB | Matthias Sax, Confluent (Hosted by Confluent)
Joins in Kafka Streams and ksqlDB are a killer-feature for data processing and basic join semantics are well understood. However, in a streaming world records are associated with timestamps that impact the semantics of joins: welcome to the fabulous world of _temporal_ join semantics. For joins, timestamps are as important as the actual data and it is important to understand how they impact the join result.
In this talk we take a deep dive into the different types of joins, with a focus on their temporal aspect. Furthermore, we relate the individual join operators to the overall "time engine" of the Kafka Streams query runtime and explain its relationship to operator semantics. To allow developers to apply their knowledge of temporal join semantics, we provide best practices, tips and tricks to "bend" time, and configuration advice to get the desired join results. Last, we give an overview of recent development, and an outlook on future development, that improves joins even further.
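For a flavor of the temporal aspect discussed here (a minimal sketch of mine, not from the talk), a stream-stream join only pairs records whose timestamps fall within the join window. Topic names and the 10-minute window are arbitrary, and default serdes are assumed to be configured.

```java
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

public class TemporalJoinSketch {
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");
        KStream<String, String> payments = builder.stream("payments");

        // Records join only if their timestamps are within 10 minutes of each other.
        orders.join(payments,
                    (order, payment) -> order + " / " + payment,
                    JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(10)))
              .to("orders-with-payments");

        return builder.build();
    }
}
```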
Presentation at Strata Data Conference 2018, New York
The controller is the brain of Apache Kafka. A big part of what the controller does is to maintain the consistency of the replicas and determine which replica can be used to serve the clients, especially during individual broker failure.
Jun Rao outlines the main data flow in the controller—in particular, when a broker fails, how the controller automatically promotes another replica as the leader to serve the clients, and when a broker is started, how the controller resumes the replication pipeline in the restarted broker.
Jun then describes recent improvements to the controller that allow it to handle certain edge cases correctly and increase its performance, which allows for more partitions in a Kafka cluster.
Boost your career by contributing to open source projects (Alexandre Porcelli)
What is every software developer's dream? Developing new technologies and frameworks? Working on an international team? Being encouraged to experiment and pursue innovation? Working from home? Being active in a community? Being a reference in a given technology? Staying away from CRUDs? Being well paid for it? How about "all of the above"?
The first piece of good news is that this dream can become your reality: it is the world of open source. The second piece of good news is that getting there depends only on you.
However, walking this path is not that simple. In this keynote, learn the way forward: from your first pull request to making a living from open source software development, and how these contributions can effectively boost your career as a software developer.
These slides illustrate 20 different examples of using Domain-Specific Modeling: models are expressed directly in the concepts of the problem domain and generate the code for a given target/programming language.
Thorough introduction to language parsing in C#, overview of different approaches, and live-coding session that showcases how to build a working JSON parser using Sprache
Parsers. We might not think about them but anyone who writes code uses parsers every day. And the best part, they are useful not only for compiler design but for implementing other things like custom search queries, DSLs, parsing log files and data.
Writing parsers, a prerequisite for implementation of such features, might seem scary at first (it seemed to me at first!), but in reality, writing parsers is not that complicated.
In this talk, I will explain a bit of theory behind parsers, show how they can be written by hand or with tools such as ANTLR.
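To back up the claim that hand-written parsers are not that complicated, here is a small recursive-descent parser of my own (not from the talk) for integer expressions with + and *; each method corresponds to one grammar rule.

```java
// Hedged illustration of the "written by hand" option: a recursive-descent parser
// for integer expressions. The grammar and names are mine, not the talk's.
public class ExprParser {
    private final String src;
    private int pos;

    public ExprParser(String src) { this.src = src.replaceAll("\\s+", ""); }

    public long parse() {                 // expr : term ('+' term)* ;
        long value = term();
        while (peek() == '+') { pos++; value += term(); }
        return value;
    }
    private long term() {                 // term : number ('*' number)* ;
        long value = number();
        while (peek() == '*') { pos++; value *= number(); }
        return value;
    }
    private long number() {               // number : [0-9]+ ;
        int start = pos;
        while (Character.isDigit(peek())) pos++;
        return Long.parseLong(src.substring(start, pos));
    }
    private char peek() { return pos < src.length() ? src.charAt(pos) : '\0'; }

    public static void main(String[] args) {
        System.out.println(new ExprParser("2 + 3 * 4").parse());  // prints 14
    }
}
```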
This is a simple introductory presentation on Python, created especially for beginners.
Utah Code Camp, Spring 2016. http://utahcodecamp.com In this presentation I describe modern C++. Modern C++ assumes features introduced in the C++11/14 standards. An overview of the new features is given, along with some idioms for modern C++ based on those features.
The new frontiers of AI in RPA with UiPath Autopilot™ (UiPath Community)
In this free online event, organized by the Italian UiPath Community, you can explore the new features of Autopilot, the tool that integrates Artificial Intelligence into the development and use of automations.
📕 Together we will look at some examples of using Autopilot in different tools of the UiPath Suite:
Autopilot for Studio Web
Autopilot for Studio
Autopilot for Apps
Clipboard AI
GenAI applied to Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPath Community)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Using ANTLR on a real example - converting "string combined" queries into parameterized queries
1. Using ANTLR on a real example
convert "string combined" queries into parameterized queries
2. Simon Wiki says:
ANTLR (pronounced Antler), or ANother Tool for Language Recognition, is a parser generator that uses LL(*) parsing. ANTLR takes as input a grammar that specifies a language and generates as output source code for a recognizer for that language. A language is specified using a context-free grammar which is expressed using Extended Backus–Naur Form (EBNF). ANTLR allows generating lexers, parsers, tree parsers, and combined lexer-parsers. Parsers can automatically generate abstract syntax trees which can be further processed with tree parsers. ANTLR provides a single consistent notation for specifying lexers, parsers, and tree parsers. This is in contrast with other parser/lexer generators and adds greatly to the tool's ease of use.
3. Used in at least the following products:
Drools, JBoss rule engine (DRL DSL)
Hibernate, Java ORM (HQL DSL)
NHibernate, .NET ORM (HQL DSL)
Groovy, language for JVM
Jython, language for JVM
4. Where do we need ANTLR?
Parsing a text stream of formal data
Parsing a text stream of incomplete formal data
Complex parsing
Parsing with good error handling
Writing a Domain-Specific Language
You have enough time and some data to parse...
5. Why not just use regular expressions?
In most cases you should go with RegEx.
SO: "RegEx is a text search tool. If all you need to do is pull strings out of strings then it's often the hammer of choice."
SO: "ANTLR is a parser generator. If you need error messages and parse actions or any of the complicated things that come with an interpreter/compiler then it's a good option."
SO: "ANTLR has perfect support for "error-messages": they show line/column numbers and what was wrong. RegEx doesn't have this support."
ANTLR is something (a-lot-of-things) on top of a regular expression language.
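As a hedged sketch of the error reporting being contrasted with RegEx here (the class name is mine, not from the slides), an ANTLR error listener receives the exact line and column of every syntax error; you attach it to the generated parser in place of the default console listener.

```java
import org.antlr.v4.runtime.BaseErrorListener;
import org.antlr.v4.runtime.RecognitionException;
import org.antlr.v4.runtime.Recognizer;

// Reports "line X:Y <message>" for every syntax error the parser encounters.
public class ReportingErrorListener extends BaseErrorListener {
    @Override
    public void syntaxError(Recognizer<?, ?> recognizer, Object offendingSymbol,
                            int line, int charPositionInLine,
                            String msg, RecognitionException e) {
        System.err.printf("line %d:%d %s%n", line, charPositionInLine, msg);
    }
}
```

Usage, assuming a generated parser: parser.removeErrorListeners(); parser.addErrorListener(new ReportingErrorListener());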
17. Real example. Test cases
• Query without any parameters
• Query with concat and variable
• Query with dotted and escaped table names and single quote in sql
• Query with function call and func args concat
• Query with function call with several func args
• Query with nested function call with several func args
• Query with concat and two variables
• Insert query with four params
• Query with dotted param and function name and function arg
• Endline symbol will be dropped from query
• Single line comment will be dropped from query
• Strip single quote only if it is next to a parameter
• Query with like keyword (FAILED)
• Refactor multiline query (FAILED)
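One of the listed cases, "Query with concat and variable", might look roughly like this as a unit test. QueryParameterizer, ParameterizedQuery, and their methods are hypothetical names invented for this sketch, not taken from the slides.

```java
import static org.junit.Assert.assertEquals;

import java.util.List;
import org.junit.Test;

public class QueryParameterizerTest {
    @Test
    public void queryWithConcatAndVariable() {
        // Hypothetical API: convert() parses the "string combined" statement and
        // returns the rewritten SQL plus the expressions that became parameters.
        ParameterizedQuery result = QueryParameterizer.convert(
                "strsql = \"SELECT * FROM T WHERE ID = \" & SOME_VAR");

        assertEquals("SELECT * FROM T WHERE ID = ?", result.sql());
        assertEquals(List.of("SOME_VAR"), result.parameters());
    }
}
```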
18. Real example. Syntax tree
strsql = "SELECT * FROM TABLE_NAME WHERE FIRST_FIELD = " & DOTTED.PARAM_VAR & " AND SECOND_FIELD = " & DOTTED.FUNC_CALL(DOTTED.FUNC_ARG)
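As a hedged illustration of what the conversion aims to produce (mine, not from the slides): the concatenated expressions become placeholders, and their values are bound separately through JDBC. The Java variable names stand in for the original VB expressions, and connection is assumed to be an already-opened java.sql.Connection.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ParameterizedQueryExample {
    static ResultSet run(Connection connection, String dottedParamVar, String secondFieldValue)
            throws SQLException {
        // The two concatenated expressions from the slide become ? placeholders.
        String sql = "SELECT * FROM TABLE_NAME WHERE FIRST_FIELD = ? AND SECOND_FIELD = ?";
        PreparedStatement stmt = connection.prepareStatement(sql);
        stmt.setString(1, dottedParamVar);       // was DOTTED.PARAM_VAR
        stmt.setString(2, secondFieldValue);     // was DOTTED.FUNC_CALL(DOTTED.FUNC_ARG)
        return stmt.executeQuery();
    }
}
```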