The document discusses how to operationalize R models by turning R analytics into web services using Microsoft R Server. Key points include:
- R analytics can be published as web services with one line of code using Swagger-based REST APIs for easy consumption by any programming language.
- Web services can be deployed on Windows, SQL Server, Linux, and Hadoop platforms both on-premises and in the cloud for fast scoring in real time and batch modes.
- R Server allows scaling to grids for powerful computing with load balancing and includes diagnostic and capacity evaluation tools.
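Once a model is published as a Swagger-based REST service, any language can score against it over HTTP. The sketch below builds the kind of JSON request body such an endpoint might expect; the field name `inputParameters` and the endpoint URL in the comment are illustrative assumptions, not the actual R Server API.

```python
import json

# Hypothetical helper: build the JSON body for a Swagger-generated
# scoring endpoint. The "inputParameters" field name is an assumption.
def build_score_request(inputs):
    return {"inputParameters": inputs}

body = build_score_request({"sepal_length": 5.1, "sepal_width": 3.5})
payload = json.dumps(body)

# In a real deployment one would POST this to the generated endpoint, e.g.:
# requests.post("https://server:12800/api/my-model/v1", data=payload,
#               headers={"Authorization": "Bearer <token>"})
print(payload)
```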
What is Kafka & why is it Important? (UKOUG Tech17, Birmingham, UK - December... | Lucas Jellema
Fast data arrives in real time and potentially at high volume. Rapid processing, filtering and aggregation are required to ensure timely reaction and up-to-date information in user interfaces. Doing so is a challenge; making it happen in a scalable and reliable fashion is even more interesting. This session introduces Apache Kafka as the scalable event bus that takes care of the events as they flow in, and Kafka Streams and KSQL for the streaming analytics. Both Java and Node applications are demonstrated that interact with Kafka and leverage Server-Sent Events and WebSocket channels to update the Web UI in real time. User activity performed by the audience in the Web UI is processed by the Kafka-powered back end and results in live updates on all clients.
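The rapid filtering and aggregation described above is what Kafka Streams and KSQL perform over event streams. A minimal standalone sketch (no Kafka broker required) of one such operation, a tumbling-window count of events per key, where events are (timestamp, key) pairs and the window is 10 seconds:

```python
from collections import defaultdict

# Tumbling-window aggregation of the kind Kafka Streams or KSQL would
# perform on an event stream: count events per key per 10-second window.
WINDOW = 10

def window_counts(events):
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % WINDOW)  # align timestamp to window
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(1, "click"), (4, "click"), (12, "click"), (13, "view")]
print(window_counts(events))
# {(0, 'click'): 2, (10, 'click'): 1, (10, 'view'): 1}
```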
This presentation includes a demonstration of remote database synchronization through Twitter.
How does the Cloud Foundry Diego Project Run at Scale, and Updates on .NET Su... | Amit Gupta
The Cloud Foundry Diego team at Pivotal has been hard at work for the past few months exploring and improving Diego's performance at scale and under stress. This talk covers the goals, tools, and results of the experiments to date, as well as a glimpse of what's next.
And finally, a brief teaser about the current state of .NET support in Diego.
Netflix uses Conductor, an open source microservices orchestrator, to manage complex content processing workflows involving ingestion, encoding, localization, and delivery. Conductor provides visibility, control, and reuse of tasks through a task queuing system and workflow definitions. It has scaled to process millions of workflow executions across Netflix's content platform using a stateless architecture with Dynomite for storage and Dyno-Queues for task distribution.
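Conductor workflows are declared as JSON definitions listing the tasks to execute. The sketch below mirrors the shape of such a definition for the ingest/encode/deliver pipeline described above; the field names follow Conductor's documented JSON format, but the task names are invented for illustration.

```python
import json

# Illustrative Conductor-style workflow definition for a content
# processing pipeline. Task names are hypothetical.
workflow = {
    "name": "content_processing",
    "version": 1,
    "tasks": [
        {"name": "ingest", "taskReferenceName": "ingest_ref", "type": "SIMPLE"},
        {"name": "encode", "taskReferenceName": "encode_ref", "type": "SIMPLE"},
        {"name": "deliver", "taskReferenceName": "deliver_ref", "type": "SIMPLE"},
    ],
}

print(json.dumps(workflow, indent=2))
```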
Software Developer and Architecture @ LinkedIn (QCon SF 2014) | Sid Anand
The document provides details about Sid Anand's career and then discusses LinkedIn's software development process and architecture when he was there. It notes that when Sid started at LinkedIn in 2011, compiling the code took a long time due to the large codebase and many dependencies. It then describes how LinkedIn scaled to support hundreds of millions of members and thousands of employees by splitting the monolithic codebase into individual Git repos, using intermediate JARs to reduce dependencies, and connecting development machines to test environments instead of deploying everything locally. It also discusses LinkedIn's use of Kafka, search federation, and not making web service calls between data centers to scale across multiple data centers.
Writing Blazing Fast, and Production-Ready Kafka Streams apps in less than 30... | HostedbyConfluent
If you have already worked on various Kafka Streams applications, then you have probably found yourself rewriting the same piece of code again and again.
Whether it's to manage processing failures or bad records, to use interactive queries, to organize your code, or to deploy and monitor your Kafka Streams app, building some in-house libraries to standardize common patterns across your projects seems to be unavoidable.
And if you're new to Kafka Streams, you might be interested to know which patterns to use for your next streaming project.
In this talk, I propose to introduce you to Azkarra, an open-source lightweight Java framework designed to provide most of that functionality off the shelf by leveraging best-of-breed ideas and proven practices from the Apache Kafka community.
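One of the recurring patterns mentioned above, handling bad records without crashing the processing loop, can be sketched in plain Python: quarantine records that fail to deserialize in a dead-letter list and keep going. Names here are illustrative, not part of any framework API.

```python
import json

# Dead-letter pattern sketch: bad records are quarantined instead of
# aborting the whole stream-processing loop.
def process_records(raw_records, handler):
    dead_letters = []
    for raw in raw_records:
        try:
            record = json.loads(raw)
        except json.JSONDecodeError:
            dead_letters.append(raw)   # quarantine the poison record
            continue
        handler(record)
    return dead_letters

seen = []
bad = process_records(['{"id": 1}', "not-json", '{"id": 2}'], seen.append)
print(seen, bad)  # [{'id': 1}, {'id': 2}] ['not-json']
```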
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with A... | confluent
Microservices, events, containers, and orchestrators are dominating our vernacular today. As operations teams adapt to support these technologies in production, cloud-native platforms like Pivotal Cloud Foundry and Kubernetes have quickly risen to serve as force multipliers of automation, productivity and value.
Apache Kafka® is providing developers a critically important component as they build and modernize applications to cloud-native architecture.
This talk will explore:
• Why cloud-native platforms and why run Apache Kafka on Kubernetes?
• What kind of workloads are best suited for this combination?
• Tips to determine the path forward for legacy monoliths in your application portfolio
• Demo: Running Apache Kafka as a Streaming Platform on Kubernetes
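Running Kafka on Kubernetes, as in the demo bullet above, typically means a StatefulSet so each broker keeps a stable identity and its own persistent disk. The fragment below is illustrative only; real deployments usually rely on an operator (e.g. Strimzi or Confluent's) rather than a hand-written manifest.

```yaml
# Illustrative fragment: a minimal StatefulSet for Kafka brokers.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: apache/kafka:latest
          ports:
            - containerPort: 9092
  volumeClaimTemplates:        # stable per-broker disk, Kafka's key resource
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```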
Putting Kafka In Jail – Best Practices To Run Kafka On Kubernetes & DC/OS | Lightbend
Apache Kafka–part of Lightbend Fast Data Platform–is a distributed streaming platform that is best suited to run close to the metal on dedicated machines in statically defined clusters. For most enterprises, however, these fixed clusters are quickly becoming extinct in favor of mixed-use clusters that take advantage of all infrastructure resources available.
In this webinar by Sean Glover, Fast Data Engineer at Lightbend, we will review leading Kafka implementations on DC/OS and Kubernetes to see how they reliably run Kafka in container orchestrated clusters and reduce the overhead for a number of common operational tasks with standard cluster resource manager features. You will learn specifically about concerns like:
* The need for greater operational know-how to do common tasks with Kafka in static clusters, such as applying broker configuration updates, upgrading to a new version, and adding or decommissioning brokers.
* The best way to provide resources to stateful technologies while in a mixed-use cluster, noting the importance of disk space as one of Kafka’s most important resource requirements.
* How to address the particular needs of stateful services in a model that natively favors stateless, transient services.
Making Kafka Cloud Native | Jay Kreps, Co-Founder & CEO, Confluent | HostedbyConfluent
A talk discussing the rise of Apache Kafka and data in motion, plus the impact of cloud-native data systems. This talk will cover how Kafka needs to evolve to keep up with the future of cloud, what this means for distributed systems engineers, and what work is being done to truly make Kafka cloud native.
Building Microservices with the 12 Factor App Pattern on AWS | Amazon Web Services
by Chris Hein, Partner Solutions Architect, AWS
Microservices architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features. But building containerized microservices across multiple teams means you need well-defined, guiding methodologies for software design and implementation. In this talk we’ll discuss architectural best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers. We will deep-dive into Martin Fowler’s principles of microservices and map them to the twelve-factor app pattern and real-life considerations. If you are building or in the process of building microservices on AWS, don’t miss this session.
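Of the twelve factors mapped in the talk above, factor III ("store config in the environment") is the easiest to show concretely: a containerized microservice reads its settings from environment variables injected by the platform (ECS task definitions, Kubernetes pod specs, and so on). The variable names below are examples, not a fixed convention.

```python
import os

# Twelve-factor config sketch: settings come from the environment,
# with safe local defaults. Variable names are illustrative.
def load_config():
    return {
        "db_url": os.environ.get("DATABASE_URL", "sqlite:///local.db"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }

os.environ["LOG_LEVEL"] = "DEBUG"   # simulate the platform-injected env
print(load_config())
```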
Function Mesh: Complex Streaming Jobs Made Simple - Pulsar Summit NA 2021 | StreamNative
Pulsar Function is a succinct computing abstraction that Apache Pulsar provides for expressing simple ETL and streaming tasks. The simplicity comes in two forms: a simple interface and simple deployment. As it has been adopted, we realized that native support for organizing multiple functions into a single unit would be very beneficial. With such support, people can express and manage multi-stage jobs easily. In addition, this support opens the possibility of a higher-level DSL to further simplify job composition. We call this new feature Function Mesh.
This talk aims to provide a thorough walkthrough of this new Function Mesh Feature, including its design, implementation, use cases, and examples, to help people seeking simple streaming solutions understand this newly created powerful tool in Apache Pulsar.
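In its simplest Python form, a Pulsar Function is just a function mapping an input message to an output, and Function Mesh composes several such stages into one job. The sketch below simulates that composition locally, with no Pulsar runtime; the stage functions are invented examples.

```python
# Two toy single-purpose stages, in the spirit of Pulsar Functions.
def exclaim(value):
    return value + "!"

def upper(value):
    return value.upper()

# Function Mesh-style composition, simulated as a local pipeline:
stages = [exclaim, upper]
result = "hello"
for stage in stages:
    result = stage(result)
print(result)  # HELLO!
```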
Unified Data Processing with Apache Flink and Apache Pulsar | Seth Wiesman | StreamNative
Come learn how the combination of Apache Pulsar and Apache Flink is making stateful stream processing even more expressive and flexible to support applications in streaming that were previously not considered streamable. The new world of applications and fast data architectures has broken up the database: Raw data persistence comes in the form of event logs, and the state of the world is computed by a stream processor. Apache Pulsar provides a strong solution for the event log, while Apache Flink forms a powerful foundation for the computation over the event streams.
We will discuss the key concepts behind Apache Flink's approach to stream processing and how it is a powerful abstraction for stateful event-driven applications. We will then see how to use Flink in conjunction with Apache Pulsar to create a unified data processing platform.
Containers have changed the mindset of IT and DevOps. They enable developers to work with dev, test, stage and production environments identically. Containers provide the right abstraction for microservices, and many cloud platforms have integrated them into deployment pipelines. DevOps and containers together help companies achieve their business goals faster and more effectively. In this session we will review the current landscape of DevOps with containers. In addition, we will discuss known issues and solutions for enterprise Java applications in containers.
From data stream management to distributed dataflows and beyond | Vasia Kalavri
Recent efforts by academia and open-source communities have established stream processing as a principal data analysis technology across industry. All major cloud vendors offer streaming dataflow pipelines and online analytics as managed services. Notable use-cases include real-time fault detection in space networks, city traffic management, dynamic pricing for car-sharing, and anomaly detection in financial transactions. At the same time, streaming dataflow systems are increasingly being used for event-driven applications beyond analytics, such as orchestrating microservices and model serving. Streaming technology has evolved significantly in the past decades; however, emerging applications are once more challenging the design decisions of modern streaming systems. In this talk, I will discuss the evolution of stream processing and bring current trends and open problems to the attention of our community.
With Apache Kafka’s rise for event-driven architectures, developers require a specification to design effective event-driven APIs. AsyncAPI has been developed based on OpenAPI to define the endpoints and schemas of brokers and topics. For Kafka applications, the broker’s design to handle high throughput serialized payloads brings challenges for consumers and producers managing the structure of the message. For this reason, a registry becomes critical to achieve schema governance. Apicurio Registry is an end-to-end solution to store API definitions and schemas for Kafka applications. The project includes serializers, deserializers, and additional tooling. The registry supports several types of artifacts including OpenAPI, AsyncAPI, GraphQL, Apache Avro, Google protocol buffers, JSON Schema, Kafka Connect schema, WSDL, and XML Schema (XSD). It also checks them for validity and compatibility.
In this session, we will be covering the following topics:
● The importance of having a contract-first approach to event-driven APIs
● What is AsyncAPI, and how it helps to define Kafka endpoints and schemas
● The Kafka challenges on message structure when serializing and deserializing
● Introduction to Apicurio Registry and schema management for Kafka
● Examples of how to use Apicurio Registry with popular Java frameworks like Spring and Quarkus
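A schema registry's core job, hinted at in the bullets above, is checking that a new schema version stays compatible with the old one. The toy check below captures the idea only; real registries such as Apicurio evaluate full Avro/JSON Schema semantics. Here a "schema" is just a dict mapping field name to (type, required).

```python
# Toy backward-compatibility check in the spirit of a schema registry:
# dropping a required field or changing a type breaks old readers.
def backward_compatible(old, new):
    for field, (ftype, required) in old.items():
        if required and field not in new:
            return False            # readers of old data would break
        if field in new and new[field][0] != ftype:
            return False            # type change is incompatible
    return True

v1 = {"id": ("long", True), "name": ("string", True)}
v2 = {"id": ("long", True), "name": ("string", True), "tag": ("string", False)}
v3 = {"id": ("long", True)}         # dropped required field "name"

print(backward_compatible(v1, v2), backward_compatible(v1, v3))  # True False
```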
Patterns and Pains of Migrating Legacy Applications to Kubernetes | Josef Adersberger
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs, and a higher elasticity & resiliency in production. Kubernetes is the place to be for cloud native apps. But what to do if you’ve no shiny new cloud native apps but a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes you can!
We’re facing the challenge of migrating hundreds of JEE legacy applications of a German blue chip company onto a Kubernetes cluster within one year.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way.
How easy (or hard) it is to monitor your GraphQL service performance | Red Hat
- GraphQL performance monitoring can be challenging as queries can vary significantly even when requesting the same data. Traditional endpoint monitoring provides little insight.
- Distributed tracing using OpenTracing allows tracing queries to monitor performance at the resolver level. Tools like Jaeger and plugins for Apollo Server and other GraphQL servers can integrate tracing.
- A demo showed using the Apollo OpenTracing plugin to trace a query through an Apollo server and resolver to an external API. The trace data was sent to Jaeger for analysis to help debug performance issues.
This document discusses Apache Slider, which allows applications to be deployed and managed on Apache Hadoop YARN. Slider uses an Application Master, agents, and scripts to deploy applications defined in an XML package. The Application Master keeps applications in a desired state across YARN containers and handles lifecycle commands like start, stop, and scaling. Slider integrates with Apache Ambari for graphical management and configuration of applications on YARN.
Resilient Predictive Data Pipelines (QCon London 2016) | Sid Anand
This document discusses building resilient predictive data pipelines. It begins by distinguishing between ETL and predictive data pipelines, noting that predictive pipelines require high availability with downtimes of less than an hour. The document then outlines design goals for resilient data pipelines, including being scalable, available, instrumented/monitored/alert-enabled, and quickly recoverable. It proposes using AWS services like SQS, SNS, S3, and Auto Scaling Groups to build such pipelines. The document also recommends using Apache Airflow for workflow automation and scheduling to reliably manage pipelines as directed acyclic graphs. It presents an architecture using these techniques and assesses how well it meets the outlined design goals.
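Airflow, recommended above for workflow automation, models a pipeline as a directed acyclic graph of tasks. Without installing Airflow, the same idea can be sketched with the standard library: declare task dependencies, then derive a valid execution order. The task names are illustrative.

```python
from graphlib import TopologicalSorter

# DAG sketch: each task maps to the set of tasks it depends on,
# mirroring how Airflow schedules tasks in dependency order.
deps = {
    "extract": set(),
    "validate": {"extract"},
    "train_model": {"validate"},
    "publish_scores": {"train_model"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # ['extract', 'validate', 'train_model', 'publish_scores']
```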
In this talk, together with Andrea Gioia, Partner at Quantyca, we explained the role of Apache Kafka, how it can support event-driven architectures and solutions, and why Kafka is an excellent choice for event sourcing.
This document provides an overview of using Chef and Azure to build next-generation infrastructure. It discusses key Azure services, deploying a Chef server in Azure, integrating Chef with the Microsoft ecosystem, and migrating and automating workloads across on-premise, Azure, and hybrid environments. The lab guides users through deploying a Chef server in Azure, configuring it, and cloning a sample cookbook to manage infrastructure as code.
The document introduces Apache Kafka's Streams API for stream processing. Some key points covered include:
- The Streams API allows building stream processing applications without needing a separate cluster, providing an elastic, scalable, and fault-tolerant processing engine.
- It integrates with existing Kafka deployments and supports both stateful and stateless computations on data in Kafka topics.
- Applications built with the Streams API are standard Java applications that run on client machines and leverage Kafka for distributed, parallel processing and fault tolerance via state stores in Kafka.
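The stateful computations mentioned above are usually introduced with the canonical Streams example, word count: running counts act as the state store that Kafka Streams would back with a changelog topic. Sketched standalone, with no cluster required:

```python
from collections import defaultdict

# Word count as a stateful stream computation; state_store plays the
# role of a Kafka Streams state store backed by a changelog topic.
state_store = defaultdict(int)

def process(line):
    updates = {}
    for word in line.lower().split():
        state_store[word] += 1
        updates[word] = state_store[word]
    return updates            # in Streams, these go to an output topic

process("hello kafka")
print(process("hello streams"))  # {'hello': 2, 'streams': 1}
```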
A Modern C++ Kafka API | Kenneth Jia, Morgan Stanley | HostedbyConfluent
The document discusses a new C++ Kafka API called modern-cpp-kafka that was developed to address requirements for a pub/sub messaging system. It provides examples of how the API simplifies and improves upon an existing C++ Kafka client (librdkafka) for key tasks like producing and consuming messages. The modern-cpp-kafka API matches the Java API naming, uses RAII for lifetime management, and hides polling and queue details. It has led to throughput improvements of over 26% for one application. The API is now open source and the team is working to expand it further.
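RAII, as used by modern-cpp-kafka above, ties resource cleanup to object lifetime. The closest Python analogue is a context manager; this toy "producer" guarantees a flush of pending messages when the block exits, mirroring what a destructor-based wrapper does in C++. All names here are invented for illustration.

```python
from contextlib import contextmanager

class ToyProducer:
    """Toy stand-in for a message producer with a buffered send path."""
    def __init__(self):
        self.pending = []
        self.flushed = []

    def send(self, msg):
        self.pending.append(msg)

    def flush(self):
        self.flushed.extend(self.pending)
        self.pending.clear()

@contextmanager
def producer():
    p = ToyProducer()
    try:
        yield p
    finally:
        p.flush()            # runs even if the block raises, RAII-style

with producer() as p:
    p.send("event-1")
    p.send("event-2")

print(p.flushed)  # ['event-1', 'event-2']
```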
With AWS you can choose the right database technology and software for the job. Given the myriad of choices, from relational databases to non-relational stores, this session provides details and examples of some of the choices available to you. This session also provides details about real-world deployments from customers using Amazon RDS, Amazon ElastiCache, Amazon DynamoDB, and Amazon Redshift.
Axway presented on its API Management Plus solution. The presentation covered Axway's vision of digital transformation and customer experience networks. It then demonstrated API Management Plus's full lifecycle API management capabilities. This includes API creation, governance, consumption, and measurement. The solution aims to streamline digital innovation and increase ecosystem engagement.
Building a company-wide data pipeline on Apache Kafka - engineering for 150 b... | LINE Corporation
Yuto Kawamura
Building a company-wide data pipeline on Apache Kafka - engineering for 150 billion messages per day
Summary:
LINE is a messaging service with 200 million monthly active users. Our service architecture evolves daily with various collaborating components. I'll introduce an overview of LINE's messaging service architecture, mainly focusing on our company-wide data pipeline infrastructure built upon Apache Kafka, which accepts more than 150 billion messages every day, making it one of the largest in the world. In this talk I will also cover how we managed such scale while keeping the pipeline reliable enough to serve as infrastructure for building services.
Machine learning services with SQL Server 2017 | Mark Tabladillo
SQL Server 2017 introduces Machine Learning Services with two independent technologies: R and Python. The purpose of this presentation is 1) to describe major features of this technology for technology managers; 2) to outline use cases for architects; and 3) to provide demos for developers and data scientists.
Microsoft R Server allows users to run R code on large datasets in a distributed, parallel manner across SQL Server, Spark, and Hadoop without code changes. It provides scalable machine learning algorithms and tools to operationalize models for real-time scoring. The document discusses how R code can be run remotely on Hadoop and Spark clusters using technologies like RevoScaleR and Sparklyr for scalability.
Making Kafka Cloud Native | Jay Kreps, Co-Founder & CEO, ConfluentHostedbyConfluent
A talk discussing the rise of Apache Kafka and data in motion plus the impact of cloud native data systems. This talk will cover how Kafka needs to evolve to keep up with the future of cloud, what this means for distributed systems engineers, and what work is being done to truly make Kafka Cloud Native
Building Microservices with the 12 Factor App Pattern on AWSAmazon Web Services
by Chris Hein, Partner Solutions Architect, AWS
Microservices architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features. But building containerized microservices across multiple teams means you need well-defined, guiding methodologies for software design and implementation. In this talk we’ll discuss architectural best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers. We will deep-dive into Martin Fowler’s principles of microservices and map them to the twelve-factor app pattern and real-life considerations. If you are building or in the process of building microservices on AWS, don’t miss this session.
Function Mesh: Complex Streaming Jobs Made Simple - Pulsar Summit NA 2021StreamNative
Pulsar Function is a succinct computing abstraction Apache Pulsar provides users to express simple ETL and streaming tasks. The simplicity comes in two folds: Simple Interface and Simple Deployment. As it has been adopted, we realized that the native support of organizing multiple functions into integrity will be very beneficial. With such support, people can express and manage multi-stage jobs easily. In addition, this support also provides the possibility of higher-level abstraction DSL to further simplify the job composition. We call this new feature -- Function Mesh.
This talk aims to provide a thorough walkthrough of this new Function Mesh Feature, including its design, implementation, use cases, and examples, to help people seeking simple streaming solutions understand this newly created powerful tool in Apache Pulsar.
Unified Data Processing with Apache Flink and Apache Pulsar_Seth WiesmanStreamNative
Come learn how the combination of Apache Pulsar and Apache Flink is making stateful stream processing even more expressive and flexible to support applications in streaming that were previously not considered streamable. The new world of applications and fast data architectures has broken up the database: Raw data persistence comes in the form of event logs, and the state of the world is computed by a stream processor. Apache Pulsar provides a strong solution for the event log, while Apache Flink forms a powerful foundation for the computation over the event streams.
We will discuss the key concepts behind Apache Flink's approach to stream processing and how it is a powerful abstraction for stateful event-driven applications. We will then see how to use Flink in conjunction with Apache Pulsar to creates a unified data processing platform.
Containers have changed the mind of IT in DevOps. They enable developers to work with dev, test, stage and production environments identically. Containers provide the right abstraction for Microservices and many cloud platforms have integrated them into deployment pipelines. DevOps and Containers together help companies to achieve their business goals faster and more effectively. At this session we will review the current landscape of DevOps with Containers. In addition, we will discuss known issues and solutions for enterprise Java applications in Containers.
From data stream management to distributed dataflows and beyondVasia Kalavri
Recent efforts by academia and open-source communities have established stream processing as a principal data analysis technology across industry. All major cloud vendors offer streaming dataflow pipelines and online analytics as managed services. Notable use-cases include real-time fault detection in space networks, city traffic management, dynamic pricing for car-sharing, and anomaly detection in financial transactions. At the same time, streaming dataflow systems are increasingly being used for event-driven applications beyond analytics, such as orchestrating microservices and model serving. In the past decades, streaming technology has evolved significantly, however, emerging applications are once more challenging the design decisions of modern streaming systems. In this talk, I will discuss the evolution of stream processing and bring current trends and open problems to the attention of our community.
With Apache Kafka’s rise for event-driven architectures, developers require a specification to design effective event-driven APIs. AsyncAPI has been developed based on OpenAPI to define the endpoints and schemas of brokers and topics. For Kafka applications, the broker’s design to handle high throughput serialized payloads brings challenges for consumers and producers managing the structure of the message. For this reason, a registry becomes critical to achieve schema governance. Apicurio Registry is an end-to-end solution to store API definitions and schemas for Kafka applications. The project includes serializers, deserializers, and additional tooling. The registry supports several types of artifacts including OpenAPI, AsyncAPI, GraphQL, Apache Avro, Google protocol buffers, JSON Schema, Kafka Connect schema, WSDL, and XML Schema (XSD). It also checks them for validity and compatibility.
In this session, we will be covering the following topics:
● The importance of having a contract-first approach to event-driven APIs
● What is AsyncAPI, and how it helps to define Kafka endpoints and schemas
● The Kafka challenges on message structure when serializing and deserializing
● Introduction to Apicurio Registry and schema management for Kafka
● Examples of how to use Apicurio Registry with popular Java frameworks like Spring and Quarkus
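The schema-governance idea above can be sketched in a few lines of Python. This is a conceptual toy, not the Apicurio Registry API: a dict stands in for the registry, and the "serializer" validates a message against the registered schema before encoding it, which is the guarantee a real registry-backed serializer gives consumers.

```python
# Conceptual sketch (not the Apicurio API) of why a registry matters:
# the producer validates against the registered schema before serializing,
# so consumers can rely on a stable message structure.

import json

# Toy in-memory "registry": topic -> required fields and their types.
REGISTRY = {
    "orders": {"id": int, "amount": float, "currency": str},
}

def validate(topic, message):
    """Check a message against the schema registered for its topic."""
    schema = REGISTRY[topic]
    for field, ftype in schema.items():
        if field not in message or not isinstance(message[field], ftype):
            raise ValueError(f"field {field!r} missing or not {ftype.__name__}")
    return True

def serialize(topic, message):
    """Validate, then serialize - the pattern a registry-aware serializer follows."""
    validate(topic, message)
    return json.dumps(message).encode("utf-8")

payload = serialize("orders", {"id": 1, "amount": 9.99, "currency": "EUR"})
print(payload)
```

A malformed message fails at the producer, before it ever reaches the broker, which is exactly where you want schema errors surfaced.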
Patterns and Pains of Migrating Legacy Applications to Kubernetes - Josef Adersberger
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs, and higher elasticity and resiliency in production. Kubernetes is the place to be for cloud-native apps. But what do you do if you have no shiny new cloud-native apps, just a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes, you can!
We’re facing the challenge of migrating hundreds of JEE legacy applications of a German blue chip company onto a Kubernetes cluster within one year.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way.
How easy (or hard) it is to monitor your GraphQL service performance - Red Hat
- GraphQL performance monitoring can be challenging as queries can vary significantly even when requesting the same data. Traditional endpoint monitoring provides little insight.
- Distributed tracing using OpenTracing allows tracing queries to monitor performance at the resolver level. Tools like Jaeger and plugins for Apollo Server and other GraphQL servers can integrate tracing.
- A demo showed using the Apollo OpenTracing plugin to trace a query through an Apollo server and resolver to an external API. The trace data was sent to Jaeger for analysis to help debug performance issues.
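The resolver-level idea can be illustrated with a small self-contained Python sketch. This is not the Apollo OpenTracing plugin; it is a hypothetical timing decorator that records a (resolver, duration) pair per call, which is essentially what a tracer span captures before export to a backend like Jaeger.

```python
# Minimal sketch of resolver-level timing - the idea behind tracing GraphQL
# at the resolver level rather than at the endpoint. A real tracer would emit
# spans to Jaeger; here we just collect them in a list.

import time
from functools import wraps

SPANS = []  # collected (resolver name, duration in seconds) pairs

def traced(fn):
    """Record how long each resolver takes, keyed by resolver name."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            SPANS.append((fn.__name__, time.perf_counter() - start))
    return wrapper

@traced
def resolve_user(user_id):
    return {"id": user_id, "name": "Ada"}

@traced
def resolve_orders(user_id):
    time.sleep(0.01)  # stand-in for a slow external API call
    return [{"id": 1}, {"id": 2}]

user = resolve_user(42)
orders = resolve_orders(42)
slowest = max(SPANS, key=lambda s: s[1])
print(slowest[0])  # the resolver to investigate first
```

Two queries returning the same data can spend their time in completely different resolvers, which is why per-resolver spans reveal what endpoint-level metrics cannot.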
This document discusses Apache Slider, which allows applications to be deployed and managed on Apache Hadoop YARN. Slider uses an Application Master, agents, and scripts to deploy applications defined in an XML package. The Application Master keeps applications in a desired state across YARN containers and handles lifecycle commands like start, stop, and scaling. Slider integrates with Apache Ambari for graphical management and configuration of applications on YARN.
Resilient Predictive Data Pipelines (QCon London 2016) - Sid Anand
This document discusses building resilient predictive data pipelines. It begins by distinguishing between ETL and predictive data pipelines, noting that predictive pipelines require high availability with downtimes of less than an hour. The document then outlines design goals for resilient data pipelines, including being scalable, available, instrumented/monitored/alert-enabled, and quickly recoverable. It proposes using AWS services like SQS, SNS, S3, and Auto Scaling Groups to build such pipelines. The document also recommends using Apache Airflow for workflow automation and scheduling to reliably manage pipelines as directed acyclic graphs. It presents an architecture using these techniques and assesses how well it meets the outlined design goals.
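One of the recoverability goals above can be sketched as a retry-with-backoff loop, the same per-task mechanism Airflow exposes through task retry settings. The names and numbers below are illustrative, not Airflow's API:

```python
# Sketch of one resilience building block: retry with exponential backoff.
# A workflow scheduler applies the same idea per task; this standalone
# version shows the control flow.

import time

def run_with_retries(task, max_attempts=3, base_delay=0.01):
    """Run a task, retrying on failure with exponentially growing delays."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # recovery failed - surface the error for alerting
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}

def flaky_extract():
    """Simulated extract step that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "rows"

print(run_with_retries(flaky_extract))  # succeeds on the third attempt
```

The pattern only makes a pipeline "quickly recoverable" if tasks are idempotent: re-running a failed step must not duplicate work already done.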
In this talk, together with Andrea Gioia, Partner at Quantyca, we explained the role of Apache Kafka, how it can support event-driven architectures and solutions, and why Kafka is an excellent choice for event sourcing.
This document provides an overview of using Chef and Azure to build next-generation infrastructure. It discusses key Azure services, deploying a Chef server in Azure, integrating Chef with the Microsoft ecosystem, and migrating and automating workloads across on-premise, Azure, and hybrid environments. The lab guides users through deploying a Chef server in Azure, configuring it, and cloning a sample cookbook to manage infrastructure as code.
The document introduces Apache Kafka's Streams API for stream processing. Some key points covered include:
- The Streams API allows building stream processing applications without needing a separate cluster, providing an elastic, scalable, and fault-tolerant processing engine.
- It integrates with existing Kafka deployments and supports both stateful and stateless computations on data in Kafka topics.
- Applications built with the Streams API are standard Java applications that run on client machines and leverage Kafka for distributed, parallel processing and fault tolerance via state stores in Kafka.
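The canonical Streams example, a stateful word count, can be sketched conceptually in Python. In a real Kafka Streams application (Java) the table below would be a fault-tolerant state store backed by a changelog topic; here a plain dict stands in:

```python
# Conceptual sketch of the Kafka Streams word-count pattern.
# The dict plays the role of the state store; each processed record
# updates the running counts and emits a downstream change stream.

counts = {}  # stand-in for the fault-tolerant state store

def process(record_value):
    """flatMap to words, group by word, and update the running count."""
    updates = []
    for word in record_value.lower().split():
        counts[word] = counts.get(word, 0) + 1
        updates.append((word, counts[word]))  # emitted change records
    return updates

stream = ["Kafka Streams", "streams of records", "Kafka topics"]
for value in stream:
    process(value)

print(counts["kafka"])  # 2
```

The point of the summarized talk is that Kafka Streams gives you this programming model plus partitioned parallelism and fault tolerance without running a separate processing cluster.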
A Modern C++ Kafka API | Kenneth Jia, Morgan Stanley - HostedbyConfluent
The document discusses modern-cpp-kafka, a new C++ Kafka API developed to address requirements for a pub/sub messaging system. It provides examples of how the API simplifies and improves upon an existing C++ Kafka client (librdkafka) for key tasks like producing and consuming messages. The modern-cpp-kafka API matches the Java API naming, uses RAII for lifetime management, and hides polling and queue details. It has delivered a throughput improvement of over 26% for one application. The API is now open source, and the team is working to expand it further.
With AWS you can choose the right database technology and software for the job. Given the myriad of choices, from relational databases to non-relational stores, this session provides details and examples of some of the choices available to you. This session also provides details about real-world deployments from customers using Amazon RDS, Amazon ElastiCache, Amazon DynamoDB, and Amazon Redshift.
Axway presented on its API Management Plus solution. The presentation covered Axway's vision of digital transformation and customer experience networks. It then demonstrated API Management Plus's full lifecycle API management capabilities. This includes API creation, governance, consumption, and measurement. The solution aims to streamline digital innovation and increase ecosystem engagement.
Building a company-wide data pipeline on Apache Kafka - engineering for 150 b... - LINE Corporation
Yuto Kawamura
Building a company-wide data pipeline on Apache Kafka - engineering for 150 billion messages per day
Summary:
LINE is a messaging service with 200 million monthly active users. Our service architecture evolves daily with various collaborating components. I'll give an overview of LINE's messaging service architecture, focusing mainly on our company-wide data pipeline infrastructure built on Apache Kafka, which accepts more than 150 billion messages every day, making it one of the largest in the world. In this talk I will introduce how we managed such scale while keeping the pipeline reliable enough to serve as infrastructure for building services.
Machine learning services with SQL Server 2017 - Mark Tabladillo
SQL Server 2017 introduces Machine Learning Services with two independent technologies: R and Python. The purpose of this presentation is 1) to describe major features of this technology for technology managers; 2) to outline use cases for architects; and 3) to provide demos for developers and data scientists.
Microsoft R Server allows users to run R code on large datasets in a distributed, parallel manner across SQL Server, Spark, and Hadoop without code changes. It provides scalable machine learning algorithms and tools to operationalize models for real-time scoring. The document discusses how R code can be run remotely on Hadoop and Spark clusters using technologies like RevoScaleR and Sparklyr for scalability.
Intro to Big Data Analytics using Microsoft Machine Learning Server with Spark - Alex Zeltov
Alex Zeltov - Intro to Big Data Analytics using Microsoft Machine Learning Server with Spark
By combining enterprise-scale R analytics software with the power of Apache Hadoop and Apache Spark, Microsoft R Server for HDP or HDInsight gives you the scale and performance you need. Multi-threaded math libraries and transparent parallelization in R Server handle up to 1,000x more data at up to 50x faster speeds than open-source R, which helps you train more accurate models for better predictions. R Server works with the open-source R language, so all of your R scripts run without changes.
Microsoft Machine Learning Server is your flexible enterprise platform for analyzing data at scale, building intelligent apps, and discovering valuable insights across your business with full support for Python and R. Machine Learning Server meets the needs of all constituents of the process – from data engineers and data scientists to line-of-business programmers and IT professionals. It offers a choice of languages and features algorithmic innovation that brings the best of open source and proprietary worlds together.
R support is built on a legacy of Microsoft R Server 9.x and Revolution R Enterprise products. Significant machine learning and AI capabilities enhancements have been made in every release. In 9.2.1, Machine Learning Server adds support for the full data science lifecycle of your Python-based analytics.
This meetup will NOT be a data science intro or an intro to R programming. It is about working with data and big data on MLS.
- How to Scale R
- Work with R and Hadoop + Spark
- Demo of MLS on HDP/HDInsight server with RStudio
- How to operationalize model deployment using the MLS web service operationalization features, on MLS Server or on the cloud Azure ML (PaaS) offering
Speaker Bio:
Alex Zeltov is a Big Data Solutions Architect / Software Engineer / Programmer Analyst / Data Scientist with over 19 years of industry experience in Information Technology, most recently in Big Data and Predictive Analytics. He currently works as a Global Black Belt Technical Specialist at Microsoft, where he concentrates on Big Data and Advanced Analytics use cases. Prior to joining Microsoft, he worked as a Sr. Solutions Engineer at Hortonworks, where he specialized in the HDP and HDF platforms.
RUCK 2017: Giving Wings to R - An Introduction to Microsoft R and Cloud Machine Learning - r-kor
Microsoft R can be used with Spark to perform advanced analytics on big data in the cloud or on-premises. Key features include the ability to choose between Spark and other compute contexts, easily deploy analytic models as web services, and process data at scale on HDInsight clusters with hundreds of nodes. R enables building end-to-end AI solutions from data preparation and modeling to operationalizing models for production using services like SQL Server, HDInsight, and Azure.
Title: Scalable R
Event description:
During this short session you will get introduced to Microsoft R for big data and its integration into (not only) Microsoft environment (SQL Server / Hadoop) with showcase of tools and code.
About speaker:
Michal Marusan's background is in data warehousing and business intelligence on massively parallel database engines, but for more than the last five years he has been working on numerous Big Data and Advanced Analytics projects with customers mainly from the Telco, Banking, and Transportation industries.
Michal's focus and passion is helping customers implement new analytical methods in their business environments to drive data-driven decisions and generate new business insights, in both cloud and on-premises systems.
Michal is a member of the Global Black Belt team, CEE Advanced Analytics and Big Data TSP, at Microsoft.
Registration:
@Meetup.com group's event here & @Eventbrite registration here (if you use both, your seat is guaranteed). You can also find our event @Facebook here.
[Disclaimer: If you use both (Meetup.com & Eventbrite) or at least one of them, your seat is guaranteed; if you just mark "going" @ this Facebook event, we can't guarantee your seat].
Language of the event: R & Slovak
------------------------------------
R <- Slovakia [R enthusiasts and users, data scientists and statisticians of all levels from Slovakia]
------------------------------------
This meetup group is for Data Scientists, Statisticians, Economists and Data Enthusiasts using R for data analysis and data visualization. The goals are to provide R enthusiasts a place to share ideas and learn from each other about how best to apply the language and tools to ever-evolving challenges in the vast realm of data management, processing, analytics, and visualization.
--
PyData is a group for users and developers of data analysis tools to share ideas and learn from each other. We gather to discuss how best to apply Python tools, as well as those using R and Julia, to meet the evolving challenges in data management, processing, analytics, and visualization. PyData groups, events, and conferences aim to provide a venue for users across all the various domains of data analysis to share their experiences and their techniques. PyData is organized by NumFOCUS.org, a 501(c)3 non-profit in the United States.
Real World Development: Peeling The Onion – Migrating A Monolithic Applicatio... - Amazon Web Services
The document discusses migrating a monolithic application to microservices on AWS. It describes using AWS CodeStar to create a project for the monolithic application and automate its deployment. AWS CodeStar provides tools for development, project management and creating a continuous delivery pipeline for deploying updates. The document also discusses different strategies for migrating applications to AWS, including rehosting the monolithic application on AWS Elastic Beanstalk initially before refactoring it into microservices.
Predictive Analysis using Microsoft SQL Server R Services - Fisnik Doko
R is rapidly becoming the leading language in Data Science and statistics.
This session will show how Microsoft SQL Server can help meet an increasingly “predictive” world by supporting the R language inside the database.
Demonstration using R and SQL Server Services in rental industry.
AWS re:Invent 2016: Workshop: Migrating Microsoft Applications to AWS (ENT216) - Amazon Web Services
In this workshop, we will explore the different approaches to migrating Microsoft applications to AWS. We’ll walk through the concerns and considerations to take into account while planning a migration, and learn how to develop and implement a migration plan to move applications from on-premises (or traditional hosting) to AWS. This session will use a case study format to dive deep into the details of how to successfully plan an application migration. To keep it real, teams will work through planning a SharePoint migration that integrates in with an existing Active Directory.
DevOps is focused on Agile development and is in great demand.
GCP supports DevOps in a manner similar to AWS.
There are differences between the API Gateway offerings (CLI support and OpenAPI support).
GCP uses an NGINX proxy with Cloud Endpoints.
The Fastest Way to Redis on Pivotal Cloud Foundry - VMware Tanzu
What do developers choose when they need a fast performing datastore with a flexible data model? Hands-down, they choose Redis.
But, waiting for a Redis instance to be set up is not a favorite activity for many developers. This is why on-demand services for Redis have become popular. Developers can start building their applications with Redis right away. There is no fiddling around with installing, configuring, and operating the service.
Redis for Pivotal Cloud Foundry offers dedicated and pre-provisioned service plans for Cloud Foundry developers that work in any cloud. These plans are tailored for typical patterns such as application caching and providing an in-memory datastore. These cover the most common requirements for developers creating net new applications or who are replatforming existing Redis applications.
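The application-caching pattern these plans target is usually cache-aside. Here is a minimal Python sketch, with a dict standing in for Redis (a real client such as redis-py would issue GET and SETEX commands instead):

```python
# Cache-aside sketch: check the cache first, fall back to the slow source
# on a miss, and populate the cache with a time-to-live.

import time

cache = {}  # key -> (value, expiry timestamp); stand-in for Redis

def slow_lookup(user_id):
    """Pretend database query - the expensive path we want to avoid."""
    return {"id": user_id, "plan": "premium"}

def get_user(user_id, ttl=30.0):
    key = f"user:{user_id}"
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]  # cache hit: served from memory
    value = slow_lookup(user_id)  # cache miss: go to the source
    cache[key] = (value, time.time() + ttl)
    return value

get_user(7)         # miss - populates the cache
user = get_user(7)  # hit - served from memory
print(user["plan"])
```

The TTL bounds staleness; choosing it is the main design trade-off between freshness and load on the backing store.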
We'd like to invite you to a webinar discussing different ways to use Redis in cloud-native applications. We'll cover:
- Use cases and requirements for developers
- Alternative ways to access and manage Redis in the cloud
- Features and roadmap of Redis for Pivotal Cloud Foundry
- Quick demo
Presenters: Greg Chase, Director of Products, Pivotal and Craig Olrich, Platform Architect, Pivotal
How to build "AutoScale and AutoHeal" systems using DevOps practices and modern technologies.
A complete build pipeline and the process of architecting a nearly unbreakable system were part of the presentation.
These slides were presented at 2018 DevOps conference in Singapore. http://claridenglobal.com/conference/devops-sg-2018/
This document discusses RAD Server, a back-end platform from Embarcadero Technologies for building multi-tier applications with Delphi and C++Builder. RAD Server provides automated REST/JSON API publishing of server-side Delphi and C++ code. It also includes integration middleware, built-in application services, and tools for managing APIs, users and analytics. RAD Server allows developers to quickly develop and deploy modern multi-tier applications with Delphi and C++. Pricing options are provided on a per user or unlimited user basis.
Microsoft Data Platform Airlift 2017: Machine Learning with SQL S... - Rui Quintino
The document discusses machine learning with SQL Server 2016 and R Services. It provides an overview of machine learning, R programming language, and the challenges of using R with SQL databases prior to SQL Server 2016. SQL Server 2016 introduces R Services, which allows running R code directly in the database for high performance, scalable machine learning. R Services integrates R with SQL Server through in-database deployment and parallel processing capabilities. This eliminates data movement and scaling issues while leveraging existing R and SQL skills.
DevOps, Continuous Integration & Deployment on AWS discusses practices for software development on AWS, including DevOps, continuous integration, continuous delivery, and continuous deployment. It provides an overview of AWS services that can be used at different stages of the software development lifecycle, such as CodeCommit for source control, CodePipeline for release automation, and CodeDeploy for deployment. National Novel Writing Month (NaNoWriMo) maintains its websites and services on AWS to support its annual writing challenge. It migrated to AWS to improve uptime and scalability. Its future goals include porting older sites to Rails, using Amazon SES for email, load balancing with ELB, implementing auto scaling, and using services like CodeDeploy and SNS.
DevOps, Continuous Integration and Deployment on AWS: Putting Money Back into... - Amazon Web Services
Organizations around the globe are leveraging the cloud to accomplish world-changing missions. This session will address how AWS can help organizations put more money toward their mission and scale outreach and operations to achieve more with less. Hear some of AWS’s most advanced customers on how their organizations handle DevOps, continuous integration and deployment. Learn how these practices allow them to rapidly develop, iterate, test and deploy highly-scalable web applications and core operational systems on AWS. The discussion will focus on best practices, lessons learned, and the specific technologies and services they use.
Windows Server 2016 is a cloud-ready operating system that supports your company's current workloads while introducing new technologies that make the move to the cloud smooth when the time is right. What are the key innovations and how do they help businesses - you will find the answers in this presentation.
This document provides an overview of advanced analytics using R and SQL. It discusses how R is used widely for analytics and is growing in popularity. Microsoft R Open and Microsoft R Server are introduced as tools for scalable enterprise analytics that integrate with SQL Server. Key capabilities covered include running R scripts directly in the database using SQL Server 2016 extensions, calling stored procedures from applications that execute R code, and bringing compute to data with in-database analytics for performance and scale. Demos and sample programs are referenced to illustrate capabilities of Open Source R and Microsoft R tools.
Azure Active Directory - Secure and Govern - Cheah Eng Soon
Azure Active Directory helps secure and govern authentication with features like conditional access and privileged identity management. It allows organizations to mitigate admin risk, govern identities, and set terms of use policies for authentication and access across cloud and on-premises environments.
Zero Trust is a security concept that requires strict identity verification for anyone or anything trying to access applications, data, and infrastructure inside or outside the network. It assumes there is no implicit trust granted to assets and users inside the network, and that verification is required for every access. The goal of Zero Trust is to minimize risk from both external and internal threats by preventing lateral movement and only allowing access based on least-privilege user roles and asset usage.
Microsoft Endpoint Manager provides comprehensive device management capabilities for on-premises environments. It allows IT administrators to deploy, update, protect and monitor Windows, macOS, Linux and IoT devices from a single console. Endpoint Manager combines the capabilities of Configuration Manager and Intune to help businesses securely manage all types of devices across locations.
Microsoft Threat Protection Automated Incident Response - Cheah Eng Soon
Microsoft Defender provides automated threat protection, including zero-hour auto purge features, to respond to incidents. It also has automated incident response capabilities for user-reported phishing attacks and URL verdict changes that help address threats.
The document discusses Azure penetration testing. It provides an agenda that covers an overview of common Azure services attacked, tools used for testing, and guidelines. It describes how Microsoft's blue and red teams work together on testing. Policies prohibit attacks on other customers or social engineering. Encouraged tests include using trial accounts and informing Microsoft of any vulnerabilities found. Steps outlined include identifying attack surfaces, data collection, vulnerability scanning, and penetration testing public-facing Azure services using tools like MicroBurst. Securing databases and using encryption are also addressed. A demo of vulnerability identification is promised.
You'll understand how hackers can attack resources hosted in Azure and how to protect Azure infrastructure by identifying vulnerabilities, along with extending your pentesting tools and capabilities.
Microsoft Threat Protection Automated Incident Response Demo - Cheah Eng Soon
A user reported a phishing attack in their Office 365 organization. The Office 365 Threat Protection service investigated the report and found a malicious URL distributing malware. The URL was blocked for all users in the organization to prevent further infection from this phishing attempt.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
This document outlines demo scenarios for Microsoft Cloud App Security including discovering cloud apps used by an organization, protecting information from connected apps, detecting anomalous user behavior and threats across applications, and automating alert management with Power Automate. The scenarios cover exploring snapshot and continuous reports of discovered apps and risk scores, investigating connected apps and activity logs, detecting anonymous access, and integrating Microsoft Cloud App Security with Microsoft Threat Protection and Power Automate.
This document summarizes three Microsoft cloud security products: Azure Security Center, Azure Defender, and Microsoft Cloud App Security. Azure Security Center strengthens multi-cloud security posture through dashboards, connectors, secure scores, recommendations, and inventory. Azure Defender protects cloud workloads through vulnerability assessment and security for SQL, storage, and Kubernetes. Microsoft Cloud App Security discovers cloud apps, protects access to connected apps, and detects anomalous user behavior and threats.
Azure Active Directory - External Identities Demo - Cheah Eng Soon
The document discusses configuring external identities in Azure Active Directory. It mentions partner authentication with Azure AD and consumer identity providers. It also discusses verifying identities with IDology and lists several organization names, addresses, and contact emails.
Azure WAF is a cloud-native web application firewall service that provides powerful protection for web apps with simple deployment, low maintenance costs, and automatic updates. It acts as a content delivery network and can defend against common attacks like command execution, SQL injection, cross-site scripting, and more, as demonstrated in a presentation where custom rules were set up to create an Azure WAF.
Azure Weekend 2020 Build Malaysia Bus Uncle Chatbot - Cheah Eng Soon
Thank you for the informative presentation on conversational AI and natural language processing. I learned about key concepts like QnA Maker, Azure Bot Service, and various NLP capabilities in Azure Cognitive Services like text analytics, speech, and translation. The demo was very helpful to see these services in action.
20 common security vulnerabilities and misconfigurations in Azure - Cheah Eng Soon
This document outlines 20 common security vulnerabilities and misconfigurations in Microsoft Azure. It discusses issues such as storage accounts being publicly accessible, lack of multi-factor authentication, insecure guest user settings, and features like Azure Security Center and Network Watcher being disabled by default. The document is intended to educate users on important security best practices for securing resources and configurations in Azure.
Integrate Microsoft Graph with Azure Bot Services - Cheah Eng Soon
The document discusses 4 steps to integrate Microsoft Graph with Azure Bot Services by registering an application in Azure AD, making queries to Microsoft Graph to retrieve data like documents from SharePoint, implementing code snippets to retrieve the data, and extending the bot to Microsoft Teams. It provides an overview of conversational AI and Azure Bot Services and demonstrates using Microsoft Graph Explorer.
This document provides an overview of Azure Sentinel and how it can be used with Office 365. It discusses the challenges of security operations and how Azure Sentinel uses AI and automation to help. It then summarizes Azure Sentinel's key capabilities including visibility, analytics, hunting, incidents, and automation. It also includes demonstrations of these capabilities and steps to set up Azure Sentinel with an Office 365 connection.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Essentials of Automations: Exploring Attributes & Automation Parameters - Safe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor Ivaniuk - Fwdays
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022. We'll see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the experience in Ukraine.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... - Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
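For readers unfamiliar with it, the sumcheck protocol that LatticeFold leans on is a classical interactive proof for a claim of the form

```latex
H \;=\; \sum_{x_1 \in \{0,1\}} \sum_{x_2 \in \{0,1\}} \cdots \sum_{x_n \in \{0,1\}} g(x_1, x_2, \ldots, x_n)
```

where $g$ is a low-degree $n$-variate polynomial. Over $n$ rounds the prover sends a univariate polynomial, the verifier checks that its values at $0$ and $1$ sum to the previous round's claim, and the protocol ends with a single evaluation of $g$ at a random point. The abstract's contribution is using this machinery to keep extracted witnesses low norm across folding rounds.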
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectors - DianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We'll discuss and demo the benefits of UiPath Apps and connectors, including:
- Creating a compelling user experience for any software, without the limitations of APIs
- Accelerating the app creation process, saving time and effort
- Enjoying high-performance CRUD (create, read, update, delete) operations for seamless data management
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
6. • Turn R analytics into web services in one line of code
• Swagger-based REST APIs, easy to consume from any programming language, including R!
• Deploy the web service server to any platform: Windows, SQL Server, Linux/Hadoop
• On-premises or in the cloud
• Fast scoring, real time and batch
• Scale to a grid for powerful computing with load balancing
• Diagnostic and capacity evaluation tools
• Enterprise authentication: AD/LDAP or Azure AD
• Secure connection: HTTPS with SSL/TLS 1.2
• Enterprise-grade high availability
7. • Turn R analytics into web services in one line of code
• Swagger-based REST APIs, easy to consume from any programming language, including R!
• Deploy the web service server to any platform: Windows, SQL Server, Linux/Hadoop
• On-premises or in the cloud
• Fast scoring, real time and batch
• Scale to a grid for powerful computing with load balancing
• Diagnostic and capacity evaluation tools
• Enterprise authentication: LDAP/AD/Azure AD
• Secure connection: HTTPS with SSL/TLS 1.2
• Enterprise-grade high availability
8. [Architecture diagram] Easy setup, easy deployment, easy integration, easy consumption:
▪ In-cloud or on-prem
▪ Adding nodes to scale
▪ High availability & load balancing
▪ Remote execution server
The data scientist publishes from Microsoft R Client (mrsdeploy package) via publishService to a Microsoft R Server configured for operationalizing R analytics; developers and other data scientists then consume the resulting services.
10. Function       Description
publishService     Publish a predictive function as a web service
deleteService      Delete a web service
getService         Get a web service
listServices       List the different published web services
serviceOption      Retrieve, set, and list the different service options
updateService      Update a web service
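The functions above can be sketched end to end. This is an illustrative example, not the deck's own demo: the server URL, credentials, service name, and the mtcars-based model are all hypothetical, and running it requires a live Microsoft R Server configured for operationalization.

```r
# Sketch only: hypothetical server URL and credentials; requires the mrsdeploy
# package (Microsoft R Client) and an operationalized Microsoft R Server.
library(mrsdeploy)
remoteLogin("http://localhost:12800",
            username = "admin", password = "<password>",
            session = FALSE)   # no remote session needed just to publish

# An illustrative model and scoring function to publish
model <- glm(am ~ hp + wt, data = mtcars, family = binomial)
manualTransmission <- function(hp, wt) {
  predict(model, data.frame(hp = hp, wt = wt), type = "response")
}

# One line turns the function (and its model) into a web service
api <- publishService("mtService",
                      code = manualTransmission, model = model,
                      inputs = list(hp = "numeric", wt = "numeric"),
                      outputs = list(answer = "numeric"),
                      v = "1.0.0")

listServices()                           # browse what is published
deleteService("mtService", v = "1.0.0")  # remove a version
remoteLogout()
```

The same verbs cover the full service lifecycle: publish, list, get, update, delete.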
11. Data Scientist: generate Swagger docs for web services.
# Run the following code in R
swagger <- api$swagger()
cat(swagger, file = "swagger.json", append = FALSE)
Developer: run Swagger tools to generate client code. Popular Swagger tools: AutoRest or Swagger Codegen.
AutoRest.exe -CodeGenerator CSharp -Modeler Swagger -Input swagger.json -Namespace Mynamespace
Developer: write a little code to consume the service.
12. Share / reuse R code and functions
• Not just models: a data scientist can share any functional code as a service.
• Other data scientists can browse the repository to reuse those functions.
Enable model management capabilities
• A predictive web service = "model" + "prediction script"
• R Server hosts all of these services → a central repository of models
• Each service has a version tag → model version control
• All versions stay active → model rollback (to any version)
• A service can be accessed by any authorized user → model reuse, plus model validation and monitoring by a QA team
After a service is published, I can immediately test that it works as expected.
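The version-tag workflow above can be sketched as follows. All names here are illustrative (a hypothetical "mtService" backed by a toy mtcars model), and the code assumes an mrsdeploy session already authenticated via remoteLogin against an operationalized R Server.

```r
# Sketch only: hypothetical service name and model; assumes an authenticated
# mrsdeploy session (remoteLogin) against an operationalized R Server.
library(mrsdeploy)
model <- glm(am ~ hp + wt, data = mtcars, family = binomial)
manualTransmission <- function(hp, wt) {
  predict(model, data.frame(hp = hp, wt = wt), type = "response")
}

# Publish version 1.0.0; the returned api object lets me test immediately
api_v1 <- publishService("mtService", code = manualTransmission, model = model,
                         inputs = list(hp = "numeric", wt = "numeric"),
                         outputs = list(answer = "numeric"), v = "1.0.0")
result <- api_v1$manualTransmission(120, 2.8)
print(result$output("answer"))

# Publish a new version under the same name; all versions stay active,
# so consumers can pin to, or roll back to, any of them
api_v2 <- publishService("mtService", code = manualTransmission, model = model,
                         inputs = list(hp = "numeric", wt = "numeric"),
                         outputs = list(answer = "numeric"), v = "2.0.0")
api_old <- getService("mtService", v = "1.0.0")  # roll back to any version
```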
13. ▪ Built-in remote execution functions in R Client/R Server
▪ Execute .R scripts or interactive R commands
▪ Results come back to the local session
▪ Generate a diff report to reconcile local and remote environments
▪ Generate working snapshots for resume and reuse
▪ IDE agnostic
R Client (mrsdeploy package) talks to an R Server configured to remotely execute R scripts (supports Windows Server, Linux Server, and Hadoop): log in to the remote server, execute R scripts, snapshot the remote environment, generate diff reports, reconcile environments, log out.
14. Snapshot functions
createSnapshot      Create a snapshot of the remote session (workspace and working directory)
loadSnapshot        Load a snapshot from the server into the remote session (workspace and working directory)
listSnapshots       Get a list of snapshots for the current user
downloadSnapshot    Download a snapshot from the server
deleteSnapshot      Delete a snapshot from the server
Remote objects management
listRemoteFiles     Get a list of files in the working directory of the remote session
deleteRemoteFile    Delete a file from the working directory of the remote R session
getRemoteFile       Copy a file from the working directory of the remote R session
putLocalFile        Copy a file from the local machine to the working directory of the remote R session
getRemoteObject     Get an object from the remote R session
putLocalObject      Put an object from the local R session and load it into the remote R session
getRemoteWorkspace  Take all objects from the remote R session and load them into the local R session
putLocalWorkspace   Take all objects from the local R session and load them into the remote R session
Remote connection
remoteLogin         Log in to the R Server with AD or admin credentials
remoteLoginAAD      Log in to the R Server using Azure AD
remoteLogout        Log out of the remote session on the DeployR server
Remote execution
remoteExecute       Remotely execute R code or an R script
remoteScript        Wrapper function for remote script execution
diffLocalRemote     Generate a 'diff' report between local and remote
pause               Pause the remote connection and return to the local prompt
resume              Return the user to the 'REMOTE >' command prompt
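A typical remote-execution session using these functions might look like the sketch below. The host, credentials, file name, and snapshot label are all hypothetical, and the code requires a live R Server configured for remote execution.

```r
# Sketch of the remote-execution workflow (hypothetical host and credentials).
library(mrsdeploy)
remoteLogin("http://rserver.example.com:12800",
            username = "admin", password = "<password>")
# remoteLogin drops you at the 'REMOTE >' prompt, where code runs server-side.
pause()                                     # back to the local prompt
remoteExecute("x <- rnorm(1000); mean(x)")  # run code remotely; results return
putLocalFile("train.csv")                   # copy a local file to the remote session
id <- createSnapshot("model training session")  # save workspace + working dir
resume()                                    # return to the 'REMOTE >' prompt
# ...later, from the local prompt...
pause()
loadSnapshot(id)                            # pick up exactly where you left off
remoteLogout()
```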
15. • Turn R analytics into web services in one line of code
• Swagger-based REST APIs, easy to consume from any programming language, including R!
• Deploy the web service server to any platform: Windows, SQL Server, Linux/Hadoop
• On-premises or in the cloud
• Fast scoring, real time and batch
• Scale to a grid for powerful computing with load balancing
• Diagnostic and capacity evaluation tools
• Enterprise authentication: LDAP/AD/Azure AD
• Secure connection: HTTPS with SSL/TLS 1.2
• Enterprise-grade high availability
19. Product            | Modeling platforms                     | Operationalization platforms
R Server for Windows   | Windows Server 2012 - 2016             | Same as modeling
R Server for Linux     | Red Hat Enterprise Linux 6.x and 7.x   | RHEL 7.x
R Server for Linux     | SUSE Linux Enterprise Server (SLES) 11 | Will be supported in a future release
R Server for Linux     | Ubuntu 14.04 LTS, 16.04 LTS            | Same as modeling
R Server for Linux     | CentOS 6.x and 7.x                     | CentOS 7.x
R Server for Hadoop    | RHEL 6.x and 7.x, SUSE SLES 11         | RHEL 7.x
20. • Turn R analytics into web services in one line of code
• Swagger-based REST APIs, easy to consume from any programming language, including R!
• Deploy the web service server to any platform: Windows, SQL Server, Linux/Hadoop
• On-premises or in the cloud
• Fast scoring, real time and batch
• Scale to a grid for powerful computing with load balancing
• Diagnostic and capacity evaluation tools
• Enterprise authentication: LDAP/AD/Azure AD
• Secure connection: HTTPS with SSL/TLS 1.2
• Enterprise-grade high availability
21. • Easily scale up a single server to a grid to handle more concurrent requests
• Load balancing across compute nodes
• A shared pool of warmed-up R shells to improve scoring performance
22. • Health-check node configuration
• Get system status
• Trace R code execution
• Trace service execution
• Evaluate grid capacity
• Simulate traffic per service
• Configure by number of concurrent threads or latency thresholds
23. • Turn R analytics into web services in one line of code
• Swagger-based REST APIs, easy to consume from any programming language, including R!
• Deploy the web service server to any platform: Windows, SQL Server, Linux/Hadoop
• On-premises or in the cloud
• Fast scoring, real time and batch
• Scale to a grid for powerful computing with load balancing
• Diagnostic and capacity evaluation tools
• Enterprise authentication: LDAP/AD/Azure AD
• Secure connection: HTTPS with SSL/TLS 1.2
• Enterprise-grade high availability
24. • Seamless integration with authentication solutions: LDAP/AD/Azure AD
• Secure connection: HTTPS encrypted with TLS 1.2/SSL
• Compliance with the Microsoft Security Development Lifecycle
25. • Server-level HA: introduce multiple web nodes for active-active backup and recovery via a load balancer
• Data store HA: leverage the HA capabilities of enterprise-grade databases (SQL Server, PostgreSQL)
26. Feature | Microsoft R Server 8.0.5 (DeployR) | Microsoft R Server 9.0 (Operationalization)
Installation | A separate installer from R Server | MRS includes all deployment capabilities; greatly improved installation and configuration experience
Deployment (turn R analytics into web services) | Involves multiple steps; by default, uploading R analytics to the repository DB is the first step | Publish your R analytics as web services directly from your R console with one line of code
Consumption of web services | Integrate with apps via a client library and the RBroker framework | Easy to integrate with apps using Swagger-based REST APIs; enables many exciting scenarios by consuming services in R!
Remote execution | Customers had to use DeployR APIs to build their own remote execution | Built-in remote execution functions in the 'mrsdeploy' package in R Client/R Server
Architecture | Apache Tomcat | ASP.NET Core: cross-platform, better support, endorsed by Microsoft
Authentication | Basic/AD/LDAP/PAM authentication | AD/LDAP/Azure AD authentication
High availability | No active-active recovery, short of cloning another grid | Active-active recovery via multiple web nodes
Web UI | Login / admin console / repository manager / API explorer / event console | Coming soon in future releases; totally new design, easier to use
APIs | ~100 DeployR APIs | Simplified APIs: ~40 raw APIs, not compatible with 8.0.x