Samantha Chan is the lead of the Streams Toolkit team. Samantha's presentation will cover the toolkits that are part of the InfoSphere Streams Open Source Project on GitHub at: https://github.com/IBMStreams. It will also cover how users can participate in and contribute to these open source projects.
View related presentations and recordings from the Streams V4.0 Developers Conference at:
https://developer.ibm.com/answers/questions/183353/ibm-infosphere-streams-40-developers-conference-on.html?smartspace=streamsdev
Github Projects Overview and IBM Streams V4.1 (lisanl)
Samantha Chan is the architect of the IBM Streams community. In her presentation, Samantha describes the Streams content available on GitHub, as well as how to get started with the key new features in IBM Streams V4.1.
Ankit Pasricha is the team lead of the IBM Streams Toolkit development team. In his presentation, Ankit provides an overview of all the Streams Toolkit updates available in the IBM Streams V4.1 product, as well as the updates made to the open source Toolkits on GitHub.
Mike Spicer is the lead architect for the IBM Streams team. In his presentation, Mike provides an overview of the many key new features available in IBM Streams V4.1. Simpler development, simpler management, and Spark integration are a few of the capabilities included in IBM Streams V4.1.
Develop and deploy Streaming Analytics applications visually, with bindings for a streaming engine, multiple sources/sinks, a rich set of streaming operators, and operational lifecycle management. Streaming Analytics Manager makes it easy to develop and monitor streaming applications, and also provides analytics of the data that is being processed by the streaming application.
Introduction to the Spark MLLib Toolkit in IBM Streams V4.1 (lisanl)
Ankit Pasricha is the team lead of the IBM Streams Toolkit development team. In his presentation, Ankit will introduce the new Spark MLLib Toolkit that is available in IBM Streams V4.1. This toolkit combines the power of Spark MLLib and the real-time streaming capabilities of Streams.
Samantha Chan is a community architect for IBM Streams. In her presentation, Samantha covers the new and updated toolkits available in Streams GitHub projects, as well as the enhancements to toolkits that ship with IBM Streams V4.2.
IBM Interconnect 2016. This session outlines the offerings and initiatives that IBM provides around cloud and "as-a-service" messaging. We explain their roles and how they work together to deliver agility to business, while retaining the mission-critical reliability that enterprises have come to expect of IBM messaging. Topics include the work we are doing in IBM MQ Enterprise messaging to facilitate its deployment in public and private IaaS clouds, the use of MQ in Docker and how we are making it easier to build self-service deployments on-premise, the new MQ Light API and how it can be exploited from IBM Bluemix and "fast-speed of IT" systems of engagement, the MQ Light Service for IBM Bluemix and the work we are doing with the Apache Kafka project.
Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit... (Pooyan Jamshidi)
A look at the searches related to the term “microservices” on Google Trends revealed that the top searches are now technology driven. This implies that the time of general search terms such as “What is microservices?” has now long passed. Not only are software vendors (for example, IBM and Microsoft) using microservices and DevOps practices, but also content providers (for example, Netflix and the BBC) have adopted and are using them.
I report on experiences and lessons learned during the incremental migration and architectural refactoring of a commercial mobile backend-as-a-service to a microservices architecture. I explain how we adopted DevOps and how this facilitated a smooth migration toward a microservices architecture.
An Overview of IBM Streaming Analytics for Bluemix (lisanl)
Mike Branson is the Cloud Architect for the IBM Streams team. In his presentation, Mike will review the capabilities of IBM Streaming Analytics for Bluemix.
MiNiFi is a recently started sub-project of Apache NiFi: a complementary data collection approach that supplements the core tenets of NiFi in dataflow management, focusing on the collection of data at the source of its creation. Simply put, MiNiFi agents take the guiding principles of NiFi and push them to the edge in a purpose-built design-and-deploy manner. This talk will focus on MiNiFi's features, go over recent developments and prospective plans, and give a live demo of MiNiFi.
The config.yml is available here: https://gist.github.com/JPercivall/f337b8abdc9019cab5ff06cb7f6ff09a
[QCon London 2020] The Future of Cloud Native API Gateways - Richard Li (Ambassador Labs)
The introduction of microservices, Kubernetes, and cloud technology has provided many benefits for developers. However, the age-old problem of getting user traffic routed correctly to the API of your backend applications can still be an issue, and may be complicated with the adoption of cloud native approaches: applications are now composed of multiple (micro)services that are built and released by independent teams; the underlying infrastructure is dynamically changing; services support multiple protocols, from HTTP/JSON to WebSockets and gRPC, and more; and many API endpoints require custom configuration of cross-cutting concerns, such as authn/z, rate limiting, and retry policies.
A cloud native API gateway is on the critical path of all requests, and also on the critical path for the workflow of any developer that is releasing functionality. Join this session to learn about the underlying technology and the required changes in engineering workflows. Key takeaways will include:
A brief overview of the evolution of API gateways over the past ten years, and how the original problems being solved have shifted in relation to cloud native technologies and workflow
Two important challenges when using an API gateway within Kubernetes: scaling the developer workflow; and supporting multiple architecture styles and protocols
Strategies for exposing Kubernetes services and APIs at the edge of your system
Insight into the (potential) future of cloud native API gateways
https://qconlondon.com/london2020/presentation/future-cloud-native-api-gateways
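As a toy illustration of the cross-cutting concerns mentioned above, here is a minimal Python sketch of prefix-based edge routing with a per-route token-bucket rate limit; the routes, upstream names, and limiter below are invented for this example and do not reflect any particular gateway's implementation:

```python
import time

class Route:
    """One edge route: a path prefix mapped to an upstream service,
    with a simple token-bucket rate limit (a hypothetical example of
    per-route configuration of cross-cutting concerns)."""
    def __init__(self, prefix, upstream, rate_per_sec):
        self.prefix = prefix
        self.upstream = upstream
        self.rate = rate_per_sec
        self.tokens = float(rate_per_sec)
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens proportionally to elapsed time, capped at the rate.
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def route_request(routes, path):
    """Pick the longest matching prefix and enforce its rate limit."""
    match = max((r for r in routes if path.startswith(r.prefix)),
                key=lambda r: len(r.prefix), default=None)
    if match is None:
        return "404"
    if not match.allow():
        return "429"
    return match.upstream

# Hypothetical routing table for two backend services.
routes = [Route("/api/users", "users-svc:8080", rate_per_sec=100),
          Route("/api", "legacy-svc:8080", rate_per_sec=10)]
print(route_request(routes, "/api/users/42"))  # users-svc:8080
print(route_request(routes, "/health"))        # 404
```

A real gateway would, of course, also terminate TLS, handle authn/z, retries, and multiple protocols; the point here is only how routing and rate limiting compose per route.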
IBM Interconnect 2016
To address a diverse set of needs coming from many quadrants (IoT, shadow IT, SaaS adoption, etc.), IBM recognizes that the integration market must take a revolutionary step to get ahead of the needs of our customers. Enter the "Hybrid Integration Platform," IBM's vision to evolve into the next generation of highly productive integration offerings. In this session, we describe how IBM's Hybrid Integration Platform draws together the capabilities of its constituent parts (IBM AppConnect, Cast Iron, IBM Integration Bus, API Management, and Bluemix) into a cohesive set of integration capabilities to enable digital transformation for the enterprise. This is a technical session focusing on architecture and technical details.
Big Data Day LA 2016/ Big Data Track - Building scalable enterprise data flow... (Data Con LA)
Connecting enterprise systems has always been a tough task. Modern IoT applications have exacerbated the issue with the need to integrate legacy systems with novel high-velocity data streams. Various patterns, such as messaging and REST, have been proposed, but they necessitate rearchitecting the integration layer, which is extremely arduous. In this talk we will show you how to use Apache NiFi to solve your data integration, movement, and ingestion problems. Next, we will examine how Apache NiFi can be used to construct durable, scalable, and responsive IoT apps in conjunction with other stream processing and messaging frameworks.
Strangling the Monolith With a Data-Driven Approach: A Case Study (VMware Tanzu)
SpringOne Platform 2017
David Julia, Pivotal; Simon P Duffy, Pivotal
"The scene: A complex procedure cost estimation system with hundreds of unknown business rules hidden in a monolithic application. A rewrite is started. If our system gives an incorrect result, the company is financially on the hook. A QA team demanding month-long feature freezes for testing. A looming deadline to cut over to the new system with severe financial penalties for missing the date. Tension is high. The business is nervous, and the team isn’t confident that it can replace the system without introducing costly bugs. Does that powder-keg of a project sound familiar?
Enter Project X: At a pivotal moment in the project, the team changed their approach. They’d implement a unique, data-driven variation of the strangler pattern. They’d run their system in production alongside the legacy system, while collecting data on their system’s accuracy, falling back to the legacy system when answers differed. True to Lean Software development, they would amplify learning and use data to drive their product decisions.
The end result: An outstanding success. Happy stakeholders, business buy-in to release at will, a vastly reduced QA budget, reusable microservices, and one heck of a Concourse continuous delivery pipeline. We achieved all of this, while providing a system that was provably better than the legacy subsystem we replaced.
This talk will appeal to engineers, managers, and product managers.
Join us for a 30 minute session where we review this case study and learn how you too can:
Build statistically significant confidence in your system with data-driven testing
Strangle the Monolith safely
Take a Lean approach to legacy rewrites
Validate your system’s accuracy when you don’t know the legacy business rules
Leverage Continuous Delivery in a Legacy Environment
Get Business and QA buy-in for Continuous Delivery
Articulate the business value of data-driven product decisions"
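The data-driven fallback at the heart of the pattern described above can be sketched in a few lines of Python; `new_estimate` and `legacy_estimate` are hypothetical stand-ins for the rewritten and legacy estimators, not the actual systems from the case study:

```python
def strangler_call(new_fn, legacy_fn, stats, inputs):
    """Run the rewritten system alongside the legacy one: compare
    their answers, record accuracy data, and fall back to the legacy
    answer whenever the two disagree."""
    new_result = new_fn(inputs)
    legacy_result = legacy_fn(inputs)
    stats["total"] += 1
    if new_result == legacy_result:
        stats["matches"] += 1
        return new_result
    stats["mismatches"] += 1
    return legacy_result  # fall back to the legacy system

# Hypothetical stand-ins for the two cost estimators.
def legacy_estimate(x):
    return x * 2

def new_estimate(x):
    return x * 2 if x < 90 else x * 2 + 1  # an imagined bug above x=90

stats = {"total": 0, "matches": 0, "mismatches": 0}
results = [strangler_call(new_estimate, legacy_estimate, stats, x)
           for x in range(100)]
accuracy = stats["matches"] / stats["total"]
print(f"accuracy: {accuracy:.0%}")  # prints "accuracy: 90%"
```

The accumulated accuracy data is what lets the business decide, with statistical confidence rather than gut feeling, when the new system is safe to cut over to.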
Stream processing has become the de facto standard for building real-time ETL and stream analytics applications. We see batch workloads move into stream processing to act on the data and derive insights faster. With the explosion of data with "perishable insights", such as IoT and machine-generated data, stream processing plus predictive analytics is driving tremendous business value. This is evidenced by the explosion of stream processing frameworks, from the proven and evolving Apache Storm to newer frameworks such as Apache Flink, Apache Apex, and Spark Streaming.
Today, users have to choose among these frameworks and try to understand the benefits of each; on top of that, they have to learn new APIs and operationalize their applications. To create value faster, we are introducing a new open source tool, Streamline. It is a self-service framework that eases building streaming applications and deploying them across the frameworks/engines that users prefer, in a snap. It simplifies integration with machine learning models for scoring and classification of data for predictive analytics. It provides an elegant way to build analytics dashboards that derive business insights from the streaming data and allow business users to consume them easily.
In this talk, we will outline the fundamentals of real-time stream processing and demonstrate Streamline capabilities to show how it simplifies building real-time streaming analytics applications.
Speaker:
Priyank Shah, Staff Software Engineer, Hortonworks
Its Finally Here! Building Complex Streaming Analytics Apps in under 10 min w... (DataWorks Summit)
Imagine if you could build and deploy an end-to-end complex streaming analytics app on a streaming engine like Storm or Flink that did the following:
1. Joining Streams
2. Aggregations over Windows (Time or Count based)
3. Complex Event Processing
4. Pattern Matching
5. Model scoring.
Now imagine implementing and deploying this without writing a single line of code in under 10 mins.
Imagine no more; it is indeed here. In this talk, we will discuss an exciting open source project led by Hortonworks on building and deploying streaming applications using a drag and drop paradigm.
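As a rough illustration of capability 2 above, a count-based tumbling window aggregation can be sketched in plain Python; the `readings` stream and window size are made up for the example, and an engine like Storm or Flink expresses this declaratively rather than by hand:

```python
def tumbling_count_window(events, size, agg):
    """Aggregate a stream over tumbling count-based windows:
    every `size` consecutive events are grouped and reduced
    with the `agg` function. A trailing partial window is dropped."""
    window = []
    for event in events:
        window.append(event)
        if len(window) == size:
            yield agg(window)
            window = []

# Hypothetical sensor readings, summed three at a time.
readings = [4, 8, 15, 16, 23, 42]
print(list(tumbling_count_window(readings, 3, sum)))  # [27, 81]
```

Time-based windows work the same way, except the window boundary is determined by timestamps instead of the event count.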
Presentation from Future of Data Boston Meetup on Oct 24, 2017.
Streaming data is rich with insights, but those insights can be hard to surface because streaming applications are difficult to develop and deploy. During this presentation we will show how to build and deploy a complex streaming application in a few minutes using open source tools. First we will build an application using Streaming Analytics Manager and Schema Registry that ingests data into Apache Druid. Then we will use Apache Superset to build beautiful, informative dashboards.
Dairy management system project report..pdf (Kamal Acharya)
ASP.NET is the next version of Active Server Pages (ASP); it is a unified Web development platform that provides the services necessary for developers to build enterprise-class Web applications. While ASP.NET is largely syntax-compatible with ASP, it also provides a new programming model and infrastructure for more secure, scalable, and stable applications. ASP.NET is a compiled, .NET-based environment, so we can author applications in any .NET-compatible language, including Visual Basic .NET, C#, and JScript .NET. Additionally, the entire .NET Framework is available to any ASP.NET application. Developers can easily access the benefits of these technologies, which include the managed common language runtime (CLR), type safety, inheritance, and so on. ASP.NET has been designed to work seamlessly with WYSIWYG HTML editors and other programming tools, including Microsoft Visual Studio .NET. Not only does this make Web development easier, but it also provides all the benefits that these tools have to offer, including a GUI that developers can use to drop server controls onto a Web page, and fully integrated debugging support.
Schema Registry is a central metadata repository that allows users to collaboratively use schema definitions for stream processing.
Streaming Analytics Manager provides a framework to build streaming applications faster and more easily.
Administration APIs: REST and JMX for IBM InfoSphere Streams V4.0 (lisanl)
Janet Weber works as a software developer on the Streams Platform development team. Janet's presentation will dig deeper into the new platform enhancements in IBM InfoSphere Streams V4.0, including the administration APIs REST and JMX.
View related presentations and recordings from the Streams V4.0 Developers Conference at:
https://developer.ibm.com/answers/questions/183353/ibm-infosphere-streams-40-developers-conference-on.html?smartspace=streamsdev
Monitor your applications and get into a framework of proactive application fixing instead of reactive fixing. And with IBM, reduce your outages with the help of predictive insights.
Relocatable Application Bundles for IBM InfoSphere Streams V4.0 (lisanl)
Howard Nasgaard is a software developer in the Streams SPL development team. Howard's presentation discusses the concept of an Application Bundle. An Application Bundle is a single file that contains all the toolkit artifacts necessary to run an application. With IBM InfoSphere Streams V4.0, an application is easily and fully relocatable using an application bundle.
View related presentations and recordings from the Streams V4.0 Developers Conference at:
https://developer.ibm.com/answers/questions/183353/ibm-infosphere-streams-40-developers-conference-on.html?smartspace=streamsdev
apidays LIVE Paris - Data with a mission: a COVID-19 API case study by Matt M... (apidays)
apidays LIVE Paris - Responding to the New Normal with APIs for Business, People and Society
December 8, 9 & 10, 2020
Data with a mission: a COVID-19 API case study
Matt McLarty, Global Leader of API Strategy at MuleSoft
Sanjna Verma, Product Manager at Salesforce
apidays LIVE Australia 2020 - Data with a Mission by Matt McLarty (apidays)
apidays LIVE Australia 2020 - Building Business Ecosystems
Data with a Mission: A COVID-19 API Case Study
Matt McLarty, Global Leader, API Strategy & Sanjna Verma, Product Manager at MuleSoft
This talk, a case study in application deployment models, was given at IBM InterConnect 2017 in Las Vegas, NV on March 21, 2017 by Lin Sun & Phil Estes of IBM Cloud.
In this talk, Lin & Phil provided a background of IBM Bluemix compute offerings across Cloud Foundry, Containers + Kubernetes, and FaaS/serverless via OpenWhisk and then used a demo application to describe the tradeoffs between using the various deployment models and technology. The application is open source and available at https://github.com/estesp/flightassist
This presentation provides an overview of the new capabilities in IBM Streams V4.3. Topics include dynamic and elastic scaling, programming model, Streams runner for Apache Beam, operations and system management, and toolkit enhancements.
SPL Event-Time Processing in IBM Streams V4.3 (lisanl)
This presentation reviews the new SPL event-time processing capability that is available in IBM Streams V4.3. Topics include the use case, watermarks, language definitions, TimeInterval window, and more!
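The watermark idea behind event-time windows (sketched here in Python as a concept illustration, not as SPL) can be shown in a few lines; the stream, window size, and allowed lateness below are hypothetical:

```python
def watermark_windows(events, window_size, lateness):
    """Assign (timestamp, value) events to tumbling event-time
    windows and emit a window once the watermark, i.e. the maximum
    event time seen minus the allowed lateness, passes its end."""
    windows = {}              # window start -> list of values
    watermark = float("-inf")
    for ts, value in events:
        start = (ts // window_size) * window_size
        windows.setdefault(start, []).append(value)
        watermark = max(watermark, ts - lateness)
        # Emit every window whose end is at or before the watermark.
        for s in sorted(list(windows)):
            if s + window_size <= watermark:
                yield (s, windows.pop(s))

# Events arriving slightly out of order: (2, "c") comes after (3, "b").
stream = [(1, "a"), (3, "b"), (2, "c"), (7, "d"), (12, "e")]
for window_start, values in watermark_windows(stream, window_size=5, lateness=2):
    print(window_start, values)
```

Note that the out-of-order event `(2, "c")` still lands in the correct window `[0, 5)`, because that window is only emitted once the watermark guarantees no more in-time events for it can arrive.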
This presentation reviews the optional data type feature that is now available in IBM Streams V4.3. Topics include the new SPL type, new literal type, changes in product toolkits, and much more!
Dynamic and Elastic Scaling in IBM Streams V4.3 (lisanl)
This presentation reviews the new dynamic and elastic scaling capabilities added in IBM Streams V4.3. Topics include serverless workloads, job resource allocation mode, job scoped resources mode, improved scheduling, and more!
Streaming Analytics for Bluemix Enhancements (lisanl)
Mike Branson is the Cloud architect working for IBM Streams. In his presentation, Mike provides an overview of Streaming Analytics for Bluemix, as well as describes the recent enhancements that are available.
IBM Streams V4.2 Submission Time Fusion and Configuration (lisanl)
Brad Fawcett, Queenie Ma, and Mary Komor are developers with IBM Streams. In their presentation, they cover the new Submission Time Fusion and Configuration support available in IBM Streams V4.2.
Samantha Chan is a community architect working in IBM Streams. In her presentation, Samantha covers a few of the getting started resources available to new users of IBM Streams V4.2.
IBM ODM Rules Compiler support in IBM Streams V4.2 (lisanl)
Chris Recoskie and Ankit Pasricha are developers with IBM Streams. In their presentation, they will discuss the enhancements made to IBM ODM Rules support that is available in IBM Streams V4.2.
Non-Blocking Checkpointing for Consistent Regions in IBM Streams V4.2 (lisanl)
Fang Zheng is a developer with IBM Streams. In his presentation, Fang describes the enhancements related to consistent regions that are available in IBM Streams V4.2.
Dan Debrunner and Susan Cline are developers for IBM Streams. In their presentation, they will discuss Apache Edgent, IBM Watson IoT Platform and IBM Streams.
Mary Komor is the team leader of the IBM Streams Studio development team. In her presentation, Mary provides details on the Streams V4.1 integration with the IBM Governance Catalog (IGC).
IBM Streams V4.1 and Incremental Checkpointing (lisanl)
Fang Zheng is a member of the IBM Streams development team. In his presentation, Fang provides an introduction to the incremental checkpointing feature that is available in IBM Streams V4.1, including how it works and how to use it.
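The general idea behind incremental checkpointing (sketched here in Python with a hypothetical key-value operator state; this is an illustration of the concept, not Streams' actual mechanism) is to persist only what changed since the previous checkpoint:

```python
def incremental_checkpoint(state, last_checkpointed):
    """Return the entries that changed since the previous checkpoint,
    plus the keys that were deleted, instead of serializing the
    operator's entire state on every checkpoint."""
    delta = {k: v for k, v in state.items()
             if last_checkpointed.get(k) != v}
    removed = [k for k in last_checkpointed if k not in state]
    return delta, removed

# Hypothetical operator state across two checkpoints.
ckpt1 = {"count": 10, "max": 7}
state = {"count": 12, "max": 7, "min": 1}   # "count" changed, "min" added
delta, removed = incremental_checkpoint(state, ckpt1)
print(delta, removed)
```

Restoring then means replaying the base checkpoint and applying each delta in order; the win is that large, mostly-stable state no longer dominates checkpoint time.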
IBM Streams V4.1 REST API Support for Cross-Origin Resource Sharing (CORS) (lisanl)
Janet Weber is a member of the IBM Streams development team. In her presentation, Janet provides an overview of Cross-Origin Resource Sharing (CORS) and then describes how to make cross-origin requests to the IBM Streams V4.1 REST API.
IBM Streams V4.1 and User Authentication with Client Certificates (lisanl)
Scott Timmerman is a member of the IBM Streams development team. In his presentation, Scott provides an introduction to user authentication with client certificates, discusses public key infrastructure terms and concepts, and demonstrates how to configure Streams to authenticate using client certificates.
IBM Streams V4.1 and JAAS Login Module Support (lisanl)
Yip-Hing Ng is a senior software engineer with the IBM Streams development team. In this presentation, Yip covers the topics of IBM Streams V4.1 security enhancement overview, implementing a custom JAAS login module, and login module deployment and configuration.
IBM Streams V4.1 Integration with IBM Platform Symphony (lisanl)
Steve Halverson is a developer with the IBM Streams platform team. In this presentation, Steve covers the details of how IBM Streams V4.1 integrates with IBM Platform Symphony.
Please view the related presentation available at:
http://www.slideshare.net/lisanl/introduction-to-ibm-platform-symphony-integration-with-ibm-streams-v41
Introduction to IBM Platform Symphony Integration with IBM Streams V4.1 (lisanl)
Roger Rea is the senior offering manager for IBM Streams. In this presentation, Roger discusses the key features of IBM Platform Symphony as it relates to integration with IBM Streams V4.1.
What's New in the Streams Console in IBM Streams V4.1 (lisanl)
Adriana Carvajal is a developer in the IBM Streams Console development team. In her presentation, Adriana describes the new Consoles added in IBM Streams V4.1, as well as enhancements to the existing Streams Console.
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... (John Andrews)
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Techniques to optimize the PageRank algorithm usually fall into two categories: reducing the work per iteration, and reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices that have already converged can save iteration time. Skipping in-identical vertices, i.e. those with the same in-links, helps avoid duplicate computations and thus can also reduce iteration time. Road networks often have chains that can be short-circuited before the PageRank computation to improve performance, since the final ranks of chain nodes are easy to calculate; this can reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order; this can reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in the PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
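The first of these optimizations, skipping computation on vertices that have already converged (here extended to skip only when a vertex's in-neighbors have also converged, so its rank can no longer change), can be sketched in Python; the 4-vertex graph is hypothetical and the code assumes no dangling nodes:

```python
def pagerank_skip_converged(adj, damping=0.85, tol=1e-10, max_iter=100):
    """Power-iteration PageRank that skips the rank update for any
    vertex that has converged and whose in-neighbors have all
    converged too. Assumes every vertex has at least one out-link."""
    n = len(adj)
    # Build reverse adjacency (in-links) and out-degrees.
    in_links = [[] for _ in range(n)]
    out_deg = [len(adj[u]) for u in range(n)]
    for u in range(n):
        for v in adj[u]:
            in_links[v].append(u)
    rank = [1.0 / n] * n
    converged = [False] * n
    for _ in range(max_iter):
        new_rank = rank[:]
        changed = False
        for v in range(n):
            # Skip v: its inputs are frozen, so its rank cannot move.
            if converged[v] and all(converged[u] for u in in_links[v]):
                continue
            r = (1 - damping) / n + damping * sum(
                rank[u] / out_deg[u] for u in in_links[v])
            converged[v] = abs(r - rank[v]) < tol
            new_rank[v] = r
            changed = changed or not converged[v]
        rank = new_rank
        if not changed:
            break
    return rank

# Hypothetical 4-vertex graph given as adjacency lists (no dangling nodes).
graph = [[1, 2], [2], [0], [0, 2]]
ranks = pagerank_skip_converged(graph)
print([round(r, 3) for r in ranks])
```

Vertex 3 has no in-links, so its rank settles to (1 - d)/n after one update and it is skipped for the rest of the run; the other optimizations (in-identical vertices, chain short-circuiting, SCC ordering) layer further savings on top of this loop.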
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. For more details, visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/