Erick Erickson presented on Streaming Aggregation in Solr. SA allows processing very large result sets across Solr nodes in parallel. It enables SQL-like queries and arbitrary operations on result sets. SA uses docValues fields and tuples to export data for processing. Parallel SQL and Streaming Expressions provide additional interfaces to SA beyond Java code. SA complements Solr's search capabilities by enabling analytics on large result sets.
Apache Solr is a powerful search and analytics engine with features such as full-text search, faceting, joins, and sorting, and it is capable of handling large amounts of data across a large number of servers. However, with all that power and scalability comes complexity. Solr 6 supports a Parallel SQL feature which provides a simplified, well-known interface to your data in Solr, performs key operations such as sorts and shuffles inside Solr for massive speedups, and provides best-practice-based query optimization; by leveraging the scalability of SolrCloud and a clever implementation, it allows you to throw massive amounts of computing power behind analytical queries.
In this talk, we will explore the why, what and how of Parallel SQL and its building block Streaming Expressions in Solr 6 with a hint of the exciting new developments around this feature.
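As a rough sketch of what the interface looks like, the snippet below posts a SQL statement to Solr's `/sql` endpoint. The collection name and statement are placeholders, and the helper assumes a SolrCloud cluster on the default port; treat it as an illustration, not a definitive client.

```python
def build_sql_request(collection, stmt, host="http://localhost:8983"):
    """Build the URL and form payload for Solr's Parallel SQL endpoint
    (endpoint path and parameter names per the Solr reference guide)."""
    url = f"{host}/solr/{collection}/sql"
    payload = {"stmt": stmt, "aggregationMode": "map_reduce"}
    return url, payload

def demo():
    # Needs a running SolrCloud cluster and the `requests` package.
    import requests
    url, payload = build_sql_request(
        "techproducts",
        "SELECT manu, COUNT(*) FROM techproducts GROUP BY manu LIMIT 10",
    )
    resp = requests.post(url, data=payload, timeout=10)
    # Result tuples stream back as JSON documents.
    for tpl in resp.json()["result-set"]["docs"]:
        print(tpl)
```

Calling `demo()` against a live cluster would print one tuple per aggregated group.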
Getting Started in Blockchain Security and Smart Contract Auditing | Beau Bullock
Why is blockchain security important?
Blockchain usage has exploded since the Bitcoin whitepaper was first published in 2008. Many applications rely on this technology for the increased trust and privacy that would otherwise be absent from a centralized system.
The ecosystem surrounding blockchain technology is large, complex, and has many moving pieces. Exchanges exist where users can transact various cryptocurrencies, NFTs, and tokens. Smart contracts can be written to programmatically apply behavior to blockchain transactions. Decentralized Finance (DeFi) markets exist where users can swap tokens without needing to sign up for an account.
All of these pieces are prone to vulnerabilities, and with blockchain being at the forefront of emerging technology new issues are being found daily.
In this Black Hills Information Security (BHIS) webcast, we'll use case studies about recent blockchain hacks to introduce the underlying issues that occur in writing/engineering smart contracts that have ultimately led to the loss of millions of dollars to attackers.
Asset Tokenization - An Introduction and Overview, Guest Lecture at SMU | Patrick Schueffel
This presentation provides an introduction and overview of the topic of Asset Tokenization. It explains why Asset Tokenization can help to unlock massive value on a global scale by democratizing investment processes in the capital markets. It highlights the significance of these concepts by drawing historical parallels.
When communication fails, PROFINET IO Devices go to their failsafe state. For more critical networks, one could consider creating redundant paths in the PROFINET network.
The working principles of industrially available redundant Ethernet technologies such as MRP, PRP, and HSR are explained, and measurements and some industrial case studies are discussed.
This presentation shows the evolution of blockchain implementations from simple financial transactions to complex computer programs (i.e. Smart Contracts)
Ethereum at its simplest, is an open software platform based on blockchain technology
Ethereum allows developers to build and deploy decentralized applications.
Building Event-Driven (Micro) Services with Apache Kafka | Guido Schmutz
This talk begins with a short recap of how we created systems over the past 20 years, up to the current idea of building systems using a Microservices architecture. What is a Microservices architecture and how does it differ from a Service-Oriented Architecture? Should you use traditional REST APIs to integrate services with each other in a Microservices architecture? Or is it better to use a more loosely-coupled protocol? Answers to these and many other questions are provided. The talk will show how a distributed log (event hub) can help to create a central, persistent history of events and what benefits we achieve from doing so. Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled, event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk shows the difference between request-driven and event-driven communication and answers when to use which. It highlights how modern stream processing systems can be used to hold state both internally and in a database, and how this state can be used to further increase independence from other services, the primary goal of a Microservices architecture.
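To make the event-driven backbone concrete, here is a minimal sketch of publishing a domain event to Kafka. The envelope schema, topic name, and event type are illustrative assumptions, not part of the talk:

```python
import json
import time
import uuid

def make_event(event_type, payload):
    """Wrap a domain payload in a minimal event envelope
    (this schema is an assumption, not a Kafka requirement)."""
    return {
        "id": str(uuid.uuid4()),
        "type": event_type,
        "ts": time.time(),
        "payload": payload,
    }

def serialize(event):
    """Events on the log are just bytes; JSON keeps the sketch simple."""
    return json.dumps(event, sort_keys=True).encode("utf-8")

def demo():
    # Needs a running broker and the kafka-python package.
    from kafka import KafkaProducer
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("orders", serialize(make_event("OrderPlaced", {"orderId": 42})))
    producer.flush()
```

Any number of downstream services can then consume the `orders` topic independently, replaying the persistent history from any offset.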
Decentraland. Whitepaper.
Decentraland is a virtual reality platform powered by the Ethereum blockchain. Users
can create, experience, and monetize content and applications. Land in Decentraland
is permanently owned by the community, giving them full control over their creations.
Users claim ownership of virtual land on a blockchain-based ledger of parcels.
Landowners control what content is published to their portion of land, which is
identified by a set of cartesian coordinates (x,y). Contents can range from static 3D
scenes to interactive systems such as games.
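A toy in-memory model of such a parcel ledger (purely illustrative; the real ledger lives in Ethereum smart contracts) might look like this:

```python
class ParcelLedger:
    """Toy model of a parcel ledger keyed by Cartesian coordinates.
    Illustrative only -- the real Decentraland ledger is on-chain."""

    def __init__(self):
        self._parcels = {}

    def claim(self, x, y, owner):
        # A parcel can only be claimed once; ownership is permanent.
        if (x, y) in self._parcels:
            raise ValueError("parcel already owned")
        self._parcels[(x, y)] = {"owner": owner, "content": None}

    def publish(self, x, y, owner, content):
        # Only the landowner controls what is published to a parcel.
        parcel = self._parcels[(x, y)]
        if parcel["owner"] != owner:
            raise PermissionError("only the landowner may publish content")
        parcel["content"] = content

    def owner_of(self, x, y):
        return self._parcels[(x, y)]["owner"]
```

The `(x, y)` key mirrors how parcels are addressed by coordinates, and the ownership check mirrors the landowner-only publishing rule described above.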
How we eased our security journey with OAuth (Goodbye Kerberos!) | Paul Makka... | HostedbyConfluent
Saxo Bank is on a growth journey, and Kafka is a critical component of that success. Securing our financial event streams is a top priority for us, and initially we started with an on-prem Kafka cluster secured with (the de facto) Kerberos. However, as we modernize and scale, the demands of hybrid cloud, multiple domains, polyglot computing, and Data Mesh require us to also modernize our approach to security. In this talk, we will describe how we took the default (non-production-ready) Kafka OAuth implementation and productionized it to work with Kafka in Azure Cloud, including the Kafka stack and clients. By enabling both Kerberos and OAuth running on-prem and in the cloud, we now plan to gracefully retire Kerberos from our estate.
Delivering: from Kafka to WebSockets | Adam Warski, SoftwareMill | HostedbyConfluent
Here's the challenge: we've got a Kafka topic, where services publish messages to be delivered to browser-based clients through web sockets.
Sounds simple? It might, but we're faced with an increasing number of messages, as well as a growing count of web socket clients. How do we scale our solution? As our system grows to a larger number of servers, failures become more frequent. How do we ensure fault tolerance?
There are a couple of possible architectures. Each web socket node might consume all messages. Otherwise, we need an intermediary that redistributes the messages to the proper web socket nodes.
Here, we might either use a Kafka topic, or a streaming forwarding service. However, we still need a feedback loop so that the intermediary knows where to distribute messages.
We’ll take a look at the strengths and weaknesses of each solution, as well as limitations created by the chosen technologies (Kafka and web sockets).
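One simple redistribution scheme (an assumption for illustration, not necessarily the talk's design) is to hash each client id, so every intermediary deterministically agrees on the owning web socket node without a feedback loop, at the cost of reshuffling clients when the node list changes:

```python
import hashlib

def node_for_client(client_id, nodes):
    """Deterministically map a client id to a web socket node.

    Every intermediary that knows the same `nodes` list computes the same
    answer, so no feedback loop is needed. The trade-off: when `nodes`
    changes, many clients are reassigned (consistent hashing softens this).
    """
    digest = hashlib.sha256(client_id.encode("utf-8")).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]
```

A feedback-loop design instead has each web socket node announce which clients it currently holds (e.g. on a control topic), trading extra coordination for stable assignments.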
Apart from Proof of Work, there are many other consensus mechanisms being discussed. What are they, and what are their pros and cons? (Proof of Stake, Proof of Elapsed Time, Proof of Authority, Proof of Burn, Byzantine Fault Tolerance, Proof of Importance)
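For reference, the puzzle behind Proof of Work itself fits in a few lines: find a nonce whose hash of the block data meets a difficulty target. The tiny difficulty below is a toy setting; real chains use vastly larger targets.

```python
import hashlib

def proof_of_work(block_data, difficulty=2):
    """Search for a nonce such that SHA-256(block_data + nonce) starts
    with `difficulty` zero hex digits -- the core Proof of Work puzzle.
    Finding the nonce is expensive; verifying it is one hash."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        h = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if h.startswith(prefix):
            return nonce, h
        nonce += 1
```

The asymmetry (costly search, cheap verification) is what the alternative mechanisms above replace with stake, elapsed time, identity, burned coins, or voting.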
Blockchain Training | Blockchain Tutorial for Beginners | Blockchain Technolo... | Edureka!
This Edureka Blockchain training will give you a fundamental understanding of Blockchain and Bitcoin.
This session will help you learn the following topics:
1. Current Existing Monetary System
2. How can Blockchain and Bitcoin help?
3. What is Blockchain?
4. Blockchain concepts
5. Bitcoin Transaction
6. Blockchain features
7. Blockchain Use Case
8. Demo: Bitcoin Transaction
What's Your Super-Power? Mine is Machine Learning with Oracle Autonomous DB. | Jim Czuprynski
Artificial Intelligence (AI) and Machine Learning (ML) are a lot like preserving the Earth's environment: Almost everyone is talking about what should be done to save it, but very few people have committed to actually doing something about it. I'll demonstrate a few real-life opportunities to discover unseen patterns and relationships within sample financial and election data by leveraging the AI and ML capabilities already built into Oracle Autonomous Database.
NoSQL - MongoDB. Agility, scalability, performance. I am going to talk about the basics of NoSQL and MongoDB. Why do some projects require RDBMSs and others NoSQL databases? What are the pros and cons of using NoSQL vs. SQL? How is data stored and transferred in MongoDB? What query language is used? How does MongoDB support high availability and automatic failover through replication? What is sharding, and how does it help to support scalability? The newest levels of concurrency: collection-level and document-level locking.
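As a taste of that query language, MongoDB expresses a query as a document rather than an SQL string. The collection and field names below are made up for illustration:

```python
def find_adults_filter(min_age=18):
    """MongoDB's query language is a document, not an SQL string.
    SQL:   SELECT name FROM users WHERE age >= 18
    Mongo: db.users.find({"age": {"$gte": 18}}, {"name": 1})
    """
    return {"age": {"$gte": min_age}}

def demo():
    # Needs a running mongod and the pymongo package.
    from pymongo import MongoClient
    users = MongoClient()["app"]["users"]
    # Second argument is a projection: return only `name`, drop `_id`.
    for doc in users.find(find_adults_filter(), {"name": 1, "_id": 0}):
        print(doc)
```

Because filters are plain documents, they can be built, composed, and inspected programmatically before ever touching the database.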
I inherited a MongoDB database server with 60 collections and 100 or so indexes.
The business users are complaining about slow report completion times. What can I do to improve performance?
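A common first step (a sketch, assuming MongoDB's built-in database profiler; database name and threshold are placeholders) is to enable profiling and inspect the slowest operations recorded in `system.profile`:

```python
def slow_op_filter(threshold_ms=100):
    """system.profile query matching operations slower than the threshold."""
    return {"millis": {"$gte": threshold_ms}}

def demo():
    # Needs a running mongod and the pymongo package.
    from pymongo import MongoClient
    db = MongoClient()["reports"]
    # Profiling level 1 records only operations slower than `slowms`.
    db.command("profile", 1, slowms=100)
    # ... run the slow reports, then inspect the worst offenders:
    for op in (db["system.profile"]
               .find(slow_op_filter())
               .sort("millis", -1)
               .limit(10)):
        print(op["ns"], op["millis"], op.get("planSummary"))
```

A `planSummary` of `COLLSCAN` points at a missing index; with 100 or so existing indexes, it is also worth checking `$indexStats` for indexes that are never used and only slow down writes.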
SOLR has been integrated with OpenCms 9.5 more tightly than ever before. With 9.5, all content items in the OpenCms repository can be indexed by SOLR, in all available languages. This deep integration allows SOLR to be used not only for basic full-text searches, but also as an API extension for creating advanced queries for all kinds of content.
In this workshop, Sören shows how to use SOLR for advanced content retrieval in OpenCms. He combines attributes, properties and XML field values in a query that generates an editable list of elements with a content collector. He also explains how to use advanced features such as individual content field mappings to make your custom content types easily findable.
Closing the Loop in Extended Reality with Kafka Streams and Machine Learning ... | confluent
We’ve built a real-time streaming platform that enables prediction based on user behavior, with events occurring in virtual and augmented reality environments. The solution enables organizations to train people in an extended reality environment where real-life training would be costly and dangerous. Kafka Streams enables analyzing spatial and event data to detect gestural features and analyze user behavior in real time, in order to predict any future mistake the user might make. Kafka is the backbone of our real-time analytics and extended reality communication platform, with our cluster and applications deployed on Kubernetes.
In this talk, we will mainly focus on the following: 1. Why Extended Reality with Kafka is a step in the right direction. 2. Architecture, and the power of Schema Registry in building a generic platform for pluggable XR apps and analytics models. 3. How KSQL and Kafka Streams fit into the Kafka ecosystem to help analyze human motion data and detect features for real-time prediction. 4. Demo of a VR application with real-time analytics feedback, which assists people in being trained to work with chemical laboratory equipment.
Search is the Tip of the Spear for Your B2B eCommerce Strategy | Lucidworks
With ecommerce experiencing explosive growth, it seems intuitive that the B2B segment of that ecosystem is mirroring the same trajectory. That said, B2B has very different needs when it comes to transacting with the same style of experiences that we see in B2C. For instance, B2B ecommerce is about precision findability, whereas B2C customers can convert at higher rates when they’re just browsing online. In order for the B2B buying experience to be successful, search needs to be tuned to meet the unique needs of the segment.
In this webinar with Forrester senior analyst Joe Cicman, you’ll learn:
-Which verticals in B2B will drive the most growth, and how machine-learning powered personalization tactics can be deployed to support those specific verticals
-Why an omnichannel selling approach must be deployed in order to see success in B2B
-How deploying content search capabilities will support a longer sales cycle at scale
-What the next steps are to support a robust B2B commerce strategy supported by new technology
Speakers
Joe Cicman, Senior Analyst, Forrester
Jenny Gomez, VP of Marketing, Lucidworks
Customer loyalty starts with quickly responding to your customer’s needs. When it comes to resolving open support cases, time is of the essence. Time spent searching for answers adds up and creates inefficiencies in resolving cases at scale. Relevant answers need to be a few clicks away and easily accessible for agents directly from their service console.
We will explore how Lucidworks’ Agent Insights application automatically connects agents with the correct answers and resources. You’ll learn how to:
-Configure a proactive widget in an agent’s case view page to access resources across third-party systems (such as Sharepoint, Confluence, JIRA, Zendesk, and ServiceNow).
-Easily set up query pipelines to autonomously route assets and resources that are relevant to the case-at-hand—directly to the right agent.
-Identify subject matter experts within your support data and access tribal knowledge with lightning-fast speed.
How Crate & Barrel Connects Shoppers with Relevant Products | Lucidworks
Lunch and Learn during Retail TouchPoints #RIC21 virtual event.
***
Crate & Barrel’s previous search solution couldn’t provide its shoppers with an online search and browse experience consistent with the customer-centric Crate & Barrel brand. Meanwhile, Crate & Barrel merchandisers spent the bulk of their time manually creating and maintaining search rules. The search experience impacted customer retention, loyalty, and revenue growth.
Join this lunch & learn for an interactive chat on how Crate & Barrel partnered with Lucidworks to:
-Improve search and browse by modernizing the technology stack with ML-based personalization and merchandising solutions
-Enhance the experience for both shoppers and merchandisers
-Explore signals to transform the omnichannel shopping experience
Questions? Visit https://lucidworks.com/contact/
Learn how to guide customers to relevant products using eCommerce search, hyper-personalisation, and recommendations in our ‘Best-In-Class Retail Product Discovery’ webinar.
Nowadays, shoppers want their online experience to be engaging, inspirational, and fulfilling. They want to find what they’re looking for quickly and easily. If the sought-after item isn’t available, they want the next best product or content surfaced to them. They want a website to understand their goals as though they were talking to a sales assistant in person, in-store.
In this webinar, we explore IMRG industry data insights and a best-in-class example of retail product discovery. You’ll learn:
- How AI can drive increased revenue through hyper-personalised experiences
- How user intent can be easily understood and results displayed immediately
- How merchandisers can be empowered to curate results and product placement – all without having to rely on IT.
Presented by:
Dave Hawkins, Principal Sales Engineer - Lucidworks
Matthew Walsh, Director of Data & Retail - IMRG
Connected Experiences Are Personalized Experiences | Lucidworks
Many companies claim personalization and omnichannel capabilities are top priorities. Few are able to deliver on those experiences.
For a recent Lucidworks-commissioned study, Forrester Consulting surveyed 350+ global business decision-makers to see what gets in the way of achieving these goals. They discovered that inefficient technology, lack of behavioral insights, and failure to tie initiatives to enterprise-wide goals are some of the most frequent blockers to personalization success.
Join guest speaker, Forrester VP and Principal Analyst, Brendan Witcher, and Lucidworks CEO, Will Hayes, to hear the results of the Forrester Consulting study, how to avoid “digital blindness,” and how to apply VoC data in real-time to delight customers with personalized experiences connected across every touchpoint.
In this webinar, you’ll learn:
- Why companies who utilize real-time customer signals report more effective personalization
- How to connect employees and customers in a shared experience through search and browse
- How Lucidworks clients Lenovo, Morgan Stanley and Red Hat fast-tracked improvements in conversion, engagement and customer satisfaction
Featuring
- Will Hayes, CEO, Lucidworks
- Brendan Witcher, VP, Principal Analyst, Forrester
Intelligent Insight Driven Policing with MC+A, Toronto Police Service and Luc... | Lucidworks
Intelligent Policing. Leveraging Data to more effectively Serve Communities.
Policing in the next decade is anticipated to be very different from historical methods: more data driven, more focused on the intricacies of the communities served, and more open and collaborative, to make informed recommendations a reality. Whether it's social populations, NIBRS, or organizational improvement that's the driver, the IT requirement is largely the same: provide access to large volumes of siloed data to gain a full 360-degree understanding of existing connections and patterns for improved insight and recommendation.
Join us for a round table discussion of how the Toronto Police Service is better serving their community through deploying a unified intelligent data platform.
Data innovation improves officers' engagement with existing data and streamlines investigation workflows by enhancing collaboration. This improved visibility into existing police data allows for a more intelligent and responsive police force.
In this webinar, we'll cover:
-The technology needs of an intelligent police force.
-How a Global Search improves an officer's interaction with existing data.
Featuring:
-Simon Taylor, VP, Worldwide Channels & Alliances, Lucidworks
-Michael Cizmar, Managing Director, MC+A
-Ian Williams, Manager of Analytics & Innovation, Toronto Police Service
Accelerate The Path To Purchase With Product Discovery at Retail Innovation C... | Lucidworks
Wish your conversion rates were higher? Can’t figure out how to efficiently and effectively serve all the visitors on your site? Embarrassed by the quality of your product discovery experience? The bar is high and the influx of online shopping over recent months has reminded us that the opportunities are real. We’re all deep in holiday prep, but let’s take a few minutes to think about January 2021 and beyond. How can we position ourselves for success with our customers and against our competition?
Grab your lunch and let’s dive into three strategies that need to be part of your 2021 roadmap. You don’t need an army to get there. But you do need to take action and capitalize on the shoppers abandoning the product discovery journey on your site.
In this session, attendees will find out how to:
-Take control of merchandising at scale;
-Implement hands-free search relevancy; and
-Address personalization challenges.
AI-Powered Linguistics and Search with Fusion and Rosette | Lucidworks
For a personalized search experience, search curation requires robust text interpretation, data enrichment, relevancy tuning and recommendations. In order to achieve this, language and entity identification are crucial.
For teams working on search applications, advanced language packages allow them to achieve greater recall without sacrificing precision.
Join us for a guided tour of our new Advanced Linguistics packages, available in Fusion, thanks to the technology partnership between Lucidworks and Basistech.
We’ll explore the application of language identification and entity extraction in the context of search, along with practical examples of personalizing search and enhancing entity extraction.
In this webinar, we’ll cover:
-How Fusion uses the Rosette Basic Linguistics and Entity Extraction packages
-Tips for improving language identification and treatment as well as data enrichment for personalization
-Speech2 demo modeling Active Recommendation
-Use Rosette’s packages with Fusion Pipelines to build custom entities for specific domain use cases
Featuring:
-Radu Miclaus, Director of Product, AI and Cloud, Lucidworks, Lucidworks
-Robert Lucarini, Senior Software Engineer, Lucidworks
-Nick Belanger, Solutions Engineer, Basis Technology
The Service Industry After COVID-19: The Soul of Service in a Virtual Moment | Lucidworks
Before COVID-19, almost 80% of the US workforce worked in service jobs that involve in-person interaction with strangers. Now, leaders of service organizations must reshape their offerings during the pandemic and prepare for whatever the new normal turns out to be. Our three panelists will share ideas for adapting their service businesses now that closer-than-six-feet isn’t an option.
Join Lucidworks as we talk shop with 3 service business leaders, covering:
-Common impacts of the pandemic on service businesses (and what to do about them),
-How service teams can maintain a human touch across virtual channels, and
-Plans for the future, before and after the pandemic subsides.
Featuring
-Sara Nathan, President & CEO, AMIGOS
-Anthony Carruesco, Founder, AC Fly Fishing
-sara bradley, chef and proprietor, freight house
-Justin Sears, VP Product Marketing, Lucidworks
Webinar: Smart Answers for Employee and Customer Support After COVID-19 - Europe | Lucidworks
The COVID-19 pandemic has forced companies to support far more customers and employees through digital channels than ever before. Many are turning to chatbots to help meet increasing demand, but traditional rules-based approaches can’t keep up. Our new Smart Answers add-on to Lucidworks Fusion makes existing chatbots and virtual assistants more intelligent and more valuable to the people you serve.
Smart Answers for Employee and Customer Support After COVID-19 | Lucidworks
Watch our on-demand webinar showcasing Smart Answers on Lucidworks Fusion. This technology makes existing chatbots and virtual assistants more intelligent and more valuable to the people you serve.
In this webinar, we’ll cover off:
-How search and deep learning extend conversational frameworks for improved experiences
-How Smart Answers improves customer care, call deflection, and employee self-service
-A live demo of Smart Answers for multi-channel self-service support
Applying AI & Search in Europe - featuring 451 ResearchLucidworks
In the current climate, it’s now more important than ever to digitally enable your workforce and customers.
Hear from Simon Taylor, VP Global Partners & Alliances, Lucidworks and Matt Aslett, Research Vice President, 451 Research to get the inside scoop on how industry leaders in Europe are developing and executing their digital transformation strategies.
In this webinar, we’ll discuss:
The top challenges and aspirations European business and technology leaders are solving using AI and search technology
Which search and AI use cases are making the biggest impact in industries such as finance, healthcare, retail and energy in Europe
What technology buyers should look for when evaluating AI and search solutions
Webinar: 5 Must-Have Items You Need for Your 2020 Ecommerce StrategyLucidworks
In this webinar with 451 Research, you'll understand how retailers are using AI to predict customer intent and learn which key performance metrics are used by more than 120 online retailers in Lucidworks’ 2019 Retail Benchmark Survey.
In this webinar, you’ll learn:
● What trends and opportunities are facing the ecommerce industry in 2020
● Why search is the universal path to understanding customer intent
● How large online retailers apply AI to maximize the effectiveness of their personalization efforts
Where Search Meets Science and Style Meets Savings: Nordstrom Rack's Journey ...Lucidworks
Nordstrom Rack | Hautelook curates and serves customers a wide selection of on-trend apparel, accessories, and shoes at an everyday savings of up to 75 percent off regular prices. With over a million visitors shopping across different platforms every day, and a realization that customers have become accustomed to robust and personalized search interactions, Nordstrom Rack | Hautelook launched an initiative over a year ago to provide data science-driven digital experiences to their customers.
In this session, we’ll discuss Nordstrom Rack | Hautelook’s journey of operationalizing a hefty strategy, optimizing a fickle infrastructure, and rallying troops around a single vision of building an expansible machine-learning driven product discovery engine.
The audience will learn about:
-The key technical challenges and outcomes that come with onboarding a solution
-The lessons learned of creating and executing operational design
-The use of Lucidworks Fusion to plug custom data science models into search and browse applications to understand user intent and deliver personalized experiences
Apply Knowledge Graphs and Search for Real-World Decision IntelligenceLucidworks
Knowledge graphs and machine learning are on the rise as enterprises hunt for more effective ways to connect the dots between the data and the business world. With newer technologies, the digital workplace can dramatically improve employee engagement, data-driven decisions, and actions that serve tangible business objectives.
In this webinar, you will learn
-- Introduction to knowledge graphs and where they fit in the ML landscape
-- How breakthroughs in search affect your business
-- The key features to consider when choosing a data discovery platform
-- Best practices for adopting AI-powered search, with real-world examples
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
3. Who am I?
• Erick Erickson
• Lucene/Solr committer
• PMC member
• Independent Consultant (Workplace
Partners, LLC)
• Not the Red State Guy
• XKCD fan
5. Agenda
• High-level introduction to why you should care
about Streaming Aggregation (SA hereafter)
• High-level view of Parallel SQL processing built
on SA
• High-level view of Streaming Expressions
• Samples from a mortgage database
• Joel Bernstein will do a deep-dive right after this
presentation
• Assuming you are familiar with Solr concepts
6. Why SA?
• Solr has always had “issues” when
dealing with very large result sets
• Data returned had to be read from disk
and decompressed
• “Deep paging” paid this price too
• Entire result set returned at once == lots
of memory
7. Quick Overview of SA
• Built on the “export” capabilities introduced in
Solr 4.10
• Exports “tuples” which must be populated from
docValues fields
• Only exports primitive types, e.g. numeric,
string, etc.
• Work can be distributed in parallel to worker
nodes
• Can scale to limits of hardware, 10s of millions of
rows a second with ParallelStreams (we think)
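The export capability the later examples build on is exposed as an HTTP handler. A hedged illustration of such a request (host, port, and paths are placeholders; the collection and field names follow the deck's mortgage example) might look like:

```
http://localhost:8983/solr/hmda/export?q=phonetic_name:eric&fl=loan_amount,agency_code&sort=agency_code+asc
```

Note that /export requires an explicit sort and an fl list made up of docValues fields, which is exactly the restriction the next slide explains.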
8. DocValues
• DocValues are fundamental to SA: they are the only
fields that can be specified in the “fl” list of a
Streaming Aggregation query
• Only Solr “primitive” types (int/tint, long/tlong,
string) are allowed in DocValues fields
• Defined per-field in schema.xml
• Specifically, cannot be Solr.TextField-derived
• The Solr doc may contain any field types at all; the
DocValues restriction applies only to the fields that
may be exported in “tuples” for SA
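In schema.xml terms, a sketch of fields that would qualify for export might look like the following (field and type names are illustrative, chosen to match the deck's mortgage example, not taken from an actual schema):

```xml
<!-- Primitive, docValues="true" fields: usable in the "fl" list of an SA query -->
<field name="agency_code" type="string" indexed="true" stored="false" docValues="true"/>
<field name="loan_amount" type="tlong" indexed="true" stored="false" docValues="true"/>
<!-- A TextField-derived field may live in the same doc, but cannot be exported -->
<field name="phonetic_name" type="text_general" indexed="true" stored="true"/>
```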
9. We can do SQL in Solr!
select
agency_code, count(*), sum(loan_amount),
avg(loan_amount), min(loan_amount),
max(loan_amount), avg(applicant_income)
from hmda
where phonetic_name='(eric)'
having (avg(applicant_income) > 50)
group by agency_code
order by agency_code asc
10. And that’s not all!
• We can program arbitrary operations on complete
result sets
• We can parallelize processing across Solr nodes
• We can process very large result sets in limited
memory
• Design processing rate is 400K rows/node/second
11. Streaming Aggregation == glue
• Solr is built for returning the top N documents
• Top N is usually small, e.g. 20 docs
• Decompress to return fields (fl list)
• Solr commonly deals with billions of documents
• Analytics:
• Often memory intensive, especially in distributed
mode, if they can be done at all
• Are becoming more important to this thing we call
“search”
• Increasingly important in the era of “big data”
12. Use the Right Tool
• Three “modes”
• Streaming Aggregation to do arbitrary
operations on large result sets – SolrJ
• Streaming Expressions for a non-Java way to
access Streaming Aggregations – HTTP and SolrJ
• Parallel SQL to do selected SQL operations on
large result sets - SolrJ
• SA’s sweet spot: batch operations
• Complements Solr’s capabilities, applies to
different problems
13. Why not use an RDBMS?
• Well, if it’s the best tool, you should
• RDBMSs are not good search engines though
• Find the average mortgage value for all
users with a name that sounds like “erick”
• erik, erich, eric, aerick, erick, arik
• Critical point: The “tuples” processed can be
those that satisfy any arbitrary Solr query
14. Why not use Spark?
• Well, if it’s the best tool, you should
• I’m still trying to understand when one is
preferable to the other
• SA only needs Solr, no other infrastructure
15. Why not just use Solr?
• Well, if it’s the best tool, you should
• What I’d do: exhaust Solr’s capabilities then apply
SA to those kinds of problems that OOB Solr isn’t
satisfactory for, especially those that require
processing very large result sets
16. How does SA work?
• Simple example of how to get a bunch of rows
back and “do something” with them from a Solr
collection
• You can process multiple streams from entirely
different collections if you choose!
• It’s usually a good idea to sort result sets
• Process all of one kind of thing then move on
• Could write the results to file, connector, etc.
17. Sample Data
• Data set of approx 200M mortgages. Selected
fields:
• Year
• Loan amount (thousands)
• Agency (FDIC, FRS, HUD)
• Reason for loan
• Reason for denial
• No personal data; I added randomly generated
names to illustrate search
18. Use SA through SolrJ
• The basic pattern is:
• Create a Solr query
• Feed it to the appropriate stream
• Process the “tuples”
• Right, what’s a “tuple”? A wrapper for a map:
• keys are the Solr field names
• values the contents of those fields: must be docValues
• Why this restriction? Because getting stored fields is
expensive
19. Code example
• Here’s a bit of code that
• Accesses a 2-shard SolrCloud collection
• Computes the average mortgage by “agency”,
e.g. HUD, OTS, OCC, OFS, FDIC, NCUA
• For a 217M-document dataset, returning 335K
results (untuned) took 2.1 seconds
20. Code example
String zkHost = "169.254.80.84:2181";
Map<String, String> params = new HashMap<>();
params.put("q", "phonetic_name:eric");
params.put("fl", "loan_amount,agency_code");
params.put("sort", "agency_code asc");
params.put("qt", "/export");
// ...
CloudSolrStream stream = new CloudSolrStream(zkHost, "hmda", params);
stream.open();
21. More code
while (true) {
Tuple tuple = stream.read();
if (tuple.EOF) {
break;
}
// next slide in here
}
22. Last Code
String newAgency = tuple.getString("agency_code");
long loanAmount = tuple.getLong("loan_amount");
if (newAgency.equals(thisAgency)) {
add_to_current_counters
} else {
log(average for this agency);
reset_for_next_agency
}
23. More interestingly
• Using SA, you can:
• Join across completely different collections
• Manipulate data in arbitrary ways to suit your use-case
• Distribute this load across the Solr nodes in a
collection
• Unlike standard search, SA can use cycles on all the
replicas of a shard
• Process zillions of buckets without blowing up
memory
24. Parallel SQL
• Use from SolrJ
• The work can be distributed across multiple
“worker” nodes
• Operations can be combined into complex
statements
• Let’s do our previous example with ParallelSQL
• Currently trunk/6.0 only due to the Java 8
requirement of the SQL parser. No plan to backport to 5.x
25. Parallel SQL
• SQL “select” is mapped to Solr Search
• Order by, Group by and Having are all supported
• Certain aggregations are supported
• count, sum, avg, min, max
• You can get crazy here:
• having ((sum(fieldC) > 1000) AND (avg(fieldY) <= 10))
• The following query with numWorkers=2 over
612K rows took 383ms
26. Sample SQL
select
agency_code, count(*), sum(loan_amount),
avg(loan_amount), min(loan_amount),
max(loan_amount)
from hmda
where phonetic_name='(erich)'
group by agency_code
order by agency_code asc
27. Sample SQL
select
agency_code, count(*), sum(loan_amount),
avg(loan_amount), min(loan_amount),
max(loan_amount)
from hmda <- collection name
where phonetic_name='(eric)'
group by agency_code
order by agency_code asc
28. Sample SQL
select
agency_code, count(*), sum(loan_amount),
avg(loan_amount), min(loan_amount),
max(loan_amount)
from hmda
where phonetic_name='(eric)' <- Solr search
group by agency_code
order by agency_code asc
29. Sample SQL
select
agency_code, count(*), sum(loan_amount),
avg(loan_amount), min(loan_amount),
max(loan_amount)
from hmda
where phonetic_name='(eric)'
group by agency_code <- Solr field
order by agency_code asc <- Solr field
30. Parallel Sql in SolrJ
Map<String, String> params = new HashMap<>();
params.put(CommonParams.QT, "/sql");
params.put("numWorkers", "2");
params.put("sql", "select agency_code, count(*), " +
    "sum(loan_amount), avg(loan_amount), " +
    "min(loan_amount), max(loan_amount), " +
    "avg(applicant_income) from hmda where phonetic_name='eric' " +
    "group by agency_code " +
    "having (avg(applicant_income) > 50) " +
    "order by agency_code asc");
SolrStream stream = new SolrStream("http://ericks-mac-pro:8981/solr/hmda", params);
31. Parallel Sql in SolrJ
Map<String, String> params = new HashMap<>();
params.put(CommonParams.QT, "/sql");
params.put("numWorkers", "2");
params.put("sql", "select agency_code, count(*), " +
    "sum(loan_amount), avg(loan_amount), " +
    "min(loan_amount), max(loan_amount), " +
    "avg(applicant_income) from hmda where phonetic_name='eric' " +
    "group by agency_code " +
    "having (avg(applicant_income) > 50) " +
    "order by agency_code asc");
32. Parallel Sql in SolrJ
SolrStream stream = new SolrStream("http://ericks-mac-pro:8981/solr/hmda", params);
try {
stream.open();
while (true) {
Tuple tuple = stream.read();
dumpTuple(tuple);
log("");
if (tuple.EOF) {
break;
}
}
} finally {
if (stream != null) stream.close();
}
33. Parallel Sql in SolrJ
SolrStream stream = new SolrStream("http://ericks-mac-pro:8981/solr/hmda", params);
try {
stream.open();
while (true) {
Tuple tuple = stream.read();
if (tuple.EOF) {
break;
}
dumpTuple(tuple);
}
} finally {
if (stream != null) stream.close();
}
36. Current Gotchas
• All fields must be lower case (possibly with
underscores)
• Trunk (6.0) only; a backport to 5.x (5.4?) is not
planned (Calcite)
• Requires solrconfig entries
• Only nodes hosting collections can act as worker
nodes (But not necessarily the queried collection)
• Be prepared to dig; documentation is also
evolving
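For the "requires solrconfig entries" gotcha, the relevant handler registrations in the Solr 5.x era looked roughly like the following (a sketch from the stock solrconfig.xml of that period, so verify against your version before relying on it):

```xml
<!-- Export handler backing Streaming Aggregation -->
<requestHandler name="/export" class="solr.SearchHandler">
  <lst name="invariants">
    <str name="rq">{!xport}</str>
    <str name="wt">xsort</str>
    <str name="distrib">false</str>
  </lst>
</requestHandler>
<!-- Parallel SQL and Streaming Expression endpoints -->
<requestHandler name="/sql" class="solr.SQLHandler"/>
<requestHandler name="/stream" class="solr.StreamHandler"/>
```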
37. Streaming expressions
• Provide a simple query language for SolrCloud
that merges search with parallel computing
without Java programming
• Operations can be nested
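As a hedged example of that nesting (collection and field names follow the deck's mortgage data; the function names track the Solr 5.x/6 streaming API):

```
unique(
  search(hmda,
         q="phonetic_name:eric",
         fl="agency_code,loan_amount",
         sort="agency_code asc",
         qt="/export"),
  over="agency_code")
```

Here the inner `search` streams sorted tuples out of the hmda collection, and the wrapping `unique` emits one tuple per agency_code, all over HTTP with no Java required.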
44. Future Enhancements
• This capability is quite new (introduced in Solr
5.2), with significant enhancements every release
• Some is still “baking” in trunk/6.0
• A JDBC Driver so any Java application can treat
Solr like a SQL database, e.g. for visualization
• More user-friendly interfaces (widgets?)
• More docs, how to’s, etc.
• “Select Into”
45. No time for (some)
• Oh My. Subclasses of TupleStream:
• MetricStream
• RollupStream (for high cardinality faceting)
• UniqueStream
• FilterStream (Set operations)
• MergeStream
• ReducerStream
• SolrStream for non-SolrCloud
46. No time for (cont)
• Parallel execution details
• Distributing SA across “Worker nodes”
• All of the Parallel SQL composition
possibilities
• All of the Streaming Expression
operations