This document provides recommendations for system capacity planning for an Oracle database:
- Plan for 1 CPU per 200 concurrent users and prefer medium speed CPUs over fewer faster CPUs.
- Reserve 10% of memory for the operating system and allocate 220 MB for the Oracle SGA and 3 MB per user process.
- Use striped and mirrored or striped with parity RAID for disks. Consider raw devices or SANs if possible.
- Ensure the network capacity is adequate based on site size.
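The sizing rules above lend themselves to a quick back-of-the-envelope calculation. A minimal sketch using only the figures from the bullets (1 CPU per 200 concurrent users, 10% of memory reserved for the OS, 220 MB for the SGA, 3 MB per user process); the function names are illustrative, not from the document:

```python
def required_cpus(concurrent_users, users_per_cpu=200):
    """1 CPU per 200 concurrent users, rounded up."""
    return -(-concurrent_users // users_per_cpu)  # ceiling division

def required_memory_mb(concurrent_users, sga_mb=220, per_process_mb=3):
    """Total RAM estimate: 220 MB SGA + 3 MB per user process,
    grossed up so that 10% of the box is left for the OS."""
    oracle_mb = sga_mb + per_process_mb * concurrent_users
    return oracle_mb / 0.9  # Oracle gets the remaining 90%

# Example: a site with 400 concurrent users
print(required_cpus(400))              # 2 CPUs
print(round(required_memory_mb(400)))  # ~1578 MB of RAM
```

The ceiling division matters at the boundaries: 401 users already needs a third CPU.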
Being Closer to Cassandra, by Oleg Anastasyev (odnoklassniki.ru). Talk at Cassandra Summit EU 2013.
Odnoklassniki uses Cassandra for its business data, which doesn't fit into RAM. This data is typically fast-growing, frequently accessed by our users, and must always be available, because it constitutes our primary business as a social network. The way we use Cassandra is somewhat unusual: we don't use Thrift or the Netty-based native protocol to communicate with Cassandra nodes remotely. Instead, we co-locate Cassandra nodes in the same JVM as the business service logic, exposing not generic data manipulation but a business-level interface remotely. This way, we avoid extra network round trips within a single business transaction and use internal calls to Cassandra classes to get information faster. This also lets us make many small hacks to Cassandra's internals, yielding large gains in efficiency and ease of distributed server development.
The document provides commands and descriptions for common Linux terminal tasks including system administration, networking, package management, and navigating files and directories. It lists commands for changing passwords, moving through directories, copying/deleting files, mounting devices, starting/stopping services, checking network information, installing and removing packages, and more. Precautions are given for potentially dangerous commands.
Cassandra Day SV 2014: Basic Operations with Apache Cassandra (DataStax Academy)
Operations and tuning for Cassandra involve:
1) Ensure a good data model before trying to optimize operations, as a bad model cannot be fixed by operations alone.
2) Sizing for latency and throughput depends on factors like CPU, memory, disk type, and replication factor, with SSDs offering much faster performance than mechanical disks.
3) Various tuning techniques are described, such as disabling access-time updates, warming the buffer cache, using SSDs, adjusting read-ahead and I/O schedulers for SSDs, choosing appropriate compaction strategies, and setting the Cassandra heap size.
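One of the tuning knobs mentioned, heap size, follows a well-known heuristic: Cassandra's stock cassandra-env.sh derives the default max heap from system RAM as max(min(half of RAM, 1 GB), min(a quarter of RAM, 8 GB)). A sketch of that calculation (the formula matches the stock script; the function name is illustrative):

```python
def max_heap_mb(system_memory_mb):
    """Cassandra's default max-heap heuristic, as computed in
    cassandra-env.sh: max(min(1/2 RAM, 1 GB), min(1/4 RAM, 8 GB))."""
    half = min(system_memory_mb // 2, 1024)
    quarter = min(system_memory_mb // 4, 8192)
    return max(half, quarter)

# Small boxes get up to half their RAM (capped at 1 GB);
# large boxes get a quarter of RAM, capped at 8 GB.
for ram_mb in (2048, 8192, 65536):
    print(ram_mb, "->", max_heap_mb(ram_mb))
```

The 8 GB cap reflects the era's CMS garbage collector: beyond that, GC pauses tend to outweigh the benefit of a bigger heap.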
The document is a reference guide for Unix/Linux commands and tasks useful for system administration and advanced users. It contains over 20 sections covering topics like the system, processes, file system, networking, encryption, version control and programming. Each section provides concise explanations of relevant commands and how to perform common tasks in that area. The reader is expected to have a working knowledge of the Unix environment.
The document is a reference guide for Unix/Linux commands, organized into sections covering topics such as the system, processes, file system, network, and programming. It provides concise explanations of commands and tasks for advanced users, with the goal of being a practical toolbox reference. Sections include commands for viewing hardware and software information, monitoring system performance and activity, managing users and groups, and configuring process limits.
This document discusses containers and virtual machines. It covers the key differences between containers and VMs, such as containers sharing an OS kernel while VMs make full copies. It also outlines Docker concepts like images, containers, and the Docker engine. The document explains how to run and build containers and images, and mentions some disadvantages of containers related to security and networking.
The document discusses the performance of Cassandra over multiple versions from 0.7.0 to 1.0.0, noting new features introduced in each release, including counters, CQL, compression, and LevelDB-style compactions. It then analyzes the performance improvements achieved through optimizations like compression and leveled compaction on a single-machine workload of inserts, point gets, and range queries. Finally, it invites questions about Cassandra's future performance.
This document provides instructions for configuring a virtual private server (VPS) to enable remote desktop access via VNC. It describes how to install packages like Firefox and TigerVNC, set up a VNC user account, configure the VNC server, restart the service, and provides the IP address and credentials needed to remotely login to the desktop.
The document describes configuring an iSCSI target on a server to provide 5GB of shared block storage to clients. It involves creating an LVM volume from an unpartitioned disk, configuring the iSCSI target to use the LVM volume as a backing store, creating an ACL and LUN, then configuring an initiator on a client to discover and login to the target to access the LUN as a block device.
Nowadays, scaling and auto-scaling have become relatively easy tasks. Everyone knows how to set up auto-scaling environments - Auto-Scaling groups, Swarm, Kubernetes, etc.
But when we try to scale I/O-bound workloads:
- Message queues (Kafka, Rabbit, NATS)
- Distributed databases (Hadoop, Cassandra)
- Storage subsystems (CEPH, GlusterFS, HDFS),
the traditional auto-scaling mechanisms are just not enough.
Heavy calculations must be performed to determine the I/O bottlenecks. Rebalancing the data after a scaling event can take hours, depending on your data volume, and could result in data loss if not properly designed.
We will deep dive into this type of workload and walk you through code samples you can apply in your own environment.
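A rough sense of why rebalancing takes hours can be had with simple arithmetic: under consistent hashing, adding nodes moves roughly new/(old+new) of the data, and streaming is usually throttled (for example, Cassandra's stream_throughput_outbound_megabits_per_sec defaults to 200 Mbit/s, about 25 MB/s). The helper names and example numbers below are illustrative, not taken from the talk:

```python
def rebalance_gb(total_data_gb, current_nodes, new_nodes=1):
    """Estimate of data moved when scaling a consistent-hashing cluster:
    each new node takes ~1/(current+new) of the total data."""
    return total_data_gb * new_nodes / (current_nodes + new_nodes)

def rebalance_hours(total_data_gb, current_nodes, throttle_mb_s=25):
    """Lower bound on rebalance time given a streaming throttle
    (25 MB/s ~ Cassandra's default 200 Mbit/s outbound stream limit)."""
    moved_gb = rebalance_gb(total_data_gb, current_nodes)
    return moved_gb * 1024 / throttle_mb_s / 3600

# 10 TB across 10 nodes, adding one node:
print(round(rebalance_gb(10240, 10)))    # ~931 GB must move
print(round(rebalance_hours(10240, 10), 1))  # ~10.6 hours at the default throttle
```

This is a lower bound: compaction, validation, and competing production traffic typically stretch it further, which is why naive auto-scaling triggers fall short here.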
This document provides a summary of common commands and configuration files used in Ubuntu systems for privileges, networking, display, package management, applications, services, and system recovery. It includes commands for sudo access, configuring networking and wireless settings, starting and stopping services, installing and removing packages, checking the system version, and rebooting the system through keyboard shortcuts. Configuration files like /etc/network/interfaces and /etc/X11/xorg.conf are also listed.
Container security: seccomp, network and namespaces (Kiratech)
The slides aim to highlight the new security features introduced in the latest Docker release, both by describing how they work and by showing, through several demos, their potential impact in production environments. A risk-analysis comparison is made between hosts running engine versions below release 1.9 and the new versions, with attention to current gaps and future implementations.
JavaScript is the new black - Why Node.js is going to rock your world - Web 2... (Tom Croucher)
Node.js allows JavaScript to be used for server-side programming. It is a popular choice because JavaScript programmers can reuse code and libraries on both the client-side and server-side. Node.js is also fast and non-blocking which allows for high concurrency levels. The Node.js ecosystem includes many libraries like Express for building web servers and Mustache.js for templating that make building server-side JavaScript applications easy.
The document provides steps for setting up and configuring DiskSuite 4.0 on a new machine. It describes initially partitioning the disks with separate partitions for root, var, backup, swap, mirror, usr, and home. It also provides an example initial /etc/vfstab file. It then describes steps to mirror the root (/), opt, and var partitions using DiskSuite, which involves adding configuration to the md.tab file, creating state database replicas on dedicated partitions using metadb, and encapsulating the root partitions.
The document discusses setting up FreeBSD on DigitalOcean virtual private servers (VPS). It provides details on DigitalOcean's pricing plans and features for droplets. It then describes the author's experience deploying FreeBSD 10.1 and FreeBSD AMP 10.1 droplets on DigitalOcean, including summaries of dmesg output and installed packages.
The document discusses the LMAX Disruptor, a high performance inter-thread messaging library. It describes problems with traditional queues and linked lists for inter-thread messaging due to contention. The Disruptor uses a single-producer principle and volatile variables to synchronize producers and consumers without locking, enabling high throughput. Key components include a ring buffer, events, publishers, processors and barriers. The Disruptor provides low latency, high throughput messaging and zero garbage collection overhead.
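To make the sequencing idea concrete, here is a minimal single-producer/single-consumer ring buffer in the spirit of the Disruptor. It is only a sketch: the real Disruptor relies on Java volatile sequence fields, memory barriers, and cache-line padding, which plain Python cannot express.

```python
class RingBuffer:
    """SPSC ring buffer, Disruptor-style: pre-allocated slots,
    monotonically increasing sequence numbers, and a power-of-two
    index mask instead of modulo arithmetic."""

    def __init__(self, size=8):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.mask = size - 1
        self.slots = [None] * size   # pre-allocated and reused: no GC churn
        self.produced = 0            # next sequence to publish (the cursor)
        self.consumed = 0            # next sequence to read

    def publish(self, event):
        if self.produced - self.consumed > self.mask:
            return False             # buffer full: producer must back off
        self.slots[self.produced & self.mask] = event
        self.produced += 1           # consumers gate on this sequence
        return True

    def consume(self):
        if self.consumed == self.produced:
            return None              # nothing published yet
        event = self.slots[self.consumed & self.mask]
        self.consumed += 1
        return event

rb = RingBuffer(8)
for i in range(3):
    rb.publish(i)
print([rb.consume() for _ in range(3)])  # [0, 1, 2]
```

Because producer and consumer each write only their own counter, no lock is needed in the single-producer case; the index mask (`seq & (size-1)`) is the trick that makes the wrap-around branch-free.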
This document provides a collection of Unix/Linux commands useful for system administration and advanced users. It covers topics such as system information, processes, file systems, networks, encryption, version control, software installation and more. Each section provides concise explanations of commands within that topic area. The reader is expected to have a working knowledge of Unix-like systems.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover Test Automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Built by LinkedIn to process and store high-volume activity stream data, but it's really a general-purpose messaging system...
At its heart, it's a pub-sub messaging system...
It starts with a broker.
Publishers connect to the broker
and send their messages.
So we connect some consumers and they can pull messages. Note that when they connect, they'll receive all messages for a topic, not just those sent since they connected - more on that later...
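The flow above can be sketched as a toy in-memory model (illustrative only, not the real Kafka API or wire protocol): the broker keeps an append-only log per topic, consumers pull rather than being pushed to, and a late subscriber can still read every earlier message.

```python
from collections import defaultdict

class Broker:
    """Toy pub-sub model: an append-only log per topic.
    The broker never tracks what a consumer has read."""

    def __init__(self):
        self.logs = defaultdict(list)  # topic -> append-only message log

    def publish(self, topic, message):
        self.logs[topic].append(message)

    def pull(self, topic, position=0):
        # Consumers pull from any position they choose.
        return self.logs[topic][position:]

broker = Broker()
broker.publish("activity", "page_view")
broker.publish("activity", "click")

# A consumer connecting only now still receives all earlier messages.
late_consumer = broker.pull("activity")
assert late_consumer == ["page_view", "click"]
```

The key design point this models: because the log is retained and addressed by position, "when did the consumer connect" stops mattering.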
But it's also distributed, which is to say...
we can have multiple brokers in multiple places and aggregate them together. Internally we can also partition within topics to allow parallel consumption, but that's for another talk...
Before we get into what makes it particularly different (persistence), it's useful to understand some of the engineering decisions behind how it works. Performance is interesting because the behaviour of disks and memory has informed the way Kafka has been built to embrace disk persistence.
Research from an ACM paper: values/sec is the number of 4-byte integer values read per second from a 1-billion-long (4 GB) array, on disk or in memory.

Kafka uses the OS's default page caching rather than custom in-memory stores. Given all disk writes/reads will be cached anyway, this means we avoid paying the caching overhead of objects within the JVM. Rather than maintaining everything in memory and flushing when necessary, everything is written immediately; a configurable flush policy determines how much data is at risk. Similar to Varnish.
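The write-immediately, flush-periodically policy can be sketched like this (a minimal illustration, not Kafka code; the flush interval here is a made-up stand-in for Kafka's configurable flush settings):

```python
import os
import tempfile

# Every message is handed to the OS via write() immediately, so the
# kernel page cache does the buffering; an explicit fsync happens
# only per the flush policy, which bounds how much data is at risk
# if the process crashes before the next flush.
FLUSH_INTERVAL = 2  # hypothetical policy: fsync every 2 messages

path = os.path.join(tempfile.mkdtemp(), "topic.log")
log = open(path, "ab")
unflushed = 0

def append(payload: bytes):
    global unflushed
    log.write(payload)          # handed to the OS page cache at once
    unflushed += 1
    if unflushed >= FLUSH_INTERVAL:
        log.flush()
        os.fsync(log.fileno())  # now durable on disk
        unflushed = 0

for msg in [b"a", b"b", b"c"]:
    append(msg)

log.close()                     # close flushes the final message too
assert open(path, "rb").read() == b"abc"
```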
It starts with a topic, a text description for the messages contained within. We use it to describe how to deserialize the message bytes.
So we send a message to the topic - what happens?
Kafka creates a file and it persists the message, which is to say it hands it off to the OS to write. Files are just sets of bytes, nothing clever. Internally it abstracts the collection of message bytes into a MessageSet, which is then backed by a file. So what does each message look like?
So for a message of n bytes, the payload is n - 9 bytes: with a 91-byte payload we have a 100-byte message, which means our next message would start at offset 100.
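The arithmetic above can be checked with a small framing sketch. The exact field layout here (length prefix, magic byte, checksum) is an assumption chosen to match the 9 bytes of overhead the slide describes, not a claim about the precise Kafka wire format:

```python
import struct
import zlib

OVERHEAD = 9  # assumed: 4-byte length + 1-byte magic + 4-byte CRC

def frame(payload: bytes) -> bytes:
    """Frame a payload with 9 bytes of overhead, so an n-byte
    on-disk message carries an (n - 9)-byte payload."""
    body = struct.pack(">bI", 0, zlib.crc32(payload)) + payload
    return struct.pack(">I", len(body)) + body

msg = frame(b"x" * 91)   # a 91-byte payload...
assert len(msg) == 100   # ...makes a 100-byte on-disk message

# Offsets are byte positions in the file: the next message
# starts exactly where this one ends.
next_offset = len(msg)
assert next_offset == 100
```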
And we can see our offsets at the bottom...
So we have the offsets, which lets us send all messages to consumers, not just those that were sent after they connected.
It's up to the consumer to remember what they've consumed, but this means you can re-consume an entire set of messages easily, which is very useful when integrating with long-term storage like HDFS. A quick look at the way it works...
Our input to the Hadoop job is a token file that specifies the offset to read from, the topic, etc. Having read the token, the mapper connects and consumes messages from the given offset. The mapper outputs two sets of data: the mapped output, such as the message payloads, and an updated token file with the last-read offset. This is the key: successful completion of the job results in new metadata for the next run plus the output data, meaning that if the job fails we can re-run and it'll consume from the last consumed offset.
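The token-file pattern can be sketched as follows (names and file format are hypothetical, not the actual LinkedIn Hadoop consumer): the run's input token carries the topic and last-consumed offset, and a successful run emits both the consumed payloads and the next token.

```python
import json
import os
import tempfile

workdir = tempfile.mkdtemp()

def write_token(path, topic, offset):
    # The token is the job's checkpoint: topic + last-consumed offset.
    with open(path, "w") as f:
        json.dump({"topic": topic, "offset": offset}, f)

def run_job(token_path, log, out_token_path):
    token = json.load(open(token_path))
    consumed = log[token["offset"]:]      # consume from the last offset
    # Success produces BOTH outputs: the data and the next token.
    # If the job fails before this point, the old token is untouched,
    # so a re-run simply consumes from the same offset again.
    write_token(out_token_path, token["topic"], len(log))
    return consumed

log = ["m0", "m1", "m2", "m3"]            # the topic's message log
t0 = os.path.join(workdir, "token0.json")
t1 = os.path.join(workdir, "token1.json")

write_token(t0, "activity", 2)            # previous run stopped at offset 2
out = run_job(t0, log, t1)
assert out == ["m2", "m3"]
assert json.load(open(t1))["offset"] == 4 # next run starts at offset 4
```

The design choice this captures: the checkpoint travels with the output, so retry-after-failure is just "run the job again with the old token".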
The newly created output becomes the next input.
And this is why Kafka is an interesting messaging system: suitable for both batch and realtime.