The scheduled Redis speaker was sick, so I whipped this up in about an hour and filled in on a different subject. It's a bit crude, but it gives you a big-picture view of how to build a simple AI application using BigCouch. The accompanying video is up at http://www.youtube.com/watch?v=QEBDNxbSRuk
This document provides a summary of Mike Browning's experience, education, and objectives. It outlines his previous roles in sales and account management from 1984 to 2009 at companies like Dialogic Communications, Dell Computer Corp, Southeast Appliance Distributing, Tennessee Mat Company, and L'Aire Liquide. It also lists his education as a Bachelor of Science degree in Marketing from Brigham Young University. He describes himself as an enterprising and adaptive team player with 27 years of sales experience.
This document discusses new technologies emerging beyond Hadoop and the original "Google canon" to process big data more efficiently. It introduces Percolator for incremental processing, Dremel for ad-hoc queries on nested data, and Pregel for large-scale graph processing. These systems address some of Hadoop's limitations, offering lower latency, easier analysis of diverse data types, and scalability to graphs with billions of vertices, and represent the next generation of big data technologies on the horizon.
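The vertex-centric model behind Pregel can be sketched in a few lines. This is an illustrative toy using plain Python dictionaries, not Google's distributed API: in each superstep, active vertices broadcast their value to neighbors, and any vertex that sees a larger value adopts it and stays active. The computation halts when no vertex changes, which is the classic "propagate the maximum" Pregel example.

```python
def pregel_max(graph, values):
    """Toy Pregel-style BSP loop: graph is {vertex: [neighbors]},
    values is {vertex: number}; returns values with the maximum
    propagated to every vertex in each connected component."""
    active = set(graph)                       # superstep 0: every vertex is active
    while active:
        messages = {}                         # send phase: broadcast value to neighbors
        for v in active:
            for n in graph[v]:
                messages.setdefault(n, []).append(values[v])
        active = set()
        for v, inbox in messages.items():     # compute phase: adopt the largest value seen
            best = max(inbox)
            if best > values[v]:
                values[v] = best
                active.add(v)                 # only changed vertices stay active
    return values
```

A real Pregel runtime shards vertices across workers and exchanges messages over the network, but the superstep structure is the same.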
The document describes revisions made to the table of contents page for a music magazine called "Flava." Key details include:
- Placing the magazine cover image and title "Flava" on the left third of the page.
- Adding editorial pillars ("News, Features, Music") and 2-4 cover lines below each pillar.
- Including an editorial letter and image of featured artist "Pretty In Pink" on the right third.
- Revising layout and formatting elements like fonts, sizes, and alignments to improve professional appearance and readability.
Study: The Future of VR, AR and Self-Driving Cars - LinkedIn
We asked LinkedIn members worldwide about their levels of interest in the latest wave of technology: whether they’re using wearables, and whether they intend to buy self-driving cars and VR headsets as they become available. We also asked them about their attitudes to technology and to the growing role of Artificial Intelligence (AI) in the devices that they use. The answers were fascinating – and in many cases, surprising.
This SlideShare explores the full results of this study, including detailed market-by-market breakdowns of intention levels for each technology – and how attitudes change with age, location and seniority level. If you’re marketing a tech brand – or planning to use VR and wearables to reach a professional audience – then these are insights you won’t want to miss.
The document provides an overview of the skills and experience of Elmer Donavan related to business intelligence and SQL Server technologies. It includes sections summarizing his skills in SQL Server Integration Services, SQL Server Analysis Services, SQL Server Reporting Services, and Microsoft PerformancePoint. Sample projects are described to showcase work with SSIS, SSAS, SSRS and dashboards in SharePoint.
P02 sparse coding cvpr2012 deep learning methods for vision - zukun
The document summarizes a tutorial on deep learning and sparse coding. It discusses how sparse coding can be used as an effective building block to learn useful features from data by designing feature learners instead of hand-crafting features. Sparse coding involves learning an overcomplete dictionary of bases to sparsely represent input data. It provides better results than traditional bag-of-words models for image classification. When applied to digit recognition on MNIST data, sparse coding learns bases that resemble digit shapes as the sparsity parameter increases, improving classification accuracy.
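The sparse inference step described above, representing an input as a sparse combination of dictionary atoms, can be sketched with ISTA, a standard proximal-gradient solver for the lasso objective. This is a minimal illustration, not the tutorial's own code; the dictionary `D` is assumed to be given rather than learned, and `lam` plays the role of the sparsity parameter:

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA:
    alternate a gradient step on the quadratic term with
    soft-thresholding, which drives small coefficients to zero."""
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)              # gradient of the reconstruction term
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-threshold
    return a
```

Raising `lam` zeroes out more coefficients, which mirrors the tutorial's observation that increasing the sparsity parameter yields more distinct, digit-shaped bases.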
The document provides an overview of SQL Server 2008 business intelligence capabilities including SQL Server Analysis Services (SSAS) for online analytical processing (OLAP) cubes and data mining models. Key capabilities covered include new aggregation designer, simplified cube/dimension wizards in SSAS, improved time series and cross-validation algorithms in data mining, and the ability to use Excel as both an OLAP cube and data mining client and model creator.
Hadoop World 2011: BI on Hadoop in Financial Services - Stefan Groschupf, Data... - Cloudera, Inc.
This session is designed for banking and other financial-services managers with technical experience, as well as engineers. It discusses business intelligence platform deployments on Hadoop, including cost/performance, customer analytics, value-at-risk analytics, and IT SLAs.
Architecting Scalable Applications in the Cloud - Clint Edmonson
It is increasingly important to architect applications for both growth and an optimal user experience. Modern development tools allow you to develop fantastic applications, but there are pitfalls in architecting them the wrong way. This talk will discuss industry-proven best practices for building highly scalable web sites and applications and how they might be implemented on Windows Azure.
The document introduces CloudBees, a platform as a service (PaaS) for Java applications. It discusses how CloudBees handles the entire lifecycle of cloud application development and deployment without the need for servers, virtual machines, or IT administration. The platform provides development tools through DEV@cloud and runtime services through RUN@cloud. It also demonstrates how to store code, build, test, and continuously deploy a sample application to the CloudBees platform.
When you're handling big data in the modern world, you will come to a point where you can't just pick a “one size fits all” approach anymore. However, to get the results you want, you also don’t have to spend big money on fire-breathing hardware or expensive software. AWS offers a beautiful array of open and commercial database choices, from do-it-yourself to fully managed services which handle scaling, and gives you powerful tools to choose the right architecture. You could choose from MySQL, RDS, Oracle, SQL Server, MongoDB, DynamoDB, Cassandra, ElastiCache, Redis, and SimpleDB, and our customers use them for different use cases. Each has different strengths, and this session highlights when you would want to choose each, with examples of how we use each to solve our big data challenges and why we made those decisions. We profile some of the choices available to you - MySQL, RDS, ElastiCache, Redis, Cassandra, MongoDB and DynamoDB - and three customer case studies on RDS, ElastiCache and DynamoDB.
The document discusses various AWS database options and decision factors for choosing between SQL and NoSQL databases on AWS. It provides tips from three companies: Edmodo optimizes for manageability and scale using RDS, Obama for America optimizes for app velocity and scale, and BrandVerity leverages both YesSQL and NoSQL databases. The document also discusses factors to consider when choosing between SQL and NoSQL databases, such as application needs, transactions, scale, performance, availability, and skills.
SSIS provides capabilities for ETL operations using a control flow and data flow engine. It allows importing and exporting data, integrating heterogeneous data sources, and supporting BI solutions. Key concepts include packages, control flow, data flow, variables, and event handlers. SSIS can be optimized for scalability through techniques like parallelism, avoiding blocking transformations, and leveraging SQL for aggregations. Performance can be monitored using tools like SQL Server logs, WMI, and MOM. SSIS is interoperable with data sources like Oracle, Excel, and flat files.
Microsoft Azure is changing. Its database component (Windows Azure SQL Database) is changing even faster. In this session I would like to show those who haven't seen it, and remind those who already know something about it, what WASD is all about, what changes have taken place, and what we can expect from this database. For the brave, there will be an opportunity to connect to a cloud account and test these solutions themselves.
The document discusses SQL Server 2008 data mining capabilities. It provides an overview of data mining concepts and scenarios, demonstrates the data mining lifecycle process using SQL Server tools, and highlights new features in SQL Server 2008 such as improved time series algorithms and holdout support for model validation. Resources for learning more about SQL Server data mining are also listed.
Use machine learning to solve classification problems by building binary and multi-class classifiers.
Does your company face business-critical decisions that rely on dynamic transactional data? If you answered “yes,” you need to attend this free event featuring Microsoft analytics tools. We’ll focus on Azure Machine Learning capabilities and explore the following topics:
- Introduction to two-class classification problems.
- Classification algorithms (two-class classification).
- Available algorithms in Azure ML.
- Real business problems that are solved using two-class classification.
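As a generic illustration of two-class classification (not the Azure ML implementation itself), a minimal logistic-regression classifier can be trained with gradient descent; the data and hyperparameters below are made up for the example:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a two-class logistic regression with plain gradient descent.
    X: (n_samples, n_features), y: 0/1 labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)          # gradient step on weights
        b -= lr * np.mean(p - y)                  # gradient step on bias
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)            # threshold at p = 0.5
```

Azure ML offers several two-class algorithms (logistic regression, boosted trees, SVMs, and others); this sketch just shows the shape of the problem: features in, a 0/1 decision out.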
WebSphere Commerce v7 provides utilities for preparing and loading data into a WebSphere Commerce database from various sources like CSV and XML files. The Data Load utility is the recommended tool which can transform source data to business objects, allocate business objects to physical data, and load the data into the database in a single operation. Customizations to the data loading process include using custom data readers, column handlers, and business object mediators.
This document provides an overview of data mining in SQL Server 2008. It discusses the core functionality and new/advanced features including improved time series algorithms, holdout support for partitioning data, and cross-validation. It also outlines the data mining lifecycle and interfaces like DMX and XMLA that can be used to create and manage models. Excel add-ins and functions are demonstrated for exploring and querying models.
Just over a year ago (before becoming the full time chair and advocate of QCon London, San Francisco, and New York), my main role was with HPE as the principal architect for a client in the US public sector.
The systems we supported were responsible for personnel information, scholarship decisions, and record management. Like so many others, we were also faced with legacy applications, COTS product integrations, polyglot code bases, and often brittle deployments. In an effort to decouple code bases and address some of these issues, we started advocating for a Microservice architecture and trying to distinguish it from the SOA practices of the past.
Now, it’s a year later. I have had the incredible opportunity to have access to architects, engineers, and leaders from some of the world’s most respected software companies. These are companies like Uber, Microsoft, Netflix, Apple, Google, Slack, Pinterest, and Etsy. I’ve had the chance to have one-on-one discussions with Chief Architects, developers, and engineers building the apps I most admire and use every day (some leveraging Microservices, some embracing Monoliths, and others falling somewhere in between).
Patterns & Practices of Microservices covers some of the things I wish I had known before beginning a push towards Microservices just over a year ago. It’s the practices of companies leveraging Microservices, it’s the technology tradeoffs when deciding between Monoliths and Microservices, and it’s the advice I’ve heard in interviewing, podcasting, and iterating on presentations from software giants like Adrian Cockcroft, Matt Ranney, Josh Evans, Martin Thompson, and literally hundreds of other engineers who drop knowledge at QCons around the world.
“A broad category of applications and technologies for gathering, storing, analyzing, sharing and providing access to data to help enterprise users make better business decisions” -Gartner
Microsoft Cloud BI Update 2012 for SQL Saturday Philly - Mark Kromer
This document provides an overview and update of Microsoft's Cloud Business Intelligence (BI) solutions in version 3.0 from June 2012. It discusses the objectives of Cloud BI including providing data access and answers to business questions anytime from mobile devices. An overview of the session covers Windows Azure, SQL Azure, SQL Azure Reporting Services, mobile BI delivery, cloud data integration, data mining in the cloud, and hybrid scenarios. Key features of SQL Azure like import/export, data-tier applications, data sync, and federations for database scale-out are also summarized.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
HCL Notes and Domino License Cost Reduction in the World of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
TrustArc Webinar - 2024 Global Privacy Survey - TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack - shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
How to Get CNIC Information System with Paksim Ga.pptx
Oscon miller 2011
1. Bayes on your (Big)Couch
Mike Miller
_milleratmit
July 25, 2011
2. I want my app to do _this_
Mike Miller, Oscon 2011 2
3. CouchDB in a slide
• Schema-free document database management system
  - Documents are JSON objects
  - Able to store binary attachments
• RESTful API
  - http://wiki.apache.org/couchdb/reference
• Views: custom, persistent representations of your data
  - Incremental MapReduce with results persisted to disk
  - Fast querying by primary key (views stored in a B-tree)
• Bi-directional replication
  - Master-slave and multi-master topologies supported
  - Optional ‘filters’ to replicate a subset of the data
  - Edge devices (mobile phones, sensors, etc.)
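A CouchDB view is just a pair of JavaScript functions. As a minimal sketch of the incremental MapReduce mentioned above (the "count people per class" view is invented here for illustration, not part of the talk's code):

```javascript
// Hypothetical CouchDB view: count documents per class ("boy"/"girl").
// CouchDB runs map() once per document and persists results in a B-tree,
// so only changed documents are reprocessed on update.
function map(doc) {
  if (doc.class) {
    emit(doc.class, 1);
  }
}

// Equivalent of the built-in "_count" reducer, written out in JavaScript:
function reduce(keys, values, rereduce) {
  return values.reduce(function (a, b) { return a + b; }, 0);
}
```

In practice the built-in Erlang reducer `_count` would replace the custom JavaScript reduce above, since built-ins run much faster.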
4. BigCouch = Couch+Scaling
• Open Source, Apache License
• Horizontal Scalability
  - Easily add storage capacity by adding more servers
  - Computing power (views, compaction, etc.) scales with more servers
• No SPOF
  - Any node can handle any request
  - Individual nodes can come and go
• Transparent to the Application
  - All clustering operations take place “behind the curtain”
  - Looks (mostly) like a single-server instance of CouchDB
6. Sample Data
[Scatter plot of the sample data: Height [in] vs. Weight [lbs], with Girls and Boys plotted separately]
7. Naive Bayes Classifier
[Plot: Gaussian probability density for male height, parameterized by the mean and variance estimated from the training data]
8. Implementation Plan
Model people as documents in CouchDB
Calculate means/variances with MapReduce
Run classifier in CouchDB as a post-MapReduce hook (“_list”)
[Same Height [in] vs. Weight [lbs] scatter plot as slide 6]
• Note:
  - do not need to specify fields to use in classification
  - multi-class implementation
  - continuous, incremental training! Results improve as training data trickles in.
9. 3 ways to follow along
couchapp: a Python tool to push/pull apps to/from other CouchDBs
> sudo easy_install -U couchapp
> couchapp clone 'http://millertime.cloudant.com/bitb'
create an account at cloudant.com
> curl -X PUT 'http://<username>:<pwd>@<username>.cloudant.com/bitb'
> couchapp push 'http://<username>:<pwd>@<username>.cloudant.com/bitb'
github
> git clone git@github.com:mlmiller/bayes.git
CouchDB replication to your cloudant account
bonus: brings along the data, too!
10. The Code
[Architecture diagram: the post-MapReduce hook (“_list” method) feeds the classifier (probability calculator); a client-side test runs via node.js; view code calculates means and variances; you can ignore everything else]
11. Data Model
Arbitrary number of numerical fields allowed
‘class’ => training data
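A document under this data model might look like the following sketch (the field names mirror the talk's height/weight demo; the `_id` value is invented for illustration):

```javascript
// A hypothetical training document: any numeric fields participate in
// classification, and the "class" field marks it as labeled training data.
const trainingDoc = {
  _id: "person-0001",
  class: "boy",    // the label consumed by the training view
  height: 70.2,    // inches
  weight: 168.6    // pounds
};

// An unlabeled document (no "class" field) is the thing we later classify.
const queryDoc = { height: 65.65, weight: 168.61 };
```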
12. Training via MapReduce
‘class’ => training data
views/training/map.js
Calculate mean/variance for all numerical fields in a document
emit: ([<class>, <field>], <value>)
Reduce: _stats (Erlang builtin)
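Based on the emit signature above, `views/training/map.js` plausibly looks something like this sketch (the actual code lives in the talk's repo; this is only a reconstruction):

```javascript
// Sketch of views/training/map.js: for every numeric field in a labeled
// document, emit a compound key [class, field] with the field's value.
// The built-in Erlang "_stats" reducer then maintains running statistics
// (count, sum, min, max, sum of squares) per [class, field] group.
function map(doc) {
  if (!doc.class) return;                    // skip unlabeled documents
  for (var field in doc) {
    if (field !== "class" && typeof doc[field] === "number") {
      emit([doc.class, field], doc[field]);
    }
  }
}
```

Because `_stats` is incremental, new training documents update the group statistics without recomputing the whole view.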
14. Bayes: Trained State
Count, Min, Max, Mean, Variance
Automatically updated as new training data arrives
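CouchDB's `_stats` reducer actually stores sum, count, min, max, and sum of squares per group; the mean and variance fall out arithmetically. A sketch, assuming the standard `_stats` output fields:

```javascript
// Derive Gaussian parameters from a CouchDB "_stats" reduce value.
// _stats maintains { sum, count, min, max, sumsqr } per group, which is
// sufficient to recover mean and variance incrementally.
function gaussianParams(stats) {
  const mean = stats.sum / stats.count;
  // population variance: E[x^2] - (E[x])^2
  const variance = stats.sumsqr / stats.count - mean * mean;
  return { mean: mean, variance: variance };
}
```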
15. Bayes Classifier
lib/bayes_classifier.js
Load state from DB
No assumptions on field names
Calculate prob. for all possible hypotheses
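A minimal sketch of such a classifier (this is not the repo's actual `lib/bayes_classifier.js`, just an illustration of the Gaussian naive Bayes scoring it describes):

```javascript
// Log-density of a 1-D Gaussian; logs avoid underflow when multiplying
// many per-field probabilities together.
function logGaussian(x, mean, variance) {
  return -0.5 * Math.log(2 * Math.PI * variance)
         - Math.pow(x - mean, 2) / (2 * variance);
}

// Score every class (hypothesis) against the query document and return
// the maximum-likelihood class. Field names come from the query document
// itself, so nothing is hard-coded.
function classify(queryDoc, model) {
  // model: { className: { fieldName: { mean, variance } } }
  let best = null, bestScore = -Infinity;
  for (const cls in model) {
    let score = 0;
    for (const field in queryDoc) {
      const params = model[cls][field];
      if (params) {
        score += logGaussian(queryDoc[field], params.mean, params.variance);
      }
    }
    if (score > bestScore) { bestScore = score; best = cls; }
  }
  return best;
}
```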
16. A brief aside...
• Let’s test our classifier
  - Select 2000 documents for the test
  - Randomly choose 1000 documents for the training sample
  - Remaining 1000 documents used for validation
• Simulate continuous training
  - Add training documents one at a time
  - After each addition, test on all 1000 documents of our validation sample
  - Record and plot the fraction of the validation sample properly classified
17. A brief aside...
Dramatic improvement with additional training data
[Plot: fraction of the validation sample correctly classified vs. number of documents in the training set]
18. ... and back to the code
19. test it yourself
• Client side test via node.js
> ./test.js height=<some number> weight=<some number>
Classifier runs server side, configured in line 6 of test.js
Can point this to your DB
20. Running as CouchApp
create a database (e.g., ‘bitb’) at cloudant.com
add data
then push your code
> couchapp push 'http://<user>:<pwd>@<usr>.cloudant.com/bitb'
HTML & CSS served directly from BigCouch to the browser
Heavy lifting of classification done server side
http://millertime.cloudant.com/bitb/_design/bayes/index.html
21. Running as API (_list)
> curl 'http://millertime.cloudant.com/bitb/_design/bayes/_list/index/training?height=65.65&weight=168.61&format=json&group=true'
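A `_list` function backing this endpoint might look like the following sketch (a hypothetical reconstruction, not the repo's actual list function; `getRow()` and `send()` are CouchDB's standard list-function API, and the query parameters arrive as strings in `req.query`):

```javascript
// Sketch of a CouchDB _list function that consumes the grouped training
// view (rows keyed by [class, field] with _stats values) and classifies
// the numeric query parameters with Gaussian naive Bayes.
function listClassifier(head, req) {
  var model = {}, row;
  while ((row = getRow())) {                    // row.key = [class, field]
    var cls = row.key[0], field = row.key[1], s = row.value;
    var mean = s.sum / s.count;
    (model[cls] = model[cls] || {})[field] = {
      mean: mean,
      variance: s.sumsqr / s.count - mean * mean
    };
  }
  var best = null, bestScore = -Infinity;
  for (var c in model) {
    var score = 0;
    for (var f in model[c]) {
      var x = parseFloat(req.query[f]);         // query params are strings
      if (isNaN(x)) continue;
      var p = model[c][f];
      score += -0.5 * Math.log(2 * Math.PI * p.variance)
             - Math.pow(x - p.mean, 2) / (2 * p.variance);
    }
    if (score > bestScore) { bestScore = score; best = c; }
  }
  send(JSON.stringify({ class: best }));
}
```

Because the list function runs server side against the always-current view, every request classifies with the latest training state.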
22. Wrapping Up: Bayes on BigCouch
• Simple code, powerful results
  - Light requirements on the data model
  - Can be relaxed with more complex view code
• Continuous learning is very powerful
  - e.g., time-based learning (automatically adapt to changing conditions)
• Classification can be performed client- or server-side
  - Push documents into the DB and they are auto-tagged!
• More sophisticated classifiers easily implemented
  - e.g., Cloudant Search pre-calculates and exposes TF-IDF scores for textual classification, weighted classifiers, etc.
• View engine allows simple deployment of sophisticated domain libraries in mass parallel
  - e.g., Lucene, R, SciPy, NumPy, Matlab/Octave, etc.
23. Give it a spin
Hosting, Management, Support for CouchDB and BigCouch
http://cloudant.com
http://github.com/cloudant/bigcouch