This document summarizes an analysis of sales data from Abuelo's Restaurant conducted by students. It describes preprocessing tasks to handle data quality issues and add new variables. Regression analysis found new menu items had a significant impact on profit, while value items did not. Further exploration of refined data using 3D bar charts and segment profiling was performed to analyze the effect of value items on total profit.
Isqs6347 team5 proposal_032513
ISQS 6347 - Data & Text Mining
Spring 2013, Team 5

Project title: Data Analysis for Abuelo's
Class number / Semester: ISQS 6347 Spring 2013 – Section 1
Student names: Preeti Prajapati, Neha Soam, Ming Kuo Hui
Type of project: Data Mining Academic Project
Nature and source of the dataset: Available in SAS file format; Source – Abuelo's Restaurant
Completion date: May 16, 2013
Table of Contents

Introduction
    Business Background
    Objective
Project Overview
    Dataset Availability and Description
    Table 1: Attributes & their Description
    Data Quality and Preparation
    Table 2
Data Exploration & Preprocessing
    Data Preparation
    Figure 3: Non-Unique UID and Its Number of Missing Values (via SAS Enterprise Guide)
    Figure 4: Output Result from SAS Enterprise Miner
    Preprocessing Tasks
Data Mining Methodologies
Primitive Results and Findings
    Data Filtration & Addition of New Variables
    Refined Data's Exploration
Introduction
The purpose of this project is to analyze a restaurant's sales data and to generate a model that would aid the restaurant's management decisions. The restaurant examined in this project, Abuelo's, is a real restaurant, and all of the data collected are real data. By collecting, exploring, processing, and analyzing this real-life data with the data mining techniques we learned in lecture, we are able to generate a model that is useful and can be applied to the restaurant's decision making.
Business Background
Abuelo's is a Mexican restaurant that has established stores in several cities since 1989. Abuelo's has consistently been on the leading edge of Mexican cuisine, combining menu creativity, outstanding food and beverage quality, colorful plate presentations, and superior service in an impressive Mexican courtyard-themed atmosphere. Every dish is made to order from scratch using only the freshest premium ingredients.
Objective
Abuelo's is planning to adopt a new menu to replace the old one, and the restaurant has been conducting trials of new Value Items. A value item has a lower cost as well as a lower profit margin than its full version (e.g., Chicken Zucchini and Chicken Zucchini Lite), but value items are ordered more frequently than other items. The new menu differs from the old one in that it is extended with Value Items and some other new items that are not treated as value items.
The main objective of this project is to analyze the effect of value items on total profit. The results of this project are expected to aid decisions about which value items should be removed from or kept on the menu.
Project Overview
Dataset Availability and Description
The data for Abuelo's is available for years 2011 and 2012 in Excel and SAS files. The attributes and descriptions of the available data are listed in the table below:
UID: Unique ID representing a combination of item number and store ID
Store ID: Unique ID assigned to each store
Item Number: Number assigned to an item
Minor Category: Category of item
Product Description: Description of item
Quantity: Quantity sold for each item in different stores
Avg Unit Price: Average unit price of an item
Avg Unit Cost: Average unit cost of an item
Guest Count: Sum of customer visits in one store in a particular week
Week IND: Number assigned to each week in one year
Num Item Number: Number assigned to an item
Table 1: Attributes & their Description
Note: The dataset has approximately 1,827,700 rows and has minimal missing values.
Data Quality and Preparation
The dataset comes from a previous student project; therefore, many data preparation tasks had already been done, and the dataset had already been transformed into SAS file format. However, after exploring the dataset, we observed some issues that may require further consideration and adjustment before the data analysis and mining stage:
» UID is not a unique identifier, and it has no value for 2406 records.
Figure 1: Non-Unique UID and Its Number of Missing Values (via SAS Enterprise Guide)
» The purpose of the Num_Item_Number attribute is unclear – the values it contains are the same as those of Item_Number, but their data types are different. In addition, Num_Item_Number has 187 missing values (while Item_Number has none).
Figure 2: Output Result from SAS Enterprise Miner
» Unclear variable values:
Some Avg_Unit_Price values are 0, indicating an item price of $0.
Some Avg_Unit_Cost values are 0 or negative.
» There are 28 Item_Numbers that have duplicate values but different Product_Descriptions; a quick programmatic check of this issue and of the missing UIDs is sketched after Table 2.

Table of Items that Have the Same Number but Different Descriptions (first two shown)

Item_Number   Minor_Category   Product_Description
101090        Sub              Cooked Taco Meat BF 2.5 oz - Sub
101090        Sub              Cooked Taco Meat CK 2.5 oz - Sub
12067         Margaritas       Patron Shaken Margarita
12067         Margaritas       Shaken Margarita
Table 2
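Both of these checks can be reproduced with a short PROC SQL step. This is a minimal sketch, not part of the original analysis; work.abuelos_sales is an assumed name for the source table, which the report never names.

proc sql;
    /* count records with a missing UID (the report finds 2406) */
    select count(*) as n_missing_uid
        from work.abuelos_sales          /* assumed table name */
        where missing(UID);

    /* list Item_Numbers carrying more than one Product_Description (the report finds 28) */
    select Item_Number,
           count(distinct Product_Description) as n_descriptions
        from work.abuelos_sales
        group by Item_Number
        having count(distinct Product_Description) > 1;
quit;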
Data Exploration & Preprocessing
The preliminary data mining tasks include data preparation, data exploration, data model selection, and discussion of primitive findings. By performing preliminary data mining, we are able to examine data quality and observe issues such as missing, duplicated, or erroneous data. The appropriate methodologies are chosen and applied based on the nature of the dataset and the objective of the project: to analyze the effect of valued items on total profit.
Data Preparation
The dataset, available in SAS file format, contains the data and information shown in Table 1.

UID: Unique ID representing a combination of item number and store ID
StoreID: Unique ID assigned to each store
ItemNumber: Number assigned to an item
MinorCategory: Category of item
ProductDescription: Description of item
Quantity: Quantity sold for each item in different stores
AvgUnitPrice: Average unit price of an item
AvgUnitCost: Average unit cost of an item
GuestCount: Sum of customer visits in one store in a particular week
WeekIND: Number assigned to each week in one year
NumItemNumber: Number assigned to an item
Table 1: Initial Data from Dataset
Because the dataset was already cleansed and well prepared, at this stage we focused on data exploration and examination. We found several issues that may affect the analysis of the project. The four major issues observed are listed as follows:
» UID is not a unique identifier, and 2406 of the records have no value (see Figure 1).
» The purpose of the NumItemNumber attribute is unclear – the values it contains are the same as those of ItemNumber, but their data types are different. In addition, NumItemNumber has 187 missing values (see Figure 2).
» Unclear variable values:
o Some AvgUnitPrice values are 0, indicating an item price of $0.
o Some AvgUnitCost values are 0 or negative.
» There are 28 ItemNumbers that have duplicate values but different ProductDescriptions (see Table 2).
Figure 3: Non-Unique UID and Its Number of Missing Values (via SAS Enterprise Guide)
Figure 4: Output Result from SAS Enterprise Miner
Table 2: Items that Have the Same Number but Different Descriptions (first two shown)

Item_Number   Minor_Category   Product_Description
101090        Sub              Cooked Taco Meat BF 2.5 oz - Sub
101090        Sub              Cooked Taco Meat CK 2.5 oz - Sub
12067         Margaritas       Patron Shaken Margarita
12067         Margaritas       Shaken Margarita
Preprocessing Tasks
The objective of this project is to determine whether the valued items have had any effect on the profit generated. Therefore, we decided to add three data attributes, Profit, Valued_Item_Flag, and New_Item_Flag, to represent sales profit, valued menu items, and new menu items, respectively, by combining the information of menu items (a code sketch of this derivation follows Table 3). One thing to note about the newly added attributes is that the majority of values are missing for the new item flag and the valued item flag, because not all of Abuelo's stores participated in the trial of the new valued menu. Therefore, which data should be chosen for our analysis is an important concern. Figure 3 below is a screenshot of the modified dataset, All_Profit_Flag. Table 3 lists the three newly added attributes in the dataset.
Figure 3: Table of Modified Dataset All_Profit_Flag
Profit: Sales profit of an item at a store during a week
NewItemFlag: Flag indicating a new menu item
ValuedItemFlag: Flag indicating a valued menu item
Table 3: Newly Added Attributes in Dataset
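As a hedged sketch of this derivation (the report performs the step through the Enterprise Guide/Miner interface, not code): assuming profit is quantity times unit margin and that the flags come from a separate menu-item lookup table, the new attributes could be built with a merge in a DATA step. The profit formula and the lookup table work.menu_flags are assumptions, since the report only says the attributes were created by combining the information of menu items; work.abuelos_sales is the assumed source table from the earlier sketch.

/* sort both tables by the merge key */
proc sort data=work.abuelos_sales; by ItemNumber; run;
proc sort data=work.menu_flags;    by ItemNumber; run;

data work.all_profit_flag;
    merge work.abuelos_sales (in=in_sales)
          work.menu_flags (keep=ItemNumber New_Item_Flag Valued_Item_Flag);
    by ItemNumber;
    if in_sales;                              /* keep only the sales rows */
    /* assumed formula: weekly per-item, per-store profit */
    Profit = Quantity * (AvgUnitPrice - AvgUnitCost);
run;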
Data Mining Methodologies
The data mining models chosen for our project must meet two important criteria: they must fit the nature of the dataset and the objective of this business analysis project. Since our objective is to determine whether the valued menu items increase a store's sales profit, at this preliminary data mining stage we decided to use a Regression model to analyze the importance of the valued items in terms of the profit generated. Figures 5 and 6 show the variable configuration and the design of the data process flow; this configuration is subject to change later.
Figure 5: Variable Configuration for Regression
Figure 6: Data Process Flow for Regression
Initially we included only two input variables, New_Item_Flag and Valued_Item_Flag, and one target variable, Profit, for the regression analysis. As mentioned earlier in the report, many values are missing for the new item and valued item flags. As a result, the data must go through a filtering step to exclude the rows that have no information about the new/valued item flags. Below is the result of the Filter node: about 90% of observations are excluded after filtering. (A code sketch of an equivalent filter follows Figure 7.)
Figure 7
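A minimal sketch of an equivalent filter outside Enterprise Miner, using a subsetting IF in a DATA step; the dataset names follow the earlier sketch and are assumptions:

data work.filtered_profit_flag;
    set work.all_profit_flag;
    /* keep only rows where both flags are populated */
    if not missing(New_Item_Flag) and not missing(Valued_Item_Flag);
run;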
Primitive Results and Findings
Figure 8 shows the result of the Regression node. According to the Type 3 Analysis of Effects, when only the effects of the new item and valued item flags on profit are analyzed, the new item flag has a significant effect on profit (Pr < .0001), while the valued item flag has no significant effect.
At this preliminary data mining stage, we concluded that the regression analysis indicates the valued items have no significant impact on sales profit.
Figure 8: Output of Regression Model
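For readers without Enterprise Miner, a comparable model can be fit in Base SAS with PROC GLM, whose Type III sums of squares parallel the Regression node's Type 3 Analysis of Effects. This is a sketch under the same assumed dataset names as above, not the exact node configuration:

proc glm data=work.filtered_profit_flag;
    class New_Item_Flag Valued_Item_Flag;              /* both flags as categorical effects */
    model Profit = New_Item_Flag Valued_Item_Flag / ss3;
run;
quit;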
Data Filtration & Addition of New Variables
We used Enterprise Guide to filter out the missing data and to add the new variables Profit, New_Item_Flag, and Valued_Item_Flag. We then exported this refined dataset for use in Enterprise Miner.
Figure 9: Enterprise Guide showing the newly introduced variables
Refined Data's Exploration
After filtering and adding the "Profit" column to the existing dataset using Enterprise Guide, we used that dataset for further analysis. Figure 9 shows the variable settings for this dataset.
Figure 10
In the Explore window (Actions -> Plot), we used 3D bar charts; the resulting dialog is shown in Figure 10, and Figure 11 shows the same dialog enlarged.
Figure 11
Figure 12
Figure 12 shows the 3D bar chart plot with Profit as Response, year as Series, and Valued_Item_Flag as Category. (The totals behind this chart are tabulated in the code sketch below.)
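The same comparison can be tabulated directly with PROC MEANS; this sketch assumes a Year variable exists in the refined dataset (the chart uses year as Series, but the report does not name the column):

proc means data=work.filtered_profit_flag sum maxdec=2;
    class Year Valued_Item_Flag;    /* Year is an assumed column name */
    var Profit;                     /* total profit per year and flag value */
run;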
Figure 13
Figure 14
Figure 16 shows the result of the Segment Profile node with the variable settings shown in Figure 15.
Figure 15
Figure 16