Lessons Learned From Building an In-House Executive Recruiting Team – Beamery
Alison Brody shares tactical advice to help companies navigate the challenges of bringing an executive recruiting team in-house, and talks through the metrics that you should use to track success and encourage stakeholder buy-in.
How to Become a Data Analyst? | Data Analyst Skills | Data Analyst Training | ... – Edureka!
** Data Analytics Masters' Program: https://www.edureka.co/masters-program/data-analyst-certification **
This Edureka PPT on "How to become a data analyst" explains who a data analyst is and what the roles and responsibilities of a data analyst are, along with salary trends and the companies hiring data analysts.
Follow us to never miss an update in the future.
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
It is a fascinating, explosive time for enterprise analytics.
Analytics leadership is the position from which the mission will be executed and company leadership will emerge. In the information economy, the data professional sits squarely on the performance of the company and has an obligation to demonstrate the possibilities and to originate the architecture, data, and projects that will deliver analytics. After all, no matter what business you’re in, you’re in the business of analytics.
The coming years will be full of big changes in enterprise analytics and Data Architecture. William will kick off the fourth year of the Advanced Analytics series with a discussion of the trends winning organizations should build into their plans, expectations, vision, and awareness now.
Data Catalogs Are the Answer – What Is the Question? – DATAVERSITY
Organizations with governed metadata made available through their data catalog can answer questions their people have about the organization’s data. These organizations get more value from their data, protect their data better, gain improved ROI from data-centric projects and programs, and have more confidence in their most strategic data.
Join Bob Seiner for this lively webinar where he will talk about the value of a data catalog and how to build the use of the catalog into your stewards’ daily routines. Bob will share how the tool must be positioned for success and viewed as a must-have resource that is a steppingstone and catalyst to governed data across the organization.
Slides from Michelle Ufford's talk, Data-Driven @ Netflix. Talk given at PASS Summit 2016 in October 2016.
Netflix is the quintessential data-driven company. Its 83 million members stream more than 125 million hours in over 190 countries every day and generate more than 700 billion events in the process. In this session, we’ll share how data is used to make informed decisions across the entire business, from content acquisition to content delivery and everything in between. We’ll look at how Netflix successfully employs a scalable cloud-based data platform to support a constant deluge of data and a small army of data analysts, engineers, and scientists. We’ll discuss the advanced analytical capabilities that are enabled through modern data technologies. Lastly, we’ll explore some of the architectural and operational principles that enable Netflix to make such effective use of its data.
This impressive pitch deck helped Rewind AI founder Dan Siroker close a $350M Series A with top-tier VC investors in 2023. The deck provides a textbook example of a clear, concise, and compelling pitch deck. Every startup founder working on their pitch deck will learn something from this deck. Kudos to Rewind founder, Dan Siroker. Includes Dan's presentation transcript plus what's to love (and copy) for each slide.
REWIND PITCH DECK HIGHLIGHTS:
> 29 slides
> 7 mins 48s duration
> 443 words (transcript)
> 2nd Grade reading level
REWIND PITCH DECK SLIDES:
> Intro
> Founder Origin Story
> Problem (3 slides)
> Vision
> Team
> Solution (What it is)
> Solution (How it works)
> Demo
> What makes Rewind unique?
> Why now?
> Ideal Customer Profile
> Who uses Rewind?
> How do they use Rewind?
> Go To Market Strategy
> Product-Led Growth
> Pricing
> Metrics: Conversion & Retention
> Huge Market
> Traction
> Unit Economics
> Capital Efficiency
> Roadmap
> Problem Recap
> How to Invest
YOU MIGHT ALSO LIKE THESE PITCH DECK EXAMPLES & TEMPLATES:
> Airbnb pitch deck @ https://pitchdeckcoach.com/airbnb-pitch-deck
> Sequoia Capital pitch deck template @ https://pitchdeckcoach.com/sequoia-capital-pitch-deck
> FREE pitch deck template download @ https://pitchdeckcoach.com/free-pitch-deck-template
> Pitch deck guide with hints, tips, and a worked example @ https://pitchdeckcoach.com/pitch-deck-template
NEED HELP WITH YOUR PITCH DECK?
See how I can help then book a free call @ https://pitchdeckcoach.com/
MORE PITCH DECK RESOURCES @ https://pitchdeckcoach.com/pitch-deck-template#resources
StreamAnalytix is a software platform that enables enterprises to analyze and respond to events in real-time at Big Data scale. It is designed to rapidly build and deploy streaming analytics applications for any industry vertical, any data format, and any use case.
YouTube: https://youtu.be/lkga98m0few
( ** Data Analyst Master's Program: https://www.edureka.co/masters-program/data-analyst-certification ** )
This PPT provides a crisp description of a Data Analyst's job, the skills required to become one, resume requirements, and the salary trends for freshers as well as experienced Data Analysts.
Follow us to never miss an update in the future.
YouTube: https://www.youtube.com/user/edurekaIN
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
A successful data governance capability requires a strategy to align regulatory drivers and technology enhancement initiatives with business needs and objectives, taking into account the organizational, technological and cultural changes that will need to take place.
RWDG: Measuring Data Governance Performance – DATAVERSITY
There are two basic ways to measure the performance of a Data Governance program. The first way focuses on the acceptance of data governance into the organizational culture. The second way focuses on measuring the business value that comes from governing data. The first way is quicker and easier. The second way takes more effort and more time to measure. Both are important.
This month’s Real-World Data Governance webinar with Bob Seiner focuses on these two methods. Bob will discuss how to select the best approach to measuring the performance of a Data Governance program, and he will share tips and techniques for improving performance based on each method.
In this webinar Bob will discuss:
Two primary ways for measuring Data Governance program performance
How to measure the acceptability of Data Governance
How to measure the business value gained from Data Governance
When and where to report performance measurements to management
Improving performance based on the selected metrics
Power BI has become a product with a ton of exciting features. This presentation will give an overview of some of them, including Power BI Desktop, Power BI service, what’s new, integration with other services, Power BI premium, and administration.
Recommended for CDOs and all Data & Analytics Managers
The past two years have had a huge impact on organizations’ journeys to become data-driven. Existing data architectures were disrupted, rigid structures and processes were questioned, and many data strategies were rewritten.
On the one hand, the global pandemic emphasized the need for organizations to raise the bar, implement strategies, improve data literacy and culture, increase investments in data and analytics, and explore AI opportunities.
On the other, it also presented new challenges, such as the war for data talent and the wide literacy gap. Inadequate structures and outdated processes were exposed. Major changes in the data landscape (Data Fabric, Data Mesh, the transition to data clouds) will further disrupt existing data architectures and heighten the need for a new adaptive architecture and organization.
Data Science Training | Data Science For Beginners | Data Science With Python... – Simplilearn
This Data Science presentation will help you understand what Data Science is, who a Data Scientist is, what a Data Scientist does, and how Python is used for Data Science. Data science is an interdisciplinary field of scientific methods, processes, algorithms, and systems for extracting knowledge or insights from data in various forms, structured or unstructured, similar to data mining. This Data Science tutorial will help you build your skills in analytical techniques using Python. With this Data Science video, you’ll learn the essential concepts of Data Science with Python programming and understand how data acquisition, data preparation, data mining, model building & testing, and data visualization are done. This Data Science tutorial is ideal for beginners who aspire to become Data Scientists.
This Data Science presentation will cover the following topics:
1. What is Data Science?
2. Who is a Data Scientist?
3. What does a Data Scientist do?
This Data Science with Python course will establish your mastery of data science and analytics techniques using Python. With this Python for Data Science Course, you’ll learn the essential concepts of Python programming and become an expert in data analytics, machine learning, data visualization, web scraping and natural language processing. Python is a required skill for many data science positions, so jumpstart your career with this interactive, hands-on course.
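As a rough illustration of the pipeline stages the description above names (data acquisition, data preparation, and exploration), here is a minimal, hypothetical Python sketch using only the standard library; the dataset, field names, and values are invented for the example.

```python
import csv
import io
import statistics

# Hypothetical raw data "acquired" from a CSV source; in practice this
# would come from a file, database, or API.
raw = io.StringIO("age,income\n34,52000\n29,\n41,61000\n36,58000\n")

# Data preparation: parse rows and drop records with missing income values.
rows = [r for r in csv.DictReader(raw) if r["income"]]
incomes = [int(r["income"]) for r in rows]

# Data exploration: simple summary statistics over the cleaned data.
mean_income = statistics.mean(incomes)
print(len(rows), mean_income)  # 3 57000
```

Real pipelines typically use libraries such as pandas for these steps, but the shape of the work (parse, clean, summarize) is the same.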
Why learn Data Science?
Data Scientists are being deployed in all kinds of industries, creating a huge demand for skilled professionals. A data scientist is the pinnacle rank in an analytics organization. Glassdoor ranked data scientist first in its 25 Best Jobs for 2016, and good data scientists are scarce and in great demand. As a data scientist, you will be required to understand the business problem, design the analysis, collect and format the required data, apply algorithms or techniques using the correct tools, and finally make recommendations backed by data.
You can gain in-depth knowledge of Data Science by taking our Data Science with Python certification training course. With Simplilearn’s Data Science certification training course, you will prepare for a career as a Data Scientist as you master all the concepts and techniques. Those who complete the course will be able to:
1. Gain an in-depth understanding of data science processes, data wrangling, data exploration, data visualization, hypothesis building, and testing. You will also learn the basics of statistics.
2. Install the required Python environment and other auxiliary tools and libraries.
3. Understand the essential concepts of Python programming such as data types, tuples, lists, dicts, basic operators, and functions.
4. Perform high-level mathematical computing using the NumPy package and its large library of mathematical functions.
Learn more at: https://www.simplilearn.com
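The last course outcome above mentions high-level mathematical computing with NumPy. As a minimal sketch (the temperature data here is invented for illustration), element-wise arithmetic and reductions look like this:

```python
import numpy as np

# Hypothetical sample data: daily temperatures in Celsius.
temps_c = np.array([18.5, 21.0, 19.5, 23.0, 20.0])

# Element-wise arithmetic applies to every value at once (broadcasting).
temps_f = temps_c * 9 / 5 + 32

# Reductions collapse an array to summary values.
mean_c = temps_c.mean()
spread = temps_c.max() - temps_c.min()

print(mean_c)   # 20.4
print(spread)   # 4.5
```

This vectorized style replaces explicit Python loops and is the foundation most data analysis libraries build on.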
The Data Driven Enterprise – Roadmap to Big Data & Analytics Success – BigInsights
Presentation used at the series of breakfast seminars around Australia hosted by Lenovo/Intel/SAP/EY.
Database mapping of XBRL instance documents from the WIP taxonomy – Alexander Falk
Surety providers review the contractor’s financial statements to identify risks and determine eligibility for surety bonds. Financial statements include a Work in Process (WIP) report that describes the financial performance and status of a contractor’s construction projects.
The XBRL data standard renders paper-based information computer-readable, reducing costs and delays. Bringing XBRL into the surety underwriting process will make the WIP report and supporting financials computer-readable, with data that can be extracted automatically into the surety's financial system without rekeying. The XBRL data standard will not change the underwriting process or what data is used; it will simply change how the data needed for underwriting is conveyed.
These slides discuss how sureties can use a data mapping tool like MapForce to import XBRL instance documents based on the WIP taxonomy into their in-house database systems.
See also: https://xbrl.us/govt-industry/surety/public-review/
These are the slides for the keynote address I gave at the NIEM Town Hall meeting in February 2010, covering the use of Altova tools for IEPD development for the National Information Exchange Model (NIEM).
Functional Workshop – Single Supervisory Mechanism – Novencia Groupe
In 2009, at the height of the financial crisis, the G20 called for systems of regulation and supervision for the global financial sector.
In October 2013, the European Parliament adopted the regulation on the Single Supervisory Mechanism, under which, from 4 November 2014, the supervision of European banks (6,000 banks) is carried out under the authority of the ECB, with a dedicated structure to keep this activity separate from monetary policy.
We offer a progress update on:
- the implementation of these texts
- the operational impacts
- the implications for information systems
- the new expected reporting requirements
Case study published by Altova, an industry leader in XML software. The case study highlights XBRL work completed at the Maryland Association of CPAs and gives an in-depth technical breakdown of what was accomplished using Altova tools. XBRL GL was used extensively in the MACPA case study and is a technology that we believe is underused yet provides the biggest advantage.
Splunk is probably best known, along with other Security Information and Event Management (SIEM) software, for its use in intrusion detection, WLAN traffic monitoring, and more. But unlike software systems that rely on modules and add-ons, Splunk offers a robust real-time big data collection and reporting framework, complete with its own Search Processing Language and ready-to-use point-and-click reporting tools.
SplunkLive! Frankfurt 2018 – Data Onboarding Overview – Splunk
Presented at SplunkLive! Frankfurt 2018:
Splunk Data Collection Architecture
Apps and Technology Add-ons
Demos / Examples
Best Practices
Resources and Q&A
The new architectures, web services and microservices, applications and apps, bots, IoT, AI, etc. that organizations demand increasingly require the talent and experience of database administrators to provide advice, suggestions, and answers that add differential value to development teams and business users.
We show you the keys to the new role of the DBA, which complements the "A" of Administration with: Analyzing, Advising, Automating, and creating efficient and Autonomous Architectures for Advanced data management, collaborating with developers and users from a deep knowledge of databases.
Using Perforce Data in Development at Tableau – Perforce
Data plays a big role at Tableau—not just for our customers, but also throughout our company. Using our own products is not only one of our fundamental company values, but the analysis and discoveries we make are important to track as they shape our development processes and influence our day-to-day decisions. In this talk, we present and analyze a variety of data visualizations based on Perforce data from our development organization and share how it has influenced our infrastructure and development practices.
This is a talk that I gave at BioIT World West on March 12, 2019. The talk was called: A Gen3 Perspective of Disparate Data: From Pipelines in Data Commons to AI in Data Ecosystems.
A 16-FEB-2015 talk at the BSides Cyber Security Conference in Vancouver, BC, Canada. The Elasticsearch (Elastic) stack provides a solution to a big data problem.
Spark + AI Summit 2019: Apache Spark Listeners: A Crash Course in Fast, Easy ... – Landon Robinson
The Spark Listener interface provides a fast, simple and efficient route to monitoring and observing your Spark application - and you can start using it in minutes. In this talk, we'll introduce the Spark Listener interfaces available in core and streaming applications, and show a few ways in which they've changed our world for the better at SpotX. If you're looking for a "Eureka!" moment in monitoring or tracking of your Spark apps, look no further than Spark Listeners and this talk!
Intro to InfluxDB 2.0 and Your First Flux Query by Sonia Gupta – InfluxData
In this InfluxDays NYC 2019 talk, InfluxData Developer Advocate Sonia Gupta will provide an introduction to InfluxDB 2.0 and a review of the new features. She will demonstrate how to install it, insert data, and build your first Flux query.
Apache Spark Listeners: A Crash Course in Fast, Easy Monitoring – Databricks
The Spark Listener interface provides a fast, simple and efficient route to monitoring and observing your Spark application - and you can start using it in minutes. In this talk, we'll introduce the Spark Listener interfaces available in core and streaming applications, and show a few ways in which they've changed our world for the better at SpotX. If you're looking for a "Eureka!" moment in monitoring or tracking of your Spark apps, look no further than Spark Listeners and this talk!
Welcome to the first live UiPath Community Day Dubai! Join us for this unique occasion to meet our local and global UiPath Community and leaders. You will get a full view of the MEA region's automation landscape and the AI-powered automation technology capabilities of UiPath. Hosted with our local partner Marc Ellis, you will also enjoy a half day packed with industry insights and networking with automation peers.
📕 Curious about our agenda? Wait no more!
10:00 Welcome note - UiPath Community in Dubai
Lovely Sinha, UiPath Community Chapter Leader, UiPath MVPx3, Hyper-automation Consultant, First Abu Dhabi Bank
10:20 A UiPath cross-region MEA overview
Ashraf El Zarka, VP and Managing Director MEA, UiPath
10:35 Customer Success Journey
Deepthi Deepak, Head of Intelligent Automation CoE, First Abu Dhabi Bank
11:15 The UiPath approach to GenAI with our three principles: improve accuracy, supercharge productivity, and automate more
Boris Krumrey, Global VP, Automation Innovation, UiPath
12:15 Discover how Marc Ellis leverages tech-driven solutions in recruitment and managed services.
Brendan Lingam, Director of Sales and Business Development, Marc Ellis
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Quantum Computing: Current Landscape and the Future Role of APIs
How To Download and Process SEC XBRL Data Directly from EDGAR
1. How To Download and Process SEC XBRL Data Directly from EDGAR
XBRL Technology Webinar Series
Alexander Falk, CEO, Altova, Inc.
2. Agenda
• Introduction
• Downloading all XBRL data from EDGAR
– Accessing the SEC’s EDGAR archive of all XBRL files via RSS feeds
– Downloading the ZIP file enclosures for each filing
– Special considerations for early years
• Organizing your downloaded files
– You now have 105,840 ZIP files totaling 14.1 GB
– Making them accessible by date, CIK, ticker
• Processing and validating the XBRL filings
• Extracting useful information, e.g., financial ratios
3. Introduction
• The SEC's EDGAR System holds a wealth of XBRL-formatted financial data for over 9,700 corporate entities.
• Accessing XBRL-formatted SEC data can seem like a daunting task, and most people think it requires creating a database before the first data point can be pulled.
• In this webinar we will show you how to pull all the data directly from the SEC archive and download it to your computer, then process it and perform financial analysis.
• This will hopefully spark ideas that you can use in your own applications to take advantage of the computer-readable XBRL data now freely available.
For all source code examples we will be using Python 3.3.3, since it is widely available on all operating system platforms and also provides for easily readable code. Obviously, the approach shown in this webinar can easily be implemented in other languages, such as Java, C#, etc.
4. Why don’t you just store it in a database?
• Processing and validating the XBRL files once and then storing the extracted data in a relational database is not necessarily a bad approach… BUT:
• Any data quality analysis, as well as validation statistics and documentation of data inconsistencies with respect to the calculation linkbase, requires the data to remain in XBRL format.
• New XBRL technologies, like XBRL Formula and the XBRL Table Linkbase, provide the potential for future applications that require the original data to be in XBRL format and not shredded and stored in a database.
• Similarly, the ability to query the data with an XQuery 3.0 processor only exists if the data remains in its original XML-based XBRL format.
Clearly, if the only goal of processing the filings is to derive numerical data from the facts and build financial analytics, then pre-processing and shredding the data into a database may still be your best approach… The XBRL.US database by Phillip Engel, as well as Herm Fisher’s talk at the last XBRL.US conference about using Arelle to populate a PostgreSQL database, are good starting points for the database-based approach.
5. Downloading the data – accessing the RSS feeds
• In addition to the nice web-based user interface for searching XBRL filings, the SEC website offers RSS feeds to all XBRL filings ever received:
– http://www.sec.gov/spotlight/xbrl/filings-and-feeds.shtml
• In particular, there is a monthly, historical archive of all filings with XBRL exhibits submitted to the SEC, beginning with the inception of the voluntary program in 2005:
– http://www.sec.gov/Archives/edgar/monthly/
• This is our starting point, and it contains one RSS file per month from April 2005 until the current month of March 2014.
7. Downloading the data – loading the RSS feed
In Python we can easily use the urlopen function from the urllib package to open a file from a URL and read its data.
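As a sketch of this step (the webinar showed the original listing as a screenshot, so this is a reconstruction): a minimal helper that builds the URL of one monthly feed and reads it with urlopen. The xbrlrss-YYYY-MM.xml file-name pattern is an assumption based on the archive directory listing and should be verified against the listing itself.

```python
# Sketch: load one monthly EDGAR XBRL RSS file with urllib.
# The file-name pattern below is an assumption from the archive listing.
from urllib.request import urlopen

MONTHLY = "http://www.sec.gov/Archives/edgar/monthly/"

def monthly_feed_url(year, month):
    """Build the URL of one monthly EDGAR XBRL RSS file."""
    return MONTHLY + "xbrlrss-%04d-%02d.xml" % (year, month)

def load_feed(year, month):
    """Read the raw RSS bytes for one month from EDGAR."""
    with urlopen(monthly_feed_url(year, month)) as response:
        return response.read()
```

For example, `load_feed(2014, 3)` would fetch the feed for March 2014, the newest month available at the time of the webinar.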
8. Downloading the data – parsing the RSS feed to extract the ZIP file enclosure filename
In Python we can easily use the feedparser 5.1.3 package (https://pypi.python.org/pypi/feedparser) to parse the RSS feed and extract the ZIP file name and CIK#. Please note that I construct the local filename by inserting the CIK in front of the actual ZIP file name.
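The parsing step might look like the following sketch. feedparser's entries and enclosures fields are standard; deriving the CIK from the enclosure URL path (…/Archives/edgar/data/&lt;CIK&gt;/…) is my assumption here, since the exact field name under which feedparser exposes the EDGAR-namespaced CIK element can vary.

```python
# Sketch: extract ZIP enclosure URLs from a feed and build the
# CIK-prefixed local name. CIK-from-URL extraction is an assumption.
import os
import re

def local_name(enclosure_url):
    """Derive 'CIK-zipname.zip' from an EDGAR enclosure URL."""
    cik = re.search(r"/data/(\d+)/", enclosure_url).group(1)
    return cik + "-" + os.path.basename(enclosure_url)

def enclosure_urls(feed_text):
    """Yield the ZIP enclosure URL of every entry in the feed."""
    import feedparser  # third-party: pip install feedparser
    for entry in feedparser.parse(feed_text).entries:
        for enc in entry.get("enclosures", []):
            yield enc["href"]
```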
9. Downloading the data – loading the ZIP file enclosure
Please note that it is prudent to first check whether we already have a local copy of that particular filing. We should only download the ZIP file if we don't find it on our local machine yet.
Also, please note that common courtesy dictates that if you plan to download all years 2005-2014 of XBRL filings from the SEC's EDGAR archive, you should do so during off-peak hours, e.g. night-time or weekends, in order not to tax the servers during normal business hours, as you will be downloading 105,840 files or 14.1 GB of data!
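The "check for a local copy first" logic described above can be sketched as a small helper (the function name is illustrative, not the webinar's original):

```python
# Sketch: download a filing ZIP only if we don't already have it locally.
import os
from urllib.request import urlopen

def download_if_missing(url, local_path):
    """Download url to local_path unless the file already exists.

    Returns True only if a download actually happened."""
    if os.path.exists(local_path):
        return False  # already have this filing; be kind to the SEC servers
    with urlopen(url) as response, open(local_path, "wb") as out:
        out.write(response.read())
    return True
```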
10. Downloading the data – special considerations for early years
• For years 2005-2007, most filings do not yet contain a ZIP file enclosure.
• Even in 2008-2009, some filings can occasionally be found that are not yet provided in a ZIP file.
• If you are interested in analyzing data from those early years, a little bit of extra work is required to download all the individual XBRL files from a filing and then ZIP them up locally.
• If done properly, all future analysis can then access all the filings in the same manner directly from the ZIP files.
11. Downloading the early years – ZIPping the XBRL files on our local machine
If we want to download data from the early years, we need to use two additional Python packages:
(a) the ElementTree XML parser, because feedparser cannot handle the multiple nested elements for the individual filings;
(b) the zipfile package, so that we can ZIP the downloaded files up ourselves.
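The local re-ZIPping step could be sketched as follows (the ElementTree parsing of the per-filing index is omitted, and the function name is illustrative); the individual files of one filing are bundled so later processing can treat every filing uniformly as a ZIP:

```python
# Sketch: bundle the separately downloaded XBRL files of one
# early-year filing into a ZIP, mirroring the SEC-built ZIPs.
import os
import zipfile

def zip_filing(xbrl_files, zip_path):
    """Bundle the individual XBRL files of one filing into zip_path."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in xbrl_files:
            # store under the bare file name, as the SEC-built ZIPs do
            zf.write(path, arcname=os.path.basename(path))
```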
13. Organizing the downloaded files – file system structure
For my purposes I have already organized the files at the same time as I have downloaded them. Since the RSS feeds group the filings nicely by year and month, I have created one subdirectory for each year and one subdirectory for each month.
In order to easily process the filings of one particular reporting entity, I have also inserted the CIK# in front of the ZIP file name, since the SEC-assigned accession number does not really help me when I want to locate all filings for one particular filer.
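The layout described above can be sketched as a small path helper (a minimal sketch; the helper name is mine, not the webinar's):

```python
# Sketch: place each filing under root/YYYY/MM/ with the CIK
# prefixed to the ZIP file name, as described on this slide.
import os

def filing_path(root, year, month, cik, zip_name):
    """Return (and create) root/YYYY/MM/<cik>-<zip_name>."""
    directory = os.path.join(root, "%04d" % year, "%02d" % month)
    os.makedirs(directory, exist_ok=True)
    return os.path.join(directory, "%s-%s" % (cik, zip_name))
```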
14. Organizing the downloaded files – making them accessible by date, CIK, ticker
Selecting files for further processing by date is trivial due to our directory structure. Similarly, selecting filings by CIK is easily facilitated, since the filenames of all filings now begin with the CIK.
The only part that needs a little work is to make them accessible by ticker – fortunately, the SEC provides a web service interface to look up company information and filings by ticker, and the resulting XML also contains the CIK, which we can retrieve via simple XML parsing.
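A sketch of the ticker lookup, under stated assumptions: the browse-edgar URL and its query parameters below are my assumption based on the SEC's company-search interface and should be checked against current SEC documentation; the CIK extraction itself is ordinary text parsing.

```python
# Sketch: look up a company's CIK by ticker via the SEC company
# search. The LOOKUP URL and its parameters are assumptions.
import re
from urllib.request import urlopen

LOOKUP = ("http://www.sec.gov/cgi-bin/browse-edgar"
          "?action=getcompany&company=%s&type=10-K&output=atom")

def extract_cik(xml_text):
    """Pull the first CIK element out of the returned XML, if any."""
    m = re.search(r"<(?:\w+:)?cik>0*(\d+)</(?:\w+:)?cik>", xml_text, re.I)
    return m.group(1) if m else None

def cik_for_ticker(ticker):
    """Query the SEC company search and parse the CIK from the result."""
    with urlopen(LOOKUP % ticker) as response:
        return extract_cik(response.read().decode("utf-8"))
```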
15. Processing and validating the XBRL filings
• Now that we have all the data organized, we can use Python to process, e.g., all filings from one filer for a selected date range.
• For this webinar we are going to use RaptorXML® Server to process and validate the XBRL filings.
• RaptorXML can directly process the filings inside of ZIP files, so no manual extraction step is necessary.
• We can also pass an entire batch of jobs to RaptorXML at once, either via a direct call as shown here, or over the HTTP API provided by RaptorXML Server.
Shameless plug: RaptorXML® is built from the ground up to be optimized for the latest standards and parallel computing environments. Designed to be highly cross-platform capable, the engine takes advantage of today's ubiquitous multi-CPU computers to deliver lightning-fast processing of XML and XBRL data. Therefore, we can pass an entire batch of jobs to RaptorXML to process and validate in parallel, maximizing CPU utilization and available system resources.
16. Processing and validating the filings – building the job list based on dates and CIK
This directory contains all filings for one particular year and month. We iterate over all the files in that directory. If a list of CIKs was provided, then we make sure the filename starts with the CIK.
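The iteration described here might look like the following sketch (the function name is illustrative; what you do with the resulting list — e.g. hand it to RaptorXML — is covered on the previous slide):

```python
# Sketch: collect the ZIP filings of one year/month directory,
# optionally restricted to a list of CIKs via the filename prefix.
import os

def build_job_list(directory, ciks=None):
    """Return paths of ZIP filings, optionally filtered by CIK prefix."""
    jobs = []
    for name in sorted(os.listdir(directory)):
        if not name.endswith(".zip"):
            continue  # skip anything that isn't a filing ZIP
        if ciks and not any(name.startswith(cik + "-") for cik in ciks):
            continue  # a CIK list was given and this filing isn't on it
        jobs.append(os.path.join(directory, name))
    return jobs
```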
17. Demo time – validating all 2010-2014 filings for ORCL
18. Demo time – validating all 2010-2014 filings for AAPL
19. Extracting useful information, e.g. financial ratios
• While it is interesting to discuss the data quality of XBRL filings, and to muse over inconsistencies for some filers or whether the SEC should put more stringent validation checks on the data it accepts, we really want to do more here…
• Can we extract useful financial ratios from these XBRL filings?
• For example, from the balance sheet:
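For concreteness, one such balance-sheet ratio is the Current Ratio: current assets divided by current liabilities (in US GAAP taxonomy terms, AssetsCurrent over LiabilitiesCurrent). A trivial helper, assuming fact values arrive as strings from the XBRL instance:

```python
# Sketch: Current Ratio = AssetsCurrent / LiabilitiesCurrent.
# Decimal avoids binary floating-point artifacts in financial math.
from decimal import Decimal

def current_ratio(assets_current, liabilities_current):
    """Compute the Current Ratio from two fact values (strings)."""
    return Decimal(assets_current) / Decimal(liabilities_current)
```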
20. Extracting useful information – passing Python script to the built-in Python interpreter inside RaptorXML
• We can ask RaptorXML to execute some Python script code if the XBRL validation has succeeded.
• From our outer Python code we pass the script to RaptorXML:
• Then, whenever validation succeeds, RaptorXML will execute that script using its built-in Python interpreter:
RaptorXML is written in C++ and available on all major operating system platforms, including Windows, Linux, MacOS, etc. To facilitate easy customization and the building of powerful solutions on top of RaptorXML, it includes a built-in Python interpreter that makes the entire DTS, XBRL instance, schema, and other relevant information accessible to 3rd-party developers.
21. Extracting useful information – calculating ratios using Python script inside RaptorXML
• The RaptorXML Python API makes available all necessary components of the XBRL instance document and the DTS (= Discoverable Taxonomy Set).
• To make the code more easily understandable for this webinar, we've created a few helper functions to locate relevant facts and print them, e.g., for the Current Ratio:
22. Extracting useful information – not all financial ratios are easy to calculate
• Not all ratios are calculated from elements that can be easily found in the US GAAP taxonomy.
• Even for those ratios where an exact match exists in the taxonomy, it is often necessary to walk through the calculation linkbase chain and identify appropriate matches in order to calculate ratios across filings from different entities.
• For further information please see Roger Debreceny, et al.: "Feeding the Information Value Chain: Deriving Analytical Ratios from XBRL filings to the SEC", draft research paper, December 2010: http://eycarat.faculty.ku.edu//myssi/_pdf/2-Debreceny-XBRL%20Ratios%2020101213.pdf
23. Extracting useful information – Quick Ratio
• One such example is the Cash fact needed to calculate the Quick Ratio: we have to try three different facts, depending on what is available in the actual XBRL filing:
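A sketch of that three-way fallback: the candidate concept names below are common US-GAAP cash elements, but the exact list used in the webinar is not reproduced here, so treat it as an assumption to adjust per taxonomy year.

```python
# Sketch: try three cash-like US-GAAP concepts in order of
# preference; the candidate list is an assumption.
CASH_CANDIDATES = (
    "CashAndCashEquivalentsAtCarryingValue",
    "CashCashEquivalentsAndShortTermInvestments",
    "Cash",
)

def find_cash_fact(facts):
    """Return the first available cash-like value from a {concept: value} map."""
    for concept in CASH_CANDIDATES:
        if concept in facts:
            return facts[concept]
    return None  # no usable cash fact in this filing
```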
24. Demo time – calculating financial ratios for some companies in my investment portfolio
In this example, we are calculating the Current Ratio, Quick Ratio, and Cash Ratio for a set of companies that happen to be part of my investment portfolio… In addition to printing the ratios on the screen, we also store them in a CSV file for further processing and graphing…
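The CSV output step might look like this minimal sketch (column names are illustrative, not the webinar's actual output format):

```python
# Sketch: write one row per company with the three ratios to a CSV
# file for further processing and graphing.
import csv

def write_ratios(path, rows):
    """rows: iterable of (ticker, current_ratio, quick_ratio, cash_ratio)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ticker", "current", "quick", "cash"])
        writer.writerows(rows)
```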
26. Q&A and next steps
• Thank you for your time and for watching this webinar! Time for some Q&A now…
• For more information on the XBRL data available from the SEC, please visit http://xbrl.sec.gov/
• We also plan to post the Python scripts shown here on GitHub in the near future.
• If you would like to learn more about RaptorXML®, please visit the Altova website at http://www.altova.com/raptorxml.html
• For all other Altova® XBRL solutions, including taxonomy development, data mapping, and rendering tools, please visit http://www.altova.com/solutions/xbrl.html
• Free 30-day evaluation versions of all Altova products can be downloaded from http://www.altova.com/download.html