This document discusses Apple Watch complications and their providers. It provides examples of different complication types like text, images, and data sources. It also demonstrates a "TimeTravel" complication that can scroll through data over time.
This document summarizes a presentation on using machine learning to directly predict quantum mechanical (QM) forces, and its applications. It discusses training machine learning (ML) models to predict QM forces directly, rather than via a differential approach, in order to reduce uncertainty. The ML model is trained on existing QM datasets and then used to complement new QM calculations, minimizing the number needed. Potential applications include using ML to guide molecular dynamics simulations at large scales. Future work may involve developing more transferable representations of interatomic forces in multi-species molecules and applying ML-guided molecular dynamics to simulations of interesting chemical and physical problems.
Sourabh Gujar is interning as a database analyst reporting to William Hall. His responsibilities include analyzing 15 years of data from the National Fire Incident Reporting System (NFIRS) by running queries in SQL and generating graphs from the results in tools like Tableau. NFIRS is a system initiated by the U.S. Fire Administration to document the nature and scope of fire problems in the U.S. It contains over 2 million incident records per year. Sourabh's work involves cleaning the data, writing SQL queries to analyze it, and exporting graphs for visualization and presentation.
This document discusses the challenges of creating map tiles from OpenStreetMap (OSM) data. It notes that OSM data is global in scope, supports multiple zoom levels and languages, and syncs changes every 5 minutes. Maintaining up-to-date map tiles at this scale and frequency while ensuring fast load times and consistent styling is difficult. Specific challenges include generalizing data, normalizing inconsistencies in OSM, placing labels within small tile boundaries without duplication across tiles, and handling major changes from OSM's 1.5 million daily edits.
New housing construction in Wales declined in 2017-18, with 6,037 new dwellings started (a 12% decrease from the previous year) and 6,663 dwellings completed (a 2% decrease). The majority (75%) of new dwellings completed were houses and bungalows, while 23% of new homes had 4 or more bedrooms. Most new construction (82%) occurred in the private sector.
This document provides an overview of MapReduce, describing it as a framework for processing large datasets in parallel across multiple systems. It outlines that MapReduce involves two key functions: the map function which extracts and transforms data, and the reduce function which combines output from the map to form final results. Examples are given of how MapReduce can be used for problems like word counting by mapping words to counts, shuffling by key, and reducing to obtain final counts. Code examples and a live demonstration model are proposed to further illustrate how MapReduce works.
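The word-count flow described above (map words to counts, shuffle by key, reduce to totals) can be sketched in a few lines. This is a minimal single-process illustration, not a distributed implementation; the phase functions and sample documents are hypothetical, and a real MapReduce framework runs these phases in parallel across machines:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the document.
    for word in document.lower().split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all values by their key (the word).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a final count.
    return {key: sum(values) for key, values in groups.items()}

documents = ["the quick brown fox", "the lazy dog"]
pairs = chain.from_iterable(map_phase(d) for d in documents)
counts = reduce_phase(shuffle(pairs))
print(counts["the"])  # → 2
```

In a real framework the shuffle is the expensive step: pairs are partitioned by key and moved over the network so that each reducer sees all values for its keys.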
Ruby's Time class allows you to work with dates and times. It can store both a date and time, get the current time, and perform time arithmetic like adding seconds. The strftime method formats times for output and strptime parses dates from strings. Rails adds functionality via ActiveSupport, including methods like 3.days.ago that return TimeWithZone objects. Formatting, parsing, and arithmetic on Time objects lets you easily manage dates and times in Ruby.
This document outlines the steps taken to determine material loss in the Grasberg area of Papua caused by private company exploration using a 3D analysis technique called cut and fill. The analysis involved generating elevation data points from SRTM data, converting the points to a vector file, creating a TIN surface, and executing a cut and fill between two TINs to calculate the volume of material loss in cubic meters.
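The volumetric core of a cut-and-fill analysis is differencing two elevation surfaces cell by cell and accumulating the removed and added volumes. The sketch below uses toy grids: the elevation values are made up, and the 30 m cell size is an assumption based on typical SRTM posting; the actual workflow described above operates on TIN surfaces inside GIS software rather than raw grids:

```python
# Hypothetical "before" and "after" elevation surfaces, in meters.
before = [[105.0, 104.0],
          [103.0, 102.0]]
after  = [[100.0, 101.0],
          [103.0, 102.5]]
cell_area = 30.0 * 30.0  # m^2 per cell, assuming ~30 m SRTM posting

cut = 0.0    # material removed (m^3): surface lowered
fill = 0.0   # material added (m^3): surface raised
for row_before, row_after in zip(before, after):
    for z0, z1 in zip(row_before, row_after):
        dz = z1 - z0
        if dz < 0:
            cut += -dz * cell_area
        else:
            fill += dz * cell_area

print(cut, fill)
```

The "material loss in cubic meters" reported by the analysis corresponds to the accumulated cut volume; a TIN-based tool performs the same differencing over triangles instead of grid cells.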
The Proliferation of New Database Technologies and Implications for Data Scie... - Domino Data Lab
In this talk, we’ll describe NoSQL (“not-only SQL”) and document-oriented databases and the value they provide for data science companies like Uptake. We will walk through the unique challenges such datastores pose for data science workflows. To make these challenges and lessons learned concrete, we’ll explore data science workflows through a discussion of the development efforts that led to “uptasticsearch”, an R package released by the Uptake Data Science team to reduce friction in interacting with a document store called Elasticsearch. The talk will conclude with a discussion of recent developments in NoSQL technologies and implications for data scientists.
This document outlines a presentation on ASP.NET Core 2.0 and MVC6. The presentation covers the history of ASP.NET, new features in ASP.NET Core like tag helpers and view components, and how to use Razor Pages. It discusses how ASP.NET Core is cross-platform, modular, and has faster development cycles compared to previous versions. The document provides examples of using tag helpers, view components, and Razor Pages in ASP.NET Core applications.
This document discusses big data and SQL Server. It covers what big data is, the Hadoop environment, big data analytics, and how SQL Server fits into the big data world. It describes using Sqoop to load data between Hadoop and SQL Server, and SQL Server features for big data analytics like columnstore and PolyBase. The document concludes that a big data analytics approach is needed for massive, variable data, and that SQL Server 2012 supports this with features like columnstore and tabular SSAS.
Serverless Security: A Pragmatic Primer for builders and defenders
Covers an intro to serverless, security ideas, and an open source vulnerable lambda application called lambhack.
From LASCON 2017, Austin, Texas.
Michelle Ufford of Netflix presented on their approach to data quality. They developed Quinto, a data quality service that implements a Write-Audit-Publish pattern for ETL jobs. It audits metrics after data is written to check for issues like row counts being too high/low. Configurable rules determine if issues warrant failing or warning on a job. Future work includes expanding metadata tracking and anomaly detection. The presentation emphasized building modular components over monolithic frameworks and only implementing quality checks where needed.
Whoops, The Numbers Are Wrong! Scaling Data Quality @ Netflix - DataWorks Summit
Netflix is a famously data-driven company. Data is used to make informed decisions on everything from content acquisition to content delivery, and everything in-between. As with any data-driven company, it’s critical that data used by the business is accurate. Or, at worst, that the business has visibility into potential quality issues as soon as they arise. But even in the most mature data warehouses, data quality can be hard. How can we ensure high quality in a cloud-based, internet-scale, modern big data warehouse employing a variety of data engineering technologies?
In this talk, Michelle Ufford will share how the Data Engineering & Analytics team at Netflix is doing exactly that. We’ll kick things off with a quick overview of Netflix’s analytics environment, then dig into details of our data quality solution. We’ll cover what worked, what didn’t work so well, and what we plan to work on next. We’ll conclude with some tips and lessons learned for ensuring data quality on big data.
The document summarizes a research paper on Deep Crossing, a deep learning model that automatically combines features for web-scale modeling without manually crafted combinatorial features. The key points are:
1. Deep Crossing uses a neural network to automatically learn combinatorial features from individual features, avoiding the manual feature engineering required by previous models.
2. It was shown to outperform previous models like DSSM that used late feature crossing. Deep Crossing's early feature crossing was more effective.
3. Deep Crossing was able to achieve better performance than production models using much less training data, and is easier to build and maintain than manually engineered models.
Serverless Security: A pragmatic primer for builders and defenders - James Wickett
Talk given at O'Reilly's 2017 Velocity Conference in San Jose.
Serverless is the design pattern for writing applications at scale without the necessity of managing infrastructure. This is done across the continuum of the cloud—from storage as a service to database as a service—but the center of serverless is functions as a service (FaaS). (Current FaaS offerings include AWS Lambda, Azure Functions, and Google Cloud Functions.) Now processes run for milliseconds before being destroyed and then get instantiated for subsequent requests.
Serverless adds simplicity and a new economic model to cloud computing, but it creates some unique security challenges. In serverless architectures, technologies like antivirus and intrusion detection become meaningless. James Wickett explores practical security approaches for serverless in four key areas—the software supply chain, the delivery pipeline, data flow, and attack detection—and examines how traditional approaches need to be adapted to serverless.
Even if you don’t have any experience with serverless, don’t worry; this session starts with the basics. You’ll learn what serverless is (hint: it’s still being defined) and practical patterns for serverless adoption.
The document discusses taming the size and cardinality of OLAP data cubes over big data. It presents an overview of data warehouse systems and architectures, OLAP cubes, and decision support system benchmarks. It then introduces the TPC-H*d benchmark for evaluating multi-dimensional databases and the AutoMDB tool for automating multi-dimensional database design. Lastly, it discusses application scenarios for benchmarking data servers, multi-dimensional database schemas, and parallel OLAP servers.
The document discusses taming the size and cardinality of OLAP data cubes over big data. It introduces OLAP cubes and data warehouse architectures. It also discusses benchmarks like TPC-H and how TPC-H*d was created to turn TPC-H into a multi-dimensional benchmark by making some schema changes and adding MDX workloads. AutoMDB is presented as an open source tool that can parse multi-dimensional schemas, compare and merge cubes, and generate new schemas.
This document discusses layout and animation performance in Android. It begins with an overview of how motion is perceived by the human eye and how to achieve smooth motion. It then covers topics like measuring and laying out views, optimizing for the GPU, using hardware layers for animation, and getting size information during animation using ViewTreeObserver. The document provides guidance on profiling performance, reducing unnecessary layout requests, and techniques for creating smooth animations in Android.
During the talk, I explained the differences between approaches in contemporary JavaScript front-end development, as well as browser behavior itself. These differences drive distinct approaches to software development and make JavaScript seem so "different" to newcomers. My presentation is meant to help back-end developers understand what is
Best IEEE Projects 2017-2018 Titles - IEEE Final Year Projects @ Brainrich T... - Brainrich Technology
Project Guidance by Experienced Developers.
Provides real-time training on information technology and projects for the following:
• M.E
• M.Tech
• M.Phil
• BE/BTech (CSE & IT)
• B.Sc/M.Sc (CSE)
• M.Sc (IT)
• BCA
• MCA
• Diploma students
Very low cost.
LATEST TECHNOLOGIES USED IN OUR IEEE PROJECTS
We would be really glad to help you with your project development.
Our Service Address:
Contact : Mr.S. Sakthivel
Mobile : 9894604623
: 9965191941
Phone : 0422-4377414
Address : 6/1,Selvanayaki Complex 1st Floor, Gokhale Street, Ramnagar, Coimbatore
website : http://www.brainrichtech.com
: http://www.brainrichprojects.com
Mail : info@brainrichtech.com
: brainrichtech@gmail.com
Why are we excited about MySQL 8? / Петр Зайцев (Percona) - Ontico
HighLoad++ 2017
Mumbai Hall, November 7, 17:00
Abstract:
http://www.highload.ru/2017/abstracts/2953.html
MySQL 8 is coming! As the large jump in version number implies, this is the largest update in the MySQL space since MySQL 5.0 was released over 12 years ago. Are you excited about MySQL 8? We are!
In this presentation we will take a practical look at the features new in MySQL 8: how those features are useful to you as a developer using MySQL, or as the person responsible for MySQL operations.
The document provides an overview of the React Context API, including what it is, when to use it, and how to use it. It explains that the Context API was introduced by React to solve the problem of prop drilling and make state management simpler for developers. It describes the key aspects of using the Context API, such as creating contexts with React.createContext, rendering context providers with Context.Provider, and subscribing to contexts within components using Context.Consumer. Examples and additional resources on the Context API are also provided.
Uponor Exadata e-Business Suite Migration Case Study - Simo Vilmunen
Uponor, a plumbing solutions company, migrated their Oracle E-Business Suite and Oracle Business Intelligence environments from traditional hardware to Oracle Exadata in order to improve performance, scalability, availability and manageability. The migration was completed within 3 months and resulted in significant performance gains across key business processes. Lessons learned included the benefits of using Exadata-specific tools and configurations and the importance of testing database-specific functionality during the migration.
JavaScript is a scripting language used to make web pages interactive. It was originally developed by Netscape under the name Mocha, then renamed LiveScript, and finally JavaScript. JavaScript can access and manipulate HTML elements on a page, add interactivity, and validate form data before submission. It runs in the browser rather than on the server. Common JavaScript statements include if/else, switch, for loops, while loops, and functions. The Document Object Model (DOM) represents HTML documents as objects that JavaScript can manipulate.
How to empower community by using GIS lecture 2 - wang yaohui
The document provides instructions for completing a GIS project using ArcGIS software. It outlines 4 steps: 1) Identifying project objectives which in this case is siting a wastewater treatment plant. 2) Creating a project database by assembling data layers and defining their coordinate systems. 3) Analyzing the data using tools in ArcToolbox to apply criteria to potential sites. 4) Presenting results to stakeholders like a city council. It then gives examples of using ArcCatalog to organize data and ArcToolbox tools to manage data formats and projections as part of completing the project.
Session 1 - Intro to Robotic Process Automation.pdf - UiPath Community
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program:
https://bit.ly/Automation_Student_Kickstart
In this session, we shall introduce you to the world of automation and the UiPath Platform, and guide you on how to install and set up UiPath Studio on your Windows PC.
📕 Detailed agenda:
What is RPA? Benefits of RPA?
RPA Applications
The UiPath End-to-End Automation Platform
UiPath Studio CE Installation and Setup
💻 Extra training through UiPath Academy:
Introduction to Automation
UiPath Business Automation Platform
Explore automation development with UiPath Studio
👉 Register here for our upcoming Session 2 on June 20: Introduction to UiPath Studio Fundamentals: https://community.uipath.com/events/details/uipath-lagos-presents-session-2-introduction-to-uipath-studio-fundamentals/
This talk will cover ScyllaDB Architecture from the cluster-level view and zoom in on data distribution and internal node architecture. In the process, we will learn the secret sauce used to get ScyllaDB's high availability and superior performance. We will also touch on the upcoming changes to ScyllaDB architecture, moving to strongly consistent metadata and tablets.
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our beloved cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and offer you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for or limiting to your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Keywords: AI, Containers, Kubernetes, Cloud Native
Event Link: https://meine.doag.org/events/cloudland/2024/agenda/#agendaId.4211
AppSec PNW: Android and iOS Application Security with MobSF, by Ajin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
"NATO Hackathon Winner: AI-Powered Drug Search", Taras KlobaFwdays
This is a session that details how PostgreSQL's features and Azure AI Services can be effectively used to significantly enhance the search functionality in any application.
In this session, we'll share insights on how we used PostgreSQL to facilitate precise searches across multiple fields in our mobile application. The techniques include using LIKE and ILIKE operators and integrating a trigram-based search to handle potential misspellings, thereby increasing the search accuracy.
We'll also discuss how the azure_ai extension on PostgreSQL databases in Azure and Azure AI Services were utilized to create vectors from user input, a feature beneficial when users wish to find specific items based on text prompts. While our application's case study involves a drug search, the techniques and principles shared in this session can be adapted to improve search functionality in a wide range of applications. Join us to learn how PostgreSQL and Azure AI can be harnessed to enhance your application's search capability.
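The trigram matching mentioned above (provided in PostgreSQL by the pg_trgm extension) can be approximated in a few lines; this is only a sketch of the similarity idea, not the extension's actual code:

```javascript
// pg_trgm pads each string with two leading and one trailing space,
// then compares the sets of 3-character substrings.
function trigrams(s) {
  const padded = "  " + s.toLowerCase() + " ";
  const set = new Set();
  for (let i = 0; i + 3 <= padded.length; i++) set.add(padded.slice(i, i + 3));
  return set;
}

// Jaccard similarity of the two trigram sets, as pg_trgm's similarity() computes.
function similarity(a, b) {
  const ta = trigrams(a), tb = trigrams(b);
  let shared = 0;
  for (const t of ta) if (tb.has(t)) shared++;
  return shared / (ta.size + tb.size - shared);
}
```

A misspelling like "ibuprofin" still scores well against "ibuprofen", which is why a trigram index catches typos that a plain LIKE pattern cannot.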
From Natural Language to Structured Solr Queries using LLMs, by Sease
This talk draws on experimentation to enable AI applications with Solr. One important use case is to use AI for better accessibility and discoverability of the data: while User eXperience techniques, lexical search improvements, and data harmonization can take organizations to a good level of accessibility, a structural (or "cognitive") gap remains between data users' needs and data producers' constraints.
That is where AI – and most importantly, Natural Language Processing and Large Language Model techniques – could make a difference. This natural language, conversational engine could facilitate access and usage of the data leveraging the semantics of any data source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal.
The key concept is to enable users to express their search queries in natural language, which the LLM then enriches, interprets, and translates into structured queries based on the Solr index’s metadata.
This approach leverages the LLM’s ability to understand the nuances of natural language and the structure of documents within Apache Solr.
The LLM acts as an intermediary agent, offering a transparent experience to users automatically and potentially uncovering relevant documents that conventional search methods might overlook. The presentation will include the results of this experimental work, lessons learned, best practices, and the scope of future work that should improve the approach and make it production-ready.
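The final translation step can be sketched as follows, assuming the LLM has already been prompted (with the index's metadata) to return a field-to-value mapping; the function and field names are illustrative, not part of the presented system:

```javascript
// Turn an LLM-produced {field: value} mapping into a structured Solr query string.
function toSolrQuery(fields) {
  return Object.entries(fields)
    .map(([field, value]) => `${field}:"${String(value).replace(/"/g, '\\"')}"`)
    .join(" AND ");
}
```

The real pipeline would also validate the field names against the Solr schema before executing the query.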
Dandelion Hashtable: beyond billion requests per second on a commodity server, by Antonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, which go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resize. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
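A toy closed-addressing (chained) table makes the "deletes free slots instantly" point concrete. DLHT's real design adds bounded cache-line chains, lock-free operations, software prefetching, and non-blocking resizes, none of which this sketch attempts:

```javascript
// Toy closed-addressing hashtable: each bucket holds a chain of entries.
class ChainedTable {
  constructor(bucketCount = 16) {
    this.buckets = Array.from({ length: bucketCount }, () => []);
  }
  _index(key) {
    let h = 0;
    for (const c of String(key)) h = (h * 31 + c.charCodeAt(0)) >>> 0;
    return h % this.buckets.length;
  }
  set(key, value) {
    const bucket = this.buckets[this._index(key)];
    const entry = bucket.find((e) => e.key === key);
    if (entry) entry.value = value; else bucket.push({ key, value });
  }
  get(key) {
    const entry = this.buckets[this._index(key)].find((e) => e.key === key);
    return entry ? entry.value : undefined;
  }
  delete(key) {
    // Closed addressing frees the slot immediately, with no tombstones
    // and no blocking of other requests (unlike open addressing).
    const bucket = this.buckets[this._index(key)];
    const i = bucket.findIndex((e) => e.key === key);
    if (i >= 0) { bucket.splice(i, 1); return true; }
    return false;
  }
}
```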
What is an RPA CoE? Session 1 – CoE Vision, by DianaGray10
In the first session, we will review the organization's vision and how it impacts the CoE structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
"What does it really mean for your system to be available, or how to define w...Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
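One concrete piece of the SLO exercise is the error budget: how much downtime a given availability target actually allows. A quick calculation (the 30-day window is an assumption, not from the talk):

```javascript
// Downtime allowed by an availability SLO over a window, in minutes.
function allowedDowntimeMinutes(slo, days = 30) {
  return (1 - slo) * days * 24 * 60;
}
// "Three nines" (0.999) over 30 days is roughly 43 minutes of downtime;
// "four nines" (0.9999) leaves only about 4.3 minutes.
```

Framing targets as a budget like this is exactly where understanding the business matters: the cost of tightening the SLO must be weighed against what an extra nine buys.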
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectors, by DianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Lee Barnes - Path to Becoming an Effective Test Automation Engineer.pdf, by leebarnesutopia
So… you want to become a Test Automation Engineer (or hire and develop one)? While there’s quite a bit of information available about important technical and tool skills to master, there’s not enough discussion around the path to becoming an effective Test Automation Engineer who knows how to add VALUE. In my experience this has led to a proliferation of engineers who are proficient with tools and building frameworks but have skill and knowledge gaps, especially in software testing, that reduce the value they deliver with test automation.
In this talk, Lee will share his lessons learned from over 30 years of working with, and mentoring, hundreds of Test Automation Engineers. Whether you’re looking to get started in test automation or just want to improve your trade, this talk will give you a solid foundation and roadmap for ensuring your test automation efforts continuously add value. This talk is equally valuable for both aspiring Test Automation Engineers and those managing them! All attendees will take away a set of key foundational knowledge and a high-level learning path for leveling up test automation skills and ensuring they add value to their organizations.
QR Secure: A Hybrid Approach Using Machine Learning and Security Validation F..., by AlexanderRichford
QR Secure: A Hybrid Approach Using Machine Learning and Security Validation Functions to Prevent Interaction with Malicious QR Codes.
Aim of the Study: The goal of this research was to develop a robust hybrid approach for identifying malicious and insecure URLs derived from QR codes, ensuring safe interactions.
This is achieved through:
Machine Learning Model: Predicts the likelihood of a URL being malicious.
Security Validation Functions: Ensures the derived URL has a valid certificate and proper URL format.
This innovative blend of technology aims to enhance cybersecurity measures and protect users from potential threats hidden within QR codes 🖥 🔒
This study was my first introduction to using ML which has shown me the immense potential of ML in creating more secure digital environments!
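The security validation side described above can be sketched with the WHATWG URL parser; the certificate check is only stubbed as a comment because it requires a network call, and the function name is illustrative:

```javascript
// Format check: the string must parse as a URL and use http(s),
// which also rejects schemes like javascript: or data:.
function hasValidFormat(candidate) {
  try {
    const url = new URL(candidate);
    return url.protocol === "http:" || url.protocol === "https:";
  } catch {
    return false; // not parseable as a URL at all
  }
}
// A certificate check would then connect over TLS (e.g. with node:https)
// and reject on handshake errors before the ML model scores the URL.
```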
Northern Engraving | Modern Metal Trim, Nameplates and Appliance Panels, by Northern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
"$10 thousand per minute of downtime: architecture, queues, streaming and fin...Fwdays
Direct losses from one minute of downtime = $5-10 thousand. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for the development of highly loaded fintech solutions. We will focus on using queues and streaming to efficiently work and manage large amounts of data in real-time and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
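The role queues play in absorbing load spikes can be shown with a toy bounded buffer; a production fintech system would use Kafka, RabbitMQ, or similar rather than an in-process structure like this:

```javascript
// Toy bounded queue: producers are pushed back when the buffer is full,
// instead of the system accepting more work than it can process.
class BoundedQueue {
  constructor(capacity) {
    this.capacity = capacity;
    this.items = [];
  }
  offer(item) {
    if (this.items.length >= this.capacity) return false; // backpressure
    this.items.push(item);
    return true;
  }
  poll() {
    return this.items.shift(); // undefined when empty
  }
}
```

Rejecting (or buffering elsewhere) the overflow is what keeps latency bounded for the requests already in flight.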
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
QA or the Highway - Component Testing: Bridging the gap between frontend appl..., by zjhamm304
These are the slides for the presentation, "Component Testing: Bridging the gap between frontend applications" that was presented at QA or the Highway 2024 in Columbus, OH by Zachary Hamm.
12. 9 February 2017
Simplify the debugging and onboarding process, because it's easier to reason about the state.
Simpler to test, and easier to implement difficult features like pagination, because reducers can be nested.
Because of the transactional nature of the actions, you can go back and forward and recreate the state at a specific point in time.
13.
React/Redux developer tools.
Change reducers on the fly.
See the history of the state.
See the state being recalculated.
14.
No dispatcher.
No store registration.
No tricky async code to debug.
Just 99 lines of code you can read.
17.
Given the same input, will always return the same output.
Side-effect-less (doesn’t mutate input or external state).
Relies on no external state.
23.
They are pure functions.
Given the state and the action, they return the new state.
Should be the single source of truth.
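A minimal reducer matching that description (the action types and state shape are illustrative, not from the deck):

```javascript
// A pure reducer: (state, action) -> new state, never mutating its input.
function counterReducer(state = { count: 0 }, action) {
  switch (action.type) {
    case "INCREMENT":
      // Return a new object rather than modifying `state` in place.
      return { ...state, count: state.count + 1 };
    case "DECREMENT":
      return { ...state, count: state.count - 1 };
    default:
      return state; // unknown actions leave the state untouched
  }
}
```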
24.
Reducers should never store functions.
In the reducer you should always return a new object.
If you have nested objects you would need to nest Object.assign().
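Nesting Object.assign() for such an update looks like this (the state shape is illustrative):

```javascript
const state = { filter: "all", user: { id: 1, name: "Ada" } };

// Rename the user without mutating `state` or `state.user`.
const next = Object.assign({}, state, {
  user: Object.assign({}, state.user, { name: "Grace" })
});
```

Modern code usually writes the same thing with spread syntax: `{ ...state, user: { ...state.user, name: "Grace" } }`.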
25.
Where possible your state should be flat dictionaries and should contain atomic data (BCNF).
The store should not implement UI-specific logic, so it will be reusable.
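A flat, dictionary-shaped state as described above might look like this (the entity names are illustrative):

```javascript
// Entities keyed by id; relations stored as foreign keys, not nested objects.
const storeState = {
  usersById: { 1: { id: 1, name: "Ada" } },
  postsById: { 10: { id: 10, authorId: 1, title: "Hello" } },
  postIds: [10]
};
```

Keeping each entity in exactly one place means an update touches one dictionary entry instead of every nested copy.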
39.
                 Presentational Components           Container Components
Purpose          How things look (markup, styles)    How things work (data fetching, state updates)
Aware of Redux   No                                  Yes
To read data     Read data from props                Subscribe to Redux state
To change data   Invoke callbacks from props         Dispatch Redux actions
Are written      By hand                             Usually generated by React Redux