Predicting the most relevant ad at any point in time for every individual is how Rocket Fuel optimizes ROI for an advertiser. One of the factors influencing this prediction is a consumer's online interactions and behavioral profile. With more than 45 billion interactions processed daily, this data runs into several petabytes in our Hadoop warehouse. Running machine-learning and artificial-intelligence algorithms at this vast scale requires addressing many practical issues. First, behavioral patterns are short-lived, so to accurately reflect a consumer's tendencies, we need to curate and refresh his or her profile as quickly as possible, while avoiding multiple scans over the raw data and dealing with issues like transient system outages. Second, we must address the difficulty of building models that utilize behavioral profiles without overwhelming our Hadoop cluster. At this scale, frequent refreshes of several models can place an undue burden on even a thousand-node cluster. In this talk, we will dive into (a) the practical challenges involved in designing a highly scalable and efficient solution for building behavioral profiles on the Hadoop framework, and (b) techniques for ensuring the reliability and availability of mission-critical machine-learning pipelines.
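The abstract stays at a high level, but the "refresh without rescanning raw data" idea can be sketched in a few lines. This is an illustration only; the function names and the decay constant are assumptions of ours, not Rocket Fuel's actual pipeline:

```python
from collections import defaultdict

DECAY = 0.9  # hypothetical per-refresh decay: behavioral patterns are short-lived

def refresh_profiles(profiles, new_events):
    """Fold one batch of raw events into existing profiles in a single pass.

    profiles:   {user_id: {feature: weight}}
    new_events: iterable of (user_id, feature) pairs
    """
    # Age existing weights so stale behavior fades out over refreshes.
    for features in profiles.values():
        for feature in features:
            features[feature] *= DECAY
    # One scan over the new events only -- no rescan of historical raw data.
    for user_id, feature in new_events:
        profiles.setdefault(user_id, defaultdict(float))[feature] += 1.0
    return profiles

profiles = refresh_profiles({}, [("u1", "sports"), ("u1", "sports"), ("u2", "travel")])
profiles = refresh_profiles(profiles, [("u1", "travel")])
```

Because each refresh touches only the incremental batch, a transient outage can be handled by replaying just the missed batch rather than rebuilding profiles from scratch.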
Maintaining large-scale distributed systems is a herculean task, and Hadoop is no exception. The scale and velocity at which we operate at Rocket Fuel present a unique challenge. We saw a fivefold growth in petabytes of data and a fivefold increase in the number of machines, all in just a year's time. As Hadoop became critical infrastructure at Rocket Fuel, we had to ensure scalability and high availability so our reporting, data mining, and machine learning could continue to excel. We also had to ensure business continuity, with disaster recovery plans, in the face of this drastic growth. In this presentation, we will discuss what worked well for us and what we learned (the hard way). Specifically, we will (a) describe how we automated installation and dynamic configuration using Puppet and InfraDB, (b) describe performance tuning for scaling Hadoop, (c) talk about the good, bad, and ugly of scheduling and multi-tenancy, (d) detail some of the hard-fought issues, (e) outline our business continuity plans and disaster recovery, (f) touch upon how we monitor our monster Hadoop cluster, and finally, (g) share our experience of YARN at scale at Rocket Fuel.
How to use Impala query plan and profile to fix performance issues (Cloudera, Inc.)
Apache Impala is a best-of-breed massively parallel processing SQL query engine and a fundamental component of the big data software stack. Juan Yu demystifies the cost model the Impala planner uses, explains how Impala optimizes queries, and shows how to identify performance bottlenecks through query plans and profiles and how to drive Impala to its full potential.
Doug Cutting discusses:
- A brief history of Spark and its rise in popularity across developers and enterprises
- Spark's advantages over MapReduce
- The One Platform Initiative and the roadmap for Spark
- The future of data processing in Hadoop
Amazon EC2 provides resizable compute capacity in the cloud, making web scale computing easier. It offers a wide variety of compute instances and is well suited to every imaginable use case, from static websites to on-demand, high-performance supercomputing, all with flexible pricing options. In this session, learn about the latest Amazon EC2 features and capabilities, including new instance families, the differences among their hardware types and capabilities, and their optimal use cases. Also discover best practices for optimizing your expenditure and getting the most benefit from your EC2 instances while saving time and money.
HiveServer2 provides a multi-tenant service endpoint for executing Hive queries concurrently. It supports authentication and authorization, serves as a JDBC endpoint for users to connect and run queries via various tools, maintains sessions and warm containers for faster query processing, provides caching at multiple levels, and much more. In other words, it is an integral component of any Hive deployment. HiveServer2 deployments, however, often face performance and reliability issues, at times leading to catastrophic failures. At Qubole, we have augmented HiveServer2 to utilize the capabilities of the cloud and offer an enterprise-ready, scalable, and stable HiveServer2 (HS2) service.
The HS2 experience on the cloud at Qubole, our primary deployment platform, has been enhanced to scale automatically based on the customer's workload; our solution adds and gracefully removes HS2 instances as required, making the HS2 service not only self-sufficient at scale but also fault-tolerant. We have implemented load balancing of queries based on resource utilization across HS2 instances to provide a reliable, efficient, and cost-effective solution. A health-monitoring service, based on past learnings and insights from running HS2 in customer deployments and implemented on top of this scalable HS2 service, acts as the foundation for a battle-tested, enterprise-ready HS2 solution. In this talk, we will share the details of this implementation and the challenges faced in providing an auto-scaling, highly performant, and reliable HS2 experience in the cloud.
Topics include:
* Workload-aware autoscaling for HS2 clusters.
* Agent-based adaptive load balancing of Hive queries on multi-tenant HS2 clusters.
* Durability monitoring using failure semantics and automated measures to provide reliability.
* Enterprise level security for HS2 on the cloud.
* Metrics, monitoring and alerting around the HS2 service.
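The "workload-aware autoscaling" bullet above ultimately reduces to a sizing decision. As a sketch only (the thresholds and names below are illustrative assumptions, not Qubole's implementation), that decision can be expressed as a pure function:

```python
import math

def desired_hs2_instances(active_queries, queries_per_instance=20,
                          min_instances=2, max_instances=50):
    """Pick a cluster size from current load, clamped to safe bounds.

    queries_per_instance is an assumed per-instance concurrency budget;
    min_instances keeps headroom for fault tolerance even when idle.
    """
    needed = math.ceil(active_queries / queries_per_instance)
    return max(min_instances, min(needed, max_instances))
```

A real service would additionally drain sessions before removing an instance (the "graceful removal" the abstract mentions), rather than terminating it the moment the target size drops.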
Hadoop 3.0 has been years in the making, and now it's finally arriving. Andrew Wang and Daniel Templeton offer an overview of new features, including HDFS erasure coding, YARN Timeline Service v2, YARN federation, and much more, and discuss current release management status and community testing efforts dedicated to making Hadoop 3.0 the best Hadoop major release yet.
Amazon EC2 provides resizable compute capacity in the cloud, making web scale computing easier. It offers a wide variety of compute instances and is well suited to every imaginable use case, from static websites to on-demand, high-performance supercomputing, all with flexible pricing options. In this session, learn about the latest Amazon EC2 features and capabilities, including new instance families, understand the differences among their hardware types and capabilities, and explore their optimal use cases.
How to build leakproof stream processing pipelines with Apache Kafka and Apac... (Cloudera, Inc.)
When Kafka stream processing pipelines fail, users can be left panicking about data loss when restarting their applications. Jordan Hambleton and Guru Medasani explain how offset management lets users restore the state of the stream throughout its lifecycle, deal with unexpected failures, and improve the accuracy of results.
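The abstract describes the idea rather than the mechanics. The sketch below simulates it in plain Python with no real Kafka broker; the dict-based store and record format are invented for illustration. The key move is persisting processed results together with the next offset, so a restart resumes exactly where the last successful batch ended:

```python
def process_batch(records, start_offset, store):
    """Process records, then persist results and the next offset together.

    store is a dict standing in for an external store (e.g. HBase or
    ZooKeeper) that holds both outputs and the last committed offset.
    """
    results = [value.upper() for _, value in records]  # stand-in transformation
    # Written as one step so that, conceptually, neither the output nor the
    # offset survives a crash without the other.
    store["output"] = store.get("output", []) + results
    store["offset"] = start_offset + len(records)
    return store["offset"]

def resume_offset(store):
    """On restart, pick up from the last committed offset (0 if none)."""
    return store.get("offset", 0)

store = {}
records = [(0, "a"), (1, "b"), (2, "c")]
next_off = process_batch(records, resume_offset(store), store)
```

With offsets managed this way, a restarted consumer re-reads only uncommitted records instead of replaying or skipping the whole stream.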
The TCO Calculator - Estimate the True Cost of Hadoop (MapR Technologies)
http://bit.ly/1wsAuRS - There are many hidden costs in Apache Hadoop that play out differently across Hadoop distributions. With the new MapR TCO calculator, organisations have a simple, reliable, fact-based tool for comparing costs.
Challenges for running Hadoop on AWS - Advanced (AWS Meetup, Andrei Savu)
Nowadays we have all the tools we need to spin up and tear down clusters with hundreds of nodes in minutes, and this puts more pressure on the tools we use to configure and monitor our applications. The challenge is even more interesting when we have to deal with long-running distributed data storage and processing systems like Hadoop. In this talk, we will look into some of the challenges of creating and managing Hadoop clusters in AWS, discuss improvement opportunities in monitoring (e.g., detecting and dealing with instance failure, resource contention, and noisy neighbors), and touch on the future and how we should go about decoupling workload dispatch from cluster lifecycle.
Best Practices for Virtualizing Apache Hadoop (Hortonworks)
Join this webinar to discuss best practices for designing and building a solid, robust, and flexible Hadoop platform on an enterprise virtual infrastructure. Attendees will learn about the flexibility and operational advantages of virtual machines, such as fast provisioning, cloning, high levels of standardization, hybrid storage, vMotion, increased stabilization of the entire software stack, high availability, and fault tolerance. This is a can't-miss presentation for anyone wanting to understand the design, configuration, and deployment of Hadoop in virtual infrastructures.
CES - C Space Storytelling Session - Programmatic TV Advertising (Rocket Fuel Inc.)
Featuring Randy Wootton from Rocket Fuel, Michael Giardina from Glenfiddich, James Shears from DISH, and Jarod Caporino from Resolute Digital.
Hear how real-time bidding helped Glenfiddich and Resolute Digital take advantage of audiences targeted on a household level to maximize performance in Q4, and what this new method of buying means for the future of TV.
Rocket Fuel Cross Device and PTV 12-9-15 shared v2 (Rocket Fuel Inc.)
Thank you for joining the Chicago Rocket Fuel Cross Device and Programmatic TV lunch & learn on Wednesday, December 12 at the Old Crow Smokehouse! For your review is the presentation deck of this event. Thank you again for joining us!
Are Programmatic Direct and Automated Guaranteed the same? How does Private Marketplace differ from Programmatic Direct? This chart explains the various digital media buying approaches and how they differ.
Rocket Fuel's Traffic Quality Webinar featuring Ari Levenfeld, Rocket Fuel's Senior Director of Privacy and Inventory Quality and guest speaker Susan Bidel, Senior Analyst with Forrester Research Inc.
It’s an amazing time to be a marketer—but also an incredibly challenging one. Consumer attention is fractured across countless channels and screens while data grows exponentially. Meanwhile, CMOs and agencies are under tremendous pressure to produce more results with less budget.
But super-intelligent programmatic marketing can help you meet these demands head-on, and come out with better results than ever before. Rocket Fuel's VP of Marketing Rhonda Shantz will walk through how marketers can take advantage of programmatic today, including how to:
-Engage consumers across all of their devices to drive them from awareness to conversion and beyond.
-Target people, not devices.
-Leverage machine learning for higher conversions and deeper marketer insight to make more meaningful connections across the consumer journey.
-Connect offline CRM data to access anonymous online profiles to address offline consumers in any programmatic channel.
ONLY OOYALA HAS YOU COVERED FROM SCRIPT TO SCREEN
Workflow, streaming, analytics and monetization: the complete suite of data-driven software and services to bring your OTT business to life. Learn how companies around the world achieved spectacular results that only Ooyala can deliver.
Maintaining large-scale distributed systems is a herculean task, and Hadoop is no exception. The scale and velocity at which we operate at Rocket Fuel present a unique challenge in maintaining scalability, high availability, and business continuity with Hadoop clusters at the core of reporting, data mining, and machine learning. In this presentation we will describe: our automated installation and dynamic configuration; performance tuning for scaling Hadoop; the good, bad, and ugly of scheduling and multi-tenancy; some of the hard-fought issues; our business continuity plans and disaster recovery; how we monitor our monster Hadoop cluster; and our experience of YARN at scale at Rocket Fuel.
How we solved Real-time User Segmentation using HBase (DataWorks Summit)
At RichRelevance, we serve 10 of the top 20 Internet retail chains and deliver more than $5.5 billion in attributable sales. Every 21 milliseconds a shopper clicks on a recommendation we have delivered, and we serve over 850 million product recommendations daily. Our Hadoop infrastructure has the capacity to handle upwards of 1.5+ PB. Behavioral targeting, specifically user segmentation and building personas, is critical for us in generating triggers when a user is added to a segment or switches segments. In this presentation, we will demonstrate not only how events are captured, but also how they are stored in HBase in real time. It is critical to design the system so it can handle thousands of writes per second and, at the same time, be able to query any combination of behavioral attributes in HBase through real-time APIs. This session will walk attendees through the entire design and architecture, starting from data ingestion, schema design, and access patterns, as well as major problems like sharding and hotspotting. Furthermore, performance metrics will be presented, including the number of reads/writes per second and details of the cluster configuration.
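One concrete piece of the hotspotting problem mentioned above: monotonically increasing row keys (such as timestamps) send all writes to a single HBase region. A common remedy, sketched here in plain Python (the bucket count and key layout are illustrative assumptions, not RichRelevance's actual schema), is to prefix each key with a salt derived from the user id:

```python
import hashlib

N_BUCKETS = 16  # illustrative; typically on the order of the region count

def salted_row_key(user_id, event_ts):
    """Spread writes across regions while keeping one user's rows contiguous."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    salt = int(digest, 16) % N_BUCKETS
    # Salt first => writes scatter across regions; user id next => all rows
    # for one user still land in a single contiguous range for scans.
    return f"{salt:02d}|{user_id}|{event_ts:013d}"

keys = [salted_row_key(f"user{i}", 1_700_000_000_000) for i in range(100)]
buckets = {k.split("|")[0] for k in keys}
```

Because the salt is a deterministic hash of the user id rather than a random value, point reads and per-user scans can recompute the prefix instead of searching every bucket.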
Webinar: LiveAction 4.0 single pane of glass visibility for large enterprise ... (LiveAction IT)
The new LiveAction 4.0 provides major scalability enhancements that enable users to easily troubleshoot and resolve performance issues in networks with multiple data centers and geographically dispersed branch offices.
This webinar is highly recommended for enterprise network administrators managing thousands of network routers and switches. In this session, we will explore the following features:
1. “Single-pane of glass” visibility and management
2. Large network topology and end-to-end visual path analysis
3. Device grouping for common operations across multiple devices with ease
4. Bulk discovery for faster, more automatic detection of devices
5. Bulk configuration for groups of devices and/or sites
Serverless Applications at Global Scale with Multi-Regional Deployments - AWS... (Amazon Web Services)
Learning Objectives:
- Input and decision points when architecting a serverless multi-regional application
- Active-active Multi-Regional API with API Gateway and Lambda
- Replication with DynamoDB
[Redis conf18] The Versatility of Redis (Eiti Kimura)
This presentation shows Movile/Wavy use cases, presented at RedisConf18 in San Francisco, California. Here you can see how versatile Redis is and how you can use it to leverage your business!
Virtual SAN: It’s a SAN, it’s Virtual, but what is it really? (DataCore Software)
What do you think of when you hear the words “Virtual SAN”? For some, it may mean addressing application latency and infrastructure costs through consolidation. For others, it may mean addressing potential single points of failure. Regardless of the use case, Virtual SANs are becoming one of the hottest software-defined storage solutions for IT organizations to maximize storage resources, lower overall TCO, and increase availability of critical applications and data.
This presentation introduces the concept of Virtual SAN and does a technical deep dive on the most common use cases and deployment models involved with a DataCore Virtual SAN solution.
Couchbase Cloud No Equal (Rick Jacobs, Couchbase), Kafka Summit 2020 (HostedbyConfluent)
This session will describe and demonstrate the longstanding integration between Couchbase Server and Apache Kafka and will include descriptions of both the mechanics of the integration and practical situations when combining these products is appropriate.
How to Effectively Plan for Disaster Recovery on AWS (CMP204-S) - AWS re:Inve... (Amazon Web Services)
Although the AWS Cloud provides a new level of durability and resiliency, no workload is immune to disasters—be it due to accidental reasons or malicious intent. Even in the cloud, you have to ensure continuity. Traditional disaster recovery (DR) solutions are not optimized for the cloud and often result in higher costs, increased complexity, and operational challenges. To maintain compliance and business continuity service-level agreements, AWS DR planning requires a completely different approach to deal with cross-account, cross-region workload testing and failover. In this session, learn how you can set up an effective DR plan for your AWS environments. This session is brought to you by AWS partner, Druva.
From Mainframe to Microservices: Vanguard’s Move to the Cloud - ENT331 - re:I... (Amazon Web Services)
Maintaining control of sensitive data is critical in the highly regulated financial investments environment that Vanguard operates in. This need for data control complicated Vanguard's move to the cloud. They needed to expand globally to provide a great user experience while at the same time maintaining their mainframe-based backend data architecture. In this session, Vanguard discusses the creative approach they took to decouple their monolithic backend architecture to empower a microservices architecture while maintaining compliance with regulations. They also cover solutions implemented to successfully meet their requirements for security, latency, and end-state consistency.
NEW LAUNCH! Learn how Fubo is monetizing their content with server side ad in... (Amazon Web Services)
In this session, we will introduce server-side ad insertion, also known as ad stitching. Server-side ad insertion helps you deliver ads that are more relevant to your customers while bypassing ad blockers and lowering latency.
Mindfire Solutions provides expert offshore development services for Ruby on Rails, an open-source web application framework for the Ruby programming language that enables developers to build dynamic, data-driven applications with sustainable efficiency.
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains come only when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
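To make "semantics as predictable inference" concrete (a toy illustration of ours, not the speaker's formalism): if the knowledge graph's relation carries a declared meaning such as transitivity, then some links become predictable from the semantics alone, before any learning happens:

```python
def transitive_closure(edges):
    """Infer every link entailed by a transitive relation (e.g. subclass_of)."""
    inferred = set(edges)
    changed = True
    while changed:
        changed = False
        for a, b in list(inferred):
            for c, d in list(inferred):
                # Transitivity: (a, b) and (b, d) together entail (a, d).
                if b == c and (a, d) not in inferred:
                    inferred.add((a, d))
                    changed = True
    return inferred

kg = {("cat", "mammal"), ("mammal", "animal")}
entailed = transitive_closure(kg)  # ("cat", "animal") is now predictable
```

A link predictor that respects this semantics should score the entailed edges highly; one that ignores it treats the graph as arbitrary symbols.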
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPath Community)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
"Impact of front-end architecture on development cost", Viktor Turskyi (Fwdays)
I have often heard that architecture does not matter for the front-end. I have also seen, many times, developers implement front-end features by simply following a framework's standard conventions, assume that this is enough to launch the project successfully, and then watch the project fail. How can this be prevented, and which approach should you choose? I have launched dozens of complex projects, and in this talk we will analyze which approaches have worked for me and which have not.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to the user experience and to the promise of efficient work through technology; automation is the critical ingredient in realizing that vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
DevOps and Testing slides at DASA Connect
Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also ran an enjoyable workshop with the participants, exploring different ways to think about quality and testing in the various parts of the DevOps infinity loop.
Essentials of Automation: Optimizing FME Workflows with Parameters
Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Key Trends Shaping the Future of Infrastructure
Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.