Bentley ProjectWise is a widely used file management system for engineering projects. Automating processes and file conversions in ProjectWise has caused consultants numerous headaches in recent years. This presentation will explore the inner workings of ProjectWise and FME, walk through a world of complex documentation and experimentation, and explain how to access and make the most of remote APIs with FME.
Cloud computing has been widely used by industry for more than a decade. Many patterns, best practices, and tools have grown up around it, including DevOps; despite that, they do not prevent you from shooting yourself in the foot if misused.
This talk is a summary of practical experience and observations about the most common misuses of DevOps practices when applied to cloud software engineering and operations. AWS is used for the case examples.
[WSO2Con USA 2018] Deploying Applications in K8S and Docker – WSO2
In this slide deck, Lakmal discusses best practices for deploying applications in Docker and Kubernetes, while covering core Docker and Kubernetes concepts.
[WSO2Con USA 2018] Microservices, Containers, and Beyond – WSO2
This slide deck discusses what's next in this highly agile, massively distributed environment. It will focus on fine-tuned DevOps processes, governance, and observability in a massively distributed container native microservices platform.
FME the Workhorse of the Enterprise System – Safe Software
I would like to talk about the enterprise-level system implementation that TransCanada carried out over the last couple of years. FME is the core component of the solution, responsible for all data synchronization between numerous data sources and various input formats.
Planned Topics:
1. FME the connector that makes it attainable
2. FME server architectural setup considerations / workflows control
3. Importance of workflow performance / efficiency
4. Supportability, an aspect that we often forget about
Design Summit - Technology Vision - Oleg Barenboim and Jason Frey – ManageIQ
Oleg and Jason share the vision for the ManageIQ technology, integration with partners, and an overview of the roadmap.
See accompanying video: http://youtu.be/lokMmVCavas
For more on ManageIQ, see http://manageiq.org/
Kubernetes & Google Container Engine @ mabl – Joseph Lust
Validating 100 Million Pages a Month using Kubernetes and Google Container Engine (GKE).
How we used Docker to build our ML testing engine in four months. Lessons learned, best practices, and demonstrations.
Boston Google Cloud Meetup September Presentation @ mabl
https://www.meetup.com/Boston-Google-Cloud-Meetup/events/242964121/
Through the looking glass an intro to scalable, distributed counting in data... – Geoff Cooney
Lightning talk I gave at the GCP Boston meetup as a quick hands-on intro to Google Dataflow. The example is based on the public Pub/Sub topic described here: https://github.com/googlecodelabs/cloud-dataflow-nyc-taxi-tycoon
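As a rough illustration of the kind of distributed counting such a Dataflow pipeline performs (a minimal sketch with invented sample data, not the talk's actual code), here is an Apache Beam pipeline in Python that counts events per key; the streaming version would read from the Pub/Sub topic above and add a windowing step.

```python
# Minimal Apache Beam sketch of distributed counting.
# The in-memory events stand in for the NYC taxi Pub/Sub stream;
# a streaming job would add beam.WindowInto(FixedWindows(60)) after reading.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "ReadEvents" >> beam.Create([
            {"ride_status": "pickup"}, {"ride_status": "dropoff"},
            {"ride_status": "pickup"}, {"ride_status": "pickup"},
        ])
        | "KeyByStatus" >> beam.Map(lambda e: (e["ride_status"], 1))
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```

The same pipeline runs unchanged on the local runner or on Dataflow; only the pipeline options and the source change.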
Tips and tricks to maximize performance and minimize serverless costs with Firebase and Google Cloud Functions. Live examples and analysis to show that GCF is the cheapest function provider, compared to Azure Functions and AWS Lambda.
"Smooth Operator" [Bay Area NewSQL meetup]Kevin Xu
This slide deck was delivered at the Bay Area NewSQL meetup in California, on how TiDB, an open-source NewSQL distributed database, is deployed and managed on any Kubernetes-enabled cloud environment by applying the Operator pattern.
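To make the Operator pattern concrete, here is a minimal sketch using the kopf framework in Python. The resource group, kind, and spec fields are hypothetical stand-ins for illustration; TiDB's real operator is a separate, far more complete project.

```python
# Sketch of the Kubernetes Operator pattern with kopf.
# The "example.com/tidbclusters" resource and its spec are invented.
import kopf

@kopf.on.create("example.com", "v1", "tidbclusters")
def on_create(spec, name, logger, **kwargs):
    # Reconcile: a real operator would create StatefulSets, Services,
    # and config here so the cluster matches the declared spec.
    replicas = spec.get("replicas", 3)
    logger.info(f"Provisioning TiDB cluster {name} with {replicas} replicas")
    return {"phase": "Provisioning"}

@kopf.on.update("example.com", "v1", "tidbclusters")
def on_update(spec, logger, **kwargs):
    # Scale or reconfigure the running cluster toward the new spec.
    logger.info(f"Reconciling update, desired replicas: {spec.get('replicas')}")
```

Run with `kopf run operator.py` against a cluster where the matching CRD is installed; the handlers fire whenever a custom resource is created or updated.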
As your company accumulates more data, it's important to leverage all of it to develop new advanced machine learning models. And now, you can scale Spark using Kubernetes. Thanks to the native integration between Apache Spark and Kubernetes, scaling data processing has never been easier. Apache Spark is a well-designed, high-level framework that can increase your data processing speed and accuracy. It can handle batch and real-time analytics and data processing workloads. This efficient, high-level technology can be used from Java, Scala, Python, and R. Paired with Kubernetes, you can get twice the efficiency. Kubernetes is a great engine and the most popular framework for managing compute resources. Unfortunately, running Apache Spark on Kubernetes can be a pain for first-time users.
Join cnvrg.io CTO Leah Kolben as she walks you through a step-by-step tutorial on how to run Spark on Kubernetes. You'll have Spark up and running on Kubernetes in just 30 minutes.
Running Spark on Kubernetes will help you:
Process larger amounts of data
Segment your data into sub groups
Watch all our webinars at https://cnvrg.io/webinars-and-workshops/
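As a rough sketch of the kind of setup the webinar walks through, here is how a PySpark session can be pointed at a Kubernetes cluster. The API server URL, container image, and namespace are placeholder assumptions, not values from the talk.

```python
# Sketch: running Spark on Kubernetes from PySpark (client mode).
# The master URL, image, and namespace below are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("k8s://https://kubernetes.api.server:6443")  # placeholder API server
    .appName("spark-on-k8s-demo")
    .config("spark.kubernetes.container.image", "my-registry/spark-py:3.4")  # placeholder image
    .config("spark.kubernetes.namespace", "spark-jobs")  # placeholder namespace
    .config("spark.executor.instances", "4")
    .getOrCreate()
)

# Trivial job to confirm executors come up as pods in the cluster.
df = spark.range(1_000_000)
print(df.selectExpr("sum(id)").collect())
spark.stop()
```

Production jobs are more often packaged into an image and launched with `spark-submit --deploy-mode cluster`, but the configuration keys are the same.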
With Cloud Functions you write simple functions that each perform one unit of execution.
Cloud Functions can be written in JavaScript, Python 3, or Go,
and you simply deploy a function bound to the event you want and you are done.
In our case we will leverage Cloud Functions to manage our K8s clusters based on working hours in order to save budget.
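A minimal sketch of that idea, assuming a GKE cluster and the google-cloud-container client library; the project, zone, cluster, and node pool names are placeholders, not values from the talk.

```python
# Sketch: a Cloud Function that scales a GKE node pool to zero outside
# working hours. Project/zone/cluster/pool names are placeholders.
from google.cloud import container_v1

def scale_node_pool(request):
    client = container_v1.ClusterManagerClient()
    name = ("projects/my-project/locations/europe-west1-b/"
            "clusters/dev-cluster/nodePools/default-pool")
    # node_count=0 parks the cluster for the night; a sibling function
    # scheduled for the morning would restore the daytime size.
    op = client.set_node_pool_size(request={"name": name, "node_count": 0})
    return f"Resize started: {op.name}"
```

Paired with Cloud Scheduler (HTTP or Pub/Sub trigger), one function parks the cluster at the end of the workday and another restores it at the start.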
At Opendoor, we do a lot of big data processing, and use Spark and Dask clusters for the computations. Our machine learning platform is written in Dask and we are actively moving data ingestion pipelines and geo computations to PySpark. The biggest challenge is that jobs vary in memory and CPU needs, and the load is not evenly distributed over time, which causes our workers and clusters to be over-provisioned. In addition to this, we need to enable data scientists and engineers to run their code without having to upgrade the cluster for every request or deal with dependency hell.
To solve all of these problems, we introduced a lightweight integration across popular tools like Kubernetes, Docker, Airflow and Spark. Using a combination of these tools, we are able to spin up on-demand Spark and Dask clusters for our computing jobs, bring down cost using autoscaling and spot pricing, and unify DAGs across many teams with different stacks on a single Airflow instance, all at minimal cost.
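A hedged sketch of that pattern (not Opendoor's actual code): an Airflow DAG launches a containerized job as a Kubernetes pod, and the pod's resource requests let the cluster autoscaler provision and later reclaim nodes. It assumes Airflow 2.x with the cncf.kubernetes provider; the image, namespace, and command are invented.

```python
# Sketch: on-demand containerized compute from an Airflow DAG.
# Assumes Airflow 2.x + the cncf.kubernetes provider; names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

with DAG(
    dag_id="on_demand_spark_job",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_job = KubernetesPodOperator(
        task_id="spark_batch",
        name="spark-batch",
        namespace="data-jobs",                      # placeholder namespace
        image="my-registry/spark-job:latest",       # placeholder job image
        cmds=["python", "/app/job.py"],
        # The pod's resource requests drive the cluster autoscaler; running
        # on a spot/preemptible node pool keeps the cost down.
        is_delete_operator_pod=True,  # clean up the pod when the task ends
    )
```

Because each job brings its own image, teams get isolated dependencies without upgrading a shared cluster for every request.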
Moving 150 TB of data resiliently on Kafka With Quorum Controller on Kubernet... – HostedbyConfluent
At Wells Fargo, we move 150 TB of log data from our syslogs to Splunk forwarders, where it gets indexed and organized for analytic queries. As we modernize and migrate our applications to our hybrid cloud, the performance expectations for this infrastructure will increase proportionately. Those improvements include the resilience of the end-to-end infrastructure. First, we decoupled the applications from their logging interface through a log library, which split the streams of logs from their sources into Kafka, which routed them to two separate destinations, Splunk and ELK. We also used Prometheus and Grafana for monitoring the metrics, and we deployed Kafka, Splunk, ELK, Prometheus and Grafana on Kubernetes clusters. Confluent had released a version of Kafka without ZooKeeper, replacing its functionality with the Quorum Controller. The Quorum Controller version exhibited better disposability, one of the twelve factors that matter for cloud-nativeness. We packaged this version with the KEDA Kubernetes operator and deployed it for auto-scaling, and we tested it by simulating the amount of log data that we typically generate in production. On top of this we have also implemented distributed tracing and are working to make it just as resilient. We will share our lessons learned, and the patterns and practices to modernize both our underlying runtime platforms and our applications with highly performing and resilient event-driven architectures.
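As an illustrative sketch of the fan-out described above (not Wells Fargo's actual log library), a small Kafka consumer can mirror one source stream into two sink topics, one for the Splunk route and one for the ELK route; the broker address and topic names are invented.

```python
# Sketch: fan one log stream out to two sink topics (Splunk- and ELK-bound).
# Broker address and topic names are illustrative placeholders.
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "log-router",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "kafka:9092"})

consumer.subscribe(["syslog.raw"])
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        # Route every record to both downstream pipelines.
        producer.produce("logs.splunk", value=msg.value())
        producer.produce("logs.elk", value=msg.value())
        producer.poll(0)  # serve delivery callbacks
finally:
    consumer.close()
    producer.flush()
```

Keeping the split inside Kafka means either sink can fall behind or fail independently without back-pressuring the applications that produce the logs.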
Martins Paurs from Telia Latvija came to tell us how his company has leveraged CloudStack to help evolve its business from a traditional telco into a modern cloud provider.
Life of a startup - Sjoerd Mulder - Codemotion Amsterdam 2017 – Codemotion
Building a minimum viable product in 3 months is easy. Scaling it towards a reactive system that can handle thousands of requests per second and deploying new versions without causing a denial of service is another challenge. Find out how at Crobox we scaled from a single machine (and single point of failure) towards the highly available server cluster we are now running. On this journey you can also learn how we solved challenges with monitoring, logging and deployments.
From business requirements to working pipelines with Apache Airflow – Derrick Qin
In this talk we will be building Airflow pipelines. We'll look at real business requirements and walk through pipeline design, implementation, testing, deployment and troubleshooting - all while adhering to idempotency and the ability to replay your past data through the pipelines.
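To illustrate the idempotency and replay idea (a generic sketch, not the speaker's code): keying each run's output to its logical date means re-running any past interval rewrites the same partition instead of duplicating data. The table path is invented, and recent Airflow 2.x is assumed.

```python
# Sketch: an idempotent, replayable Airflow task. Re-running a past date
# rebuilds the same dated partition, so backfills are safe.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_partition(ds: str, **context):
    # `ds` is the logical run date (YYYY-MM-DD). Deriving the output
    # location from it makes the task deterministic per interval.
    output_path = f"/data/warehouse/orders/dt={ds}/part.parquet"  # placeholder path
    print(f"(Re)building partition at {output_path}")

with DAG(
    dag_id="idempotent_orders_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=True,  # enables replaying historical intervals
) as dag:
    PythonOperator(task_id="load_partition", python_callable=load_partition)
```

With this shape, `airflow dags backfill` over any date range simply overwrites the affected partitions, which is exactly the replay property the talk emphasizes.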
Dutch Oracle Architects Platform - Reviewing Oracle OpenWorld 2017 and New Tr... – Lucas Jellema
Not since the rise of Service Oriented Architecture (and the supporting Fusion Middleware technology) over a decade ago have we seen so much rapid change in terms of application and infrastructure architecture. Cloud, Microservices and DevOps are perhaps the most explicit examples – but many other developments in technology, architecture and even the industry at large have an impact on how enterprises consider and employ IT – such as machine learning, IoT, blockchain.
In this session for (infrastructure, solution, application, enterprise, security, data) architects, we will present the main stories, roadmaps and technologies from Oracle OpenWorld 2017 (and JavaOne) that influence, shape and enable architecture. We will brainstorm together on the consequences of the new directions outlined by Oracle, and coming our way from other quarters. We are seeing a lot of change. New opportunities arise that may become challenges or threats if we fail to recognize and embrace the change in time. This session will help us all get a better handle on the winds in enterprise IT in general and in Oracle land in particular.
Among the topics we will present and discuss are:
- The Only Way is Up – the inevitable and imminent move from on premises to the cloud, and upwards in the stack – from IaaS to SaaS
- Security and Ops in a hybrid landscape (multiple clouds & on premises, multiple technologies & interaction channels)
- Autonomous Database – what, when, how
- Oracle’s cloud strategy, High PaaS and Low PaaS, Open [source] technology (star of the show: Apache Kafka) and the commoditization of the traditional Oracle platform
- Container and Cloud Native at Oracle Cloud (Docker, Kubernetes Container Platform, Wercker, Istio Service Mesh, CNCF)
- Serverless
- Java Reborn – for microservices and cloud, modularized (highlights from the JavaOne conference)
- Disruptive: Blockchain, IoT, Machine Learning
***Project Summary***
A well-established SaaS company in North America recently migrated workloads of 50,000 virtual servers and five (5) petabytes of data with a MySQL database backend from on-premises data center infrastructure to Google Cloud Platform (GCP) through a 'lift and shift' cloud migration methodology.
They are looking to expand their SaaS offering and customer base outside of North America and, at the same time, optimize the cloud platform for high availability, scalability, and resilience.
Video and slides synchronized, mp3 and slide download available at URL https://bit.ly/2OUz6dt.
Chris Riccomini talks about the current state-of-the-art in data pipelines and data warehousing, and shares some of the solutions to current problems dealing with data streaming and warehousing. Filmed at qconsf.com.
Chris Riccomini works as a Software Engineer at WePay.
Parasoft Testing anything, any time with containerized service virtualization – ChantalWauters
Continuous integration and delivery makes early and fast testing mission-critical for a lot of organizations. However, test execution is often blocked by application dependencies that are unavailable, lack the right amount of test data, or are otherwise access-restricted.
Service virtualization is an approach that can be used to create, deploy and exercise virtual assets that your test team has full control over.
Now, by combining service virtualization with container and cloud technology, like Docker and Azure, development teams can create, share, use and destroy test environments on-demand, in seconds. It allows easy integration into continuous integration and delivery pipelines, enabling teams to regain full control over their test environments and the testing process.
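As a hedged illustration of on-demand test environments (using the Docker SDK for Python rather than Parasoft's own tooling), a test suite can create and destroy a throwaway service container around its run; the image and port are placeholders.

```python
# Sketch: spin up a disposable service container for a test run, then
# destroy it. Image and port values are illustrative placeholders.
import docker

client = docker.from_env()
container = client.containers.run(
    "nginx:alpine",          # stand-in for a virtualized service/asset
    detach=True,
    ports={"80/tcp": 8080},  # host port for the test suite to hit
)
try:
    container.reload()
    print(f"Test environment up: {container.status}")
    # ... run API tests against http://localhost:8080 here ...
finally:
    container.remove(force=True)  # tear the environment down in seconds
```

The same create/use/destroy lifecycle slots naturally into a CI pipeline stage, which is what gives teams back control over their test environments.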
Learn how Autodesk broke the 300,000-issue barrier without impacting performance, keeping excellent uptime, with more than 3,000 registered users and an average of 1,800 concurrent users. In this session you will discover the hardware architecture, system settings and other interesting data from Autodesk's experience in the field.
[QCon.ai 2019] People You May Know: Fast Recommendations Over Massive Data – Sumit Rangwala
The “People You May Know” (PYMK) recommendation service helps LinkedIn’s members identify other members that they might want to connect to and is the major driver for growing LinkedIn's social network. The principal challenge in developing a service like PYMK is dealing with the sheer scale of computation needed to make precise recommendations with a high recall. PYMK service at LinkedIn has been operational for over a decade, during which it has evolved from an Oracle-backed system that took weeks to compute recommendations to a Hadoop backed system that took a few days to compute recommendations to its most modern embodiment where it can compute recommendations in near real time.
This talk will present the evolution of PYMK to its current architecture. We will focus on various systems we built along the way, with an emphasis on systems we built for our most recent architecture, namely Gaia, our real-time graph computing capability, and Venice, our online feature store with scoring capability, and how we integrate these individual systems to generate recommendations in a timely and agile manner, while still being cost-efficient. We will briefly talk about the lessons learned about scalability limits of our past and current design choices and how we plan to tackle the scalability challenges for the next phase of growth.
https://qcon.ai/qconai2019/presentation/people-you-may-know-fast-recommendations-over-massive-data
Pitfalls and successes in an implementation of Office 365, SharePoint, Exchange and Lync Online in the cloud. 4,000 seats in healthcare. Intramural and extramural staff share a single digital workplace. New Ways of Working ("Het Nieuwe Werken") for a home care organization.
Introduction to GPU Development for Java Developers. View the video at https://youtu.be/sOj8LsuSMFg - and find out more about the Seattle Java User Group (SeaJUG) at http://seajug.org/
The New Normal – Delivering Remote Professional Services – Neo4j
The new normal for IT professionals is working out of home offices. While Neo4j Pre-Sales and Professional Services have always provided remote services, we have recently fine-tuned our remote delivery of workshops, trainings, bootcamps, health checks, expert services and more. We have boosted functionality, with extra conferencing tools, VPN and data security features, while offering more flexible schedules and timelines.
In this webinar, Stefan Kolmar will present some of the Neo4j services packages and demonstrate examples of successful implementation and deployment of Neo4j based projects. The webinar will focus on adapting Neo4j services to the needs of today's world, maintaining productivity by enabling virtual teams to implement and deliver projects remotely.
Automating Data Quality Processes at Reckitt – Databricks
Reckitt is a fast-moving consumer goods company with a portfolio of famous brands and over 30k employees worldwide. At that scale, small projects can quickly grow into big datasets, and processing and cleaning all that data can become a challenge. To solve that challenge we have created a metadata-driven ETL framework for orchestrating data transformations through parametrised SQL scripts. It allows us to create various paths for our data as well as easily version control them. The approach of standardising incoming datasets and creating reusable SQL processes has proven to be a winning formula. It has helped simplify complicated landing/stage/merge processes and allowed them to be self-documenting.
But this is only half the battle: we also want to create data products. Documented, quality-assured data sets that are intuitive to use. As we move to a CI/CD approach, increasing the frequency of deployments, the demand of keeping documentation and data quality assessments up to date becomes increasingly challenging. To solve this problem, we have expanded our ETL framework to include SQL processes that automate data quality activities. Using the Hive metastore as a starting point, we have leveraged this framework to automate the maintenance of a data dictionary and reduce documenting, model refinement, testing data quality and filtering out bad data to a box-filling exercise. In this talk we discuss our approach to maintaining high-quality data products and share examples of how we automate data quality processes.
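A generic sketch of the metadata-driven idea (not Reckitt's framework): transformation steps live as parametrised SQL templates, and a small runner fills in parameters from metadata and executes them in order. The metadata rows, template text, and execute() backend are invented placeholders.

```python
# Sketch: a metadata-driven ETL runner that renders parametrised SQL.
# The metadata rows, templates, and execute() backend are placeholders.
steps = [  # in practice these rows would come from a metadata table
    {"template": "INSERT INTO {target} SELECT * FROM {source} WHERE dt = '{dt}'",
     "params": {"target": "stage.orders", "source": "landing.orders", "dt": "2024-01-01"}},
    {"template": "DELETE FROM {target} WHERE quality_flag = 'bad'",
     "params": {"target": "stage.orders"}},
]

def execute(sql: str) -> None:
    # Stand-in for spark.sql(sql) or a warehouse cursor.
    print("Running:", sql)

for step in steps:
    execute(step["template"].format(**step["params"]))
```

Because each step is just a row of metadata, adding a data quality check means adding a row, not writing a new pipeline, which is what makes the approach version-controllable and self-documenting.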
Lessons from Building Large-Scale, Multi-Cloud, SaaS Software at Databricks – Databricks
The cloud has become one of the most attractive ways for enterprises to purchase software, but it requires building products in a very different way from traditional software.
Capacity Planning Infrastructure for Web Applications (Drupal) – Ricardo Amaro
In this session we will try to solve a couple of recurring problems:
Site Launch and User expectations
Imagine a customer that provides a set of needs for hardware, sets a date and launches the site, but forgets to mention that they have sent out (thousands of) emails to half the world announcing their new website launch! What do you think will happen?
Of course, launching a Drupal site involves a lot of preparation steps, and there are plenty of guides out there with common Drupal launch-readiness checklists, so that part is not a problem anymore.
What we are really missing here is a Plan for Capacity.
World renowned virtualization aficionados Eric Inch and Jason Cooper combine their collective 30 years of experience to provide a side-by-side comparison of the heavy hitters in application virtualization.
On one side: Microsoft App-V, part of the Desktop Optimization Pack and the Johnny-come-lately, streaming application packages to desktops and servers with ease.
On the other: VMware ThinApp, the 800-pound gorilla with a huge install base, incredible features, and a clear advantage over the up-and-comer.
Which of these sluggers will end up on top of the pile? View the Application Virtualization Smackdown slide deck to find out!
And for more information about this and other topics check our blog at www.cdhtalkstech.com.
Bringing Streaming Data To The Masses: Lowering The “Cost Of Admission” For Y... – confluent
(Bob Lehmann, Bayer) Kafka Summit SF 2018
You’ve built your streaming data platform. The early adopters are “all in” and have developed producers, consumers and stream processing apps for a number of use cases. A large percentage of the enterprise, however, has expressed interest but hasn’t made the leap. Why?
In 2014, Bayer Crop Science (formerly Monsanto) adopted a cloud first strategy and started a multi-year transition to the cloud. A Kafka-based cross-datacenter DataHub was created to facilitate this migration and to drive the shift to real-time stream processing. The DataHub has seen strong enterprise adoption and supports a myriad of use cases. Data is ingested from a wide variety of sources and the data can move effortlessly between an on premise datacenter, AWS and Google Cloud. The DataHub has evolved continuously over time to meet the current and anticipated needs of our internal customers. The “cost of admission” for the platform has been lowered dramatically over time via our DataHub Portal and technologies such as Kafka Connect, Kubernetes and Presto. Most operations are now self-service, onboarding of new data sources is relatively painless and stream processing via KSQL and other technologies is being incorporated into the core DataHub platform.
In this talk, Bob Lehmann will describe the origins and evolution of the Enterprise DataHub with an emphasis on steps that were taken to drive user adoption. Bob will also talk about integrations between the DataHub and other key data platforms at Bayer, lessons learned and the future direction for streaming data and stream processing at Bayer.
Similar to Capacity Planning, To be or not to be virtualized
Search and Society: Reimagining Information Access for Radical Futures – Bhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality – Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
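As a generic illustration of this kind of notification automation (not Sidekick's product code), posting an alert to a Slack channel via an incoming webhook takes only a few lines; the webhook URL and message are placeholders.

```python
# Sketch: send a notification to Slack via an incoming webhook.
# The webhook URL and message text are placeholders.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify(text: str) -> None:
    resp = requests.post(WEBHOOK_URL, json={"text": text}, timeout=10)
    resp.raise_for_status()

notify("New record submitted in Bonterra Impact Management - approval needed.")
```

The equivalent for Microsoft Teams swaps in a Teams webhook URL with the same POST-a-JSON-payload pattern.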
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that lead to closing the deal.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
GraphRAG is All You need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
6. Virtualization Projects
1. Pre-study (Capacity Planning)
o Analyze
2. Build
o Design
o Configure
o Test
3. Operation
o In Production
7. Capacity Planning
• Inventory
• Workload profile
o Obtain data on the current load
o File versus database servers
• Consolidation analysis
o Input to design
o License consolidation
8. MAP
• Microsoft Assessment and Planning toolkit
o Inventory
o Performance metrics
o Consolidation scenario
o Hyper-V Cloud Fast-track
o ROI
My name is Niklas Akerlund. I work for a company devoted to virtualization. In this session I will try to give you an understanding of why you need to do capacity planning and design before buying the hardware, and not the other way around. Proper planning and sizing can save you a lot of money, if not immediately then in the future; it is important to think about the long run.
It all started with the mainframe computers, which were the first to use virtualization. Then the x86 server era came, and lots of servers populated the datacenters because the software being developed could not co-exist on the same OS. Utilization of the datacenters was low, which led to the development of x86 virtualization; this, in contrast to the mainframe computers, was affordable for the masses. Now everyone is talking about the cloud, private clouds or public clouds. Well, the world has not changed that much; the principle is the same.
I love Dilbert and its way of de-dramatizing new buzzwords and such. In this strip we can see the boss getting excited about the consultant's cloud talk.
As I see it, in a virtualization project, no matter whether it is new or a redesign/refresh of an existing one, there are three steps. 1. Pre-study: in this phase you will measure your existing environment to see how it could fit on a virtualization platform; also include any existing environment in the scope. 2. Build: in this phase you will design your solution based on the outcome from the pre-study. This will include the sizing and approximations of future growth, with new systems etc. Also, as important as we have learned in the field: test your configuration for resilience and fail-over before putting critical load on it. 3. Operation: when testing has been done you will set the solution in a production state and start delivering. A virtualization project can also get the ball rolling on other nearby systems/solutions. The one in italics we will not cover in this session, but you are welcome tomorrow when I do another session on that.
Inventory – Many companies do not have a current CMDB or inventory list of what they have in their datacenter. Trust me, I have been in projects where we found new servers after some time in the project, even desktop computers with production databases on them in an employee's room. Workload profile – To get an understanding of how each machine is performing; we can often see whether a server is under- or over-utilized, and maybe it would be enough to just add memory. Consolidation analysis – With this data you can get a good hint of how to size the platform that is going to handle the load. Maybe there are several SQL servers in the environment that could be consolidated at the same time? You should not plan for an empty platform, but you should also always be prepared to add more power when needed.
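The slides use MAP for this, but as a rough illustration of what a workload profile captures, a script can sample CPU and memory over time and summarize utilization; this psutil sketch is a generic stand-in, not part of the MAP toolkit.

```python
# Sketch: sample a machine's CPU and memory utilization to build a
# simple workload profile (a stand-in for MAP's performance collection).
import statistics
import psutil

samples = []
for _ in range(60):  # one sample per second for a minute
    samples.append((
        psutil.cpu_percent(interval=1.0),
        psutil.virtual_memory().percent,
    ))

cpu, mem = zip(*samples)
print(f"CPU  avg {statistics.mean(cpu):.1f}%  peak {max(cpu):.1f}%")
print(f"MEM  avg {statistics.mean(mem):.1f}%  peak {max(mem):.1f}%")
# Low averages with only rare peaks suggest a good consolidation candidate.
```

In practice you would sample over days or weeks, since a nightly batch job can make a server look busier than its daytime profile suggests.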
Inventory: You can use several different methods to inventory your environment with this tool: Active Directory, IP ranges, lists of servers, manually entered computers, or, if you already use SCCM, connect to that inventory database and get the data from that system.
Performance metrics: For Windows, MAP uses the remote registry to get the data from the Perfmon counters.
Consolidation scenario: Use the Server Consolidation Wizard to help in planning your server virtualization effort. In this wizard, you can select a virtualization technology platform, set a virtual host machine's hardware configuration, manage assessment properties, and identify which computers you would like to virtualize. Use the Hardware Library Configuration Wizard to create and manage often-used hardware configurations for quick what-if analysis.
Hyper-V Cloud Fast Track: In this wizard you can select a pre-configured Hyper-V Cloud Fast Track infrastructure to use for evaluating server consolidation on different OEM infrastructures. The Microsoft Hyper-V Cloud Fast Track Program is a joint effort between Microsoft and its OEM partners to help organizations quickly develop and implement private clouds, while reducing both the cost and the risk. Each OEM partner provides Fast Track infrastructures (aka private cloud racks) per the Fast Track reference architecture, running Windows Server, Hyper-V and System Center.
Return on investment: Export an XML file from MAP and import it into the Alinean ROI web app to get a good ROI analysis report to support your case.
Tell the audience what the demo is for and what it does! 1. Show how to inventory machines. 2. Show data from the inventory: which operating systems, etc. 3. Show the way to use the data for a consolidation report.
Performance Monitor can also be used to gain deeper knowledge about workloads that are a bit heavier. SCOM has virtualization reports that can be used after it is integrated with SCVMM; if you have been using it for a while you will have invaluable data on how the systems have performed in the past. PlateSpin Recon is a great product for inventory and performance analysis; it is a bit more advanced than MAP, but it comes with a license cost. There are also other third-party monitoring solutions that can give you some understanding of how the machines are behaving. Maybe it is normal for the server to use 100% CPU at night.
After the consolidation/capacity planning you will have to decide what to do next and how to do it. What is the purpose of the platform? Use a workshop with more than just the IT department to get a view of what their needs are. The consolidation phase does not take into account whether the servers have limits that prevent them from being virtualized; at this stage we will have to check which servers would actually be suitable.
Preparation and planning is everything, and in a virtualization project it gives you a foundation for your design and lets you justify the cost to the one with the wallet, which is often the CEO. It is much better to size right, even at a somewhat higher cost at the beginning, than to have to make an emergency investment later because of performance problems.