Technology Division of the American Planning Association (APA)
2013 National Conference
Hyatt Regency Chicago
Harsh Prakash, Senior GIS Manager
Past Chair
Agenda
1. What are they?
2. How have they changed?
3. How are they still changing?
4. How does that affect me?
5. What are some of the issues?
6. What could the future hold?
7. Conclusion
(Tags: api, app, data, demo, division, education, esri, georeference, google, howto, hybrid, image, javascript, location, mashable, raster, technology, tile, vector, web2)
Operationalizing Machine Learning at Scale at Starbucks (Databricks)
As ML-driven innovations are propelled by the self-service capabilities in the Enterprise Data and Analytics Platform, teams face a significant entry barrier and productivity issues in moving from POCs to operating ML-powered apps at scale in production.
Watch this recorded demonstration of SnapLogic from our team of experts who answer your hybrid cloud and big data integration questions.
demo, ipaas, elastic integration, cloud data, app integration, data integration, hybrid cloud integration, big data, big data integration
Serhii Kholodniuk: What you need to know, before migrating data platform to G... (Lviv Startup Club)
Serhii Kholodniuk: What you need to know, before migrating data platform to GCP (Google cloud platform)
AI & BigData Online Day 2022
Website: https://aiconf.com.ua
Youtube: https://www.youtube.com/startuplviv
FB: https://www.facebook.com/aiconf
Life occurs in real time, and not surprisingly, more solutions are being built using streaming technologies. Event-based architectures are becoming the norm, and customers expect immediate access to their data. This new world offers many exciting opportunities, but also some new challenges. What do you do when your streaming data is not complete? What if it relies on another data source? Does the dependent data exist yet, and does it come from a third party? How do we assemble a complete picture when data is sourced from multiple places at the same time, a new norm in the world of distributed services? Join us as we dive deep into the technical details around these scenarios and more. Expect to learn about stream-stream joins, enriching stream data using local or remote data, and ways to anticipate and correct errors within the stream. Leave with a better understanding of managing data dependencies within a Spark Structured Streaming application.
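The data-dependency questions above can be sketched in miniature outside Spark. Below is a pure-Python stand-in (not Spark's API; the function and field names are invented) for enriching an event stream with reference data that may arrive late: events whose dependency has not shown up yet are buffered and emitted once it does, which is the essence of a stateful stream-stream join.

```python
def enrich(stream):
    """stream yields ("ref", key, record) or ("event", key, payload) items,
    in arrival order, possibly with events arriving before their reference."""
    reference, pending, out = {}, {}, []
    for kind, key, value in stream:
        if kind == "ref":
            reference[key] = value
            for payload in pending.pop(key, []):   # flush buffered events
                out.append((payload, value))
        elif key in reference:                     # dependency already here
            out.append((value, reference[key]))
        else:                                      # buffer until it arrives
            pending.setdefault(key, []).append(value)
    return out, pending   # pending holds events whose dependency never arrived

stream = [
    ("event", "u1", "click"),            # arrives before its reference row
    ("ref",   "u1", {"name": "Ada"}),
    ("ref",   "u2", {"name": "Bob"}),
    ("event", "u2", "view"),
]
out, unmatched = enrich(stream)
# out == [("click", {"name": "Ada"}), ("view", {"name": "Bob"})]; unmatched == {}
```

In a real Structured Streaming job, the buffering and flushing are handled by the engine's state store and watermarks rather than by hand.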
Scaling and Unifying SciKit Learn and Apache Spark Pipelines (Databricks)
Pipelines have become ubiquitous, as the need for stringing multiple functions to compose applications has gained adoption and popularity. Common pipeline abstractions such as “fit” and “transform” are even shared across divergent platforms such as Python Scikit-Learn and Apache Spark.
Scaling pipelines at the level of simple functions is desirable for many AI applications, but it is not directly supported by Ray’s parallelism primitives. In this talk, Raghu will describe a pipeline abstraction that takes advantage of Ray’s compute model to efficiently scale arbitrarily complex pipeline workflows. He will demonstrate how this abstraction cleanly unifies pipeline workflows across multiple platforms such as Scikit-Learn and Spark, and achieves nearly optimal scale-out parallelism on pipelined computations.
Attendees will learn how pipelined workflows can be mapped to Ray’s compute model and how they can both unify and accelerate their pipelines with Ray.
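The shared `fit`/`transform` contract mentioned above can be shown in miniature. The sketch below is plain Python, not Ray's or Spark's actual API; it only illustrates how stages composing under that contract make a pipeline portable across backends.

```python
class Scale:
    """Stage that rescales values to [0, 1] based on the fitted range."""
    def fit(self, xs):
        self.lo, self.hi = min(xs), max(xs)
        return self
    def transform(self, xs):
        span = (self.hi - self.lo) or 1.0
        return [(x - self.lo) / span for x in xs]

class Shift:
    """Stage that centers values on the fitted mean."""
    def fit(self, xs):
        self.mean = sum(xs) / len(xs)
        return self
    def transform(self, xs):
        return [x - self.mean for x in xs]

class Pipeline:
    """Chain of stages sharing the fit/transform contract."""
    def __init__(self, stages):
        self.stages = stages
    def fit_transform(self, xs):
        for stage in self.stages:         # each stage fits on the previous output
            xs = stage.fit(xs).transform(xs)
        return xs

pipe = Pipeline([Scale(), Shift()])
result = pipe.fit_transform([2.0, 4.0, 6.0])
# Scale -> [0.0, 0.5, 1.0]; Shift centers on mean 0.5 -> [-0.5, 0.0, 0.5]
```

A backend like Ray or Spark would parallelize the per-stage work; the composition contract stays the same.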
How to Rebuild an End-to-End ML Pipeline with Databricks and Upwork with Than... (Databricks)
Upwork has the biggest closed-loop online dataset of jobs and job seekers in labor history (>10M Profiles, >100M Job Posts, Job Proposals and Hiring Decisions, >10B of Messages, Transaction and Feedback Data). Besides sheer quantity, our data is also contextually very rich. We have client and contractor data for the entire job-funnel – from finding jobs to getting the job done.
For various machine learning applications including search and recommendations and labor marketplace optimization (rate, supply and demand), we heavily relied on a Greenplum-based data warehouse solution for data processing and ad-hoc ML pipelines (weka, scikit-learn, R) for offline model development and online model scoring.
In this talk, we present our modernization efforts in moving towards 1) a holistic data processing infrastructure for batch and stream processing using S3, Kinesis, Spark and Spark Structured Streaming, 2) model development using Spark MLlib and other ML libraries for Spark, 3) model serving using Databricks Model Scoring, scoring over Structured Streams, and microservices, and 4) orchestrating and streamlining all these processes using Apache Airflow and a CI/CD workflow customized to our Data Science product engineering needs. The focus of this talk is on how we were able to leverage the Databricks service offering to reduce DevOps overhead and costs, complete the entire modernization with moderate effort, and adopt a collaborative notebook-based solution for all our data scientists to develop models, reuse features and share results. We will share the core lessons learned and pitfalls we encountered during this journey.
Leveraging big data to maximize value from rail and power infrastructure assets (Chijioke “CJ” Ejimuda)
To intelligently design and optimally operate infrastructure assets, a combination of big data batch and streaming execution models is essential to generating useful insights. This presentation focuses on how these models can be applied throughout the Design, Build, Finance, Operate and Maintain phases of the life cycle of rail and power system infrastructure to maximize its value.
The presentation on Performance Testing of Big Data Application was done during #ATAGTR2017, one of the largest global testing conference. All copyright belongs to the author.
Author and presenter : Harpreet Kaur Kahai
STATISTICAL ANALYSIS FOR PERFORMANCE COMPARISON (ijseajournal)
Performance responsiveness and scalability is a make-or-break quality for software, and nearly everyone runs into performance problems at one time or another. This paper discusses performance issues faced during the Pre Examination Process Automation System (PEPAS) implemented in Java technology, the challenges faced during the life cycle of the project, and the mitigation actions performed. It compares three Java technologies and shows how statistical analysis of application response time guided improvements. The paper concludes with a result analysis.
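As a minimal illustration of the kind of statistical comparison the paper describes, the sketch below contrasts mean and median response times of two hypothetical stacks (the sample values are invented). The outlier-inflated mean versus the stable median is exactly the sort of distinction such an analysis surfaces.

```python
import statistics

def summarize(samples_ms):
    """Mean and median of response-time samples, in milliseconds."""
    return {"mean": statistics.fmean(samples_ms),
            "median": statistics.median(samples_ms)}

tech_a = [120, 130, 125, 140, 500]   # fast typically, one slow outlier
tech_b = [150, 155, 149, 152, 151]   # slower baseline, but very steady
a, b = summarize(tech_a), summarize(tech_b)
# a: mean 203.0 ms, median 130 ms -- the single outlier inflates the mean
# b: mean 151.4 ms, median 151 ms -- consistent behavior
```

A fuller analysis would add percentiles and significance tests over many more samples, but the mean/median split already shows why a single summary number can mislead.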
Oracle EBS R12.2 - The Upgrade Know-How Factory (panayaofficial)
For Oracle EBS customers on R12.1 or lower, the upgrade to R12.2 may seem like an inconvenient necessity. However, when coupling Infosys know-how and Panaya's Change Intelligence technology, they can easily turn a necessity into an opportunity.
20220329 Ariel Partners Configuring Jira For Maximum Agility (Craeg Strong)
A poorly tuned Jira is a daily struggle for Agile teams. Inconsistencies in Jira usage and configurations produce unreliable data that limit an organization’s ability to properly manage Agile projects from the team level to the executive level. Lacking an appropriate level of investment and understanding, Jira can become little more than an expensive task tracking tool, rather than something that can help catalyze improvements and drive organizational change. In order to derive the maximum possible benefit from their investment in Atlassian products, organizations must appreciate the complexity and sophistication of these tools and assign an appropriate level of investment – both initial and on an ongoing basis. This involves three main pieces:
(1) design a set of initial templates, taxonomies, configurations, and reporting metrics,
(2) establish roles and responsibilities for Jira administration,
(3) initiate a process of continuous improvement that welcomes configuration changes rather than discouraging them.
Apache Spark Performance is too hard. Let's make it easier (Databricks)
Apache Spark is a dynamic execution engine that can take relatively simple Scala code and create complex and optimized execution plans. In this talk, we will describe how user code translates into Spark drivers, executors, stages, tasks, transformations, and shuffles. We will then describe how this is critical to the design of Spark and how this tight interplay allows very efficient execution. We will also discuss various sources of metrics on how Spark applications use hardware resources, and show how application developers can use this information to write more efficient code. Users and operators who are aware of these concepts will become more effective at their interactions with Spark.
ATAGTR2017 Performance Testing and Non-Functional Testing Strategy for Big Da... (Agile Testing Alliance)
The presentation on Performance Testing and Non-Functional Testing Strategy for Big Data Applications was done during #ATAGTR2017, one of the largest global testing conference. All copyright belongs to the author.
Author and presenter : Abhinav Gupta
The Practice of Presto & Alluxio in E-Commerce Big Data Platform (Alluxio, Inc.)
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
The Practice of Presto & Alluxio in E-Commerce Big Data Platform
Wenjun Tao, Sr. Software Engineer, JD.com
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
Performance comparison on Java technologies: a practical approach (csandit)
Performance responsiveness and scalability is a make-or-break quality for software, and nearly everyone runs into performance problems at one time or another. This paper discusses performance issues faced during a project implemented in Java technologies, the challenges faced during the project's life cycle, and the mitigation actions performed. It compares three Java technologies and shows how statistical analysis of application response time guided improvements. The paper concludes with a result analysis.
A shallow irrigation borehole project: a Global MapAid (GMA) volunteer endeavor to model optimal drilling locations in Ethiopia with the Ethiopian Water Technology Institute (EWTI) and George Mason University (GMU).
AGENDA
* Demo 1: Use Amazon Rekognition API connected to a camera for near real-time image analyses.
* Demo 2: Install and extend Google TensorFlow to retrain a classifier to classify custom images.
* Demo 3: Use AutoML.
This month's session is about Applied Machine Learning (ML): a personal test project I am working on, the reasons for it, and the technology underneath.
The project uses APIs from Cloud vendors to sift through satellite images.
The goal today is to start a discussion around Emerging Tech at NASA.
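Demo 2's retraining idea can be illustrated without TensorFlow: keep a fixed "feature extractor" and retrain only a small classifier head on top. Everything below is a toy stand-in (hand-made brightness/contrast features and a nearest-centroid head, both invented for illustration), but the shape matches transfer learning.

```python
def extract_features(image):
    """Frozen 'extractor' stand-in: mean brightness and contrast of pixels."""
    return (sum(image) / len(image), max(image) - min(image))

def retrain_head(labeled_images):
    """'Retrain the final layer': compute one centroid per class label."""
    sums = {}
    for image, label in labeled_images:
        mean, contrast = extract_features(image)
        s = sums.setdefault(label, [0.0, 0.0, 0])
        s[0] += mean; s[1] += contrast; s[2] += 1
    return {label: (s[0] / s[2], s[1] / s[2]) for label, s in sums.items()}

def classify(image, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    f = extract_features(image)
    return min(centroids, key=lambda lbl: (f[0] - centroids[lbl][0]) ** 2
                                        + (f[1] - centroids[lbl][1]) ** 2)

train = [([10, 12, 11, 13], "dark"), ([200, 210, 205, 215], "bright")]
centroids = retrain_head(train)
label = classify([14, 15, 13, 16], centroids)   # -> "dark"
```

In the actual demo the extractor is a pretrained network and the head is a trained layer, but the division of labor, frozen features plus a cheap retrainable classifier, is the same.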
Raleigh County Hazard Mitigation Plan (HMP)
2002-2003
Date Submitted: February, 2003
Date Revised: May, 2003
Date Adopted: Summer, 2003
Submitted On Behalf Of:
* Raleigh County
* City of Beckley
* Town of Lester
* Town of Mabscott
* Town of Rhodell
* Town of Sophia
Submitted To:
State Office of Emergency Services (WVOES)
Prepared By:
Region I
-
Table of Contents
* General Need and Significance
* Background Information on Planning Area (Communities)
* 3.1 Prerequisite (Documentation Attached)
+ 3.1.1 Adoption by the Local Governing Body
+ 3.1.2 Multi-Jurisdictional Plan Adoption
+ 3.1.3 Multi-Jurisdictional Planning Participation
* 3.2 Planning Process
+ 3.2.1 Documentation of the Planning Process (Multi-Jurisdictional Planning Process)
* 3.3 Risk Assessment
+ 3.3.1 Identifying Hazards
+ 3.3.2 Profiling Hazard Events
+ 3.3.3 Assessing Vulnerability: Identifying Assets
+ 3.3.4 Assessing Vulnerability: Estimating Potential Losses
+ 3.3.5 Assessing Vulnerability: Analyzing Development Trends
+ 3.3.6 Multi-Jurisdictional Risk Assessment
* 3.4 Mitigation Strategy
+ 3.4.1 Local Hazard Mitigation Goals
+ 3.4.2 Identification and Analysis of Mitigation Measures
+ 3.4.3 Implementation of Mitigation Measures
+ 3.4.4 Multi-Jurisdictional Mitigation Strategy
* 3.5 Plan Maintenance Procedures
+ 3.5.1 Monitoring, Evaluating and Updating Plan
+ 3.5.2 Implementation through Existing Programs
+ 3.5.3 Continued Public Involvement
* Credit and Acknowledgment
* Summary Metadata
* Appendix
(Tags: arcgis, county, disaster, emergency, flood, hazard, hazus, management, mitigation, multi, natural, need, nfhl, oes, office, plan, prerequisite, process, project, risk)
Divisions Council - Education Committee (Summer 2011)
Introduction
This report provides a description of existing services, both external and in-house, available to APA divisions for hosting and broadcasting webcasts to their members and other interested professionals, and specifically looks at the external Planning Webcast series. In addition, it includes an analysis of options for expanding these services. The report was produced in response to a request from the APA Divisions Council (DC), as follows:
Mission Statement *
To develop DC’s recommendations for educational programs, professional development and mentoring to be provided by divisions.
1. To seek opportunities for complementary efforts among divisions, and with APA's component groups, including the Chapter Presidents Council (CPC), Student Representative Council (SRC), and American Institute of Certified Planners (AICP).
2. The committee will also consider collaboration opportunities with external organizations where it serves APA's interests and furthers the adopted Development Plan.
“In addition to its standing mission, the DC Education Committee (EC) shall develop one or more models through which Divisions may both deliver Certification Maintenance (CM) content and also generate additional sources of revenue for Divisions. The Committee shall consider current APA policies regarding access to Webinar software and the pricing of such access. Also, in building a revenue or business model, consider the pricing of other CM offerings, especially Webinars.”
Towards these objectives, key team members were recruited at the National Conference in Boston. Refer to Appendix 2 for committee composition.
(Tags: aicp, american, association, chapter, committee, conference, council, education, external, goto, internal, meeting, planning, revenue, series, service, sponsor, training, utah, webcast)
DIVISIONS COUNCIL
FY2012 ANNUAL DIVISION PERFORMANCE REPORT
TECHNOLOGY DIVISION
1. Summary of Major Accomplishments
The Technology Division links members who share interests in the uses of technology in land use planning and community development to make great communities happen.
Vision
Our vision is to be a leading voice in the planning community for technology in support of planning and to become the preferred forum for debate about technology as it applies to planning.
Why?
The use of technology is integral to how communities are communicating and engaging their citizens to plan for the future. Big data and rapidly evolving tools for community design and decision making are creating new opportunities to enhance public involvement, share information, and make more informed decisions in cities and towns throughout the country.
Purpose
Our purpose is to exchange information about planning applications that incorporate technology, to explore potential applications that may have benefit to the profession, and to inform all members of the association about solutions that work. We believe the planning community needs a voice to advocate for best practices in technology. The division advocates for the responsible and best use of technology in support of planning.
(Tags: accomplishment, agenda, american, annual, association, chair, conference, division, finance, fiscal, leadership, media, meeting, member, newsletter, opensource, planning, social, technology, website)
INDEPENDENT STUDY
Annual Scholarship, 2001
Virginia Association for Mapping and Land Information Systems (VAMLIS)
Harsh Prakash, Graduate
cs.virginia.edu
https://www.cs.virginia.edu/~webteam/
Guide:
Prof. David L. Phillips, Associate Professor
University of Virginia
Contents
1. Objectives of Study
2. Data collection and Explanation
3. About the Study-Area
4. Analysis and Projection
5. Population Forecast and Distribution
6. Preliminary Conclusions from Map Analysis
7. Character of Neighborhood Elements within Study-Area
8. Credits
9. Citation
10. Metadata for Softcopy
11. Notes on Discrepancies in Digitized Maps
12. Appendix
1. Objectives of Study
This study was initiated as a break from abstract theorizing. Part of the objective was to become familiar with the neighborhood; part was to keep updating an understanding of the varied factors affecting urban growth. The study was designed to better estimate growth trends and the reasons behind them, and to refine present practice in light of those estimates. It was a response to a basic planning question: how does one refine planning methodologies to best achieve social good?
Amid all these questions grew a desire to equip the planner with tangible skills and tools. The limited reach of such skills in planning was never ignored; nevertheless, an attempt has been made. The approach was to use geographic and demographic information about a city and its neighborhood to make a growth projection for that region. In brief, the analysis can be broken down into the following main steps:
• Evaluating Potential Effect and Assigning Weights
• Calculating Growth Potential
• Distributing Projected Population
• Detailing Distributed Population into Neighborhood Elements
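The steps above reduce to simple arithmetic: score each zone with weighted factors, then distribute the projected population in proportion to the scores. A minimal sketch, with weights, factor names, and zone values that are illustrative rather than the study's actual data:

```python
# Steps 1-2: weighted growth potential per zone (weights sum to 1.0).
weights = {"road_access": 0.5, "vacant_land": 0.3, "utilities": 0.2}

zones = {
    "north": {"road_access": 0.9, "vacant_land": 0.7, "utilities": 0.8},
    "south": {"road_access": 0.4, "vacant_land": 0.9, "utilities": 0.5},
}

def growth_potential(factors):
    return sum(weights[f] * value for f, value in factors.items())

potential = {zone: growth_potential(f) for zone, f in zones.items()}

# Step 3: distribute the projected new population proportionally to potential.
projected_growth = 1000
total = sum(potential.values())
allocation = {zone: round(projected_growth * p / total)
              for zone, p in potential.items()}
# north scores 0.82 vs south's 0.57, so it receives the larger share
```

Step 4, detailing the distributed population into neighborhood elements, would repeat the same proportional split one level down.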
(Tags: albemarle, association, cville, data, environment, estimate, graduate, impact, information, land, mapping, neighborhood, population, predict, project, scholarship, system, uva, vamlis, virginia)
What is a Mashup? Mashups in Planning? How to Create Your Own Mashup? Issues with Mashups?
(Tags: api, app, data, demo, division, education, esri, georeference, google, howto, hybrid, image, javascript, location, mashable, raster, technology, tile, vector, web2)
More from Harsh Prakash (AWS, Azure, Security+, Agile, PMP, GISP)
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
GraphRAG is All You need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Enhancing adoption of Open Source Libraries: a case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
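As a rough illustration of the last point, an automated policy check can be as simple as gating on the worst severity in a vulnerability report. The sketch below is a toy gate with invented severity names and report fields, not Anchore's actual policy format:

```python
# Toy "policy gate": fail a build when a container image's vulnerability
# report violates policy. Severities and the threshold are illustrative.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def evaluate_policy(findings, max_allowed="medium"):
    """Return (passed, violations) for a list of vulnerability findings."""
    limit = SEVERITY_RANK[max_allowed]
    violations = [f for f in findings
                  if SEVERITY_RANK[f["severity"]] > limit]
    return (len(violations) == 0, violations)

report = [
    {"cve": "CVE-2024-0001", "severity": "low"},
    {"cve": "CVE-2024-0002", "severity": "critical"},
]
passed, violations = evaluate_policy(report)
# passed == False: the critical finding blocks promotion to production
```

In a real pipeline, the findings would come from an SBOM-based scan and the gate would run as a pipeline stage, emitting the report itself as an ATO artifact.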
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
3. Solution Summary
• ArcGIS Portal, ArcGIS Servers (federated, cluster), ArcGIS Server (unfederated, stand-alone), ArcGIS Data Store, StreetMap Premium (Implemented: on-premise geocoding of ¼ billion addresses; routing in a disconnected environment)
• ArcGIS Online
FedGIS - Mar 2018 Harsh Prakash, PMP, GISP 3 / 13
[Diagram: the "Map Sandwich" – a web application layers foreground data (from the backend database, retrieved via an API query such as "Find all Providers X miles from Y") over a background map served from ArcGIS, both internal and external]
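The foreground query in the diagram ("Find all Providers X miles from Y") amounts to a radius filter over provider locations. A minimal sketch follows; the provider records, coordinates, and function names are illustrative stand-ins, not the production API:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def providers_within(providers, lat, lon, miles):
    """Filter a provider list to those within `miles` of (lat, lon)."""
    return [p for p in providers
            if haversine_miles(lat, lon, p["lat"], p["lon"]) <= miles]

# Hypothetical provider records for illustration only.
providers = [
    {"name": "Clinic A", "lat": 38.90, "lon": -77.03},  # near Washington, DC
    {"name": "Clinic B", "lat": 39.29, "lon": -76.61},  # Baltimore, MD
]
hits = providers_within(providers, 38.89, -77.04, 10)  # 10-mile radius
```

In production this filtering happened server-side against the backend database; the sketch only shows the shape of the query.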
4. Challenges Faced
• Esri – 'Installing ArcGIS here is like pushing a square block up a right-angle hill'
• Unique security responsibilities of the federal government around high-value PII/PHI-based data assets and Expedited Life Cycle (XLC) processes
5. In Retrospect: Lessons & Tips
Data
• No PII/PHI could leave for arcgis.com, so a hybrid solution was adopted, but with multiple VPNs and multiple NICs, i.e. different networks for different groups.
ArcGIS is not designed for such a fractured environment (a BUG was logged for mixing the backdoor [privatePortalURL] with the frontdoor [WebContextURL]).
So, discourage a hybrid ArcGIS design within a multi-NIC, multi-VPN environment – consider the Esri Data Appliance instead.
Set up a VIEWER role in ArcGIS for users with least privileges.
• Not public-facing
Use aerial imagery from the National Agriculture Imagery Program (NAIP) or OpenAerialMap to test internal basemaps.
Budget
• Hours
Allow hours to move across contract option years.
6. In Retrospect: Lessons & Tips
Process
• Architecture Review (AR)
• Preliminary Design Review (PDR)
• Detailed Design Review (DDR)
• User Acceptance Test (UAT)
• Operational Readiness Review (ORR)
Consolidate Gate Reviews to keep up the project pace.
Prefer Agile over Waterfall (XLC).
[Diagram: Kanban board – Not Started (Task 1), In Progress (Task 2, Task 3), Testing (Task 4), Accepted (Task 5)]
7. In Retrospect: Lessons & Tips
Prototyping
• HTTPS requirement – Needed to decrypt
• 3-zone architecture – Needed to negotiate SSL handshakes and establish trust to route token authentication between daisy-chained servers
• No Web Adapter – Needed to proxy without it
We replicated the 3 zones in Amazon Web Services (AWS).
So, use Infrastructure as a Service (IaaS) for rapid piloting and prototyping. Provide a test box (with admin privileges) for tool installation and prototype development.
Note, a Minimum Viable Product (MVP) doesn't have to be pixel-perfect.
8. In Retrospect: Lessons & Tips
Development
• No custom development – Needed to use ArcGIS Web AppBuilder (WAB)
Use WAB for development, but don't oversell its ease (we ended up scripting for caching).
Note, WAB can't run in a truly disconnected environment out-of-the-box.
• Teams
Coordinate, but decouple, frontend and backend release schedules, esp. with "horizontally-sliced" projects.
• Testing
Test one app at a time in initial User Acceptance Testing (UAT).
Write clear test cases, and use screenshots/videos during testing to better capture bugs or vulnerabilities.
[Diagram: team slicing across the Backend, Frontend, and Infrastructure layers – "vertically sliced" teams (Teams 1–3) each own a slice of all three layers, while "horizontally sliced" teams each own a single layer]
9. In Retrospect: Lessons & Tips
ETL/ELT
• Extract, Transform, Load
Prefer native ETL/ELT processes for less overhead.
Communication
• Triage
Set up regular touch-point calls to coordinate with the various teams, for transparent communication and timely escalation across the appropriate management chains.
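The "prefer native ETL/ELT" tip above can be illustrated with a minimal extract-transform-load pass. The feed, column names, and SQLite target below are hypothetical stand-ins for illustration, not the IDR pipeline:

```python
import csv
import io
import sqlite3

# Hypothetical provider feed; columns are illustrative only.
RAW = """id,name,state
1,Clinic A,VA
2,clinic b,MD
"""

def extract(text):
    """Parse the raw feed into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Normalize provider names to title case."""
    return [{**r, "name": r["name"].title()} for r in rows]

def load(rows, conn):
    """Create the target table and bulk-insert the rows."""
    conn.execute("CREATE TABLE providers (id INTEGER, name TEXT, state TEXT)")
    conn.executemany("INSERT INTO providers VALUES (:id, :name, :state)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
count = conn.execute("SELECT COUNT(*) FROM providers").fetchone()[0]
```

The point of the tip is that each stage stays a plain, native step (parse, map, bulk insert) rather than a heavyweight middleware layer.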
10. In Retrospect: Lessons & Tips
Support
• Vendors – Esri, Red Hat, Teradata
• E.g., Teradata's ODBC 14.10 driver bug
We found it was issuing multiple queries to get multiple geometries (a.k.a. offline fetching), instead of using one query to get multiple geometries (or inline fetching) – we implemented the option of a local Cache or Cube.
So, increase the visibility of fixes to tools or widgets, and pursue out-of-cycle patches with vendors.
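The local-cache workaround above can be sketched with simple memoization: repeated requests for the same geometry hit the cache instead of triggering another driver round trip. The fetch function below is a hypothetical stand-in for the database call, not the actual Teradata code:

```python
from functools import lru_cache

# Counter standing in for the number of backend round trips made.
CALLS = {"n": 0}

@lru_cache(maxsize=None)
def fetch_geometry(feature_id):
    """Simulated expensive fetch; real code would hit the database per id."""
    CALLS["n"] += 1
    return {"id": feature_id, "wkt": f"POINT({feature_id} {feature_id})"}

# Five requests, but only two distinct ids -> only two backend calls.
for fid in [1, 2, 1, 2, 1]:
    fetch_geometry(fid)
```

A Cube plays the same role one level up: precomputed aggregates served locally so the driver's per-geometry behavior is never exercised.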
11. In Retrospect: Lessons & Tips
Tools
• Administration
Use great tools:
Wireshark, Nmap, Nagios
Fiddler, Postman, LDAP Browser
New Relic, PuTTY, WinSCP
Browser Dev Tools, Katalon, GlassWire
TeamViewer, Cygwin
GIS & (SAP) BusinessObjects Manager, Business Intelligence (BI) / Extract, Transform & Load (ETL) program
Health & Federal Business Unit, MANTECH
Esri and Amazon partner
17 years' experience – previously with NIH, implementing Esri + OGC/FOSS4G; before that, with FEMA, implementing Esri
Graduate of the University of Virginia; previously served as the chairperson of the American Planning Association's (APA) Technology Division
Relate & Share
Map Sandwich
The database is called the Integrated Data Repository (IDR), comprising Teradata and other Online Analytical Processing (OLAP) and Online Transaction Processing (OLTP) resources
In no particular order
See http://www.slideshare.net/gisblog/fedgis2017-72293729