The presentation describes how to connect the issue tracker Atlassian Jira with Oracle APEX using REST web services. The slides were presented at the APEX World conference in Rotterdam, the Netherlands.
YAML is a data serialization language that is human-friendly and designed for interacting well with programming languages. Unlike XML, YAML aims to be easily readable by humans for config files, logging, and messaging. YAML uses indentation rather than tags to indicate hierarchy. It supports common data structures like lists, dictionaries, and scalar values like strings and integers.
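To illustrate these points, here is a small hypothetical config fragment showing indentation-based hierarchy, a list, a nested mapping, and scalar values:

```yaml
# hypothetical application config
server:
  host: localhost      # string scalar
  port: 8080           # integer scalar
  debug: false         # boolean scalar
allowed_origins:       # list (sequence)
  - https://example.com
  - https://example.org
database:              # nested mapping (dictionary)
  name: appdb
  pool_size: 5
```

Note how nesting is expressed purely by indentation, with no opening or closing tags as XML would require.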
Unify Stream and Batch Processing using Dataflow, a Portable Programmable Mod... - DataWorks Summit
Google Cloud Dataflow is a fully managed service that allows users to build batch or streaming parallel data processing pipelines. It provides a unified programming model for batch and streaming workflows. Cloud Dataflow handles resource management and optimization to efficiently execute data processing jobs on Google Cloud Platform.
This document provides an overview of Oracle Fusion HCM implementation including key features, packages, timelines, scope, methodology, and project plan. The implementation aims to streamline HR processes, generate metrics-based reporting, integrate data across applications, and reduce implementation time/costs through a SaaS model. The scope includes deploying essential functions in Package 1 within 6 months, then additional functions in Package 2 within 12 months. Data migration will focus on active employee data provided in templates. Optional expansion packs also exist.
This document discusses using Grafana to visualize test data in real time. It provides an introduction to Grafana and monitoring. Test data can be represented as time series data and metrics can be built around test runtime and results. Grafana allows querying and visualizing metrics from various sources. The document demonstrates collecting test class and method results as time series data points in InfluxDB and then querying and visualizing the results in Grafana dashboards. This provides real-time monitoring of test data.
Building a Data Pipeline using Apache Airflow (on AWS / GCP) - Yohei Onishi
These are the slides I presented at PyCon SG 2019. I gave an overview of Airflow and showed how Airflow and other data engineering services on AWS and GCP can be used to build data pipelines.
Apache Spark and Apache Ignite: Where Fast Data Meets the IoT with Denis Magda - Databricks
It’s not enough to build a mesh of sensors or embedded devices to get more insights about the surrounding environment and optimize your production. Usually, your IoT solution needs to be capable of transferring enormous amounts of data to a storage or cloud where the data has to be processed further. Quite often, the processing of the endless streams of data has to be done almost in real-time so that you can react on the IoT subsystem’s state accordingly, and in time.
During this session, see how to build a Fast Data solution that will receive endless streams from the IoT side and will be capable of processing the streams in real-time using Apache Ignite’s cluster resources. In particular, learn about data streaming to an Apache Ignite cluster from embedded devices and real-time data processing with Apache Spark.
This document provides an overview of building data pipelines using Apache Airflow. It discusses what a data pipeline is, common components of data pipelines like data ingestion and processing, and issues with traditional data flows. It then introduces Apache Airflow, describing its features like being fault tolerant and supporting Python code. The core components of Airflow including the web server, scheduler, executor, and worker processes are explained. Key concepts like DAGs, operators, tasks, and workflows are defined. Finally, it demonstrates Airflow through an example DAG that extracts and cleanses tweets.
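The DAG concept at the heart of Airflow can be sketched independently of Airflow itself. The following is a minimal pure-Python model (task names hypothetical, loosely echoing the tweet-pipeline example) that resolves a valid execution order from declared dependencies, which is essentially what the scheduler does before dispatching tasks to workers:

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# A toy "DAG": each task maps to the set of tasks it depends on,
# mirroring how Airflow operators are wired with >> / set_upstream.
dag = {
    "extract_tweets": set(),
    "cleanse_tweets": {"extract_tweets"},
    "load_tweets": {"cleanse_tweets"},
    "notify": {"load_tweets"},
}

def execution_order(dag):
    """Return one valid run order respecting all dependencies."""
    return list(TopologicalSorter(dag).static_order())

print(execution_order(dag))
# A scheduler like Airflow's walks such an order, retrying or
# resuming individual failed tasks without rerunning the whole flow.
```

This is a conceptual sketch only; a real Airflow DAG file declares operators and schedules and is parsed by the scheduler, not executed directly like this.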
The document discusses effective release management for Salesforce development teams using AutoRABIT. It introduces AutoRABIT as a tool for continuous integration, test automation, and release management. It then demonstrates AutoRABIT's capabilities such as continuous integration workflows, automated testing, sandbox management, and visualization dashboards to improve release velocity. The presentation concludes by emphasizing how AutoRABIT can help teams achieve more frequent, higher quality releases.
This document provides an overview of building a command line interface (CLI) application in Go. It discusses UX considerations for CLIs, common CLI patterns and philosophies, and Go-specific topics. Some key points include:
- CLI apps should follow Unix philosophies of being simple, clear, composable, and extensible.
- Common CLI patterns include commands, arguments, options/flags, and subcommands.
- Go is a statically typed, compiled language with built-in concurrency and a large standard library.
- The document concludes by outlining plans to build a sample TODO app in Go called "Tri" to demonstrate CLI design and development.
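The command/argument/flag/subcommand pattern listed above is language-agnostic. As a quick sketch (written in Python for brevity rather than the Go used in the slides, with hypothetical subcommands loosely modeled on the "Tri" TODO app), a parser following that pattern might look like:

```python
import argparse

def build_parser():
    # Top-level command with subcommands, mirroring the common
    # `tool <subcommand> [flags] [args]` CLI pattern.
    parser = argparse.ArgumentParser(prog="tri", description="toy TODO CLI")
    sub = parser.add_subparsers(dest="command", required=True)

    add = sub.add_parser("add", help="add a TODO item")
    add.add_argument("text")                             # positional argument
    add.add_argument("--priority", type=int, default=1)  # option/flag

    sub.add_parser("list", help="list TODO items")
    return parser

# Parse an explicit argv so the sketch is self-contained.
demo = build_parser().parse_args(["add", "buy milk", "--priority", "2"])
print(demo.command, demo.text, demo.priority)
```

In Go the same shape is typically achieved with the standard `flag` package or libraries such as Cobra.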
Azure DevOps provides developer services that allow teams to plan work, collaborate on code development, and build and deploy applications. Azure DevOps supports a collaborative culture and set of processes that bring together developers, project managers, and contributors to develop software. It allows organizations to create and improve products at a faster pace than they can with traditional software development approaches.
A 20-minute talk about how WePay runs Airflow, covering usage, operations, and running Airflow on Google Cloud.
Video of the talk is available here:
https://wepayinc.box.com/s/hf1chwmthuet29ux2a83f5quc8o5q18k
PromQL Deep Dive - The Prometheus Query Language - Weaveworks
- What is PromQL
- PromQL operators
- PromQL functions
- Hands on: Building queries in PromQL
- Hands on: Visualizing PromQL in Grafana
- Prometheus alerts in PromQL
- Hands on: Creating an alert in Prometheus with PromQL
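A few representative queries of the kind the hands-on sections cover (the metric name and labels are hypothetical):

```promql
# Instant vector: current value of a counter, filtered by labels
http_requests_total{job="api", status="500"}

# Function over a range vector: per-second request rate over 5 minutes
rate(http_requests_total{job="api"}[5m])

# Aggregation operator: total rate across instances, grouped by status
sum by (status) (rate(http_requests_total[5m]))
```

The same expressions can back a Grafana panel or, with a threshold comparison such as `> 0.5`, a Prometheus alerting rule.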
Presentation given at Coolblue B.V. demonstrating Apache Airflow (incubating), what we learned from the underlying design principles and how an implementation of these principles reduce the amount of ETL effort. Why choose Airflow? Because it makes your engineering life easier, more people can contribute to how data flows through the organization, so that you can spend more time applying your brain to more difficult problems like Machine Learning, Deep Learning and higher level analysis.
In the session, we discussed the End-to-end working of Apache Airflow that mainly focused on "Why What and How" factors. It includes the DAG creation/implementation, Architecture, pros & cons. It also includes how the DAG is created for scheduling the Job and what all steps are required to create the DAG using python script & finally with the working demo.
InfluxDB is an open source time series database written in Go that stores metric data and performs real-time analytics. It has no external dependencies. InfluxDB stores data as time series with measurements, tags, and fields. Data is written using a line protocol and can be visualized using Grafana, an open source metrics dashboard.
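The line protocol mentioned above combines a measurement, tags, fields, and a timestamp into one line per point. A minimal sketch (helper name hypothetical, and simplified: real InfluxDB also requires escaping and an `i` suffix for integer fields):

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Format one point as InfluxDB-style line protocol:
    measurement,tag=val,... field=val,... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

line = to_line_protocol(
    "cpu",
    {"host": "server01", "region": "eu"},
    {"usage": 0.64},
    ts_ns=1700000000000000000,
)
print(line)  # cpu,host=server01,region=eu usage=0.64 1700000000000000000
```

Tags are indexed metadata used for filtering and grouping; fields carry the actual measured values.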
Building an Equitable Tech Future - By ThoughtWorks Brisbane
At the heart of ThoughtWorks is an ambitious mission: to be a proactive agent of progressive change in the world. Aware of our own privilege, we strive to see the world from the perspective of the oppressed, the powerless and the invisible.
With QUT, here in Brisbane, we’re kicking off a series of research, projects, and conversations about the social impact of tech trends, with a view to building a more equitable tech future. Some of these topics include:
- Algorithmic accountability, transparency, bias & inclusion
- Responsible data practices (privacy and ownership of data)
- Automation and the future of work
- Data use in social media and elections
- Fake news and echo chambers
- Regulating decentralised technologies
- Blockchain for good
- End-user autonomy and privacy
Slides from: Felicity Ruby, Eru Penkman, Clayton Nyakana,
Assoc. Prof. Nic Suzor (QUT) & Dr. Monique Mann (QUT)
Image Similarity Detection at Scale Using LSH and Tensorflow with Andrey Gusev - Databricks
Learning over images and understanding the quality of content play an important role at Pinterest. This talk will present a Spark based system responsible for detecting near (and far) duplicate images. The system is used to improve the accuracy of recommendations and search results across a number of production surfaces at Pinterest.
At the core of the pipeline is a Spark implementation of batch LSH (locality sensitive hashing) search capable of comparing billions of items on a daily basis. This implementation replaced an older (MR/Solr/OpenCV) system, increasing throughput by 13x and decreasing runtime by 8x. A generalized Spark Batch LSH is now used outside of the image similarity context by a number of consumers. Inverted index compression using variable byte encoding, dictionary encoding, and primitives packing are some examples of what allows this implementation to scale. The second part of this talk will detail training and integration of a Tensorflow neural net with Spark, used in the candidate selection step of the system. By directly leveraging vectorization in a Spark context we can reduce the latency of the predictions and increase the throughput.
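The batch LSH idea at the core of the pipeline can be sketched in miniature (pure Python with token sets standing in for image features; nothing like the scale or details of the Spark implementation): items whose MinHash signatures agree on any band hash to the same bucket and become candidate near-duplicate pairs, so most non-matching pairs are never compared.

```python
import hashlib
from collections import defaultdict

def minhash_signature(tokens, num_hashes=8):
    """Toy MinHash: for each seeded hash function, keep the minimum
    hash value over the item's tokens."""
    return tuple(
        min(int(hashlib.md5(f"{seed}:{t}".encode()).hexdigest(), 16)
            for t in sorted(tokens))
        for seed in range(num_hashes)
    )

def lsh_candidates(items, bands=4):
    """Band the signatures; items sharing any band land in the same
    bucket and become candidate near-duplicate pairs."""
    buckets = defaultdict(set)
    for name, tokens in items.items():
        sig = minhash_signature(tokens)
        rows = len(sig) // bands
        for b in range(bands):
            buckets[(b, sig[b * rows:(b + 1) * rows])].add(name)
    pairs = set()
    for bucket in buckets.values():
        members = sorted(bucket)
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                pairs.add((members[i], members[j]))
    return pairs

items = {
    "img_a": {"red", "cat", "grass"},
    "img_b": {"red", "cat", "grass"},   # near-duplicate of img_a
    "img_c": {"blue", "car", "road"},
}
print(lsh_candidates(items))
```

Only the candidate pairs then need an expensive exact comparison, which is what makes all-pairs search over billions of items tractable.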
Overall, this talk will cover a scalable Spark image processing and prediction pipeline.
Airflow is a platform created by Airbnb to automate and schedule workflows. It uses a Directed Acyclic Graph (DAG) structure to define dependencies between tasks, and allows scheduling tasks on a timetable or triggering them manually. Some key features include monitoring task status, resuming failed tasks, backfilling historical data, and a web-based user interface. While additional databases are required for high availability, Airflow provides a flexible way to model complex data workflows as code.
Intro to Airflow: Goodbye Cron, Welcome Scheduled Workflow Management - Burasakorn Sabyeying
This document discusses Apache Airflow, an open-source workflow management platform for authoring, scheduling, and monitoring workflows or pipelines. It provides an overview of Airflow's key features and components, including Directed Acyclic Graphs (DAGs) for defining workflows as Python code, various operators for building tasks, and its rich web UI. The document compares Airflow to traditional cron jobs, noting Airflow can handle task dependencies and failures better than cron. It also outlines how to set up an Airflow cluster on multiple nodes for scaling workflows.
- Multi-tenancy allows cloud computing platforms like Salesforce to efficiently utilize server capacity, storage, and labor by hosting multiple customers on the same infrastructure (shared stack model), avoiding wasted resources of the single-tenant model.
- Salesforce's metadata-driven architecture enables seamless upgrades where customizations and integrations are automatically upgraded to the latest version without hassle for the customer.
- Major areas under development at Salesforce include programmable user interfaces, cloud logic, workflow and approvals, integration, mobile deployment, analytics, security and sharing models, and applications.
Using AI to Build a Self-Driving Query Optimizer with Shivnath Babu and Adria... - Databricks
This document discusses using AI to build a self-driving query optimizer called Unravel Sessions. It introduces Shivnath Babu and Adrian Popescu, who founded Unravel to focus on ease of use and manageability of data systems. The document outlines challenges with current approaches to application performance tuning. It then demonstrates how Unravel Sessions uses a probe algorithm and Gaussian process model to automatically recommend performance optimizations through iterative probes. Key lessons learned in building the Unravel Sessions architecture include using model ensembles, cheap probes when possible, full-stack monitoring data, and keeping the user in the loop.
The document provides an overview of the agile software development process. It begins with defining agile as an iterative and adaptive approach to software development performed collaboratively by self-organizing teams. It then discusses agile principles like valuing customer collaboration, responding to change, and delivering working software frequently. The document also covers specific agile frameworks like Scrum and Extreme Programming, the role of user stories, estimation techniques like planning poker, and ceremonies like daily stand-ups, sprint planning and retrospectives. It concludes by comparing agile to the traditional waterfall model and defining some common agile metrics.
The Scaled Agile Framework (SAFe) has been adopted by organizations in domains ranging from finance and logistics to insurance and government. SAFe provides a framework for applying Lean and Agile practices at an enterprise level. But why use SAFe? This interactive session draws on the experience of Rishi Chaddha, a SAFe consultant, in implementing SAFe at a large financial institution. Going beyond the theory, we will talk about the challenges faced when implementing SAFe in a portfolio that includes hundreds of people distributed worldwide, where each initiative can be worth anywhere from a few thousand dollars to millions. The talk will cover both the good and the bad, and will show how to practically start a SAFe transformation.
This document outlines the advanced features of the Xporter add-on for JIRA, including: getting templates to the next level with formatting functions, conditional blocks, and JavaScript statements; using workflow post-functions to upload or email exported documents; creating scheduled reports to email regularly; integrating with other add-ons like Xray and Structure; and providing a REST API for external systems to generate documents. It also briefly introduces the Xporter team and some of their clients and products.
Apache Iceberg: An Architectural Look Under the Covers - ScyllaDB
Data Lakes have been built with a desire to democratize data - to allow more and more people, tools, and applications to make use of data. A key capability needed to achieve this is hiding the complexity of underlying data structures and physical data storage from users. The de facto standard, the Hive table format, addresses some of these problems but falls short at data, user, and application scale. So what is the answer? Apache Iceberg.
The Apache Iceberg table format is now used and contributed to by many leading tech companies, including Netflix, Apple, Airbnb, LinkedIn, Dremio, Expedia, and AWS.
Watch Alex Merced, Developer Advocate at Dremio, as he describes the open architecture and performance-oriented capabilities of Apache Iceberg.
You will learn:
• The issues that arise when using the Hive table format at scale, and why we need a new table format
• How a straightforward, elegant change in table format structure has enormous positive effects
• The underlying architecture of an Apache Iceberg table, how a query against an Iceberg table works, and how the table’s underlying structure changes as CRUD operations are done on it
• The resulting benefits of this architectural design
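A deliberately simplified model of the snapshot idea behind this architecture (these are not Iceberg's actual metadata classes, and Iceberg tracks files through manifest files rather than plain lists): each commit writes a new immutable snapshot describing the table's data files, so readers always plan against a consistent state and CRUD operations only add metadata.

```python
class ToySnapshotTable:
    """Simplified: a table is a log of immutable snapshots, each an
    explicit set of data files. Iceberg reaches the file list via
    metadata files, manifest lists, and manifests; elided here."""

    def __init__(self):
        self.snapshots = [frozenset()]  # snapshot 0: empty table

    def commit(self, add=(), delete=()):
        # CRUD never mutates an old snapshot; it appends a new one.
        current = self.snapshots[-1]
        self.snapshots.append(frozenset((current - set(delete)) | set(add)))

    def scan(self, snapshot_id=-1):
        """A query plans against one snapshot's file list."""
        return sorted(self.snapshots[snapshot_id])

t = ToySnapshotTable()
t.commit(add=["data-001.parquet", "data-002.parquet"])                # insert
t.commit(add=["data-003.parquet"], delete=["data-001.parquet"])       # update/delete
print(t.scan())   # latest snapshot
print(t.scan(1))  # time travel: the table as of the first commit
```

The immutable-snapshot log is what gives the format atomic commits, time travel, and safe concurrent readers.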
Pandas UDF and Python Type Hint in Apache Spark 3.0 - Databricks
Over the past several years, pandas UDFs have been perhaps the most important change to Apache Spark for Python data science. However, these functionalities evolved organically, leading to inconsistencies and confusion among users. In Apache Spark 3.0, pandas UDFs were redesigned by leveraging Python type hints.
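The core of the redesign is that the UDF's evaluation style is inferred from the function's type hints rather than declared with an explicit enum. That dispatch idea can be sketched without Spark or pandas (the helper below is illustrative, not Spark's API; plain lists stand in for `pd.Series`):

```python
import collections.abc
from typing import Iterator, get_type_hints, get_origin

def infer_udf_style(func):
    """Mimic the Spark 3.0 idea: inspect type hints to decide how the
    UDF is evaluated, instead of requiring an explicit PandasUDFType
    enum as Spark 2.x did."""
    hints = get_type_hints(func)
    params = [h for name, h in hints.items() if name != "return"]
    if any(get_origin(h) is collections.abc.Iterator for h in params):
        return "iterator-of-series"
    return "series-to-series"

def plus_one(col: list) -> list:                      # Series -> Series style
    return [x + 1 for x in col]

def plus_one_batched(batches: Iterator[list]) -> Iterator[list]:
    for batch in batches:                              # Iterator-of-Series style
        yield [x + 1 for x in batch]

print(infer_udf_style(plus_one))          # series-to-series
print(infer_udf_style(plus_one_batched))  # iterator-of-series
```

In real Spark 3.0 code the hints would be `pd.Series` and `Iterator[pd.Series]` on a function decorated with `@pandas_udf`.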
This document discusses connecting the issue tracking software Jira to Oracle Application Express (APEX) by utilizing Jira's REST web services and JSON formatting. It covers motivating the need to integrate the tools, an overview of Jira features, using REST and JSON to retrieve and parse Jira issue data, and demonstrations of consuming the web services in APEX including using collections to cache responses.
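To give a flavor of the REST and JSON parsing step (shown in Python for brevity; in the talk this happens in APEX/PLSQL, caching rows in APEX collections; the payload below is a trimmed, simplified example of what a Jira search endpoint returns, with hypothetical issue keys):

```python
import json

# Trimmed example of JSON from a Jira REST search call, e.g.
# GET /rest/api/2/search - structure simplified for illustration.
payload = json.loads("""
{
  "issues": [
    {"key": "DEMO-1", "fields": {"summary": "Fix login bug",
                                 "status": {"name": "Open"}}},
    {"key": "DEMO-2", "fields": {"summary": "Update docs",
                                 "status": {"name": "Done"}}}
  ]
}
""")

def issue_rows(payload):
    """Flatten issues into rows, analogous to loading members of an
    APEX collection for display in a report region."""
    return [(i["key"], i["fields"]["summary"], i["fields"]["status"]["name"])
            for i in payload["issues"]]

print(issue_rows(payload))
```

Caching the parsed rows (in APEX, via a collection) avoids re-calling the web service on every page refresh.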
Tavisca travel technology uses JIRA to track technological issues reported by its global clientele. JIRA converts issues into compressed "cards" containing details about the client, problem dates, assigned team, status, tests performed, and recommendations. This card format allows engineers easy access to problem information and standardizes the problem-solving process. By improving agile practices like continuous improvement, JIRA implementation helps Tavisca enhance customer experience and reduce costs in accordance with its objectives.
The document discusses effective release management for Salesforce development teams using AutoRABIT. It introduces AutoRABIT as a tool for continuous integration, test automation, and release management. It then demonstrates AutoRABIT's capabilities such as continuous integration workflows, automated testing, sandbox management, and visualization dashboards to improve release velocity. The presentation concludes by emphasizing how AutoRABIT can help teams achieve more frequent, higher quality releases.
This document provides an overview of building a command line interface (CLI) application in Go. It discusses UX considerations for CLIs, common CLI patterns and philosophies, and Go-specific topics. Some key points include:
- CLI apps should follow Unix philosophies of being simple, clear, composable, and extensible.
- Common CLI patterns include commands, arguments, options/flags, and subcommands.
- Go is a statically typed, compiled language with built-in concurrency and a large standard library.
- The document concludes by outlining plans to build a sample TODO app in Go called "Tri" to demonstrate CLI design and development.
Azure DevOps provides developer services for allowing teams to plan work, collaborate on code development, and build and deploy applications. Azure DevOps supports a collaborative culture and set of processes that bring together developers, project managers, and contributors to develop software. It allows organizations to create and improve products at a faster pace than they can with traditional software development approaches.
A 20 minute talk about how WePay runs airflow. Discusses usage and operations. Also covers running Airflow in Google cloud.
Video of the talk is available here:
https://wepayinc.box.com/s/hf1chwmthuet29ux2a83f5quc8o5q18k
PromQL Deep Dive - The Prometheus Query Language Weaveworks
- What is PromQL
- PromQL operators
- PromQL functions
- Hands on: Building queries in PromQL
- Hands on: Visualizing PromQL in Grafana
- Prometheus alerts in PromQL
- Hands on: Creating an alert in Prometheus with PromQL
Presentation given at Coolblue B.V. demonstrating Apache Airflow (incubating), what we learned from the underlying design principles and how an implementation of these principles reduce the amount of ETL effort. Why choose Airflow? Because it makes your engineering life easier, more people can contribute to how data flows through the organization, so that you can spend more time applying your brain to more difficult problems like Machine Learning, Deep Learning and higher level analysis.
In the session, we discussed the End-to-end working of Apache Airflow that mainly focused on "Why What and How" factors. It includes the DAG creation/implementation, Architecture, pros & cons. It also includes how the DAG is created for scheduling the Job and what all steps are required to create the DAG using python script & finally with the working demo.
InfluxDB is an open source time series database written in Go that stores metric data and performs real-time analytics. It has no external dependencies. InfluxDB stores data as time series with measurements, tags, and fields. Data is written using a line protocol and can be visualized using Grafana, an open source metrics dashboard.
Building an Equitable Tech Future - By ThoughtWorks BrisbaneThoughtworks
At the heart of ThoughtWorks is an ambitious mission: to be a proactive agent of progressive change in the world. Aware of our own privilege, we strive to see the world from the perspective of the oppressed, the powerless and the invisible.
With QUT, here in Brisbane, we’re kicking off a series of research, projects, and conversations about the social impact of tech trends, with a view to building a more equitable tech future. Some of these topics include:
- Algorithmic accountability, transparency, bias & inclusion
- Responsible data practices (privacy and ownership of data)
- Automation and the future of work
- Data use in social media and elections
- Fake news and echo chambers
- Regulating decentralised technologies
- Blockchain for good
- End-user autonomy and privacy
Slides from: Felicity Ruby, Eru Penkman, Clayton Nyakana,
Assoc. Prof. Nic Suzor (QUT) & Dr. Monique Mann (QUT)
Image Similarity Detection at Scale Using LSH and Tensorflow with Andrey GusevDatabricks
Learning over images and understanding the quality of content play an important role at Pinterest. This talk will present a Spark based system responsible for detecting near (and far) duplicate images. The system is used to improve the accuracy of recommendations and search results across a number of production surfaces at Pinterest.
At the core of the pipeline is a Spark implementation of batch LSH (locality sensitive hashing) search capable of comparing billions of items on a daily basis. This implementation replaced an older (MR/Solr/OpenCV) system, increasing throughput by 13x and decreasing runtime by 8x. A generalized Spark Batch LSH is now used outside of the image similarity context by a number of consumers. Inverted index compression using variable byte encoding, dictionary encoding, and primitives packing are some examples of what allows this implementation to scale. The second part of this talk will detail training and integration of a Tensorflow neural net with Spark, used in the candidate selection step of the system. By directly leveraging vectorization in a Spark context we can reduce the latency of the predictions and increase the throughput.
Overall, this talk will cover a scalable Spark image processing and prediction pipeline.
Airflow is a platform created by Airbnb to automate and schedule workflows. It uses a Directed Acyclic Graph (DAG) structure to define dependencies between tasks, and allows scheduling tasks on a timetable or triggering them manually. Some key features include monitoring task status, resuming failed tasks, backfilling historical data, and a web-based user interface. While additional databases are required for high availability, Airflow provides a flexible way to model complex data workflows as code.
Intro to Airflow: Goodbye Cron, Welcome scheduled workflow managementBurasakorn Sabyeying
This document discusses Apache Airflow, an open-source workflow management platform for authoring, scheduling, and monitoring workflows or pipelines. It provides an overview of Airflow's key features and components, including Directed Acyclic Graphs (DAGs) for defining workflows as Python code, various operators for building tasks, and its rich web UI. The document compares Airflow to traditional cron jobs, noting Airflow can handle task dependencies and failures better than cron. It also outlines how to set up an Airflow cluster on multiple nodes for scaling workflows.
- Multi-tenancy allows cloud computing platforms like Salesforce to efficiently utilize server capacity, storage, and labor by hosting multiple customers on the same infrastructure (shared stack model), avoiding wasted resources of the single-tenant model.
- Salesforce's metadata-driven architecture enables seamless upgrades where customizations and integrations are automatically upgraded to the latest version without hassle for the customer.
- Major areas under development at Salesforce include programmable user interfaces, cloud logic, workflow and approvals, integration, mobile deployment, analytics, security and sharing models, and applications.
Using AI to Build a Self-Driving Query Optimizer with Shivnath Babu and Adria...Databricks
This document discusses using AI to build a self-driving query optimizer called Unravel Sessions. It introduces Shivnath Babu and Adrian Popescu, who founded Unravel to focus on ease of use and manageability of data systems. The document outlines challenges with current approaches to application performance tuning. It then demonstrates how Unravel Sessions uses a probe algorithm and Gaussian process model to automatically recommend performance optimizations through iterative probes. Key lessons learned in building the Unravel Sessions architecture include using model ensembles, cheap probes when possible, full-stack monitoring data, and keeping the user in the loop.
The document provides an overview of the agile software development process. It begins with defining agile as an iterative and adaptive approach to software development performed collaboratively by self-organizing teams. It then discusses agile principles like valuing customer collaboration, responding to change, and delivering working software frequently. The document also covers specific agile frameworks like Scrum and Extreme Programming, the role of user stories, estimation techniques like planning poker, and ceremonies like daily stand-ups, sprint planning and retrospectives. It concludes by comparing agile to the traditional waterfall model and defining some common agile metrics.
Scaled Agile Framework, SAFe, has been adopted by organizations of domain ranging from finance, logistics, insurance and government. SAFe provides a framework to apply Lean and Agile practices at an enterprise level. But why use SAFe? In this interactive session based on Rishi Chaddha, SAFe consultant, experience in implementing SAFe in big financial institute. Going beyond the theory we will talk about the challenges faced when implementing SAFe in portfolio which includes hundred of people distributed worldwide. Each initiative in the portfolio can be worth from few thousand of dollars to millions of dollars. The talk will cover both the good and the bad and will show how to practically start SAFe transformation.
This document outlines the advanced features of the Xporter add-on for JIRA, including: getting templates to the next level with formatting functions, conditional blocks, and JavaScript statements; using workflow post-functions to upload or email exported documents; creating scheduled reports to email regularly; integrating with other add-ons like Xray and Structure; and providing a REST API for external systems to generate documents. It also briefly introduces the Xporter team and some of their clients and products.
Apache Iceberg: An Architectural Look Under the CoversScyllaDB
Data Lakes have been built with a desire to democratize data - to allow more and more people, tools, and applications to make use of data. A key capability needed to achieve it is hiding the complexity of underlying data structures and physical data storage from users. The de-facto standard has been the Hive table format addresses some of these problems but falls short at data, user, and application scale. So what is the answer? Apache Iceberg.
Apache Iceberg table format is now in use and contributed to by many leading tech companies like Netflix, Apple, Airbnb, LinkedIn, Dremio, Expedia, and AWS.
Watch Alex Merced, Developer Advocate at Dremio, as he describes the open architecture and performance-oriented capabilities of Apache Iceberg.
You will learn:
• The issues that arise when using the Hive table format at scale, and why we need a new table format
• How a straightforward, elegant change in table format structure has enormous positive effects
• The underlying architecture of an Apache Iceberg table, how a query against an Iceberg table works, and how the table’s underlying structure changes as CRUD operations are done on it
• The resulting benefits of this architectural design
Pandas UDF and Python Type Hint in Apache Spark 3.0Databricks
In the past several years, the pandas UDFs are perhaps the most important changes to Apache Spark for Python data science. However, these functionalities have evolved organically, leading to some inconsistencies and confusions among users. In Apache Spark 3.0, the pandas UDFs were redesigned by leveraging type hints.
This document discusses connecting the issue tracking software Jira to Oracle Application Express (APEX) by utilizing Jira's REST web services and JSON formatting. It covers motivating the need to integrate the tools, an overview of Jira features, using REST and JSON to retrieve and parse Jira issue data, and demonstrations of consuming the web services in APEX including using collections to cache responses.
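A Jira issue fetched via GET /rest/api/2/issue/{key} comes back as JSON with the interesting values nested under "fields". The sketch below parses a trimmed, hand-written sample payload with Python's stdlib json module for illustration; inside APEX itself the call would typically go through apex_web_service.make_rest_request with apex_json (or a collection) doing the parsing and caching:

```python
# Parse a (hand-written, trimmed) Jira issue payload of the shape returned
# by GET /rest/api/2/issue/{key}. Values here are sample data, not a live response.
import json

payload = """{
  "key": "DEMO-1",
  "fields": {
    "summary": "Login page throws ORA-01403",
    "status": {"name": "In Progress"},
    "assignee": {"displayName": "D. Gielis"}
  }
}"""

issue = json.loads(payload)
fields = issue["fields"]

# Flatten the nested structure into one record, as you would before
# loading it into an APEX collection or report region.
row = {
    "issue_key": issue["key"],
    "summary": fields["summary"],
    "status": fields["status"]["name"],
    "assignee": fields["assignee"]["displayName"],
}
print(row["issue_key"], row["status"])  # DEMO-1 In Progress
```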
Tavisca travel technology uses JIRA to track technological issues reported by its global clientele. JIRA converts issues into compressed "cards" containing details about the client, problem dates, assigned team, status, tests performed, and recommendations. This card format allows engineers easy access to problem information and standardizes the problem-solving process. By improving agile practices like continuous improvement, JIRA implementation helps Tavisca enhance customer experience and reduce costs in accordance with its objectives.
Dimitri Gielis is the founder and CEO of APEX R&D. He has over 18 years of experience with Oracle technologies including being an Oracle ACE Director. He presented on using service workers in Oracle Application Express. Service workers allow web applications to have rich offline experiences through features like periodic background syncs, push notifications, and geofencing. The presentation covered what service workers are, the problems they solve, their lifecycle, an example of using one, and potential use cases in APEX applications.
Managing Product Growth Experiments With JIRA and Confluence - Zane Salim - Atlassian
Experiments and A/B tests are at the heart of innovation at Twitter. A culture of experimentation empowers the organization to try new things with minimal downside. Twitter uses Confluence and JIRA to manage hundreds of experiments at any given time. Learn Twitter's philosophy on experimentation, how we enhance organizational learning, and uncover deeper insights that help everyone become smarter with every new experiment.
Learn how Autodesk broke the 300,000-issue barrier without impacting performance, keeping excellent uptime, with more than 3000 registered users and an average of 1800 concurrent users. In this session you will discover the hardware architecture, system settings and other interesting data from Autodesk's experience in the field.
Jira plugin dev introduction 14012014 - alukasgotter
JIRA is a project, process and product management tool from Atlassian. It offers great customization possibilities and an open architecture for developing plug-ins/add-ons. In this talk I explore the possibilities developers have and give an overview of and introduction to the Atlassian plugin framework.
Internet security is a topical subject these days. It is becoming more and more important to secure your applications against threats such as hackers, because more and more sensitive information is becoming available to them. The biggest risk is that someone could take over your identity. During this presentation, discover the best practices in securing mobile applications written in APEX to protect them against different threats.
This document discusses Agile project management tools and methodologies. It covers JIRA Agile for tracking work in an Agile workflow, the Scrum framework, and its events and artifacts like sprints, product backlogs, and burn down charts. It also mentions the Agile manifesto and its values of prioritizing working software and customer collaboration over documentation and contracts.
Wearables are hot these days. One might say it is a true revolution. We at APEX R&D are entering that wearables revolution as well, through Oracle Application Express. During this presentation, learn about the APEX R&D project and the features researched, including the Apple Watch. Wouldn't it be great to facilitate people's work in such a way that they have time for other important things?
Dimitri Gielis is the founder and CEO of APEX R&D. He has over 17 years of experience with Oracle technologies and is an Oracle ACE Director. In this presentation, he demonstrates how to print from an APEX application using Node.js and the APEX Office Print module. He shows how APEX Office Print allows using Microsoft Office templates to generate output in Word, Excel, PowerPoint and PDF without having to code the documents.
Using JIRA & Greenhopper for Agile Development - Jeff Leyser
This document discusses using JIRA and Greenhopper for agile development. It provides an overview of JIRA as an issue tracking platform that can be used for various purposes including project management, help desk support, and software development. It also discusses Greenhopper, an agile project management plugin for JIRA that includes planning, task, and chart boards. The document concludes by encouraging evaluation of JIRA and Greenhopper.
This document discusses the issue tracking software JIRA. It provides an overview of what JIRA is, key concepts like customizable workflows, reasons to use JIRA like its extensive features and customizability, who uses JIRA with over 10,000 customers in 90 countries, and examples of features like issue creation, roadmaps, reports, notifications, searching and security. It concludes that JIRA is a capable issue management application that has grown to manage various business processes through workflow automation.
This document provides guidance on designing RESTful APIs. It recommends using nouns instead of verbs, keeping URLs simple with only two endpoints per resource, and following conventions from leading APIs. Complex variations and optional parameters should be "swept behind the '?'." The document emphasizes designing for application developers by making APIs intuitive, consistent and complete while also accommodating exceptional clients. It suggests adding an API virtualization layer to handle complexity.
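The "two base URLs per resource" and "sweep variations behind the '?'" rules can be made concrete with a tiny sketch; the resource names below are illustrative, not from the document:

```python
# Sketch of the "nouns, two base URLs per resource, options behind '?'" rules.
from urllib.parse import urlencode

def collection_url(resource):
    return f"/{resource}"            # endpoint 1: the collection, e.g. GET /dogs

def element_url(resource, item_id):
    return f"/{resource}/{item_id}"  # endpoint 2: one element, e.g. GET /dogs/1234

def with_options(url, **params):
    """Sweep optional variations behind the '?' instead of minting new endpoints."""
    return f"{url}?{urlencode(sorted(params.items()))}" if params else url

print(collection_url("dogs"))                 # /dogs
print(element_url("dogs", 1234))              # /dogs/1234
print(with_options(collection_url("dogs"), color="red", state="running"))
# /dogs?color=red&state=running
```

Verbs then come from HTTP methods (GET, POST, PUT, DELETE) on those two noun-based URLs rather than from the path itself.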
The document provides an agenda for a presentation on JIRA. The agenda includes explaining what JIRA is, discussing JIRA concepts and features, explaining why JIRA is useful, demonstrating how to use JIRA live, and holding a question and answer session. Sections of the presentation will cover topics like what JIRA is used for, how issues, projects and subtasks are organized in JIRA, example implementations of JIRA in different contexts, and key features and benefits of the software.
Les Hazlewood, Stormpath co-founder and CTO and the Apache Shiro PMC Chair demonstrates how to design a beautiful REST + JSON API. Includes the principles of RESTful design, how REST differs from XML, tips for increasing adoption of your API, and security concerns.
Presentation video: https://www.youtube.com/watch?v=5WXYw4J4QOU
More info: http://www.stormpath.com/blog/designing-rest-json-apis
Further reading: http://www.stormpath.com/blog
Sign up for Stormpath: https://api.stormpath.com/register
Stormpath is a user management and authentication service for developers. By offloading user management and authentication to Stormpath, developers can bring applications to market faster, reduce development costs, and protect their users. Easy and secure, the flexible cloud service can manage millions of users with a scalable pricing model.
Project managers use Key Performance Indicators (KPIs) and dashboards to monitor and communicate the status of a project. KPIs should be measurable metrics that indicate whether objectives are being met. Effective KPIs are specific, measurable, attainable, relevant and time-bound. KPIs can be quantitative or qualitative and should be selected to provide insights without overwhelming stakeholders with too much data. Dashboards consolidate multiple KPIs using visual widgets like charts, tables and gauges to give viewers a quick status update in an easy-to-understand format.
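A quantitative, time-bound KPI of the kind described above, say the share of tasks finished by their due date in a reporting period, might be computed like this (the task data is made up for illustration):

```python
# Compute a simple quantitative KPI: on-time task completion rate.
# The task list below is hypothetical sample data.
from datetime import date

tasks = [
    {"due": date(2024, 5, 10), "done": date(2024, 5, 9)},
    {"due": date(2024, 5, 12), "done": date(2024, 5, 15)},  # late
    {"due": date(2024, 5, 20), "done": date(2024, 5, 20)},
    {"due": date(2024, 5, 25), "done": None},               # still open counts as missed
]

def on_time_rate(tasks):
    """Percentage of tasks completed on or before their due date."""
    on_time = sum(1 for t in tasks if t["done"] is not None and t["done"] <= t["due"])
    return round(100 * on_time / len(tasks), 1)

print(on_time_rate(tasks), "% of tasks completed on time")
```

A dashboard widget (gauge or trend chart) would then plot this single number per sprint or month rather than the raw task list.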
Building applications with Serverless Framework and AWS Lambda - Fredrik Vraalsen
Slides from an intro workshop at Berlin Buzzwords 2019 about building serverless applications using Serverless Framework on AWS. Covers the basics of building backend/REST APIs, event processing, and orchestration.
This presentation was prepared for a Webcast where John Yerhot, Engine Yard US Support Lead, and Chris Kelly, Technical Evangelist at New Relic discussed how you can scale and improve the performance of your Ruby web apps. They shared detailed guidance on issues like:
Caching strategies
Slow database queries
Background processing
Profiling Ruby applications
Picking the right Ruby web server
Sharding data
Attendees will learn how to:
Gain visibility on site performance
Improve scalability and uptime
Find and fix key bottlenecks
See the on-demand replay:
http://pages.engineyard.com/6TipsforImprovingRubyApplicationPerformance.html
1) The authors describe how they secured a web application and backend systems to win an OpenHack competition by focusing on principles like reducing the attack surface, using strong authentication and encryption, validating all inputs, and implementing defense in depth.
2) Key aspects of their approach included using forms authentication for the web app, encrypting secrets, validating all user inputs with multiple checks, and configuring IIS, Windows, SQL Server, and IPSec policies following security best practices.
3) They were able to securely manage the systems remotely using a VPN, Terminal Services, and restricted file shares while keeping firewalls in place.
Azure Functions are great for a wide range of scenarios, including working with data on a transactional or event-driven basis. In this session, we'll look at how you can interact with Azure SQL, Cosmos DB, Event Hubs, and more so you can see how you can take a lightweight but code-first approach to building APIs, integrations, ETL, and maintenance routines.
PyData Berlin 2023 - Mythical ML Pipeline.pdf - Jim Dowling
This talk is a mental map for building ML systems as ML Pipelines that are factored into Feature Pipelines, Training Pipelines, and Inference Pipelines.
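That factoring can be sketched as three functions that communicate only through well-defined artifacts (features, a model) rather than shared internal state. Everything below, the data, the "model", and the names, is a toy stand-in for illustration, not a real ML workload:

```python
# Sketch of an ML system factored into feature, training, and inference pipelines.
# All data and the "model" are illustrative toys.

def feature_pipeline(raw_rows):
    """Raw data -> features (here: click-through rate and price)."""
    return [(r["clicks"] / max(r["views"], 1), r["price"]) for r in raw_rows]

def training_pipeline(features, labels):
    """Features + labels -> model artifact (here: a trivial mean-CTR threshold;
    labels are unused in this toy model)."""
    mean_ctr = sum(f[0] for f in features) / len(features)
    return {"ctr_threshold": mean_ctr}

def inference_pipeline(model, features):
    """Model artifact + fresh features -> predictions."""
    return [f[0] >= model["ctr_threshold"] for f in features]

raw = [{"clicks": 5, "views": 100, "price": 9.9},
       {"clicks": 40, "views": 200, "price": 5.0}]
feats = feature_pipeline(raw)
model = training_pipeline(feats, labels=[0, 1])
print(inference_pipeline(model, feats))  # [False, True]
```

The point of the split is operational: each pipeline can be scheduled, scaled, and monitored independently, as long as the feature and model artifacts keep their contracts.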
Data analytics master class: predict hotel revenue - Kris Peeters
We predict future revenues in hotels by solving the data science puzzle end-to-end: from infrastructure in the cloud and security, to data ingestion, data cleaning, feature building and model training and model scoring.
The video of this talk is here: https://www.facebook.com/datamindedbe/posts/1385820021562117
Doctor Flow: Enterprise Flows best practices - patterns (SharePoint Saturday...) - serge luca
This document summarizes a presentation on advanced tips, patterns and demos for Microsoft Flow. The presentation covers state machine patterns, controller patterns, calling APIs, managing errors, parallelism and long calls. It also demonstrates approval flows, custom connectors, and integrating Flow with PowerBI. The presenter emphasizes keeping forms simple, using service accounts, and splitting flows to avoid PowerShell dependency for automation.
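The state machine pattern covered in the talk boils down to a transition table driven by events; Flow implements it with a loop plus a switch on a state variable. Here is a tool-agnostic sketch with hypothetical approval states:

```python
# Generic state machine sketch (the states/events are hypothetical examples
# of an approval flow, not taken from the presentation's demos).

TRANSITIONS = {
    ("Draft", "submit"): "PendingApproval",
    ("PendingApproval", "approve"): "Approved",
    ("PendingApproval", "reject"): "Draft",
}

def run(events, state="Draft"):
    """Feed a sequence of events through the machine; return the final state."""
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {state!r}")
        state = TRANSITIONS[key]
    return state

print(run(["submit", "reject", "submit", "approve"]))  # Approved
```

Keeping all legal transitions in one table is what makes escalation and error handling manageable: an unexpected event fails loudly instead of silently branching.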
Gimel is a data abstraction framework built on Apache Spark, providing unified data access via API & SQL to different technologies such as Kafka, Elastic, HBase, REST APIs, files, object stores, relational databases, etc.
We spoke about this recently in the "cloud track" at the "Scale By The Bay" conference.
https://www.scale.bythebay.io/schedule
https://sched.co/e55D
Youtube - https://www.youtube.com/watch?v=cy8g2WZbEBI&ab_channel=FunctionalTV
https://youtu.be/m6_0iI4XDpU
Apache Samza is a stream processing framework that provides high-level APIs and powerful stream processing capabilities. It is used by many large companies for real-time stream processing. The document discusses Samza's stream processing architecture at LinkedIn, how it scales to process billions of messages per day across thousands of machines, and new features around faster onboarding, powerful APIs including Apache Beam support, easier development through high-level APIs and tables, and better operability in YARN and standalone clusters.
Nagios Conference 2013 - Eric Stanley and Andy Brist - API and Nagios - Nagios
Eric Stanley and Andy Brist's presentation on API and Nagios.
The presentation was given during the Nagios World Conference North America held Sept 20-Oct 2nd, 2013 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/nwcna
This document provides an overview of Apache Apex and real-time data visualization. Apache Apex is a platform for developing scalable streaming applications that can process billions of events per second with millisecond latency. It uses YARN for resource management and includes connectors, compute operators, and integrations. The document discusses using Apache Apex to build real-time dashboards and widgets using the App Data Framework, which exposes application data sources via topics. It also covers exporting and packaging dashboards to include in Apache Apex application packages.
The document provides steps to connect to a CloudFoundry environment and deploy a sample Predix application. It includes instructions on installing the CF CLI, logging in, listing services, creating a PostgreSQL service instance, pushing a sample app, and binding the app to the database. The steps cover common operations for deploying and managing apps on Pivotal CloudFoundry and interacting with services on Predix.
Doctor Flow - Best practices Microsoft Flow - Techorama 2019 - serge luca
This document summarizes a presentation about advanced tips, patterns, and best practices for using Microsoft Flow. It includes demos of creating automated timesheets, calling the Graph API, using custom connectors, implementing approval escalation workflows using the state machine pattern, managing errors and parallelism in flows, and integrating Flow with Power BI. It also covers topics like licensing requirements, throttling limits, and strategies for handling long-running processes and service accounts.
This document provides a summary of an individual's qualifications and experience working with Salesforce and cloud computing technologies. It summarizes their educational background, work history including projects implementing Salesforce applications for various clients, and technical skills including Salesforce administration, development with Apex, and integration of Salesforce with other applications. Their experience includes customizing standard objects and building custom objects in Salesforce, as well as implementing workflows, approvals, reports, and dashboards.
MuleSoft London Community February 2020 - MuleSoft and OData - Pace Integration
Our February meetup in London took us through MuleSoft and OData. Our guest speaker Martin Gardner (Solution Principal at Slalom) covered how you can use the MuleSoft OData APIKit to wrap a SOAP web service in a Mule app that presents an OData interface for use with the Salesforce Connect product. With examples from a recent project, Martin showed us how to avoid the pitfalls he fell into so that you can be successful.
Chris O'Brien - Modern SharePoint development: techniques for moving code off... - Chris O'Brien
This document discusses modern techniques for developing for SharePoint in a cloud-friendly way by moving code off SharePoint servers. It covers remote event receivers, PowerShell with CSOM, and Microsoft's App Model Samples. Remote event receivers allow executing code in response to events. PowerShell and CSOM is a powerful combination. The App Model Samples provide helper libraries and examples for common tasks like uploading files, provisioning sites and managing terms. While Microsoft's optimal approach is debated, these techniques allow customizations to be deployed to Office 365.
This document provides information on how to build a Maximizer API that allows editing an address book entry.
It involves creating an ASPX project with HTML and ASPX files to display the UI. JavaScript files are used to generate tokens and call the Maximizer API methods.
The process includes generating a token, declaring the JavaScript files, and creating a text box and buttons in the ASPX file to change the entry name. On click, the JavaScript makes an API call with the token to update the address book entry name. The files are then placed in the correct Maximizer folders and tested on the server.
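The token-then-call flow described above can be sketched as assembling an authenticated HTTP request; note that the endpoint path, payload shape, and header scheme below are hypothetical placeholders, not the documented Maximizer API (the real implementation in the document is JavaScript inside the ASPX pages):

```python
# Sketch of a token-authenticated update request. Endpoint path and payload
# are HYPOTHETICAL placeholders; consult the Maximizer API docs for real ones.
import json
from urllib.request import Request

token = "EXAMPLE-TOKEN"  # in the real flow this is generated first, per session

def build_update_request(base_url, entry_key, new_name):
    """Assemble (but do not send) the request that renames an address book entry."""
    body = json.dumps({"Key": entry_key, "Name": new_name}).encode()
    return Request(
        f"{base_url}/AddressBookEntry/Update",  # hypothetical path
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_update_request("https://example.invalid/api", "AB-42", "Acme Corp")
print(req.get_method(), req.full_url)
```

Separating "build the request" from "send it" also makes the button's click handler easy to test without touching the server.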
Chris O'Brien - Best bits of Azure for Office 365/SharePoint developers - Chris O'Brien
Discussion of Azure web apps, App Insights, "Azure Functions in the real world", ARM templates, queues, BLOB storage and more. Includes a video demo of AAD-secured Azure Function called from a SharePoint Framework (SPFx) web part with SPO cookie auth.
Qualitätssicherung für APEX Anwendungen.pdf - Oliver Lemm
In 2022 the open source framework Quasto was released as a pure PL/SQL quality assurance framework. Since the last DOAG conference, and based on feedback from several customers, the framework has been extended towards easier use and gained an APEX application for administration.
Whether development guidelines within APEX, requirements for the PL/SQL logic, or requirements for the data model - all of these properties can now be checked and managed more easily with Quasto through an APEX application.
An optional integration into the well-known open source Logger framework, or into a custom error-logging solution, is also possible.
Rules can be exported and imported as JSON, so they can be transferred more easily between instances or between projects.
Finally, Quasto includes a region plugin that shows, directly during development, whether rules are violated on the page you are currently working on.
The current release of Quasto will be published at the beginning of November 2023 and will be presented with its latest features for the first time at DOAG 2023.
Qualitätsstandards in der Datenbankentwicklung.pdf - Oliver Lemm
The quality of applications that store data in the database rests largely on the quality of the database model, the code quality, and the data itself.
This talk presents quality standards for all three categories that are decisive for high quality. Particularly in the area of the data model, many logical errors in the data can be prevented by a data model that is as restrictive as possible, including suitable metadata. Cleanly defined data models are also the key to success when defining permissions or reports. In the code, everything from code guidelines to best practices for logging, exception handling and general logic needs to be applied appropriately.
Finally, requirements for master data, or the expected volume of data in certain tables, are one way to avoid nasty surprises. Overall, this talk helps anyone who wants to live up to the high quality standards of today's applications in database development.
APEX richtig installieren und konfigurieren - Oliver Lemm
Slides on installing Oracle Application Express up to version 20.2. In addition to the standard installation steps, hints are given on what you may want to adjust for an optimal setup of APEX and ORDS.
The talk was given online at APEX Connect 2021.
In this presentation the different types of APEX migration are described: moving from old themes to the Universal Theme, moving from an old Universal Theme to the current Universal Theme, and the dependencies on the database.
When using Jenkins nowadays you have to learn all about Pipelines. This presentation shows how to use Jenkins Pipelines in Oracle projects.
The presentation was held at the DOAG Conference 2019 in Nuremberg.
The presentation was held in Nuremberg at the DOAG 2019 conference.
It is about the culture, tool and process changes which are necessary for the DevOps transition.
New features regarding the main theme in APEX 19.1 and 19.2: new features regarding template options, Theme Roller and icons.
The presentation was held at the DOAG conference 2019 in Nuremberg.
Presentation about using Jenkins as an automation tool for deploying database objects and APEX applications. Jenkins Pipelines are used and compared to Jenkins jobs.
This presentation was held at the DOAG 2018 conference in Nuremberg. It describes how to handle REST web services with Web Source Modules in APEX 18. Examples using FantasyData and Jira as web service endpoints are described.
The presentation was held in April 2018 at the APEX Connect 2018 conference.
It describes how to tackle problems and special customer requirements regarding the Interactive Grid in APEX 5.1.
Mastering Universal Theme with corporate design from Union Investment - Oliver Lemm
The presentation was held in March 2017 at the APEX World conference in Rotterdam.
Building a custom application including a corporate design in combination with the Universal Theme is always a challenge. In this presentation some customizations are made without breaking the subscription to the Universal Theme itself.
Mastering Universal Theme with corporate design from Union Investment - Oliver Lemm
Building a custom application including a corporate design in combination with the Universal Theme is always a challenge. In this presentation some customizations are made without breaking the subscription to the Universal Theme itself.
The presentation was held in June 2017 in San Antonio, Texas (USA).
This talk presents the features of APEX related to Oracle JET. With APEX 5.1, the JavaScript framework Oracle JET took over chart visualization in APEX. The talk introduces the charts and explains the properties that are important when creating chart regions.
The talk was given at the DOAG conference in Nuremberg on 22.11.2017.
This talk presents what should be considered in effort estimations. The talk was given at DOAG 2017 in Nuremberg. Besides the development effort, risk, testing effort, design and automation are only some of the factors that make up the total effort of a task.
Komplexe Daten mit Oracle Jet einfach aufbereitet - Oliver Lemm
This talk presents the features of Oracle JET within Oracle Application Express (APEX). Visualizing with charts lets end users evaluate and understand complex or large data sets much more easily and quickly.
The talk was given on 14.12.2017 in Frankfurt at the IT-Tage 2017.
Mastering Universal Theme with corporate design from Union Investment - Oliver Lemm
When creating an Oracle APEX application with APEX 5 or higher, the Universal Theme was introduced as the standard user interface for all applications.
If you want to combine the Universal Theme with a corporate design, the big challenge is to change the look and feel without changing most of the templates, only by adjusting CSS and a few templates.
Echtzeitvisualisierung von Twitter & Co - Oliver Lemm
The presentation was held at APEX Connect 2016 in Berlin on the 26th of April, together with Kai Donato. It demonstrates how to use the Twitter streaming API and visualize it in real time in a graph using VivagraphJS.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Programming Foundation Models with DSPy - Meetup Slides - Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect personal devices and information.
Fueling AI with Great Data with Airbyte Webinar - Zilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf - Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
5th LF Energy Power Grid Model Meet-up Slides - DanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Best 20 SEO Techniques To Improve Website Visibility In SERP - Pixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to solve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary spending, for example when a person document is used instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices to put into action immediately
Ocean lotus Threat actors project by John Sitima 2024 (1).pptx - SitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
OpenID AuthZEN Interop Read Out - Authorization - David Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Main news related to the CCS TSI 2023 (2023/1695) - Jakub Marek
An English 🇬🇧 translation of the presentation for a speech I gave about the main changes brought by the CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, held at the Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). It was attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Project Management Semester Long Project - Acuity - jpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
2. Facts & Figures
Independent technology house with cross-industry expertise
Headquarters: Ratingen (North Rhine-Westphalia)
Branches: Dortmund, Cologne, Frankfurt
Founded: 1994
Employees: 240
Revenue: 24 million euros
Top company for trainees & students
Privately-owned corporation
Oracle Platinum Partner
3. About me
Oliver Lemm
Working for MT AG in Ratingen since February 2007
Junior Consultant > Consultant > Senior Consultant
> Competence Center Leader APEX & Service Center Leader APEX
Diploma in applied computer science at the University of Duisburg-Essen
Project leader, IT architect, and developer
Working with Oracle Databases and Oracle Application Express since 2007
Blog http://oliverlemm.blogspot.de
Twitter https://twitter.com/OliverLemm
4. APEX Connect
April 26th - 28th, 2016 in Berlin
Get your early-bird ticket now!
apex.doag.org  #APEXCONN16
6. Motivation
"Working with one tool to handle the development process, controlling, and all other processes in one project."
"Calculating key figures based on Jira values which are not delivered by Jira itself."
"Using integrated Jira plugins and adding additional functionality by using APEX as a known technology."
requirements
7. Jira
https://www.atlassian.com/JIRA
Issue tracking and code integration
Supporting dashboards & plugins
Tight integration with Confluence (wiki)
and Subversion (versioning)
Supporting complex workflows
Issue import from Bugzilla, Mantis, GitHub, …
Supporting REST webservices
10. Jira
Supporting a huge number of attributes
time tracking (estimated, time spent, remaining)
components (can be used for APEX pages and database objects)
versions
fields and screens adjustable per project
external issue numbers, date of commission, date of payment
Supporting custom workflows
describing a whole process, also usable for non-development processes
every step can be restricted by user rights or issue dependencies
Using the "JIRA Timesheet Reports and Gadgets" plugin
Integrated features
11. Jira
Key figures are not available in the desired aggregation
e.g. time per year/month
No further support for SOAP webservices in Jira
the SOAP interface doesn't return all values
Using Jira plugins
the listed plugins don't fulfill all requirements
developing your own Jira plugins is complex and time-consuming
lack of features
12. Jira
API documentation for the Jira REST webservices
https://docs.atlassian.com/jira/REST/latest/
URL structure
http://host:port/context/rest/api-name/api-version/resource-name
Accessing an issue identified by the key JRA-9 looks like:
https://jira.atlassian.com/rest/api/latest/issue/JRA-9
Using the JQL language (syntax)
https://jira.mt-ag.com/rest/api/2/search?jql=project=BP
Values are returned in JSON format (also testable in the browser)
REST-Webservice
16. Webservices
Getting the existing ACL entries
Important for ACLs:
Using Web Service References (Shared Components) or the APEX_WEB_SERVICE package:
principal = APEX_050000
Using utl_http or other packages to call external resources:
principal = my_schema_name
ACL

select a.acl
      ,a.host
      ,a.lower_port
      ,a.upper_port
      ,p.principal
      ,p.privilege
      ,p.is_grant
      ,to_char(p.start_date, 'DD-MON-YYYY') as start_date
      ,to_char(p.end_date, 'DD-MON-YYYY') as end_date
  from dba_network_acl_privileges p
  left join dba_network_acls a
    on a.acl = p.acl
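The entries shown by that query can be created with the DBMS_NETWORK_ACL_ADMIN package. A minimal sketch for the pre-12c API, assuming the hypothetical host jira.example.com and the APEX_050000 principal (adjust the ACL name, host, and principal to your environment):

```sql
begin
  -- create the ACL and grant the connect privilege to the APEX principal
  dbms_network_acl_admin.create_acl(acl         => 'jira_acl.xml',
                                    description => 'Access to the Jira REST API',
                                    principal   => 'APEX_050000',
                                    is_grant    => true,
                                    privilege   => 'connect');
  -- restrict the ACL to the Jira host and the https port
  dbms_network_acl_admin.assign_acl(acl        => 'jira_acl.xml',
                                    host       => 'jira.example.com',
                                    lower_port => 443,
                                    upper_port => 443);
  commit;
end;
/
```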
18. Webservices
Calling https URLs: without the server certificate, the call fails with
ORA-29273: HTTP request failed
ORA-29024: Certificate validation failure
To solve this problem, you have to get the certificates used by the server running Jira.
Getting the certificate:
use your browser and call the URL of the REST webservice, or even the Jira start page
click on the lock symbol
click "show certificate"
exporting certificate
20. Webservices
use the Oracle Wallet Manager (OWM) to import the certificate
run ORACLE_HOMEbinowm.cl (on windows a link is created)
Import of the certificate is also possible using the command line
create a wallet for the certificate using a path like this
ORACLE_BASEadmin<SID><name_wallet>
Use automatic login for your wallet (otherwise you have to use the wallet always with your
password in your plsql code)
20
importing certificate
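With an auto-login wallet in place, PL/SQL code can reference the wallet without a password. A sketch, assuming the hypothetical wallet path /u01/app/oracle/admin/ORCL/jira_wallet:

```sql
begin
  -- auto-login wallet: the wallet password (second argument) can stay null
  utl_http.set_wallet('file:/u01/app/oracle/admin/ORCL/jira_wallet', null);
  -- subsequent utl_http calls in this session can now use https URLs
end;
/
```

APEX_WEB_SERVICE.MAKE_REST_REQUEST alternatively accepts the wallet via its p_wallet_path parameter.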
22. Webservices
Shared Components > Web Service References > Create
Problems:
based on single items
authentication
result as CLOB, only available in a collection
no support in apex_item or report columns
JSON format
Web Service Reference
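Instead of a Web Service Reference, the REST call can also be made directly with the APEX_WEB_SERVICE package, which returns the JSON response as a CLOB. A sketch, assuming hypothetical host, credentials, and wallet path:

```sql
declare
  l_response clob;
begin
  -- call the Jira search resource with basic authentication
  l_response := apex_web_service.make_rest_request(
                  p_url         => 'https://jira.example.com/rest/api/2/search?jql=project=BP',
                  p_http_method => 'GET',
                  p_username    => 'jira_user',
                  p_password    => 'jira_password',
                  p_wallet_path => 'file:/u01/app/oracle/admin/ORCL/jira_wallet');
end;
/
```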
26. JSON
processing
Convert the CLOB to a JSON object, loop over the entries, get the values:

l_values apex_json.t_values;
...
apex_json.parse(p_values => l_values
               ,p_source => l_clob);

for i in 1 .. apex_json.get_count(p_values => l_values
                                 ,p_path   => '.')
loop ... end loop;

l_jira_issue.key := apex_json.get_varchar2(p_values => pi_json_issue
                                          ,p_path   => 'key');
l_jira_issue.timespent := apex_json.get_number(p_values => pi_json_issue
                                              ,p_path   => 'fields.timespent');
27. JSON
special cases
Numbers arrive as strings and have to be converted:
"id": "17149"
Datetimes arrive as strings; converting with apex_json.get_date doesn't work because of the format:
"created": "2015-11-03T13:48:16.630+0100"

l_timestamp := to_timestamp_tz(pi_string
                              ,'YYYY-MM-DD"T"hh24:mi:ss.FF3TZHTZM');
l_string := to_char(l_timestamp
                   ,'yyyy.mm.dd hh24:mi:ss');
l_date := to_date(l_string
                 ,'yyyy.mm.dd hh24:mi:ss');

Custom fields in Jira are named customfield_xxxxx:

apex_json.get_varchar2(p_values => pi_json_issue
                      ,p_path   => 'fields.customfield_10000');
28. JSON
special cases
The time worked (worklog) is not encapsulated in the issue with per-day information:
you have to fetch the worklog for every single ticket with its own REST call
for i in 1 .. apex_json.get_count(p_values => l_values, p_path => 'issues')
loop
  l_rest_response := make_rest_request(pi_url      => pi_jira_base_url || c_jira_rest_base_path || '/issue/' ||
                                                      apex_json.get_varchar2(p_values => l_values
                                                                            ,p_path   => 'issues[' || i || '].key')
                                      ,pi_username => pi_username
                                      ,pi_password => pi_password);
  apex_json.parse(p_values => l_values_issue, p_source => l_rest_response);
  l_jira_issue := get_issue_from_json(pi_json_issue => l_values_issue);
  for j in 1 .. apex_json.get_count(p_values => l_values_issue
                                   ,p_path   => 'fields.worklog.worklogs')
  loop
    l_jira_issue_worklog := get_issue_worklog_from_json(pi_json_issue_worklog => l_values_issue
                                                       ,pi_path               => 'fields.worklog.worklogs[' || j || '].'
                                                       ,pi_jira_issue_id      => l_jira_issue.id
                                                       ,pi_jira_issue_key     => l_jira_issue.key);
    pipe row(l_jira_issue_worklog);
  end loop;
end loop;
29. JSON
processing
Transform the JSON into a type
easier to use
transformation in a package instead of in APEX
testing possible using plain SQL
entities defined as column names
documentation of collection column – JSON attribute – column name in the type
using a table function based on the defined types

create or replace type t_jira_issue force as object
(
  -- { id: "16276" }  -- c001
  id number, -- Jira issue ID
  -- { self: "https://jira.mt-ag.com/rest/api/2/issue/16276" }  -- c003
  url_json varchar2(32767), -- c003 - JSON URL
  -- { key: "UITFPP-1057" }  -- c001
  key varchar2(32767), -- issue key
  ...

select *
  from table(jira_rest_ws_pkg.get_projects(pi_base_url => 'https://jira.atlassian.com'))
30. APEX
Using APEX_COLLECTIONS
not every search triggers a new webservice call
loading the page doesn't call a webservice
APEX itself also relies on collections when working with Web Service References
Problem: authentication for the webservice call
every call needs a username & password
with a Web Service Reference this is inconvenient
using application items instead
when logging into the APEX application, the password is saved server-side in an application item
attention: the password value is visible in the session state
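Caching the response in a collection can be sketched like this, assuming the hypothetical collection name JIRA_ISSUES and a CLOB variable l_response holding the REST response:

```sql
begin
  -- (re)create the collection and cache the JSON response in the CLOB column
  apex_collection.create_or_truncate_collection(p_collection_name => 'JIRA_ISSUES');
  apex_collection.add_member(p_collection_name => 'JIRA_ISSUES',
                             p_clob001         => l_response);
end;
/
```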
32. Conclusion
the complexity is high because of the many different technical aspects
all important key figures are calculable
transforming JSON in JavaScript or PL/SQL is not easy in the beginning
parsing JSON for the first time, even with APEX_JSON, needs practice
remember: in 12c you can parse JSON directly in SQL
use the APEX-integrated Web Service Reference only for simple examples
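The 12c hint from the conclusion can be sketched with json_table; the bind :json_response is assumed to hold the CLOB returned by the search call:

```sql
select jt.issue_key
      ,jt.timespent
  from json_table(:json_response, '$.issues[*]'
         columns (issue_key varchar2(50) path '$.key'
                 ,timespent number       path '$.fields.timespent')) jt;
```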
From April 26th to 28th, 2016, the second edition of APEX Connect will take place in Berlin.
Due to the positive feedback this year, there will be an extra day: the conference will last three days.
April 26th: focus on PL/SQL
April 27th - 28th: the full range of APEX programming
There will again be many great submissions and topics.
Main topics:
Project Management & Methods
Operation
Web Technology
Application Development
The conference program is being finalized and will be released in spring.
Top speakers of the PL/SQL and APEX scene will be there.
David Peake
Patrick Wolf
Anton Nielsen
Martin Giffy D'Souza
Roger Troller
Jürgen Sieben
Olaf Jessensky
Chris Saxon
In addition to the lecture program, the following activities are planned:
Workshops
Unconference sessions
Many networking opportunities
Early-bird prices still apply at the moment. Exhibitor sales are already running.